Requiring both avoidance and emotional numbing in DSM-V PTSD: will it help?
Forbes, David; Fletcher, Susan; Lockwood, Emma; O'Donnell, Meaghan; Creamer, Mark; Bryant, Richard A; McFarlane, Alexander; Silove, Derrick
2011-05-01
The proposed DSM-V criteria for posttraumatic stress disorder (PTSD) specifically require both active avoidance and emotional numbing symptoms for a diagnosis. In DSM-IV, since both are included in the same cluster, active avoidance is not essential. Numbing symptoms overlap with depression, which may result in spurious comorbidity or overdiagnosis of PTSD. This paper investigated the impact of requiring both active avoidance and emotional numbing on the rates of PTSD diagnosis and comorbidity with depression. We investigated PTSD and depression in 835 traumatic injury survivors at 3 and 12 months post-injury. We used the DSM-IV criteria but explored the potential impact of DSM-IV and DSM-V approaches to avoidance and numbing using comparison of proportion analyses. The DSM-V requirement of both active avoidance and emotional numbing resulted in significant reductions in PTSD caseness relative to DSM-IV of 22% and 26% at 3 and 12 months posttrauma, respectively. By 12 months, the rate of comorbid PTSD in those with depression was significantly lower (44% vs. 34%) using the new criteria, primarily due to the lack of avoidance symptoms. These preliminary data suggest that requiring both active avoidance and numbing as separate clusters offers a useful refinement of the PTSD diagnosis. Requiring active avoidance may help to define the unique aspects of PTSD and reduce spurious diagnoses of PTSD in those with depression. Copyright © 2010. Published by Elsevier B.V.
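The rule change at issue can be expressed as a small decision function. The sketch below is illustrative only: the symptom-count thresholds are simplified placeholders, not the exact DSM criteria.

```python
def meets_cluster_c_dsm_iv(n_active_avoidance: int, n_numbing: int) -> bool:
    # DSM-IV Cluster C pools avoidance and numbing: any 3 symptoms from the
    # combined cluster qualify, so active avoidance itself is not essential.
    return n_active_avoidance + n_numbing >= 3

def meets_avoidance_dsm_5(n_active_avoidance: int, n_numbing: int) -> bool:
    # Proposed DSM-V approach: separate clusters, each with its own minimum
    # (the thresholds used here are illustrative simplifications).
    return n_active_avoidance >= 1 and n_numbing >= 2
```

A patient with three numbing symptoms but no active avoidance meets the pooled DSM-IV cluster yet fails the split criteria, which is the mechanism behind the reduced caseness and depression comorbidity reported above.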
NASA Astrophysics Data System (ADS)
Mansfield, C. D.; Rutt, H. N.
2002-02-01
The possible generation of spurious results, arising from the application of infrared spectroscopic techniques to the measurement of carbon isotope ratios in breath, due to coincident absorption bands has been re-examined. An earlier investigation, which approached the problem qualitatively, fulfilled its aspirations in providing an unambiguous assurance that 13C16O2/12C16O2 ratios can be confidently measured for isotopic breath tests using instruments based on infrared absorption. Although this conclusion still stands, subsequent quantitative investigation has revealed an important exception that necessitates a strict adherence to sample collection protocol. The results show that concentrations and decay rates of the coincident breath trace compounds acetonitrile and carbon monoxide, found in the breath sample of a heavy smoker, can produce spurious results. Hence, findings from this investigation justify the concern that breath trace compounds present a risk to the accurate measurement of carbon isotope ratios in breath when using broadband, non-dispersive, ground state absorption infrared spectroscopy. It provides recommendations on the length of smoking abstention required to avoid generation of spurious results and also reaffirms, through quantitative argument, the validity of using infrared absorption spectroscopy to measure CO2 isotope ratios in breath.
A multidisciplinary approach to reducing spurious hyperkalemia in hospital outpatient clinics.
Loh, Tze Ping; Sethi, Sunil K
2015-10-01
To describe a multidisciplinary effort to investigate and reduce the occurrence of outpatient spurious hyperkalaemia. Spurious hyperkalaemia is a falsely elevated serum potassium result that does not reflect the in vivo condition of a person. A common practice of fist clenching/pumping during phlebotomy to improve vein visualisation is an under-appreciated cause of spurious hyperkalaemia. Pre- and postinterventional study. Objective evidence of spurious hyperkalaemia was sought by reviewing archived laboratory results. A literature review was undertaken to summarise known causes of spurious hyperkalaemia and develop a best practice in phlebotomy. Subsequently, nurses from the Urology Clinic were interviewed, observed and surveyed to understand their phlebotomy workflow and identify potential areas of improvement by comparison with the best practice in phlebotomy. Unexplained (potentially spurious) hyperkalaemia was defined as a serum potassium of >5·0 mmol/l in a patient without stage 5 chronic kidney disease or a haemolysed blood sample. Nurses from the Urology Clinic showed a significant knowledge gap regarding causes of spurious hyperkalaemia when compared to the literature review. Direct observation revealed patients were routinely asked to clench their fists, which may cause spurious hyperkalaemia. Following these observations, several educational initiatives were administered to address the knowledge gap and stop fist clenching. The rate of unexplained hyperkalaemia at the Urology Clinic fell from a baseline of 16·0% to 3·8%, 58 weeks after intervention. A similar educational intervention was propagated to all 18 other specialist outpatient clinic locations, which saw their rate of unexplained hyperkalaemia decrease from 5·4% to 3·7%. To ensure sustainability of the improvements, the existing phlebotomy standard operating protocol and the educational and competency testing materials at variance with the best practice were revised.
A simple intervention of avoiding fist clenching/pumping during phlebotomy produced significant reduction in the rate of spurious hyperkalemia. © 2015 John Wiley & Sons Ltd.
A quantitative trait locus mixture model that avoids spurious LOD score peaks.
Feenstra, Bjarke; Skovgaard, Ib M
2004-01-01
In standard interval mapping of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. At any given location in the genome, the evidence of a putative QTL is measured by the likelihood ratio of the mixture model compared to a single normal distribution (the LOD score). This approach can occasionally produce spurious LOD score peaks in regions of low genotype information (e.g., widely spaced markers), especially if the phenotype distribution deviates markedly from a normal distribution. Such peaks are not indicative of a QTL effect; rather, they are caused by the fact that a mixture of normals always produces a better fit than a single normal distribution. In this study, a mixture model for QTL mapping that avoids the problems of such spurious LOD score peaks is presented. PMID:15238544
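The LOD statistic described above is a log10 likelihood ratio. A minimal sketch, assuming the mixture parameters have already been estimated (e.g. by EM) and using per-individual genotype probabilities as mixing weights:

```python
import numpy as np
from scipy.stats import norm

def lod_score(phenotypes, p_qq, mu0, mu1, sigma):
    """LOD at a putative locus: log10 likelihood ratio of a two-component
    normal mixture (mixing weights = per-individual genotype probabilities
    p_qq) against a single fitted normal. This sketch only evaluates the
    ratio; fitting mu0, mu1, sigma is assumed done elsewhere."""
    y = np.asarray(phenotypes, dtype=float)
    p = np.asarray(p_qq, dtype=float)
    mix = p * norm.pdf(y, mu1, sigma) + (1 - p) * norm.pdf(y, mu0, sigma)
    null = norm.pdf(y, y.mean(), y.std(ddof=0))
    return np.sum(np.log10(mix)) - np.sum(np.log10(null))
```

In a marker-sparse region the genotype probabilities drift toward 0.5 for every individual, yet a free mixture can still out-fit a single normal on a skewed phenotype distribution, which is the spurious-peak mechanism the paper addresses.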
Kasten, Florian H; Negahbani, Ehsan; Fröhlich, Flavio; Herrmann, Christoph S
2018-05-31
Amplitude modulated transcranial alternating current stimulation (AM-tACS) has been recently proposed as a possible solution to overcome the pronounced stimulation artifact encountered when recording brain activity during tACS. In theory, AM-tACS does not entail power at its modulating frequency, thus avoiding the problem of spectral overlap between brain signal of interest and stimulation artifact. However, the current study demonstrates how weak non-linear transfer characteristics inherent to stimulation and recording hardware can reintroduce spurious artifacts at the modulation frequency. The input-output transfer functions (TFs) of different stimulation setups were measured. Setups included recordings of signal-generator and stimulator outputs and M/EEG phantom measurements. 6th-degree polynomial regression models were fitted to model the input-output TFs of each setup. The resulting TF models were applied to digitally generated AM-tACS signals to predict the frequency of spurious artifacts in the spectrum. All four setups measured for the study exhibited low-frequency artifacts at the modulation frequency and its harmonics when recording AM-tACS. Fitted TF models showed non-linear contributions significantly different from zero (all p < .05) and successfully predicted the frequency of artifacts observed in AM-signal recordings. Results suggest that even weak non-linearities of stimulation and recording hardware can lead to spurious artifacts at the modulation frequency and its harmonics. These artifacts were substantially larger than alpha-oscillations of a human subject in the MEG. Findings emphasize the need for more linear stimulation devices for AM-tACS and careful analysis procedures, taking into account low-frequency artifacts to avoid confusion with effects of AM-tACS on the brain. Copyright © 2018 Elsevier Inc. All rights reserved.
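The artifact mechanism is easy to reproduce numerically: passing an amplitude-modulated waveform through even a weak polynomial nonlinearity (standing in for a hardware transfer curve; the coefficients below are arbitrary) creates power at the modulation frequency, where the ideal AM signal has none.

```python
import numpy as np

fs, dur = 1000.0, 10.0                       # sampling rate (Hz), duration (s)
t = np.arange(0.0, dur, 1.0 / fs)
fc, fm = 220.0, 10.0                         # carrier and modulation freqs (Hz)
am = (1 + 0.5 * np.cos(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

def spectrum(x):
    # Single-sided amplitude spectrum; frequencies are bin-centered here,
    # so a leakage-free comparison at the fm bin is possible.
    return np.abs(np.fft.rfft(x)) / len(x)

ideal = spectrum(am)
# Weak 2nd/3rd-order distortion, a stand-in for the measured hardware TFs:
distorted = spectrum(am + 0.01 * am**2 + 0.005 * am**3)
bin_fm = int(round(fm * dur))                # FFT bin at the modulation frequency
```

The quadratic term alone suffices: squaring (1 + m·cos ωm t)·sin ωc t yields a low-frequency cos ωm t component, so `distorted[bin_fm]` is orders of magnitude above `ideal[bin_fm]`.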
Battistoni, Andrea; Bencivenga, Filippo; Fioretto, Daniele; Masciovecchio, Claudio
2014-10-15
In this Letter, we present a simple method to avoid the well-known spurious contributions in the Brillouin light scattering (BLS) spectrum arising from the finite aperture of collection optics. The method relies on the use of special spatial filters able to select the scattered light with arbitrary precision around a given value of the momentum transfer (Q). We demonstrate the effectiveness of such filters by analyzing the BLS spectra of a reference sample as a function of scattering angle. This practical and inexpensive method could be an extremely useful tool to fully exploit the potentiality of Brillouin acoustic spectroscopy, as it will easily allow for effective Q-variable experiments with unparalleled luminosity and resolution.
Spurious cross-frequency amplitude-amplitude coupling in nonstationary, nonlinear signals
NASA Astrophysics Data System (ADS)
Yeh, Chien-Hung; Lo, Men-Tzung; Hu, Kun
2016-07-01
Recent studies of brain activities show that cross-frequency coupling (CFC) plays an important role in memory and learning. Many measures have been proposed to investigate the CFC phenomenon, including the correlation between the amplitude envelopes of two brain waves at different frequencies, known as cross-frequency amplitude-amplitude coupling (AAC). In this short communication, we describe how nonstationary, nonlinear oscillatory signals may produce spurious cross-frequency AAC. Utilizing the empirical mode decomposition, we also propose a new method for assessment of AAC that can potentially reduce the effects of nonlinearity and nonstationarity and, thus, help to avoid the detection of artificial AACs. We compare the performances of this new method and the traditional Fourier-based AAC method. We also discuss strategies to identify potential spurious AACs.
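For reference, the traditional Fourier-based AAC that the authors compare against can be sketched as the Pearson correlation of two band-limited Hilbert envelopes (the band edges and filter order below are arbitrary choices, not the paper's settings):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def amplitude_envelope(x, lo, hi, fs):
    # Band-pass, then take the magnitude of the analytic signal.
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return np.abs(hilbert(filtfilt(b, a, x)))

def aac(x, band1, band2, fs):
    """Fourier-based amplitude-amplitude coupling: Pearson correlation of
    two bands' amplitude envelopes. Nonstationary/nonlinear waveforms can
    drive this spuriously away from zero, which is the pitfall above."""
    e1 = amplitude_envelope(x, band1[0], band1[1], fs)
    e2 = amplitude_envelope(x, band2[0], band2[1], fs)
    return np.corrcoef(e1, e2)[0, 1]
```

A genuinely comodulated test signal (the same slow envelope on a 10 Hz and a 40 Hz carrier) returns a correlation near 1.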
Direct Computation of Sound Radiation by Jet Flow Using Large-scale Equations
NASA Technical Reports Server (NTRS)
Mankbadi, R. R.; Shih, S. H.; Hixon, D. R.; Povinelli, L. A.
1995-01-01
Jet noise is directly predicted using large-scale equations. The computational domain is extended in order to directly capture the radiated field. As in conventional large-eddy-simulations, the effect of the unresolved scales on the resolved ones is accounted for. Special attention is given to boundary treatment to avoid spurious modes that can render the computed fluctuations totally unacceptable. Results are presented for a supersonic jet at Mach number 2.1.
Portable Integrated Wireless Device Threat Assessment to Aircraft Radio Systems
NASA Technical Reports Server (NTRS)
Salud, Maria Theresa P.; Williams, Reuben A. (Technical Monitor)
2004-01-01
An assessment was conducted on multiple wireless local area network (WLAN) devices using the three wireless standards for spurious radiated emissions to determine their threat to aircraft radio navigation systems. The measurement process, data and analysis are provided for devices tested using IEEE 802.11a, IEEE 802.11b, and Bluetooth as well as data from portable laptops/tablet PCs and PDAs (grouping known as PEDs). A comparison was made between wireless LAN devices and portable electronic devices. Spurious radiated emissions were investigated in the radio frequency bands for the following aircraft systems: Instrument Landing System Localizer and Glideslope, Very High Frequency (VHF) Communication, VHF Omnidirectional Range, Traffic Collision Avoidance System, Air Traffic Control Radar Beacon System, Microwave Landing System and Global Positioning System. Since several of the contiguous navigation systems were grouped under one encompassing measurement frequency band, there were five measurement frequency bands where spurious radiated emissions data were collected for the PEDs and WLAN devices. The report also provides a comparison between emissions data and regulatory emission limit.
High-Temperature Hall-Effect Apparatus
NASA Technical Reports Server (NTRS)
Wood, C.; Lockwood, R. A.; Chemielewski, A. B.; Parker, J. B.; Zoltan, A.
1985-01-01
Compact furnace minimizes thermal gradients and electrical noise. Semiautomatic Hall-effect apparatus takes measurements on refractory semiconductors at temperatures as high as 1,100 degrees C. Intended especially for use with samples of high conductivity and low charge-carrier mobility that exhibit low signal-to-noise ratios, apparatus carefully constructed to avoid spurious electromagnetic and thermoelectric effects that further degrade measurements.
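The spurious thermoelectric offsets mentioned are commonly cancelled by reversing the magnetic field (or current) and averaging. A sketch of the textbook relations, not the specifics of this apparatus:

```python
def hall_voltage_corrected(v_pos_b, v_neg_b):
    # Field reversal flips the Hall voltage but not thermoelectric or
    # misalignment offsets, so the half-difference cancels the offsets.
    return 0.5 * (v_pos_b - v_neg_b)

def hall_coefficient(v_hall, thickness_m, current_a, b_tesla):
    # R_H = V_H * t / (I * B), SI units (m^3/C).
    return v_hall * thickness_m / (current_a * b_tesla)
```

For low-mobility, high-conductivity samples V_H is small, so without such cancellation the offsets can dominate the reading.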
Spurious symptom reduction in fault monitoring
NASA Technical Reports Server (NTRS)
Shontz, William D.; Records, Roger M.; Choi, Jai J.
1993-01-01
Previous work accomplished on NASA's Faultfinder concept suggested that the concept was jeopardized by spurious symptoms generated in the monitoring phase. The purpose of the present research was to investigate methods of reducing the generation of spurious symptoms during in-flight engine monitoring. Two approaches for reducing spurious symptoms were investigated. A knowledge base of rules was constructed to filter known spurious symptoms and a neural net was developed to improve the expectation values used in the monitoring process. Both approaches were effective in reducing spurious symptoms individually. However, the best results were obtained using a hybrid system combining the neural net capability to improve expectation values with the rule-based logic filter.
The Osher scheme for non-equilibrium reacting flows
NASA Technical Reports Server (NTRS)
Suresh, Ambady; Liou, Meng-Sing
1992-01-01
An extension of the Osher upwind scheme to nonequilibrium reacting flows is presented. Owing to the presence of source terms, the Riemann problem is no longer self-similar and therefore its approximate solution becomes tedious. With simplicity in mind, a linearized approach which avoids an iterative solution is used to define the intermediate states and sonic points. The source terms are treated explicitly. Numerical computations are presented to demonstrate the feasibility, efficiency and accuracy of the proposed method. The test problems include a ZND (Zeldovich-von Neumann-Döring) detonation problem for which spurious numerical solutions which propagate at mesh speed have been observed on coarse grids. With the present method, a change of limiter causes the solution to change from the physically correct CJ detonation solution to the spurious weak detonation solution.
Communication: An accurate global potential energy surface for the ground electronic state of ozone
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dawes, Richard; Lolur, Phalgun; Li, Anyang
We report a new full-dimensional and global potential energy surface (PES) for the O + O2 → O3 ozone-forming reaction based on explicitly correlated multireference configuration interaction (MRCI-F12) data. It extends our previous [R. Dawes, P. Lolur, J. Ma, and H. Guo, J. Chem. Phys. 135, 081102 (2011)] dynamically weighted multistate MRCI calculations of the asymptotic region, which showed the widely found submerged reef along the minimum energy path to be the spurious result of an avoided crossing with an excited state. A spin-orbit correction was added and the PES tends asymptotically to the recently developed long-range electrostatic model of Lepers et al. [J. Chem. Phys. 137, 234305 (2012)]. This PES features: (1) excellent equilibrium structural parameters, (2) good agreement with experimental vibrational levels, (3) an accurate dissociation energy, and (4) most notably, a transition region without a spurious reef. The new PES is expected to allow insight into the still unresolved issues surrounding the kinetics, dynamics, and isotope signature of ozone.
The behavior of quantization spectra as a function of signal-to-noise ratio
NASA Technical Reports Server (NTRS)
Flanagan, M. J.
1991-01-01
An expression for the spectrum of quantization error in a discrete-time system whose input is a sinusoid plus white Gaussian noise is derived. This quantization spectrum consists of two components: a white-noise floor and spurious harmonics. The dithering effect of the input Gaussian noise in both components of the spectrum is considered. Quantitative results in a discrete Fourier transform (DFT) example show the behavior of spurious harmonics as a function of the signal-to-noise ratio (SNR). These results have strong implications for digital reception and signal analysis systems. At low SNRs, spurious harmonics decay exponentially on a log-log scale, and the resulting spectrum is white. As the SNR increases, the spurious harmonics figure prominently in the output spectrum. A useful expression is given that roughly bounds the magnitude of a spurious harmonic as a function of the SNR.
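The dithering effect is straightforward to reproduce: quantizing a clean sinusoid concentrates the error in spurious harmonics, while added Gaussian noise spreads that energy into a white floor. A sketch with arbitrary step size and noise level:

```python
import numpy as np

fs, n = 1024.0, 4096
t = np.arange(n) / fs
f0 = 33.0 * fs / n                          # bin-centered tone, no leakage

def quantize(x, step=0.25):
    # Uniform mid-tread quantizer.
    return step * np.round(x / step)

def dft_mag(x):
    return np.abs(np.fft.rfft(x)) / len(x)

rng = np.random.default_rng(0)
tone = np.sin(2 * np.pi * f0 * t)
clean_q = dft_mag(quantize(tone))                               # high SNR
dithered_q = dft_mag(quantize(tone + 0.2 * rng.standard_normal(n)))
bin3 = 3 * 33                               # FFT bin of the 3rd harmonic
```

At high SNR the quantized tone's spectrum is pure lines: a prominent spurious 3rd harmonic sits far above the (essentially empty) off-harmonic bins. With the Gaussian dither the floor rises and becomes white, matching the low-SNR behavior described above.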
Radio Frequency Compatibility of an RFID Tag on Glideslope Navigation Receivers
NASA Technical Reports Server (NTRS)
Nguyen, Truong X.; Mielnik, John J.
2008-01-01
A process is demonstrated to show compatibility between a radio frequency identification (RFID) tag and an aircraft glideslope (GS) radio receiver. The particular tag chosen was previously shown to have significant peak spurious emission levels that far exceeded the emission limits in the GS aeronautical band. The spurious emissions are emulated in the study by capturing the RFID fundamental transmission and playing back the signal in the GS band. The signal capturing and playback are achieved with a vector signal generator and a spectrum analyzer that can output the in-phase and quadrature components (IQ). The simulated interference signal is combined with a desired GS signal before being injected into a GS receiver's antenna port for interference threshold determination. Minimum desired propagation loss values to avoid interference are then computed and compared against actual propagation losses for several aircraft.
Mortier, Virginie; Vancoillie, Leen; Dauwe, Kenny; Staelens, Delfien; Demecheleer, Els; Schauvliege, Marlies; Dinakis, Sylvie; Van Maerken, Tom; Dessilly, Géraldine; Ruelle, Jean; Verhofstede, Chris
2017-10-24
Pre-analytical sample processing is often overlooked as a potential cause of inaccurate assay results. Here we demonstrate how plasma, extracted from standard EDTA-containing blood collection tubes, may contain traces of blood cells consequently resulting in a false low-level HIV-1 viral load when using Roche Cobas HIV-1 assays. The presence of human DNA in Roche Cobas 4800 RNA extracts and in RNA extracts from the Abbott HIV-1 RealTime assay was assessed by quantifying the human albumin gene by means of quantitative PCR. RNA was extracted from plasma samples before and after an additional centrifugation and tested for viral load and DNA contamination. The relation between total DNA content and viral load was defined. Elevated concentrations of genomic DNA were detected in 28 out of 100 Cobas 4800 extracts and were significantly more frequent in samples processed outside of the AIDS Reference Laboratory. An association between genomic DNA presence and spurious low-level viraemia results was demonstrated. Supplementary centrifugation of plasma before RNA extraction eliminated the contamination and the false viraemia. Plasma isolated from standard EDTA-containing blood collection tubes may contain traces of HIV DNA leading to false viral load results above the clinical cutoff. Supplementary centrifugation of plasma before viral load analysis may eliminate the occurrence of this spurious low-level viraemia.
Patel, Harilal; Patel, Prakash; Modi, Nirav; Shah, Shaival; Ghoghari, Ashok; Variya, Bhavesh; Laddha, Ritu; Baradia, Dipesh; Dobaria, Nitin; Mehta, Pavak; Srinivas, Nuggehally R
2017-08-30
Because of the avoidance of first-pass metabolic effects due to direct and rapid absorption with improved permeability, the intranasal route represents a good alternative for extravascular drug administration. The aim of the study was to investigate the intranasal pharmacokinetics of two anti-migraine drugs (zolmitriptan and eletriptan), using retro-orbital sinus and jugular vein sampling sites. In a parallel study design, healthy male Sprague-Dawley (SD) rats aged between 8 and 12 weeks were divided into groups (n=4 or 5/group). The animals in each group received intranasal (~1.0 mg/kg) or oral (2.1 mg/kg) doses of either zolmitriptan or eletriptan. Serial blood sampling was performed from the jugular vein or retro-orbital site, and plasma samples were analyzed for drug concentrations using an LC-MS/MS assay. Standard pharmacokinetic parameters such as Tmax, Cmax, AUClast, AUC0-inf and T1/2 were calculated, and statistics on the derived parameters were performed using an unpaired t-test. After intranasal dosing, the mean pharmacokinetic parameters Cmax and AUCinf of zolmitriptan/eletriptan showed about 17-fold and 3-5-fold higher values for retro-orbital sampling as compared to the jugular vein sampling site, whereas after oral administration these parameters were largely comparable between the two sampling sites for both drugs and the differences were statistically non-significant. In conclusion, the assessment of plasma levels after intranasal administration with retro-orbital sampling would result in spurious and misleading pharmacokinetics. Copyright © 2017 Elsevier B.V. All rights reserved.
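The non-compartmental parameters compared in the study can be computed directly from a concentration-time profile. A minimal sketch of the standard definitions (not the authors' analysis software):

```python
import numpy as np

def nca_params(times_h, conc):
    """Cmax, Tmax and AUC(0-last) by the linear trapezoidal rule -- the
    basic noncompartmental definitions, using illustrative inputs."""
    t = np.asarray(times_h, dtype=float)
    c = np.asarray(conc, dtype=float)
    i = int(np.argmax(c))
    auc_last = float(np.sum(np.diff(t) * (c[1:] + c[:-1]) / 2.0))
    return {"Cmax": float(c[i]), "Tmax": float(t[i]), "AUClast": auc_last}
```

Because Cmax and AUC are taken straight from the measured concentrations, any sampling-site bias (as seen here with retro-orbital draws after intranasal dosing) propagates directly into the derived parameters.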
Avoiding pitfalls in estimating heritability with the common options approach
Danchin, Etienne; Wajnberg, Eric; Wagner, Richard H.
2014-01-01
In many circumstances, heritability estimates are subject to two potentially interacting pitfalls: the spatial and the regression to the mean (RTM) fallacies. The spatial fallacy occurs when the set of potential movement options differs among individuals according to where individuals depart. The RTM fallacy occurs when extreme measurements are followed by measurements that are closer to the mean. We simulated data from the largest published heritability study of a behavioural trait, colony size choice, to examine the operation of the two fallacies. We found that spurious heritabilities are generated under a wide range of conditions both in experimental and correlative estimates of heritability. Classically designed cross-foster experiments can actually increase the frequency of spurious heritabilities. Simulations showed that experiments providing all individuals with the identical set of options, such as by fostering all offspring in the same breeding location, are immune to the two pitfalls. PMID:24865284
Computational Aeroacoustics: An Overview
NASA Technical Reports Server (NTRS)
Tam, Christopher K. W.
2003-01-01
An overview of recent advances in computational aeroacoustics (CAA) is presented. A CAA algorithm must be neither dispersive nor dissipative: it should propagate waves supported by the Euler equations with the correct group velocities. Computation domains are inevitably finite in size. To avoid the reflection of acoustic and other outgoing waves at the boundaries of the computation domain, special boundary conditions must be imposed in the boundary region. These boundary conditions either absorb all the outgoing waves without reflection or allow the waves to exit smoothly. High-order schemes invariably support spurious short waves. These spurious waves tend to pollute the numerical solution and must be selectively damped or filtered out. All these issues and the relevant computation methods are briefly reviewed. Jet screech tones are known to have caused structural fatigue in military combat aircraft. Numerical simulation of the jet screech phenomenon is presented as an example of a successful application of CAA.
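The selective damping of spurious short waves can be illustrated with the simplest 3-point artificial-dissipation stencil: it strongly damps the grid-to-grid (2Δx) wave while leaving well-resolved waves nearly untouched. The stencil and coefficient below are a generic example, not any specific CAA scheme:

```python
import numpy as np

def damp_short_waves(u, sigma=0.2):
    """One pass of a 3-point damping stencil with periodic ends:
    u_i <- u_i + sigma * (u_{i-1} - 2 u_i + u_{i+1}) / 4.
    Damping factor vs. wavenumber: 1 - sigma * (1 - cos(k dx)) / 2,
    i.e. ~1 for long waves and 1 - sigma at k dx = pi (the 2-dx wave)."""
    lap = np.roll(u, 1) - 2.0 * u + np.roll(u, -1)
    return u + sigma * lap / 4.0

n = 64
x = np.arange(n)
smooth = np.cos(2 * np.pi * x / n)      # well-resolved wave (k dx small)
sawtooth = (-1.0) ** x                  # 2-dx wave (k dx = pi)
```

One pass scales the sawtooth by exactly 1 - σ = 0.8 while changing the smooth wave by less than a part in a thousand.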
The Predictability of Near-Coastal Currents Using a Baroclinic Unstructured Grid Model
2011-12-28
...baroclinic simulations. ADCIRC solves the time-dependent scalar transport equation for salinity and temperature. Through the equation of state... described by McDougall et al. (2003), ADCIRC uses the temperature, salinity, and pressure in determining the density field. In order to avoid spurious... model. 2.3 Initialization and boundary forcing: Temperature, salinity, elevation, and velocity fields from a regional ocean model are needed both to...
Analysis of spurious oscillation modes for the shallow water and Navier-Stokes equations
Walters, R.A.; Carey, G.F.
1983-01-01
The origin and nature of spurious oscillation modes that appear in mixed finite element methods are examined. In particular, the shallow water equations are considered and a modal analysis for the one-dimensional problem is developed. From the resulting dispersion relations we find that the spurious modes in elevation are associated with zero frequency and large wave number (wavelengths of the order of the nodal spacing) and consequently are zero-velocity modes. The spurious modal behavior is the result of the finite spatial discretization. By means of an artificial compressibility and limiting argument we are able to resolve the similar problem for the Navier-Stokes equations. The relationship of this simpler analysis to alternative consistency arguments is explained. This modal approach provides an explanation of the phenomenon in question and permits us to deduce the cause of the very complex behavior of spurious modes observed in numerical experiments with the shallow water equations and Navier-Stokes equations. Furthermore, this analysis is not limited to finite element formulations, but is also applicable to finite difference formulations. © 1983.
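The zero-frequency, 2Δx elevation mode is visible in the simplest analogous case, which the authors note their analysis also covers: centered differencing of the linearized 1-D shallow water equations on a collocated grid yields a discrete dispersion relation that returns to ω = 0 at kΔx = π. The parameter values below are illustrative:

```python
import numpy as np

g, H, dx = 9.81, 10.0, 100.0            # gravity, depth (m), grid spacing (m)
k_dx = np.linspace(0.01, np.pi, 200)

# Discrete dispersion relation for centered, collocated differencing:
#   omega(k) = sqrt(g H) * sin(k dx) / dx
# versus the exact relation omega = sqrt(g H) * k, which is zero only at k = 0.
omega = np.sqrt(g * H) * np.sin(k_dx) / dx
```

The discrete curve rises, peaks near kΔx = π/2, and falls back to zero at kΔx = π: a stationary wavelength-2Δx mode that the physics does not support, i.e. a spurious mode.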
NASA Technical Reports Server (NTRS)
Martin, C. Wayne; Breiner, David M.; Gupta, Kajal K. (Technical Monitor)
2004-01-01
Mathematical development and some computed results are presented for Mindlin plate and shell elements, suitable for analysis of laminated composite and sandwich structures. These elements use the conventional 3 (plate) or 5 (shell) nodal degrees of freedom, have no communicable mechanisms, have no spurious shear energy (no shear locking), have no spurious membrane energy (no membrane locking) and do not require arbitrary reduction of out-of-plane shear moduli or under-integration. Artificial out-of-plane rotational stiffnesses are added at the element level to avoid convergence problems or singularity due to flat spots in shells. This report discusses a 6-node curved triangular element and a 4-node quadrilateral element. Findings show that in regular rectangular meshes, the Martin-Breiner 6-node triangular curved shell (MB6) is approximately equivalent to the conventional 8-node quadrilateral with integration. The 4-node quadrilateral (MB4) has very good accuracy for a 4-node element, and may be preferred in vibration analysis because of narrower bandwidth. The mathematical developments used in these elements, those discussed in the seven appendices, have been applied to elements with 3, 4, 6, and 10 nodes and can be applied to other nodal configurations.
Spurious One-Month and One-Year Periods in Visual Observations of Variable Stars
NASA Astrophysics Data System (ADS)
Percy, J. R.
2015-12-01
Visual observations of variable stars, when time-series analyzed with some algorithms such as DC-DFT in vstar, show spurious periods at or close to one synodic month (29.5306 days), and also at about a year, with an amplitude of typically a few hundredths of a magnitude. The one-year periods have been attributed to the Ceraski effect, which was believed to be a physiological effect of the visual observing process. This paper reports on time-series analysis, using DC-DFT in vstar, of visual observations (and in some cases, V observations) of a large number of stars in the AAVSO International Database, initially to investigate the one-month periods. The results suggest that both the one-month and one-year periods are actually due to aliasing of the stars' very low-frequency variations, though they do not rule out very low-amplitude signals (typically 0.01 to 0.02 magnitude) which may be due to a different process, such as a physiological one. Most or all of these aliasing effects may be avoided by using a different algorithm, which takes explicit account of the window function of the data, and/or by being fully aware of the possible presence of and aliasing by very low-frequency variations.
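The aliasing mechanism lives in the spectral window of the sampling pattern itself. A sketch with a hypothetical visual-observer cadence (8-month observing seasons over 10 years; all numbers are illustrative) shows the window function peaking at one cycle per year:

```python
import numpy as np

rng = np.random.default_rng(1)
# 40 estimates per year, made only during a ~240-day season (times in days).
times = np.concatenate(
    [365.25 * yr + rng.uniform(0.0, 240.0, 40) for yr in range(10)]
)

def spectral_window(t, freqs):
    # |W(f)|: amplitude spectrum of the sampling pattern itself, i.e. of a
    # constant "signal" observed at the given times, normalized so W(0) = 1.
    return np.abs(np.exp(-2j * np.pi * np.outer(freqs, t)).sum(axis=1)) / len(t)

freqs = np.linspace(0.001, 0.006, 500)       # cycles per day
w = spectral_window(times, freqs)
peak_freq = freqs[np.argmax(w)]              # near 1/365.25 cycles/day
```

Any very low-frequency variation in the star leaks through this window, planting alias power at one cycle per year; a cadence tied to lunar phase would do the same at one cycle per synodic month. Algorithms that model the window function explicitly can separate these aliases from genuine signals.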
Oscillations in Spurious States of the Associative Memory Model with Synaptic Depression
NASA Astrophysics Data System (ADS)
Murata, Shin; Otsubo, Yosuke; Nagata, Kenji; Okada, Masato
2014-12-01
The associative memory model is a typical neural network model that can store discretely distributed fixed-point attractors as memory patterns. When the network stores the memory patterns extensively, however, the model has other attractors besides the memory patterns. These attractors are called spurious memories. Both spurious states and memory states are in equilibrium, so there is little difference between their dynamics. Recent physiological experiments have shown that the short-term dynamic synapse called synaptic depression decreases its efficacy of transmission to postsynaptic neurons according to the activities of presynaptic neurons. Previous studies revealed that synaptic depression destabilizes the memory states when the number of memory patterns is finite. However, it is very difficult to study the dynamical properties of the spurious states if the number of memory patterns is proportional to the number of neurons. We investigate the effect of synaptic depression on spurious states by Monte Carlo simulation. The results demonstrate that synaptic depression does not affect the memory states but mainly destabilizes the spurious states and induces periodic oscillations.
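A minimal Hopfield-style demonstration of a spurious state (plain Hebbian weights, no synaptic depression) is the symmetric three-pattern mixture, which the network retains even though it is none of the stored memories:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 3
xi = rng.choice([-1, 1], size=(p, n)).astype(float)   # stored memory patterns
W = (xi.T @ xi) / n                                   # Hebbian weight matrix
np.fill_diagonal(W, 0.0)

def run(state, steps=20):
    # Synchronous zero-temperature dynamics: s <- sign(W s).
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1.0
    return state

mixture = np.sign(xi.sum(axis=0))          # symmetric 3-pattern mixture state
fixed = run(mixture.copy())
overlaps = np.abs(fixed @ xi.T) / n        # overlap with each stored pattern
```

The mixture is (with high probability) an attractor whose overlap with each memory is near 0.5 rather than 1, i.e. a spurious memory. The paper's point is that synaptic depression destabilizes such states into oscillations while leaving the memory states intact; this plain-Hebbian sketch shows only the static spurious attractor.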
A Bunch Compression Method for Free Electron Lasers that Avoids Parasitic Compressions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benson, Stephen V.; Douglas, David R.; Tennant, Christopher D.
2015-09-01
Virtually all existing high energy (>few MeV) linac-driven FELs compress the electron bunch length through the use of off-crest acceleration on the rising side of the RF waveform followed by transport through a magnetic chicane. This approach has at least three flaws: 1) it is difficult to correct aberrations, particularly RF curvature; 2) rising-side acceleration exacerbates space charge-induced distortion of the longitudinal phase space; and 3) all achromatic "negative compaction" compressors create parasitic compression during the final compression process, increasing the CSR-induced emittance growth. One can avoid these deficiencies by using acceleration on the falling side of the RF waveform and a compressor with M56 > 0. This approach offers multiple advantages: 1) it is readily achieved in beam lines supporting simple schemes for aberration compensation; 2) longitudinal space charge (LSC)-induced phase space distortion tends, on the falling side of the RF waveform, to enhance the chirp; and 3) compressors with M56 > 0 can be configured to avoid spurious over-compression. We discuss this bunch compression scheme in detail and give results of a successful beam test in April 2012 using the JLab UV Demo FEL.
Gain-Controlled Erbium-Doped Fiber Amplifier Using Mode-Selective Photonic Lantern
2016-01-01
A schematic diagram of the MSPL integrated with the FM-EDFA is shown in Fig. 3. Two laser diodes (LDs) at λp = 976 nm are connected to the MSPL through a ... to co-directionally core-pump the FM-EDFA. A tunable semiconductor laser (Santec TSL-210) was used to provide the signal. An optical isolator was placed in the signal path to avoid spurious optical reflections that could destabilize the laser. In a similar configuration, the delivered signal was
Frequency guided methods for demodulation of a single fringe pattern.
Wang, Haixia; Kemao, Qian
2009-08-17
Phase demodulation from a single fringe pattern is a challenging but important task. A frequency-guided regularized phase tracker and a frequency-guided sequential demodulation method with Levenberg-Marquardt optimization are proposed to demodulate a single fringe pattern. In both methods, the demodulation path is guided by the local frequency, from the highest to the lowest. Since critical points have low local frequency values, they are processed last, so that the spurious sign problem caused by these points is avoided. These two methods can be considered as alternatives to the effective fringe follower regularized phase tracker. Demodulation results from one computer-simulated and two experimental fringe patterns using the proposed methods are demonstrated. (c) 2009 Optical Society of America
Massive black hole and gas dynamics in galaxy nuclei mergers - I. Numerical implementation
NASA Astrophysics Data System (ADS)
Lupi, Alessandro; Haardt, Francesco; Dotti, Massimo
2015-01-01
Numerical effects are known to plague adaptive mesh refinement (AMR) codes when treating massive particles, e.g. those representing massive black holes (MBHs). In an evolving background, such particles can experience strong, spurious perturbations and then follow unphysical orbits. We study by means of numerical simulations the dynamical evolution of a pair of MBHs in the rapidly and violently evolving gaseous and stellar background that follows a galaxy major merger. We confirm that spurious numerical effects alter the MBH orbits in AMR simulations, and show that these numerical issues are ultimately due to a drop in the spatial resolution during the simulation, which drastically reduces the accuracy of the gravitational force computation. We therefore propose a new refinement criterion suited for massive particles, able to solve in a fast and precise way for their orbits in highly dynamical backgrounds. The new refinement criterion enforces the region around each massive particle to remain at the maximum allowed resolution, independently of the local gas density. Such maximally resolved regions then follow the MBHs along their orbits, effectively avoiding all spurious effects caused by resolution changes. Our suite of high-resolution AMR hydrodynamic simulations, including different prescriptions for the sub-grid gas physics, shows that the new refinement implementation does not alter the physical evolution of the MBHs, and accounts for all the non-trivial physical processes taking place in violent dynamical scenarios, such as the final stages of a galaxy major merger.
Rosenberg, Noah A; Nordborg, Magnus
2006-07-01
In linkage disequilibrium mapping of genetic variants causally associated with phenotypes, spurious associations can potentially be generated by any of a variety of types of population structure. However, mathematical theory of the production of spurious associations has largely been restricted to population structure models that involve the sampling of individuals from a collection of discrete subpopulations. Here, we introduce a general model of spurious association in structured populations, appropriate whether the population structure involves discrete groups, admixture among such groups, or continuous variation across space. Under the assumptions of the model, we find that a single common principle--applicable to both the discrete and admixed settings as well as to spatial populations--gives a necessary and sufficient condition for the occurrence of spurious associations. Using a mathematical connection between the discrete and admixed cases, we show that in admixed populations, spurious associations are less severe than in corresponding mixtures of discrete subpopulations, especially when the variance of admixture across individuals is small. This observation, together with the results of simulations that examine the relative influences of various model parameters, has important implications for the design and analysis of genetic association studies in structured populations.
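A minimal simulation illustrates how discrete population structure alone can generate a spurious association (a toy sketch of the confounding mechanism, not the paper's general model): two subpopulations differ in allele frequency and in phenotype mean, with no causal effect of the genotype in either group, yet the pooled genotype-phenotype correlation is substantial.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000  # individuals per subpopulation

# Two discrete subpopulations differing in allele frequency AND phenotype mean,
# with no causal effect of the genotype on the phenotype in either group.
g_a = rng.binomial(2, 0.8, n)          # genotype counts, subpopulation A
g_b = rng.binomial(2, 0.2, n)          # subpopulation B
y_a = 1.0 + rng.normal(0, 1, n)        # phenotype, mean shifted in A
y_b = 0.0 + rng.normal(0, 1, n)

def corr(u, v):
    return float(np.corrcoef(u, v)[0, 1])

pooled = corr(np.concatenate([g_a, g_b]), np.concatenate([y_a, y_b]))
within_a, within_b = corr(g_a, y_a), corr(g_b, y_b)
print(f"pooled r = {pooled:.2f}; within-group r = {within_a:.2f}, {within_b:.2f}")
```

Conditioning on subpopulation membership removes the association entirely, which is the intuition behind structure-aware association methods.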
Chen, I L; Chen, J T; Kuo, S R; Liang, M T
2001-03-01
Integral equation methods have been widely used to solve interior eigenproblems and exterior acoustic problems (radiation and scattering). It was recently found that the real-part boundary element method (BEM) for the interior problem results in spurious eigensolutions if the singular (UT) or the hypersingular (LM) equation is used alone. The real-part BEM yields spurious solutions for interior problems in much the same way that the singular integral equation (UT method) yields fictitious solutions for the exterior problem. To solve this problem, a Combined Helmholtz Exterior integral Equation Formulation (CHEEF) method is proposed. Based on the CHEEF method, the spurious solutions can be filtered out if additional constraints from exterior points are chosen carefully. Finally, two examples, the eigensolutions of circular and rectangular cavities, are considered. The optimum number and proper positions of the points selected in the exterior domain are studied analytically, and numerical experiments were designed to verify the analytical results. It is worth pointing out that the nodal line of a radiation mode of the circular cavity can rotate due to symmetry, while that of the rectangular cavity lies at a fixed position.
NASA Technical Reports Server (NTRS)
Susskind, Joel; Lee, Jae; Iredell, Lena
2016-01-01
ARCs of AIRS and MERRA-2 500 mb specific humidity agree very well in terms of spatial patterns, but MERRA-2 ARCs are larger in magnitude and show a spurious moistening globally and over Central Africa. AIRS and MERRA-2 fractional cloud cover ARCs agree less well with each other. MERRA-2 shows a spurious global mean increase in cloud cover that is not found in AIRS, including a large spurious cloud increase in Central Africa. AIRS and MERRA-2 ARCs of surface skin and surface air temperatures are all similar to each other in pattern. AIRS shows a small global warming over the 13-year period, while MERRA-2 shows a small global cooling. This difference results primarily from spurious MERRA-2 temperature trends at high latitudes and over Central Africa. These differences all contribute to the spurious negative global MERRA-2 OLR trend. AIRS Version-6 confirms that 2015 is the warmest year on record and that the Earth's surface is continuing to warm.
The construction of high-accuracy schemes for acoustic equations
NASA Technical Reports Server (NTRS)
Tang, Lei; Baeder, James D.
1995-01-01
An accuracy analysis of various high-order schemes is performed from an interpolation point of view. The analysis indicates that classical high-order finite difference schemes, which use polynomial interpolation, hold high accuracy only at the nodes and are therefore not suitable for time-dependent problems. Some schemes improve their numerical accuracy within grid cells by the near-minimax approximation method, but their practical significance is degraded by retaining the same stencil as the classical schemes. One-step methods in space discretization, which use piecewise polynomial interpolation and involve data at only two points, can generate uniform accuracy over the whole grid cell and avoid spurious roots. As a result, they are more accurate and efficient than multistep methods. In particular, the Cubic-Interpolated Pseudoparticle (CIP) scheme is recommended for computational acoustics.
High-Order Semi-Discrete Central-Upwind Schemes for Multi-Dimensional Hamilton-Jacobi Equations
NASA Technical Reports Server (NTRS)
Bryson, Steve; Levy, Doron; Biegel, Bran R. (Technical Monitor)
2002-01-01
We present high-order semi-discrete central-upwind numerical schemes for approximating solutions of multi-dimensional Hamilton-Jacobi (HJ) equations. The scheme is based on the use of fifth-order central interpolants like those developed in [1], in the fluxes presented in [3]. These interpolants use the weighted essentially nonoscillatory (WENO) approach to avoid spurious oscillations near singularities, and become "central-upwind" in the semi-discrete limit. The scheme provides numerical approximations whose error is as much as an order of magnitude smaller than those of previous WENO-based fifth-order methods [2, 1]. These results are discussed via examples in one, two and three dimensions. We also present explicit N-dimensional formulas for the fluxes, discuss their monotonicity, and examine the connection between this method and that in [2].
A Dual Mode BPF with Improved Spurious Response Using DGS Cells Embedded on the Ground Plane of CPW
NASA Astrophysics Data System (ADS)
Weng, Min-Hang; Ye, Chang-Sin; Hung, Cheng-Yuan; Huang, Chun-Yueh
A novel dual mode bandpass filter (BPF) with improved spurious response is presented in this letter. To obtain low insertion loss, the coupling structure using the dual mode resonator and the feeding scheme using coplanar-waveguide (CPW) are constructed on the two sides of a dielectric substrate. A defected ground structure (DGS) is designed on the ground plane of the CPW to achieve the goal of spurious suppression of the filter. The filter has been investigated numerically and experimentally. Measured results show a good agreement with the simulated analysis.
On the statistical equivalence of restrained-ensemble simulations with the maximum entropy method
Roux, Benoît; Weare, Jonathan
2013-01-01
An issue of general interest in computer simulations is to incorporate information from experiments into a structural model. An important caveat in pursuing this goal is to avoid corrupting the resulting model with spurious and arbitrary biases. While the problem of biasing thermodynamic ensembles can be formulated rigorously using the maximum entropy method introduced by Jaynes, the approach can be cumbersome in practical applications with the need to determine multiple unknown coefficients iteratively. A popular alternative strategy to incorporate the information from experiments is to rely on restrained-ensemble molecular dynamics simulations. However, the fundamental validity of this computational strategy remains in question. Here, it is demonstrated that the statistical distribution produced by restrained-ensemble simulations is formally consistent with the maximum entropy method of Jaynes. This clarifies the underlying conditions under which restrained-ensemble simulations will yield results that are consistent with the maximum entropy method. PMID:23464140
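The maximum entropy method referenced above can be sketched in a few lines for a single observable (an illustrative toy, not the restrained-ensemble machinery of the paper; the Gaussian samples and target value are invented): the maximum entropy ensemble reweights each unbiased sample by exp(-λO), with λ chosen so that the reweighted average matches the experimental value.

```python
import numpy as np

rng = np.random.default_rng(3)

# Unbiased ensemble samples of some observable O (illustrative Gaussian prior).
O = rng.normal(0.0, 1.0, 100_000)
target = 0.5  # "experimental" value to be imposed on <O>

def reweighted_mean(lam):
    a = -lam * O
    w = np.exp(a - a.max())            # shift exponent for numerical stability
    return float(np.sum(w * O) / np.sum(w))

# <O>_lam decreases monotonically with lam; solve for lam by bisection.
lo, hi = -10.0, 10.0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if reweighted_mean(mid) > target:
        lo = mid
    else:
        hi = mid
lam = 0.5 * (lo + hi)
print(f"lambda = {lam:.3f}, reweighted <O> = {reweighted_mean(lam):.4f}")
```

With several observables the single bisection becomes a multidimensional root-finding problem for the vector of Lagrange multipliers, which is the iterative determination of unknown coefficients the abstract calls cumbersome.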
Preservation of physical properties with Ensemble-type Kalman Filter Algorithms
NASA Astrophysics Data System (ADS)
Janjic, T.
2017-12-01
We show the behavior of the localized Ensemble Kalman filter (EnKF) with respect to preservation of positivity and conservation of mass, energy and enstrophy in toy models that conserve these properties. In order to preserve physical properties in the analysis, as well as to deal with non-Gaussianity in an EnKF framework, Janjic et al. (2014) proposed the use of physically based constraints in the analysis step. In particular, constraints were used to ensure that the ensemble members and the ensemble mean conserve mass and remain nonnegative through measurement updates. In that study, mass and positivity were both preserved by formulating the filter update as a set of quadratic programming problems that incorporate nonnegativity constraints. Simple numerical experiments indicated that this approach can have a significant positive impact on the posterior ensemble distribution, giving results that are more physically plausible both for individual ensemble members and for the ensemble mean. Moreover, in experiments designed to mimic the most important characteristics of convective motion, the mass conservation- and positivity-constrained rain significantly suppresses the noise seen in localized EnKF results. This is highly desirable in order to avoid spurious storms appearing in forecasts started from these initial conditions (Lange and Craig 2014). In addition, the root mean square error is reduced for all fields and the total mass of rain is correctly simulated. Similarly, the enstrophy, divergence, and energy spectra can be strongly affected by the localization radius, thinning interval, and inflation, and depend on which variable is observed (Zeng and Janjic 2016). We therefore constructed an ensemble data assimilation algorithm that conserves mass, total energy and enstrophy (Zeng et al. 2017).
In 2D shallow water model experiments, we find that enforcing conservation of enstrophy within the data assimilation effectively avoids a spurious energy cascade in the rotational part of the flow and thereby successfully suppresses the noise generated by the data assimilation algorithm. The 14-day deterministic and ensemble free forecasts, starting from initial conditions constrained by both total energy and enstrophy, produce the best predictions.
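A toy version of the constrained analysis step can be written as a small quadratic program (a sketch with invented numbers, using scipy's SLSQP solver rather than whatever QP solver the study used): project the unconstrained analysis state onto the set of nonnegative states with the prescribed total mass, staying as close as possible to the unconstrained result.

```python
import numpy as np
from scipy.optimize import minimize

# An unconstrained EnKF analysis increment may produce negative rain and
# change the total mass; both are unphysical.
x_a = np.array([0.8, -0.2, 0.3, 1.1])   # analysis state (e.g. rain in 4 cells)
mass0 = 2.2                              # background mass to be conserved

# Project x_a onto {x >= 0, sum(x) = mass0}.
res = minimize(
    lambda x: np.sum((x - x_a) ** 2),
    x0=np.clip(x_a, 0, None),
    method="SLSQP",
    bounds=[(0, None)] * len(x_a),
    constraints=[{"type": "eq", "fun": lambda x: np.sum(x) - mass0}],
)
x_c = res.x
print("constrained analysis:", np.round(x_c, 3))
```

In the full filter this projection is applied to each ensemble member (and the mean), which is what makes the update a *set* of quadratic programming problems.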
Locating Microseism Sources Using Spurious Arrivals in Intercontinental Noise Correlations
NASA Astrophysics Data System (ADS)
Retailleau, Lise; Boué, Pierre; Stehly, Laurent; Campillo, Michel
2017-10-01
The accuracy of Green's functions retrieved from seismic noise correlations in the microseism frequency band is limited by the uneven distribution of microseism sources at the surface of the Earth. As a result, correlation functions are often biased as compared to the expected Green's functions, and they can include spurious arrivals. These spurious arrivals are seismic arrivals that are visible on the correlation and do not belong to the theoretical impulse response. In this article, we propose to use Rayleigh wave spurious arrivals detected on correlation functions computed between European and United States seismic stations to locate microseism sources in the Atlantic Ocean. We perform a slant stack on a time distance gather of correlations obtained from an array of stations that comprises a regional deployment and a distant station. The arrival times and the apparent slowness of the spurious arrivals lead to the location of their source, which is obtained through a grid search procedure. We discuss improvements in the location through this methodology as compared to classical back projection of microseism energy. This method is interesting because it only requires an array and a distant station on each side of an ocean, conditions that can be met relatively easily.
Chou, Chia-Chun; Kouri, Donald J
2013-04-25
We show that there exist spurious states for the sector-two tensor Hamiltonian in multidimensional supersymmetric quantum mechanics. For one-dimensional supersymmetric quantum mechanics on an infinite domain, the sector-one and sector-two Hamiltonians have identical spectra, with the exception of the ground state of sector one. For tensorial multidimensional supersymmetric quantum mechanics, there exist normalizable spurious states for the sector-two Hamiltonian with energy equal to the ground-state energy of sector one. These spurious states are annihilated by the adjoint charge operator, and hence they do not correspond to physical states of the original Hamiltonian. The Hermitian property of the sector-two Hamiltonian implies the orthogonality of spurious and physical states. In addition, we develop a method for constructing a specific form of the spurious states for any quantum system, and we generate several spurious states for a two-dimensional anharmonic oscillator system and for the hydrogen atom.
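In the familiar one-dimensional factorization notation (a sketch of the standard sector structure; the paper's tensorial multidimensional setting is more involved), the origin of such states can be seen directly:

```latex
H_1 = A^{\dagger}A, \qquad H_2 = A A^{\dagger}, \qquad
A = \frac{d}{dx} + W(x), \qquad A^{\dagger} = -\frac{d}{dx} + W(x).
```

If a normalizable state satisfies the adjoint-charge condition $A^{\dagger}\phi = 0$, then

```latex
H_2\,\phi = A A^{\dagger}\phi = 0,
```

so $\phi$ is degenerate with the sector-one ground state (energy zero for unbroken supersymmetry) yet, being annihilated by $A^{\dagger}$, it maps to no physical eigenstate of $H_1$.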
NASA Technical Reports Server (NTRS)
Kast, J. W.
1975-01-01
We consider the design of a Kirkpatrick-Baez grazing-incidence X-ray telescope to be used in a scan of the sky and analyze the distribution of both properly reflected rays and spurious images over the field of view. To obtain maximum effective area over the field of view, it is necessary to increase the spacing between plates for a scanning telescope as compared to a pointing telescope. Spurious images are necessarily present in this type of lens, but they can be eliminated from the field of view by adding properly located baffles or collimators. Results of a computer design are presented.
Attenuation of spurious responses in electromechanical filters
Olsson, Roy H.; Hietala, Vincent M.
2018-04-10
A spur-cancelling electromechanical filter includes a first resonator having a first resonant frequency and one or more first spurious responses, and, electrically connected to the first resonator, a second resonator having a second resonant frequency and one or more second spurious responses. The first and second resonant frequencies are approximately identical, but the first resonator is physically non-identical to the second resonator. The difference between the resonators makes their respective spurious responses different. This allows filters constructed from a cascade of such resonators to exhibit reduced spurious responses.
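The cancellation principle in the last two sentences can be illustrated numerically (a toy model with Lorentzian resonances at invented frequencies, not a physical resonator design): each resonator's spur falls in the other's stopband, so the cascade suppresses both spurs while preserving the shared passband.

```python
import numpy as np

def lorentzian(f, f0, gamma):
    return gamma**2 / ((f - f0)**2 + gamma**2)

f = np.linspace(0.5, 2.0, 3001)  # GHz, illustrative grid

# Two resonators with the same passband but physically different designs,
# so their spurious responses land at different frequencies.
h1 = lorentzian(f, 1.0, 0.02) + 0.3 * lorentzian(f, 1.3, 0.01)  # spur at 1.3 GHz
h2 = lorentzian(f, 1.0, 0.02) + 0.3 * lorentzian(f, 1.5, 0.01)  # spur at 1.5 GHz
cascade = h1 * h2

def at(freq):
    return cascade[np.argmin(np.abs(f - freq))]

print(f"passband: {at(1.0):.3f}, spur1: {at(1.3):.4f}, spur2: {at(1.5):.4f}")
```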
Generation of constant-amplitude radio-frequency sweeps at a tunnel junction for spin resonance STM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paul, William; Lutz, Christopher P.; Heinrich, Andreas J.
2016-07-15
We describe the measurement and successful compensation of the radio-frequency transfer function of a scanning tunneling microscope over a wide frequency range (15.5–35.5 GHz) and with high dynamic range (>50 dB). The precise compensation of cabling resonances and attenuations is critical for the production of constant-voltage frequency sweeps for electric-field driven electron spin resonance (ESR) experiments. We also demonstrate that a well-calibrated tunnel junction voltage is necessary to avoid spurious ESR peaks that can arise due to a non-flat transfer function.
Experimental and environmental factors affect spurious detection of ecological thresholds
Daily, Jonathan P.; Hitt, Nathaniel P.; Smith, David; Snyder, Craig D.
2012-01-01
Threshold detection methods are increasingly popular for assessing nonlinear responses to environmental change, but their statistical performance remains poorly understood. We simulated linear change in stream benthic macroinvertebrate communities and evaluated the performance of commonly used threshold detection methods based on model fitting (piecewise quantile regression [PQR]), data partitioning (nonparametric change point analysis [NCPA]), and a hybrid approach (significant zero crossings [SiZer]). We demonstrated that false detection of ecological thresholds (type I errors) and inferences on threshold locations are influenced by sample size, rate of linear change, and frequency of observations across the environmental gradient (i.e., sample-environment distribution, SED). However, the relative importance of these factors varied among statistical methods and between inference types. False detection rates were influenced primarily by user-selected parameters for PQR (τ) and SiZer (bandwidth) and secondarily by sample size (for PQR) and SED (for SiZer). In contrast, the location of reported thresholds was influenced primarily by SED. Bootstrapped confidence intervals for NCPA threshold locations revealed strong correspondence to SED. We conclude that the choice of statistical methods for threshold detection should be matched to experimental and environmental constraints to minimize false detection rates and avoid spurious inferences regarding threshold location.
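One reason false threshold detection is easy to commit can be shown directly (a toy grid-search sketch, not an implementation of PQR, NCPA, or SiZer; data and seed are invented): on purely linear data, the best two-segment fit always beats the single line, so a "threshold" can always be reported unless its significance is properly tested.

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.sort(rng.uniform(0, 1, 100))        # environmental gradient
y = 2.0 * x + rng.normal(0, 0.5, 100)      # purely linear response, no threshold

def sse_linear(x, y):
    return float(np.sum((y - np.polyval(np.polyfit(x, y, 1), x)) ** 2))

def sse_two_segment(x, y):
    """Best piecewise-linear fit over all candidate breakpoints (grid search)."""
    best, bp = np.inf, None
    for k in range(10, len(x) - 10):
        s = sse_linear(x[:k], y[:k]) + sse_linear(x[k:], y[k:])
        if s < best:
            best, bp = s, x[k]
    return best, bp

s1 = sse_linear(x, y)
s2, bp_best = sse_two_segment(x, y)
print(f"linear SSE {s1:.2f} vs two-segment SSE {s2:.2f}; 'threshold' at x={bp_best:.2f}")
```

The reported breakpoint location here is driven entirely by noise and by where the observations happen to fall, mirroring the paper's finding that threshold-location inferences track the sample-environment distribution.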
Kumar Myakalwar, Ashwin; Spegazzini, Nicolas; Zhang, Chi; Kumar Anubham, Siva; Dasari, Ramachandra R; Barman, Ishan; Kumar Gundawar, Manoj
2015-08-19
Despite its intrinsic advantages, translation of laser-induced breakdown spectroscopy for material identification has often been impeded by the lack of robustness of the developed classification models, frequently due to the presence of spurious correlations. While a number of classifiers exhibiting high discriminatory power have been reported, efforts to establish the subset of relevant spectral features that enable a fundamental interpretation of the segmentation capability and avoid the 'curse of dimensionality' have been lacking. Using LIBS data acquired from a set of secondary explosives, we investigate judicious feature selection approaches and architect two different chemometric classifiers, based on feature selection through prerequisite knowledge of the sample composition and through a genetic algorithm, respectively. While the full spectral input results in a classification rate of ca. 92%, selection of only the carbon-to-hydrogen spectral window results in near-identical performance. Importantly, the genetic-algorithm-derived classifier shows a statistically significant improvement, to ca. 94% accuracy, for prospective classification, even though the number of features used is an order of magnitude smaller. Our findings demonstrate the impact of rigorous feature selection in LIBS and also hint at the feasibility of using a discrete filter-based detector, thereby enabling a cheaper and more compact system more amenable to field operations.
Residual Negative Pressure in Vacuum Tubes Might Increase the Risk of Spurious Hemolysis.
Xiao, Tong-Tong; Zhang, Qiao-Xin; Hu, Jing; Ouyang, Hui-Zhen; Cai, Ying-Mu
2017-05-01
We planned a study to establish whether spurious hemolysis may occur when negative pressure remains in vacuum tubes. Four tubes with different vacuum levels (-54, -65, -74, and -86 kPa) were used to examine blood drawn from one healthy volunteer; the tubes were allowed to stand for different times (1, 2, 3, and 4 hours). The plasma was separated and immediately tested for free hemoglobin (FHb). Thirty patients were enrolled in a verification experiment. The degree of hemolysis observed was greater when the remaining negative pressure was higher. Significant differences were recorded in the verification experiment. The results suggest that residual negative pressure might increase the risk of spurious hemolysis.
Smith, Geoff; Murray, Heather; Brennan, Stephen O
2013-01-01
Commonly used methods for assay of haemoglobin A1c (HbA1c) are susceptible to interference from the presence of haemoglobin variants. In many systems, the common variants can be identified, but scientists and pathologists must remain vigilant for more subtle variants that may result in spuriously high or low HbA1c values. It is clearly important to recognize these events whether HbA1c is being used as a monitoring tool or, as is increasingly the case, for diagnostic purposes. We report a patient with a rare haemoglobin variant (Hb Sinai-Baltimore) that resulted in spuriously low values of HbA1c when assayed using ion exchange chromatography, and the steps taken to elucidate the nature of the variant.
Use of a running coupling in the NLO calculation of forward hadron production
NASA Astrophysics Data System (ADS)
Ducloué, B.; Iancu, E.; Lappi, T.; Mueller, A. H.; Soyez, G.; Triantafyllopoulos, D. N.; Zhu, Y.
2018-03-01
We address and solve a puzzle raised by a recent calculation [1] of the cross section for particle production in proton-nucleus collisions to next-to-leading order: the numerical results show an unreasonably large dependence upon the choice of a prescription for the QCD running coupling, which spoils the predictive power of the calculation. Specifically, the results obtained with a prescription formulated in the transverse coordinate space differ by 1 to 2 orders of magnitude from those obtained with a prescription in momentum space. We show that this discrepancy is an artifact of the interplay between the asymptotic freedom of QCD and the Fourier transform from coordinate space to momentum space. When used in coordinate space, the running coupling can act as a fictitious potential which mimics hard scattering and thus introduces a spurious contribution to the cross section. We identify a new coordinate-space prescription, which avoids this problem, and leads to results consistent with those obtained with the momentum-space prescription.
Nonlinear filtering properties of detrended fluctuation analysis
NASA Astrophysics Data System (ADS)
Kiyono, Ken; Tsujimoto, Yutaka
2016-11-01
Detrended fluctuation analysis (DFA) has been widely used for quantifying long-range correlation and fractal scaling behavior. In DFA, to avoid spurious detection of scaling behavior caused by a nonstationary trend embedded in the analyzed time series, a detrending procedure using piecewise least-squares fitting has been applied. However, it has been pointed out that the nonlinear filtering properties involved with detrending may induce instabilities in the scaling exponent estimation. To understand this issue, we investigate the adverse effects of the DFA detrending procedure on the statistical estimation. We show that the detrending procedure using piecewise least-squares fitting results in the nonuniformly weighted estimation of the root-mean-square deviation and that this property could induce an increase in the estimation error. In addition, for comparison purposes, we investigate the performance of a centered detrending moving average analysis with a linear detrending filter and sliding window DFA and show that these methods have better performance than the standard DFA.
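For reference, the standard DFA procedure analyzed above is short enough to state in full (a minimal DFA1 implementation with linear detrending per box; scales and seed are arbitrary). Applied to white noise it should recover a scaling exponent near 0.5.

```python
import numpy as np

def dfa1(x, scales):
    """Standard DFA with piecewise linear detrending (DFA1)."""
    y = np.cumsum(x - np.mean(x))            # integrated profile
    F = []
    for n in scales:
        m = len(y) // n
        segs = y[: m * n].reshape(m, n)
        t = np.arange(n)
        res = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)     # local linear (least-squares) trend
            res.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(res)))
    return np.asarray(F)

rng = np.random.default_rng(5)
x = rng.normal(0, 1, 2**14)                  # white noise: expect alpha ~ 0.5
scales = np.unique(np.logspace(1, 3, 16).astype(int))
F = dfa1(x, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
print(f"estimated DFA scaling exponent alpha = {alpha:.2f}")
```

The per-box least-squares fit is exactly the nonlinear filtering step whose nonuniform weighting of deviations the paper identifies as a source of estimation error.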
NASA Astrophysics Data System (ADS)
Gibson, Angus H.; Hogg, Andrew McC.; Kiss, Andrew E.; Shakespeare, Callum J.; Adcroft, Alistair
2017-11-01
We examine the separate contributions to spurious mixing from horizontal and vertical processes in an ALE ocean model, MOM6, using reference potential energy (RPE). The RPE is a global diagnostic which changes only due to mixing between density classes. We extend this diagnostic to a sub-timestep timescale in order to individually separate contributions to spurious mixing through horizontal (tracer advection) and vertical (regridding/remapping) processes within the model. We both evaluate the overall spurious mixing in MOM6 against previously published output from other models (MOM5, MITGCM and MPAS-O), and investigate impacts on the components of spurious mixing in MOM6 across a suite of test cases: a lock exchange, internal wave propagation, and a baroclinically-unstable eddying channel. The split RPE diagnostic demonstrates that the spurious mixing in a lock exchange test case is dominated by horizontal tracer advection, due to the spatial variability in the velocity field. In contrast, the vertical component of spurious mixing dominates in an internal waves test case. MOM6 performs well in this test case owing to its quasi-Lagrangian implementation of ALE. Finally, the effects of model resolution are examined in a baroclinic eddies test case. In particular, the vertical component of spurious mixing dominates as horizontal resolution increases, an important consideration as global models evolve towards higher horizontal resolutions.
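The RPE diagnostic has a compact definition that is easy to sketch (a toy column of equal-volume cells with invented densities; the model diagnostic is of course computed over the full 3D state): adiabatically resort the density field into its stably stratified reference profile and sum the potential energy. Any mixing between density classes, physical or spurious, raises it.

```python
import numpy as np

G = 9.81  # m s^-2

def rpe(rho, dz=1.0):
    """Reference potential energy of a column of equal-volume cells:
    densities are adiabatically resorted so the densest water is deepest."""
    z = (np.arange(len(rho)) + 0.5) * dz      # cell-centre heights (z upward)
    rho_sorted = np.sort(rho)[::-1]           # densest first = deepest
    return float(G * np.sum(rho_sorted * z * dz))

rho = np.array([1020.0, 1000.0, 1021.0, 1001.0])  # unsorted column densities
rpe_initial = rpe(rho)
rho_mixed = np.full_like(rho, rho.mean())          # fully (spuriously) mixed
rpe_mixed = rpe(rho_mixed)
print(f"RPE before {rpe_initial:.1f}, after mixing {rpe_mixed:.1f}")
```

Because resorting removes all dynamically available energy, the increase in RPE isolates irreversible mixing, which is what lets the sub-timestep version of the diagnostic attribute spurious mixing separately to advection and to regridding/remapping.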
Miniature quadrupole mass spectrometer having a cold cathode ionization source
Felter, Thomas E.
2002-01-01
An improved quadrupole mass spectrometer is described. The improvement lies in the substitution of the conventional hot-filament electron source with a cold cathode field emitter array (FEA), which in turn allows operating a small QMS at much higher internal pressures than are currently achievable. Eliminating the hot filament avoids altogether such problems as thermally "cracking" delicate analyte molecules, outgassing of a hot filament, high power requirements, filament contamination by outgassed species, and spurious em fields. In addition, the ability to produce FEAs using well-known and well-developed photolithographic techniques permits building a QMS having multiple redundancies of the ionization source at very low additional cost.
A spectral, quasi-cylindrical and dispersion-free Particle-In-Cell algorithm
Lehe, Remi; Kirchen, Manuel; Andriyash, Igor A.; ...
2016-02-17
We propose a spectral Particle-In-Cell (PIC) algorithm that is based on the combination of a Hankel transform and a Fourier transform. For physical problems that have close-to-cylindrical symmetry, this algorithm can be much faster than full 3D PIC algorithms. In addition, unlike standard finite-difference PIC codes, the proposed algorithm is free of spurious numerical dispersion in vacuum. The algorithm is benchmarked in several situations of interest for laser-plasma interactions. These benchmarks show that it avoids a number of numerical artifacts that would otherwise affect the physics in a standard PIC algorithm, including the zero-order numerical Cherenkov effect.
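The numerical dispersion that the spectral algorithm avoids can be quantified for the standard 1D Yee (FDTD) scheme (a textbook dispersion relation, not the authors' code; parameters are illustrative): short wavelengths propagate slower than c on the grid, which is what seeds the numerical Cherenkov effect for relativistic particles.

```python
import numpy as np

# Numerical dispersion relation of the 1D Yee (FDTD) scheme in vacuum:
#   sin(w*dt/2) = (c*dt/dx) * sin(k*dx/2)
c, dx = 1.0, 1.0
dt = 0.5 * dx / c                           # Courant number 0.5

def phase_velocity(k):
    w = (2.0 / dt) * np.arcsin((c * dt / dx) * np.sin(k * dx / 2.0))
    return w / k

v_low = phase_velocity(0.01 / dx)           # long wavelength: nearly exact
v_high = phase_velocity(0.9 * np.pi / dx)   # near grid Nyquist: slowed down
print(f"v/c at low k: {v_low:.4f}; near Nyquist: {v_high:.3f}")
```

A spectral solver evaluates the spatial derivative exactly in k-space, so its vacuum dispersion relation is w = c*k at every resolved wavelength.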
Charge exchange collisions of slow C6+ with atomic and molecular H
NASA Astrophysics Data System (ADS)
Saha, Bidhan C.; Guevara, Nicolais L.; Sabin, John R.; Deumens, Erik; Öhrn, Yngve
2016-04-01
Charge exchange in collisions of C6+ ions with H and H2 is investigated theoretically at projectile energies 0.1 < E < 10 keV/amu using electron nuclear dynamics (END), a semi-classical approximation that not only includes electron translation factors to avoid spurious couplings but also employs full dynamical trajectories to treat nuclear motion. Both total and partial cross sections are reported for the collision of C6+ ions with atomic and molecular hydrogen. A comparison with other theoretical and experimental results shows, in general, good agreement except at the very low energies considered here. For H2, the one- and two-electron charge exchange cross sections are calculated and compared with other theoretical and experimental results. Small but non-negligible isotope effects are found at the lowest energy studied in the charge transfer of C6+ with H. In the low-energy region, H2 shows larger isotope effects than atomic H because the polarizability effect outweighs the mass effect.
47 CFR 2.1053 - Measurements required: Field strength of spurious radiation.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 1 2011-10-01 2011-10-01 false Measurements required: Field strength of spurious radiation. 2.1053 Section 2.1053 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL... Procedures Certification § 2.1053 Measurements required: Field strength of spurious radiation. (a...
Rusterholz, Thomas; Achermann, Peter; Dürr, Roland; Koenig, Thomas; Tarokh, Leila
2017-06-01
Investigating functional connectivity between brain networks has become an area of interest in neuroscience. Several methods for investigating connectivity have recently been developed; however, these techniques need to be applied with care. We demonstrate that global field synchronization (GFS), a global measure of phase alignment in the EEG as a function of frequency, must be applied with signal processing principles in mind in order to yield valid results. Multichannel EEG (27 derivations) was analyzed for GFS based on the complex spectrum derived by the fast Fourier transform (FFT). We examined the effect of window functions on GFS, in particular of non-rectangular windows. Applying a rectangular window when calculating the FFT revealed high GFS values at high frequencies (>15 Hz) that were highly correlated (r=0.9) with spectral power in the lower frequency range (0.75-4.5 Hz) and tracked the depth of sleep. This turned out to be spurious synchronization. With a non-rectangular window (Tukey or Hanning window) this high-frequency synchronization vanished. Both GFS and power density spectra differed significantly between rectangular and non-rectangular windows. Previous papers using GFS typically did not specify the applied window and may have used a rectangular window function; the demonstrated impact of the window function therefore raises the question of the validity of some previous findings at higher frequencies. We demonstrated that it is crucial to apply an appropriate window function when determining synchronization measures based on a spectral approach, to avoid spurious synchronization in the beta/gamma range. Copyright © 2017 Elsevier B.V. All rights reserved.
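One common formulation of GFS computes, at each frequency, the eigenvalue ratio of the 2x2 covariance of the channels' (Re, Im) Fourier coefficients. The sketch below follows that formulation (assumed here; the function name is hypothetical) and exposes the window-function choice discussed above:

```python
import numpy as np

def gfs(eeg, fs, window="hann"):
    """Global field synchronization per FFT frequency bin.

    eeg: (n_channels, n_samples) array. At each frequency, GFS is
    (l1 - l2) / (l1 + l2), with l1 >= l2 the eigenvalues of the 2x2
    covariance of the channels' (Re, Im) Fourier coefficients -- one
    common formulation of GFS, assumed here.
    """
    n = eeg.shape[1]
    w = np.hanning(n) if window == "hann" else np.ones(n)  # taper vs rectangular
    spec = np.fft.rfft(eeg * w, axis=1)        # (n_channels, n_freqs)
    vals = np.empty(spec.shape[1])
    for k in range(spec.shape[1]):
        xy = np.column_stack([spec[:, k].real, spec[:, k].imag])
        lam = np.linalg.eigvalsh(np.cov(xy, rowvar=False))  # ascending order
        vals[k] = (lam[1] - lam[0]) / (lam[1] + lam[0] + 1e-30)
    return vals, np.fft.rfftfreq(n, 1.0 / fs)
```

A genuinely phase-aligned component across channels yields GFS near 1 at its frequency bin; with a rectangular window, spectral leakage from strong low-frequency activity can inflate GFS at high-frequency bins that carry little genuine signal, which is the spurious synchronization described above.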
Direct Numerical Simulation of Low Capillary Number Pore Scale Flows
NASA Astrophysics Data System (ADS)
Esmaeilzadeh, S.; Soulaine, C.; Tchelepi, H.
2017-12-01
The arrangement of void spaces and the granular structure of a porous medium determine multiple macroscopic properties of the rock, such as porosity, capillary pressure, and relative permeability. Therefore, it is important to study the microscopic structure of reservoir pores and understand the dynamics of fluid displacements through them. One approach for doing this is direct numerical simulation of pore-scale flow, which requires a robust numerical tool for predicting fluid dynamics and a detailed understanding of the physical processes occurring at the pore scale. In pore-scale flows with a low capillary number, Eulerian multiphase methods are well known to produce additional vorticity close to the interface. This is mainly due to discretization errors, which lead to an imbalance of capillary pressure and surface tension forces that causes unphysical spurious currents. At the pore scale, these spurious currents can become significantly stronger than the average velocity in the phases and lead to unphysical displacement of the interface. In this work, we first investigate the capability of the algebraic Volume of Fluid (VOF) method in OpenFOAM for low-capillary-number pore-scale flow simulations. Afterward, we compare the VOF results with a Coupled Level-Set Volume of Fluid (CLSVOF) method and the Iso-Advector method. The former has been shown to reduce VOF's unphysical spurious currents in some cases, and both are known to capture interfaces more sharply than VOF. Finally, we investigate whether the use of CLSVOF or Iso-Advector leads to smaller spurious velocities and more accurate results for capillary-driven pore-scale multiphase flows. Keywords: Pore-scale multiphase flow, Capillary driven flows, Spurious currents, OpenFOAM
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sweby, P. K.
1995-01-01
The global asymptotic nonlinear behavior of 11 explicit and implicit time discretizations for four 2 x 2 systems of first-order autonomous nonlinear ordinary differential equations (ODEs) is analyzed. The objectives are to gain a basic understanding of the difference in the dynamics of numerics between the scalars and systems of nonlinear autonomous ODEs and to set a baseline global asymptotic solution behavior of these schemes for practical computations in computational fluid dynamics. We show how 'numerical' basins of attraction can complement the bifurcation diagrams in gaining more detailed global asymptotic behavior of time discretizations for nonlinear differential equations (DEs). We show how in the presence of spurious asymptotes the basins of the true stable steady states can be segmented by the basins of the spurious stable and unstable asymptotes. One major consequence of this phenomenon which is not commonly known is that this spurious behavior can result in a dramatic distortion and, in most cases, a dramatic shrinkage and segmentation of the basin of attraction of the true solution for finite time steps. Such distortion, shrinkage and segmentation of the numerical basins of attraction will occur regardless of the stability of the spurious asymptotes, and will occur for unconditionally stable implicit linear multistep methods. In other words, for the same (common) steady-state solution the associated basin of attraction of the DE might be very different from the discretized counterparts and the numerical basin of attraction can be very different from numerical method to numerical method. The results can be used as an explanation for possible causes of error, and slow convergence and nonconvergence of steady-state numerical solutions when using the time-dependent approach for nonlinear hyperbolic or parabolic PDEs.
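The flavor of a spurious asymptote can be seen in a toy example: explicit Euler applied to the logistic ODE u' = u(1 - u) is conjugate to the logistic map, and for a large enough time step the numerics settle on a stable period-2 orbit that is not a steady state of the ODE. This sketch is only illustrative and is not one of the four 2 x 2 systems analyzed in the paper:

```python
def euler_orbit(u0, h, n=4000):
    """Iterate explicit Euler, u_{k+1} = u_k + h*u_k*(1 - u_k), for the
    logistic ODE u' = u(1 - u); the true steady states are u = 0 and u = 1."""
    u = u0
    for _ in range(n):
        u = u + h * u * (1.0 - u)
    return u

# Small step: the numerics find the true steady state u = 1.
u_true = euler_orbit(0.3, 0.5)
# Large step (h > 2): the Euler map has a stable period-2 orbit -- a
# spurious asymptote that is not a steady state of the ODE, so even-
# and odd-numbered iterates converge to two different values.
u_even = euler_orbit(0.3, 2.2, n=4000)
u_odd = euler_orbit(0.3, 2.2, n=4001)
```

The same initial condition thus lies in the basin of the true steady state for one time step and in the basin of a spurious asymptote for another, which is the basin distortion and segmentation the paper describes.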
High order filtering methods for approximating hyperbolic systems of conservation laws
NASA Technical Reports Server (NTRS)
Lafon, F.; Osher, S.
1990-01-01
In the computation of discontinuous solutions of hyperbolic systems of conservation laws, the recently developed essentially non-oscillatory (ENO) schemes appear to be very useful. However, they are computationally costly compared to simple central difference methods. The filtering method developed here uses simple central differencing of arbitrarily high order accuracy, except where a novel local test indicates the development of spurious oscillations. At those points, the full ENO apparatus is used, maintaining the high order of accuracy but removing spurious oscillations. Numerical results indicate the success of the method: high order of accuracy was obtained in regions of smooth flow without spurious oscillations for a wide range of problems, with a significant speed-up of generally a factor of almost three over the full ENO method.
Gene Unprediction with Spurio: A tool to identify spurious protein sequences.
Höps, Wolfram; Jeffryes, Matt; Bateman, Alex
2018-01-01
We now have access to the sequences of tens of millions of proteins. These protein sequences are essential for modern molecular biology and computational biology. The vast majority of protein sequences are derived from gene prediction tools and have no experimental evidence supporting their translation. Despite the increasing accuracy of gene prediction tools, there likely exists a large number of spurious protein predictions in the sequence databases. We have developed the Spurio tool to help identify spurious protein predictions in prokaryotes. Spurio searches the query protein sequence against a prokaryotic nucleotide database using tblastn and identifies homologous sequences. The tblastn matches are used to score the query sequence's likelihood of being a spurious protein prediction using a Gaussian process model. The most informative feature is the appearance of stop codons within the presumed translation of homologous DNA sequences. Benchmarking shows that the Spurio tool is able to distinguish spurious from true proteins. However, transposon proteins are prone to being predicted as spurious because of the frequency of degraded homologs found in the DNA sequence databases. Our initial experiments suggest that less than 1% of the proteins in the UniProtKB sequence database are likely to be spurious and that Spurio is able to identify over 60 times more spurious proteins than the AntiFam resource. The Spurio software and source code are available under an MIT license at the following URL: https://bitbucket.org/bateman-group/spurio.
Spurious Numerical Solutions Of Differential Equations
NASA Technical Reports Server (NTRS)
Lafon, A.; Yee, H. C.
1995-01-01
Paper presents detailed study of spurious steady-state numerical solutions of differential equations that contain nonlinear source terms. Main objectives of this study are (1) to investigate how well numerical steady-state solutions of model nonlinear reaction/convection boundary-value problem mimic true steady-state solutions and (2) to relate findings of this investigation to implications for interpretation of numerical results from computational-fluid-dynamics algorithms and computer codes used to simulate reacting flows.
Spurious-Mode Control of Same-Phase Drive-Type Ultrasonic Motor
NASA Astrophysics Data System (ADS)
Aoyagi, Manabu; Watanabe, Hiroyuki; Tomikawa, Yoshiro; Takano, Takehiro
2002-05-01
A same-phase drive-type ultrasonic motor requires only a single power source for its operation. In particular, self-oscillation driving is useful for driving a small ultrasonic motor. This type of ultrasonic motor has a spurious mode close to the operating frequency of its stator vibrator. The spurious vibration mode affects the oscillation frequency of a self-oscillation drive circuit, and hence should be restrained or moved away from the neighborhood of the operating frequency. In this paper, we report that an inductor connected to an electrical control terminal, provided on the standby electrodes used for reverse rotation, controls only the spurious vibration mode. The effect of the inductor connected to the control terminal was clarified by simulation of an equivalent circuit and by experiments.
Spurious states in boson calculations — spectre or reality?
NASA Astrophysics Data System (ADS)
Navrátil, P.; Geyer, H. B.; Dobeš, J.; Dobaczewski, J.
1994-03-01
We discuss some prevailing misconceptions about the possibility that spurious states may in general contaminate boson calculations of fermion systems on either the phenomenological or microscopic level. Amongst other things we point out that the possible appearance of spurious states is not inherently a mapping problem, but rather linked to a choice of basis in the boson Fock space. This choice is mostly dictated by convenience or the aim to make direct contact with phenomenology. Furthermore, neither well established collectivity, nor the construction of boson operators in the Marumori or OAI fashion can as such serve as a guarantee against the appearance of spurious boson states. Within an SO(12) generalisation of the Ginocchio model where collective decoupling is complete, we illustrate how spurious states may appear in an IBM-type sdg-boson analysis. We also show how these states may be identified on the boson level. This enables us to present an example of an sdg-spectrum which, although it may be reasonably correlated with experimental data, nevertheless has the first few low lying states all spurious when interpreted from the microscopic point of view. We briefly speculate about the possibility that certain types of truncation may in fact automatically circumvent the appearance of spurious states.
NASA Astrophysics Data System (ADS)
Navas-Montilla, A.; Murillo, J.
2017-07-01
When designing a numerical scheme for the resolution of conservation laws, the selection of a particular source term discretization (STD) may seem irrelevant whenever it ensures convergence with mesh refinement, but it has a decisive impact on the solution. In the framework of the Shallow Water Equations (SWE), well-balanced STDs based on quiescent equilibrium are unable to converge to physically based solutions, which can be constructed considering energy arguments. Energy-based discretizations can be designed assuming dissipation or conservation, but in any case the STD procedure should not be merely based on ad hoc approximations. The STD proposed in this work is derived from the Generalized Hugoniot Locus obtained from the Generalized Rankine-Hugoniot conditions and the Integral Curve across the contact wave associated with the bed step. In any case, the STD must allow energy-dissipative solutions: steady and unsteady hydraulic jumps, for which some numerical anomalies have been documented in the literature. These anomalies are the incorrect positioning of steady jumps and the presence of a spurious spike of discharge inside the cell containing the jump. The former issue can be addressed by proposing a modification of the energy-conservative STD that ensures a correct dissipation rate across the hydraulic jump, whereas the latter is of greater complexity and cannot be fixed by simply choosing a suitable STD, as more variables are involved. The spurious spike of discharge is a well-known problem in the scientific community, also called the slowly-moving shock anomaly, and is produced by a nonlinearity of the Hugoniot locus connecting the states at both sides of the jump. However, this issue seems to be more a feature than a problem when considering steady solutions of the SWE containing hydraulic jumps, and the presence of the spurious spike in the discharge has been taken for granted and has become a feature of the solution. Even though it does not disturb the rest of the solution in steady cases, in transient cases it produces a very undesirable shedding of spurious oscillations downstream that should be circumvented. Based on spike-reducing techniques (originally designed for the homogeneous Euler equations) that construct interpolated fluxes in the untrustworthy regions, we design a novel Roe-type scheme for the SWE with discontinuous topography that reduces the presence of the aforementioned spurious spike. The resulting spike-reducing method, in combination with the proposed STD, ensures an accurate positioning of steady jumps and provides convergence with mesh refinement, which was not possible with previous methods that cannot avoid the spike.
Method and apparatus for powering an electrodeless lamp with reduced radio frequency interference
Simpson, James E.
1999-01-01
An electrodeless lamp waveguide structure includes tuned absorbers for spurious RF signals. A lamp waveguide with an integral frequency selective attenuation includes resonant absorbers positioned within the waveguide to absorb spurious out-of-band RF energy. The absorbers have a negligible effect on energy at the selected frequency used to excite plasma in the lamp. In a first embodiment, one or more thin slabs of lossy magnetic material are affixed to the sidewalls of the waveguide at approximately one quarter wavelength of the spurious signal from an end wall of the waveguide. The positioning of the lossy material optimizes absorption of power from the spurious signal. In a second embodiment, one or more thin slabs of lossy magnetic material are used in conjunction with band rejection waveguide filter elements. In a third embodiment, one or more microstrip filter elements are tuned to the frequency of the spurious signal and positioned within the waveguide to couple and absorb the spurious signal's energy. All three embodiments absorb negligible energy at the selected frequency and so do not significantly diminish the energy efficiency of the lamp.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Habel, R.; Letardi, T.
1963-10-30
In some studies with scintillation chambers, the problem arises of discriminating between events generated by one or more ionizing particles and spontaneous showers between the gaps of the chamber. One element of difference between the two events is the delay of the spurious scintillation with respect to that produced by the passage of a particle. The use of a fast shutter whose open time is of the order of this delay would provide a possible method for discriminating between true and spurious events. The experimental apparatus used, and the types of measurements made to determine whether such a shutter arrangement would be feasible, are described. (J.S.R.)
NASA Astrophysics Data System (ADS)
Giono, G.; Ishikawa, R.; Narukage, N.; Kano, R.; Katsukawa, Y.; Kubo, M.; Ishikawa, S.; Bando, T.; Hara, H.; Suematsu, Y.; Winebarger, A.; Kobayashi, K.; Auchère, F.; Trujillo Bueno, J.; Tsuneta, S.; Shimizu, T.; Sakao, T.; Cirtain, J.; Champey, P.; Asensio Ramos, A.; Štěpán, J.; Belluzzi, L.; Manso Sainz, R.; De Pontieu, B.; Ichimoto, K.; Carlsson, M.; Casini, R.; Goto, M.
2017-04-01
The Chromospheric Lyman-Alpha SpectroPolarimeter is a sounding rocket instrument designed to measure for the first time the linear polarization of the hydrogen Lyman-α line (121.6 nm). The instrument was successfully launched on 3 September 2015 and observations were conducted at the solar disc center and close to the limb during the five-minute flight. In this article, the disc center observations are used to provide an in-flight calibration of the instrument's spurious polarization. The derived in-flight spurious polarization is consistent with the spurious polarization levels determined during the pre-flight calibration, and a statistical analysis of the polarization fluctuations of solar origin is conducted to ensure a 0.014% precision on the spurious polarization. The combination of the pre-flight and in-flight polarization calibrations provides a complete picture of the instrument response matrix, and a proper error-transfer method is used to confirm the achieved polarization accuracy. As a result, the unprecedented 0.1% polarization accuracy of the instrument in the vacuum ultraviolet is ensured by the polarization calibration.
The spurious response of microwave photonic mixer
NASA Astrophysics Data System (ADS)
Xiao, Yongchuan; Zhong, Guoshun; Qu, Pengfei; Sun, Lijun
2018-02-01
The microwave photonic mixer is a potential solution for wideband information systems owing to its ultra-wide operating bandwidth, high LO-to-RF isolation, intrinsic immunity to electromagnetic interference, and compatibility with existing microwave photonic transmission systems. The spurious response of a microwave photonic mixer that cascades a pair of Mach-Zehnder interferometric intensity modulators in series has been simulated and analyzed in this paper. The low-order spurious products caused by the nonlinearity of the modulators are non-negligible, and a proper IF frequency and accurate bias control are of great importance for mitigating the impact of spurious products.
Method for analyzing the mass of a sample using a cold cathode ionization source mass filter
Felter, Thomas E.
2003-10-14
An improved quadrupole mass spectrometer is described. The improvement lies in replacing the conventional hot-filament electron source with a cold cathode field emitter array (FEA), which in turn allows operating a small QMS at much higher internal pressures than are currently achievable. Eliminating the hot filament avoids altogether such problems as thermally "cracking" delicate analyte molecules, outgassing of a hot filament, high power requirements, filament contamination by outgassed species, and spurious EM fields. In addition, the ability to produce FEAs using well-known and well-developed photolithographic techniques permits building a QMS having multiple redundant ionization sources at very low additional cost.
Ahmad, Azeem; Dubey, Vishesh; Singh, Gyanendra; Singh, Veena; Mehta, Dalip Singh
2016-04-01
In this Letter, we demonstrate quantitative phase imaging of biological samples, such as human red blood cells (RBCs) and onion cells, using a light source with a narrow temporal frequency spectrum and a wide angular frequency spectrum. This type of light source was synthesized by combining the spatial, angular, and temporal diversity of a speckle-reduction technique. The advantage of a light source with low spatial and high temporal coherence over broadband and narrowband sources is that it does not require any dispersion-compensation mechanism for biological samples. Further, it avoids the formation of speckle or spurious fringes, which arise when using a narrowband light source.
On pads and filters: Processing strong-motion data
Boore, D.M.
2005-01-01
Processing of strong-motion data in many cases can be as straightforward as filtering the acceleration time series and integrating to obtain velocity and displacement. To avoid the introduction of spurious low-frequency noise in quantities derived from the filtered accelerations, however, care must be taken to append zero pads of adequate length to the beginning and end of the segment of recorded data. These padded sections of the filtered acceleration need to be retained when deriving velocities, displacements, Fourier spectra, and response spectra. In addition, these padded and filtered sections should also be included in the time series used in the dynamic analysis of structures and soils to ensure compatibility with the filtered accelerations.
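A minimal sketch of the pad-filter-integrate workflow follows. The filter here is a simple frequency-domain Butterworth-style high-pass rather than the time-domain filters discussed in the paper, and the pad length and corner frequency are arbitrary illustrative values:

```python
import numpy as np

def pad_filter_integrate(acc, dt, fc=0.1, npad=500):
    """Zero-pad, acausally high-pass filter, then integrate twice.

    A minimal sketch of the workflow described above, not the paper's
    exact recipe: the filter is a frequency-domain Butterworth-style
    high-pass (order 4), and fc/npad are illustrative values. The pads
    are retained through both integrations so velocity and displacement
    stay compatible with the filtered acceleration.
    """
    a = np.concatenate([np.zeros(npad), np.asarray(acc, dtype=float), np.zeros(npad)])
    n = a.size
    f = np.fft.rfftfreq(n, dt)
    h = 1.0 / np.sqrt(1.0 + (fc / np.maximum(f, 1e-12)) ** 8)
    h[0] = 0.0                              # remove DC entirely
    a_f = np.fft.irfft(np.fft.rfft(a) * h, n)
    vel = np.cumsum(a_f) * dt               # velocity, pads retained
    disp = np.cumsum(vel) * dt              # displacement, pads retained
    return a_f, vel, disp
```

Because the pads give the acausal filter transient room to decay and the DC component is removed before integration, the velocity returns to zero at the end of the padded record instead of drifting, which is the compatibility property the paper emphasizes.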
NASA Astrophysics Data System (ADS)
Ngamga, Eulalie Joelle; Bialonski, Stephan; Marwan, Norbert; Kurths, Jürgen; Geier, Christian; Lehnertz, Klaus
2016-04-01
We investigate the suitability of selected measures of complexity based on recurrence quantification analysis and recurrence networks for an identification of pre-seizure states in multi-day, multi-channel, invasive electroencephalographic recordings from five epilepsy patients. We employ several statistical techniques to avoid spurious findings due to various influencing factors and due to multiple comparisons and observe precursory structures in three patients. Our findings indicate a high congruence among measures in identifying seizure precursors and emphasize the current notion of seizure generation in large-scale epileptic networks. A final judgment of the suitability for field studies, however, requires evaluation on a larger database.
Phase space flows for non-Hamiltonian systems with constraints
NASA Astrophysics Data System (ADS)
Sergi, Alessandro
2005-09-01
In this paper, non-Hamiltonian systems with holonomic constraints are treated by a generalization of Dirac’s formalism. Non-Hamiltonian phase space flows can be described by generalized antisymmetric brackets or by general Liouville operators which cannot be derived from brackets. Both situations are treated. In the first case, a Nosé-Dirac bracket is introduced as an example. In the second one, Dirac’s recipe for projecting out constrained variables from time translation operators is generalized and then applied to non-Hamiltonian linear response. Dirac’s formalism avoids spurious terms in the response function of constrained systems. However, corrections coming from phase space measure must be considered for general perturbations.
Independent bases on the spatial wavefunction of four-identical-particle systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiao, Shuyuan; Deng, Zhixuan; Chen, Hong
2013-12-15
We construct the independent bases on the spatial wavefunction of four-identical-particle systems classified under the rotational group SO(3) and the permutation group S4, using transformation coefficients that relate wavefunctions described in one set of internal coordinates to those in another. The basis functions for N ⩽ 2 are presented as explicit expressions based on the harmonic oscillator model. Such independent bases are expected to play a key role in the construction of the wavefunctions of five-quark states and in variational calculations of four-body systems. Our prescription avoids the spurious states and can be programmed for arbitrary N.
Parkes radio science system design and testing for Voyager Neptune encounter
NASA Technical Reports Server (NTRS)
Rebold, T. A.; Weese, J. F.
1989-01-01
The Radio Science System installed at Parkes, Australia for the Voyager Neptune encounter was specified to meet the same stringent requirements that were imposed upon the Deep Space Network Radio Science System. The system design and test methodology employed to meet these requirements at Parkes are described, and data showing the measured performance of the system are presented. The results indicate that the system operates with a comfortable margin on the requirements. There was a minor problem with frequency-dependent spurious signals which could not be fixed before the encounter. Test results characterizing these spurious signals are included.
NASA Astrophysics Data System (ADS)
Nemati, Maedeh; Shateri Najaf Abady, Ali Reza; Toghraie, Davood; Karimipour, Arash
2018-01-01
The incorporation of different equations of state into a single-component multiphase lattice Boltzmann model is considered in this paper. The original pseudopotential model is first detailed, and several cubic equations of state (Redlich-Kwong, Redlich-Kwong-Soave, and Peng-Robinson) are then incorporated into the lattice Boltzmann model. A comparison of the numerical results in terms of density ratios and spurious currents is used to present the details of phase separation in these non-ideal single-component systems. The paper demonstrates that the scheme chosen for the inter-particle interaction force term, as well as the method of incorporating the force term, matters for achieving accurate and stable results. Among the available force-incorporation methods, the velocity-shifting method is shown to give accurate and stable results. The Kupershtokh scheme also makes it possible to achieve large density ratios (up to 10^4) and to reproduce the coexistence curve with high accuracy. A significant reduction of the spurious currents at the vapor-liquid interface is another observation. The highest density ratios and the largest reduction of spurious currents resulted from the Redlich-Kwong-Soave and Peng-Robinson EOSs, in closer accordance with the Maxwell-construction results.
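The liquid-vapor separation driven by such cubic EOSs rests on the sub-critical van der Waals loop. The sketch below evaluates the standard Peng-Robinson pressure (textbook constants in SI units, not the paper's lattice units) and checks for that loop:

```python
import numpy as np

R = 8.314  # J/(mol K), universal gas constant

def peng_robinson_pressure(v, T, Tc, Pc, omega):
    """Peng-Robinson pressure as a function of molar volume v.

    Standard PR constants. In pseudopotential LBM this EOS enters
    through the interaction force, but the sub-critical van der Waals
    loop shown here is what drives liquid-vapor phase separation.
    """
    a = 0.45724 * R**2 * Tc**2 / Pc
    b = 0.07780 * R * Tc / Pc
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc))) ** 2
    return R * T / (v - b) - a * alpha / (v**2 + 2.0 * b * v - b**2)
```

Below the critical temperature the isotherm is non-monotonic (the loop), so the Maxwell construction fixes the coexisting liquid and vapor densities; above it the isotherm decreases monotonically and no phase separation occurs. The water-like critical constants used in the check below are illustrative.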
Application of nomographs for analysis and prediction of receiver spurious response EMI
NASA Astrophysics Data System (ADS)
Heather, F. W.
1985-07-01
Spurious response EMI for the front end of a superheterodyne receiver follows a simple mathematical formula; however, applying the formula to predict test frequencies produces more data than can be evaluated. An analysis technique has been developed to graphically depict all receiver spurious responses using a nomograph and to permit selection of optimum test frequencies. The discussion includes the math model used to simulate a superheterodyne receiver, the implementation of the model in the computer program, the approach to test frequency selection, interpretation of the nomographs, analysis and prediction of receiver spurious response EMI from the nomographs, and application of the nomographs. In addition, figures are provided of sample applications. This EMI analysis and prediction technique greatly improves the Electromagnetic Compatibility (EMC) test engineer's ability to visualize the scope of receiver spurious response EMI testing and to optimize test frequency selection.
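The "simple mathematical formula" referred to is the standard mixer spur relation: an input at f_RF reaches the IF passband whenever |m·f_RF − n·f_LO| = f_IF for small integers m, n. A minimal enumeration of candidate spur frequencies (a generic sketch, not the paper's actual program, which additionally builds nomographs from this data) might look like:

```python
# Enumerate RF frequencies satisfying |m*f_RF - n*f_LO| = f_IF,
# i.e. solve f_RF = (n*f_LO +/- f_IF) / m for each harmonic pair.
def spurious_responses(f_lo, f_if, max_order=5):
    """Return candidate (f_RF in MHz, m, n) tuples that mix into the IF."""
    spurs = []
    for m in range(1, max_order + 1):
        for n in range(1, max_order + 1):
            for sign in (+1, -1):
                f_rf = (n * f_lo + sign * f_if) / m
                if f_rf > 0:
                    spurs.append((round(f_rf, 3), m, n))
    return sorted(set(spurs))

# Example: receiver with f_LO = 100 MHz, f_IF = 10.7 MHz. The (m=1, n=1)
# entries are the desired response (110.7 MHz) and the image (89.3 MHz);
# every other entry is a spurious response to be considered for testing.
for f_rf, m, n in spurious_responses(100.0, 10.7, max_order=3):
    print(f"f_RF = {f_rf:8.3f} MHz  (m={m}, n={n})")
```

In a real EMI prediction each candidate would also be weighted by the mixer's conversion loss for that (m, n) order, which falls off rapidly with harmonic order; the enumeration above only generates the frequency list.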
Cancellation of spurious arrivals in Green's function extraction and the generalized optical theorem
Snieder, R.; Van Wijk, K.; Haney, M.; Calvert, R.
2008-01-01
The extraction of the Green's function by cross correlation of waves recorded at two receivers now finds wide application. We show that for an arbitrary small scatterer, the cross terms of scattered waves give an unphysical wave with an arrival time that is independent of the source position. This constitutes an apparent inconsistency because theory predicts that such spurious arrivals do not arise after integration over a complete source aperture. This puzzling inconsistency can be resolved for an arbitrary scatterer by integrating the contribution of all sources in the stationary phase approximation to show that the stationary phase contributions to the source integral cancel the spurious arrival by virtue of the generalized optical theorem. This work constitutes an alternative derivation of that theorem. When the source aperture is incomplete, the spurious arrival is not canceled and could be misinterpreted as part of the Green's function. We give an example of how spurious arrivals provide information about the medium complementary to that given by the direct and scattered waves; the spurious waves can thus potentially be used to better constrain the medium. © 2008 The American Physical Society.
Current Scenario of Spurious and Substandard Medicines in India: A Systematic Review
Khan, A. N.; Khar, R. K.
2015-01-01
Globally, every country is a victim of substandard or spurious drugs, which result in life-threatening issues, financial loss for consumers and manufacturers, and loss of trust in the health system. The aim of this enumerative review was to probe the extent of poor-quality drugs, their consequences for public health, and the preventive measures taken by the Indian pharmaceutical regulatory system. Government and non-government studies, literature and news were gathered from journals and authentic websites. All data from 2000 to 2013 were compiled and interpreted to reveal the real story of poor-quality drugs in India. To minimize spurious/falsely-labelled/falsified/counterfeit drugs or not-of-standard-quality drugs, there is an urgent requirement for more stringent regulation and legal action against the problem. However, India has taken some preventive steps to fight against poor-quality drugs in order to protect and promote public health. PMID:25767312
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piepel, Gregory F.
2013-08-01
This article discusses the paper "Experimental Design for Engineering Dimensional Analysis" by Albrecht et al. (2013, Technometrics). That paper provides an overview of engineering dimensional analysis (DA) for use in developing DA models. The paper proposes methods for generating model-robust experimental designs to support fitting DA models. The specific approach is to develop a design that maximizes the efficiency of a specified empirical model (EM) in the original independent variables, subject to a minimum efficiency for a DA model expressed in terms of dimensionless groups (DGs). This discussion article raises several issues and makes recommendations regarding the proposed approach. The concept of spurious correlation is also raised and discussed. Spurious correlation results from the response DG being calculated using several independent variables that are also used to calculate predictor DGs in the DA model.
Apodization of spurs in radar receivers using multi-channel processing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doerry, Armin W.; Bickel, Douglas L.
The various technologies presented herein relate to identification and mitigation of spurious energies or signals (aka "spurs") in radar imaging. Spurious energy in received radar data can be a consequence of non-ideal component and circuit behavior. Such behavior can result from I/Q imbalance, nonlinear component behavior, additive interference (e.g., cross-talk), etc. The manifestation of the spurious energy in a radar image (e.g., a range-Doppler map) can be influenced by appropriate pulse-to-pulse phase modulation. Comparing multiple images which have been processed using the same data but different signal paths and modulations enables identification of undesired spurs, with subsequent cropping or apodization of the undesired spurs from a radar image. Spurs can be identified by comparison with a threshold energy. Removal of an undesired spur enables enhanced identification of true targets in a radar image.
NASA Astrophysics Data System (ADS)
Ji, Xiaojun; Xiao, Qiang; Chen, Jing; Wang, Hualei; Omori, Tatsuya; Changjun, Ahn
2017-05-01
The propagation characteristics of surface acoustic waves (SAWs) on rotated Y-cut X-propagating 0.67Pb(Mg1/3Nb2/3)O3-0.33PbTiO3 (PMN-33%PT) substrate are theoretically analyzed. It is shown that besides a shear horizontal (SH) SAW with an ultrahigh electromechanical coupling factor K2, a Rayleigh SAW also exists, causing a strong spurious response. The calculated results show that the spurious Rayleigh SAW can be sufficiently suppressed by properly selecting the electrode material and its thickness together with an optimized rotation angle, while maintaining the large K2 of the SH SAW. A fractional -3 dB bandwidth of 47% is achievable for a ladder-type filter constructed from Au IDT/48°YX-PMN-33%PT resonators.
Global Warming Estimation From Microwave Sounding Unit
NASA Technical Reports Server (NTRS)
Prabhakara, C.; Iacovazzi, R., Jr.; Yoo, J.-M.; Dalu, G.
1998-01-01
Microwave Sounding Unit (MSU) Ch 2 data sets, collected from sequential, polar-orbiting, Sun-synchronous National Oceanic and Atmospheric Administration operational satellites, contain systematic calibration errors that are coupled to the diurnal temperature cycle over the globe. Since these coupled errors in MSU data differ between successive satellites, it is necessary to make compensatory adjustments to these multisatellite data sets in order to determine long-term global temperature change. With the aid of the observations during overlapping periods of successive satellites, we can determine such adjustments and use them to account for the coupled errors in the long-term time series of MSU Ch 2 global temperature. In turn, these adjusted MSU Ch 2 data sets can be used to yield the global temperature trend. In a pioneering study, Spencer and Christy (SC) (1990) developed a procedure to derive the global temperature trend from MSU Ch 2 data; in their procedure the magnitude of the coupled errors is not determined explicitly, and instead, based on some assumptions, these coupled errors are eliminated in three separate steps. Such a procedure can leave unaccounted residual errors in the time series of the temperature anomalies deduced by SC, which could lead to a spurious long-term temperature trend derived from their analysis. In the present study, we have developed a method that avoids the shortcomings of the SC procedure. Based on our analysis, we find there is a global warming of 0.23+/-0.12 K between 1980 and 1991.
Also, in this study, the time series of global temperature anomalies constructed by removing the global mean annual temperature cycle compares favorably with a similar time series obtained from conventional observations of temperature.
Lahey, Benjamin B; Van Hulle, Carol A; D'Onofrio, Brian M; Rodgers, Joseph Lee; Waldman, Irwin D
2008-08-01
Recent studies suggest that most of what parents know about their adolescent offspring's whereabouts and companions is the result of youth disclosure, rather than information gained through active parental monitoring. This raises the possibility that parental knowledge is spuriously correlated with youth delinquency solely because the most delinquent youth disclose the least information to parents (because they have the most to hide). We tested this spurious association hypothesis using prospective data on offspring of a nationally representative sample of US women, controlling for demographic and contextual covariates. In separate analyses, greater parental knowledge of their offspring's peer associations at both 12-13 years and at 14-15 years was associated with lower odds of being in the top 1 standard deviation of youth-reported delinquency at 16-17 years, controlling for delinquency at the earlier ages. The extent to which parents set limits on activities with peers at 14-15 years did not mediate or moderate the association between parental knowledge and delinquency, but it did independently predict future delinquency among adolescents living in high-risk neighborhoods. This suggests that the association between parental knowledge and future delinquency is not solely spurious; rather, parental knowledge and limit setting are both meaningful predictors of future delinquency.
Godon, Alban; Genevieve, Franck; Marteau-Tessier, Anne; Zandecki, Marc
2012-01-01
Several situations lead to abnormal haemoglobin measurement or to abnormal red blood cell (RBC) counts, including hyperlipaemia, agglutinins and cryoglobulins, haemolysis, or elevated white blood cell (WBC) counts. Mean (red) cell volume may also be subject to spurious determination, because of agglutinins (mainly cold), high blood glucose levels, natraemia, excess anticoagulant and, at times, technological considerations. An abnormality in one measured parameter eventually leads to abnormal calculated RBC indices: mean cell haemoglobin content is certainly the most important RBC parameter to consider, perhaps as important as the flags generated by the haematology analysers (HA) themselves. In many circumstances, several of the measured parameters from the complete blood count (CBC) may be altered, and the discovery of a spurious change in one parameter frequently means that the validity of the other parameters should be considered. Sensitive flags now allow the identification of several spurious counts, but only the most sophisticated HA have optimal flagging, and simpler ones, especially those without any WBC differential scattergram, do not share the same capacity to detect abnormal results. Reticulocytes are integrated into the CBC in many HA, and several situations may lead to abnormal counts, including abnormal gating, interference with intraerythrocytic particles, erythroblastosis or high WBC counts.
NASA Astrophysics Data System (ADS)
Awai, Ikuo
A new comprehensive method to suppress the spurious modes in a BPF is proposed, taking a multi-strip resonator BPF as an example. It consists of simultaneously perturbing the resonant frequency, coupling coefficient and external Q of the higher-order modes. The designed example has shown an extraordinarily good out-of-band response in computer simulation.
Spurious RF signals emitted by mini-UAVs
NASA Astrophysics Data System (ADS)
Schleijpen, Ric (H. M. A.); Voogt, Vincent; Zwamborn, Peter; van den Oever, Jaap
2016-10-01
This paper presents experimental work on the detection of spurious RF emissions of mini unmanned aerial vehicles (mini-UAVs). Many recent events have shown that mini-UAVs can be considered a potential threat to civil security. For this reason the detection of mini-UAVs has become of interest to the sensor community. The detection, classification and identification chain can take advantage of different sensor technologies. Apart from the signatures used by radar and electro-optical sensor systems, the UAV also emits RF signals. These RF signatures can be split into intentional signals for communication with the operator and unintentional RF signals emitted by the UAV. These unintentional or spurious RF emissions are very weak but could be used to discriminate potential UAV detections from false alarms. The goal of this research was to assess the potential of exploiting spurious emissions in the classification and identification chain for mini-UAVs. It was already known that spurious signals are very weak, but the focus was on the question whether the emission pattern could be correlated with the behaviour of the UAV. In this paper, experimental examples of spurious RF emission for different types of mini-UAVs and their correlation with the electronic circuits in the UAVs are shown.
Godolphin, W; Cameron, E C; Frohlich, J; Price, J D
1979-02-01
Patients on long-term hemodialysis via arteriovenous fistula received heparin when the fistula needle was inserted, before a sample of blood was obtained for chemical analysis. The resultant release of lipoprotein lipase activity in vivo and continued lipolytic activity in vitro sometimes produced sufficient free fatty acid to precipitate calcium soaps. The consequent spurious hypocalcemia was most frequently observed when the patients had chylomicronemia. This cause of apparent hypocalcemia was eliminated either by immediate analyses of the blood samples or by obtaining samples before systemic heparinization.
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sweby, P. K.; Griffiths, D. F.
1990-01-01
Spurious stable as well as unstable steady state numerical solutions, spurious asymptotic numerical solutions of higher period, and even stable chaotic behavior can occur when finite difference methods are used to solve nonlinear differential equations (DE) numerically. The occurrence of spurious asymptotes is independent of whether the DE possesses a unique steady state or has additional periodic solutions and/or exhibits chaotic phenomena. The form of the nonlinear DEs and the type of numerical schemes are the determining factor. In addition, the occurrence of spurious steady states is not restricted to the time steps that are beyond the linearized stability limit of the scheme. In many instances, it can occur below the linearized stability limit. Therefore, it is essential for practitioners in computational sciences to be knowledgeable about the dynamical behavior of finite difference methods for nonlinear scalar DEs before the actual application of these methods to practical computations. It is also important to change the traditional way of thinking and practices when dealing with genuinely nonlinear problems. In the past, spurious asymptotes were observed in numerical computations but tended to be ignored because they all were assumed to lie beyond the linearized stability limits of the time step parameter delta t. As can be seen from the study, bifurcations to and from spurious asymptotic solutions and transitions to computational instability not only are highly scheme dependent and problem dependent, but also initial data and boundary condition dependent, and not limited to time steps that are beyond the linearized stability limit.
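A minimal illustration of the phenomenon described above is explicit Euler applied to the logistic ODE u' = u(1 − u), a standard textbook model problem (not necessarily the authors' own test case). The only stable steady state of the ODE is u = 1, yet for large time steps the discrete map u_{n+1} = u_n + Δt·u_n(1 − u_n) bifurcates to a spurious period-2 orbit while its iterates remain bounded:

```python
# Explicit Euler for u' = u * (1 - u). The discrete map is conjugate to the
# logistic map with r = 1 + dt, so the true fixed point u = 1 loses
# stability at dt = 2 and a spurious period-2 cycle takes over.
def euler_logistic(u0, dt, nsteps=2000):
    u = u0
    for _ in range(nsteps):
        u = u + dt * u * (1.0 - u)
    return u

# Below the threshold dt = 2 the scheme converges to the true asymptote:
print(euler_logistic(0.3, dt=0.5))   # ~1.0

# Above it, the iterates settle on a spurious period-2 cycle: two
# successive iterates differ, but each returns after two steps.
a = euler_logistic(0.3, dt=2.3, nsteps=4000)
b = a + 2.3 * a * (1.0 - a)
print(a, b)                          # two distinct values straddling 1.0
```

Note that the spurious cycle appears even though the iterates never blow up, which is exactly why such asymptotes were historically assumed to lie beyond the linearized stability limit and were ignored.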
NASA Technical Reports Server (NTRS)
Xue, W.-M.; Atluri, S. N.
1985-01-01
In this paper, all possible forms of mixed-hybrid finite element methods that are based on multi-field variational principles are examined as to the conditions for existence, stability, and uniqueness of their solutions. The reasons as to why certain 'simplified hybrid-mixed methods' in general, and the so-called 'simplified hybrid-displacement method' in particular (based on the so-called simplified variational principles), become unstable, are discussed. A comprehensive discussion of the 'discrete' BB-conditions, and the rank conditions, of the matrices arising in mixed-hybrid methods, is given. Some recent studies aimed at the assurance of such rank conditions, and the related problem of the avoidance of spurious kinematic modes, are presented.
Low Magnetic Fields in White Dwarfs and their Direct Progenitors?
NASA Astrophysics Data System (ADS)
Jordan, S.; Bagnulo, S.; Landstreet, J.; Fossati, L.; Valyavin, G. G.; Monin, D.; Wade, G. A.; Werner, K.; O'Toole, S. J.
2013-01-01
We have carried out a re-analysis of polarimetric data of central stars of planetary nebulae, hot subdwarfs, and white dwarfs taken with FORS1 (FOcal Reducer and low dispersion Spectrograph) on the VLT (Very Large Telescope), and added a large number of new observations in order to increase the sample. A careful analysis of the observations using only one wavelength calibration for the polarimetrically analysed spectra and for all positions of the retarder plate of the spectrograph is crucial in order to avoid spurious signals. We find that the previous detections of magnetic fields in subdwarfs and central stars could not be confirmed while about 10% of the observed white dwarfs have magnetic fields at the kilogauss level.
Issues and strategies in the DNA identification of World Trade Center victims.
Brenner, C H; Weir, B S
2003-05-01
Identification of the nearly 3000 victims of the World Trade Center attack, represented by about 15,000 body parts, rests heavily on DNA. Reference DNA profiles are often from relatives rather than from the deceased themselves. With so large a set of victims, coincidental similarities between non-relatives abound. Therefore considerable care is necessary to succeed in correlating references with correct victims while avoiding spurious assignments. Typically multiple relatives are necessary to establish the identity of a victim. We describe a 3-stage paradigm--collapse, screen, test--to organize the work of sorting out the identities. Inter alia we present a simple and general formula for the likelihood ratio governing practically any potential relationship between two DNA profiles.
NASA Technical Reports Server (NTRS)
Rapp, R.
1999-01-01
An expansion of a function initially given in 1deg cells was carried out to degree 360 by using 30' cells whose value was initially assigned to be the value of the 1deg cell in which it fell. The evaluation of point values of the function from the degree 360 expansion revealed spurious patterns attributed to the coefficients from degree 181 to 360. Expansion of the original function in 1deg cells to degree 180 showed no problems in the point evaluation. Mean 1deg values computed from both the degree 180 and degree 360 expansions showed close agreement with the original function. The artifacts could be removed if the 30' values were interpolated by spline procedures from adjacent 1deg cells. These results led to an examination of the gravity anomalies and geoid undulations from EGM96 in areas where 1deg values were "split up" to form 30' cells. The area considered was 75degS to 85degS, 100degE to 120degE, where the split-up cells were basically south of 81degS. A small, latitude-related, and possibly spurious effect might be detectable in anomaly variations in the region. These results suggest that point values of a function computed from a high degree expansion may have spurious signals unless the cell size is compatible with the maximum degree of expansion. The spurious signals could be eliminated by using a spline interpolation procedure to obtain the 30' values from the 1deg values.
Bandpass mismatch error for satellite CMB experiments I: estimating the spurious signal
NASA Astrophysics Data System (ADS)
Thuong Hoang, Duc; Patanchon, Guillaume; Bucher, Martin; Matsumura, Tomotake; Banerji, Ranajoy; Ishino, Hirokazu; Hazumi, Masashi; Delabrouille, Jacques
2017-12-01
Future Cosmic Microwave Background (CMB) satellite missions aim to use the B mode polarization to measure the tensor-to-scalar ratio r with a sensitivity σ_r ≲ 10^-3. Achieving this goal will not only require sufficient detector array sensitivity but also unprecedented control of all systematic errors inherent in CMB polarization measurements. Since polarization measurements derive from differences between observations at different times and from different sensors, detector response mismatches introduce leakages from intensity to polarization and thus lead to a spurious B mode signal. Because the expected primordial B mode polarization signal is dwarfed by the known unpolarized intensity signal, such leakages could contribute substantially to the final error budget for measuring r. Using simulations we estimate the magnitude and angular spectrum of the spurious B mode signal resulting from bandpass mismatch between different detectors. It is assumed here that the detectors are calibrated, for example using the CMB dipole, so that their sensitivity to the primordial CMB signal has been perfectly matched. Consequently the mismatch in the frequency bandpass shape between detectors introduces differences in the relative calibration of galactic emission components. We simulate this effect using a range of scanning patterns being considered for future satellite missions. We find that the spurious contribution to r from the reionization bump on large angular scales (l < 10) is ≈ 10^-3 assuming large detector arrays and 20 percent of the sky masked. We show how the amplitude of the leakage depends on the nonuniformity of the angular coverage in each pixel that results from the scan pattern.
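The leakage mechanism reduces to simple arithmetic, shown here with illustrative numbers only (not the paper's simulation): two detectors calibrated to identical CMB gain still have slightly different gains on a foreground whose spectrum differs from the CMB's, so differencing them to form polarization leaks unpolarized foreground intensity into the polarization channel.

```python
# Toy numeric sketch of bandpass-mismatch intensity-to-polarization leakage.
cmb_intensity = 100.0          # unpolarized CMB signal (arbitrary units)
dust_intensity = 20.0          # unpolarized galactic foreground

gain_cmb_A, gain_cmb_B = 1.0, 1.0        # matched by dipole calibration
gain_dust_A, gain_dust_B = 1.00, 1.02    # 2% bandpass mismatch on dust

signal_A = gain_cmb_A * cmb_intensity + gain_dust_A * dust_intensity
signal_B = gain_cmb_B * cmb_intensity + gain_dust_B * dust_intensity

# The CMB cancels exactly in the detector difference, the foreground does not:
spurious_polarization = 0.5 * (signal_A - signal_B)
print(spurious_polarization)   # -0.2: pure intensity leaked into polarization
```

The paper's simulations essentially propagate this per-detector residual through realistic scan patterns and sky models to obtain its angular power spectrum and contribution to r.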
NASA Astrophysics Data System (ADS)
Walrand, Stephan; Hesse, Michel; Jamar, François; Lhommel, Renaud
2018-04-01
Our literature survey revealed a physical effect unknown to the nuclear medicine community, i.e. internal bremsstrahlung emission, and also the existence of long energy resolution tails in crystal scintillation. Neither of these effects has ever been modelled in PET Monte Carlo (MC) simulations. This study investigates whether these two effects could be at the origin of two unexplained observations in 90Y imaging by PET: the increasing tails in the radial profile of true coincidences, and the presence of spurious extrahepatic counts post radioembolization in non-TOF PET and their absence in TOF PET. These spurious extrahepatic counts hamper the microsphere delivery check in liver radioembolization. An acquisition of a 32P vial was performed on a GSO PET system. This is the ideal setup to study the impact of bremsstrahlung x-rays on the true coincidence rate when no positron emission and no crystal radioactivity are present. A MC simulation of the acquisition was performed using Gate-Geant4. MC simulations of non-TOF PET and TOF-PET imaging of a synthetic 90Y human liver radioembolization phantom were also performed. Including internal bremsstrahlung and long energy resolution tails in the MC simulations quantitatively predicts the increasing tails in the radial profile. In addition, internal bremsstrahlung explains the discrepancy previously observed in bremsstrahlung SPECT between the measured 90Y bremsstrahlung spectrum and its simulation with Gate-Geant4. However, the spurious extrahepatic counts in non-TOF PET mainly result from the failure of conventional random correction methods in such low count rate studies and from their poor robustness against emission-transmission inconsistency. A novel proposed random correction method succeeds in cleaning the spurious extrahepatic counts in non-TOF PET. Two physical effects not considered up to now in nuclear medicine were identified to be at the origin of the unusual 90Y true coincidence radial profile.
The removal of the spurious extrahepatic counts by TOF reconstruction was theoretically explained by its better robustness against emission-transmission inconsistency. A novel random correction method was proposed to overcome the issue in non-TOF PET. Further studies are needed to assess the robustness of the novel random correction method.
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sweby, P. K.; Griffiths, D. F.
1991-01-01
Spurious stable as well as unstable steady state numerical solutions, spurious asymptotic numerical solutions of higher period, and even stable chaotic behavior can occur when finite difference methods are used to solve nonlinear differential equations (DE) numerically. The occurrence of spurious asymptotes is independent of whether the DE possesses a unique steady state or has additional periodic solutions and/or exhibits chaotic phenomena. The form of the nonlinear DEs and the type of numerical schemes are the determining factor. In addition, the occurrence of spurious steady states is not restricted to the time steps that are beyond the linearized stability limit of the scheme. In many instances, it can occur below the linearized stability limit. Therefore, it is essential for practitioners in computational sciences to be knowledgeable about the dynamical behavior of finite difference methods for nonlinear scalar DEs before the actual application of these methods to practical computations. It is also important to change the traditional way of thinking and practices when dealing with genuinely nonlinear problems. In the past, spurious asymptotes were observed in numerical computations but tended to be ignored because they all were assumed to lie beyond the linearized stability limits of the time step parameter delta t. As can be seen from the study, bifurcations to and from spurious asymptotic solutions and transitions to computational instability not only are highly scheme dependent and problem dependent, but also initial data and boundary condition dependent, and not limited to time steps that are beyond the linearized stability limit.
Relationship between sampling volume of primary serum tubes and spurious hemolysis.
Lippi, Giuseppe; Musa, Roberta; Battistelli, Luisita; Cervellin, Gianfranco
2012-01-01
We planned a study to establish whether spurious hemolysis may be present in low volume tubes or partially filled tubes. Four serum tubes were collected in sequence from 20 healthy volunteers: 4.0 mL, 13 x 75 mm (discard tube); 6.0 mL, 13 x 100 mm half-filled; 4.0 mL, 13 x 75 mm full-draw; and 6.0 mL, 13 x 100 mm full-draw. Serum was separated and immediately tested for hemolysis index (HI), potassium, aspartate aminotransferase (AST), and lactate dehydrogenase (LDH). The HI always remained below the limit of detection of the method (< 0.5 g/L) in all tubes. No statistically significant differences were recorded in any parameter except potassium, which increased by 0.10 mmol/L in 4 mL full-draw tubes. However, no clinically significant variation was recorded in any tube. The results suggest that all types of tubes tested might be used interchangeably in terms of the risk of spurious hemolysis.
Fine-scale patterns of population stratification confound rare variant association tests.
O'Connor, Timothy D; Kiezun, Adam; Bamshad, Michael; Rich, Stephen S; Smith, Joshua D; Turner, Emily; Leal, Suzanne M; Akey, Joshua M
2013-01-01
Advances in next-generation sequencing technology have enabled systematic exploration of the contribution of rare variation to Mendelian and complex diseases. Although it is well known that population stratification can generate spurious associations with common alleles, its impact on rare variant association methods remains poorly understood. Here, we performed exhaustive coalescent simulations with demographic parameters calibrated from exome sequence data to evaluate the performance of nine rare variant association methods in the presence of fine-scale population structure. We find that all methods have an inflated spurious association rate for parameter values that are consistent with levels of differentiation typical of European populations. For example, at a nominal significance level of 5%, some test statistics have a spurious association rate as high as 40%. Finally, we empirically assess the impact of population stratification in a large data set of 4,298 European American exomes. Our results have important implications for the design, analysis, and interpretation of rare variant genome-wide association studies.
Monochromatic radio frequency accelerating cavity
Giordano, S.
1984-02-09
A radio frequency resonant cavity having a fundamental resonant frequency and characterized by being free of spurious modes. A plurality of spaced electrically conductive bars are arranged in a generally cylindrical array within the cavity to define a chamber between the bars and an outer solid cylindrically shaped wall of the cavity. A first and second plurality of mode perturbing rods are mounted in two groups at determined random locations to extend radially and axially into the cavity thereby to perturb spurious modes and cause their fields to extend through passageways between the bars and into the chamber. At least one body of lossy material is disposed within the chamber to damp all spurious modes that do extend into the chamber thereby enabling the cavity to operate free of undesired spurious modes.
Monochromatic radio frequency accelerating cavity
Giordano, Salvatore
1985-01-01
A radio frequency resonant cavity having a fundamental resonant frequency and characterized by being free of spurious modes. A plurality of spaced electrically conductive bars are arranged in a generally cylindrical array within the cavity to define a chamber between the bars and an outer solid cylindrically shaped wall of the cavity. A first and second plurality of mode perturbing rods are mounted in two groups at determined random locations to extend radially and axially into the cavity thereby to perturb spurious modes and cause their fields to extend through passageways between the bars and into the chamber. At least one body of lossy material is disposed within the chamber to damp all spurious modes that do extend into the chamber thereby enabling the cavity to operate free of undesired spurious modes.
Pewarchuk, W; VanderBoom, J; Blajchman, M A
1992-01-01
A patient blood sample with an unexpectedly high hemoglobin level, high hematocrit, low white blood cell count, and low platelet count was recognized as being spurious based on previously available data. Repeated testing of the original sample showed a gradual return of all parameters to expected levels. We provide evidence that the overfilling of blood collection vacuum tubes can lead to inadequate sample mixing and that, in combination with the settling of the cellular contents in the collection tubes, can result in spuriously abnormal hematological parameters as estimated by an automated method.
The critical angle in seismic interferometry
Van Wijk, K.; Calvert, A.; Haney, M.; Mikesell, D.; Snieder, R.
2008-01-01
Limitations with respect to the characteristics and distribution of sources are inherent to any field seismic experiment, but in seismic interferometry these lead to spurious waves. Instead of trying to eliminate, filter or otherwise suppress spurious waves, crosscorrelation of receivers in a refraction experiment indicates that we can take advantage of spurious events for near-surface parameter extraction for static corrections or near-surface imaging. We illustrate this with numerical examples and a field experiment from the CSM/Boise State University Geophysics Field Camp.
NASA Astrophysics Data System (ADS)
Yang, Xueming; Wu, Sihan; Xu, Jiangxin; Cao, Bingyang; To, Albert C.
2018-02-01
Although the AIREBO potential can describe well the mechanical and thermal transport properties of carbon nanostructures under normal conditions, previous studies have shown that it may overestimate the simulated mechanical properties of carbon nanostructures at extreme strains near fracture. It was still unknown whether such overestimation would also appear in the thermal transport of nanostructures. In this paper, the mechanical and thermal transport properties of graphene nanoribbons (GNRs) under extreme deformation conditions are studied by MD simulations using both the original and a modified AIREBO potential. Results show that the cutoff function of the original AIREBO potential produces an overestimation of thermal conductivity at extreme strains near the fracture stage. Spurious heat conduction behavior appears: the thermal conductivity of GNRs does not monotonically decrease with increasing strain, and even shows a "V"-shaped, reversed, nonphysical trend. Phonon spectrum analysis shows that it also results in an artificial blue shift of the G peak and phonon stiffening of the optical phonon modes. The correlation between the spurious heat conduction behavior and the overestimation of mechanical properties near the fracture stage caused by the original AIREBO potential is explored and revealed.
Normal-inverse bimodule operation Hadamard transform ion mobility spectrometry.
Hong, Yan; Huang, Chaoqun; Liu, Sheng; Xia, Lei; Shen, Chengyin; Chu, Yannan
2018-10-31
In order to suppress or eliminate spurious peaks and improve the signal-to-noise ratio (SNR) of Hadamard transform ion mobility spectrometry (HT-IMS), a normal-inverse bimodule operation Hadamard transform ion mobility spectrometry (NIBOHT-IMS) technique was developed. In this novel technique, a normal and an inverse pseudo-random binary sequence (PRBS) were produced in sequential order by an ion gate controller and used to control the ion gate of the IMS, and the normal HT-IMS mobility spectrum and the inverse HT-IMS mobility spectrum were then obtained. A NIBOHT-IMS mobility spectrum was obtained by subtracting the inverse HT-IMS mobility spectrum from the normal HT-IMS mobility spectrum. Experimental results on the reactant ions demonstrate that the NIBOHT-IMS technique can significantly suppress or eliminate the spurious peaks and enhance the SNR. Furthermore, the gases CHCl3 and CH2Br2 were measured to evaluate the capability of detecting real samples. The results show that the NIBOHT-IMS technique is able to eliminate the spurious peaks and improve the SNR notably, not only for the detection of large ion signals but also for the detection of small ion signals. Copyright © 2018 Elsevier B.V. All rights reserved.
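The normal/inverse subtraction at the heart of NIBOHT-IMS can be sketched numerically. This is a toy model, not the authors' instrument code: the 7-element "spectrum", the baseline value, and the S-matrix construction from a Sylvester Hadamard matrix are illustrative assumptions. The point it demonstrates is that subtracting the complementary-sequence measurement cancels a common-mode artifact that would otherwise corrupt the demultiplexed spectrum.

```python
import numpy as np

# Build an 8x8 Hadamard matrix by the Sylvester construction,
# then a 7x7 S-matrix (entries in {0,1}) as the gating pattern.
H = np.array([[1]])
for _ in range(3):
    H = np.block([[H, H], [H, -H]])
S = (1 - H[1:, 1:]) // 2          # "normal" PRBS gate pattern; 1 = gate open

x = np.array([0.0, 3.0, 0.0, 1.0, 0.0, 0.0, 0.5])  # toy mobility spectrum

baseline = 2.7                     # common-mode artifact in every gate state
y_normal = S @ x + baseline        # measurement with the normal sequence
y_inverse = (1 - S) @ x + baseline # measurement with the inverted sequence

# Subtraction cancels the baseline: y_normal - y_inverse = (2S - J) @ x,
# and 2S - J is invertible, so the spectrum is recovered artifact-free.
d = y_normal - y_inverse
x_rec = np.linalg.solve(2 * S - np.ones_like(S), d)
```

Recovering the spectrum from `y_normal` alone would require knowing the baseline; the complementary measurement removes it without that knowledge, which is the motivation for the bimodule scheme.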
Spurious Excitations in Semiclassical Scattering Theory.
ERIC Educational Resources Information Center
Gross, D. H. E.; And Others
1980-01-01
Shows how spurious excitation terms are eliminated through proper handling of the nonuniform motion of semiclassical coordinates. An application to the problem of nuclear Coulomb excitation is presented as an example. (HM)
Impact of spurious shear on cosmological parameter estimates from weak lensing observables
Petri, Andrea; May, Morgan; Haiman, Zoltán; ...
2014-12-30
Residual errors in shear measurements, after corrections for instrument systematics and atmospheric effects, can impact cosmological parameters derived from weak lensing observations. Here we combine convergence maps from our suite of ray-tracing simulations with random realizations of spurious shear. This allows us to quantify the errors and biases of the triplet (Ω_m, w, σ_8) derived from the power spectrum (PS), as well as from three different sets of non-Gaussian statistics of the lensing convergence field: Minkowski functionals (MFs), low-order moments (LMs), and peak counts (PKs). Our main results are as follows: (i) We find an order of magnitude smaller biases from the PS than in previous work. (ii) The PS and LM yield biases much smaller than the morphological statistics (MF, PK). (iii) For strictly Gaussian spurious shear with integrated amplitude as low as its current estimate of σ_sys^2 ≈ 10^-7, biases from the PS and LM would be unimportant even for a survey with the statistical power of the Large Synoptic Survey Telescope. However, we find that for surveys larger than ≈ 100 deg^2, non-Gaussianity in the noise (not included in our analysis) will likely be important and must be quantified to assess the biases. (iv) The morphological statistics (MF, PK) introduce important biases even for Gaussian noise, which must be corrected in large surveys. The biases are in different directions in (Ω_m, w, σ_8) parameter space, allowing self-calibration by combining multiple statistics. Our results warrant follow-up studies with more extensive lensing simulations and more accurate spurious shear estimates.
EMI Standards for Wireless Voice and Data on Board Aircraft
NASA Technical Reports Server (NTRS)
Ely, Jay J.; Nguyen, Truong X.
2002-01-01
The use of portable electronic devices (PEDs) on board aircraft continues to be an increasing source of misunderstanding between passengers and flight-crews, and consequently, an issue of controversy between wireless product manufacturers and air transport regulatory authorities. This conflict arises primarily because of the vastly different regulatory objectives between commercial product and airborne equipment standards for avoiding electromagnetic interference (EMI). This paper summarizes international regulatory limits and test processes for measuring spurious radiated emissions from commercially available PEDs, and compares them to international standards for airborne equipment. The goal is to provide insight for wireless product developers desiring to extend the freedom of their customers to use wireless products on-board aircraft, and to identify future product characteristics, test methods and technologies that may facilitate improved wireless freedom for airline passengers.
IMNN: Information Maximizing Neural Networks
NASA Astrophysics Data System (ADS)
Charnock, Tom; Lavaux, Guilhem; Wandelt, Benjamin D.
2018-04-01
This software trains artificial neural networks to find non-linear functionals of data that maximize Fisher information: information maximizing neural networks (IMNNs). Compressing large data sets vastly simplifies both frequentist and Bayesian inference, but important information may be inadvertently missed. Likelihood-free inference based on automatically derived IMNN summaries produces summaries that are good approximations to sufficient statistics. IMNNs are robustly capable of automatically finding optimal, non-linear summaries of the data even in cases where linear compression fails: inferring the variance of Gaussian signal in the presence of noise, inferring cosmological parameters from mock simulations of the Lyman-α forest in quasar spectra, and inferring frequency-domain parameters from LISA-like detections of gravitational waveforms. In this final case, the IMNN summary outperforms linear data compression by avoiding the introduction of spurious likelihood maxima.
NASA Astrophysics Data System (ADS)
Pholele, T. M.; Chuma, J. M.
2016-03-01
The effects of conductor disc in a dielectric loaded combline resonator on its spurious performance, unloaded quality factor (Qu), and coupling coefficients are analysed using a commercial electromagnetic software package CST Microwave Studio (CST MWS). The disc improves the spurious free band but simultaneously deteriorates the Qu. The presence of the disc substantially improves the electric coupling by a factor of 1.891 for an aperture opening of 12 mm, while it has insignificant effect on the magnetic coupling.
Assessing the Probability that a Finding Is Genuine for Large-Scale Genetic Association Studies
Kuo, Chia-Ling; Vsevolozhskaya, Olga A.; Zaykin, Dmitri V.
2015-01-01
Genetic association studies routinely involve massive numbers of statistical tests accompanied by P-values. Whole genome sequencing technologies increased the potential number of tested variants to tens of millions. The more tests are performed, the smaller P-value is required to be deemed significant. However, a small P-value is not equivalent to small chances of a spurious finding and significance thresholds may fail to serve as efficient filters against false results. While the Bayesian approach can provide a direct assessment of the probability that a finding is spurious, its adoption in association studies has been slow, due in part to the ubiquity of P-values and the automated way they are, as a rule, produced by software packages. Attempts to design simple ways to convert an association P-value into the probability that a finding is spurious have been met with difficulties. The False Positive Report Probability (FPRP) method has gained increasing popularity. However, FPRP is not designed to estimate the probability for a particular finding, because it is defined for an entire region of hypothetical findings with P-values at least as small as the one observed for that finding. Here we propose a method that lets researchers extract probability that a finding is spurious directly from a P-value. Considering the counterpart of that probability, we term this method POFIG: the Probability that a Finding is Genuine. Our approach shares FPRP's simplicity, but gives a valid probability that a finding is spurious given a P-value. In addition to straightforward interpretation, POFIG has desirable statistical properties. The POFIG average across a set of tentative associations provides an estimated proportion of false discoveries in that set. POFIGs are easily combined across studies and are immune to multiple testing and selection bias. We illustrate an application of POFIG method via analysis of GWAS associations with Crohn's disease. PMID:25955023
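The tail-area FPRP calculation that the abstract contrasts with POFIG can be written in one line. This is a sketch following the usual definition of the False Positive Report Probability, not an implementation of POFIG itself; the power and prior values in the example are illustrative, not taken from the paper.

```python
def fprp(p_value, power, prior):
    """Tail-area False Positive Report Probability: P(null | P <= p),
    given the study's power at the observed threshold and the prior
    probability that the tested association is genuine."""
    null_rate = p_value * (1.0 - prior)   # P(P <= p and association is null)
    true_rate = power * prior             # P(P <= p and association is genuine)
    return null_rate / (null_rate + true_rate)

# A conventionally "significant" P-value can still be probably spurious
# when the prior probability of a genuine association is small:
val = fprp(p_value=0.05, power=0.8, prior=0.01)   # ~0.86
```

This illustrates the abstract's point that a small P-value is not equivalent to a small chance of a spurious finding: with a 1% prior, a P-value at the 0.05 threshold still corresponds to roughly an 86% probability that the finding is false.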
García-González, Elena; González-Tarancón, Ricardo; Aramendía, Maite; Rello, Luis
2016-08-01
Monoclonal (M) components can interfere with the direct bilirubin (D-Bil) assay on AU Beckman Coulter instrumentation and produce spurious results, such as D-Bil values greater than total bilirubin (T-Bil) or very low/negative D-Bil values. If properly detected, this interference may uncover undiagnosed patients with monoclonal gammopathy (MG). We investigated the interference rate on the D-Bil AU assay in serum samples known to contain M proteins, along with their isotype, and describe the protocol set up in our laboratory to help with the diagnosis of MG using spurious D-Bil results as a first indication. Over a period of 4 years, 15.4% (345 of 2235) of serum samples containing M immunoglobulins produced erroneous D-Bil results, although no clear relationship between the magnitude or isotype of the M component and the interference could be found. In total, 22 new patients were diagnosed with MG based on the analytical artefact with the D-Bil as the first indication. The D-Bil interference from MG on the Beckman AU analysers needs to be made known to laboratories in order to prevent clinical confusion and/or additional workup to explain the origin of anomalous results. Although this information may not add to the management of existing patients with serum paraproteins, it can benefit patients who have not been diagnosed with MG by triggering follow-up testing to determine if M components are present.
The role of spurious correlation in the development of a komatiite alteration model
NASA Astrophysics Data System (ADS)
Butler, John C.
1986-03-01
Current procedures for assessing the degree of alteration in komatiites stress the construction of variation diagrams in which ratios of molecular proportions of the oxides are the axes of reference. For example, it has been argued that unaltered komatiites related to each other by olivine fractionation will display a linear variation with a slope of 0.5 in the space defined by [SiO2/TiO2] and [(MgO+FeO)/TiO2]. Extensive metasomatism is expected to destroy such a consistent pattern. Previous workers have tended to make use of ratios that have a common denominator. It has been known for a long time that ratios formed from uncorrelated variables will be correlated (a so-called spurious correlation) if both ratios have a common denominator. The magnitude of this spurious correlation is a function of the coefficients of variation of the measured amounts of the variables. If the denominator component has a coefficient of variation that is larger than those of the numerator components, the spurious correlation will be close to unity; that is, there will be nearly a straight-line relationship. As a demonstration, a fictitious data set has been simulated so that the means and variances of SiO2, TiO2, and (MgO + FeO) match those of an observed data set but the components themselves are uncorrelated. A plot of (SiO2/TiO2) versus [(MgO + FeO)/TiO2] of these simulated data produces a distribution of points that appears every bit as convincing an illustration of the lack of significant metasomatism as does the plot of the observed data. The assessment of the strength of linear association is a test of the observed correlation against an expected value (the null value) of zero. When a spurious correlation arises as a result of the formulation of ratios with a common denominator, zero is clearly an inappropriate choice as the null. It can be argued that the spurious correlation is, in fact, a more suitable null value. 
An analysis of komatiites from Gorgona Island and the Barberton suite reveals that the strong linear association could have been produced by forming ratios from uncorrelated starting chemical components. Ratios without parts in common are to be preferred in the construction of petrogenetic models.
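The common-denominator effect described above is easy to reproduce. The following is a minimal simulation with fictitious values: the means and spreads are illustrative choices (with TiO2 given the largest coefficient of variation, as in the komatiite case), not the paper's Gorgona or Barberton data.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500

# Three mutually uncorrelated "oxide" components (fictitious units of wt%).
sio2 = rng.normal(46.0, 1.5, n)
mgfe = rng.normal(30.0, 1.2, n)                 # stands in for MgO + FeO
tio2 = rng.lognormal(np.log(0.4), 0.5, n)       # largest coefficient of variation

# Raw components are uncorrelated by construction...
r_raw = np.corrcoef(sio2, mgfe)[0, 1]

# ...but ratios sharing TiO2 as denominator are spuriously near-linear.
r_ratio = np.corrcoef(sio2 / tio2, mgfe / tio2)[0, 1]
```

Because the denominator's coefficient of variation dominates, the scatter of both ratios is driven almost entirely by 1/TiO2, producing a near-perfect linear trend from components that carry no mutual information, which is exactly why zero is an inappropriate null value for the correlation test.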
Defect imaging for plate-like structures using diffuse field.
Hayashi, Takahiro
2018-04-01
Defect imaging using a scanning laser source (SLS) technique produces images of defects in a plate-like structure, as well as spurious images that occur because of resonances and reverberations within the specimen. This study developed SLS defect imaging using diffuse field concepts, by which the energy of the flexural waves excited by the laser can be estimated, to reduce the intensity of spurious images. Experimental results for different frequency bandwidths of the excitation waves and for specimens with different attenuation proved that clearer images of defects are obtained with broadband excitation using a chirp wave and with specimens of low attenuation, which produce diffuse fields easily.
Portable Wireless Device Threat Assessment for Aircraft Navigation Radios
NASA Technical Reports Server (NTRS)
Nguyen, Truong X.; Koppen, Sandra V.; Ely, Jay J.; Williams, Reuben A.; Smith, Laura J.; Salud, Maria Theresa P.
2004-01-01
This paper addresses the concern for Wireless Local Area Network devices and two-way radios to cause electromagnetic interference to aircraft navigation radio systems. Spurious radiated emissions from various IEEE 802.11a, 802.11b, and Bluetooth devices are characterized using reverberation chambers. The results are compared with baseline emissions from standard laptop computer and personal digital assistants (PDAs) that are currently allowed for use on aircraft. The results indicate that the WLAN devices tested are not more of a threat to aircraft navigation radios than standard laptop computers and PDAs in most aircraft bands. In addition, spurious radiated emission data from seven pairs of two-way radios are provided. These two-way radios emit at much higher levels in the bands considered. A description of the measurement process, device modes of operation and the measurement results are reported.
Correction of Population Stratification in Large Multi-Ethnic Association Studies
Serre, David; Montpetit, Alexandre; Paré, Guillaume; Engert, James C.; Yusuf, Salim; Keavney, Bernard; Hudson, Thomas J.; Anand, Sonia
2008-01-01
Background The vast majority of genetic risk factors for complex diseases have, taken individually, a small effect on the end phenotype. Population-based association studies therefore need very large sample sizes to detect significant differences between affected and non-affected individuals. Including thousands of affected individuals in a study requires recruitment in numerous centers, possibly from different geographic regions. Unfortunately such a recruitment strategy is likely to complicate the study design and to generate concerns regarding population stratification. Methodology/Principal Findings We analyzed 9,751 individuals representing three main ethnic groups - Europeans, Arabs and South Asians - that had been enrolled from 154 centers involving 52 countries for a global case/control study of acute myocardial infarction. All individuals were genotyped at 103 candidate genes using 1,536 SNPs selected with a tagging strategy that captures most of the genetic diversity in different populations. We show that relying solely on self-reported ethnicity is not sufficient to exclude population stratification and we present additional methods to identify and correct for stratification. Conclusions/Significance Our results highlight the importance of carefully addressing population stratification and of carefully “cleaning” the sample prior to analyses to obtain stronger signals of association and to avoid spurious results. PMID:18196181
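The spurious association that population stratification produces, and that "cleaning" the sample removes, can be shown with a deterministic toy example. The allele frequencies and disease prevalences below are hypothetical, not values from this study: within each population the allele is independent of disease (odds ratio 1), but both allele frequency and prevalence differ between populations, so naively pooling the two creates an apparent association.

```python
def odds_ratio(case_exp, ctrl_exp, case_unexp, ctrl_unexp):
    """Odds ratio from a 2x2 table of carriers vs non-carriers."""
    return (case_exp * ctrl_unexp) / (ctrl_exp * case_unexp)

# Population A: carrier frequency 0.5, disease prevalence 0.2 (independent).
pop_a = dict(case_exp=100, ctrl_exp=400, case_unexp=100, ctrl_unexp=400)
# Population B: carrier frequency 0.1, disease prevalence 0.05 (independent).
pop_b = dict(case_exp=5, ctrl_exp=95, case_unexp=45, ctrl_unexp=855)

or_a = odds_ratio(**pop_a)                       # 1.0 within stratum
or_b = odds_ratio(**pop_b)                       # 1.0 within stratum
pooled = {k: pop_a[k] + pop_b[k] for k in pop_a}
or_pooled = odds_ratio(**pooled)                 # > 1: spurious association
```

Stratum-specific odds ratios are exactly 1, while the pooled table yields an odds ratio near 1.8 purely from the confounding, which is why ancestry must be identified and adjusted for rather than relying on self-reported ethnicity alone.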
Structural basis for microRNA targeting
Schirle, Nicole T.; Sheu-Gruttadauria, Jessica; MacRae, Ian J.
2014-10-31
MicroRNAs (miRNAs) control expression of thousands of genes in plants and animals. miRNAs function by guiding Argonaute proteins to complementary sites in messenger RNAs (mRNAs) targeted for repression. In this paper, we determined crystal structures of human Argonaute-2 (Ago2) bound to a defined guide RNA with and without target RNAs representing miRNA recognition sites. These structures suggest a stepwise mechanism, in which Ago2 primarily exposes guide nucleotides (nt) 2 to 5 for initial target pairing. Pairing to nt 2 to 5 promotes conformational changes that expose nt 2 to 8 and 13 to 16 for further target recognition. Interactions with the guide-target minor groove allow Ago2 to interrogate target RNAs in a sequence-independent manner, whereas an adenosine binding-pocket opposite guide nt 1 further facilitates target recognition. Spurious slicing of miRNA targets is avoided through an inhibitory coordination of one catalytic magnesium ion. Finally, these results explain the conserved nucleotide-pairing patterns in animal miRNA target sites first observed over two decades ago.
Kwon, Kun-Sup; Yoon, Won-Sang
2010-01-01
In this paper we propose a method of removing spurious signals, due to quasi-amplitude modulation and the superposition effect, from the output of a frequency-hopping synthesizer with direct digital frequency synthesizer (DDFS)-driven phase-locked loop (PLL) architecture, which has the advantages of high frequency resolution, fast transition time, and small size. There are spurious signals that depend on the normalized frequency of the DDFS, and they can be dominant if they occur within the PLL loop bandwidth. We suggest that such signals can be eliminated by purposefully creating frequency errors in the developed synthesizer.
Coronal mass ejection and solar flare initiation processes without appreciable
NASA Astrophysics Data System (ADS)
Veselovsky, I.
TRACE and SOHO/EIT movies clearly show cases of coronal mass ejection and solar flare initiation without noticeable large-scale topology modifications in the observed features. Instead, the appearance of new intermediate scales is often omnipresent in the erupting region structures while the overall configuration is preserved. Examples of this kind are presented and discussed in the light of the existing magnetic field reconnection paradigms. It is demonstrated that spurious large-scale reconnections and detachments are often produced by projection effects in poorly resolved images of twisted loops and sheared arcades, especially when deformed parts of them are underexposed and not seen in the images for that reason alone. Other parts, which are normally exposed or overexposed, can create the illusion of "islands" or detached elements, though in reality they preserve the initial magnetic connectivity. Spurious "islands" of this kind could be wrongly interpreted as signatures of topological transitions in the large-scale magnetic fields, as in many instances described in the vast literature of the past based mainly on fuzzy YOHKOH images, which resulted in the myth of universal solar flare models and the scenario of detached magnetic island formation with new null points in the large-scale magnetic field. Better visualization, with higher resolution and sensitivity limits, has made it possible to clarify this confusion and avoid the unjustified interpretation. It is concluded that topological changes obviously can happen in the coronal magnetic fields, but these changes are not necessary ingredients of all coronal mass ejections and solar flares. The scenario of magnetic field opening is not universal for all ejections.
Alternatively, expanding ejections with closed magnetic configurations can be produced by fast E × B drifts in strong inductive electric fields, which appear due to the emergence of new magnetic flux. Corresponding theoretical models are presented and discussed.
Specimen Holder for Analytical Electron Microscopes
NASA Technical Reports Server (NTRS)
Clanton, U. S.; Isaacs, A. M.; Mackinnon, I.
1985-01-01
Reduces spectral contamination by spurious X-rays. Specimen holder made of compressed carbon securely retains standard electron microscope grid (disk) 3 mm in diameter and absorbs backscattered electrons that otherwise generate spurious X-rays. Since holder inexpensive, dedicated to single specimen when numerous samples examined.
NASA Astrophysics Data System (ADS)
Humeniuk, Alexander; Mitrić, Roland
2017-12-01
A software package, called DFTBaby, is published, which provides the electronic structure needed for running non-adiabatic molecular dynamics simulations at the level of tight-binding DFT. A long-range correction is incorporated to avoid spurious charge transfer states. Excited state energies, their analytic gradients and scalar non-adiabatic couplings are computed using tight-binding TD-DFT. These quantities are fed into a molecular dynamics code, which integrates Newton's equations of motion for the nuclei together with the electronic Schrödinger equation. Non-adiabatic effects are included by surface hopping. As an example, the program is applied to the optimization of excited states and non-adiabatic dynamics of polyfluorene. The python and Fortran source code is available at http://www.dftbaby.chemie.uni-wuerzburg.de.
47 CFR 2.1051 - Measurements required: Spurious emissions at antenna terminals.
Code of Federal Regulations, 2014 CFR
2014-10-01
... antenna terminals. 2.1051 Section 2.1051 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL... Procedures Certification § 2.1051 Measurements required: Spurious emissions at antenna terminals. The radio... checked at the equipment output terminals when properly loaded with a suitable artificial antenna. Curves...
47 CFR 2.1051 - Measurements required: Spurious emissions at antenna terminals.
Code of Federal Regulations, 2010 CFR
2010-10-01
... antenna terminals. 2.1051 Section 2.1051 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL... Procedures Certification § 2.1051 Measurements required: Spurious emissions at antenna terminals. The radio... checked at the equipment output terminals when properly loaded with a suitable artificial antenna. Curves...
47 CFR 2.1051 - Measurements required: Spurious emissions at antenna terminals.
Code of Federal Regulations, 2013 CFR
2013-10-01
... antenna terminals. 2.1051 Section 2.1051 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL... Procedures Certification § 2.1051 Measurements required: Spurious emissions at antenna terminals. The radio... checked at the equipment output terminals when properly loaded with a suitable artificial antenna. Curves...
47 CFR 2.1051 - Measurements required: Spurious emissions at antenna terminals.
Code of Federal Regulations, 2011 CFR
2011-10-01
... antenna terminals. 2.1051 Section 2.1051 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL... Procedures Certification § 2.1051 Measurements required: Spurious emissions at antenna terminals. The radio... checked at the equipment output terminals when properly loaded with a suitable artificial antenna. Curves...
47 CFR 2.1051 - Measurements required: Spurious emissions at antenna terminals.
Code of Federal Regulations, 2012 CFR
2012-10-01
... antenna terminals. 2.1051 Section 2.1051 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL... Procedures Certification § 2.1051 Measurements required: Spurious emissions at antenna terminals. The radio... checked at the equipment output terminals when properly loaded with a suitable artificial antenna. Curves...
Identification of Spurious Signals from Permeable Ffowcs Williams and Hawkings Surfaces
NASA Technical Reports Server (NTRS)
Lopes, Leonard V.; Boyd, David D., Jr.; Nark, Douglas M.; Wiedemann, Karl E.
2017-01-01
Integral forms of the permeable surface formulation of the Ffowcs Williams and Hawkings (FW-H) equation often require an input in the form of a near field Computational Fluid Dynamics (CFD) solution to predict noise in the near or far field from various types of geometries. The FW-H equation involves three source terms; two surface terms (monopole and dipole) and a volume term (quadrupole). Many solutions to the FW-H equation, such as several of Farassat's formulations, neglect the quadrupole term. Neglecting the quadrupole term in permeable surface formulations leads to inaccuracies called spurious signals. This paper explores the concept of spurious signals, explains how they are generated by specifying the acoustic and hydrodynamic surface properties individually, and provides methods to determine their presence, regardless of whether a correction algorithm is employed. A potential approach based on the equivalent sources method (ESM) and the sensitivity of Formulation 1A (Formulation S1A) is also discussed for the removal of spurious signals.
NASA Astrophysics Data System (ADS)
Park, Jong-Yeon; Stock, Charles A.; Yang, Xiaosong; Dunne, John P.; Rosati, Anthony; John, Jasmin; Zhang, Shaoqing
2018-03-01
Reliable estimates of historical and current biogeochemistry are essential for understanding past ecosystem variability and predicting future changes. Efforts to translate improved physical ocean state estimates into improved biogeochemical estimates, however, are hindered by high biogeochemical sensitivity to transient momentum imbalances that arise during physical data assimilation. Most notably, the breakdown of geostrophic constraints on data assimilation in equatorial regions can lead to spurious upwelling, resulting in excessive equatorial productivity and biogeochemical fluxes. This hampers efforts to understand and predict the biogeochemical consequences of El Niño and La Niña. We develop a strategy to robustly integrate an ocean biogeochemical model with an ensemble coupled-climate data assimilation system used for seasonal to decadal global climate prediction. Addressing spurious vertical velocities requires two steps. First, we find that tightening constraints on atmospheric data assimilation maintains a better equatorial wind stress and pressure gradient balance. This reduces spurious vertical velocities, but those remaining still produce substantial biogeochemical biases. The remainder is addressed by imposing stricter fidelity to model dynamics over data constraints near the equator. We determine an optimal choice of model-data weights that removed spurious biogeochemical signals while benefitting from off-equatorial constraints that still substantially improve equatorial physical ocean simulations. Compared to the unconstrained control run, the optimally constrained model reduces equatorial biogeochemical biases and markedly improves the equatorial subsurface nitrate concentrations and hypoxic area. The pragmatic approach described herein offers a means of advancing earth system prediction in parallel with continued data assimilation advances aimed at fully considering equatorial data constraints.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlin, P W
1989-06-01
As part of US Department of Energy-sponsored research on wind energy, a Mod-O wind turbine was used to drive a variable-speed, wound-rotor, induction generator. Energy resulting from the slip frequency voltage in the generator rotor was rectified to dc, inverted back to utility frequency ac, and injected into the power line. Spurious changing frequencies displayed in the generator output by a spectrum analyzer are caused by ripple on the dc link. No resonances of any of these moving frequencies were seen in spite of the presence of a bank of power factor correcting capacitors. 5 figs.
Application of the algebraic RNG model for transition simulation. [renormalization group theory
NASA Technical Reports Server (NTRS)
Lund, Thomas S.
1990-01-01
The algebraic form of the RNG model of Yakhot and Orszag (1986) is investigated as a transition model for the Reynolds averaged boundary layer equations. It is found that the cubic equation for the eddy viscosity contains both a jump discontinuity and one spurious root. An as yet unpublished transformation to a quartic equation is shown to remove the numerical difficulties associated with the discontinuity, but only at the expense of merging both the physical and the spurious root of the cubic. Jumps between the branches of the resulting multiple-valued solution are found to lead to oscillations in flat plate transition calculations. Aside from the oscillations, the transition behavior is qualitatively correct.
Upwind schemes and bifurcating solutions in real gas computations
NASA Technical Reports Server (NTRS)
Suresh, Ambady; Liou, Meng-Sing
1992-01-01
The area of high speed flow is seeing a renewed interest due to advanced propulsion concepts such as the National Aerospace Plane (NASP), Space Shuttle, and future civil transport concepts. Upwind schemes to solve such flows have become increasingly popular in the last decade due to their excellent shock capturing properties. In the first part of this paper the authors present the extension of the Osher scheme to equilibrium and non-equilibrium gases. For simplicity, the source terms are treated explicitly. Computations based on the above scheme are presented to demonstrate the feasibility, accuracy and efficiency of the proposed scheme. One of the test problems is a Chapman-Jouguet detonation problem for which numerical solutions have been known to bifurcate into spurious weak detonation solutions on coarse grids. Results indicate that the numerical solution obtained depends both on the upwinding scheme used and the limiter employed to obtain second order accuracy. For example, the Osher scheme gives the correct CJ solution when the super-bee limiter is used, but gives the spurious solution when the Van Leer limiter is used. With the Roe scheme the spurious solution is obtained for all limiters.
Yang, Ziheng; Zhu, Tianqi
2018-02-20
The Bayesian method is noted to produce spuriously high posterior probabilities for phylogenetic trees in analysis of large datasets, but the precise reasons for this overconfidence are unknown. In general, the performance of Bayesian selection of misspecified models is poorly understood, even though this is of great scientific interest since models are never true in real data analysis. Here we characterize the asymptotic behavior of Bayesian model selection and show that when the competing models are equally wrong, Bayesian model selection exhibits surprising and polarized behaviors in large datasets, supporting one model with full force while rejecting the others. If one model is slightly less wrong than the other, the less wrong model will eventually win when the amount of data increases, but the method may become overconfident before it becomes reliable. We suggest that this extreme behavior may be a major factor behind the spuriously high posterior probabilities for evolutionary trees. The philosophical implications of our results for the use of Bayesian model selection to evaluate opposing scientific hypotheses remain to be explored, as do the behaviors of non-Bayesian methods in similar situations.
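The polarized large-sample behavior described above can be illustrated with a toy coin-tossing simulation (our sketch, not the paper's analysis): data come from a fair coin, and two misspecified Bernoulli models are compared. When the models are equally wrong, the log-Bayes factor is a zero-drift random walk whose variance grows with the sample size, so the posterior drifts toward an extreme; when one model is slightly less wrong, it wins with near certainty as data accumulate.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
heads = rng.random(n) < 0.5                 # true process: fair coin, p = 0.5

def log_lik(p):
    """Bernoulli log-likelihood of the observed tosses under success rate p."""
    k = heads.sum()
    return k * np.log(p) + (n - k) * np.log(1 - p)

# Two equally wrong models (same KL divergence from the truth): the
# log-Bayes factor has zero drift but growing variance, so the posterior
# tends to an extreme rather than settling near 1/2.
post_equal = 1 / (1 + np.exp(log_lik(0.55) - log_lik(0.45)))

# One model slightly less wrong (p = 0.48 vs. p = 0.60): the less wrong
# model eventually wins with overwhelming posterior probability.
post_less_wrong = 1 / (1 + np.exp(log_lik(0.60) - log_lik(0.48)))
```

The specific rates (0.45/0.55, 0.48/0.60) and the sample size are illustrative choices, not values from the paper.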
Mode suppression means for gyrotron cavities
Chodorow, Marvin; Symons, Robert S.
1983-08-09
In a gyrotron electron tube of the gyro-klystron or gyro-monotron type, having a cavity supporting an electromagnetic mode with circular electric field, spurious resonances can occur in modes having noncircular electric field. These spurious resonances are damped and their frequencies shifted by a circular groove in the cavity parallel to the electric field.
NASA Technical Reports Server (NTRS)
Robinson, Michael; Steiner, Matthias; Wolff, David B.; Ferrier, Brad S.; Kessinger, Cathy; Einaudi, Franco (Technical Monitor)
2000-01-01
The primary function of the TRMM Ground Validation (GV) Program is to create GV rainfall products that provide basic validation of satellite-derived precipitation measurements for select primary sites. A fundamental and extremely important step in creating high-quality GV products is radar data quality control. Quality control (QC) processing of TRMM GV radar data is based on some automated procedures, but the current QC algorithm is not fully operational and requires significant human interaction to assure satisfactory results. Moreover, the TRMM GV QC algorithm, even with continuous manual tuning, still cannot completely remove all types of spurious echoes. In an attempt to improve the current operational radar data QC procedures of the TRMM GV effort, an intercomparison of several QC algorithms has been conducted. This presentation will demonstrate how various radar data QC algorithms affect accumulated radar rainfall products. In all, six different QC algorithms will be applied to two months of WSR-88D radar data from Melbourne, Florida. Daily, five-day, and monthly accumulated radar rainfall maps will be produced for each quality-controlled data set. The QC algorithms will be evaluated and compared based on their ability to remove spurious echoes without removing significant precipitation. Strengths and weaknesses of each algorithm will be assessed based on their ability to mitigate both erroneous additions and reductions in rainfall accumulation, from spurious echo contamination and true precipitation removal, respectively. Contamination from individual spurious echo categories will be quantified to further diagnose the abilities of each radar QC algorithm. Finally, a cost-benefit analysis will be conducted to determine if a more automated QC algorithm is a viable alternative to the current, labor-intensive QC algorithm employed by TRMM GV.
Electromagnetic Design of a Magnetically Coupled Spatial Power Combiner
NASA Astrophysics Data System (ADS)
Bulcha, B. T.; Cataldo, G.; Stevenson, T. R.; U-Yen, K.; Moseley, S. H.; Wollack, E. J.
2018-04-01
The design of a two-dimensional spatial beam-combining network employing a parallel-plate superconducting waveguide filled with a monocrystalline silicon dielectric substrate is presented. This component uses arrays of magnetically coupled antenna elements to achieve high coupling efficiency and full sampling of the intensity distribution while avoiding diffractive losses in the multimode waveguide region. These attributes enable the structure's use in realizing compact far-infrared spectrometers for astrophysical and instrumentation applications. If unterminated, reflections within a finite-sized spatial beam combiner can potentially lead to spurious couplings between elements. A planar meta-material electromagnetic absorber is implemented to control this response within the device. This broadband termination absorbs greater than 0.99 of the power over the 1.7:1 operational band at angles ranging from normal to near-parallel incidence. The design approach, simulations and applications of the spatial power combiner and meta-material termination structure are presented.
NASA Astrophysics Data System (ADS)
Panetsos, Fivos; Sanchez-Jimenez, Abel; Torets, Carlos; Largo, Carla; Micera, Silvestro
2011-08-01
In this work we address the use of realtime cortical recordings for the generation of coherent, reliable and robust motor activity in spinal-lesioned animals through selective intraspinal microstimulation (ISMS). The spinal cord of adult rats was hemisectioned and groups of multielectrodes were implanted in both the central nervous system (CNS) and the spinal cord below the lesion level to establish a neural system interface (NSI). To test the reliability of this new NSI connection, highly repeatable neural responses recorded from the CNS were used as a pattern generator of an open-loop control strategy for selective ISMS of the spinal motoneurons. Our experimental procedure avoided the spontaneous non-controlled and non-repeatable neural activity that could have generated spurious ISMS and the consequent undesired muscle contractions. Combinations of complex CNS patterns generated precisely coordinated, reliable and robust motor actions.
Hartle, A; McCormack, T; Carlisle, J; Anderson, S; Pichel, A; Beckett, N; Woodcock, T; Heagerty, A
2016-03-01
This guideline aims to ensure that patients admitted to hospital for elective surgery are known to have blood pressures below 160 mmHg systolic and 100 mmHg diastolic in primary care. The objective for primary care is to fulfil this criterion before referral to secondary care for elective surgery. The objective for secondary care is to avoid spurious hypertensive measurements. Secondary care should not attempt to diagnose hypertension in patients who are normotensive in primary care. Patients who present to pre-operative assessment clinics without documented primary care blood pressures should proceed to elective surgery if clinic blood pressures are below 180 mmHg systolic and 110 mmHg diastolic. © 2016 The Authors. Anaesthesia published by John Wiley & Sons Ltd on behalf of Association of Anaesthetists of Great Britain and Ireland.
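The guideline's numeric criteria can be encoded as a simple triage rule. The sketch below is our illustrative reading of the stated thresholds, not an official decision aid; function and parameter names are our own.

```python
def proceed_to_elective_surgery(primary_care_bp, clinic_bp):
    """Hypothetical encoding of the guideline's blood-pressure criteria.

    primary_care_bp: (systolic, diastolic) documented in primary care,
                     or None if no reading accompanied the referral.
    clinic_bp:       (systolic, diastolic) at pre-operative assessment.
    Returns True if the blood-pressure criterion for surgery is met.
    """
    if primary_care_bp is not None:
        systolic, diastolic = primary_care_bp
        return systolic < 160 and diastolic < 100   # primary-care criterion
    # No documented primary-care reading: apply the clinic threshold.
    systolic, diastolic = clinic_bp
    return systolic < 180 and diastolic < 110
```

Note that with a documented primary-care reading below 160/100 mmHg, a higher clinic reading does not block surgery, reflecting the guideline's aim of avoiding spurious hypertensive measurements in secondary care.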
A conservative, relativistic Fokker-Planck solver for runaway electrons
NASA Astrophysics Data System (ADS)
Chacon, Luis; Taitano, W.; Tang, X.; Guo, Z.; McDevitt, C.
2017-10-01
Relativistic runaway electrons develop when electric fields surpass a critical electric field, Ec =ED
Aldhous, Marian C; Abu Bakar, Suhaili; Prescott, Natalie J; Palla, Raquel; Soo, Kimberley; Mansfield, John C; Mathew, Christopher G; Satsangi, Jack; Armour, John A L
2010-12-15
The copy number variation in beta-defensin genes on human chromosome 8 has been proposed to underlie susceptibility to inflammatory disorders, but presents considerable challenges for accurate typing on the scale required for adequately powered case-control studies. In this work, we have used accurate methods of copy number typing based on the paralogue ratio test (PRT) to assess beta-defensin copy number in more than 1500 UK DNA samples including more than 1000 cases of Crohn's disease. A subset of 625 samples was typed using both PRT-based methods and standard real-time PCR methods, from which direct comparisons highlight potentially serious shortcomings of a real-time PCR assay for typing this variant. Comparing our PRT-based results with two previous studies based only on real-time PCR, we find no evidence to support the reported association of Crohn's disease with either low or high beta-defensin copy number; furthermore, it is noteworthy that there are disagreements between different studies on the observed frequency distribution of copy number states among European controls. We suggest safeguards to be adopted in assessing and reporting the accuracy of copy number measurement, with particular emphasis on integer clustering of results, to avoid reporting of spurious associations in future case-control studies.
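One of the safeguards suggested above, integer clustering of results, can be screened with a simple statistic: raw copy-number estimates from a reliable assay should sit close to whole numbers. The function below is a minimal sketch of that idea (our construction, not the authors' implementation; the threshold values in the test data are illustrative).

```python
import numpy as np

def integer_clustering_score(raw_copy_numbers):
    """Mean absolute distance of raw copy-number estimates to the nearest
    integer. Small values indicate tight integer clustering (good assay);
    large values indicate ambiguous integer assignment."""
    x = np.asarray(raw_copy_numbers, dtype=float)
    residuals = np.abs(x - np.round(x))     # distance to nearest integer
    return residuals.mean()

tight = [2.03, 3.98, 2.95, 4.02, 3.01]      # clusters near 2, 3 and 4
noisy = [2.4, 3.6, 2.5, 4.3, 3.5]           # no clear integer assignment
```

A study reporting scores like the second set would, under this criterion, warrant caution before any case-control association is claimed.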
Edjabou, Maklawe Essonanawe; Martín-Fernández, Josep Antoni; Scheutz, Charlotte; Astrup, Thomas Fruergaard
2017-11-01
Data for fractional solid waste composition provide relative magnitudes of individual waste fractions, the percentages of which always sum to 100, thereby connecting them intrinsically. Due to this sum constraint, waste composition data represent closed data, and their interpretation and analysis require statistical methods other than classical statistics, which are suitable only for non-constrained data such as absolute values. However, the closed characteristics of waste composition data are often ignored when analysed. The results of this study showed, for example, that unavoidable animal-derived food waste amounted to 2.21±3.12% with a confidence interval of (-4.03; 8.45), which highlights the problem of biased negative proportions. A Pearson's correlation test, applied to waste fraction generation (kg mass), indicated a positive correlation between avoidable vegetable food waste and plastic packaging. However, correlation tests applied to waste fraction compositions (percentage values) showed a negative association in this regard, thus demonstrating that statistical analyses applied to compositional waste fraction data, without addressing the closed characteristics of these data, have the potential to generate spurious or misleading results. Therefore, compositional data should be transformed adequately prior to any statistical analysis, such as computing means, standard deviations and correlation coefficients. Copyright © 2017 Elsevier Ltd. All rights reserved.
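A standard remedy for the closure effect described above is a log-ratio transform of the percentages before classical statistics are applied. The centered log-ratio (clr) sketch below illustrates the idea with synthetic data; the paper's exact transformation and data may differ.

```python
import numpy as np

def clr(compositions):
    """Centered log-ratio transform (Aitchison): maps strictly positive
    closed compositional data to unconstrained real space, after which
    means, standard deviations and correlations behave classically."""
    x = np.asarray(compositions, dtype=float)
    logx = np.log(x)
    return logx - logx.mean(axis=1, keepdims=True)

# Synthetic example: three waste fractions closed to 100% per sample.
masses = np.random.default_rng(1).lognormal(size=(50, 3))    # kg, open data
percents = 100 * masses / masses.sum(axis=1, keepdims=True)  # closed data
transformed = clr(percents)   # each row of the clr output sums to zero
```

Correlations computed on `transformed` rather than on `percents` avoid the spurious negative associations induced by the 100% sum constraint.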
Can the common brain parasite, Toxoplasma gondii, influence human culture?
Lafferty, Kevin D
2006-01-01
The latent prevalence of a long-lived and common brain parasite, Toxoplasma gondii, explains a statistically significant portion of the variance in aggregate neuroticism among populations, as well as in the ‘neurotic’ cultural dimensions of sex roles and uncertainty avoidance. Spurious or non-causal correlations between aggregate personality and aspects of climate and culture that influence T. gondii transmission could also drive these patterns. A link between culture and T. gondii hypothetically results from a behavioural manipulation that the parasite uses to increase its transmission to the next host in the life cycle: a cat. While latent toxoplasmosis is usually benign, the parasite's subtle effect on individual personality appears to alter the aggregate personality at the population level. Drivers of the geographical variation in the prevalence of this parasite include the effects of climate on the persistence of infectious stages in soil, the cultural practices of food preparation and cats as pets. Some variation in culture, therefore, may ultimately be related to how climate affects the distribution of T. gondii, though the results only explain a fraction of the variation in two of the four cultural dimensions, suggesting that if T. gondii does influence human culture, it is only one among many factors. PMID:17015323
Bernstein modes in a non-neutral plasma column
NASA Astrophysics Data System (ADS)
Walsh, Daniel; Dubin, Daniel H. E.
2018-05-01
This paper presents theory and numerical calculations of electrostatic Bernstein modes in an inhomogeneous cylindrical plasma column. These modes rely on finite Larmor radius effects to propagate radially across the column until they are reflected when their frequency matches the upper hybrid frequency. This reflection sets up an internal normal mode on the column and also mode-couples to the electrostatic surface cyclotron wave (which allows the normal mode to be excited and observed using external electrodes). Numerical results predicting the mode spectra, using a novel linear Vlasov code on a cylindrical grid, are presented and compared to an analytical Wentzel Kramers Brillouin (WKB) theory. A previous version of the theory [D. H. E. Dubin, Phys. Plasmas 20(4), 042120 (2013)] expanded the plasma response in powers of 1/B, approximating the local upper hybrid frequency, and consequently, its frequency predictions are spuriously shifted with respect to the numerical results presented here. A new version of the WKB theory avoids this approximation using the exact cold fluid plasma response and does a better job of reproducing the numerical frequency spectrum. The effect of multiple ion species on the mode spectrum is also considered, to make contact with experiments that observe cyclotron modes in a multi-species pure ion plasma [M. Affolter et al., Phys. Plasmas 22(5), 055701 (2015)].
NASA Astrophysics Data System (ADS)
Ye, J.; Shi, J.; De Hoop, M. V.
2017-12-01
We develop a robust algorithm to compute seismic normal modes in a spherically symmetric, non-rotating Earth. A well-known problem is the cross-contamination of modes near "intersections" of dispersion curves for separate waveguides. Our novel computational approach completely avoids artificial degeneracies by guaranteeing orthonormality among the eigenfunctions. We extend Wiggins' and Buland's work, and reformulate the Sturm-Liouville problem as a generalized eigenvalue problem with the Rayleigh-Ritz Galerkin method. A special projection operator incorporating the gravity terms proposed by de Hoop and a displacement/pressure formulation are utilized in the fluid outer core to project out the essential spectrum. Moreover, the weak variational form enables us to achieve high accuracy across the solid-fluid boundary, especially for Stoneley modes, which have exponentially decaying behavior. We also employ the mixed finite element technique to avoid spurious pressure modes arising from discretization schemes, and a numerical inf-sup test is performed following Bathe's work. In addition, the self-gravitation terms are reformulated to avoid computations outside the Earth, thanks to the domain decomposition technique. Our package enables us to study the physical properties of intersection points of waveguides. According to Okal's classification theory, the group velocities should be continuous within a branch of the same mode family. However, we have found that there will be a small "bump" near intersection points, which is consistent with Miropol'sky's observation. In fact, we can loosely regard Earth's surface and the CMB as independent waveguides. For those modes that are far from the intersection points, the eigenfunctions are localized in the corresponding waveguides. However, modes close to intersection points have physical features of both waveguides, which means they cannot be classified in either family. Our results improve on Okal's classification, demonstrating that dispersion curves from independent waveguides should be considered to break at intersection points.
A Bayesian Scoring Technique for Mining Predictive and Non-Spurious Rules
Batal, Iyad; Cooper, Gregory; Hauskrecht, Milos
2015-01-01
Rule mining is an important class of data mining methods for discovering interesting patterns in data. The success of a rule mining method heavily depends on the evaluation function that is used to assess the quality of the rules. In this work, we propose a new rule evaluation score - the Predictive and Non-Spurious Rules (PNSR) score. This score relies on Bayesian inference to evaluate the quality of the rules and considers the structure of the rules to filter out spurious rules. We present an efficient algorithm for finding rules with high PNSR scores. The experiments demonstrate that our method is able to cover and explain the data with a much smaller rule set than existing methods. PMID:25938136
Segmentation of time series with long-range fractal correlations.
Bernaola-Galván, P; Oliver, J L; Hackenberg, M; Coronado, A V; Ivanov, P Ch; Carpena, P
2012-06-01
Segmentation is a standard method of data analysis to identify change-points dividing a nonstationary time series into homogeneous segments. However, for long-range fractal correlated series, most of the segmentation techniques detect spurious change-points which are simply due to the heterogeneities induced by the correlations and not to real nonstationarities. To avoid this oversegmentation, we present a segmentation algorithm which takes as a reference for homogeneity, instead of a random i.i.d. series, a correlated series modeled by a fractional noise with the same degree of correlations as the series to be segmented. We apply our algorithm to artificial series with long-range correlations and show that it systematically detects only the change-points produced by real nonstationarities and not those created by the correlations of the signal. Further, we apply the method to the sequence of the long arm of human chromosome 21, which is known to have long-range fractal correlations. We obtain only three segments that clearly correspond to the three regions of different G + C composition revealed by means of a multi-scale wavelet plot. Similar results have been obtained when segmenting all human chromosome sequences, showing the existence of previously unknown huge compositional superstructures in the human genome.
Roles of antinucleon degrees of freedom in the relativistic random phase approximation
NASA Astrophysics Data System (ADS)
Kurasawa, Haruki; Suzuki, Toshio
2015-11-01
The roles of antinucleon degrees of freedom in the relativistic random phase approximation (RPA) are investigated. The energy-weighted sum of the RPA transition strengths is expressed in terms of the double commutator between the excitation operator and the Hamiltonian, as in nonrelativistic models. The commutator, however, should not be calculated in the usual way in the local field theory, because, otherwise, the sum vanishes. The sum value obtained correctly from the commutator is infinite, owing to the Dirac sea. Most of the previous calculations take into account only some of the nucleon-antinucleon states, in order to avoid divergence problems. As a result, RPA states with negative excitation energy appear, which make the sum value vanish. Moreover, disregarding the divergence changes the sign of nuclear interactions in the RPA equation that describes the coupling of the nucleon particle-hole states with the nucleon-antinucleon states. Indeed, the excitation energies of the spurious state and giant monopole states in the no-sea approximation are dominated by these unphysical changes. The baryon current conservation can be described without touching the divergence problems. A schematic model with separable interactions is presented, which makes the structure of the relativistic RPA transparent.
A well-posed optimal spectral element approximation for the Stokes problem
NASA Technical Reports Server (NTRS)
Maday, Y.; Patera, A. T.; Ronquist, E. M.
1987-01-01
A method is proposed for the spectral element simulation of incompressible flow. This method constitutes a well-posed optimal approximation of the steady Stokes problem with no spurious modes in the pressure. The resulting method is analyzed, and numerical results are presented for a model problem.
Portable Wireless LAN Device and Two-way Radio Threat Assessment for Aircraft Navigation Radios
NASA Technical Reports Server (NTRS)
Nguyen, Truong X.; Koppen, Sandra V.; Ely, Jay J.; Williams, Reuben A.; Smith, Laura J.; Salud, Maria Theresa P.
2003-01-01
Measurement processes, data and analysis are provided to address the concern that Wireless Local Area Network devices and two-way radios may cause electromagnetic interference to aircraft navigation radio systems. A radiated emission measurement process is developed and spurious radiated emissions from various devices are characterized using reverberation chambers. Spurious radiated emissions in aircraft radio frequency bands from several wireless network devices are compared with baseline emissions from standard computer laptops and personal digital assistants. In addition, spurious radiated emission data in aircraft radio frequency bands from seven pairs of two-way radios are provided. A description of the measurement process, device modes of operation and the measurement results are reported. Aircraft interference path loss measurements were conducted on four Boeing 747 and Boeing 737 aircraft for several aircraft radio systems. The measurement approach is described and the path loss results are compared with existing data from reference documents, standards, and NASA partnerships. In-band on-channel interference thresholds are compiled from an existing reference document. Using these data, a risk assessment is provided for interference from wireless network devices and two-way radios to aircraft systems, including Localizer, Glideslope, Very High Frequency Omnidirectional Range, Microwave Landing System and Global Positioning System. The report compares the interference risks associated with emissions from wireless network devices and two-way radios against standard laptops and personal digital assistants. Existing receiver interference threshold references are identified as requiring more data for better interference risk assessment.
NASA Astrophysics Data System (ADS)
Gao, Lingli; Pan, Yudi
2018-05-01
The correct estimation of the seismic source signature is crucial to exploration geophysics. Based on seismic interferometry, the virtual real source (VRS) method provides a model-independent way for source signature estimation. However, when encountering multimode surface waves, which are commonly seen in the shallow seismic survey, strong spurious events appear in seismic interferometric results. These spurious events introduce errors in the virtual-source recordings and reduce the accuracy of the source signature estimated by the VRS method. In order to estimate a correct source signature from multimode surface waves, we propose a mode-separated VRS method. In this method, multimode surface waves are mode separated before seismic interferometry. Virtual-source recordings are then obtained by applying seismic interferometry to each mode individually. Therefore, artefacts caused by cross-mode correlation are excluded in the virtual-source recordings and the estimated source signatures. A synthetic example showed that a correct source signature can be estimated with the proposed method, while strong spurious oscillation occurs in the estimated source signature if we do not apply mode separation first. We also applied the proposed method to a field example, which verified its validity and effectiveness in estimating seismic source signature from shallow seismic shot gathers containing multimode surface waves.
Some effects of horizontal discretization on linear baroclinic and symmetric instabilities
NASA Astrophysics Data System (ADS)
Barham, William; Bachman, Scott; Grooms, Ian
2018-05-01
The effects of horizontal discretization on linear baroclinic and symmetric instabilities are investigated by analyzing the behavior of the hydrostatic Eady problem in ocean models on the B and C grids. On the C grid a spurious baroclinic instability appears at small wavelengths. This instability does not disappear as the grid scale decreases; instead, it simply moves to smaller horizontal scales. The peak growth rate of the spurious instability is independent of the grid scale as the latter decreases. It is equal to cf/√Ri, where Ri is the balanced Richardson number, f is the Coriolis parameter, and c is a nondimensional constant that depends on the Richardson number. As the Richardson number increases, c increases towards an upper bound of approximately 1/2; for large Richardson numbers the spurious instability is faster than the Eady instability. To suppress the spurious instability it is recommended to use fourth-order centered tracer advection along with biharmonic viscosity and diffusion with coefficients (Δx)⁴f/(32√Ri) or larger, where Δx is the grid scale. On the B grid, the growth rates of baroclinic and symmetric instabilities are too small, and converge upwards towards the correct values as the grid scale decreases; no spurious instabilities are observed. In B grid models at eddy-permitting resolution, the reduced growth rate of baroclinic instability may contribute to partially-resolved eddies being too weak. On the C grid the growth rate of symmetric instability is better (larger) than on the B grid, and converges upwards towards the correct value as the grid scale decreases.
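The two formulas quoted above are straightforward to evaluate. The numbers below are illustrative choices (a midlatitude Coriolis parameter, Ri = 4, a 1 km grid), not values from the paper.

```python
import math

def spurious_growth_rate(f, Ri, c=0.5):
    """Peak growth rate c*f/sqrt(Ri) of the spurious C-grid instability.
    c depends on the Richardson number, with an upper bound near 1/2."""
    return c * f / math.sqrt(Ri)

def min_biharmonic_coefficient(dx, f, Ri):
    """Recommended minimum biharmonic viscosity/diffusivity coefficient,
    (dx**4) * f / (32 * sqrt(Ri))."""
    return dx**4 * f / (32 * math.sqrt(Ri))

f = 1e-4             # Coriolis parameter, s^-1 (illustrative midlatitude value)
Ri, dx = 4.0, 1.0e3  # balanced Richardson number; grid scale in metres

sigma = spurious_growth_rate(f, Ri)          # 2.5e-5 s^-1
nu4 = min_biharmonic_coefficient(dx, f, Ri)  # 1.5625e6 m^4 s^-1
```

At these parameters the spurious mode e-folds in about 11 hours, which makes clear why an unsuppressed instability of this kind contaminates biogeochemically relevant vertical velocities.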
Smoothed-particle-hydrodynamics modeling of dissipation mechanisms in gravity waves.
Colagrossi, Andrea; Souto-Iglesias, Antonio; Antuono, Matteo; Marrone, Salvatore
2013-02-01
The smoothed-particle-hydrodynamics (SPH) method has been used to study the evolution of free-surface Newtonian viscous flows specifically focusing on dissipation mechanisms in gravity waves. The numerical results have been compared with an analytical solution of the linearized Navier-Stokes equations for Reynolds numbers in the range 50-5000. We found that a correct choice of the number of neighboring particles is of fundamental importance in order to obtain convergence towards the analytical solution. This number has to increase with higher Reynolds numbers in order to prevent the onset of spurious vorticity inside the bulk of the fluid, leading to an unphysical overdamping of the wave amplitude. This generation of spurious vorticity strongly depends on the specific kernel function used in the SPH model.
A comprehensive comparison of network similarities for link prediction and spurious link elimination
NASA Astrophysics Data System (ADS)
Zhang, Peng; Qiu, Dan; Zeng, An; Xiao, Jinghua
2018-06-01
Identifying missing interactions in complex networks, known as link prediction, is realized by estimating the likelihood of the existence of a link between two nodes according to the observed links and nodes' attributes. Similar approaches have also been employed to identify and remove spurious links in networks, which is crucial for improving the reliability of network data. In network science, the likelihood of two nodes having a connection strongly depends on their structural similarity. The key to addressing these two problems thus becomes how to objectively measure the similarity between nodes in networks. In the literature, numerous network similarity metrics have been proposed, and their accuracy has been discussed independently in previous works. In this paper, we systematically compare the accuracy of 18 similarity metrics in both link prediction and spurious link elimination when the observed networks are very sparse or contain inaccurate linking information. Interestingly, some methods that achieve high accuracy in link prediction tend to perform poorly in identifying spurious interactions. We further find that the methods can be classified into several clusters according to their behaviors. This work is useful for guiding future use of these similarity metrics for different purposes.
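Two of the classic structural similarity metrics compared in such studies, common neighbours and the Jaccard index, can be sketched in a few lines on a toy graph. This is our illustration of the general approach, independent of the paper's 18 metrics and test networks.

```python
from itertools import combinations

# Toy undirected network as symmetric adjacency sets (illustrative).
adj = {
    1: {2, 3, 4, 5},
    2: {1, 3, 4},
    3: {1, 2},
    4: {1, 2},
    5: {1},
}

def common_neighbors(u, v):
    """Number of shared neighbours of u and v."""
    return len(adj[u] & adj[v])

def jaccard(u, v):
    """Shared neighbours normalized by the size of the neighbour union."""
    union = adj[u] | adj[v]
    return len(adj[u] & adj[v]) / len(union) if union else 0.0

# Score every non-observed pair; the top-ranked pair is the predicted link.
non_edges = [(u, v) for u, v in combinations(sorted(adj), 2) if v not in adj[u]]
ranked = sorted(non_edges, key=lambda e: common_neighbors(*e), reverse=True)
# Nodes 3 and 4 share two neighbours (1 and 2), so (3, 4) ranks first.
```

The same scores, applied to observed rather than non-observed pairs, rank existing links by plausibility, which is the basis of spurious link elimination.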
Beltman, Joost B; Urbanus, Jos; Velds, Arno; van Rooij, Nienke; Rohr, Jan C; Naik, Shalin H; Schumacher, Ton N
2016-04-02
Next generation sequencing (NGS) of amplified DNA is a powerful tool to describe genetic heterogeneity within cell populations that can both be used to investigate the clonal structure of cell populations and to perform genetic lineage tracing. For applications in which both abundant and rare sequences are biologically relevant, the relatively high error rate of NGS techniques complicates data analysis, as it is difficult to distinguish rare true sequences from spurious sequences that are generated by PCR or sequencing errors. This issue, for instance, applies to cellular barcoding strategies that aim to follow the amount and type of offspring of single cells, by supplying these with unique heritable DNA tags. Here, we use genetic barcoding data from the Illumina HiSeq platform to show that straightforward read threshold-based filtering of data is typically insufficient to filter out spurious barcodes. Importantly, we demonstrate that specific sequencing errors occur at an approximately constant rate across different samples that are sequenced in parallel. We exploit this observation by developing a novel approach to filter out spurious sequences. Application of our new method demonstrates its value in the identification of true sequences amongst spurious sequences in biological data sets.
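The key observation above, that specific sequencing errors recur at an approximately constant rate across samples sequenced in parallel, suggests a filter of the following shape. This is a toy sketch of the idea only; the function name, thresholds and exact criterion are our assumptions, not the authors' published algorithm.

```python
import numpy as np

def looks_like_error_derivative(child, parent, max_rate=0.01, max_cv=0.5):
    """child, parent: per-sample read counts of two similar barcodes across
    samples sequenced in parallel. An error-derived barcode should appear at
    a small and roughly constant fraction of its parent, so we flag it when
    the per-sample ratio is low and has a small coefficient of variation.
    Thresholds are illustrative assumptions."""
    r = np.asarray(child, dtype=float) / np.asarray(parent, dtype=float)
    return bool(r.mean() < max_rate and r.std() / r.mean() < max_cv)

# A barcode tracking its parent at ~0.5% in every sample: likely an error.
spurious = looks_like_error_derivative([45, 110, 70], [10_000, 20_000, 15_000])
# A rare but real barcode fluctuates independently of any candidate parent.
real = looks_like_error_derivative([50, 0, 400], [10_000, 20_000, 15_000])
```

Unlike a plain read-count threshold, which the abstract notes is typically insufficient, this criterion uses the cross-sample consistency of the error rate to separate rare true barcodes from spurious ones.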
Komsa, Darya N; Staroverov, Viktor N
2016-11-08
Standard density-functional approximations often incorrectly predict that heteronuclear diatomic molecules dissociate into fractionally charged atoms. We demonstrate that these spurious charges can be eliminated by adapting the shape-correction method for Kohn-Sham potentials that was originally introduced to improve Rydberg excitation energies [ Phys. Rev. Lett. 2012 , 108 , 253005 ]. Specifically, we show that if a suitably determined fraction of electron charge is added to or removed from a frontier Kohn-Sham orbital level, the approximate Kohn-Sham potential of a stretched molecule self-corrects by developing a semblance of step structure; if this potential is used to obtain the electron density of the neutral molecule, charge delocalization is blocked and spurious fractional charges disappear beyond a certain internuclear distance.
Predicting missing links and identifying spurious links via likelihood analysis
NASA Astrophysics Data System (ADS)
Pan, Liming; Zhou, Tao; Lü, Linyuan; Hu, Chin-Kun
2016-03-01
Real network data are often incomplete and noisy, in which case link prediction algorithms and spurious link identification algorithms can be applied. Thus far, a general method for transforming network organizing mechanisms into link prediction algorithms has been lacking. Here we use an algorithmic framework in which a network's probability is calculated according to a predefined structural Hamiltonian that takes into account the network organizing principles, and a non-observed link is scored by the conditional probability of adding the link to the observed network. Extensive numerical simulations show that the proposed algorithm has remarkably higher accuracy than the state-of-the-art methods in uncovering missing links and identifying spurious links in many complex biological and social networks. Such a method also finds applications in exploring the underlying network evolutionary mechanisms.
Predicting missing links and identifying spurious links via likelihood analysis
Pan, Liming; Zhou, Tao; Lü, Linyuan; Hu, Chin-Kun
2016-01-01
Real network data are often incomplete and noisy, in which case link prediction algorithms and spurious link identification algorithms can be applied. Thus far, a general method for transforming network organizing mechanisms into link prediction algorithms has been lacking. Here we use an algorithmic framework in which a network's probability is calculated according to a predefined structural Hamiltonian that takes into account the network organizing principles, and a non-observed link is scored by the conditional probability of adding the link to the observed network. Extensive numerical simulations show that the proposed algorithm has remarkably higher accuracy than the state-of-the-art methods in uncovering missing links and identifying spurious links in many complex biological and social networks. Such a method also finds applications in exploring the underlying network evolutionary mechanisms. PMID:26961965
Analysis of a genetically structured variance heterogeneity model using the Box-Cox transformation.
Yang, Ye; Christensen, Ole F; Sorensen, Daniel
2011-02-01
Over recent years, statistical support for the presence of genetic factors operating at the level of the environmental variance has come from fitting a genetically structured heterogeneous variance model to field or experimental data in various species. Misleading results may arise due to skewness of the marginal distribution of the data. To investigate how the scale of measurement affects inferences, the genetically structured heterogeneous variance model is extended to accommodate the family of Box-Cox transformations. Litter size data in rabbits and pigs that had previously been analysed in the untransformed scale were reanalysed in a scale equal to the mode of the marginal posterior distribution of the Box-Cox parameter. In the rabbit data, the statistical evidence for a genetic component at the level of the environmental variance is considerably weaker than that resulting from an analysis in the original metric. In the pig data, the statistical evidence is stronger, but the coefficient of correlation between additive genetic effects affecting mean and variance changes sign, compared to the results in the untransformed scale. The study confirms that inferences on variances can be strongly affected by the presence of asymmetry in the distribution of data. We recommend that to avoid one important source of spurious inferences, future work seeking support for a genetic component acting on environmental variation using a parametric approach based on normality assumptions confirms that these are met.
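The reanalysis strategy can be sketched as follows: choose the Box-Cox parameter by maximizing the profile log-likelihood under normality, then work on the transformed scale. The grid search and the synthetic right-skewed data below are illustrative assumptions; the actual litter-size data and the genetic variance model are not reproduced here.

```python
# Sketch: maximum-likelihood choice of a Box-Cox parameter by grid
# search, checking that skewness shrinks on the transformed scale.
import numpy as np

def boxcox(y, lam):
    return np.log(y) if abs(lam) < 1e-8 else (y**lam - 1.0) / lam

def profile_loglik(y, lam):
    # Profile log-likelihood of the Box-Cox model with normal errors.
    yt = boxcox(y, lam)
    return -0.5 * len(y) * np.log(yt.var()) + (lam - 1.0) * np.log(y).sum()

def skew(v):
    return float(((v - v.mean()) ** 3).mean() / v.std() ** 3)

rng = np.random.default_rng(0)
y = rng.lognormal(mean=1.0, sigma=0.6, size=500)   # right-skewed data

grid = np.linspace(-2.0, 2.0, 81)
lam_hat = grid[np.argmax([profile_loglik(y, lam) for lam in grid])]
yt = boxcox(y, lam_hat)   # analysis scale; lam_hat is near 0 here
```

For lognormal data the estimated parameter lands near zero (the log transform), which is exactly the situation in which inferences drawn on the raw scale can be distorted by skewness.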
Procop, Gary W; Taege, Alan J; Starkey, Colleen; Tungsiripat, Marisa; Warner, Diane; Schold, Jesse D; Yen-Lieberman, Belinda
2017-09-01
The processing of specimens often occurs in a central processing area within laboratories. We demonstrated that plasma centrifuged in the central laboratory but allowed to remain within the primary tube following centrifugation showed spuriously elevated HIV viral loads compared with plasma recentrifuged just prior to testing. Copyright © 2016 Elsevier Inc. All rights reserved.
The Length of a Pestle: A Class Exercise in Measurement and Statistical Analysis.
ERIC Educational Resources Information Center
O'Reilly, James E.
1986-01-01
Outlines the simple exercise of measuring the length of an object as a concrete paradigm of the entire process of making chemical measurements and treating the resulting data. Discusses the procedure, significant figures, measurement error, spurious data, rejection of results, precision and accuracy, and student responses. (TW)
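The data treatment the exercise outlines (replicate measurements, rejection of a spurious result, then summary statistics) can be sketched with a Dixon Q-test. The lengths are invented, and 0.625 is the commonly tabulated 95% Q critical value for six measurements, stated here as an assumption rather than taken from the article.

```python
# Sketch: Dixon Q-test on replicate length measurements, then the
# mean of the retained results. Values are invented.
lengths_mm = [203.1, 203.4, 203.2, 203.3, 203.2, 207.9]

data = sorted(lengths_mm)
# Test the largest value: Q = gap to its nearest neighbour / range.
q = (data[-1] - data[-2]) / (data[-1] - data[0])
if q > 0.625:            # commonly tabulated Q_crit(95%, n = 6)
    data = data[:-1]     # reject the suspect measurement

mean_mm = sum(data) / len(data)
```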
The origin of spurious solutions in computational electromagnetics
NASA Technical Reports Server (NTRS)
Jiang, Bo-Nan; Wu, Jie; Povinelli, L. A.
1995-01-01
The origin of spurious solutions in computational electromagnetics, which violate the divergence equations, is deeply rooted in a misconception about the first-order Maxwell's equations and in an incorrect derivation and use of the curl-curl equations. The divergence equations must always be included in the first-order Maxwell's equations to maintain the ellipticity of the system in the space domain and to guarantee the uniqueness of the solution and/or the accuracy of the numerical solutions. The div-curl method and the least-squares method provide rigorous derivations of the equivalent second-order Maxwell's equations and their boundary conditions. The node-based least-squares finite element method (LSFEM) is recommended for solving the first-order full Maxwell equations directly. Examples of numerical solutions by LSFEM for time-harmonic problems are given to demonstrate that the LSFEM is free of spurious solutions.
Quantifying spatial distribution of spurious mixing in ocean models.
Ilıcak, Mehmet
2016-12-01
Numerical mixing is inevitable in ocean models due to tracer advection schemes. Until now, there has been no robust way to identify the regions of spurious mixing in ocean models. We propose a new method to compute the spatial distribution of spurious diapycnal mixing in an ocean model. This new method is an extension of the available potential energy density method proposed by Winters and Barkan (2013). We test the new method in lock-exchange and baroclinic-eddies test cases, and can quantify both the amount and the location of numerical mixing. We find that high-shear areas are the main regions susceptible to numerical truncation errors. We also use the new method to quantify the numerical mixing of different horizontal momentum closures, and conclude that the Smagorinsky viscosity produces less numerical mixing than the Leith viscosity with the same non-dimensional constant.
NASA Astrophysics Data System (ADS)
Katori, Makoto
1988-12-01
A new scheme of the coherent-anomaly method (CAM) is proposed to study critical phenomena in models for which a mean-field description gives a spurious first-order phase transition. A canonical series of mean-field-type approximations is constructed so that the spurious discontinuity vanishes asymptotically as the approximate critical temperature approaches the true value. The true values of the critical exponents β and γ are related to the coherent-anomaly exponents defined among the classical approximations. The formulation is demonstrated in the two-dimensional q-state Potts models for q = 3 and 4. The result shows that the present method enables us to estimate the critical exponents with high accuracy by using the data of the cluster-mean-field approximations.
Sources of spurious force oscillations from an immersed boundary method for moving-body problems
NASA Astrophysics Data System (ADS)
Lee, Jongho; Kim, Jungwoo; Choi, Haecheon; Yang, Kyung-Soo
2011-04-01
When a discrete-forcing immersed boundary method is applied to moving-body problems, it produces spurious force oscillations on a solid body. In the present study, we identify two sources of these force oscillations. One source is the spatial discontinuity in the pressure across the immersed boundary when a grid point located inside a solid body becomes a fluid point due to the body motion. The addition of a mass source/sink together with momentum forcing, as proposed by Kim et al. [J. Kim, D. Kim, H. Choi, An immersed-boundary finite volume method for simulations of flow in complex geometries, Journal of Computational Physics 171 (2001) 132-150], reduces the spurious force oscillations by alleviating this pressure discontinuity. The other source is the temporal discontinuity in the velocity at grid points where fluid becomes solid due to the body motion. The magnitude of the velocity discontinuity decreases with decreasing grid spacing near the immersed boundary. Four moving-body problems are simulated by varying the grid spacing at a fixed computational time step and at a constant CFL number, respectively. It is found that the spurious force oscillations decrease with decreasing grid spacing and increasing computational time step size, but they depend more on the grid spacing than on the time step size.
The Human Transcript Database: A Catalogue of Full Length cDNA Inserts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bouck, John; McLeod, Michael; Worley, Kim
1999-09-10
The BCM Search Launcher provided improved access to web-based sequence analysis services during the granting period and beyond. The Search Launcher web site grouped analysis procedures by function and provided default parameters that gave reasonable search results for most applications. For instance, most queries were automatically masked for repeat sequences prior to sequence database searches to avoid spurious matches. In addition to the web-based access and the arrangements that made the functions easier to use, the BCM Search Launcher provided unique value-added applications like the BEAUTY sequence database search tool, which combined information about protein domains with sequence database search results to give an enhanced, more complete picture of the reliability and relative value of the information reported. This enhanced search tool made evaluating search results more straightforward and consistent. Some of the favorite features of the web site are the sequence utilities and the batch client functionality that allows processing of multiple samples from the command line interface. One measure of the success of the BCM Search Launcher is the number of sites that have adopted the models first developed on the site. The graphic display on the BLAST search from the NCBI web site is one such outgrowth, as is the display of protein domain search results within BLAST search results, and the design of the Biology Workbench application. The logs of usage and comments from users confirm the great utility of this resource.
Tracing a roadmap for vitamin B₁₂ testing using the health technology assessment approach.
Ferraro, Simona; Mozzi, Roberta; Panteghini, Mauro
2014-06-01
In our hospital, we are currently working to manage the appropriateness of vitamin B₁₂ testing. Unfortunately, the classic evidence-based approach is unhelpful in this process and meta-analyzing data on the accuracy of this marker for cobalamin deficiency detection is misleading due to the lack of reference diagnostic methods. The approach currently proposed by the Health Technology Assessment (HTA) enables us to tackle the issue of B₁₂ requests as a "healthcare" problem by considering the position of stakeholders involved in ordering, performing, interpreting the test, and receiving its results. Clinical expectations, methodological issues, and ethical aspects concerning the performance of the test can aid us in providing more guidance on the use of this marker. By building such structured information, hemodialysis patients and pregnant women have emerged as those groups preferentially requiring B₁₂ testing, as it may potentially improve the clinical outcome. To avoid misinterpretation of B₁₂ results more care should be taken in considering its biochemical and biological features, as well as the analytical issues. Spurious values obtained by current automated immunoassays may reflect suboptimal pre-analytical steps as well as known interfering conditions. Furthermore, the harmonization of results by available methods is still a far-reaching goal and the approach to interpret an individual's results should be improved. Tracing a roadmap for B₁₂ testing by exploiting the HTA model to balance the stakeholders' claims and maximizing the patient's outcome may help to manage the marker demand.
Utilizing soil polypedons to improve model performance for digital soil mapping
USDA-ARS?s Scientific Manuscript database
Most digital soil mapping approaches that use point data to develop relationships with covariate data intersect sample locations with one raster pixel regardless of pixel size. Resulting models are subject to spurious values in covariate data which may limit model performance. An alternative approac...
Digital Correlation In Laser-Speckle Velocimetry
NASA Technical Reports Server (NTRS)
Gilbert, John A.; Mathys, Donald R.
1992-01-01
Periodic recording helps to eliminate spurious results. An improved digital-correlation process extracts the velocity field of a two-dimensional flow from laser-speckle images of seed particles distributed sparsely in the flow. The method, which involves digital correlation of images recorded at unequal intervals, is completely automated and has the potential to be the fastest yet.
Microresonator electrode design
Olsson, III, Roy H.; Wojciechowski, Kenneth; Branch, Darren W.
2016-05-10
A microresonator with an input electrode and an output electrode patterned thereon is described. The input electrode includes a series of stubs that are configured to isolate acoustic waves, such that the waves are not reflected into the microresonator. This design reduces the spurious modes of the microresonator.
Ordered delinquency: the "effects" of birth order on delinquency.
Cundiff, Patrick R
2013-08-01
Juvenile delinquency has long been associated with birth order in popular culture. While images of the middle child acting out for attention or the rebellious youngest child readily spring to mind, little research has attempted to explain why. Drawing from Adlerian birth order theory and Sulloway's born-to-rebel hypothesis, I examine the relationship between birth order and a variety of delinquent outcomes during adolescence. Following some recent research on birth order and intelligence, I use new methods that allow for the examination of between-individual and within-family differences to better address the potential spurious relationship. My findings suggest that contrary to popular belief, the relationship between birth order and delinquency is spurious. Specifically, I find that birth order effects on delinquency are spurious and largely products of the analytic methods used in previous tests of the relationship. The implications of this finding are discussed.
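A toy simulation (not the article's analysis) shows how pooling across families can manufacture a birth-order "effect": here delinquency depends only on family size, but later-borns exist only in larger families, so the between-individual comparison looks predictive while the within-family gap vanishes. All numbers are invented.

```python
# Toy confound: family size is the only real cause of the score, yet
# pooling families makes birth order appear predictive.
import random

random.seed(1)
rows = []  # (family_size, birth_order, delinquency_score)
for _ in range(4000):
    size = random.choice([1, 2, 3, 4])
    for order in range(1, size + 1):
        score = size + random.gauss(0, 1)  # depends on family size only
        rows.append((size, order, score))

def mean(xs):
    return sum(xs) / len(xs)

# Between-individual comparison: third-or-later borns vs first-borns.
pooled_gap = (mean([s for sz, o, s in rows if o >= 3])
              - mean([s for sz, o, s in rows if o == 1]))

# Within families of one fixed size, the "effect" disappears.
within_gap = (mean([s for sz, o, s in rows if sz == 4 and o >= 3])
              - mean([s for sz, o, s in rows if sz == 4 and o == 1]))
```

The pooled gap is large while the within-family gap hovers near zero, which is the pattern a within-family design is built to expose.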
The consentaneous model of the financial markets exhibiting spurious nature of long-range memory
NASA Astrophysics Data System (ADS)
Gontis, V.; Kononovicius, A.
2018-09-01
It is widely accepted that there is strong persistence in the volatility of financial time series. The origin of the observed persistence, or long-range memory, is still an open problem, as the observed phenomenon could be a spurious effect. Earlier we proposed the consentaneous model of the financial markets based on non-linear stochastic differential equations. The consentaneous model successfully reproduces empirical probability and power spectral densities of volatility. This approach is qualitatively different from models built using fractional Brownian motion. In this contribution we investigate burst and inter-burst duration statistics of volatility in the financial markets employing the consentaneous model. Our analysis provides evidence that the empirical statistical properties of burst and inter-burst duration can be explained by non-linear stochastic differential equations driving the volatility in the financial markets. This serves as a strong argument that long-range memory in finance can be of a spurious nature.
Ordered Delinquency: The “Effects” of Birth Order On Delinquency
Cundiff, Patrick R.
2014-01-01
Juvenile delinquency has long been associated with birth order in popular culture. While images of the middle child acting out for attention or the rebellious youngest child readily spring to mind, little research has attempted to explain why. Drawing from Adlerian birth order theory and Sulloway's born-to-rebel hypothesis, I examine the relationship between birth order and a variety of delinquent outcomes during adolescence. Following some recent research on birth order and intelligence, I use new methods that allow for the examination of both between-individual and within-family differences to better address the potential spurious relationship. My findings suggest that contrary to popular belief, the relationship between birth order and delinquency is spurious. Specifically, I find that birth order effects on delinquency are spurious and largely products of the analytic methods used in previous tests of the relationship. The implications of this finding are discussed. PMID:23719623
Enhancement of a 2D front-tracking algorithm with a non-uniform distribution of Lagrangian markers
NASA Astrophysics Data System (ADS)
Febres, Mijail; Legendre, Dominique
2018-04-01
The 2D front-tracking method is enhanced to control the development of spurious velocities for non-uniform distributions of markers. The hybrid formulation of Shin et al. (2005) [7] is considered. A new tangent calculation is proposed for the tension force at the markers. A new reconstruction method is also proposed to manage non-uniform distributions of markers. We show that for both the static and the translating spherical drop test cases the spurious currents are reduced to machine precision. We also show that the ratio of the Lagrangian grid size Δs over the Eulerian grid size Δx has to satisfy Δs / Δx > 0.2 to ensure such a low level of spurious velocities. The method is found to provide very good agreement with benchmark test cases from the literature.
Richter, S. Helene; Garner, Joseph P.; Zipser, Benjamin; Lewejohann, Lars; Sachser, Norbert; Touma, Chadi; Schindler, Britta; Chourbaji, Sabine; Brandwein, Christiane; Gass, Peter; van Stipdonk, Niek; van der Harst, Johanneke; Spruijt, Berry; Võikar, Vootele; Wolfer, David P.; Würbel, Hanno
2011-01-01
In animal experiments, animals, husbandry and test procedures are traditionally standardized to maximize test sensitivity and minimize animal use, assuming that this will also guarantee reproducibility. However, by reducing within-experiment variation, standardization may limit inference to the specific experimental conditions. Indeed, we have recently shown in mice that standardization may generate spurious results in behavioral tests, accounting for poor reproducibility, and that this can be avoided by population heterogenization through systematic variation of experimental conditions. Here, we examined whether a simple form of heterogenization effectively improves reproducibility of test results in a multi-laboratory situation. Each of six laboratories independently ordered 64 female mice of two inbred strains (C57BL/6NCrl, DBA/2NCrl) and examined them for strain differences in five commonly used behavioral tests under two different experimental designs. In the standardized design, experimental conditions were standardized as much as possible in each laboratory, while they were systematically varied with respect to the animals' test age and cage enrichment in the heterogenized design. Although heterogenization tended to improve reproducibility by increasing within-experiment variation relative to between-experiment variation, the effect was too weak to account for the large variation between laboratories. However, our findings confirm the potential of systematic heterogenization for improving reproducibility of animal experiments and highlight the need for effective and practicable heterogenization strategies. PMID:21305027
A critique of the cross-lagged panel model.
Hamaker, Ellen L; Kuiper, Rebecca M; Grasman, Raoul P P P
2015-03-01
The cross-lagged panel model (CLPM) is believed by many to overcome the problems associated with the use of cross-lagged correlations as a way to study causal influences in longitudinal panel data. The current article, however, shows that if stability of constructs is to some extent of a trait-like, time-invariant nature, the autoregressive relationships of the CLPM fail to adequately account for this. As a result, the lagged parameters that are obtained with the CLPM do not represent the actual within-person relationships over time, and this may lead to erroneous conclusions regarding the presence, predominance, and sign of causal influences. In this article we present an alternative model that separates the within-person process from stable between-person differences through the inclusion of random intercepts, and we discuss how this model is related to existing structural equation models that include cross-lagged relationships. We derive the analytical relationship between the cross-lagged parameters from the CLPM and the alternative model, and use simulations to demonstrate the spurious results that may arise when using the CLPM to analyze data that include stable, trait-like individual differences. We also present a modeling strategy to avoid this pitfall and illustrate this using an empirical data set. The implications for both existing and future cross-lagged panel research are discussed. (c) 2015 APA, all rights reserved.
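The core point can be reproduced in a few lines (a hedged sketch, not the authors' derivation): when two measures share a correlated stable trait and have no causal link, regressing y at wave 2 on x and y at wave 1, as the CLPM does, still yields a clearly nonzero cross-lagged coefficient. The trait structure and noise levels below are invented.

```python
# Sketch: a spurious cross-lagged coefficient produced by a shared
# stable trait, with zero true causal effect by construction.
import numpy as np

rng = np.random.default_rng(42)
n = 20000
trait = rng.normal(size=n)                          # shared stable trait
x = trait[:, None] + 0.5 * rng.normal(size=(n, 2))  # x at waves 1, 2
y = trait[:, None] + 0.5 * rng.normal(size=(n, 2))  # y at waves 1, 2

# CLPM-style regression: y(wave 2) on x(wave 1) and y(wave 1).
X = np.column_stack([np.ones(n), x[:, 0], y[:, 0]])
beta, *_ = np.linalg.lstsq(X, y[:, 1], rcond=None)
cross_lagged = beta[1]   # clearly nonzero despite no causal effect
```

Adding a per-person random intercept, as the article proposes, absorbs the trait and drives this coefficient toward its true value of zero.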
College football, elections, and false-positive results in observational research.
Fowler, Anthony; Montagnes, B Pablo
2015-11-10
A recent, widely cited study [Healy AJ, Malhotra N, Mo CH (2010) Proc Natl Acad Sci USA 107(29):12804-12809] finds that college football games influence voting behavior. Victories within 2 weeks of an election reportedly increase the success of the incumbent party in presidential, senatorial, and gubernatorial elections in the home county of the team. We reassess the evidence and conclude that there is likely no such effect, despite the fact that Healy et al. followed the best practices in social science and used a credible research design. Multiple independent sources of evidence suggest that the original finding was spurious, reflecting bad luck for researchers rather than a shortcoming of American voters. We fail to estimate the same effect when we leverage situations where multiple elections with differing incumbent parties occur in the same county and year. We also find that the purported effect of college football games is stronger in counties where people are less interested in college football, just as strong when the incumbent candidate does not run for reelection, and just as strong in other parts of the state outside the home county of the team. Lastly, we detect no effect of National Football League games on elections, despite their greater popularity. We conclude with recommendations for evaluating surprising research findings and avoiding similar false-positive results.
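The multiple-comparisons intuition behind such false positives can be sketched directly: testing many truly null "effects" at the 5% level reliably produces some significant results by chance. The data below are purely synthetic and stand in for no particular election or football variable.

```python
# Sketch: false positives under the null. Each "test" compares two
# groups drawn from the same distribution, so every rejection is a
# fluke; roughly 5% of them come out "significant" anyway.
import random

random.seed(0)
n_tests, n_obs, hits = 200, 400, 0
for _ in range(n_tests):
    a = [random.gauss(0, 1) for _ in range(n_obs)]
    b = [random.gauss(0, 1) for _ in range(n_obs)]
    diff = sum(a) / n_obs - sum(b) / n_obs
    se = (2 / n_obs) ** 0.5      # known unit variance in both groups
    if abs(diff / se) > 1.96:    # nominal 5% two-sided test
        hits += 1
```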
Aqueous phase hydration and hydrate acidity of perfluoroalkyl and n:2 fluorotelomer aldehydes.
Rayne, Sierra; Forest, Kaya
2016-01-01
The SPARC software program and comparative density functional theory (DFT) calculations were used to investigate the aqueous phase hydration equilibrium constants (Khyd) of perfluoroalkyl aldehydes (PFAlds) and n:2 fluorotelomer aldehydes (FTAlds). Both classes are degradation products of known industrial compounds and environmental contaminants such as fluorotelomer alcohols, iodides, acrylates, phosphate esters, and other derivatives, as well as hydrofluorocarbons and hydrochlorofluorocarbons. Prior studies have generally failed to consider the hydration, and subsequent potential hydrate acidity, of these compounds, resulting in incomplete and erroneous predictions as to their environmental behavior. In the current work, DFT calculations suggest that all PFAlds will be dominantly present as the hydrated form in aqueous solution. Both SPARC and DFT calculations suggest that FTAlds will not likely be substantially hydrated in aquatic systems or in vivo. PFAld hydrates are expected to have pKa values in the range of phenols (ca. 9 to 10), whereas n:2 FTAld hydrates are expected to have pKa values ca. 2 to 3 units higher (ca. 12 to 13). In order to avoid spurious modeling predictions and a fundamental misunderstanding of their fate, the molecular and/or dissociated hydrate forms of PFAlds and FTAlds need to be explicitly considered in environmental, toxicological, and waste treatment investigations. The results of the current study will facilitate a more complete examination of the environmental fate of PFAlds and FTAlds.
Identification of true EST alignments for recognising transcribed regions.
Ma, Chuang; Wang, Jia; Li, Lun; Duan, Mo-Jie; Zhou, Yan-Hong
2011-01-01
Transcribed regions can be determined by aligning Expressed Sequence Tags (ESTs) with genome sequences. The kernel of this strategy is to effectively distinguish true EST alignments from spurious ones. In this study, three measures including Direction Check, Identity Check and Terminal Check were introduced to more effectively eliminate spurious EST alignments. On the basis of these introduced measures and other widely used measures, a computational tool, named ESTCleanser, has been developed to identify true EST alignments for obtaining reliable transcribed regions. The performance of ESTCleanser has been evaluated on the well-annotated human ENCyclopedia of DNA Elements (ENCODE) regions using human ESTs in the dbEST database. The evaluation results show that the accuracy of ESTCleanser at exon and intron levels is more remarkably enhanced than that of UCSC-spliced EST alignments. This work would be helpful to EST-based researches on finding new genes, complementing genome annotation, recognising alternative splicing events and Single Nucleotide Polymorphisms (SNPs), etc.
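A hedged sketch of an Identity-Check-style filter (the alignments and the 95% cutoff are illustrative assumptions, not ESTCleanser's actual parameters): keep an EST alignment only when the fraction of matching bases over the aligned region exceeds a threshold.

```python
# Sketch: retain alignments whose base-level identity clears a cutoff;
# noisy alignments below it are treated as spurious.
def identity(query, target):
    matches = sum(q == t for q, t in zip(query, target))
    return matches / len(query)

alignments = {
    "est1": ("ACGTACGTAC", "ACGTACGTAC"),  # perfect match
    "est2": ("ACGTACGTAC", "ACGAACTTAA"),  # noisy, likely spurious
}
true_alignments = {
    name for name, (q, t) in alignments.items()
    if identity(q, t) >= 0.95
}
```

In the tool described above this check is combined with direction and terminal checks, so identity alone is necessary but not sufficient.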
Opportunities for shear energy scaling in bulk acoustic wave resonators.
Jose, Sumy; Hueting, Raymond J E
2014-10-01
An important energy loss contribution in bulk acoustic wave resonators is formed by so-called shear waves, which are transversal waves that propagate vertically through the devices with a horizontal motion. In this work, we report for the first time scaling of the shear-confined spots, i.e., spots containing a high concentration of shear wave displacement, controlled by the frame region width at the edge of the resonator. We also demonstrate a novel methodology to arrive at an optimum frame region width for spurious mode suppression and shear wave confinement. This methodology makes use of dispersion curves obtained from finite-element method (FEM) eigenfrequency simulations for arriving at an optimum frame region width. The frame region optimization is demonstrated for solidly mounted resonators employing several shear wave optimized reflector stacks. Finally, the FEM simulation results are compared with measurements for resonators with Ta2O5/SiO2 stacks showing suppression of the spurious modes.
A cost-effective strategy for nonoscillatory convection without clipping
NASA Technical Reports Server (NTRS)
Leonard, B. P.; Niknafs, H. S.
1990-01-01
Clipping of narrow extrema and distortion of smooth profiles is a well-known problem associated with so-called high-resolution nonoscillatory convection schemes. A strategy is presented for accurately simulating highly convective flows containing discontinuities such as density fronts or shock waves, without distorting smooth profiles or clipping narrow local extrema. The convection algorithm is based on non-artificially diffusive third-order upwinding in smooth regions, with automatic adaptive stencil expansion to (in principle, arbitrarily) higher-order upwinding locally, in regions of rapidly changing gradients. This is highly cost-effective because the wider stencil is used only where needed: in isolated narrow regions. A recently developed universal limiter assures sharp monotonic resolution of discontinuities without introducing artificial diffusion or numerical compression. An adaptive discriminator is constructed to distinguish between spurious overshoots and physical peaks; this automatically relaxes the limiter near local turning points, thereby avoiding loss of resolution in narrow extrema. Examples are given for one-dimensional pure convection of scalar profiles at constant velocity.
Multitip scanning bio-Kelvin probe
NASA Astrophysics Data System (ADS)
Baikie, I. D.; Smith, P. J. S.; Porterfield, D. M.; Estrup, P. J.
1999-03-01
We have developed a novel multitip scanning Kelvin probe which can measure changes in biological surface potential ΔVs to within 2 mV and, quasi-simultaneously, monitor displacement to <1 μm. The control and measurement subcomponents are PC based and incorporate a flexible user interface permitting software control of each individual tip, measurement, and scan parameters. We review the mode of operation and design features of the scanning bio-Kelvin probe including tip steering, signal processing, tip calibration, and novel tip tracking/dithering routines. This system uniquely offers both tip-to-sample spacing control (which is essential to avoid spurious changes in ΔVs due to variations in mean spacing) and a dithering routine to maintain tip orientation to the biological specimen, irrespective of the latter's movement. These features permit long term (>48 h) "active" tracking of the displacement and biopotentials developed along and around a plant shoot in response to an environmental stimulus, e.g., differential illumination (phototropism) or changes in orientation (gravitropism).
Analyses of transients for an 800 MW-class accelerator driven transmuter with fertile-free fuels
NASA Astrophysics Data System (ADS)
Maschek, Werner; Suzuki, Tohru; Chen, Xue-Nong; Rineiski, Andrei; Matzerath Boccaccini, Claudia; Mori, Magnus; Morita, Koji
2006-06-01
In the FUTURE Program, the development and application of fertile-free fuels for Accelerator Driven Transmuters (ADTs) has been advanced. To assess the reactor performance and safety behavior of an ADT with so-called dedicated fuels, various transient cases for an 800 MW-class Pb/Bi-cooled ADT were investigated using the SIMMER-III code. The FUTURE ADT also served as vehicle to develop and test ideas on a safety concept for such transmuters. After an extensive ranking procedure, a CERCER fuel with an MgO matrix and a CERMET fuel with a Mo-92 matrix were chosen. The transient scenarios shown here are: spurious beam trip (BT), unprotected loss of flow (ULOF) and unprotected blockage accident (UBA). Since the release of fission gas and helium after cladding failure could induce a significant positive reactivity, the gas-blowdown was investigated for the transient scenarios. The present analyses showed that power excursions could be avoided by the fuel sweep-out from the core under severe accident conditions.
The Inevitability of Assessing Reasons in Debates about Conscientious Objection in Medicine.
Card, Robert F
2017-01-01
This article first critically reviews the major philosophical positions in the literature on conscientious objection and finds that they possess significant flaws. A substantial number of these problems stem from the fact that these views fail to assess the reasons offered by medical professionals in support of their objections. This observation is used to motivate the reasonability view, one part of which states: A practitioner who lodges a conscientious refusal must publicly state his or her objection as well as the reasoned basis for the objection and have these subjected to critical evaluation before a conscientious exemption can be granted (the reason-giving requirement). It is then argued that when defenders of the other philosophical views attempt to avoid granting an accommodation to spurious objections based on discrimination, empirically mistaken beliefs, or other unjustified biases, they are implicitly committed to the reason-giving requirement. This article concludes that based on these considerations, a reason-giving position such as the reasonability view possesses a decisive advantage in this debate.
Immigrant community integration in world cities
Lamanna, Fabio; Lenormand, Maxime; Salas-Olmedo, María Henar; Romanillos, Gustavo; Gonçalves, Bruno
2018-01-01
As a consequence of the accelerated globalization process, today major cities all over the world are characterized by an increasing multiculturalism. The integration of immigrant communities may be affected by social polarization and spatial segregation. How are these dynamics evolving over time? To what extent are the different policies launched to tackle these problems working? These are critical questions traditionally addressed by studies based on surveys and census data. Such sources are reliable for avoiding spurious biases, but data collection is labor-intensive and rather expensive. Here, we conduct a comprehensive study on immigrant integration in 53 world cities by introducing an innovative approach: an analysis of the spatio-temporal communication patterns of immigrant and local communities based on language detection in Twitter and on novel metrics of spatial integration. We quantify the Power of Integration of cities (their capacity to spatially integrate diverse cultures) and characterize the relations between different cultures when acting as hosts or immigrants. PMID:29538383
Consistent multiphase-field theory for interface driven multidomain dynamics
NASA Astrophysics Data System (ADS)
Tóth, Gyula I.; Pusztai, Tamás; Gránásy, László
2015-11-01
We present a multiphase-field theory for describing pattern formation in multidomain and/or multicomponent systems. The construction of the free energy functional and the dynamic equations is based on criteria that ensure mathematical and physical consistency. We first analyze previous multiphase-field theories and identify their advantageous and disadvantageous features. On the basis of this analysis, we introduce a way of constructing the free energy surface and derive a generalized multiphase description for arbitrary number of phases (or domains). The presented approach retains the variational formalism, reduces (or extends) naturally to lower (or higher) number of fields on the level of both the free energy functional and the dynamic equations, enables the use of arbitrary pairwise equilibrium interfacial properties, penalizes multiple junctions increasingly with the number of phases, ensures non-negative entropy production and the convergence of the dynamic solutions to the equilibrium solutions, and avoids the appearance of spurious phases on binary interfaces. The approach is tested for multicomponent phase separation and grain coarsening.
Siblings and Gender Differences in African-American College Attendance
ERIC Educational Resources Information Center
Loury, Linda Datcher
2004-01-01
Differences in college enrollment growth rates for African-American men and women have resulted in a large gender gap in college attendance. This paper shows that, controlling for spurious correlation with unobserved variables, having more college-educated older siblings raises rather than lowers the likelihood of college attendance for…
Elementary dispersion analysis of some mimetic discretizations on triangular C-grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Korn, P., E-mail: peter.korn@mpimet.mpg.de; Danilov, S.; A.M. Obukhov Institute of Atmospheric Physics, Moscow
2017-02-01
Spurious modes supported by triangular C-grids limit their application for modeling large-scale atmospheric and oceanic flows. Their behavior can be modified within a mimetic approach that generalizes the scalar product underlying the triangular C-grid discretization. The mimetic approach provides a discrete continuity equation which operates on an averaged combination of normal edge velocities instead of normal edge velocities proper. An elementary analysis of the wave dispersion of the new discretization for Poincaré, Rossby and Kelvin waves shows that, although spurious Poincaré modes are preserved, their frequency tends to zero in the limit of small wavenumbers, which removes the divergence noise in this limit. However, the frequencies of spurious and physical modes become close on shorter scales, indicating that spurious modes can be excited unless high-frequency short-scale motions are effectively filtered in numerical codes. We argue that filtering by viscous dissipation is more efficient in the mimetic approach than in the standard C-grid discretization. Lumping of mass matrices appearing with the velocity time derivative in the mimetic discretization only slightly reduces the accuracy of the wave dispersion and can be used in practice. Thus, the mimetic approach cures some difficulties of the traditional triangular C-grid discretization but may still need appropriately tuned viscosity to filter small scales and high frequencies in solutions of full primitive equations when these are excited by nonlinear dynamics.
Segmentation of time series with long-range fractal correlations
Bernaola-Galván, P.; Oliver, J.L.; Hackenberg, M.; Coronado, A.V.; Ivanov, P.Ch.; Carpena, P.
2012-01-01
Segmentation is a standard method of data analysis to identify change-points dividing a nonstationary time series into homogeneous segments. However, for long-range fractal correlated series, most of the segmentation techniques detect spurious change-points which are simply due to the heterogeneities induced by the correlations and not to real nonstationarities. To avoid this oversegmentation, we present a segmentation algorithm which takes as a reference for homogeneity, instead of a random i.i.d. series, a correlated series modeled by a fractional noise with the same degree of correlations as the series to be segmented. We apply our algorithm to artificial series with long-range correlations and show that it systematically detects only the change-points produced by real nonstationarities and not those created by the correlations of the signal. Further, we apply the method to the sequence of the long arm of human chromosome 21, which is known to have long-range fractal correlations. We obtain only three segments that clearly correspond to the three regions of different G + C composition revealed by means of a multi-scale wavelet plot. Similar results have been obtained when segmenting all human chromosome sequences, showing the existence of previously unknown huge compositional superstructures in the human genome. PMID:23645997
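The oversegmentation problem described above is easy to reproduce numerically. The sketch below is illustrative only: it uses Fourier filtering to generate a long-range correlated surrogate (standing in for the paper's fractional-noise reference model), and a simple maximum two-sample t-statistic scan as the segmentation criterion; series length, spectral exponent, and scan step are assumed values, not the paper's.

```python
import numpy as np

def fractional_noise(n, beta, rng):
    """Surrogate series with power spectrum ~ 1/f**beta, generated by
    Fourier filtering (a stand-in for fractional noise of matched
    correlation strength)."""
    freqs = np.fft.rfftfreq(n)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-beta / 2.0)
    spectrum = amp * np.exp(1j * rng.uniform(0, 2 * np.pi, len(freqs)))
    x = np.fft.irfft(spectrum, n)
    return (x - x.mean()) / x.std()

def max_split_t(x, step=50, margin=100):
    """Largest two-sample t statistic over candidate change points --
    the quantity a simple mean-based segmentation algorithm thresholds."""
    best = 0.0
    for i in range(margin, len(x) - margin, step):
        a, b = x[:i], x[i:]
        se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
        best = max(best, abs(a.mean() - b.mean()) / se)
    return best

rng = np.random.default_rng(0)
n = 4096
iid = rng.standard_normal(n)
corr = fractional_noise(n, beta=1.0, rng=rng)

# The correlated series shows far larger change-point-like excursions
# even though it is a single homogeneous process, so a threshold
# calibrated on i.i.d. noise would over-segment it.
print(max_split_t(iid), max_split_t(corr))
```

This is why the algorithm in the abstract takes a correlated series, rather than an i.i.d. one, as its reference for homogeneity.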
Numerical study of the effects of icing on viscous flow over wings
NASA Technical Reports Server (NTRS)
Sankar, L. N.
1994-01-01
An improved hybrid method for computing unsteady compressible viscous flows is presented. This method divides the computational domain into two zones. In the outer zone, the unsteady full-potential equation (FPE) is solved. In the inner zone, the Navier-Stokes equations are solved using a diagonal form of an alternating-direction implicit (ADI) approximate factorization procedure. The two zones are tightly coupled so that steady and unsteady flows may be efficiently solved. Characteristic-based viscous/inviscid interface boundary conditions are employed to avoid spurious reflections at that interface. The resulting CPU times are less than 60 percent of that required for a full-blown Navier-Stokes analysis for steady flow applications and about 60 percent of the Navier-Stokes CPU times for unsteady flows in non-vector processing machines. Applications of the method are presented for a rectangular NACA 0012 wing in low subsonic steady flow at moderate and high angles of attack, and for an F-5 wing in steady and unsteady subsonic and transonic flows. Steady surface pressures are in very good agreement with experimental data and are essentially identical to Navier-Stokes predictions. Density contours show that shocks cross the viscous/inviscid interface smoothly, so that the accuracy of full Navier-Stokes equations can be retained with a significant savings in computational time.
Weese, J Scott; Jalali, Mohammad
2014-09-30
Evaluation of factors that might impact microbiota assessment is important to avoid spurious results, particularly in field and multicenter studies where sample collection may occur distant from the laboratory. This study evaluated the impact of refrigeration on next generation sequence-based assessment of the canine and feline fecal microbiota. Fecal samples were collected from seven dogs and ten cats, and analysed at baseline and after 3, 7, 10 and 14 days of storage at 4°C. There were no differences in community membership or population structure between timepoints for either dogs or cats, nor were there any differences in richness, diversity and evenness. There were few differences in relative abundance of phyla or predominant genera, with the only differences being significant increases in Actinobacteria between Days 0-14 (P = 0.0184) and 1-14 (P = 0.0182) for canine samples, and a decrease in Erysipelotrichaceae incertae sedis between Day 0 and Day 7 (median 4.9 vs 2.2%, P = 0.046) in feline samples. Storage for at least 14 days at 4°C has limited impact on culture-independent assessment of the canine and feline fecal microbiota, although changes in some individual groups may occur.
Some Aspects of Essentially Nonoscillatory (ENO) Formulations for the Euler Equations, Part 3
NASA Technical Reports Server (NTRS)
Chakravarthy, Sukumar R.
1990-01-01
An essentially nonoscillatory (ENO) formulation is described for hyperbolic systems of conservation laws. ENO approaches are based on smart interpolation to avoid spurious numerical oscillations. ENO schemes are a superset of Total Variation Diminishing (TVD) schemes. In the recent past, TVD formulations were used to construct shock capturing finite difference methods. At extremum points of the solution, TVD schemes automatically reduce to being first-order accurate discretizations locally, while away from extrema they can be constructed to be of higher order accuracy. The new framework helps construct essentially non-oscillatory finite difference methods without recourse to local reductions of accuracy to first order. Thus arbitrarily high orders of accuracy can be obtained. The basic general ideas of the new approach can be specialized in several ways and one specific implementation is described based on: (1) the integral form of the conservation laws; (2) reconstruction based on the primitive functions; (3) extension to multiple dimensions in a tensor product fashion; and (4) Runge-Kutta time integration. The resulting method is fourth-order accurate in time and space and is applicable to uniform Cartesian grids. The construction of such schemes for scalar equations and systems in one and two space dimensions is described along with several examples which illustrate interesting aspects of the new approach.
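The "smart interpolation" at the heart of ENO schemes can be sketched in one dimension: grow the stencil point by point toward the side with the smaller Newton divided difference, so the stencil avoids crossing a discontinuity. This is a minimal illustration of stencil selection only, not the fourth-order finite-volume scheme with Runge-Kutta time integration described in the abstract; the grid and step function below are assumed test data.

```python
import numpy as np

def eno_interpolate(x, f, xi, order=3):
    """ENO interpolation sketch: adaptively choose the smoothest
    stencil containing xi via Newton divided differences."""
    def dd(a, b):
        # highest-order Newton divided difference on stencil x[a..b]
        xs, c = x[a:b + 1], f[a:b + 1].astype(float)
        for k in range(1, len(xs)):
            c = (c[1:] - c[:-1]) / (xs[k:] - xs[:-k])
        return abs(c[0])
    left = np.searchsorted(x, xi) - 1   # start from the enclosing cell
    right = left + 1
    for _ in range(order - 1):
        if left == 0:
            right += 1                  # cannot grow left
        elif right == len(x) - 1 or dd(left - 1, right) <= dd(left, right + 1):
            left -= 1                   # left extension is smoother
        else:
            right += 1
    return np.polyval(np.polyfit(x[left:right + 1], f[left:right + 1],
                                 right - left), xi)

x = np.linspace(0.0, 1.0, 11)
f = (x > 0.55).astype(float)            # step between x = 0.5 and x = 0.6
eno = eno_interpolate(x, f, 0.45)       # stencil stays on the smooth side
fixed = np.polyval(np.polyfit(x[3:7], f[3:7], 3), 0.45)  # fixed stencil
print(eno, fixed)   # ENO is ~0; the fixed stencil crosses the jump and
                    # produces a spurious oscillation
```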
Methodological Caveats in the Detection of Coordinated Replay between Place Cells and Grid Cells.
Trimper, John B; Trettel, Sean G; Hwaun, Ernie; Colgin, Laura Lee
2017-01-01
At rest, hippocampal "place cells," neurons with receptive fields corresponding to specific spatial locations, reactivate in a manner that reflects recently traveled trajectories. These "replay" events have been proposed as a mechanism underlying memory consolidation, or the transfer of a memory representation from the hippocampus to neocortical regions associated with the original sensory experience. Accordingly, it has been hypothesized that hippocampal replay of a particular experience should be accompanied by simultaneous reactivation of corresponding representations in the neocortex and in the entorhinal cortex, the primary interface between the hippocampus and the neocortex. Recent studies have reported that coordinated replay may occur between hippocampal place cells and medial entorhinal cortex grid cells, cells with multiple spatial receptive fields. Assessing replay in grid cells is problematic, however, as the cells exhibit regularly spaced spatial receptive fields in all environments and, therefore, coordinated replay between place cells and grid cells may be detected by chance. In the present report, we adapted analytical approaches utilized in recent studies of grid cell and place cell replay to determine the extent to which coordinated replay is spuriously detected between grid cells and place cells recorded from separate rats. For a subset of the employed analytical methods, coordinated replay was detected spuriously in a significant proportion of cases in which place cell replay events were randomly matched with grid cell firing epochs of equal duration. More rigorous replay evaluation procedures and minimum spike count requirements greatly reduced the amount of spurious findings. These results provide insights into aspects of place cell and grid cell activity during rest that contribute to false detection of coordinated replay. 
The results further emphasize the need for careful controls and rigorous methods when testing the hypothesis that place cells and grid cells exhibit coordinated replay.
Instability in strongly magnetized accretion discs: a global perspective
NASA Astrophysics Data System (ADS)
Das, Upasana; Begelman, Mitchell C.; Lesur, Geoffroy
2018-01-01
We examine the properties of strongly magnetized accretion discs in a global framework, with particular focus on the evolution of magnetohydrodynamic instabilities such as the magnetorotational instability (MRI). Work by Pessah & Psaltis showed that MRI is stabilized beyond a critical toroidal field in compressible, differentially rotating flows, and also reported the appearance of two new instabilities beyond this field. Their results stemmed from considering geometric curvature effects due to the suprathermal background toroidal field, which had been previously ignored in weak-field studies. However, their calculations were performed under the local approximation, which poses the danger of introducing spurious behaviour due to the introduction of global geometric terms in an otherwise local framework. In order to avoid this, we perform a global eigenvalue analysis of the linearized MHD equations in cylindrical geometry. We confirm that MRI indeed tends to be highly suppressed when the background toroidal field attains the Pessah-Psaltis limit. We also observe the appearance of two new instabilities that emerge in the presence of highly suprathermal toroidal fields. These results were additionally verified using numerical simulations in PLUTO. There are, however, certain differences between the local and global results, especially in the vertical wavenumber occupancies of the various instabilities, which we discuss in detail. We also study the global eigenfunctions of the most unstable modes in the suprathermal regime, which are inaccessible in the local analysis. Overall, our findings emphasize the necessity of a global treatment for accurately modelling strongly magnetized accretion discs.
2015-01-01
Novel physicochemistries of engineered nanomaterials (ENMs) offer considerable commercial potential for new products and processes, but also the possibility of unforeseen and negative consequences upon ENM release into the environment. Investigations of ENM ecotoxicity have revealed that the unique properties of ENMs and a lack of appropriate test methods can lead to results that are inaccurate or not reproducible. The occurrence of spurious results or misinterpretations of results from ENM toxicity tests that are unique to investigations of ENMs (as opposed to traditional toxicants) have been reported, but have not yet been systemically reviewed. Our objective in this manuscript is to highlight artifacts and misinterpretations that can occur at each step of ecotoxicity testing: procurement or synthesis of the ENMs and assessment of potential toxic impurities such as metals or endotoxins, ENM storage, dispersion of the ENMs in the test medium, direct interference with assay reagents and unacknowledged indirect effects such as nutrient depletion during the assay, and assessment of the ENM biodistribution in organisms. We recommend thorough characterization of initial ENMs including measurement of impurities, implementation of steps to minimize changes to the ENMs during storage, inclusion of a set of experimental controls (e.g., to assess impacts of nutrient depletion, ENM specific effects, impurities in ENM formulation, desorbed surface coatings, the dispersion process, and direct interference of ENM with toxicity assays), and use of orthogonal measurement methods when available to assess ENMs fate and distribution in organisms. PMID:24617739
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiao, Heng; Endo, Satoshi; Wong, May
Yamaguchi and Feingold (2012) note that the cloud fields in their Weather Research and Forecasting (WRF) large-eddy simulations (LESs) of marine stratocumulus exhibit a strong sensitivity to time stepping choices. In this study, we reproduce and analyze this sensitivity issue using two stratocumulus cases, one marine and one continental. Results show that (1) the sensitivity is associated with spurious motions near the moisture jump between the boundary layer and the free atmosphere, and (2) these spurious motions appear to arise from neglecting small variations in water vapor mixing ratio (qv) in the pressure gradient calculation in the acoustic substepping portion of the integration procedure. We show that this issue is remedied in the WRF dynamical core by replacing the prognostic equation for the potential temperature θ with one for the moist potential temperature θm=θ(1+1.61qv), which allows consistent treatment of moisture in the calculation of pressure during the acoustic substeps. With this modification, the spurious motions and the sensitivity to the time stepping settings (i.e., the dynamic time step length and number of acoustic substeps) are eliminated in both of the example stratocumulus cases. This modification improves the applicability of WRF for LES applications, and possibly other models using similar dynamical core formulations, and also permits the use of longer time steps than in the original code.
Modifications to WRF's dynamical core to improve the treatment of moisture for large-eddy simulations
Xiao, Heng; Endo, Satoshi; Wong, May; ...
2015-10-29
Yamaguchi and Feingold (2012) note that the cloud fields in their large-eddy simulations (LESs) of marine stratocumulus using the Weather Research and Forecasting (WRF) model exhibit a strong sensitivity to time stepping choices. In this study, we reproduce and analyze this sensitivity issue using two stratocumulus cases, one marine and one continental. Results show that (1) the sensitivity is associated with spurious motions near the moisture jump between the boundary layer and the free atmosphere, and (2) these spurious motions appear to arise from neglecting small variations in water vapor mixing ratio (qv) in the pressure gradient calculation in the acoustic sub-stepping portion of the integration procedure. We show that this issue is remedied in the WRF dynamical core by replacing the prognostic equation for the potential temperature θ with one for the moist potential temperature θm=θ(1+1.61qv), which allows consistent treatment of moisture in the calculation of pressure during the acoustic sub-steps. With this modification, the spurious motions and the sensitivity to the time stepping settings (i.e., the dynamic time step length and number of acoustic sub-steps) are eliminated in both of the example stratocumulus cases. In conclusion, this modification improves the applicability of WRF for LES applications, and possibly other models using similar dynamical core formulations, and also permits the use of longer time steps than in the original code.
The role of spurious correlation in the development of a komatiite alteration model
NASA Astrophysics Data System (ADS)
Butler, John C.
1986-11-01
Procedures for detecting alterations in komatiites are described. The research of Pearson (1897) on spurious correlation and of Chayes (1949, 1971) on ratio correlation is reviewed. The equations for the ratio correlation procedure are provided. The ratio correlation procedure is applied to the komatiites from Gorgona Island and the Barberton suite. Plots of the molecular proportion ratios of (FeO + MgO)/TiO2 versus SiO2/TiO2, and correlation coefficients for the komatiites are presented and analyzed.
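The spurious ratio correlation of Pearson (1897) is easy to reproduce numerically. The sketch below uses generic uniform variables as a stand-in for oxide abundances sharing a TiO2 denominator; the distributions and sample size are illustrative assumptions, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Three mutually independent positive variables: no real association.
x = rng.uniform(1, 2, n)
y = rng.uniform(1, 2, n)
z = rng.uniform(1, 2, n)

r_raw = np.corrcoef(x, y)[0, 1]            # essentially zero
r_ratio = np.corrcoef(x / z, y / z)[0, 1]  # ~0.5 here: induced purely by
                                           # the shared denominator
print(f"corr(x, y)     = {r_raw:+.3f}")
print(f"corr(x/z, y/z) = {r_ratio:+.3f}")
```

With equal coefficients of variation, Pearson's approximation predicts a spurious ratio correlation of about 0.5, which is why molecular-proportion-ratio plots such as (FeO + MgO)/TiO2 versus SiO2/TiO2 must be interpreted with care.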
Reasoning about energy in qualitative simulation
NASA Technical Reports Server (NTRS)
Fouche, Pierre; Kuipers, Benjamin J.
1992-01-01
While possible behaviors of a mechanism that are consistent with an incomplete state of knowledge can be predicted through qualitative modeling and simulation, spurious behaviors corresponding to no solution of any ordinary differential equation consistent with the model may be generated. The present method for energy-related reasoning eliminates an important source of spurious behaviors, as demonstrated by its application to a nonlinear, proportional-integral controller. It is shown that qualitative properties of such a system, such as stability and zero-offset control, are captured by the simulation.
Nonlinear dynamics and numerical uncertainties in CFD
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sweby, P. K.
1996-01-01
The application of nonlinear dynamics to improve the understanding of numerical uncertainties in computational fluid dynamics (CFD) is reviewed. Elementary examples in the use of dynamics to explain the nonlinear phenomena and spurious behavior that occur in numerics are given. The role of dynamics in the understanding of long time behavior of numerical integrations and the nonlinear stability, convergence, and reliability of using time-marching approaches for obtaining steady-state numerical solutions in CFD is explained. The study is complemented with spurious behavior observed in CFD computations.
A patient with serum creatinine of 61 mg/dl
Sriram, S.; Srinivas, S.; Naveen, P. S. R.
2017-01-01
Spurious elevation of serum creatinine by the Jaffe assay is known to occur with a variety of substances and can subject the patient to invasive and complicated procedures such as dialysis. We report a rare case of false elevation of this renal parameter following exposure to an organic solvent. PMID:28182048
High Order Finite Difference Methods with Subcell Resolution for 2D Detonation Waves
NASA Technical Reports Server (NTRS)
Wang, W.; Shu, C. W.; Yee, H. C.; Sjogreen, B.
2012-01-01
In simulating hyperbolic conservation laws in conjunction with an inhomogeneous stiff source term, if the solution is discontinuous, spurious numerical results may be produced due to different time scales of the transport part and the source term. This numerical issue often arises in combustion and high speed chemical reacting flows.
The Influence of Being under the Influence: Alcohol Effects on Adolescent Violence
ERIC Educational Resources Information Center
Felson, Richard B.; Teasdale, Brent; Burchfield, Keri B.
2008-01-01
The authors examine the relationship between intoxication, chronic alcohol use, and violent behavior using data from the National Longitudinal Study of Adolescent Health. The authors introduce a method for disentangling spuriousness from the causal effects of situational variables. Their results suggest that drinkers are much more likely to commit…
ERIC Educational Resources Information Center
Porter, Kristin E.
2018-01-01
Researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple testing procedures (MTPs) are statistical…
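The multiplicity problem in the abstract above can be demonstrated in a few lines. The sketch below is illustrative (20 hypothetical null tests, Bonferroni as the simplest MTP; Holm or resampling-based MTPs follow the same logic): without correction, the chance of at least one spurious "effect" is large even when the intervention does nothing.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 20          # e.g. outcomes x subgroups x timepoints
alpha = 0.05
trials = 5_000

# Simulate studies in which the intervention truly has NO effect:
# every p-value is then Uniform(0, 1).
pvals = rng.uniform(size=(trials, m))

# Family-wise error rate (FWER) without any correction.
fwer_naive = np.mean((pvals < alpha).any(axis=1))

# Bonferroni MTP: compare each p-value to alpha / m instead.
fwer_bonf = np.mean((pvals < alpha / m).any(axis=1))

print(f"naive FWER      ~ {fwer_naive:.2f}")  # ~ 1 - 0.95**20 ~ 0.64
print(f"Bonferroni FWER ~ {fwer_bonf:.2f}")   # held near 0.05
```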
J.E. Jakes; C.R. Frihart; J.F. Beecher; R.J. Moon; P.J. Resto; Z.H. Melgarejo; O.M. Saurez; H. Baumgart; A.A. Elmustafa; D.S. Stone
2009-01-01
Whenever a nanoindent is placed near an edge, such as the free edge of the specimen or heterophase interface intersecting the surface, the elastic discontinuity associated with the edge produces artifacts in the load-depth data. Unless properly handled in the data analysis, the artifacts can produce spurious results that obscure any real trends in properties as...
NASA Astrophysics Data System (ADS)
Ghods, M.; Lauer, M.; Upadhyay, S. R.; Grugel, R. N.; Tewari, S. N.; Poirier, D. R.
2018-04-01
Formation of spurious grains during directional solidification (DS) of Al-7 wt.% Si and Al-19 wt.% Cu alloys through an abrupt increase in cross-sectional area has been examined by experiments and by numerical simulations. Stray grains were observed in the Al-19 wt.% Cu samples and almost none in the Al-7 wt.% Si. The locations of the stray grains correlate well with regions where numerical solutions indicate that the solute-rich melt flows up the thermal gradient faster than the isotherm velocity. It is proposed that spurious grain formation occurred by fragmentation of slender tertiary dendrite arms, a process enhanced by thermosolutal convection. In Al-7 wt.% Si, the dendrite fragments sink in the surrounding melt and get trapped in the dendritic array growing around them, and therefore they do not grow further. In the Al-19 wt.% Cu alloy, on the other hand, the dendrite fragments float in the surrounding melt and some find conducive thermal conditions for further growth and become stray grains.
On the inherent competition between valid and spurious inductive inferences in Boolean data
NASA Astrophysics Data System (ADS)
Andrecut, M.
Inductive inference is the process of extracting general rules from specific observations. This problem also arises in the analysis of biological networks, such as genetic regulatory networks, where the interactions are complex and the observations are incomplete. A typical task in these problems is to extract general interaction rules, as combinations of Boolean covariates, that explain a measured response variable. The inductive inference process can be considered as an incompletely specified Boolean function synthesis problem. This incompleteness of the problem will also generate spurious inferences, which are a serious threat to valid inductive inference rules. Using random Boolean data as a null model, here we attempt to measure the competition between valid and spurious inductive inference rules from a given data set. We formulate two greedy search algorithms, which synthesize a given Boolean response variable from the observation data in a sparse disjunctive normal form and a sparse generalized algebraic normal form, respectively, and we numerically evaluate their performance.
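A toy version of greedy sparse DNF synthesis makes the setting concrete. The helper below is a hypothetical sketch, not the paper's algorithm: it enumerates small conjunctions of literals, keeps those consistent with the negative examples, and greedily covers the positive examples.

```python
from itertools import combinations

def greedy_dnf(rows, y, max_term_size=2):
    """Greedily synthesize a sparse disjunctive normal form (DNF) that
    explains Boolean response y from Boolean covariate rows."""
    n_vars = len(rows[0])
    # A literal is (variable index, required value).
    literals = [(i, v) for i in range(n_vars) for v in (0, 1)]

    def covers(term, row):
        return all(row[i] == v for i, v in term)

    # Candidate conjunctions that never fire on a negative example.
    candidates = [t for k in range(1, max_term_size + 1)
                  for t in combinations(literals, k)
                  if not any(covers(t, r)
                             for r, yy in zip(rows, y) if yy == 0)]
    uncovered = {j for j, yy in enumerate(y) if yy == 1}
    dnf = []
    while uncovered:
        best = max(candidates,
                   key=lambda t: sum(covers(t, rows[j]) for j in uncovered))
        if not any(covers(best, rows[j]) for j in uncovered):
            break  # remaining positives cannot be explained
        dnf.append(best)
        uncovered = {j for j in uncovered if not covers(best, rows[j])}
    return dnf

# Toy data in which the true rule is x0 AND x1.
rows = [(1, 1, 0), (1, 1, 1), (0, 1, 1), (1, 0, 0), (0, 0, 1)]
y    = [1, 1, 0, 0, 0]
terms = greedy_dnf(rows, y)
print(terms)   # [((0, 1), (1, 1))], i.e. the single term x0 AND x1
```

On random (null-model) data the same procedure will still return covering terms, which is exactly the spurious-inference competition the paper measures.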
Intragenic DNA methylation prevents spurious transcription initiation.
Neri, Francesco; Rapelli, Stefania; Krepelova, Anna; Incarnato, Danny; Parlato, Caterina; Basile, Giulia; Maldotti, Mara; Anselmi, Francesca; Oliviero, Salvatore
2017-03-02
In mammals, DNA methylation occurs mainly at CpG dinucleotides. Methylation of the promoter suppresses gene expression, but the functional role of gene-body DNA methylation in highly expressed genes has yet to be clarified. Here we show that, in mouse embryonic stem cells, Dnmt3b-dependent intragenic DNA methylation protects the gene body from spurious RNA polymerase II entry and cryptic transcription initiation. Using different genome-wide approaches, we demonstrate that this Dnmt3b function is dependent on its enzymatic activity and recruitment to the gene body by H3K36me3. Furthermore, the spurious transcripts can either be degraded by the RNA exosome complex or capped, polyadenylated, and delivered to the ribosome to produce aberrant proteins. Elongating RNA polymerase II therefore triggers an epigenetic crosstalk mechanism that involves SetD2, H3K36me3, Dnmt3b and DNA methylation to ensure the fidelity of gene transcription initiation, with implications for intragenic hypomethylation in cancer.
Comparison of iSTAT and EPOC Blood Analyzers
2017-10-25
requires accurate blood analysis across a range of environmental conditions and, in extreme circumstances, use beyond the expiration date. We compared gold standard laboratory... temperatures for either device can result in spurious results, particularly for blood gases. 2.0 BACKGROUND Blood analysis is a critical aspect of...
Linear stability analysis of collective neutrino oscillations without spurious modes
NASA Astrophysics Data System (ADS)
Morinaga, Taiki; Yamada, Shoichi
2018-01-01
Collective neutrino oscillations are induced by the presence of neutrinos themselves. As such, they are intrinsically nonlinear phenomena and are much more complex than linear counterparts such as the vacuum or Mikheyev-Smirnov-Wolfenstein oscillations. They obey integro-differential equations, for which it is also very challenging to obtain numerical solutions. If one focuses on the onset of collective oscillations, on the other hand, the equations can be linearized and the technique of linear analysis can be employed. Unfortunately, however, it is well known that such an analysis, when applied with discretizations of continuous angular distributions, suffers from the appearance of so-called spurious modes: unphysical eigenmodes of the discretized linear equations. In this paper, we analyze in detail the origin of these unphysical modes and present a simple solution to this annoying problem. We find that the spurious modes originate from the artificial production of pole singularities instead of a branch cut on the Riemann surface by the discretizations. The branching point singularities on the Riemann surface for the original nondiscretized equations can be recovered by approximating the angular distributions with polynomials and then performing the integrals analytically. We demonstrate for some examples that this simple prescription does remove the spurious modes. We also propose an even simpler method: a piecewise linear approximation to the angular distribution. It is shown that the same methodology is applicable to the multienergy case as well as to the dispersion relation approach that was proposed very recently.
NASA Technical Reports Server (NTRS)
Wernet, Mark P.
1995-01-01
Particle Image Velocimetry provides a means of measuring the instantaneous 2-component velocity field across a planar region of a seeded flowfield. In this work only two camera, single exposure images are considered where both cameras have the same view of the illumination plane. Two competing techniques which yield unambiguous velocity vector direction information have been widely used for reducing the single exposure, multiple image data: cross-correlation and particle tracking. Correlation techniques yield averaged velocity estimates over subregions of the flow, whereas particle tracking techniques give individual particle velocity estimates. The correlation technique requires identification of the correlation peak on the correlation plane corresponding to the average displacement of particles across the subregion. Noise on the images and particle dropout contribute to spurious peaks on the correlation plane, leading to misidentification of the true correlation peak. The subsequent velocity vector maps contain spurious vectors where the displacement peaks have been improperly identified. Typically these spurious vectors are replaced by a weighted average of the neighboring vectors, thereby decreasing the independence of the measurements. In this work fuzzy logic techniques are used to determine the true correlation displacement peak even when it is not the maximum peak on the correlation plane, hence maximizing the information recovery from the correlation operation, maintaining the number of independent measurements and minimizing the number of spurious velocity vectors. Correlation peaks are correctly identified in both high and low seed density cases. The correlation velocity vector map can then be used as a guide for the particle tracking operation. Again fuzzy logic techniques are used, this time to identify the correct particle image pairings between exposures to determine particle displacements, and thus velocity. 
The advantage of this technique is the improved spatial resolution which is available from the particle tracking operation. Particle tracking alone may not be possible in the high seed density images typically required for achieving good results from the correlation technique. This two staged approach offers a velocimetric technique capable of measuring particle velocities with high spatial resolution over a broad range of seeding densities.
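The cross-correlation stage described above can be sketched in a few lines: the average particle displacement between two single-exposure frames appears as the location of the tallest peak on an FFT-based correlation plane. This is a minimal illustration of the correlation step only, using synthetic images and an integer shift; it does not implement the fuzzy-logic peak validation or particle tracking developed in the work.

```python
import numpy as np

def correlation_plane(a, b):
    """Cross-correlate two interrogation windows via FFT.

    Returns a correlation plane whose peak location gives the
    average particle displacement between the two exposures."""
    A = np.fft.rfft2(a - a.mean())
    B = np.fft.rfft2(b - b.mean())
    c = np.fft.irfft2(A.conj() * B, s=a.shape)
    return np.fft.fftshift(c)          # put zero displacement at center

def peak_displacement(c):
    """Integer displacement of the tallest correlation peak."""
    dy, dx = np.unravel_index(np.argmax(c), c.shape)
    cy, cx = c.shape[0] // 2, c.shape[1] // 2
    return dy - cy, dx - cx

# Synthetic pair: random "particles", second frame shifted by (3, 5) pixels.
rng = np.random.default_rng(0)
img1 = np.zeros((64, 64))
ys, xs = rng.integers(8, 48, 20), rng.integers(8, 48, 20)
img1[ys, xs] = 1.0
img2 = np.roll(img1, (3, 5), axis=(0, 1))

print(peak_displacement(correlation_plane(img1, img2)))  # → (3, 5)
```

In real data the peak is subpixel and competes with noise peaks, which is exactly where the paper's fuzzy-logic validation comes in.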
NASA Astrophysics Data System (ADS)
Zhao, T.; Wang, J.; Dai, A.
2015-12-01
Many multi-decadal atmospheric reanalysis products are available now, but their consistency and reliability are far from perfect. In this study, atmospheric precipitable water (PW) from the NCEP/NCAR, NCEP/DOE, MERRA, JRA-55, JRA-25, ERA-Interim, ERA-40, CFSR and 20CR reanalyses is evaluated against homogenized radiosonde observations over China during 1979-2012 (1979-2001 for ERA-40). Results suggest that the PW biases in the reanalyses are within ~20% for most of northern and eastern China, but the reanalyses underestimate the observed PW by 20%-40% over western China, and by ~60% over the southwestern Tibetan Plateau. The newer-generation reanalyses (e.g., JRA-25, JRA-55, CFSR and ERA-Interim) have smaller root-mean-square errors (RMSE) than the older-generation ones (NCEP/NCAR, NCEP/DOE and ERA-40). Most of the reanalyses reproduce well the observed PW climatology and interannual variations over China. However, few reanalyses capture the observed long-term PW changes, primarily because they show spurious wet biases before about 2002. This deficiency results mainly from discontinuities in the reanalysis RH fields in the mid-lower troposphere due to the wet bias in older radiosonde records that are assimilated into the reanalyses. An empirical orthogonal function (EOF) analysis revealed two leading modes that represent the long-term PW changes and ENSO-related interannual variations with robust spatial patterns. The reanalysis products, especially MERRA and JRA-25, roughly capture these EOF modes, which account for over 50% of the total variance. The results show that even during the post-1979 satellite era, discontinuities in radiosonde data can still induce large spurious long-term changes in reanalysis PW and other related fields. Thus, more efforts are needed to remove spurious changes in input data for future long-term reanalyses.
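An EOF decomposition of this kind can be sketched as a singular value decomposition of the time-by-station anomaly matrix. The data below are synthetic (a linear trend plus an ENSO-like oscillation standing in for PW anomalies); this is a generic illustration of the technique, not the paper's exact analysis.

```python
import numpy as np

def eof_analysis(field, n_modes=2):
    """EOF decomposition of a (time, station) anomaly matrix via SVD.

    Returns spatial patterns, principal-component time series, and
    the fraction of total variance explained by each mode."""
    anom = field - field.mean(axis=0)          # remove the time mean
    u, s, vt = np.linalg.svd(anom, full_matrices=False)
    var_frac = s**2 / np.sum(s**2)
    pcs = u[:, :n_modes] * s[:n_modes]         # PC time series
    eofs = vt[:n_modes]                        # spatial patterns
    return eofs, pcs, var_frac[:n_modes]

# Synthetic anomalies: a trend mode plus an ENSO-like oscillation.
t = np.arange(408) / 12.0                      # months, 1979-2012
stations = np.linspace(0, 1, 30)
trend = np.outer(0.05 * t, np.ones_like(stations))
enso = np.outer(np.sin(2 * np.pi * t / 4.0), stations)
field = trend + enso + 0.01 * np.random.default_rng(1).standard_normal((408, 30))

eofs, pcs, var_frac = eof_analysis(field)
print(var_frac.sum())   # the two leading modes capture most of the variance
```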
Some Aspects of Nonlinear Dynamics and CFD
NASA Technical Reports Server (NTRS)
Yee, Helen C.; Merriam, Marshal (Technical Monitor)
1996-01-01
The application of nonlinear dynamics to improve the understanding of numerical uncertainties in computational fluid dynamics (CFD) is reviewed. Elementary examples in the use of dynamics to explain the nonlinear phenomena and spurious behavior that occur in numerics are given. The role of dynamics in the understanding of long time behavior of numerical integrations and the nonlinear stability, convergence, and reliability of using time-marching approaches for obtaining steady-state numerical solutions in CFD is explained. The study is complemented with examples of spurious behavior observed in CFD computations.
Consideration of Dynamical Balances
NASA Technical Reports Server (NTRS)
Errico, Ronald M.
2015-01-01
The quasi-balance of extra-tropical tropospheric dynamics is a fundamental aspect of nature. If an atmospheric analysis does not reflect such balance sufficiently well, the subsequent forecast will exhibit unrealistic behavior associated with spurious fast-propagating gravity waves. Even if these eventually damp, they can create poor background fields for a subsequent analysis or interact with moist physics to create spurious precipitation. The nature of this problem will be described along with the reasons for atmospheric balance and techniques for mitigating imbalances. Attention will be focused on fundamental issues rather than on recipes for various techniques.
NASA Technical Reports Server (NTRS)
Heffley, R. K.; Jewell, W. F.; Whitbeck, R. F.; Schulman, T. M.
1980-01-01
The effects of spurious delays in real time digital computing systems are examined. Various sources of spurious delays are defined and analyzed using an extant simulator system as an example. A specific analysis procedure is set forth and four cases are viewed in terms of their time and frequency domain characteristics. Numerical solutions are obtained for three single rate one- and two-computer examples, and the analysis problem is formulated for a two-rate, two-computer example.
Using the Graded Response Model to Control Spurious Interactions in Moderated Multiple Regression
ERIC Educational Resources Information Center
Morse, Brendan J.; Johanson, George A.; Griffeth, Rodger W.
2012-01-01
Recent simulation research has demonstrated that using simple raw score to operationalize a latent construct can result in inflated Type I error rates for the interaction term of a moderated statistical model when the interaction (or lack thereof) is proposed at the latent variable level. Rescaling the scores using an appropriate item response…
ERIC Educational Resources Information Center
Porter, Kristin E.
2016-01-01
In education research and in many other fields, researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple…
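One standard remedy for the multiplicity problem described above is the Benjamini-Hochberg step-up procedure, which controls the false discovery rate across a family of tests. The sketch below is a generic illustration with made-up p-values, not the specific methods evaluated in the report.

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up procedure.

    Returns a list of booleans marking which hypotheses are rejected
    while controlling the false discovery rate at level alpha."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    # Find the largest rank k with p_(k) <= (k/m) * alpha ...
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * alpha / m:
            k = rank
    # ... then reject the k hypotheses with the smallest p-values.
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k:
            reject[i] = True
    return reject

# Ten tests: three genuine effects, seven nulls.
pvals = [0.001, 0.008, 0.012, 0.20, 0.35, 0.41, 0.55, 0.62, 0.74, 0.90]
print(benjamini_hochberg(pvals))  # only the first three are rejected
```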
NASA Astrophysics Data System (ADS)
Asghar, Haroon; McInerney, John G.
2017-09-01
We demonstrate an asymmetric dual-loop feedback scheme to suppress external-cavity side-modes induced in self-mode-locked quantum-dash lasers with conventional single and dual-loop feedback. In this letter, we achieved optimal suppression of spurious tones by optimizing the length of the second delay time. We observed that asymmetric dual-loop feedback, with a large (~8x) disparity in cavity lengths, eliminates all external-cavity side-modes and produces flat RF spectra close to the main peak with low timing jitter compared to single-loop feedback. A significant reduction in RF linewidth and timing jitter was also observed as a function of increased second feedback delay time. The experimental results based on this feedback configuration validate predictions of recently published numerical simulations. This asymmetric dual-loop feedback scheme provides simple, efficient and cost-effective stabilization of side-band-free optoelectronic oscillators based on mode-locked lasers.
Evaluation of Mobile Phone Interference With Aircraft GPS Navigation Systems
NASA Technical Reports Server (NTRS)
Pace, Scott; Oria, A. J.; Guckian, Paul; Nguyen, Truong X.
2004-01-01
This report compiles and analyzes tests that were conducted to measure cell phone spurious emissions in the Global Positioning System (GPS) radio frequency band that could affect the navigation system of an aircraft. The cell phone in question had, as reported to the FAA (Federal Aviation Administration), caused interference to several GPS receivers on-board a small single engine aircraft despite being compliant with data filed at the time with the FCC by the manufacturer. NASA (National Aeronautics and Space Administration) and industry tests show that while there is an emission in the 1575 MHz GPS band due to a specific combination of amplifier output impedance and load impedance that induces instability in the power amplifier, these spurious emissions (i.e., not the intentional transmit signal) are similar to those measured on non-intentionally transmitting devices such as, for example, laptop computers. Additional testing on a wide sample of different commercial cell phones did not result in any emission in the 1575 MHz GPS Band above the noise floor of the measurement receiver.
Examining the relationship between religiosity and self-control as predictors of prison deviance.
Kerley, Kent R; Copes, Heith; Tewksbury, Richard; Dabney, Dean A
2011-12-01
The relationship between religiosity and crime has been the subject of much empirical debate and testing over the past 40 years. Some investigators have argued that observed relationships between religion and crime may be spurious because of self-control, arousal, or social control factors. The present study offers the first investigation of religiosity, self-control, and deviant behavior in the prison context. We use survey data from a sample of 208 recently paroled male inmates to test the impact of religiosity and self-control on prison deviance. The results indicate that two of the three measures of religiosity may be spurious predictors of prison deviance after accounting for self-control. Participation in religious services is the only measure of religiosity to significantly reduce the incidence of prison deviance when controlling for demographic factors, criminal history, and self-control. We conclude with implications for future studies of religiosity, self-control, and deviance in the prison context.
Examining the Relationship Between Religiosity and Self-Control as Predictors of Prison Deviance.
Kerley, Kent R; Copes, Heith; Tewksbury, Richard; Dabney, Dean A
2010-11-29
The relationship between religiosity and crime has been the subject of much empirical debate and testing over the past 40 years. Some investigators have argued that observed relationships between religion and crime may be spurious because of self-control, arousal, or social control factors. The present study offers the first investigation of religiosity, self-control, and deviant behavior in the prison context. We use survey data from a sample of 208 recently paroled male inmates to test the impact of religiosity and self-control on prison deviance. The results indicate that two of the three measures of religiosity may be spurious predictors of prison deviance after accounting for self-control. Participation in religious services is the only measure of religiosity to significantly reduce the incidence of prison deviance when controlling for demographic factors, criminal history, and self-control. We conclude with implications for future studies of religiosity, self-control, and deviance in the prison context.
NASA Astrophysics Data System (ADS)
Holburn, E. R.; Bledsoe, B. P.; Poff, N. L.; Cuhaciyan, C. O.
2005-05-01
Using over 300 R/EMAP sites in OR and WA, we examine the relative explanatory power of watershed, valley, and reach scale descriptors in modeling variation in benthic macroinvertebrate indices. Innovative metrics describing flow regime, geomorphic processes, and hydrologic-distance weighted watershed and valley characteristics are used in multiple regression and regression tree modeling to predict EPT richness, % EPT, EPT/C, and % Plecoptera. A nested design using seven ecoregions is employed to evaluate the influence of geographic scale and environmental heterogeneity on the explanatory power of individual and combined scales. Regression tree models are constructed to explain variability while identifying threshold responses and interactions. Cross-validated models demonstrate differences in the explanatory power associated with single-scale and multi-scale models as environmental heterogeneity is varied. Models explaining the greatest variability in biological indices result from multi-scale combinations of physical descriptors. Results also indicate that substantial variation in benthic macroinvertebrate response can be explained with process-based watershed and valley scale metrics derived exclusively from common geospatial data. This study outlines a general framework for identifying key processes driving macroinvertebrate assemblages across a range of scales and establishing the geographic extent at which various levels of physical description best explain biological variability. Such information can guide process-based stratification to avoid spurious comparison of dissimilar stream types in bioassessments and ensure that key environmental gradients are adequately represented in sampling designs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nose, Takayuki, E-mail: nose-takayuki@nms.ac.jp; Chatani, Masashi; Otani, Yuki
Purpose: High-dose-rate (HDR) brachytherapy misdeliveries can occur at any institution, and they can cause disastrous results. Even a patient's death has been reported. Misdeliveries could be avoided with real-time verification methods. In 1996, we developed a modified C-arm fluoroscopic verification of an HDR Iridium 192 source position to prevent these misdeliveries. This method provided excellent image quality sufficient to detect errors, and it has been in clinical use at our institutions for 20 years. The purpose of the current study is to introduce the mechanisms and validity of our straightforward C-arm fluoroscopic verification method. Methods and Materials: Conventional X-ray fluoroscopic images are degraded by spurious signals and quantum noise from Iridium 192 photons, which make source verification impractical. To improve image quality, we quadrupled the C-arm fluoroscopic X-ray dose per pulse. The pulse rate was reduced by a factor of 4 to keep the average exposure compliant with Japanese medical regulations. The images were then displayed with quarter-frame rates. Results: Sufficient quality was obtained to enable observation of the source position relative to both the applicators and the anatomy. With this method, 2 errors were detected among 2031 treatment sessions for 370 patients within a 6-year period. Conclusions: With the use of a modified C-arm fluoroscopic verification method, treatment errors that were otherwise overlooked were detected in real time. This method should be given consideration for widespread use.
Definition of Hydrologic Response Units in Depression Plagued Digital Elevation Models
NASA Astrophysics Data System (ADS)
Lindsay, J. B.; Creed, I. F.
2002-12-01
Definition of hydrologic response units using digital elevation models (DEMs) is sensitive to the occurrence of topographic depressions. Real depressions can be important to the hydrology and biogeochemistry of a catchment, often coinciding with areas of surface saturation. Artifact depressions, in contrast, result in digital "black holes", artificially truncating the hydrologic flow lengths and altering hydrologic flow directions, parameters that are often used in defining hydrologic response units. Artifact depressions must be removed from DEMs prior to definition of hydrologic response units. Depression filling or depression trenching techniques can be used to remove these artifacts. Depression trenching methods are often considered more appropriate because they preserve the topographic variability within a depression, thus avoiding the creation of spurious flat areas. Current trenching algorithms are relatively slow and unable to process very large or noisy DEMs. A new trenching algorithm that overcomes these limitations is described. The algorithm does not require finding depression catchments or outlets, nor does it need special handling for nested depressions. Therefore, artifacts can be removed from large or noisy DEMs efficiently, while minimizing the number of grid elevations requiring modification. The resulting trench is a monotonically descending path starting from the lowest point in a depression, passing through the depression's outlet, and ending at a point of lower elevation outside the depression. The importance of removing artifact depressions is demonstrated by showing hydrologic response units both before and after the removal of artifact depressions from the DEM.
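Depression filling, the alternative to trenching mentioned above, is commonly implemented as a priority-flood sweep: cells are processed from the grid edge inward in order of elevation, and any cell below the spill elevation reached so far is raised to it. The sketch below illustrates that generic approach on a toy grid; it is not the trenching algorithm described in the abstract.

```python
import heapq

def priority_flood_fill(dem):
    """Fill depressions in a DEM grid (list of lists) by priority-flood.

    Cells are visited from the boundary inward in order of elevation;
    any cell lower than the spill elevation reached so far is raised,
    so every cell drains monotonically to the grid boundary."""
    rows, cols = len(dem), len(dem[0])
    filled = [row[:] for row in dem]
    seen = [[False] * cols for _ in range(rows)]
    heap = []
    for r in range(rows):                      # seed with boundary cells
        for c in range(cols):
            if r in (0, rows - 1) or c in (0, cols - 1):
                heapq.heappush(heap, (filled[r][c], r, c))
                seen[r][c] = True
    while heap:
        z, r, c = heapq.heappop(heap)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not seen[nr][nc]:
                seen[nr][nc] = True
                filled[nr][nc] = max(filled[nr][nc], z)  # raise pit cells
                heapq.heappush(heap, (filled[nr][nc], nr, nc))
    return filled

# A 2x2 pit walled in by elevation 5, with a spill point of 4.
dem = [[5, 5, 5, 5],
       [5, 1, 2, 5],
       [5, 2, 1, 5],
       [5, 5, 4, 5]]
print(priority_flood_fill(dem)[1][1])  # → 4
```

Note this illustrates exactly the drawback the abstract raises: the pit becomes a spurious flat area at the spill elevation, which trenching avoids.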
Spurious sea ice formation caused by oscillatory ocean tracer advection schemes
NASA Astrophysics Data System (ADS)
Naughten, Kaitlin A.; Galton-Fenzi, Benjamin K.; Meissner, Katrin J.; England, Matthew H.; Brassington, Gary B.; Colberg, Frank; Hattermann, Tore; Debernard, Jens B.
2017-08-01
Tracer advection schemes used by ocean models are susceptible to artificial oscillations: a form of numerical error whereby the advected field alternates between overshooting and undershooting the exact solution, producing false extrema. Here we show that these oscillations have undesirable interactions with a coupled sea ice model. When oscillations cause the near-surface ocean temperature to fall below the freezing point, sea ice forms for no reason other than numerical error. This spurious sea ice formation has significant and wide-ranging impacts on Southern Ocean simulations, including the disappearance of coastal polynyas, stratification of the water column, erosion of Winter Water, and upwelling of warm Circumpolar Deep Water. This significantly limits the model's suitability for coupled ocean-ice and climate studies. Using the terrain-following-coordinate ocean model ROMS (Regional Ocean Modelling System) coupled to the sea ice model CICE (Community Ice CodE) on a circumpolar Antarctic domain, we compare the performance of three different tracer advection schemes, as well as two levels of parameterised diffusion and the addition of flux limiters to prevent numerical oscillations. The upwind third-order advection scheme performs better than the centered fourth-order and Akima fourth-order advection schemes, with far fewer incidents of spurious sea ice formation. The latter two schemes are less problematic with higher parameterised diffusion, although some supercooling artifacts persist. Spurious supercooling was eliminated by adding flux limiters to the upwind third-order scheme. We present this comparison as evidence of the problematic nature of oscillatory advection schemes in sea ice formation regions, and urge other ocean/sea-ice modellers to exercise caution when using such schemes.
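The oscillation mechanism is easy to reproduce in one dimension: a centered (dispersive) advection scheme generates false extrema at a sharp front, undershooting below the initial minimum in just the way an oscillatory ocean scheme can push temperature below the freezing point, while a monotone upwind scheme stays within the initial bounds. This toy sketch uses first-order schemes, not the third- and fourth-order ROMS schemes compared in the study.

```python
import numpy as np

def advect(q, c, steps, scheme):
    """Advect a periodic 1D tracer; c is the Courant number (u*dt/dx)."""
    q = q.copy()
    for _ in range(steps):
        if scheme == "upwind":   # monotone: creates no new extrema
            q = q - c * (q - np.roll(q, 1))
        else:                    # centered: dispersive and oscillatory
            q = q - 0.5 * c * (np.roll(q, -1) - np.roll(q, 1))
    return q

# A step profile bounded by [0, 1] -- think "temperature above freezing".
q0 = np.where(np.arange(100) < 50, 1.0, 0.0)
up = advect(q0, 0.4, 50, "upwind")
ce = advect(q0, 0.4, 50, "centered")
print(up.min() >= 0.0)   # → True: upwind stays within the initial range
print(ce.min() < 0.0)    # → True: centered undershoots -- false extrema
```

The trade-off is also visible here: the monotone scheme avoids undershoots at the cost of numerical diffusion, which is why higher-order schemes with flux limiters (as tested in the paper) are preferred in practice.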
Heinrich, Gabriel; Breen, Matthew; Benware, Sheila; Vollum, Nicole; Morris, Kristin; Knutsen, Chad; Kowalski, Joseph D.; Campbell, Scott; Biehler, Jerry; Vreeke, Mark S.; Vanderwerf, Scott M.; Castle, Jessica R.; Cargill, Robert S.
2017-01-01
Background: Labeling prohibits delivery of insulin at the site of subcutaneous continuous glucose monitoring (CGM). Integration of the sensing and insulin delivery functions into a single device would likely increase the usage of CGM in persons with type 1 diabetes. Methods: To understand the nature of such interference, we measured glucose at the site of bolus insulin delivery in swine using a flexible electrode strip that was laminated to the outer wall of an insulin delivery cannula. In terms of sensing design, we compared H2O2-measuring sensors biased at 600 mV with redox mediator-type sensors biased at 175 mV. Results: In H2O2-measuring sensors, but not in sensors with redox-mediated chemistry, a spurious rise in current was seen after insulin lispro boluses. This prolonged artifact was accompanied by electrode poisoning. In redox-mediated sensors, the patterns of sensor signals acquired during delivery of saline and without any liquid delivery were similar to those acquired during insulin delivery. Conclusion: Considering in vitro and in vivo findings together, it became clear that the mechanism of interference is the oxidation, at high bias potentials, of phenolic preservatives present in insulin formulations. This effect can be avoided by the use of redox mediator chemistry using a low bias potential. PMID:28221814
Preanalytical Nonconformity Management Regarding Primary Tube Mixing in Brazil.
Lima-Oliveira, Gabriel; Cesare Guidi, Gian; Guimaraes, Andre Valpassos Pacifici; Abol Correa, Jose; Lippi, Giuseppe
2017-01-01
The multifaceted clinical laboratory process is divided in three essential phases: the preanalytical, analytical and postanalytical phase. Problems emerging from the preanalytical phase are responsible for more than 60% of laboratory errors. This report is aimed at highlighting and discussing nonconformity (e.g., nonstandardized procedures) in primary blood tube mixing immediately after blood collection by venipuncture with evacuated tube systems. From January 2015 to December 2015, fifty different laboratory quality managers from Brazil were contacted to request their internal audit reports on nonconformity regarding primary blood tube mixing immediately after blood collection by venipuncture performed using evacuated tube systems. A minority of internal audits (i.e., 4%) concluded that evacuated blood tubes were not accurately mixed after collection, whereas more than half of them reported that evacuated blood tubes were vigorously mixed immediately after collection, thus magnifying the risk of producing spurious hemolysis. Despite the vast majority of centers declaring that evacuated blood tubes were mixed gently and carefully, the overall number of inversions was found to be different from that recommended by the manufacturer. Since the turbulence generated by the standard vacuum pressure inside the primary evacuated tubes seems to be sufficient for providing solubilization, mixing and stabilization between additives and blood during venipuncture, avoidance of primary tube mixing probably does not introduce a major bias in test results and may not be considered a nonconformity during audits for accreditation.
A method to analyze molecular tagging velocimetry data using the Hough transform.
Sanchez-Gonzalez, R; McManamen, B; Bowersox, R D W; North, S W
2015-10-01
The development of a method to analyze molecular tagging velocimetry data based on the Hough transform is presented. This method, based on line fitting, parameterizes the grid lines "written" into a flowfield. Initial proof-of-principle illustration of this method was performed to obtain two-component velocity measurements in the wake of a cylinder in a Mach 4.6 flow, using a data set derived from computational fluid dynamics simulations. The Hough transform is attractive for molecular tagging velocimetry applications since it is capable of discriminating spurious features that can have a biasing effect in the fitting process. Assessment of the precision and accuracy of the method were also performed to show the dependence on analysis window size and signal-to-noise levels. The accuracy of this Hough transform-based method to quantify intersection displacements was determined to be comparable to cross-correlation methods. The employed line parameterization avoids the assumption of linearity in the vicinity of each intersection, which is important in the limit of drastic grid deformations resulting from large velocity gradients common in high-speed flow applications. This Hough transform method has the potential to enable the direct and spatially accurate measurement of local vorticity, which is important in applications involving turbulent flowfields. Finally, two-component velocity determinations using the Hough transform from experimentally obtained images are presented, demonstrating the feasibility of the proposed analysis method.
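The core of a Hough-transform line fit can be sketched as vote accumulation in (theta, rho) space: collinear points concentrate their votes in one accumulator cell, while spurious points scatter theirs, which is the outlier resistance the abstract describes. This is a generic illustration with synthetic points, not the authors' full grid-parameterization pipeline.

```python
import numpy as np

def hough_line(points, n_theta=180, rho_res=0.5):
    """Vote in (theta, rho) space; return the best-supported line.

    Uses the normal form x*cos(theta) + y*sin(theta) = rho. Collinear
    points pile their votes into one accumulator cell, while spurious
    points scatter theirs, so the peak resists outliers that would
    bias a least-squares fit."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rho_max = float(np.abs(points).sum(axis=1).max()) + 1.0
    n_rho = int(2 * rho_max / rho_res) + 1
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.round((rho + rho_max) / rho_res).astype(int)
        acc[np.arange(n_theta), idx] += 1      # one vote per theta bin
    t, r = np.unravel_index(np.argmax(acc), acc.shape)
    return thetas[t], r * rho_res - rho_max

# Twenty collinear points on y = x (theta = 135 deg, rho = 0),
# plus three spurious points that a least-squares fit would absorb.
pts = [(float(i), float(i)) for i in range(20)]
pts += [(2.0, 9.0), (7.0, 1.0), (3.0, 8.0)]
theta, rho = hough_line(np.array(pts))
print(int(round(np.degrees(theta))), abs(rho) < 1e-6)  # → 135 True
```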
Association of vWA and TPOX Polymorphisms with Venous Thrombosis in Mexican Mestizos
Meraz-Ríos, Marco Antonio; Majluf-Cruz, Abraham; Santana, Carla; Noris, Gino; Camacho-Mejorado, Rafael; Acosta-Saavedra, Leonor C.; Calderón-Aranda, Emma S.; Hernández-Juárez, Jesús; Magaña, Jonathan J.; Gómez, Rocío
2014-01-01
Objective. Venous thromboembolism (VTE) is a multifactorial disorder and, worldwide, the most important cause of morbidity and mortality. Genetic factors play a critical role in its aetiology. Microsatellites are the most important source of human genetic variation, having more phenotypic effect than many single nucleotide polymorphisms. Hence, we evaluated a possible relationship between VTE and the genetic variants in von Willebrand factor, human alpha fibrinogen, and human thyroid peroxidase microsatellites to identify possible diagnostic markers. Methods. Genotypes were obtained from 177 patients with VTE and 531 nonrelated individuals using validated genotyping methods. The allelic frequencies were compared; Bayesian methods were used to correct for population stratification to avoid spurious associations. Results. The vWA-18, TPOX-9, and TPOX-12 alleles were significantly associated with VTE. Moreover, subjects bearing the combination vWA-18/TPOX-12 loci exhibited a doubled risk for VTE (95% CI = 1.02–3.64), whereas the combination vWA-18/TPOX-9 showed an OR = 10 (95% CI = 4.93–21.49). Conclusions. The vWA and TPOX microsatellites are good candidate biomarkers for venous thromboembolism and could help to elucidate its origins. Additionally, these polymorphisms could become useful markers for genetic studies of VTE in the Mexican population; however, further studies are needed, as these data provide only preliminary evidence. PMID:25250329
NASA Astrophysics Data System (ADS)
Santosa, Hendrik; Aarabi, Ardalan; Perlman, Susan B.; Huppert, Theodore J.
2017-05-01
Functional near-infrared spectroscopy (fNIRS) is a noninvasive neuroimaging technique that uses low levels of red to near-infrared light to measure changes in cerebral blood oxygenation. Spontaneous (resting state) functional connectivity (sFC) has become a critical tool for cognitive neuroscience for understanding task-independent neural networks, revealing pertinent details differentiating healthy from disordered brain function, and discovering fluctuations in the synchronization of interacting individuals during hyperscanning paradigms. Two of the main challenges to sFC-NIRS analysis are (i) the slow temporal structure of both systemic physiology and the response of blood vessels, which introduces spurious correlations, and (ii) motion-related artifacts that result from movement of the fNIRS sensors on the participants' head and can introduce non-normal and heavy-tailed noise structures. In this work, we systematically examine the false-discovery rates of several time- and frequency-domain metrics of functional connectivity for characterizing sFC-NIRS. Specifically, we detail the modifications to the statistical models of these methods needed to avoid high levels of false discovery related to these two sources of noise in fNIRS. We compare these analysis procedures using both simulated and experimental resting-state fNIRS data. Our proposed robust correlation method performs better, being more resilient to the noise outliers introduced by motion artifacts.
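The value of a robust statistic here can be illustrated with a rank (Spearman) correlation, which resists the heavy-tailed motion spikes that fabricate a strong Pearson correlation between two otherwise unrelated channels. This is a generic stand-in for robustness, not the specific robust-regression estimator proposed in the paper.

```python
import numpy as np

def rank_correlation(x, y):
    """Spearman rank correlation: a simple outlier-resistant
    alternative to Pearson correlation (assumes no ties)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float(rx @ ry / np.sqrt((rx @ rx) * (ry @ ry)))

rng = np.random.default_rng(2)
n = 500
x = rng.standard_normal(n)
y = 0.05 * rng.standard_normal(n)   # truly uncorrelated with x
x[:5] += 30.0                        # shared motion-artifact spikes
y[:5] += 30.0

pearson = float(np.corrcoef(x, y)[0, 1])
print(pearson > 0.9)                       # spikes fabricate a correlation
print(abs(rank_correlation(x, y)) < 0.2)   # the rank statistic resists them
```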
Video modelling and reducing anxiety related to dental injections - a randomised clinical trial.
Al-Namankany, A; Petrie, A; Ashley, P
2014-06-01
This study was part of a successfully completed PhD and was presented at the IADR/AADR General Session (2013) in Seattle, Washington, USA. The report of this clinical trial conforms to the CONSORT statement. A randomised controlled trial to investigate if video modelling can influence a child's anxiety before the administration of local anaesthesia (LA). A sample of 180 (6- to 12-year-old) children due to have dental treatments under LA were randomly allocated to the modelling video or the control video (oral hygiene instruction). The level of anxiety was recorded before and after watching the video on the Abeer Children Dental Anxiety Scale (ACDAS) and the child's ability to cope with the subsequent procedure was assessed on the visual analogue scale (VAS). A two group chi-square test was used as the basis for the sample size calculation; a significance level of 0.025 was chosen rather than the conventional 0.05 to avoid spurious results arising from multiple testing. Children in the test group had significantly less anxiety after watching the video than children in the control group throughout the subsequent dental procedure; in particular at the time of the LA administration (p <0.001). Video modelling appeared to be effective at reducing dental anxiety and has a significant impact on needle phobia in children.
Some Results Relevant to Statistical Closures for Compressible Turbulence
NASA Technical Reports Server (NTRS)
Ristorcelli, J. R.
1998-01-01
For weakly compressible turbulent fluctuations there exists a small parameter, the square of the fluctuating Mach number, that allows an investigation using a perturbative treatment. The consequences of such a perturbative analysis in three different subject areas are described: 1) initial conditions in direct numerical simulations, 2) an explanation for the oscillations seen in the compressible pressure in the direct numerical simulations of homogeneous shear, and 3) turbulence closures accounting for the compressibility of velocity fluctuations. Initial conditions consistent with small turbulent Mach number asymptotics are constructed. The importance of consistent initial conditions in the direct numerical simulation of compressible turbulence is dramatically illustrated: spurious oscillations associated with inconsistent initial conditions are avoided, and the fluctuating dilatational field is some two orders of magnitude smaller for a compressible isotropic turbulence. For the isotropic decay it is shown that the choice of initial conditions can change the scaling law for the compressible dissipation. A two-time expansion of the Navier-Stokes equations is used to distinguish compressible acoustic and compressible advective modes. A simple conceptual model for weakly compressible turbulence - a forced linear oscillator - is described. It is shown that the evolution equations for the compressible portions of turbulence can be understood as a forced wave equation with refraction. Acoustic modes of the flow can be amplified by refraction and are able to manifest themselves in large fluctuations of the compressible pressure.
Lin, Tiger W.; Das, Anup; Krishnan, Giri P.; Bazhenov, Maxim; Sejnowski, Terrence J.
2017-01-01
With our ability to record more neurons simultaneously, making sense of these data is a challenge. Functional connectivity is one popular way to study the relationship of multiple neural signals. Correlation-based methods are a set of currently well-used techniques for functional connectivity estimation. However, due to explaining away and unobserved common inputs (Stevenson, Rebesco, Miller, & Körding, 2008), they produce spurious connections. The general linear model (GLM), which models spike trains as Poisson processes (Okatan, Wilson, & Brown, 2005; Truccolo, Eden, Fellows, Donoghue, & Brown, 2005; Pillow et al., 2008), avoids these confounds. We develop here a new class of methods by using differential signals based on simulated intracellular voltage recordings. It is equivalent to a regularized AR(2) model. We also expand the method to simulated local field potential recordings and calcium imaging. In all of our simulated data, the differential covariance-based methods achieved performance better than or similar to the GLM method and required fewer data samples. This new class of methods provides alternative ways to analyze neural signals. PMID:28777719
Lin, Tiger W; Das, Anup; Krishnan, Giri P; Bazhenov, Maxim; Sejnowski, Terrence J
2017-10-01
With our ability to record more neurons simultaneously, making sense of these data is a challenge. Functional connectivity is one popular way to study the relationship of multiple neural signals. Correlation-based methods are a set of currently well-used techniques for functional connectivity estimation. However, due to explaining away and unobserved common inputs (Stevenson, Rebesco, Miller, & Körding, 2008), they produce spurious connections. The general linear model (GLM), which models spike trains as Poisson processes (Okatan, Wilson, & Brown, 2005; Truccolo, Eden, Fellows, Donoghue, & Brown, 2005; Pillow et al., 2008), avoids these confounds. We develop here a new class of methods by using differential signals based on simulated intracellular voltage recordings. It is equivalent to a regularized AR(2) model. We also expand the method to simulated local field potential recordings and calcium imaging. In all of our simulated data, the differential covariance-based methods achieved performance better than or similar to the GLM method and required fewer data samples. This new class of methods provides alternative ways to analyze neural signals.
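The common-input confound described above can be reproduced in a few lines: two signals that share an unobserved drive but have no direct connection show a strong correlation, which disappears once the shared drive is regressed out. This is a minimal illustration of the confound itself, not the differential-covariance estimator the paper develops.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
common = rng.standard_normal(n)             # unobserved shared drive
a = common + 0.5 * rng.standard_normal(n)   # signal A
b = common + 0.5 * rng.standard_normal(n)   # signal B (no direct A-B link)

# Plain correlation reports a strong spurious A-B connection.
r_ab = np.corrcoef(a, b)[0, 1]

def residual(sig, drive):
    """Remove the component of sig explained by the drive (OLS)."""
    beta = np.dot(sig, drive) / np.dot(drive, drive)
    return sig - beta * drive

# Partial correlation given the common input removes the false link.
r_partial = np.corrcoef(residual(a, common), residual(b, common))[0, 1]
print(r_ab > 0.7, abs(r_partial) < 0.1)  # → True True
```

In real recordings the common drive is unobserved, which is precisely why estimators that do not require it, such as the GLM or the differential-covariance methods above, are needed.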
Income inequality and status seeking: searching for positional goods in unequal U.S. States.
Walasek, Lukasz; Brown, Gordon D A
2015-04-01
It is well established that income inequality is associated with lower societal well-being, but the psychosocial causes of this relationship are poorly understood. A social-rank hypothesis predicts that members of unequal societies are likely to devote more of their resources to status-seeking behaviors such as acquiring positional goods. We used Google Correlate to find search terms that correlated with our measure of income inequality, and we controlled for income and other socioeconomic factors. We found that of the 40 search terms used more frequently in states with greater income inequality, more than 70% were classified as referring to status goods (e.g., designer brands, expensive jewelry, and luxury clothing). In contrast, 0% of the 40 search terms used more frequently in states with less income inequality were classified as referring to status goods. Finally, we showed how residual-based analysis offers a new methodology for using Google Correlate to provide insights into societal attitudes and motivations while avoiding confounds and high risks of spurious correlations. © The Author(s) 2015.
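The residual-based analysis mentioned above amounts to a partial correlation: regress the control variable out of both series, then correlate the residuals. A minimal sketch with synthetic state-level numbers (all values are invented; in the confounded setup below, income drives both inequality and search frequency):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical state-level data: search frequency of a term, median
# income, and a Gini coefficient. Income is a common cause of both.
n = 50
income = rng.normal(55, 8, n)                      # median income, $1000s
gini = 0.02 * income + rng.normal(0.45, 0.02, n)   # inequality, partly income-driven
search = 0.05 * income + rng.normal(0, 0.05, n)    # no true inequality effect

def residualize(y, x):
    """Residuals of y after OLS regression on x (plus an intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# Raw correlation is inflated by the shared income driver; correlating
# the income residuals removes that confound.
r_raw = np.corrcoef(search, gini)[0, 1]
r_partial = np.corrcoef(residualize(search, income),
                        residualize(gini, income))[0, 1]
print(round(r_raw, 2), round(r_partial, 2))
```

With this construction the raw correlation is large while the residual-based correlation collapses toward zero, which is the spurious-correlation pattern the residual approach is designed to guard against.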
Mulholland, David J; Cox, Michael; Read, Jason; Rennie, Paul; Nelson, Colleen
2004-05-01
Renilla based reporters are frequently used as transfection controls for luciferase transcriptional reporter assays. However, recent evidence suggests that a commonly used reporter (HSV-thymidine kinase driven Renilla) is responsive to androgen receptor (AR) and glucocorticoid receptors in the presence of the cognate ligands, dihydrotestosterone (DHT) and dexamethasone (DEX), respectively [1]. We further validate this important technical difficulty by illustrating that in LNCaP prostate cancer cells, spurious Renilla luciferase activity is a function of (a) the promoter driving Renilla expression, (b) the presence of co-transfected transgenes, and (c) the androgen responsiveness of the cell line used. Using inhibitors of transcription and translation we showed that transcript interference or translational modulation is not a major means by which androgens affect Renilla luciferase activity. As luciferase reporter assays are a frequent means of studying transcriptional co-regulation in the highly androgen dependent LNCaP cell line, our data serves as a cautionary note that alternative normalization techniques should be employed to avoid misinterpretation of data. Copyright 2004 Wiley-Liss, Inc.
Can the Lorenz-Gauge Potentials Be Considered Physical Quantities?
ERIC Educational Resources Information Center
Heras, Jose A.; Fernandez-Anaya, Guillermo
2010-01-01
Two results support the idea that the scalar and vector potentials in the Lorenz gauge can be considered to be physical quantities: (i) they separately satisfy the properties of causality and propagation at the speed of light and do not imply spurious terms and (ii) they can naturally be written in a manifestly covariant form. In this paper we…
Twist-averaged boundary conditions for nuclear pasta Hartree-Fock calculations
Schuetrumpf, B.; Nazarewicz, W.
2015-10-21
Nuclear pasta phases, present in the inner crust of neutron stars, are associated with nucleonic matter at subsaturation densities arranged in regular shapes. These complex phases, residing in a layer roughly 100 m thick, impact many features of neutron stars. Theoretical quantum-mechanical simulations of nuclear pasta are usually carried out in finite three-dimensional boxes assuming periodic boundary conditions, and the resulting solutions are affected by spurious finite-size effects. To remove such effects, it is convenient to employ the twist-averaged boundary conditions (TABC) used in condensed matter, nuclear matter, and lattice quantum chromodynamics applications. In this work, we study the effectiveness of TABC in the context of pasta phase simulations within nuclear density functional theory. We demonstrate that by applying TABC, reliable results can be obtained from calculations performed in relatively small volumes. By studying various contributions to the total energy, we gain insight into pasta phases in the mid-density range. Future applications will include the TABC extension of the adaptive multiresolution 3D Hartree-Fock solver and TABC Hartree-Fock-Bogoliubov applications to superfluid pasta phases and complex nucleonic topologies, as in fission.
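As a concrete illustration of twist averaging, here is a minimal sketch using a 1D tight-binding ring at half filling as a stand-in for the periodic simulation box (the model and all parameters are illustrative, not the nuclear DFT setup):

```python
import numpy as np

# Twist-averaged boundary conditions (TABC) on a toy 1D tight-binding
# ring (hopping t = 1) at half filling. With a twist angle theta across
# the box, the allowed momenta shift by theta/L.
L = 10                      # sites in the "box"
exact = -2 / np.pi          # energy per site of the infinite chain

def energy_per_site(L, theta):
    """Ground-state energy per site with twist theta across the box."""
    k = (2 * np.pi * np.arange(L) + theta) / L
    eps = -2 * np.cos(k)                      # single-particle band
    return np.sort(eps)[: L // 2].sum() / L   # fill the lowest half

e_pbc = energy_per_site(L, 0.0)               # plain periodic boundaries
thetas = 2 * np.pi * (np.arange(256) + 0.5) / 256
e_tabc = np.mean([energy_per_site(L, th) for th in thetas])

print(f"PBC error  {abs(e_pbc - exact):.2e}")
print(f"TABC error {abs(e_tabc - exact):.2e}")
```

Averaging over twists effectively samples the band between the discrete k-points, so even this small "box" reproduces the infinite-chain energy to high accuracy, whereas plain periodic boundary conditions leave a percent-level shell error — the same finite-size effect TABC suppresses in the pasta simulations.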
The introduction of spurious models in a hole-coupled Fabry-Perot open resonator
NASA Technical Reports Server (NTRS)
Cook, Jerry D.; Long, Kenwyn J.; Heinen, Vernon O.; Stankiewicz, Norbert
1992-01-01
A hemispherical open resonator has previously been used to make relative comparisons of the surface resistivity of metallic thin-film samples in the submillimeter wavelength region. This resonator is fed from a far-infrared laser via a small coupling hole in the center of the concave spherical mirror. The experimental arrangement, while desirable as a coupling geometry for monitoring weak emissions from the cavity, can lead to the introduction of spurious modes into the cavity. Sources of these modes are identified, and a simple alteration of the experimental apparatus to eliminate such modes is suggested.
2014-01-01
Expression quantitative trait loci (eQTL) mapping is a tool that can systematically identify genetic variation affecting gene expression. eQTL mapping studies have shown that certain genomic locations, referred to as regulatory hotspots, may affect the expression levels of many genes. Recently, studies have shown that various confounding factors may induce spurious regulatory hotspots. Here, we introduce a novel statistical method that effectively eliminates spurious hotspots while retaining genuine hotspots. Applied to simulated and real datasets, we validate that our method achieves greater sensitivity while retaining low false discovery rates compared to previous methods. PMID:24708878
Acousto-optics bandwidth broadening in a Bragg cell based on arbitrary synthesized signal methods.
Peled, Itay; Kaminsky, Ron; Kotler, Zvi
2015-06-01
In this work, we present the advantages of driving a multichannel acousto-optical deflector (AOD) with a digitally synthesized multifrequency RF signal. We demonstrate a significant bandwidth broadening of ∼40% by providing well-tuned phase control of the array transducers. Moreover, using a multifrequency, complex signal, we manage to suppress the harmonic deflections and return most of the spurious energy to the main beam. This method allows us to operate the AOD with more than an octave of bandwidth with negligible spurious energy going to the harmonic beams and a total bandwidth broadening of over 70%.
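One benefit of digitally synthesizing the multifrequency drive can be sketched by comparing the crest factor of a naive equal-phase tone comb with one using a standard phase schedule (Newman phases, phi_n = pi*n^2/N); the tone count, band, and sample rate below are illustrative, not the paper's values:

```python
import numpy as np

# Crest factor (peak / RMS) of a synthesized multi-tone RF drive.
# Lower crest factor keeps the amplifier and transducer in their
# linear range, which helps suppress spurious harmonic content.
fs = 500e6                                   # sample rate, Hz (assumed)
t = np.arange(4096) / fs
freqs = np.linspace(60e6, 90e6, 16)          # 16 tones in a hypothetical AOD band

def crest_factor(x):
    return np.max(np.abs(x)) / np.sqrt(np.mean(x ** 2))

# Equal phases: all tones align at t = 0, producing a huge peak.
zero_phase = np.sum([np.cos(2 * np.pi * f * t) for f in freqs], axis=0)
# Newman phases: a classic quadratic phase schedule that flattens the envelope.
newman = np.sum([np.cos(2 * np.pi * f * t + np.pi * n ** 2 / len(freqs))
                 for n, f in enumerate(freqs)], axis=0)

print(crest_factor(zero_phase), crest_factor(newman))
```

The Newman schedule is just one well-known choice; the paper's point is that a digital synthesizer can impose arbitrary, well-tuned per-tone phases rather than whatever phases analog mixing happens to produce.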
Methodological Caveats in the Detection of Coordinated Replay between Place Cells and Grid Cells
Trimper, John B.; Trettel, Sean G.; Hwaun, Ernie; Colgin, Laura Lee
2017-01-01
At rest, hippocampal “place cells,” neurons with receptive fields corresponding to specific spatial locations, reactivate in a manner that reflects recently traveled trajectories. These “replay” events have been proposed as a mechanism underlying memory consolidation, or the transfer of a memory representation from the hippocampus to neocortical regions associated with the original sensory experience. Accordingly, it has been hypothesized that hippocampal replay of a particular experience should be accompanied by simultaneous reactivation of corresponding representations in the neocortex and in the entorhinal cortex, the primary interface between the hippocampus and the neocortex. Recent studies have reported that coordinated replay may occur between hippocampal place cells and medial entorhinal cortex grid cells, cells with multiple spatial receptive fields. Assessing replay in grid cells is problematic, however, as the cells exhibit regularly spaced spatial receptive fields in all environments and, therefore, coordinated replay between place cells and grid cells may be detected by chance. In the present report, we adapted analytical approaches utilized in recent studies of grid cell and place cell replay to determine the extent to which coordinated replay is spuriously detected between grid cells and place cells recorded from separate rats. For a subset of the employed analytical methods, coordinated replay was detected spuriously in a significant proportion of cases in which place cell replay events were randomly matched with grid cell firing epochs of equal duration. More rigorous replay evaluation procedures and minimum spike count requirements greatly reduced the amount of spurious findings. These results provide insights into aspects of place cell and grid cell activity during rest that contribute to false detection of coordinated replay. The results further emphasize the need for careful controls and rigorous methods when testing the hypothesis that place cells and grid cells exhibit coordinated replay. PMID:28824388
Lippi, Giuseppe; Cervellin, Gianfranco; Mattiuzzi, Camilla
2013-01-01
Background: A number of preanalytical activities strongly influence sample quality, especially those related to sample collection. Since blood drawing through intravenous catheters is reported as a potential source of erythrocyte injury, we performed a critical review and meta-analysis of the risk of catheter-related hemolysis. Materials and methods: We performed a systematic search on PubMed, Web of Science and Scopus to estimate the risk of spurious hemolysis in blood samples collected from intravenous catheters. A meta-analysis with calculation of the Odds ratio (OR) and Relative risk (RR), along with 95% Confidence intervals (95% CI), was carried out using a random-effects model. Results: Fifteen articles including 17 studies were finally selected. The total number of patients was 14,796 in 13 studies assessing catheter and evacuated tubes versus straight needle and evacuated tubes, and 1251 in 4 studies assessing catheter and evacuated tubes versus catheter and manual aspiration. A significant risk of hemolysis was found in studies assessing catheter and evacuated tubes versus straight needle and evacuated tubes (random effect OR 3.4; 95% CI = 2.9–3.9 and random effect RR 1.07; 95% CI = 1.06–1.08), as well as in studies assessing catheter and evacuated tubes versus catheter and manual aspiration of blood (OR 3.7; 95% CI = 2.7–5.1 and RR 1.32; 95% CI = 1.24–1.40). Conclusions: Sample collection through intravenous catheters is associated with a significantly higher risk of spurious hemolysis compared with standard blood drawing by straight needle, and this risk is further amplified when intravenous catheters are combined with primary evacuated blood tubes rather than manual aspiration. PMID:23894864
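The per-study effect sizes pooled in such a meta-analysis come from 2x2 tables; a minimal sketch of the OR (with its 95% CI from the log-OR standard error) and the RR, using hypothetical counts not taken from any of the included studies:

```python
import math

# Hypothetical 2x2 table: hemolyzed vs. acceptable samples by draw method.
hemolyzed_cath, ok_cath = 120, 880        # catheter + evacuated tube
hemolyzed_needle, ok_needle = 40, 960     # straight needle + evacuated tube

# Odds ratio and Wald 95% CI on the log-odds scale.
or_ = (hemolyzed_cath * ok_needle) / (hemolyzed_needle * ok_cath)
se_log_or = math.sqrt(1 / hemolyzed_cath + 1 / ok_cath +
                      1 / hemolyzed_needle + 1 / ok_needle)
lo = math.exp(math.log(or_) - 1.96 * se_log_or)
hi = math.exp(math.log(or_) + 1.96 * se_log_or)

# Relative risk: ratio of hemolysis proportions.
rr = (hemolyzed_cath / (hemolyzed_cath + ok_cath)) / \
     (hemolyzed_needle / (hemolyzed_needle + ok_needle))
print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f}), RR {rr:.2f}")
```

A full random-effects meta-analysis would then pool the per-study log-ORs with, for example, DerSimonian-Laird weights; the sketch covers only the per-study step.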
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vincenti, H.; Vay, J. -L.
Due to discretization effects and truncation to finite domains, many electromagnetic simulations present non-physical modifications of Maxwell's equations in space that may generate spurious signals affecting the overall accuracy of the result. Such modifications occur, for instance, when Perfectly Matched Layers (PMLs) are used at simulation domain boundaries to simulate open media. Another example is the use of arbitrary-order Maxwell solvers with domain decomposition techniques, which may under some conditions involve stencil truncations at subdomain boundaries, resulting in small spurious errors that do eventually build up. In each case, a careful evaluation of the characteristics and magnitude of the errors resulting from these approximations, and of their impact at any frequency and angle, requires detailed analytical and numerical studies. To this end, we present a general analytical approach that enables the evaluation of numerical discretization errors of fully three-dimensional, arbitrary-order finite-difference Maxwell solvers, with arbitrary modification of the local stencil in the simulation domain. The analytical model is validated against simulations of domain decomposition techniques and PMLs, when these are used with very high-order Maxwell solvers, as well as in the infinite-order limit of pseudo-spectral solvers. Results confirm that the new analytical approach enables exact predictions in each case. They also confirm that the domain decomposition technique can be used with very high-order Maxwell solvers and a reasonably low number of guard cells with negligible effects on the overall accuracy of the simulation.
Frequency domain phase noise analysis of dual injection-locked optoelectronic oscillators.
Jahanbakht, Sajad
2016-10-01
Dual injection-locked optoelectronic oscillators (DIL-OEOs) have been introduced as a means to achieve very low-noise microwave oscillations while avoiding the large spurious peaks that occur in the phase noise of conventional single-loop OEOs. In these systems, two OEOs are inter-injection locked to each other. The OEO with the longer optical fiber delay line is called the master OEO, and the other is called the slave OEO. Here, a frequency domain approach, based on the conversion matrix method, for simulating the phase noise spectrum of each OEO in a DIL-OEO system is presented. The validity of the new approach is verified by comparing its results with previously published data in the literature. In the new approach, first, for each of the master and slave OEOs, the power spectral densities (PSDs) of two white and 1/f noise sources are optimized such that the resulting simulated phase noise of that OEO in the free-running state matches its measured phase noise. The approach is then able to simulate the phase noise PSD of both OEOs in the injection-locked state. Because of its short run time, especially compared to previously proposed time domain approaches, the new approach is suitable for optimizing the power injection ratios (PIRs), and potentially other circuit parameters, to achieve good phase noise performance in each of the OEOs. Through various numerical simulations, the optimum PIRs for achieving good phase noise performance are presented and discussed; they agree with previously published results, further verifying the applicability of the new approach. Moreover, some other interesting results regarding the spur levels are also presented.
NASA Astrophysics Data System (ADS)
Zhao, Tianbao; Wang, Juanhuai; Dai, Aiguo
2015-10-01
Many multidecadal atmospheric reanalysis products are available now, but their consistency and reliability are far from perfect. In this study, atmospheric precipitable water (PW) from the National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR), NCEP/Department of Energy (DOE), Modern-Era Retrospective Analysis for Research and Applications (MERRA), Japanese 55-year Reanalysis (JRA-55), JRA-25, ERA-Interim, ERA-40, Climate Forecast System Reanalysis (CFSR), and 20th Century Reanalysis version 2 is evaluated against homogenized radiosonde observations over China during 1979-2012 (1979-2001 for ERA-40). Results suggest that the PW biases in the reanalyses are within ˜20% for most of northern and eastern China, but the reanalyses underestimate the observed PW by 20%-40% over western China and by ˜60% over the southwestern Tibetan Plateau. The newer-generation reanalyses (e.g., JRA-25, JRA-55, CFSR, and ERA-Interim) have smaller root-mean-square errors than the older-generation ones (NCEP/NCAR, NCEP/DOE, and ERA-40). Most of the reanalyses reproduce well the observed PW climatology and interannual variations over China. However, few reanalyses capture the observed long-term PW changes, primarily because they show spurious wet biases before about 2002. This deficiency results mainly from discontinuities in the reanalysis relative humidity fields in the middle-lower troposphere, due to the wet bias in older radiosonde records that are assimilated into the reanalyses. An empirical orthogonal function (EOF) analysis revealed two leading modes that represent the long-term PW changes and El Niño-Southern Oscillation-related interannual variations with robust spatial patterns. The reanalysis products, especially MERRA and JRA-25, roughly capture these EOF modes, which account for over 50% of the total variance. The results show that even during the post-1979 satellite era, discontinuities in radiosonde data can still induce large spurious long-term changes in reanalysis PW and other related fields. Thus, more effort is needed to remove spurious changes in the input data for future long-term reanalyses.
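Computationally, the EOF analysis referred to above is an SVD of the anomaly (time x station) matrix, with explained variance from the squared singular values. A minimal sketch on synthetic data (the trend mode, station count, and noise level are all invented):

```python
import numpy as np

# EOF analysis via SVD: build a synthetic field with one "long-term
# change" mode buried in noise, then recover it as the leading EOF.
rng = np.random.default_rng(0)
nt, ns = 34, 120                       # years x stations (illustrative sizes)
trend = np.linspace(-1, 1, nt)         # a long-term change in time
pattern = rng.normal(0, 1, ns)         # its spatial loading pattern
field = np.outer(trend, pattern) + 0.3 * rng.normal(0, 1, (nt, ns))

anom = field - field.mean(axis=0)      # remove the time mean at each station
u, s, vt = np.linalg.svd(anom, full_matrices=False)
var_frac = s ** 2 / np.sum(s ** 2)     # fraction of variance per EOF mode

pc1 = u[:, 0] * s[0]                   # leading principal-component time series
print(f"EOF1 explains {100 * var_frac[0]:.1f}% of the variance")
```

Here EOF1 dominates and its principal component tracks the imposed trend (up to an arbitrary sign), which mirrors how the paper's leading mode isolates the long-term PW change.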
Digitally Enhanced Heterodyne Interferometry
NASA Technical Reports Server (NTRS)
Shaddock, Daniel; Ware, Brent; Lay, Oliver; Dubovitsky, Serge
2010-01-01
Spurious interference limits the performance of many interferometric measurements. Digitally enhanced interferometry (DEI) improves measurement sensitivity by augmenting conventional heterodyne interferometry with pseudo-random noise (PRN) code phase modulation. DEI effectively changes the measurement problem from one of hardware (optics, electronics), which may deteriorate over time, to one of software (modulation, digital signal processing), which does not. DEI isolates interferometric signals based on their delay. Interferometric signals are effectively time-tagged by phase-modulating the laser source with a PRN code. DEI improves measurement sensitivity by exploiting the autocorrelation properties of the PRN to isolate only the signal of interest and reject spurious interference. The properties of the PRN code determine the degree of isolation.
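The PRN-based isolation that DEI relies on can be sketched at baseband: each interferometric path is tagged by a delayed ±1 code, and correlating with the code at one trial delay recovers that path while all other delays average down by roughly the inverse square root of the code length (the code length, delays, and amplitudes below are illustrative):

```python
import numpy as np

# Delay-based signal isolation with a pseudo-random noise (PRN) code.
rng = np.random.default_rng(2)
n = 4095
code = rng.choice([-1.0, 1.0], n)       # PRN spreading code (+/-1 chips)

def tagged(amplitude, delay):
    """Baseband contribution of one optical path: amplitude x delayed code."""
    return amplitude * np.roll(code, delay)

# Measurement = signal of interest plus a spurious reflection at another delay.
measurement = tagged(1.0, delay=7) + tagged(0.5, delay=120)

def demodulate(x, delay):
    """Correlate with the code at one trial delay; other delays average out."""
    return np.mean(x * np.roll(code, delay))

print(demodulate(measurement, 7), demodulate(measurement, 120))
```

Demodulating at delay 7 recovers amplitude ~1.0 and at delay 120 recovers ~0.5, while any unmatched delay returns only the small code cross-correlation residual, which is the isolation mechanism the abstract describes.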
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hatano, H.; Watanabe, T.
A new system was developed for the reciprocity calibration of acoustic emission transducers in Rayleigh-wave and longitudinal-wave sound fields. In order to reduce interference from spurious waves due to reflections and mode conversions, a large cylindrical block of forged steel was prepared as the transfer medium, and direct and spurious waves were discriminated on the basis of their arrival times. Frequency characteristics of velocity sensitivity to both the Rayleigh wave and the longitudinal wave were determined in the range of 50 kHz–1 MHz by means of electrical measurements, without the use of mechanical sound sources or reference transducers. © 1997 Acoustical Society of America.
Correlating quantum decoherence and material defects in a Josephson qubit
NASA Astrophysics Data System (ADS)
Hite, D. A.; McDermott, R.; Simmonds, R. W.; Cooper, K. B.; Steffen, M.; Nam, S.; Pappas, D. P.; Martinis, J. M.
2004-03-01
Superconducting tunnel junction devices are promising candidates for constructing quantum bits (qubits) for quantum computation because of their inherently low dissipation and ease of scalability by microfabrication. Recently, the Josephson phase qubit has been characterized spectroscopically as having spurious microwave resonators that couple to the qubit and act as a dominant source of decoherence. While the origin of these spurious resonances remains unknown, experimental evidence points to the material system of the tunnel barrier. Here, we focus on our materials research aimed at elucidating and eliminating these spurious resonators. In particular, we have studied the use of high-quality Al films epitaxially grown on Si(111) as the base electrode of the tunnel junction. During each step in the Al/AlOx/Al trilayer growth, we investigated the structure in situ by AES, AED, and LEED. Although tunnel junctions fabricated with these epitaxial base electrodes proved to have oxide barriers that were non-uniform and too thin, their I-V characteristics showed a lowering of subgap currents by a factor of two. Transport measurements will be correlated with morphological structure for a number of devices fabricated with various degrees of crystalline quality.
NASA Astrophysics Data System (ADS)
Garcia-Allende, Pilar Beatriz; Conde, Olga M.; Madruga, Francisco J.; Cubillas, Ana M.; Lopez-Higuera, Jose M.
2008-03-01
A non-intrusive infrared sensor for the detection of spurious elements in an industrial raw material chain has been developed. The system is an extension to the whole near infrared range of the spectrum of a previously designed system based on the Vis-NIR range (400 - 1000 nm). It incorporates a hyperspectral imaging spectrograph able to register simultaneously the NIR reflected spectrum of the material under study along all the points of an image line. The working material has been different tobacco leaf blends mixed with typical spurious elements of this field such as plastics, cardboards, etc. Spurious elements are discriminated automatically by an artificial neural network able to perform the classification with a high degree of accuracy. Due to the high amount of information involved in the process, Principal Component Analysis is first applied to perform data redundancy removal. By means of the extension to the whole NIR range of the spectrum, from 1000 to 2400 nm, the characterization of the material under test is highly improved. The developed technique could be applied to the classification and discrimination of other materials, and, as a consequence of its non-contact operation it is particularly suitable for food quality control.
NASA Astrophysics Data System (ADS)
Kawai, Soshi; Terashima, Hiroshi; Negishi, Hideyo
2015-11-01
This paper addresses issues in high-fidelity numerical simulations of transcritical turbulent flows at supercritical pressure. The proposed strategy builds on a tabulated look-up table method based on REFPROP database for an accurate estimation of non-linear behaviors of thermodynamic and fluid transport properties at the transcritical conditions. Based on the look-up table method we propose a numerical method that satisfies high-order spatial accuracy, spurious-oscillation-free property, and capability of capturing the abrupt variation in thermodynamic properties across the transcritical contact surface. The method introduces artificial mass diffusivity to the continuity and momentum equations in a physically-consistent manner in order to capture the steep transcritical thermodynamic variations robustly while maintaining spurious-oscillation-free property in the velocity field. The pressure evolution equation is derived from the full compressible Navier-Stokes equations and solved instead of solving the total energy equation to achieve the spurious pressure oscillation free property with an arbitrary equation of state including the present look-up table method. Flow problems with and without physical diffusion are employed for the numerical tests to validate the robustness, accuracy, and consistency of the proposed approach.
Ion diffusion may introduce spurious current sources in current-source density (CSD) analysis.
Halnes, Geir; Mäki-Marttunen, Tuomo; Pettersen, Klas H; Andreassen, Ole A; Einevoll, Gaute T
2017-07-01
Current-source density (CSD) analysis is a well-established method for analyzing recorded local field potentials (LFPs), that is, the low-frequency part of extracellular potentials. Standard CSD theory is based on the assumption that all extracellular currents are purely ohmic, and thus neglects the possible impact from ionic diffusion on recorded potentials. However, it has previously been shown that in physiological conditions with large ion-concentration gradients, diffusive currents can evoke slow shifts in extracellular potentials. Using computer simulations, we here show that diffusion-evoked potential shifts can introduce errors in standard CSD analysis, and can lead to prediction of spurious current sources. Further, we here show that the diffusion-evoked prediction errors can be removed by using an improved CSD estimator which accounts for concentration-dependent effects. NEW & NOTEWORTHY Standard CSD analysis does not account for ionic diffusion. Using biophysically realistic computer simulations, we show that unaccounted-for diffusive currents can lead to the prediction of spurious current sources. This finding may be of strong interest for in vivo electrophysiologists doing extracellular recordings in general, and CSD analysis in particular. Copyright © 2017 the American Physiological Society.
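Standard CSD analysis, as referenced above, estimates sources from the second spatial derivative of the LFP along the electrode axis, CSD(z) ≈ -σ d²φ/dz². A minimal one-dimensional sketch with a synthetic potential profile (the conductivity, electrode spacing, and Gaussian LFP bump are assumptions):

```python
import numpy as np

# Standard (purely ohmic) CSD estimate from a laminar LFP profile.
sigma = 0.3          # extracellular conductivity, S/m (assumed)
h = 100e-6           # electrode spacing, m (assumed)
z = np.arange(16) * h
phi = 1e-3 * np.exp(-((z - 8 * h) ** 2) / (2 * (2 * h) ** 2))  # Gaussian LFP bump, V

# Discrete second spatial derivative on the 14 interior channels.
csd = -sigma * (phi[2:] - 2 * phi[1:-1] + phi[:-2]) / h ** 2   # A/m^3

# The positive-potential bump has maximally negative curvature at its
# center, so the estimated source peaks at the center channel.
print(csd.argmax() + 1)   # -> 8, the channel under the bump's peak
```

This sketch is exactly the purely ohmic estimator whose assumptions the paper questions: any slow diffusion-evoked potential shift would enter phi and be misread as a current source, which is why the paper's concentration-aware estimator removes those terms.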
Lu, Deyu
2016-08-05
A systematic route to go beyond the exact exchange plus random phase approximation (RPA) is to include a physical exchange-correlation kernel in the adiabatic-connection fluctuation-dissipation theorem. Previously [D. Lu, J. Chem. Phys. 140, 18A520 (2014)], we found that non-local kernels with a screening length depending on the local Wigner-Seitz radius, r_s(r), suffer an error associated with a spurious long-range repulsion in van der Waals bounded systems, which deteriorates the binding energy curve as compared to RPA. Here, we analyze the source of the error and propose to replace r_s(r) by a global, average r_s in the kernel. Exemplary studies with the Corradini, del Sole, Onida, and Palummo kernel show that while this change does not affect the already outstanding performance in crystalline solids, using an average r_s significantly reduces the spurious long-range tail in the exchange-correlation kernel in van der Waals bounded systems. Finally, when this method is combined with further corrections using local dielectric response theory, the binding energy of the Kr dimer is improved threefold as compared to RPA.
VizieR Online Data Catalog: VLA-COSMOS 3 GHz Large Project (Smolcic+, 2017)
NASA Astrophysics Data System (ADS)
Smolcic, V.; Novak, M.; Bondi, M.; Ciliegi, P.; Mooley, K. P.; Schinnerer, E.; Zamorani, G.; Navarrete, F.; Bourke, S.; Karim, A.; Vardoulaki, E.; Leslie, S.; Delhaize, J.; Carilli, C. L.; Myers, S. T.; Baran, N.; Delvecchio, I.; Miettinen, O.; Banfield, J.; Balokovic, M.; Bertoldi, F.; Capak, P.; Frail, D. A.; Hallinan, G.; Hao, H.; Herrera Ruiz, N.; Horesh, A.; Ilbert, O.; Intema, H.; Jelic, V.; Klockner, H.-R.; Krpan, J.; Kulkarni, S. R.; McCracken, H.; Laigle, C.; Middleberg, E.; Murphy, E.; Sargent, M.; Scoville, N. Z.; Sheth, K.
2016-10-01
The catalog contains sources selected down to a 5σ (σ~2.3 uJy/beam) threshold. This catalog can be used for statistical analyses, accompanied with the corrections given in the data & catalog release paper. All completeness & bias corrections and source counts presented in the paper were calculated using this sample. The total fraction of spurious sources in the COSMOS 2 sq.deg. is below 2.7% within this catalog. However, an increase of spurious sources up to 24% at 5.0=5.5 for single component sources (MULTI=0). The total fraction of spurious sources in the COSMOS 2 sq.deg. within such a selected sample is below 0.4%, and the fraction of spurious sources is below 3% even at the lowest S/N (=5.5).
Catalog notes:
1. Maximum ID is 10966 although there are 10830 sources. Some IDs were removed by joining them into multi-component sources.
2. Peak surface brightness of sources [uJy/beam] is not reported, but can be obtained by multiplying SNR with RMS.
3. High NPIX usually indicates extended or very bright sources.
4. Reported positional errors on resolved and extended sources should be considered lower limits.
5. Multicomponent sources have errors and S/N column values set to -99.0.
Additional data information:
Catalog date: 21-Mar-2016
Source extractor: BLOBCAT v1.2 (http://blobcat.sourceforge.net/)
Observations: 384 hours, VLA, S-band (2-4 GHz), A+C array, 192 pointings
Imaging software: CASA v4.2.2 (https://casa.nrao.edu/)
Imaging algorithm: Multiscale multifrequency synthesis on single pointings
Mosaic size: 30000x30000 pixels (3.3 GB)
Pixel size: 0.2x0.2 arcsec2
Median rms noise in the COSMOS 2 sq.deg.: 2.3 uJy/beam
Beam: circular with FWHM = 0.75 arcsec
Bandwidth-smearing peak correction: 0% (no corrections applied)
Resolved criterion: Sint/Speak > 1 + 6*snr^(-1.44)
Total area covered: 2.6 sq.deg. (1 data file).
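The resolved-source criterion quoted in the catalog notes, Sint/Speak > 1 + 6*snr^(-1.44), can be applied directly; the flux values in the example below are invented:

```python
# Resolved/unresolved classification with the catalog's envelope criterion.
def is_resolved(s_int_uJy, s_peak_uJy, snr):
    """True if integrated-to-peak flux ratio exceeds the S/N-dependent envelope."""
    return s_int_uJy / s_peak_uJy > 1 + 6 * snr ** -1.44

# The threshold relaxes with S/N: a faint source needs a much larger
# int/peak ratio to count as resolved than a bright one, because peak
# flux estimates are noisier at low S/N.
print(is_resolved(30.0, 25.0, 6.0))    # ratio 1.2 vs threshold ~1.45 -> False
print(is_resolved(300.0, 250.0, 60.0)) # ratio 1.2 vs threshold ~1.02 -> True
```

The same ratio of integrated to peak flux thus classifies differently at S/N of 6 versus 60, which is the intended behavior of an S/N-dependent envelope.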
Xu, D Z; Deitch, E A; Sittig, K; Qi, L; McDonald, J C
1988-01-01
Mononuclear cells isolated by density gradient centrifugation from the peripheral blood of burn patients, but not healthy volunteers, are contaminated with large numbers of nonmononuclear cells. These contaminating leukocytes could cause artifactual alterations in standard in vitro tests of lymphocyte function. Thus, we compared the in vitro blastogenic response of density gradient purified leukocytes and T-cell purified lymphocytes from 13 burn patients to mitogenic (PHA) and antigenic stimuli. The mitogenic and antigenic responses of the patients' density gradient purified leukocytes were impaired compared to healthy volunteers (p < 0.01). However, when the contaminating nonlymphocytes were removed, the patients' cells responded normally to both stimuli. Thus, density gradient purified mononuclear cells from burn patients are contaminated by leukocytes that are not phenotypically or functionally lymphocytes. Since the lymphocytes from burn patients respond normally to PHA and alloantigens after the contaminating nonlymphocyte cell population has been removed, it appears that in vitro assays of lymphocyte function using density gradient purified leukocytes may give spurious results. PMID:2973771
Power, Jonathan D; Barnes, Kelly A; Snyder, Abraham Z; Schlaggar, Bradley L; Petersen, Steven E
2011-01-01
Here, we demonstrate that subject motion produces substantial changes in the timecourses of resting state functional connectivity MRI (rs-fcMRI) data despite compensatory spatial registration and regression of motion estimates from the data. These changes cause systematic but spurious correlation structures throughout the brain. Specifically, many long-distance correlations are decreased by subject motion, whereas many short-distance correlations are increased. These changes in rs-fcMRI correlations do not arise from, nor are they adequately countered by, some common functional connectivity processing steps. Two indices of data quality are proposed, and a simple method to reduce motion-related effects in rs-fcMRI analyses is demonstrated that should be flexibly implementable across a variety of software platforms. We demonstrate how application of this technique impacts our own data, modifying previous conclusions about brain development. These results suggest the need for greater care in dealing with subject motion, and the need to critically revisit previous rs-fcMRI work that may not have adequately controlled for effects of transient subject movements. PMID:22019881
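A widely used motion index in this line of work is framewise displacement (FD): the summed absolute frame-to-frame change of the six rigid-body realignment parameters, with rotations converted to arc length on a sphere of assumed head radius. A minimal sketch (the 50 mm radius is a common convention and the example motion trace is invented):

```python
import numpy as np

# Framewise displacement (FD) from realignment parameters.
def framewise_displacement(motion_params, radius_mm=50.0):
    """motion_params: (T, 6) array = 3 translations (mm) + 3 rotations (rad)."""
    p = np.asarray(motion_params, dtype=float).copy()
    p[:, 3:] *= radius_mm                 # rotations (rad) -> arc length (mm)
    # Sum of absolute frame-to-frame changes; one FD value per frame pair.
    return np.abs(np.diff(p, axis=0)).sum(axis=1)

# Tiny example: one abrupt movement between frames 1 and 2 and back.
motion = np.zeros((4, 6))
motion[2, 0] = 0.3                        # 0.3 mm translation in x
motion[2, 3] = 0.002                      # 0.002 rad rotation -> 0.1 mm arc
fd = framewise_displacement(motion)
print(fd)
```

Frames whose FD exceeds a chosen threshold (often a few tenths of a millimeter) are then censored ("scrubbed") before computing functional connectivity, which is the kind of simple, platform-independent correction the abstract advocates.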
Explaining the Relationship between Employment and Juvenile Delinquency.
Staff, Jeremy; Osgood, D Wayne; Schulenberg, John E; Bachman, Jerald G; Messersmith, Emily E
2010-11-28
Most criminological theories predict an inverse relationship between employment and crime, but teenagers' involvement in paid work during the school year is positively correlated with delinquency and substance use. Whether the work-delinquency association is causal or spurious has long been debated. This study estimates the effect of paid work on juvenile delinquency using longitudinal data from the national Monitoring the Future project. We address issues of spuriousness by using a two-level hierarchical model to estimate the relationships of within-individual changes in juvenile delinquency and substance use to those in paid work and other explanatory variables. We also disentangle effects of actual employment from preferences for employment to provide insight about the likely role of time-varying selection factors tied to employment, delinquency, school engagement, and leisure activities. Whereas causal effects of employment would produce differences based on whether and how many hours respondents worked, we found significantly higher rates of crime and substance use among non-employed youth who preferred intensive versus moderate work. Our findings suggest the relationship between high-intensity work and delinquency results from preexisting factors that lead youth to desire varying levels of employment.
Data-driven region-of-interest selection without inflating Type I error rate.
Brooks, Joseph L; Zoumpoulaki, Alexia; Bowman, Howard
2017-01-01
In ERP and other large multidimensional neuroscience data sets, researchers often select regions of interest (ROIs) for analysis. The method of ROI selection can critically affect the conclusions of a study by causing the researcher to miss effects in the data or to detect spurious effects. In practice, to avoid inflating Type I error rate (i.e., false positives), ROIs are often based on a priori hypotheses or independent information. However, this can be insensitive to experiment-specific variations in effect location (e.g., latency shifts) reducing power to detect effects. Data-driven ROI selection, in contrast, is nonindependent and uses the data under analysis to determine ROI positions. Therefore, it has potential to select ROIs based on experiment-specific information and increase power for detecting effects. However, data-driven methods have been criticized because they can substantially inflate Type I error rate. Here, we demonstrate, using simulations of simple ERP experiments, that data-driven ROI selection can indeed be more powerful than a priori hypotheses or independent information. Furthermore, we show that data-driven ROI selection using the aggregate grand average from trials (AGAT), despite being based on the data at hand, can be safely used for ROI selection under many circumstances. However, when there is a noise difference between conditions, using the AGAT can inflate Type I error and should be avoided. We identify critical assumptions for use of the AGAT and provide a basis for researchers to use, and reviewers to assess, data-driven methods of ROI localization in ERP and other studies. © 2016 Society for Psychophysiological Research.
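The AGAT idea (pool every trial from every condition into one aggregate average, and let that pooled waveform, which is blind to the condition difference under test, fix the ROI) can be sketched as below. The window size and the peak-centering rule are illustrative assumptions, not the authors' exact procedure:

```python
import numpy as np

def agat_roi(trials, win=5):
    """Select a time-window ROI from the aggregate grand average (AGAT):
    average over ALL trials of ALL conditions pooled together, then
    center a fixed-width window on the peak of that pooled waveform."""
    agg = trials.mean(axis=0)                 # pooled average waveform, shape (T,)
    peak = int(np.argmax(np.abs(agg)))
    lo = max(0, peak - win // 2)
    hi = min(trials.shape[1], lo + win)
    return lo, hi

def roi_means(cond_a, cond_b, lo, hi):
    """Per-condition mean amplitudes inside the selected ROI window,
    ready for the actual between-condition test."""
    return cond_a[:, lo:hi].mean(), cond_b[:, lo:hi].mean()
```

Because the window location is chosen from the pooled average rather than from the condition contrast, it can track experiment-specific latency shifts without (under the stated assumptions) biasing the subsequent test.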
NASA Astrophysics Data System (ADS)
Silbermann, C. B.; Ihlemann, J.
2016-03-01
Continuum Dislocation Theory (CDT) relates gradients of plastic deformation in crystals to the presence of geometrically necessary dislocations. The dislocation tensor is therefore introduced as an additional thermodynamic state variable that reflects the tensorial properties of dislocation ensembles. Moreover, the CDT captures both the strain energy from the macroscopic deformation of the crystal and the elastic energy of the dislocation network, as well as the dissipation of energy due to dislocation motion. The present contribution deals with the geometrically linear CDT. More precisely, the focus is on the role of dislocation kinematics for single and multi-slip and its consequences for the field equations. Here, the number of active slip systems plays a crucial role, since it restricts the degrees of freedom of plastic deformation. Special attention is paid to the definition of proper, well-defined invariants of the dislocation tensor in order to avoid any spurious dependence of the resulting field equations on the coordinate system. It is shown how a slip-system-based approach can be in accordance with the tensor nature of the involved quantities. At first, only dislocation glide in one active slip system of the crystal is allowed. Then, the special case of two orthogonal (interacting) slip systems is considered and the governing field equations are presented. In addition, the structure and symmetry of the backstress tensor are investigated from the viewpoint of thermodynamic consistency. The results are in turn used to simplify the set of field equations and to prepare for a robust numerical implementation.
China’s Currency: Economic Issues and Options for U.S. Trade Policy
2008-05-22
otherwise, the results may represent nothing more than spurious correlation. One rationale is called the "Balassa-Samuelson" effect: as countries get richer...the mobility of labor and capital in China may interfere with the Balassa-Samuelson effect. Cheung et al. are able to replicate others' results...overall U.S. trade deficit is unsustainable, and revaluing the yuan would reduce it. This goes beyond an argument that China has fixed the yuan at an
Link prediction in the network of global virtual water trade
NASA Astrophysics Data System (ADS)
Tuninetti, Marta; Tamea, Stefania; Laio, Francesco; Ridolfi, Luca
2016-04-01
Through the international food-trade, water resources are 'virtually' transferred from the country of production to the country of consumption. The international food-trade, thus, implies a network of virtual water flows from exporting to importing countries (i.e., nodes). Given the dynamical behavior of the network, where food-trade relations (i.e., links) are created and dismissed every year, link prediction becomes a challenge. In this study, we propose a novel methodology for link prediction in the virtual water network. The model aims at identifying the main factors (among 17 different variables) driving the creation of a food-trade relation between any two countries, along the period between 1986 and 2011. Furthermore, the model can be exploited to investigate the network configuration in the future, under different possible (climatic and demographic) scenarios. The model grounds the existence of a link between any two nodes on the link weight (i.e., the virtual water flow): a link exists when the nodes exchange a minimum (fixed) volume of virtual water. Starting from a set of potential links between any two nodes, we fit the associated virtual water flows (both the real and the null ones) by means of multivariate linear regressions. Then, links with estimated flows higher than a minimum value (i.e., threshold) are considered active-links, while the others are non-active ones. The discrimination between active and non-active links through the threshold introduces an error (called link-prediction error) because some real links are lost (i.e., missed links) and some non-existing links (i.e., spurious links) are inevitably introduced in the network. The major drivers are those significantly minimizing the link-prediction error. Once the structure of the unweighted virtual water network is known, we apply, again, linear regressions to assess the major factors driving the fluxes traded along (modelled) active-links. 
Results indicate that, on the one hand, population and fertilizer use, together with link properties (such as the distance between nodes), are the major factors driving link creation; on the other hand, population, distance, and gross domestic product are essential to model the flux magnitude. The results are promising: the model correctly predicts 85% of the 16,422 food-trade links (15% are missed) while spuriously adding only 5% of non-existing links to the real network. The link-prediction error, evaluated as the sum of the percentages of missed and spurious links, is around 20% and is constant over the study period. Only 0.01% of the global virtual water flow is traded along missed links, and an even lower flow is added by the spurious links (0.003%).
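The thresholding and error-scoring steps described above can be sketched as follows; the percentage base (the real link count) matches how the abstract quotes its figures, but the threshold value is an illustrative assumption:

```python
import numpy as np

def predict_links(est_flow, threshold):
    """A candidate link is classed 'active' when its regression-estimated
    virtual water flow meets the minimum-volume threshold."""
    return np.asarray(est_flow) >= threshold

def link_prediction_error(pred, real):
    """Missed = real links the model fails to predict; spurious =
    predicted links that do not exist. Both as % of the real link count."""
    pred, real = np.asarray(pred, bool), np.asarray(real, bool)
    n_real = real.sum()
    missed = 100.0 * (real & ~pred).sum() / n_real
    spurious = 100.0 * (pred & ~real).sum() / n_real
    return missed, spurious
```

The drivers retained in the final model are then those whose inclusion minimizes `missed + spurious`.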
ERIC Educational Resources Information Center
Olivers, Christian N. L.; Hulleman, Johan; Spalek, Thomas; Kawahara, Jun-ichiro; Di Lollo, Vincent
2011-01-01
The attentional blink is the marked deficit in awareness of a 2nd target (T2) when it is presented shortly after the 1st target (T1) in a stream of distractors. When the distractors between T1 and T2 are replaced by even more targets, the attentional blink is reduced or absent, indicating that the attentional blink results from online selection…
Numerical Filtering of Spurious Transients in a Satellite Scanning Radiometer: Application to CERES
NASA Technical Reports Server (NTRS)
Smith, G. Louis; Pandey, D. K.; Lee, Robert B., III; Barkstrom, Bruce R.; Priestley, Kory J.
2002-01-01
The Clouds and the Earth's Radiant Energy System (CERES) scanning radiometer was designed to provide high-accuracy measurements of the radiances from the earth. Calibration testing of the instruments showed the presence of an undesired slow transient in the measurements of all channels, at 1% to 2% of the signal. Analysis of the data showed that the transient consists of a single linear mode. The characteristic time of this mode is 0.3 to 0.4 s, much greater than the 8-10-ms response time of the detector, so that it is well separated from the detector response. A numerical filter was designed for the removal of this transient from the measurements. Results show no trace of the transient remaining after application of the numerical filter. The characterization of the slow mode on the basis of ground calibration data is discussed, and flight results are shown for the CERES instruments aboard the Tropical Rainfall Measurement Mission and Terra spacecraft. The primary influence of the slow mode is in the calibration of the instrument and the in-flight validation of the calibration. This method may be applicable to other radiometers that strive for high accuracy and encounter a slow spurious mode, regardless of the underlying physics.
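A filter of the kind described, removing a single slow first-order mode that is driven by the signal and coupled into the measurement at the 1-2% level, can be sketched as below. This is not the CERES flight algorithm; the discrete first-order model, the 0.35 s time constant, and the 1.5% coupling are illustrative assumptions consistent with the numbers quoted in the abstract:

```python
import numpy as np

def remove_slow_mode(measured, dt, tau=0.35, coupling=0.015):
    """Invert the contamination model
        m[n] = x[n] + coupling * s[n],  s[n] = a*s[n-1] + (1-a)*x[n],
    where s is a first-order 'slow' state with time constant tau and
    a = exp(-dt/tau). Solving for x[n] at each sample removes the
    transient exactly under this model."""
    a = np.exp(-dt / tau)
    x = np.empty_like(np.asarray(measured, dtype=float))
    s = 0.0
    for n, m in enumerate(measured):
        # substitute s[n] = a*s + (1-a)*x[n] into m = x[n] + c*s[n]
        x[n] = (m - coupling * a * s) / (1.0 + coupling * (1.0 - a))
        s = a * s + (1.0 - a) * x[n]
    return x
```

Because the slow mode is a single linear mode well separated from the detector response, a one-state inverse filter of this shape suffices; no trace of the transient remains if the model parameters are exact.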
Zhang, Shuzeng; Li, Xiongbing; Jeong, Hyunjo; Hu, Hongwei
2018-05-12
Angle beam wedge transducers are widely used in nonlinear Rayleigh wave experiments because they generate Rayleigh waves easily and produce high-intensity nonlinear waves for detection. When such a transducer is used, spurious harmonics (source nonlinearity) and wave diffraction may occur and affect the measurement results, so it is essential to fully understand its acoustic behavior. This paper experimentally investigates the nonlinear Rayleigh wave beam fields generated and received by angle beam wedge transducers, with theoretical predictions based on the acoustic model developed previously for such transducers [S. Zhang, et al., Wave Motion, 67, 141-159, (2016)]. The source of the spurious harmonics is fully characterized by scrutinizing the nonlinear Rayleigh wave behavior in various materials with different driving voltages. Furthermore, it is shown that the attenuation coefficients for both the fundamental and second harmonic Rayleigh waves can be extracted by comparing the measurements with the predictions when the experiments are conducted at many locations along the propagation path. A technique is developed to evaluate the material nonlinearity by making appropriate corrections for source nonlinearity, diffraction, and attenuation. The nonlinear parameters of three aluminum alloy specimens - Al 2024, Al 6061 and Al 7075 - are measured, and the results indicate that the measurements can be significantly improved using the proposed method. Copyright © 2018. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Yonetani, Yoshiteru
2005-04-01
We report that a severe artifact appeared in a molecular dynamics simulation of bulk water using a long cut-off length of 18 Å. Our result shows that increasing the cut-off length does not always improve the simulation result; on the contrary, the use of a long cut-off length can lead to a spurious result. This suggests that simulations of solvated biomolecules using such long cut-off lengths, which have often been performed, may contain unexpected artifacts.
Haynes, S E
1983-10-01
It is widely known that linear restrictions involve bias. What is not known is that some linear restrictions are especially dangerous for hypothesis testing. For some, the expected value of the restricted coefficient does not lie between (among) the true unconstrained coefficients, which implies that the estimate is not a simple average of these coefficients. In this paper, the danger is examined regarding the additive linear restriction almost universally imposed in statistical research--the restriction of symmetry. Symmetry implies that the response of the dependent variable to a unit decrease in an explanatory variable is identical, but of opposite sign, to the response to a unit increase. The 1st section of the paper demonstrates theoretically that a coefficient restricted by symmetry (unlike coefficients embodying other additive restrictions) is not a simple average of the unconstrained coefficients because the relevant interacted variables are inversely correlated by definition. The next section shows that, under the restriction of symmetry, fertility in Finland from 1885-1925 appears to respond in a prolonged manner to infant mortality (significant and positive with a lag of 4-6 years), suggesting a response to expected deaths. However, unconstrained estimates indicate that this finding is spurious. When the restriction is relaxed, the dominant response is rapid (significant and positive with a lag of 1-2 years) and stronger for declines in mortality, supporting an asymmetric response to actual deaths. For 2 reasons, the danger of the symmetry restriction may be especially pervasive. 1st, unlike most other linear constraints, symmetry is passively imposed merely by ignoring the possibility of asymmetry.
2nd, models in a wide range of fields--including macroeconomics (e.g., demand for money, consumption, and investment models, and the Phillips curve), international economics (e.g., intervention models of central banks), and labor economics (e.g., sticky wage models)--predict asymmetry. The conclusion of the study is that, to avoid spurious hypothesis testing, empirical research should systematically test for asymmetry, especially when predicted by theory.
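The recommended test, relaxing the symmetry restriction by giving increases and decreases of the explanatory variable their own coefficients, can be sketched with a simple first-difference regression. This is a hypothetical setup for illustration, not the paper's fertility model:

```python
import numpy as np

def fit_asymmetric(y, x):
    """Relax the symmetry restriction: decompose changes in x into
    separate 'increase' and 'decrease' regressors. Symmetry is then the
    testable special case in which the two coefficients are equal."""
    dx = np.diff(x)
    up = np.where(dx > 0, dx, 0.0)   # unit increases
    dn = np.where(dx < 0, dx, 0.0)   # unit decreases (kept negative)
    X = np.column_stack([np.ones_like(up), up, dn])
    beta, *_ = np.linalg.lstsq(X, np.diff(y), rcond=None)
    return beta                      # (const, b_up, b_dn)
```

The restricted (symmetric) model corresponds to imposing b_up = b_dn; comparing the unconstrained estimates reveals whether that restriction is driving the result.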
Achour, Brahim; Dantonio, Alyssa; Niosi, Mark; Novak, Jonathan J; Fallon, John K; Barber, Jill; Smith, Philip C; Rostami-Hodjegan, Amin; Goosen, Theunis C
2017-10-01
Quantitative characterization of UDP-glucuronosyltransferase (UGT) enzymes is valuable in glucuronidation reaction phenotyping, predicting metabolic clearance and drug-drug interactions using extrapolation exercises based on pharmacokinetic modeling. Different quantitative proteomic workflows have been employed to quantify UGT enzymes in various systems, with reports indicating large variability in expression, which cannot be explained by interindividual variability alone. To evaluate the effect of methodological differences on end-point UGT abundance quantification, eight UGT enzymes were quantified in 24 matched liver microsomal samples by two laboratories using stable isotope-labeled (SIL) peptides or quantitative concatemer (QconCAT) standard, and measurements were assessed against catalytic activity in seven enzymes (n = 59). There was little agreement between individual abundance levels reported by the two methods; only UGT1A1 showed strong correlation [Spearman rank order correlation (Rs) = 0.73, P < 0.0001; R2 = 0.30; n = 24]. SIL-based abundance measurements correlated well with enzyme activities, with correlations ranging from moderate for UGTs 1A6, 1A9, and 2B15 (Rs = 0.52-0.59, P < 0.0001; R2 = 0.34-0.58; n = 59) to strong correlations for UGTs 1A1, 1A3, 1A4, and 2B7 (Rs = 0.79-0.90, P < 0.0001; R2 = 0.69-0.79). QconCAT-based data revealed generally poor correlation with activity, whereas moderate correlations were shown for UGTs 1A1, 1A3, and 2B7. Spurious abundance-activity correlations were identified in the cases of UGT1A4/2B4 and UGT2B7/2B15, which could be explained by correlations of protein expression between these enzymes. Consistent correlation of UGT abundance with catalytic activity, demonstrated by the SIL-based dataset, suggests that quantitative proteomic data should be validated against catalytic activity whenever possible.
In addition, metabolic reaction phenotyping exercises should consider spurious abundance-activity correlations to avoid misleading conclusions. Copyright © 2017 by The American Society for Pharmacology and Experimental Therapeutics.
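A quick way to screen for the kind of spurious abundance-activity correlation described above is a first-order partial correlation: correlate one enzyme's abundance with another enzyme's activity while controlling for the second enzyme's own abundance. The following numpy sketch is illustrative only and is not the authors' statistical workflow:

```python
import numpy as np

def pearson(a, b):
    """Plain Pearson correlation of two 1-D samples."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    a, b = a - a.mean(), b - b.mean()
    return (a @ b) / np.sqrt((a @ a) * (b @ b))

def partial_corr(x, y, z):
    """Correlation of x and y after controlling for z, via the standard
    first-order partial correlation formula. If corr(x, y) is large but
    partial_corr(x, y, z) is near zero, the x-y association is likely
    driven by their shared association with z (here, co-expression)."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / np.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))
```

Applied to, say, UGT1A4 abundance versus UGT2B4 activity with UGT2B4 abundance as the control variable, a collapse of the partial correlation would flag the raw correlation as spurious.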
NASA Astrophysics Data System (ADS)
Zhang, Shupeng; Yi, Xue; Zheng, Xiaogu; Chen, Zhuoqi; Dan, Bo; Zhang, Xuanze
2014-11-01
In this paper, a global carbon assimilation system (GCAS) is developed for optimizing the global land surface carbon flux at 1° resolution using multiple ecosystem models. In GCAS, three ecosystem models, Boreal Ecosystem Productivity Simulator, Carnegie-Ames-Stanford Approach, and Community Atmosphere Biosphere Land Exchange, produce the prior fluxes, and an atmospheric transport model, Model for OZone And Related chemical Tracers, is used to calculate atmospheric CO2 concentrations resulting from these prior fluxes. A local ensemble Kalman filter is developed to assimilate atmospheric CO2 data observed at 92 stations to optimize the carbon flux for six land regions, and the Bayesian model averaging method is implemented in GCAS to calculate the weighted average of the optimized fluxes based on the individual ecosystem models. The weights for the models are assigned according to the closeness of their forecasted CO2 concentrations to observations. Results of this study show that the model weights vary in time and space, allowing for an optimum utilization of the different strengths of the different ecosystem models. It is also demonstrated that spatial localization is an effective technique to avoid spurious optimization results for regions that are not well constrained by the atmospheric data. Based on the multimodel optimized flux from GCAS, we found that the average global terrestrial carbon sink over the 2002-2008 period is 2.97 ± 1.1 PgC yr-1, and the sinks are 0.88 ± 0.52, 0.27 ± 0.33, 0.67 ± 0.39, 0.90 ± 0.68, 0.21 ± 0.31, and 0.04 ± 0.08 PgC yr-1 for North America, South America, Africa, Eurasia, Tropical Asia, and Australia, respectively. This multimodel GCAS can be used to improve global carbon cycle estimation.
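The Bayesian model averaging step, weighting each ecosystem model by how close its forecast CO2 concentration comes to observation, can be sketched as below. The Gaussian likelihood form of the weights is an illustrative assumption; the abstract specifies only that weights follow forecast closeness:

```python
import numpy as np

def bma_weights(forecasts, obs, sigma=1.0):
    """Weight each model by the (assumed Gaussian) likelihood of the
    observed CO2 concentration given its forecast, then normalize so
    the weights sum to one."""
    forecasts = np.asarray(forecasts, float)
    ll = -0.5 * ((forecasts - obs) / sigma) ** 2
    w = np.exp(ll - ll.max())        # subtract max for numerical stability
    return w / w.sum()

def bma_average(fluxes, weights):
    """Weighted average of the per-model optimized fluxes."""
    return np.tensordot(weights, np.asarray(fluxes, float), axes=1)
```

Recomputing the weights per region and per time window is what lets them "vary in time and space" as the abstract describes.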
NASA Astrophysics Data System (ADS)
Kawana, Kojiro; Tanikawa, Ataru; Yoshida, Naoki
2018-03-01
We run a suite of hydrodynamics simulations of tidal disruption events (TDEs) of a white dwarf (WD) by a black hole (BH) with a wide range of WD/BH masses and orbital parameters. We implement nuclear reactions to study nucleosynthesis and its dynamical effect through release of nuclear energy. The released nuclear energy effectively increases the fraction of unbound ejecta. This effect is weaker for a heavy WD with 1.2 M⊙, because the specific orbital energy distribution of the debris is predominantly determined by the tidal force, rather than by the explosive reactions. The elemental yield of a TDE depends critically on the initial composition of a WD, while the BH mass and the orbital parameters also affect the total amount of synthesized elements. Tanikawa et al. (2017) find that simulations of WD-BH TDEs with low resolution suffer from spurious heating and inaccurate nuclear reaction results. In order to examine the validity of our calculations, we compare the amounts of the synthesized elements with the upper limits of them derived in a way where we can avoid uncertainties due to low resolution. The results are largely consistent, and thus support our findings. We find particular TDEs where early self-intersection of a WD occurs during the first pericenter passage, promoting formation of an accretion disk. We expect that relativistic jets and/or winds would form in these cases because accretion rates would be super-Eddington. The WD-BH TDEs result in a variety of events depending on the WD/BH mass and pericenter radius of the orbit.
NASA Astrophysics Data System (ADS)
Kawana, Kojiro; Tanikawa, Ataru; Yoshida, Naoki
2018-07-01
We run a suite of hydrodynamic simulations of tidal disruption events (TDEs) of a white dwarf (WD) by a black hole (BH) with a wide range of WD/BH masses and orbital parameters. We implement nuclear reactions to study nucleosynthesis and its dynamical effect through release of nuclear energy. The released nuclear energy effectively increases the fraction of unbound ejecta. This effect is weaker for a heavy WD with 1.2 M⊙, because the specific orbital energy distribution of the debris is predominantly determined by the tidal force, rather than by the explosive reactions. The elemental yield of a TDE depends critically on the initial composition of a WD, while the BH mass and the orbital parameters also affect the total amount of synthesized elements. Tanikawa et al. (2017) find that simulations of WD-BH TDEs with low resolution suffer from spurious heating and inaccurate nuclear reaction results. In order to examine the validity of our calculations, we compare the amounts of the synthesized elements with the upper limits of them derived in a way where we can avoid uncertainties due to low resolution. The results are largely consistent, and thus support our findings. We find particular TDEs where early self-intersection of a WD occurs during the first pericentre passage, promoting formation of an accretion disc. We expect that relativistic jets and/or winds would form in these cases because accretion rates would be super-Eddington. The WD-BH TDEs result in a variety of events depending on the WD/BH mass and pericentre radius of the orbit.
Origin of Low-Energy Spurious Peaks in Spectroscopic Measurements With Silicon Detectors
Giacomini, Gabriele; Huber, Alan; Redus, Robert; ...
2017-11-13
We report that when an uncollimated radioactive X-ray source illuminates a silicon PIN sensor, some ionizing events are generated in the nonimplanted gap between the active area of the sensor and the guard rings (GRs). Carriers can be collected by floating electrodes, i.e., electron accumulation layers at the silicon/oxide interface, and floating GRs. The crosstalk signals generated by these events create spurious peaks, replicas of the main peaks at either lower amplitude or of opposite polarity. Lastly, we explain this phenomenon as crosstalk caused by charge collected on these floating electrodes, which can be analyzed by means of an extension of the Ramo theorem.
An efficient method to compute spurious end point contributions in PO solutions. [Physical Optics
NASA Technical Reports Server (NTRS)
Gupta, Inder J.; Burnside, Walter D.; Pistorius, Carl W. I.
1987-01-01
A method is given to compute the spurious endpoint contributions in the physical optics solution for electromagnetic scattering from conducting bodies. The method is applicable to general three-dimensional structures. The only information required to use the method is the radius of curvature of the body at the shadow boundary. Thus, the method is very efficient for numerical computations. As an illustration, the method is applied to several bodies of revolution to compute the endpoint contributions for backscattering in the case of axial incidence. It is shown that in high-frequency situations, the endpoint contributions obtained using the method are equal to the true endpoint contributions.
Are the low-lying isovector 1+ states scissors vibrations?
NASA Astrophysics Data System (ADS)
Faessler, A.
At the Technische Hochschule in Darmstadt, the group of Richter and coworkers found in 1983/84 low-lying isovector 1+ states in deformed rare earth nuclei. Such states had been predicted in the generalized Bohr-Mottelson model and in the interacting boson model no. 2 (IBA2). In the generalized Bohr-Mottelson model one allows for separate proton and neutron quadrupole deformations. If one includes only static proton and neutron deformations, the generalized Bohr-Mottelson model reduces to the two-rotor model. It describes the excitation energy of these states in good agreement with the data but overestimates the magnetic dipole transition probabilities by a factor of 5. In the interacting boson model (IBA2), where only the outermost nucleons participate in the excitation, the magnetic dipole transition probability is overestimated by only a factor of 2. The too-large collectivity in both models results from the fact that they concentrate the whole strength of the scissors vibrations into one state. A microscopic description is needed to describe the spreading of the scissors strength over several states. For a microscopic determination of these scissors states one uses the Quasi-particle Random Phase Approximation (QRPA). But this approach has a serious difficulty. Since the nucleus is rotated into the intrinsic system for the calculation, the state corresponding to the rotation of the whole nucleus is a spurious state. The usual procedure to remove this spuriosity is to use the Thouless theorem, which says that an operator commuting with the total Hamiltonian (here the total angular momentum, corresponding to a rotation of the whole system) produces the spurious state when applied to the ground state. It further states that the energy of this spurious state lies at zero excitation energy (it is degenerate with the ground state) and that the state is orthogonal to all physical states.
Thus the usual approach is to vary the quadrupole-quadrupole force strength so that one state lies at zero excitation energy and to identify that state with the spurious one. This procedure assumes that the total angular momentum commutes with the total Hamiltonian. But this is not the case, since the total Hamiltonian contains a deformed Saxon-Woods potential. Thus one has to take care explicitly that the spurious state is removed. We do this in our approach by introducing a Lagrange multiplier for each excited state and requiring that these states be orthogonal to the spurious state, which is explicitly constructed by applying the total angular momentum operator to the ground state. To reduce the number of free parameters in the Hamiltonian, we take the Saxon-Woods potential for the deformed nuclei from the literature (with minor adjustments) and determine the proton-proton, neutron-neutron, and proton-neutron quadrupole force constants by requiring that the Hamiltonian commute with the total angular momentum in the (QRPA) ground state. This yields equations fixing all three coupling constants of the quadrupole-quadrupole force, allowing even for isospin symmetry violation. The spin-spin force is taken from the Reid soft-core potential. A possible spin-quadrupole force has been taken from the work of Soloviev, but it turns out not to be important. The calculation shows that the strength of the scissors vibrations is spread over many states. The main 1+ state at around 3 MeV has an overlap of the order of 14% with the scissors state; 50% of the scissors strength is spread over the physical states up to an excitation energy of 6 MeV, and the rest is distributed over higher-lying states. The expectation value of the many-body Hamiltonian in the scissors vibrational state indicates an excitation energy of roughly 7 MeV above the ground state. The results also support the experimental finding that these states are mainly orbital excitations and not very collective.
Normally only one proton and one neutron particle-hole pair participate with large amplitude in forming these states, but those protons and neutrons which are excited perform scissors-type vibrations.
NASA Astrophysics Data System (ADS)
Miyake, Y.; Cully, C. M.; Usui, H.; Nakashima, H.
2013-12-01
In order to increase the accuracy and reliability of in-situ measurements made by scientific spacecraft, it is imperative to develop a comprehensive understanding of spacecraft-plasma interactions. In space environments, not only spacecraft charging but also surrounding plasma disturbances, such as those caused by wake formation, may interfere directly with in-situ measurements. Self-consistent solutions of such phenomena are necessary to assess their effects on scientific spacecraft systems. As our recent activity, we work on the modeling and simulation of the Cluster double-probe instrument in tenuous and cold streaming plasmas [1]. Double-probe electric field sensors are often deployed using wire booms with radii much less than typical Debye lengths of magnetospheric plasmas (millimeters compared to tens of meters). However, in the tenuous and cold streaming plasmas seen in the polar cap and lobe regions, the wire booms acquire a high positive potential due to photoelectron emission and can strongly scatter approaching ions. Consequently, the electrostatic wake formed behind the spacecraft is further enhanced by the presence of the wire booms. We reproduce this process for the case of the Cluster satellite by performing plasma particle-in-cell (PIC) simulations [2], which include the effects of both the spacecraft body and the wire booms simultaneously, on modern supercomputers. The simulations reveal that the effective thickness of the booms for the Cluster Electric Field and Wave (EFW) instrument is magnified from its real thickness (2.2 millimeters) to several meters when the spacecraft potential is at 30-40 volts. Such booms enhance the wake electric field magnitude by a factor of about 2, depending on the spacecraft potential, and play a principal role in explaining the in situ Cluster EFW data showing sinusoidal spurious electric fields of about 10 mV/m amplitude. The boom effects are quantified by comparing PIC simulations with and without wire booms.
The paper also reports recent progress of ongoing PIC simulation research that focuses on spurious electric field generation in subsonic ion flows. Our preliminary simulation results revealed that: (1) there is no apparent wake signature behind the spacecraft in such a condition, but (2) a spurious electric field of over 1 mV/m amplitude is observed in the direction of the flow vector. The observed field amplitude is sometimes comparable to the convection electric field (a few mV/m) associated with the flow. Our analysis also confirmed that the spurious field is caused by a weakly asymmetric potential pattern created by the ion flow. We will present a parametric study of such spurious fields for various conditions of plasma flows. [References] [1] Miyake, Y., C. M. Cully, H. Usui, and H. Nakashima (2013), Plasma particle simulations of wake formation behind a spacecraft with thin wire booms, submitted to J. Geophys. Res. [2] Miyake, Y., and H. Usui (2009), New electromagnetic particle simulation code for the analysis of spacecraft-plasma interactions, Phys. Plasmas, 16, 062904, doi:10.1063/1.3147922.
NASA Astrophysics Data System (ADS)
Zhou, Feng; Chen, Guoxian; Huang, Yuefei; Yang, Jerry Zhijian; Feng, Hui
2013-04-01
A new geometrical conservative interpolation on unstructured meshes is developed for preserving still water equilibrium and positivity of water depth at each iteration of mesh movement, leading to an adaptive moving finite volume (AMFV) scheme for modeling flood inundation over dry and complex topography. Unlike traditional schemes involving position-fixed meshes, the iteration process of the AMFV scheme adaptively moves a smaller number of meshes in response to flow variables calculated in prior solutions and then simulates their posterior values on the new meshes. At each time step of the simulation, the AMFV scheme consists of three parts: an adaptive mesh movement to shift the vertex positions, a geometrical conservative interpolation that remaps the flow variables by summing the total mass over the old meshes to avoid the generation of spurious waves, and a partial differential equation (PDE) discretization to update the flow variables for the new time step. Five test cases are presented to verify the computational advantages of the proposed scheme over nonadaptive methods. The results reveal three attractive features: (i) the AMFV scheme preserves still water equilibrium and positivity of water depth within both the mesh movement and PDE discretization steps; (ii) it improves the shock-capturing capability for handling topographic source terms and wet-dry interfaces by moving triangular meshes to approximate the spatial distribution of time-variant flood processes; (iii) it solves the shallow water equations with relatively higher accuracy and spatial resolution at lower computational cost.
de Lecuona, Itziar
2018-05-31
The current model for reviewing research with human beings basically depends on decision-making processes within research ethics committees. These committees must be aware of the importance of the new digital paradigm based on the large-scale exploitation of datasets, including personal data on health. This article offers guidelines, with the application of the EU's General Data Protection Regulation, for the appropriate evaluation of projects that are based on the use of big data analytics in healthcare. The processes for gathering and using this data constitute a niche where current research is developed. In this context, the existing protocols for obtaining informed consent from participants are outdated, as they are based on the assumption not only that personal data are anonymized, but also that they will remain so in the future. As a result, it is essential that research ethics committees take on new capabilities and revisit values such as privacy and freedom, updating protocols, methodologies and working procedures. This change in the work culture will provide legal security to the personnel involved in research, will make it possible to guarantee the protection of the privacy of the subjects of the data, and will permit orienting the exploitation of data to avoid the commodification of personal data in this era of deidentification, so that research meets actual social needs and not spurious or opportunistic interests disguised as research. Copyright © 2018 SESPAS. Published by Elsevier España, S.L.U. All rights reserved.
Palatini, Paolo; Rosei, Enrico Agabiti; Avolio, Alberto; Bilo, Gregorz; Casiglia, Edoardo; Ghiadoni, Lorenzo; Giannattasio, Cristina; Grassi, Guido; Jelakovich, Bojan; Julius, Stevo; Mancia, Giuseppe; McEniery, Carmel M; O'Rourke, Michael F; Parati, Gianfranco; Pauletto, Paolo; Pucci, Giacomo; Saladini, Francesca; Strazzullo, Pasquale; Tsioufis, Konstantinos; Wilkinson, Ian B; Zanchetti, Alberto
2018-06-01
Whether isolated systolic hypertension in the young (ISHY) implies a worse outcome and needs antihypertensive treatment is still a matter of dispute. ISHY is thought to have different mechanisms from systolic hypertension in the elderly. However, findings from previous studies have provided inconsistent results. From the analysis of the literature, two main lines of research and conceptualization have emerged. Simultaneous assessment of peripheral and central blood pressure led to the identification of a condition called pseudo or spurious hypertension, which was considered an innocent condition. However, an increase in pulse wave velocity has been found by some authors in about 20% of the individuals with ISHY. In addition, obesity and metabolic disturbances have often been documented to be associated with ISHY both in children and young adults. The first aspect to consider whenever evaluating a person with ISHY is the possible presence of white-coat hypertension, which has been frequently found in this condition. In addition, assessment of central blood pressure is useful for identifying ISHY patients whose central blood pressure is normal. ISHY is infrequently mentioned in the guidelines on diagnosis and treatment of hypertension. According to the 2013 European Guidelines on the management of hypertension, people with ISHY should be followed carefully, modifying risk factors by lifestyle changes and avoiding antihypertensive drugs. Only future clinical trials will elucidate whether a benefit can be achieved with pharmacological treatment in some subgroups of ISHY patients with associated risk factors and/or high central blood pressure.
[Evidence-based medicine: modern scientific methods for determining usefulness].
Schmidt, J G
1999-01-01
For quite some time, clinical epidemiology has introduced the art of critical appraisal of evidence as well as the methods of how to design sound clinical studies and trials. Almost unnoticed by most medical institutions, a new hierarchy of evidence has emerged which puts well-thought-out trials, able to document unbiased treatment benefit in terms of patient suffering, above pathophysiological theory. Many controlled trials have shown, in the meantime, that the control of laboratory or other kinds of pathologies and the correction of anatomical abnormalities do not necessarily mean a benefit for the patient. Concepts relating to this dissection of evidence include: surrogate fallacy ("cosmetics" of laboratory results or ligament or cartilage "cosmetics" in surgery), confounding (spurious causal relationships), selection bias (comparison with selected groups), as well as lead-time bias (mistaking earlier diagnosis as increased survival), length bias (overlooking differences in the aggressiveness of diseases as determinants of disease stage distributions) and overdiagnosis bias (mistaking the increasing detection of clinically silent pathologies as improvement of prognosis). Moreover, absolute instead of relative risk reduction needs to be used to measure patient benefit. The incorporation of decision analysis and of the concepts of clinical epidemiology will improve the efficiency and quality of medicine much more effectively than a sole focus on technical medical performance. Evidence-based medicine is the systematic and critical appraisal of medical interventions, based on an understanding of how to avoid the fallacies and biases mentioned.
Multiple hypothesis tracking for the cyber domain
NASA Astrophysics Data System (ADS)
Schwoegler, Stefan; Blackman, Sam; Holsopple, Jared; Hirsch, Michael J.
2011-09-01
This paper discusses how methods used for conventional multiple hypothesis tracking (MHT) can be extended to domain-agnostic tracking of entities from non-kinematic constraints, such as those imposed by cyber attacks, in a potentially dense false alarm background. MHT is widely recognized as the premier method to avoid corrupting tracks with spurious data in the kinematic domain, but it has not been extensively applied to other problem domains. The traditional approach is to tightly couple track maintenance (prediction, gating, filtering, probabilistic pruning, and target confirmation) with hypothesis management (clustering, incompatibility maintenance, hypothesis formation, and N-association pruning). However, by separating the domain-specific track maintenance portion from the domain-agnostic hypothesis management piece, we can begin to apply the wealth of knowledge gained from ground and air tracking solutions to the cyber (and other) domains. These realizations led to the creation of Raytheon's Multiple Hypothesis Extensible Tracking Architecture (MHETA). In this paper, we showcase MHETA for the cyber domain, plugging in a well-established method, CUBRC's INFormation Engine for Real-time Decision making (INFERD), for the association portion of the MHT. The result is a CyberMHT. We demonstrate the power of MHETA-INFERD using simulated data. Using metrics from both the tracking and cyber domains, we show that while no tracker is perfect, by applying MHETA-INFERD, advanced non-kinematic tracks can be captured in an automated way that performs better than non-MHT approaches and decreases analyst response time to cyber threats.
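The gating step of track maintenance mentioned above can be illustrated with a minimal Mahalanobis-distance gate (a generic sketch of the standard kinematic-style test, not MHETA's or INFERD's actual logic; the threshold 9.21 is the 99% chi-square point for two dimensions):

```python
import numpy as np

def gate(z, z_pred, S, gamma=9.21):
    """Accept measurement z for a track if its squared Mahalanobis
    distance from the predicted measurement falls inside the gate."""
    nu = z - z_pred                              # innovation
    d2 = float(nu @ np.linalg.inv(S) @ nu)       # normalized innovation squared
    return d2 <= gamma

S = np.diag([4.0, 4.0])                          # innovation covariance
# A nearby measurement passes the gate and is associated with the track
assert gate(np.array([1.0, 1.0]), np.array([0.0, 0.0]), S)
# A distant (likely spurious) measurement is gated out
assert not gate(np.array([10.0, 0.0]), np.array([0.0, 0.0]), S)
```

In an MHT, measurements that pass the gate for multiple tracks spawn competing hypotheses, which the hypothesis-management layer then prunes.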
Nose, Takayuki; Chatani, Masashi; Otani, Yuki; Teshima, Teruki; Kumita, Shinichirou
2017-03-15
High-dose-rate (HDR) brachytherapy misdeliveries can occur at any institution, and they can cause disastrous results. Even a patient's death has been reported. Misdeliveries could be avoided with real-time verification methods. In 1996, we developed a modified C-arm fluoroscopic verification of the HDR Iridium 192 source position to prevent these misdeliveries. This method provided excellent image quality, sufficient to detect errors, and it has been in clinical use at our institutions for 20 years. The purpose of the current study is to introduce the mechanisms and validity of our straightforward C-arm fluoroscopic verification method. Conventional X-ray fluoroscopic images are degraded by spurious signals and quantum noise from Iridium 192 photons, which make source verification impractical. To improve image quality, we quadrupled the C-arm fluoroscopic X-ray dose per pulse. The pulse rate was reduced by a factor of 4 to keep the average exposure compliant with Japanese medical regulations. The images were then displayed at quarter-frame rates. Sufficient quality was obtained to enable observation of the source position relative to both the applicators and the anatomy. With this method, 2 errors were detected among 2031 treatment sessions for 370 patients within a 6-year period. With the use of a modified C-arm fluoroscopic verification method, treatment errors that were otherwise overlooked were detected in real time. This method should be given consideration for widespread use. Copyright © 2016 Elsevier Inc. All rights reserved.
Harkness, Kate L; Monroe, Scott M
2016-07-01
Life stress is a central factor in the onset and course of a wide range of medical and psychiatric conditions. Determining the precise etiological and pathological consequences of stress, though, has been hindered by weaknesses in prevailing definitional and measurement practices. The purpose of the current paper is to evaluate the primary strategies for defining and measuring major and minor acute life events, chronic stressors, and daily hassles as informed by 3 basic scientific premises. The first premise concerns the manner in which stress is conceptualized and operationally defined, and specifically we assert that stress measures must not conflate the stress exposure with the stress response. The second premise concerns how stress exposures are measured, and we provide guidelines for optimizing standardized and sensitive indicators of life stress. The third premise addresses the consequences of variations in the procedures for life event measurement with regard to the validity of the research designs employed. We show that life stress measures are susceptible to several sources of bias, and if these potential sources of bias are not controlled in the design of the research, spurious findings may result. Our goal is to provide a useful guide for researchers who consider life stress to be an important factor in their theoretical models of disease, wish to incorporate measures of life stress in their research, and seek to avoid the common pitfalls of past measurement practices. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Artifactual responses when recording auditory steady-state responses.
Small, Susan A; Stapells, David R
2004-12-01
The goal of this study was to investigate, in hearing-impaired participants who could not hear the stimuli, the possibility of artifactual auditory steady-state responses (ASSRs) when stimuli are presented at high intensities. ASSRs to single (60 dB HL) and multiple (20 to 50 dB HL; 500 to 4000 Hz) bone-conduction stimuli as well as single 114 to 120 dB HL air-conduction stimuli, were obtained using the Rotman MASTER system, using analog-to-digital (A/D) conversion rates of 500, 1000, and 1250 Hz. Responses (p < 0.05) were considered artifactual when their numbers exceeded that expected by chance. In some conditions, we also obtained ASSRs to "alternated" stimuli (stimuli inverted and ASSRs to the two polarities averaged). A total of 17 subjects were tested. Bone conduction results: 500 Hz A/D rate: Large-amplitude (43 to 1558 nV) artifactual ASSRs were seen at 40 and 50 dB HL for the 500 Hz carrier frequency. Smaller responses (28 to 53 nV) were also recorded at 20 dB HL for the 500 Hz carrier frequency. Artifactual ASSRs (17 to 62 nV) were seen at 40 dB HL and above for the 1000 Hz carrier frequency and at 50 dB HL for the 2000 Hz carrier frequency. Alternating the stimulus polarity decreased the amplitude and occurrence of these artifactual responses but did not eliminate responses for the 500 Hz carrier frequency at 40 dB HL and above. No artifactual responses were recorded for 4000 Hz stimuli for any condition. 1000 Hz A/D rate: Artifactual ASSRs (15 to 523 nV) were seen at 50 dB HL and above for the 500 Hz carrier frequency and 40 dB HL and above for the 1000 Hz carrier frequency. Artifactual responses were also obtained at 50 dB HL for a 2000 Hz carrier frequency but not at lower levels. Artifactual responses were not seen for the 4000 Hz carrier frequency. Alternating the stimulus polarity removed the responses for the 1000 and 2000 Hz carrier frequencies but did not change the results for the 500 Hz carrier frequency. 
1250 Hz A/D rate: Artifactual ASSRs (16 to 220 nV) were seen at 50 dB HL and above for the 500 Hz carrier frequency and 60 dB HL and above for the 1000 Hz carrier frequency. Alternating the stimulus polarity removed the responses for the 1000 Hz carrier frequency but did not change the results for the 500 Hz carrier frequency. There were no artifactual responses at 2000 and 4000 Hz. Air conduction results: 500 Hz A/D rate: Artifactual ASSRs (49 to 153 nV) were seen for 114 to 120 dB HL stimuli for 500 and 1000 Hz carrier frequencies. Alternating the stimulus polarity removed these responses. There were no artifactual responses at 2000 and 4000 Hz. 1000 and 1250 Hz A/D rates: Artifactual ASSRs (19 to 55 nV) were seen for a 120 dB HL stimulus for a 1000 Hz carrier. Alternating the stimulus polarity removed these responses. High-intensity air- or bone-conduction stimuli can produce spurious ASSRs, especially for 500 and 1000 Hz carrier frequencies. High-amplitude stimulus artifact can result in energy that is aliased to exactly the modulation frequency. Choice of signal conditioning (electroencephalogram filter slope and low-pass cutoff) and processing (A/D rate) can avoid spurious responses due to aliasing. However, artifactual responses due to other causes may still occur for bone-conduction stimuli 50 dB HL and higher (and possibly for high-level air conduction). Because the phases of these spurious responses do not invert with inversion of stimulus, the possibility of nonauditory physiologic responses cannot be ruled out. The clinical implications of these results are that artifactual responses may occur for any patient for bone-conduction stimuli at levels greater than 40 dB HL and for high-intensity air-conduction stimuli used to assess patients with profound hearing loss.
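The aliasing mechanism described above, in which stimulus-artifact energy folds exactly onto the modulation frequency, can be checked with a few lines (the 80 Hz modulation rate below is a hypothetical value chosen for illustration; the key point is that an AM sideband at carrier + modulation frequency, sampled at an A/D rate equal to the carrier frequency, aliases precisely to the modulation frequency):

```python
def aliased_freq(f, fs):
    """Frequency at which a tone at f appears after sampling at rate fs."""
    f_mod = f % fs
    return min(f_mod, fs - f_mod)

fs = 1000.0          # A/D rate (Hz)
f_carrier = 1000.0   # carrier frequency (Hz)
f_mod = 80.0         # hypothetical modulation rate (Hz)

# AM stimulus sidebands sit at f_carrier +/- f_mod
upper = aliased_freq(f_carrier + f_mod, fs)
lower = aliased_freq(f_carrier - f_mod, fs)
assert upper == f_mod and lower == f_mod   # both fold exactly onto f_mod
```

This is why the response-detection statistics, which look for energy at the modulation frequency, can flag the aliased artifact as a physiological response, and why raising the A/D rate or sharpening the anti-alias filter changes which carrier frequencies are affected.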
Vincenti, H.; Vay, J. -L.
2015-11-22
Due to discretization effects and truncation to finite domains, many electromagnetic simulations present non-physical modifications of Maxwell's equations in space that may generate spurious signals affecting the overall accuracy of the result. Such modifications for instance occur when Perfectly Matched Layers (PMLs) are used at simulation domain boundaries to simulate open media. Another example is the use of an arbitrary-order Maxwell solver with a domain decomposition technique, which may under some conditions involve stencil truncations at subdomain boundaries, resulting in small spurious errors that eventually build up. In each case, a careful evaluation of the characteristics and magnitude of the errors resulting from these approximations, and their impact at any frequency and angle, requires detailed analytical and numerical studies. To this end, we present a general analytical approach that enables the evaluation of numerical discretization errors of fully three-dimensional, arbitrary-order finite-difference Maxwell solvers, with arbitrary modification of the local stencil in the simulation domain. The analytical model is validated against simulations of the domain decomposition technique and PMLs, when these are used with very high-order Maxwell solvers, as well as in the infinite-order limit of pseudo-spectral solvers. Results confirm that the new analytical approach enables exact predictions in each case. It also confirms that the domain decomposition technique can be used with very high-order Maxwell solvers and a reasonably low number of guard cells with negligible effects on the overall accuracy of the simulation.
NASA Astrophysics Data System (ADS)
De Filippis, G.; Noël, J. P.; Kerschen, G.; Soria, L.; Stephan, C.
2017-09-01
The introduction of the frequency-domain nonlinear subspace identification (FNSI) method in 2013 constitutes one in a series of recent attempts toward developing a realistic, first-generation framework applicable to complex structures. Although this method showed promising capabilities when applied to academic structures, it is still confronted with a number of limitations which need to be addressed. In particular, the removal of nonphysical poles in the identified nonlinear models is a distinct challenge. In the present paper, it is proposed as a first contribution to operate directly on the identified state-space matrices to carry out spurious pole removal. A modal-space decomposition of the state and output matrices is examined to discriminate genuine from numerical poles, prior to estimating the extended input and feedthrough matrices. The final state-space model thus contains physical information only and naturally leads to nonlinear coefficients free of spurious variations. Besides spurious variations due to nonphysical poles, vibration modes lying outside the frequency band of interest may also produce drifts of the nonlinear coefficients. The second contribution of the paper is to include residual terms accounting for the existence of these modes. The proposed improved FNSI methodology is validated numerically and experimentally using a full-scale structure, the Morane-Saulnier Paris aircraft.
An agile frequency synthesizer/RF generator for the SCAMP terminal
NASA Astrophysics Data System (ADS)
Wolfson, Harry M.
1992-09-01
This report describes a combination agile synthesizer and reference frequency generator called the RF Generator, which was developed for use in the Advanced SCAMP (ASCAMP) program. The ASCAMP is a hand-carried, battery-powered, man-portable ground terminal that is being developed for EHF satellite communications. In order to successfully achieve a truly portable terminal, all of the subsystems and components in ASCAMP were designed with the following critical goals: low power, lightweight, and small size. The RF Generator is based on a hybrid design approach of direct digital and direct analog synthesis techniques that was optimized for small size, low power consumption, fast tuning, low spurious output, and low phase noise. The RF Generator was conceived with the philosophy that simplicity of design would lead to a synthesizer that differentiates itself from those used in the past by its ease of fabrication and tuning. By avoiding more complex design approaches, namely, indirect analog (phase-locked loops), a more easily producible design could be achieved. An effort was made to minimize the amount of circuitry in the RF Generator, thereby making trade-offs in performance versus complexity and parts count when it was appropriate.
Melnick, Ronald L; Ward, Jerrold M; Huff, James
2013-01-01
Evidence from studies in animals is essential for identifying chemicals likely to cause or contribute to many diseases in humans, including cancers. Yet, to avoid or delay the implementation of protective public health standards, the chemical industry typically denies cancer causation by agents they produce. The spurious arguments put forward to discount human relevance are often based on inadequately tested hypotheses or modes of action that fail to meet Bradford Hill criteria for causation. We term the industry attacks on the relevance of animal cancer findings as the "War on Carcinogens." Unfortunately, this tactic has been effective in preventing timely and appropriate health protective actions on many economically important yet carcinogenic chemicals, including: arsenic, asbestos, benzene, 1,3-butadiene, formaldehyde, methylene chloride, phthalates, tobacco usage, trichloroethylene [TCE], and others. Recent examples of the "War on Carcinogens" are chemicals causing kidney cancer in animals. Industry consultants argue that kidney tumor findings in rats with exacerbated chronic progressive nephropathy (CPN) are not relevant to humans exposed to these chemicals. We dispute and dismiss this unsubstantiated claim with data and facts, and divulge unprofessional actions from a leading toxicology journal.
On-the-fly reduction of open loops
NASA Astrophysics Data System (ADS)
Buccioni, Federico; Pozzorini, Stefano; Zoller, Max
2018-01-01
Building on the open-loop algorithm we introduce a new method for the automated construction of one-loop amplitudes and their reduction to scalar integrals. The key idea is that the factorisation of one-loop integrands in a product of loop segments makes it possible to perform various operations on-the-fly while constructing the integrand. Reducing the integrand on-the-fly, after each segment multiplication, the construction of loop diagrams and their reduction are unified in a single numerical recursion. In this way we entirely avoid objects with high tensor rank, thereby reducing the complexity of the calculations in a drastic way. Thanks to the on-the-fly approach, which is applied also to helicity summation and to the merging of different diagrams, the speed of the original open-loop algorithm can be further augmented in a very significant way. Moreover, addressing spurious singularities of the employed reduction identities by means of simple expansions in rank-two Gram determinants, we achieve a remarkably high level of numerical stability. These features of the new algorithm, which will be made publicly available in a forthcoming release of the OpenLoops program, are particularly attractive for NLO multi-leg and NNLO real-virtual calculations.
NASA Technical Reports Server (NTRS)
Keppenne, Christian L.; Rienecker, Michele M.; Koblinsky, Chester (Technical Monitor)
2001-01-01
A multivariate ensemble Kalman filter (MvEnKF) implemented on a massively parallel computer architecture has been developed for the Poseidon ocean circulation model and tested with a Pacific Basin model configuration. There are about two million prognostic state-vector variables. Parallelism for the data assimilation step is achieved by regionalization of the background-error covariances that are calculated from the phase-space distribution of the ensemble. Each processing element (PE) collects elements of a matrix measurement functional from nearby PEs. To avoid the introduction of spurious long-range covariances associated with finite ensemble sizes, the background-error covariances are given compact support by means of a Hadamard (element-by-element) product with a three-dimensional canonical correlation function. The methodology and the MvEnKF configuration are discussed. It is shown that the regionalization of the background covariances has a negligible impact on the quality of the analyses. The parallel algorithm is very efficient for large numbers of observations but does not scale well beyond 100 PEs at the current model resolution. On a platform with distributed memory, memory rather than speed is the limiting factor.
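The Hadamard-product localization can be sketched in a toy one-dimensional form (a triangular taper stands in for the paper's three-dimensional canonical correlation function, and the state and ensemble sizes are invented): multiplying the sample covariance element-wise by a compactly supported correlation zeroes out the spurious long-range terms while leaving the variances untouched.

```python
import numpy as np

rng = np.random.default_rng(0)

n, n_ens = 50, 10                     # state size, ensemble size (illustrative)
X = rng.standard_normal((n, n_ens))   # ensemble perturbations
P = X @ X.T / (n_ens - 1)             # raw sample background-error covariance

# Compact-support taper: correlation falls linearly to zero at distance L
L = 10.0
d = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
C = np.clip(1.0 - d / L, 0.0, None)

P_loc = P * C                         # Hadamard (element-by-element) product

assert P_loc[0, 30] == 0.0                       # long-range covariance removed
assert np.allclose(np.diag(P_loc), np.diag(P))   # variances unchanged
```

With only 10 members, the raw entries of P far from the diagonal are pure sampling noise; the taper gives the covariance the compact support the abstract describes.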
Mass-corrections for the conservative coupling of flow and transport on collocated meshes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Waluga, Christian, E-mail: waluga@ma.tum.de; Wohlmuth, Barbara; Rüde, Ulrich
2016-01-15
Buoyancy-driven flow models demand a careful treatment of the mass-balance equation to avoid spurious source and sink terms in the non-linear coupling between flow and transport. In the context of finite elements, it is therefore commonly proposed to employ sufficiently rich pressure spaces, containing piecewise constant shape functions, to obtain local or even strong mass conservation. In three-dimensional computations, this usually requires nonconforming approaches, special meshes or higher order velocities, which make these schemes prohibitively expensive for some applications and complicate the implementation into legacy code. In this paper, we therefore propose a lean and conservatively coupled scheme based on standard stabilized linear equal-order finite elements for the Stokes part and vertex-centered finite volumes for the energy equation. We show that in a weak mass-balance it is possible to recover exact conservation properties by a local flux-correction which can be computed efficiently on the control volume boundaries of the transport mesh. We discuss implementation aspects and demonstrate the effectiveness of the flux-correction by different two- and three-dimensional examples which are motivated by geophysical applications.
Estimating and comparing microbial diversity in the presence of sequencing errors
Chiu, Chun-Huo
2016-01-01
Estimating and comparing microbial diversity are statistically challenging due to limited sampling and possible sequencing errors for low-frequency counts, producing spurious singletons. The inflated singleton count seriously affects statistical analysis and inferences about microbial diversity. Previous statistical approaches to tackle the sequencing errors generally require different parametric assumptions about the sampling model or about the functional form of frequency counts. Different parametric assumptions may lead to drastically different diversity estimates. We focus on nonparametric methods which are universally valid for all parametric assumptions and can be used to compare diversity across communities. We develop here a nonparametric estimator of the true singleton count to replace the spurious singleton count in all methods/approaches. Our estimator of the true singleton count is in terms of the frequency counts of doubletons, tripletons and quadrupletons, provided these three frequency counts are reliable. To quantify microbial alpha diversity for an individual community, we adopt the measure of Hill numbers (effective number of taxa) under a nonparametric framework. Hill numbers, parameterized by an order q that determines the measures’ emphasis on rare or common species, include taxa richness (q = 0), Shannon diversity (q = 1, the exponential of Shannon entropy), and Simpson diversity (q = 2, the inverse of Simpson index). A diversity profile which depicts the Hill number as a function of order q conveys all information contained in a taxa abundance distribution. Based on the estimated singleton count and the original non-singleton frequency counts, two statistical approaches (non-asymptotic and asymptotic) are developed to compare microbial diversity for multiple communities. (1) A non-asymptotic approach refers to the comparison of estimated diversities of standardized samples with a common finite sample size or sample completeness. 
This approach aims to compare diversity estimates for equally-large or equally-complete samples; it is based on the seamless rarefaction and extrapolation sampling curves of Hill numbers, specifically for q = 0, 1 and 2. (2) An asymptotic approach refers to the comparison of the estimated asymptotic diversity profiles. That is, this approach compares the estimated profiles for complete samples or samples whose size tends to be sufficiently large. It is based on statistical estimation of the true Hill number of any order q ≥ 0. In the two approaches, replacing the spurious singleton count by our estimated count, we can greatly remove the positive biases associated with diversity estimates due to spurious singletons and also make fair comparisons across microbial communities, as illustrated in our simulation results and in applying our method to analyze sequencing data from viral metagenomes. PMID:26855872
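The Hill numbers used above have a simple closed form, qD = (Σ p_i^q)^{1/(1−q)}, with the q = 1 case taken as the limit exp(Shannon entropy). A minimal sketch (illustrative counts, not the paper's metagenome data):

```python
import numpy as np

def hill_number(counts, q):
    """Effective number of taxa of order q from abundance counts."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    if q == 1:                   # limit case: exponential of Shannon entropy
        return float(np.exp(-np.sum(p * np.log(p))))
    return float(np.sum(p ** q) ** (1.0 / (1.0 - q)))

even = [20, 20, 20, 20, 20]      # five equally common taxa
assert round(hill_number(even, 0)) == 5   # richness
assert round(hill_number(even, 1)) == 5   # exp(Shannon entropy)
assert round(hill_number(even, 2)) == 5   # inverse Simpson index

uneven = [90, 5, 3, 1, 1]        # one dominant taxon
d0, d1, d2 = (hill_number(uneven, q) for q in (0, 1, 2))
assert d0 > d1 > d2              # higher q down-weights rare taxa
```

Because q = 0 weighs every observed taxon equally, it is the order most inflated by spurious singletons, which is why replacing the singleton count with an estimated true count matters most for richness.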
Modeling Storm Surges Using Discontinuous Galerkin Methods
2016-06-01
devastating impact on coastlines throughout the United States. In order to accurately understand the impacts of storm surges there needs to be an effective ...model. One of the governing systems of equations used to model storm surges’ effects is the Shallow Water Equations (SWE). In this thesis, we solve the...closer to the shoreline. In our simulation, we also learned of the effects spurious waves can have on the results. Due to boundary conditions, a
Radar echo processing with partitioned de-ramp
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dubbert, Dale F.; Tise, Bertice L.
2013-03-19
The spurious-free dynamic range of a wideband radar system is increased by apportioning de-ramp processing across analog and digital processing domains. A chirp rate offset is applied between the received waveform and the reference waveform that is used for downconversion to the intermediate frequency (IF) range. The chirp rate offset results in a residual chirp in the IF signal prior to digitization. After digitization, the residual IF chirp is removed with digital signal processing.
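The partitioned de-ramp idea can be sketched numerically: leave a known residual chirp in the IF signal, digitize, then remove it in software by multiplying with the conjugate chirp. All rates and lengths below are invented for illustration, and a complex analytic signal stands in for the real-valued samples and filtering of an actual receiver.

```python
import numpy as np

fs = 1.0e6                     # digitizer sample rate (Hz), illustrative
T = 1.0e-3                     # record length (s)
t = np.arange(int(fs * T)) / fs

k_res = 2.0e8                  # residual chirp rate after analog de-ramp (Hz/s)
f_if = 50.0e3                  # desired IF tone (Hz)

# Digitized IF signal: the target tone still carries the residual chirp
x = np.exp(1j * 2 * np.pi * (f_if * t + 0.5 * k_res * t ** 2))

# Digital de-chirp: multiply by the conjugate of the residual chirp
x_dc = x * np.exp(-1j * np.pi * k_res * t ** 2)

spec = np.abs(np.fft.fft(x_dc))
peak_hz = np.fft.fftfreq(len(t), 1 / fs)[np.argmax(spec)]
# After de-chirping, the energy compresses into the IF bin
assert abs(peak_hz - f_if) < fs / len(t)
```

Before the digital multiply, the 200 kHz residual sweep spreads the tone across many FFT bins; afterwards it collapses to a single line, which is the compression step the abstract assigns to the digital domain.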
Pulsation Properties of Carbon and Oxygen Red Giants
NASA Astrophysics Data System (ADS)
Percy, J. R.; Huang, D. J.
2015-07-01
We have used up to 12 decades of AAVSO visual observations and the AAVSO VSTAR software package to determine new and/or improved periods of 5 pulsating biperiodic carbon (C-type) red giants and 12 pulsating biperiodic oxygen (M-type) red giants. We have also determined improved periods for 43 additional C-type red giants, in part to search for more biperiodic C-type stars, and for 46 M-type red giants. For a small sample of the biperiodic C-type and M-type stars, we have used wavelet analysis to determine the time scales of the cycles of amplitude increase and decrease. The C-type and M-type stars do not differ significantly in their period ratios (first overtone to fundamental). There is a marginal difference in the lengths of their amplitude cycles. The most important result of this study is that, because of the semiregularity of these stars and the presence of alias, harmonic, and spurious periods, the periods which we and others derive for these stars, especially the smaller-amplitude ones, must be determined and interpreted with great care and caution. For instance, spurious periods of about one year can produce an apparent excess of stars at that period in the period distribution.
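The one-year spurious and alias periods arise through the standard alias relation 1/P_alias = |1/P_true − n/P_samp|, where P_samp is the periodicity of the observing pattern (about a year for seasonal visibility gaps). A quick sketch, with an invented 330-day true period:

```python
def alias_period(p_true, p_samp=365.25, n=1):
    """Alias period (days) produced by a periodic sampling pattern."""
    f_alias = abs(1.0 / p_true - n / p_samp)
    return 1.0 / f_alias

# A red giant with a 330-day period observed with a yearly visibility gap
p = alias_period(330.0)
assert p > 1000.0   # the alias masquerades as a much longer period
```

A true period close to a year produces a very long alias period, and conversely long-period noise can alias into an apparent pile-up of periods near one year, which is the spurious excess the abstract warns about.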
Suppressing Ghost Diffraction in E-Beam-Written Gratings
NASA Technical Reports Server (NTRS)
Wilson, Daniel; Backlund, Johan
2009-01-01
A modified scheme for electron-beam (E-beam) writing used in the fabrication of convex or concave diffraction gratings makes it possible to suppress the ghost diffraction heretofore exhibited by such gratings. Ghost diffraction is a spurious component of diffraction caused by a spurious component of grating periodicity as described below. The ghost diffraction orders appear between the main diffraction orders and are typically more intense than is the diffuse scattering from the grating. At such high intensity, ghost diffraction is the dominant source of degradation of grating performance. The pattern of a convex or concave grating is established by electron-beam writing in a resist material coating a substrate that has the desired convex or concave shape. Unfortunately, as a result of the characteristics of electrostatic deflectors used to control the electron beam, it is possible to expose only a small field - typically between 0.5 and 1.0 mm wide - at a given fixed position of the electron gun relative to the substrate. To make a grating larger than the field size, it is necessary to move the substrate to make it possible to write fields centered at different positions, so that the larger area is synthesized by "stitching" the exposed fields.
ADOLESCENT WORK INTENSITY, SCHOOL PERFORMANCE, AND ACADEMIC ENGAGEMENT.
Staff, Jeremy; Schulenberg, John E; Bachman, Jerald G
2010-07-01
Teenagers working over 20 hours per week perform worse in school than youth who work less. There are two competing explanations for this association: (1) that paid work takes time and effort away from activities that promote achievement, such as completing homework, preparing for examinations, getting help from parents and teachers, and participating in extracurricular activities; and (2) that the relationship between paid work and school performance is spurious, reflecting preexisting differences between students in academic ability, motivation, and school commitment. Using longitudinal data from the ongoing national Monitoring the Future project, this research examines the impact of teenage employment on school performance and academic engagement during the 8th, 10th, and 12th grades. We address issues of spuriousness by using a two-level hierarchical model to estimate the relationships of within-individual changes in paid work to changes in school performance and other school-related measures. Unlike prior research, we also compare youth school performance and academic orientation when they are actually working in high-intensity jobs to when they are jobless and wish to work intensively. Results indicate that the mere wish for intensive work corresponds with academic difficulties in a manner similar to actual intensive work.
Energy Models for One-Carrier Transport in Semiconductor Devices
NASA Technical Reports Server (NTRS)
Jerome, Joseph W.; Shu, Chi-Wang
1991-01-01
Moment models of carrier transport, derived from the Boltzmann equation, made possible the simulation of certain key effects through such realistic assumptions as energy dependent mobility functions. This type of global dependence permits the observation of velocity overshoot in the vicinity of device junctions, not discerned via classical drift-diffusion models, which are primarily local in nature. It was found that a critical role is played in the hydrodynamic model by the heat conduction term. When ignored, the overshoot is inappropriately damped. When the standard choice of the Wiedemann-Franz law is made for the conductivity, spurious overshoot is observed. Agreement with Monte-Carlo simulation in this regime required empirical modification of this law, or nonstandard choices. Simulations of the hydrodynamic model in one and two dimensions, as well as simulations of a newly developed energy model, the RT model, are presented. The RT model, intermediate between the hydrodynamic and drift-diffusion model, was developed to eliminate the parabolic energy band and Maxwellian distribution assumptions, and to reduce the spurious overshoot with physically consistent assumptions. The algorithms employed for both models are the essentially non-oscillatory shock capturing algorithms. Some mathematical results are presented and contrasted with the highly developed state of the drift-diffusion model.
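For reference, the Wiedemann-Franz closure mentioned above is commonly written in hydrodynamic device models as follows (a standard textbook form; the dimensionless constant c_n is the empirical knob whose modification the abstract describes):

```latex
\kappa_n \;=\; \left(\frac{5}{2} + c_n\right)\left(\frac{k_B}{q}\right)^{2}\sigma_n T_n,
\qquad \sigma_n = q\,\mu_n(T_n)\,n,
```

where kappa_n is the electron heat conductivity, sigma_n the conductivity built from the energy-dependent mobility mu_n, n the electron density, and T_n the carrier temperature. Choosing c_n (or replacing the law altogether) changes the heat conduction term that the abstract identifies as the source of spurious overshoot.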
Bagarinao, Epifanio; Tsuzuki, Erina; Yoshida, Yukina; Ozawa, Yohei; Kuzuya, Maki; Otani, Takashi; Koyama, Shuji; Isoda, Haruo; Watanabe, Hirohisa; Maesawa, Satoshi; Naganawa, Shinji; Sobue, Gen
2018-01-01
The stability of the MRI scanner throughout a given study is critical in minimizing hardware-induced variability in the acquired imaging data set. However, MRI scanners do malfunction at times, which could generate image artifacts and would require the replacement of a major component such as its gradient coil. In this article, we examined the effect of low intensity, randomly occurring hardware-related noise due to a faulty gradient coil on brain morphometric measures derived from T1-weighted images and resting state networks (RSNs) constructed from resting state functional MRI. We also introduced a method to detect and minimize the effect of the noise associated with a faulty gradient coil. Finally, we assessed the reproducibility of these morphometric measures and RSNs before and after gradient coil replacement. Our results showed that gradient coil noise, even at relatively low intensities, could introduce a large number of voxels exhibiting spurious significant connectivity changes in several RSNs. However, censoring the affected volumes during the analysis could minimize, if not completely eliminate, these spurious connectivity changes and could lead to reproducible RSNs even after gradient coil replacement.
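The censoring idea can be sketched as follows; the spike-detection metric (an RMS frame-to-frame change, similar in spirit to DVARS) and the threshold are illustrative choices, not the authors' pipeline:

```python
import numpy as np

# Illustrative sketch: flag volumes reached by an outlying frame-to-frame
# signal jump, drop them, then compute a region-by-region connectivity matrix.
def censor_and_correlate(ts, z_thresh=3.0):
    """ts: (n_volumes, n_regions) array of region time series."""
    dvars = np.sqrt(np.mean(np.diff(ts, axis=0)**2, axis=1))  # RMS change per step
    z = (dvars - dvars.mean()) / dvars.std()
    bad = np.concatenate([[False], z > z_thresh])  # flags the noisy volume and its successor
    kept = ts[~bad]
    return np.corrcoef(kept.T), bad

rng = np.random.default_rng(0)
ts = rng.standard_normal((200, 5))
ts[100] += 50.0                      # inject one hardware-noise-like volume
conn, bad = censor_and_correlate(ts)
print(bad.sum(), "volumes censored")
```

Without the censoring step, the single corrupted volume drives spuriously high correlations across all regions; with it, the connectivity matrix is computed from clean volumes only.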
Long-memory and the sea level-temperature relationship: a fractional cointegration approach.
Ventosa-Santaulària, Daniel; Heres, David R; Martínez-Hernández, L Catalina
2014-01-01
Through thermal expansion of oceans and melting of land-based ice, global warming is very likely contributing to the sea level rise observed during the 20th century. The amount by which further increases in global average temperature could affect sea level is only known with large uncertainties due to the limited capacity of physics-based models to predict sea levels from global surface temperatures. Semi-empirical approaches have been implemented to estimate the statistical relationship between these two variables providing an alternative measure on which to base potentially disrupting impacts on coastal communities and ecosystems. However, only a few of these semi-empirical applications had addressed the spurious inference that is likely to be drawn when one nonstationary process is regressed on another. Furthermore, it has been shown that spurious effects are not eliminated by stationary processes when these possess strong long memory. Our results indicate that both global temperature and sea level indeed present the characteristics of long memory processes. Nevertheless, we find that these variables are fractionally cointegrated when sea-ice extent is incorporated as an instrumental variable for temperature which in our estimations has a statistically significant positive impact on global sea level.
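A standard way to check for the long-memory behavior the authors describe is the Geweke-Porter-Hudak (GPH) log-periodogram regression for the fractional integration order d. The sketch below applies it to simulated series; the bandwidth choice m = sqrt(n) is a common convention, not the paper's estimator:

```python
import numpy as np

# Illustrative sketch of the GPH log-periodogram estimate of d.
def gph_d(x, m=None):
    n = len(x)
    m = m or int(n**0.5)                          # common bandwidth choice
    freqs = 2*np.pi*np.arange(1, m+1)/n
    fft_vals = np.fft.fft(x - np.mean(x))
    I = np.abs(fft_vals[1:m+1])**2/(2*np.pi*n)    # periodogram at first m frequencies
    y = np.log(I)
    X = -np.log(4*np.sin(freqs/2)**2)             # GPH regressor
    return np.polyfit(X, y, 1)[0]                 # slope estimates d

rng = np.random.default_rng(1)
white = rng.standard_normal(4096)                 # d = 0
rw = np.cumsum(white)                             # d = 1 (random walk)
d_white, d_rw = gph_d(white), gph_d(rw)
print(d_white, d_rw)   # near 0 for white noise, near 1 for the random walk
```

Series with d well above 0, like the temperature and sea-level records discussed above, invalidate standard regression inference, which motivates the fractional-cointegration analysis.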
Karmon, Anatte; Sheiner, Eyal
2008-06-01
Preeclampsia is a major cause of maternal morbidity, although its precise etiology remains elusive. A number of studies suggest that urinary tract infection (UTI) during the course of gestation is associated with elevated risk for preeclampsia, while others have failed to prove such an association. In our medical center, pregnant women who were exposed to at least one UTI episode during pregnancy were 1.3 times more likely to have mild preeclampsia and 1.8 times more likely to have severe preeclampsia as compared to unexposed women. Our results are based on univariate analyses and are not adjusted for potential confounders. This editorial aims to discuss the relationship between urinary tract infection and preeclampsia, as well as examine the current problems regarding the interpretation of this association. Although the relationship between UTI and preeclampsia has been demonstrated in studies with various designs, carried-out in a variety of settings, the nature of this association is unclear. By taking into account timeline, dose-response effects, treatment influences, and potential confounders, as well as by neutralizing potential biases, future studies may be able to clarify the relationship between UTI and preeclampsia by determining if it is causal, confounded, or spurious.
Sideband characterization and atmospheric observations with various 340 GHz heterodyne receivers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Renker, Matthias, E-mail: renker@iap.unibe.ch; Murk, Axel; Rea, Simon P.
2014-08-15
This paper describes sideband response measurements and atmospheric observations with a double sideband and two Single Sideband (SSB) receiver prototypes developed for the multi-beam limb sounder instrument Stratosphere-Troposphere Exchange and Climate Monitor Radiometer. We first show an advanced Fourier-Transform Spectroscopy (FTS) method for sideband response and spurious signal characterization. We then present sideband response measurements of the different prototype receivers, and we compare the results for the SSB receivers with sideband measurements obtained by injecting a continuous wave signal into the upper and lower sidebands. The receivers were integrated into a total-power radiometer and atmospheric observations were carried out. The observed spectra were compared to forward model spectra to draw conclusions about the sideband characteristics of the different receivers. The two sideband characterization methods show a high degree of agreement for both SSB receivers with various local oscillator settings. The measured sideband response was used to correct the forward model simulations. This improves the agreement with the atmospheric observations and explains spectral features caused by an unbalanced sideband response. The FTS method also allows the influence of spurious harmonic responses of the receiver to be quantified.
Explaining the Relationship between Employment and Juvenile Delinquency*
Staff, Jeremy; Osgood, D. Wayne; Schulenberg, John E.; Bachman, Jerald G.; Messersmith, Emily E.
2011-01-01
Most criminological theories predict an inverse relationship between employment and crime, but teenagers' involvement in paid work during the school year is positively correlated with delinquency and substance use. Whether the work-delinquency association is causal or spurious has long been debated. This study estimates the effect of paid work on juvenile delinquency using longitudinal data from the national Monitoring the Future project. We address issues of spuriousness by using a two-level hierarchical model to estimate the relationships of within-individual changes in juvenile delinquency and substance use to those in paid work and other explanatory variables. We also disentangle effects of actual employment from preferences for employment to provide insight about the likely role of time-varying selection factors tied to employment, delinquency, school engagement, and leisure activities. Whereas causal effects of employment would produce differences based on whether and how many hours respondents worked, we found significantly higher rates of crime and substance use among non-employed youth who preferred intensive versus moderate work. Our findings suggest the relationship between high-intensity work and delinquency results from preexisting factors that lead youth to desire varying levels of employment. PMID:21442045
Interplay of the Quality of Ciprofloxacin and Antibiotic Resistance in Developing Countries
Sharma, Deepali; Patel, Rahul P.; Zaidi, Syed Tabish R.; Sarker, Md. Moklesur Rahman; Lean, Qi Ying; Ming, Long C.
2017-01-01
Ciprofloxacin, a second generation broad spectrum fluoroquinolone, is active against both Gram-positive and Gram-negative bacteria. Ciprofloxacin has a high oral bioavailability and a large volume of distribution. It is used for the treatment of a wide range of infections including urinary tract infections caused by susceptible bacteria. However, the availability and use of substandard and spurious quality of oral ciprofloxacin formulations in the developing countries has been thought to have contributed toward increased risk of treatment failure and bacterial resistance. Therefore, quality control and bioequivalence studies of the commercially available oral ciprofloxacin formulations should be monitored. Appropriate actions should be taken against offending manufacturers in order to prevent the sale of substandard and spurious quality of ciprofloxacin formulations. PMID:28871228
Finite element procedures for time-dependent convection-diffusion-reaction systems
NASA Technical Reports Server (NTRS)
Tezduyar, T. E.; Park, Y. J.; Deans, H. A.
1988-01-01
New finite element procedures based on the streamline-upwind/Petrov-Galerkin formulations are developed for time-dependent convection-diffusion-reaction equations. These procedures minimize spurious oscillations for convection-dominated and reaction-dominated problems. The results obtained for representative numerical examples are accurate with minimal oscillations. As a special application problem, the single-well chemical tracer test (a procedure for measuring oil remaining in a depleted field) is simulated numerically. The results show the importance of temperature effects on the interpreted value of residual oil saturation from such tests.
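For context, a common textbook form of the SUPG stabilization parameter for linear elements is sketched below; this is the generic optimal-upwinding formula, not necessarily the exact parameter used in the paper:

```python
import numpy as np

# Illustrative sketch: the streamline-upwind/Petrov-Galerkin (SUPG)
# stabilization parameter for a 1D linear element.
def tau_supg(a, kappa, h):
    """a: convection velocity, kappa: diffusivity, h: element size."""
    pe = abs(a)*h/(2*kappa)                      # element Peclet number
    return (h/(2*abs(a)))*(1/np.tanh(pe) - 1/pe)

# Convection-dominated limit: tau -> h/(2|a|), i.e. full upwinding
tau_conv = tau_supg(a=1.0, kappa=1e-6, h=0.1)    # ≈ 0.05
# Diffusion-dominated limit: tau -> 0, recovering standard Galerkin
tau_diff = tau_supg(a=1.0, kappa=10.0, h=0.1)
print(tau_conv, tau_diff)
```

The added streamline diffusion scales with tau, which is how these formulations suppress the spurious node-to-node oscillations in convection-dominated problems without over-smearing diffusion-dominated ones.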
Blood transfusion-acquired hemoglobin C.
Suarez, A A; Polski, J M; Grossman, B J; Johnston, M F
1999-07-01
Unexpected and confusing laboratory test results can occur if a blood sample is inadvertently collected following a blood transfusion. A potential for transfusion-acquired hemoglobinopathy exists because heterozygous individuals show no significant abnormalities during the blood donor screening process. Such spurious results are infrequently reported in the medical literature. We report a case of hemoglobin C passively transferred during a red blood cell transfusion. The proper interpretation in our case was assisted by calculations comparing expected hemoglobin C concentration with the measured value. A review of the literature on transfusion-related preanalytic errors is provided.
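The kind of dilution calculation the case report mentions can be sketched as follows. Every number here is illustrative (a typical Hb C fraction for a trait donor and round figures for red cell volumes), not data from the case:

```python
# Illustrative back-of-envelope sketch: expected hemoglobin C fraction in a
# recipient after one unit of red cells from an Hb C trait donor.
unit_rbc_volume_ml = 200.0        # red cell volume in one transfused unit (assumed)
donor_hbc_fraction = 0.35         # typical Hb C fraction in a trait donor (assumed)
recipient_rbc_volume_ml = 1800.0  # recipient's own red cell volume (assumed)

total = recipient_rbc_volume_ml + unit_rbc_volume_ml
expected_hbc = donor_hbc_fraction * unit_rbc_volume_ml / total
print(f"expected Hb C ≈ {expected_hbc:.1%}")   # ≈ 3.5%
```

Comparing such an expected value against the measured Hb C fraction is what lets the laboratory distinguish a passively transfused variant from a true hemoglobinopathy in the patient.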
Influence of satellite vibration on radio over IsOWC system
NASA Astrophysics Data System (ADS)
Zong, Kang; Zhu, Jiang
2017-07-01
In this paper, we analyze the influence of satellite vibration on a radio over intersatellite optical wireless communication (IsOWC) system with an optical booster amplifier (OBA) and an optical preamplifier. Closed-form expressions for the radio frequency (RF) gain, noise figure (NF) and spurious-free dynamic range (SFDR) are derived in the presence of pointing jitter, taking bias error into account. Numerical results for RF gain, NF and SFDR are given for demonstration. Results indicate that the bias error markedly degrades the performance of the radio over IsOWC system.
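For reference, the third-order SFDR quoted in such link analyses follows from the output intercept point and the output noise floor. The sketch below uses the standard relation with invented example values, not figures from the paper:

```python
# Illustrative sketch: third-order spurious-free dynamic range of an analog
# link from its output IP3 and output noise floor (1 Hz bandwidth).
def sfdr_db_hz23(oip3_dbm, noise_floor_dbm_hz):
    """Standard relation: SFDR = (2/3) * (OIP3 - noise floor), in dB·Hz^(2/3)."""
    return (2.0/3.0)*(oip3_dbm - noise_floor_dbm_hz)

sfdr = sfdr_db_hz23(oip3_dbm=10.0, noise_floor_dbm_hz=-160.0)
print(sfdr)   # ≈ 113.3 dB·Hz^(2/3)
```

The 2/3 factor reflects that third-order intermodulation products grow 3 dB for every 1 dB of signal, so jitter-induced degradations of gain or noise floor compress the SFDR by two-thirds of their dB value.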
An arc control and protection system for the JET lower hybrid antenna based on an imaging system.
Figueiredo, J; Mailloux, J; Kirov, K; Kinna, D; Stamp, M; Devaux, S; Arnoux, G; Edwards, J S; Stephen, A V; McCullen, P; Hogben, C
2014-11-01
Arcs are the potentially most dangerous events related to Lower Hybrid (LH) antenna operation. If left uncontrolled, they can produce damage and cause plasma disruption by impurity influx. To address this issue, a real-time arc control and protection imaging system for the Joint European Torus (JET) LH antenna has been implemented. The LH system is one of the additional heating systems at JET. It comprises 24 microwave generators (klystrons, operating at 3.7 GHz) providing up to 5 MW of heating and current drive to the JET plasma through an antenna composed of an array of waveguides facing the plasma. The protection system presented here is based primarily on an imaging arc detection and real-time control system. It has adapted the ITER-like wall hotspot protection system, using an identical CCD camera and real-time image processing unit. A filter has been installed to avoid saturation and spurious system triggers caused by ionization light. The antenna is divided into 24 Regions Of Interest (ROIs), each corresponding to one klystron. If an arc precursor is detected in a ROI, power is reduced locally, avoiding potential damage and plasma disruption. The power is subsequently reinstated if image analysis confirms, during a defined interval of time, that arcing is not present. This system was successfully commissioned during the restart phase and the beginning of the 2013 scientific campaign. Since its installation and commissioning, arcs and related phenomena have been prevented. In this contribution we briefly describe the camera, image processing, and real-time control systems. Most importantly, we demonstrate that an LH antenna arc protection system based on CCD camera imaging works. Examples of both controlled and uncontrolled LH arc events and their consequences are shown.
SILCC-Zoom: the dynamic and chemical evolution of molecular clouds
NASA Astrophysics Data System (ADS)
Seifried, D.; Walch, S.; Girichidis, P.; Naab, T.; Wünsch, R.; Klessen, R. S.; Glover, S. C. O.; Peters, T.; Clark, P.
2017-12-01
We present 3D 'zoom-in' simulations of the formation of two molecular clouds out of the galactic interstellar medium. We model the clouds - identified from the SILCC simulations - with a resolution of up to 0.06 pc using adaptive mesh refinement in combination with a chemical network to follow heating, cooling and the formation of H2 and CO including (self-) shielding. The two clouds are assembled within a few million years with mass growth rates of up to ∼10⁻² M⊙ yr⁻¹ and final masses of ∼50 000 M⊙. A spatial resolution of ≲0.1 pc is required for convergence with respect to the mass, velocity dispersion and chemical abundances of the clouds, although these properties also depend on the cloud definition such as based on density thresholds, H2 or CO mass fraction. To avoid grid artefacts, the progressive increase of resolution has to occur within the free-fall time of the densest structures (1-1.5 Myr) and ≳200 time-steps should be spent on each refinement level before the resolution is progressively increased further. This avoids the formation of spurious, large-scale, rotating clumps from unresolved turbulent flows. While CO is a good tracer for the evolution of dense gas with number densities n ≥ 300 cm⁻³, H2 is also found for n ≲ 30 cm⁻³ due to turbulent mixing and becomes dominant at column densities around 30-50 M⊙ pc⁻². The CO-to-H2 ratio steadily increases within the first 2 Myr, whereas X_CO ≃ 1-4 × 10²⁰ cm⁻² (K km s⁻¹)⁻¹ is approximately constant since the CO(1-0) line quickly becomes optically thick.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caramana, E.J.; Shashkov, M.J.
1997-12-31
The bane of Lagrangian hydrodynamics calculations is premature breakdown of the grid topology, which results in severe degradation of accuracy and run termination, often long before the assumption of Lagrangian zonal mass ceases to be valid. At short spatial grid scales this is usually referred to by the terms hourglass mode or keystone motion, associated in particular with underconstrained grids such as quadrilaterals and hexahedrons in two and three dimensions, respectively. At longer spatial scales relative to the grid spacing there is what is referred to ubiquitously as spurious vorticity, or the long-thin zone problem. In both cases the result is anomalous grid distortion and tangling that has nothing to do with the actual solution, as would be the case for turbulent flow. In this work the authors show how such motions can be eliminated by the proper use of subzonal Lagrangian masses, and associated densities and pressures. These subzonal masses arise in a natural way from the requirement that the mass associated with each nodal grid point be constant in time, in addition to the usual assumption of constant Lagrangian zonal mass in staggered-grid hydrodynamics schemes. The authors show that with proper discretization of the subzonal forces resulting from subzonal pressures, hourglass motion and spurious vorticity can be eliminated for a very large range of problems. Finally, the authors present results of calculations of many test problems.
An analytical model for the detection of levitated nanoparticles in optomechanics
NASA Astrophysics Data System (ADS)
Rahman, A. T. M. Anishur; Frangeskou, A. C.; Barker, P. F.; Morley, G. W.
2018-02-01
Interferometric position detection of levitated particles is crucial for the centre-of-mass (CM) motion cooling and manipulation of levitated particles. In combination with balanced detection and feedback cooling, this system has provided picometer scale position sensitivity, zeptonewton force detection, and sub-millikelvin CM temperatures. In this article, we develop an analytical model of this detection system and compare its performance with experimental results allowing us to explain the presence of spurious frequencies in the spectra.
Low Frequency Acoustic Detection Research in Support of Human Detection Range Prediction
1979-10-01
beat at narrow separations and hence made estimates of bandwidth difficult. In addition, Zwicker’s and Green’s data show large discrepancies, the...already known that this spurious low frequency noise can profoundly influence psychoacoustic results. For some years a binaural phenomenon known as the...tend to be uncorrelated in the two ears) and thus preserved the binaural advantage for the low frequency signals. Green et al. (Reference 21) used a
Sherer, Eric A; Sale, Mark E; Pollock, Bruce G; Belani, Chandra P; Egorin, Merrill J; Ivy, Percy S; Lieberman, Jeffrey A; Manuck, Stephen B; Marder, Stephen R; Muldoon, Matthew F; Scher, Howard I; Solit, David B; Bies, Robert R
2012-08-01
A limitation in traditional stepwise population pharmacokinetic model building is the difficulty in handling interactions between model components. To address this issue, a method was previously introduced which couples NONMEM parameter estimation and model fitness evaluation to a single-objective, hybrid genetic algorithm for global optimization of the model structure. In this study, the generalizability of this approach for pharmacokinetic model building is evaluated by comparing (1) correct and spurious covariate relationships in a simulated dataset resulting from automated stepwise covariate modeling, Lasso methods, and single-objective hybrid genetic algorithm approaches to covariate identification and (2) information criteria values, model structures, convergence, and model parameter values resulting from manual stepwise versus single-objective, hybrid genetic algorithm approaches to model building for seven compounds. Both manual stepwise and single-objective, hybrid genetic algorithm approaches to model building were applied, blinded to the results of the other approach, for selection of the compartment structure as well as inclusion and model form of inter-individual and inter-occasion variability, residual error, and covariates from a common set of model options. For the simulated dataset, stepwise covariate modeling identified three of four true covariates and two spurious covariates; Lasso identified two of four true and 0 spurious covariates; and the single-objective, hybrid genetic algorithm identified three of four true covariates and one spurious covariate. 
For the clinical datasets, the Akaike information criterion was a median of 22.3 points lower (range of 470.5 point decrease to 0.1 point decrease) for the best single-objective hybrid genetic-algorithm candidate model versus the final manual stepwise model: the Akaike information criterion was lower by greater than 10 points for four compounds and differed by less than 10 points for three compounds. The root mean squared error and absolute mean prediction error of the best single-objective hybrid genetic algorithm candidates were a median of 0.2 points higher (range of 38.9 point decrease to 27.3 point increase) and 0.02 points lower (range of 0.98 point decrease to 0.74 point increase), respectively, than that of the final stepwise models. In addition, the best single-objective, hybrid genetic algorithm candidate models had successful convergence and covariance steps for each compound, used the same compartment structure as the manual stepwise approach for 6 of 7 (86 %) compounds, and identified 54 % (7 of 13) of covariates included by the manual stepwise approach and 16 covariate relationships not included by manual stepwise models. The model parameter values between the final manual stepwise and best single-objective, hybrid genetic algorithm models differed by a median of 26.7 % (q₁ = 4.9 % and q₃ = 57.1 %). Finally, the single-objective, hybrid genetic algorithm approach was able to identify models capable of estimating absorption rate parameters for four compounds that the manual stepwise approach did not identify. The single-objective, hybrid genetic algorithm represents a general pharmacokinetic model building methodology whose ability to rapidly search the feasible solution space leads to nearly equivalent or superior model fits to pharmacokinetic data.
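The single-objective genetic-algorithm idea, where chromosomes encode which covariates enter the model and fitness is an information criterion, can be sketched on a toy linear-regression problem. This is a simplified stand-in: the study couples a hybrid GA to NONMEM population models, not to ordinary least squares, and all settings below are invented:

```python
import numpy as np

# Illustrative sketch: GA covariate selection with AIC fitness on a toy
# OLS problem. True covariates are columns 0 and 2; the rest are noise.
rng = np.random.default_rng(0)
n, p = 200, 6
X = rng.standard_normal((n, p))
y = 2*X[:, 0] - 3*X[:, 2] + rng.standard_normal(n)

def aic(mask):
    cols = np.flatnonzero(mask)
    A = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    rss = np.sum((y - A @ beta)**2)
    return n*np.log(rss/n) + 2*(len(cols) + 1)

pop = rng.integers(0, 2, size=(20, p))           # inclusion masks
for gen in range(30):
    fit = np.array([aic(ind) for ind in pop])
    parents = pop[np.argsort(fit)[:10]]          # truncation selection (elitist)
    moms = parents[rng.integers(0, 10, 10)]
    dads = parents[rng.integers(0, 10, 10)]
    cross = rng.random(moms.shape) < 0.5         # uniform crossover
    children = np.where(cross, moms, dads)
    flip = rng.random(children.shape) < 0.1      # bit-flip mutation
    children = children ^ flip.astype(children.dtype)
    pop = np.vstack([parents, children])

best = pop[np.argmin([aic(ind) for ind in pop])]
print(best)   # inclusion mask; the true covariates 0 and 2 should be selected
```

The appeal, as in the study, is that the population searches many covariate combinations in parallel rather than committing to one forward/backward path, at the cost of many more model fits per candidate.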
Kong, Weijun; Jin, Cheng; Xiao, Xiaohe; Zhao, Yanling; Liu, Wei; Li, Zulun; Zhang, Ping
2010-06-01
A fast ultra-performance liquid chromatography-evaporative light scattering detection (UPLC-ELSD) method was established for simultaneous quantification of seven components in natural Calculus bovis (C. bovis) and its substitutes or spurious breeds. On a Waters Acquity UPLC BEH C(18) column, the seven analytes were efficiently separated using 0.2% aqueous formic acid-acetonitrile as the mobile phase in a gradient program. The evaporator tube temperature of the ELSD was set at 100 degrees C with a nebulizing gas flow-rate of 1.9 L/min. The established UPLC-ELSD method was validated to be sensitive, precise and accurate, with LODs for the seven analytes of 2-11 ng and overall intra-day and inter-day variations of less than 3.0%. The recovery of the method was in the range of 97.8-101.6%, with RSD less than 3.0%. Further results of PCA on the contents of the seven investigated analytes suggested that cholic acid, deoxycholic acid and chenodeoxycholic acid or cholesterol should be added as chemical markers in UPLC analysis of C. bovis samples for quality control, to discriminate natural C. bovis samples from substitutes or spurious breeds, and thereby to normalize the use of natural C. bovis and ensure its clinical efficacy.
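The PCA step can be sketched with a synthetic content matrix; all numbers are invented, with columns standing in for measured component contents such as cholic and deoxycholic acid:

```python
import numpy as np

# Illustrative sketch: PCA on per-sample component contents separating two
# groups of samples (synthetic data, not the paper's measurements).
rng = np.random.default_rng(42)
natural = rng.normal(loc=[8, 5, 3], scale=0.3, size=(5, 3))     # group 1 profile
substitute = rng.normal(loc=[2, 1, 6], scale=0.3, size=(5, 3))  # group 2 profile
data = np.vstack([natural, substitute])

centered = data - data.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ Vt.T          # projections onto principal components
pc1 = scores[:, 0]
print(pc1[:5], pc1[5:])           # the two groups separate along PC1
```

When the marker compounds differ systematically between groups, the first principal component captures that contrast, which is how the authors discriminate natural samples from substitutes.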
Lattice Boltzmann methods for global linear instability analysis
NASA Astrophysics Data System (ADS)
Pérez, José Miguel; Aguilar, Alfonso; Theofilis, Vassilis
2017-12-01
Modal global linear instability analysis is performed using, for the first time ever, the lattice Boltzmann method (LBM) to analyze incompressible flows with two and three inhomogeneous spatial directions. Four linearization models have been implemented in order to recover the linearized Navier-Stokes equations in the incompressible limit. Two of those models employ the single relaxation time and have been proposed previously in the literature as linearizations of the collision operator of the lattice Boltzmann equation. Two additional models are derived herein for the first time by linearizing the local equilibrium probability distribution function. Instability analysis results are obtained in three benchmark problems, two in closed geometries and one in open flow, namely the square and cubic lid-driven cavity flow and flow in the wake of the circular cylinder. Comparisons with results delivered by classic spectral element methods verify the accuracy of the proposed new methodologies and point out potential limitations particular to the LBM approach. The known issue of appearance of numerical instabilities when the SRT model is used in direct numerical simulations employing the LBM is shown to be reflected in a spurious global eigenmode when the SRT model is used in the instability analysis. Although this mode is absent in the multiple relaxation times model, other spurious instabilities can also arise and are documented herein. Areas of potential improvements in order to make the proposed methodology competitive with established approaches for global instability analysis are discussed.
Low-resolution simulations of vesicle suspensions in 2D
NASA Astrophysics Data System (ADS)
Kabacaoğlu, Gökberk; Quaife, Bryan; Biros, George
2018-03-01
Vesicle suspensions appear in many biological and industrial applications. These suspensions are characterized by rich and complex dynamics of vesicles due to their interaction with the bulk fluid, and their large deformations and nonlinear elastic properties. Many existing state-of-the-art numerical schemes can resolve such complex vesicle flows. However, even when using provably optimal algorithms, these simulations can be computationally expensive, especially for suspensions with a large number of vesicles. These high computational costs can limit the use of simulations for parameter exploration, optimization, or uncertainty quantification. One way to reduce the cost is to use low-resolution discretizations in space and time. However, it is well-known that simply reducing the resolution results in vesicle collisions, numerical instabilities, and often in erroneous results. In this paper, we investigate the effect of a number of algorithmic empirical fixes (which are commonly used by many groups) in an attempt to make low-resolution simulations more stable and more predictive. Based on our empirical studies for a number of flow configurations, we propose a scheme that attempts to integrate these fixes in a systematic way. This low-resolution scheme is an extension of our previous work [51,53]. Our low-resolution correction algorithms (LRCA) include anti-aliasing and membrane reparametrization for avoiding spurious oscillations in vesicles' membranes, adaptive time stepping and a repulsion force for handling vesicle collisions, and correction of vesicles' area and arc-length for maintaining physical vesicle shapes. We perform a systematic error analysis by comparing the low-resolution simulations of dilute and dense suspensions with their high-fidelity, fully resolved, counterparts. We observe that the LRCA enables both efficient and statistically accurate low-resolution simulations of vesicle suspensions, while it can be 10× to 100× faster.
Chronobiology of Aging: A Mini-Review.
Cornelissen, Germaine; Otsuka, Kuniaki
2017-01-01
Aging is generally associated with weakening of the circadian system. The circadian amplitude is reduced and the circadian acrophase becomes more labile, tending to occur earlier with advancing age. As originally noted by Franz Halberg, similar features are observed in the experimental laboratory after bilateral lesioning of the suprachiasmatic nuclei, suggesting the involvement of clock genes in the aging process as they are in various disease conditions. Recent work has been shedding light on underlying pathways involved in the aging process, with the promise of interventions to extend healthy life spans. Caloric restriction, which is consistently and reproducibly associated with prolonging life in different animal models, is associated with an increased circadian amplitude. These results indicate the critical importance of chronobiology in dealing with problems of aging, from the circadian clock machinery orchestrating metabolism to the development of geroprotectors. The quantitative estimation of circadian rhythm characteristics interpreted in the light of time-specified reference values helps (1) to distinguish effects of natural healthy aging from those associated with disease and predisease; (2) to detect alterations in rhythm characteristics as markers of increased risk before there is overt disease; and (3) to individually optimize by timing prophylactic and/or therapeutic interventions aimed at restoring a disturbed circadian system and/or enhancing a healthy life span. Mapping changes in amplitude and/or acrophase that may overshadow any change in average value also avoids drawing spurious conclusions resulting from data collected at a fixed clock hour. Timely risk detection combined with treatment optimization by timing (chronotherapy) is the goal of several ongoing comprehensive community-based studies focusing on the well-being of the elderly, so that longevity is not achieved at the cost of a reduced quality of life. © 2016 S. Karger AG, Basel.
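The rhythm characteristics discussed here (MESOR, amplitude, acrophase) are classically estimated by single-component cosinor regression in the Halberg tradition. The sketch below fits simulated data; all values are invented:

```python
import numpy as np

# Illustrative sketch: single-component cosinor fit (24-h period) recovering
# the MESOR, circadian amplitude, and peak time from time-stamped data.
def cosinor(t, y, period=24.0):
    w = 2*np.pi/period
    A = np.column_stack([np.ones_like(t), np.cos(w*t), np.sin(w*t)])
    (mesor, b, c), *_ = np.linalg.lstsq(A, y, rcond=None)
    amplitude = np.hypot(b, c)
    peak_hour = (np.arctan2(c, b)/w) % period   # clock time of the fitted peak
    return mesor, amplitude, peak_hour

t = np.arange(0, 72, 0.5)                        # 3 days, sampled every 30 min
rng = np.random.default_rng(3)
y = 10 + 4*np.cos(2*np.pi*(t - 15)/24) + rng.normal(0, 0.5, t.size)
m, amp, peak = cosinor(t, y)
print(m, amp, peak)   # ≈ 10, ≈ 4, peak ≈ 15 h
```

Fitting amplitude and acrophase explicitly, rather than sampling at a fixed clock hour, is precisely how the spurious conclusions mentioned in the review are avoided when the rhythm's timing shifts with age.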
Deconvolving molecular signatures of interactions between microbial colonies
Harn, Y.-C.; Powers, M. J.; Shank, E. A.; Jojic, V.
2015-01-01
Motivation: The interactions between microbial colonies through chemical signaling are not well understood. A microbial colony can use different molecules to inhibit or accelerate the growth of other colonies. A better understanding of the molecules involved in these interactions could lead to advancements in health and medicine. Imaging mass spectrometry (IMS) applied to co-cultured microbial communities aims to capture the spatial characteristics of the colonies’ molecular fingerprints. These data are high-dimensional and require computational analysis methods to interpret. Results: Here, we present a dictionary learning method that deconvolves spectra of different molecules from IMS data. We call this method MOLecular Dictionary Learning (MOLDL). Unlike standard dictionary learning methods which assume Gaussian-distributed data, our method uses the Poisson distribution to capture the count nature of the mass spectrometry data. Also, our method incorporates universally applicable information on common ion types of molecules in MALDI mass spectrometry. This greatly reduces model parameterization and increases deconvolution accuracy by eliminating spurious solutions. Moreover, our method leverages the spatial nature of IMS data by assuming that nearby locations share similar abundances, thus avoiding overfitting to noise. Tests on simulated datasets show that this method has good performance in recovering molecule dictionaries. We also tested our method on real data measured on a microbial community composed of two species. We confirmed through follow-up validation experiments that our method recovered true and complete signatures of molecules. These results indicate that our method can discover molecules in IMS data reliably, and hence can help advance the study of interaction of microbial colonies. Availability and implementation: The code used in this paper is available at: https://github.com/frizfealer/IMS_project. 
Contact: vjojic@cs.unc.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26072476
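The core modelling choice in the abstract above, a Poisson likelihood for count-valued mass spectra, can be illustrated with generic multiplicative-update NMF under the generalized KL divergence, which is equivalent to Poisson maximum likelihood. This is a stand-in sketch, not the MOLDL algorithm: the ion-type dictionary constraints and the spatial smoothness prior are omitted, and all data, dimensions, and parameter values here are invented.

```python
import numpy as np

def poisson_nmf(V, k, iters=200, seed=0):
    """Factor a nonnegative count matrix V ~ Poisson(W @ H).

    Multiplicative updates minimizing the generalized KL divergence,
    i.e. the Poisson negative log-likelihood up to a constant
    (Lee & Seung, 2001). A stand-in for the Poisson model used by
    MOLDL, not the MOLDL algorithm itself.
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + 1e-3
    H = rng.random((k, n)) + 1e-3
    eps = 1e-12
    for _ in range(iters):
        WH = W @ H + eps
        W *= ((V / WH) @ H.T) / (H.sum(axis=1) + eps)
        WH = W @ H + eps
        H *= (W.T @ (V / WH)) / (W.sum(axis=0)[:, None] + eps)
    return W, H

# Toy "IMS" data: two molecular signatures mixed at 60 locations.
rng = np.random.default_rng(1)
true_W = np.array([[5, 0], [5, 0], [0, 4], [0, 4], [1, 1]], float)
true_H = rng.random((2, 60)) * 3
V = rng.poisson(true_W @ true_H).astype(float)
W, H = poisson_nmf(V, k=2)
recon = W @ H
```

With only two components, the fit should explain the data far better than a constant baseline, which is the sense in which the count model recovers the underlying signatures.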
Dynamics of Numerics & Spurious Behaviors in CFD Computations. Revised
NASA Technical Reports Server (NTRS)
Yee, Helen C.; Sweby, Peter K.
1997-01-01
The global nonlinear behavior of finite discretizations for constant time steps and fixed or adaptive grid spacings is studied using tools from dynamical systems theory. Detailed analysis of commonly used temporal and spatial discretizations for simple model problems is presented. The role of dynamics in the understanding of long time behavior of numerical integration and the nonlinear stability, convergence, and reliability of using time-marching approaches for obtaining steady-state numerical solutions in computational fluid dynamics (CFD) is explored. The study is complemented with examples of spurious behavior observed in steady and unsteady CFD computations. The CFD examples were chosen to illustrate non-apparent spurious behavior that was difficult to detect without extensive grid and temporal refinement studies and some knowledge from dynamical systems theory. Studies revealed the various possible dangers of misinterpreting numerical simulation of realistic complex flows that are constrained by available computing power. In large scale computations where the physics of the problem under study is not well understood and numerical simulations are the only viable means of solution, extreme care must be taken in both computation and interpretation of the numerical data. The goal of this paper is to explore the important role that dynamical systems theory can play in the understanding of the global nonlinear behavior of numerical algorithms and to aid the identification of the sources of numerical uncertainties in CFD.
Influence of Boussinesq coefficient on depth-averaged modelling of rapid flows
NASA Astrophysics Data System (ADS)
Yang, Fan; Liang, Dongfang; Xiao, Yang
2018-04-01
The traditional Alternating Direction Implicit (ADI) scheme has been proven to be incapable of modelling trans-critical flows. Its inherent lack of shock-capturing capability often results in spurious oscillations and computational instabilities. However, the ADI scheme is still widely adopted in flood modelling software, and various special treatments have been designed to stabilise the computation. Modification of the Boussinesq coefficient to adjust the amount of fluid inertia is a numerical treatment that allows the ADI scheme to be applicable to rapid flows. This study comprehensively examines the impact of this numerical treatment over a range of flow conditions. A shock-capturing TVD-MacCormack model is used to provide reference results. For unsteady flows over a frictionless bed, such as idealised dam-break floods, the results suggest that an increase in the value of the Boussinesq coefficient reduces the amplitude of the spurious oscillations. The opposite is observed for steady rapid flows over a frictional bed. Finally, a two-dimensional urban flooding phenomenon is presented, involving unsteady flow over a frictional bed. The results show that increasing the value of the Boussinesq coefficient can significantly reduce the numerical oscillations and reduce the predicted area of inundation. In order to stabilise the ADI computations, the Boussinesq coefficient could be judiciously raised or lowered depending on whether the rapid flow is steady or unsteady and whether the bed is frictional or frictionless. An increase in the Boussinesq coefficient generally leads to overprediction of the propagating speed of the flood wave over a frictionless bed, but the opposite is true when bed friction is significant.
Redox History of Early Solar System Planetesimals Recorded in the D'Orbigny Angrite
DOE Office of Scientific and Technical Information (OSTI.GOV)
King, P.L.; Sutton, S.R.; Spilde, M.N.
2012-04-02
Angrites are ancient basaltic meteorites (~4.56 Ga) that preserve evidence of some of the solar system's earliest melting events. The volcanic-textured angrites such as D'Orbigny were rapidly crystallized and are relatively pristine, lacking shock, brecciation, and parent-body weathering textures. Thus, these angrites provide a unique 'window' into the petrogenesis of planetary bodies in the early solar system. Angrites may be formed by partial melting of CV chondrites under relatively oxidized sources compared to the eucrites, and therefore may document variations in fO2 conditions on carbonaceous chondrite parent bodies. Thus, understanding the intrinsic fO2 conditions of the angrites is needed to determine how different early Solar System basalts form, to model separation of the core, mantle and crust, and to understand magnetic fields on planetary bodies. The D'Orbigny angrite contains a range of textures: (a) crystalline texture containing interlocking crystals of fassaite (pyroxene) with Ti-rich rims, anorthite, and Mg-olivine with Fe-rich rims; (b) cavities with protruding needle-like pyroxene and anorthite dusted by Ca-(Mg)-carbonate; (c) mesostasis with kirschsteinite, ilmenite, troilite, phosphates (e.g., merrillite, whitlockite and Ca-silicophosphate), rhonite and minor glass; (d) glasses (~angrite composition) in vesicles, as inclusions and as beads, and also cross-cutting crystal-rich portions of the rock; (e) vesicles (e.g., ~1.4 vol. %, 0.0219-87.7 mm^3). Analysis of the textures and Fe^3+/Fe(total) of the cavity pyroxene suggests that the oxygen fugacity (fO2) increased in the D'Orbigny angrite, perhaps due to introduction of a gas phase. Here we examine the detailed fO2 history using micro-analyses that allow us to avoid inclusions that may cause spurious results.
We present analyses of both S- and V-oxidation states to complement other work using the Fe-oxidation state and to avoid problems related to measuring low concentrations of Fe^3+ and propagating errors when calculating fO2 in samples with low Fe^3+ concentrations.
On the effect of using the Shapiro filter to smooth winds on a sphere
NASA Technical Reports Server (NTRS)
Takacs, L. L.; Balgovind, R. C.
1984-01-01
Spatial differencing schemes which are neither enstrophy conserving nor implicitly damping require global filtering of short waves to eliminate the build-up of energy in the shortest wavelengths due to aliasing. Takacs and Balgovind (1983) have shown that filtering on a sphere with a latitude-dependent damping function will cause spurious vorticity and divergence source terms to occur if care is not taken to ensure the irrotationality of the gradients of the stream function and velocity potential. Using a shallow water model with fourth-order energy-conserving spatial differencing, it is found that using a 16th-order Shapiro (1979) filter on the winds and heights to control nonlinear instability also creates spurious source terms when the winds are filtered in the meridional direction.
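A minimal periodic one-dimensional version of the Shapiro filter shows the mechanism at issue: the filter removes the two-grid-interval wave exactly while leaving well-resolved waves nearly untouched. This is a sketch under stated assumptions (periodic domain, uniform grid, scalar field), not the model code of the paper.

```python
import numpy as np

def shapiro_filter(u, order=8):
    """Apply a Shapiro filter of order 2*order to a periodic 1-D field.

    The filtered field is u_f = u - d, where d results from 'order'
    applications of -delta2/4 (delta2 = centered second difference).
    The spectral response is 1 - sin^(2*order)(k*dx/2): the
    two-grid-interval wave (sin = 1) is removed exactly, while
    well-resolved waves are nearly untouched. order=8 gives the
    16th-order filter mentioned in the abstract.
    """
    d = u.copy()
    for _ in range(order):
        d = -(np.roll(d, 1) - 2.0 * d + np.roll(d, -1)) / 4.0
    return u - d

n = 64
x = np.arange(n)
two_dx_wave = (-1.0) ** x                # shortest resolvable wave
long_wave = np.cos(2 * np.pi * x / n)    # well-resolved wave

print(np.max(np.abs(shapiro_filter(two_dx_wave))))            # ~0: removed
print(np.max(np.abs(shapiro_filter(long_wave) - long_wave)))  # tiny
```

The spurious source terms discussed in the abstract arise not from this 1-D response but from applying such a filter component-wise to winds on a sphere; the sketch only illustrates the intended damping behavior.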
The numerical dynamic for highly nonlinear partial differential equations
NASA Technical Reports Server (NTRS)
Lafon, A.; Yee, H. C.
1992-01-01
Problems associated with the numerical computation of highly nonlinear equations in computational fluid dynamics are set forth and analyzed in terms of the potential ranges of spurious behaviors. A reaction-convection equation with a nonlinear source term is employed to evaluate the effects related to spatial and temporal discretizations. The discretization of the source term is described according to several methods, and the various techniques are shown to have a significant effect on the stability of the spurious solutions. Traditional linearized stability analyses cannot provide the level of confidence required for accurate fluid dynamics computations, and the incorporation of nonlinear analysis is proposed. Nonlinear analysis based on nonlinear dynamical systems complements the conventional linear approach and is valuable in the analysis of hypersonic aerodynamics and combustion phenomena.
Analysis of the transient behavior of rubbing components
NASA Technical Reports Server (NTRS)
Quezdou, M. B.; Mullen, R. L.
1986-01-01
Finite element equations are developed for studying the deformations and temperatures resulting from frictional heating in a sliding system. The formulation is done for linear steady-state motion in two dimensions. The equations include the effect of the velocity of the moving components, which gives rise to spurious oscillations in Galerkin finite element solutions. A streamline upwind scheme is used to address this deficiency. The finite element program is then used to investigate frictional heating in a gas path seal.
High dynamic range electric field sensor for electromagnetic pulse detection.
Lin, Che-Yun; Wang, Alan X; Lee, Beom Suk; Zhang, Xingyu; Chen, Ray T
2011-08-29
We design a high dynamic range electric field sensor based on a domain-inverted electro-optic (E-O) polymer Y-fed directional coupler for electromagnetic wave detection. This electrode-less, all-optical, wideband electric field sensor is fabricated using standard processing for E-O polymer photonic devices. Experimental results demonstrate effective detection of electric fields from 16.7 V/m to 750 kV/m at a frequency of 1 GHz, and a spurious-free measurement range of 70 dB.
Maximum a posteriori decoder for digital communications
NASA Technical Reports Server (NTRS)
Altes, Richard A. (Inventor)
1997-01-01
A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.
Flexural waves induced by electro-impulse deicing forces
NASA Technical Reports Server (NTRS)
Gien, P. H.
1990-01-01
The generation, reflection and propagation of flexural waves created by electroimpulsive deicing forces are demonstrated both experimentally and analytically in a thin circular plate and a thin semicylindrical shell. Analytical prediction of these waves with finite element models shows good correlation with acceleration and displacement measurements at discrete points on the structures studied. However, sensitivity to spurious flexural waves resulting from the spatial discretization of the structures is shown to be significant. Consideration is also given to composite structures as an extension of these studies.
Second ROSAT all-sky survey (2RXS) source catalogue
NASA Astrophysics Data System (ADS)
Boller, Th.; Freyberg, M. J.; Trümper, J.; Haberl, F.; Voges, W.; Nandra, K.
2016-04-01
Aims: We present the second ROSAT all-sky survey source catalogue, hereafter referred to as the 2RXS catalogue. This is the second publicly released ROSAT catalogue of point-like sources obtained from the ROSAT all-sky survey (RASS) observations performed with the position-sensitive proportional counter (PSPC) between June 1990 and August 1991, and is an extended and revised version of the bright and faint source catalogues. Methods: We used the latest version of the RASS processing to produce overlapping X-ray images of 6.4° × 6.4° sky regions. To create a source catalogue, a likelihood-based detection algorithm was applied to these, which accounts for the variable point-spread function (PSF) across the PSPC field of view. Improvements in the background determination compared to 1RXS were also implemented. X-ray control images showing the source and background extraction regions were generated, which were visually inspected. Simulations were performed to assess the spurious source content of the 2RXS catalogue. X-ray spectra and light curves were extracted for the 2RXS sources, with spectral and variability parameters derived from these products. Results: We obtained about 135 000 X-ray detections in the 0.1-2.4 keV energy band down to a likelihood threshold of 6.5, as adopted in the 1RXS faint source catalogue. Our simulations show that the expected spurious content of the catalogue is a strong function of detection likelihood, and the full catalogue is expected to contain about 30% spurious detections. A more conservative likelihood threshold of 9, on the other hand, yields about 71 000 detections with a 5% spurious fraction. We recommend thresholds appropriate to the scientific application. X-ray images and overlaid X-ray contour lines provide an additional user product to evaluate the detections visually, and we performed our own visual inspections to flag uncertain detections. 
Intra-day variability in the X-ray light curves was quantified based on the normalised excess variance and a maximum amplitude variability analysis. X-ray spectral fits were performed using three basic models, a power law, a thermal plasma emission model, and black-body emission. Thirty-two large extended regions with diffuse emission and embedded point sources were identified and excluded from the present analysis. Conclusions: The 2RXS catalogue provides the deepest and cleanest X-ray all-sky survey catalogue in advance of eROSITA. The catalogue is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/588/A103
Century-scale simulations of the response of the West Antarctic Ice Sheet to a warming climate
Cornford, S. L.; Martin, D. F.; Payne, A. J.; ...
2015-03-23
We use the BISICLES adaptive mesh ice sheet model to carry out one, two, and three century simulations of the fast-flowing ice streams of the West Antarctic Ice Sheet. Each of the simulations begins with a geometry and velocity close to present day observations, and evolves according to variation in meteoric ice accumulation, ice shelf melting, and mesh resolution. Future changes in accumulation and melt rates range from no change, through anomalies computed by atmosphere and ocean models driven by the E1 and A1B emissions scenarios, to spatially uniform melt rate anomalies that remove most of the ice shelves over a few centuries. We find that variation in the resulting ice dynamics is dominated by the choice of initial conditions, ice shelf melt rate and mesh resolution, although ice accumulation affects the net change in volume above flotation to a similar degree. Given sufficient melt rates, we compute grounding line retreat over hundreds of kilometers in every major ice stream, but the ocean models do not predict such melt rates outside of the Amundsen Sea Embayment until after 2100. Sensitivity to mesh resolution is spurious, and we find that sub-kilometer resolution is needed along most regions of the grounding line to avoid systematic underestimates of the retreat rate, although resolution requirements are more stringent in some regions – for example the Amundsen Sea Embayment – than others – such as the Möller and Institute ice streams.
Heidenreich, Elvio A; Ferrero, José M; Doblaré, Manuel; Rodríguez, José F
2010-07-01
Many problems in biology and engineering are governed by anisotropic reaction-diffusion equations with a very rapidly varying reaction term. This usually implies the use of very fine meshes and small time steps in order to accurately capture the propagating wave while avoiding the appearance of spurious oscillations in the wave front. This work develops a family of macro finite elements amenable for solving anisotropic reaction-diffusion equations with stiff reactive terms. The developed elements are incorporated on a semi-implicit algorithm based on operator splitting that includes adaptive time stepping for handling the stiff reactive term. A linear system is solved on each time step to update the transmembrane potential, whereas the remaining ordinary differential equations are solved uncoupled. The method allows solving the linear system on a coarser mesh thanks to the static condensation of the internal degrees of freedom (DOF) of the macroelements while maintaining the accuracy of the finer mesh. The method and algorithm have been implemented in parallel. The accuracy of the method has been tested on two- and three-dimensional examples demonstrating excellent behavior when compared to standard linear elements. The better performance and scalability of different macro finite elements against standard finite elements have been demonstrated in the simulation of a human heart and a heterogeneous two-dimensional problem with reentrant activity. Results have shown a reduction of up to four times in computational cost for the macro finite elements with respect to equivalent (same number of DOF) standard linear finite elements as well as good scalability properties.
DOE Office of Scientific and Technical Information (OSTI.GOV)
KL Gaustad; DD Turner
2007-09-30
This report provides a short description of the Atmospheric Radiation Measurement (ARM) microwave radiometer (MWR) RETrieval (MWRRET) Value-Added Product (VAP) algorithm. This algorithm utilizes complementary physical and statistical retrieval methods and applies brightness temperature offsets to reduce spurious liquid water path (LWP) bias in clear skies, resulting in significantly improved precipitable water vapor (PWV) and LWP retrievals. We present a general overview of the technique, input parameters, and output products, and describe data quality checks. A more complete discussion of the theory and results is given in Turner et al. (2007b).
Tandem mass spectrometry data quality assessment by self-convolution.
Choo, Keng Wah; Tham, Wai Mun
2007-09-20
Many algorithms have been developed for deciphering tandem mass spectrometry (MS) data sets. They can essentially be clustered into two classes: the first performs searches on a theoretical mass spectrum database, while the second is based on de novo sequencing from raw mass spectrometry data. It has been noted that the quality of the mass spectra significantly affects the protein identification process in both instances. This prompted the authors to explore ways to measure the quality of MS data sets before subjecting them to the protein identification algorithms, thus allowing for more meaningful searches and an increased confidence level in the proteins identified. The proposed method measures the quality of MS data sets based on the symmetric property of the b- and y-ion peaks present in an MS spectrum. Self-convolution of the MS data with its time-reversed copy was employed. Due to the symmetric nature of the b-ion and y-ion peaks, the self-convolution result of a good spectrum produces its highest intensity peak at the midpoint. To reduce processing time, self-convolution was achieved using the Fast Fourier Transform and its inverse transform, followed by the removal of the "DC" (Direct Current) component and the normalisation of the data set. The quality score was defined as the ratio of the intensity at the midpoint to that of the remaining peaks of the convolution result. The method was validated using both theoretical mass spectra, with various permutations, and several real MS data sets. The results were encouraging, revealing a high percentage of positive prediction rates for spectra with good quality scores. We have demonstrated in this work a method for determining the quality of tandem MS data sets. By pre-determining the quality of tandem MS data before subjecting them to protein identification algorithms, spurious protein predictions due to poor tandem MS data are avoided, giving scientists greater confidence in the predicted results.
We conclude that the algorithm performs well and could potentially be used as a pre-processing for all mass spectrometry based protein identification tools.
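The scoring idea can be sketched as follows. Since the b- and y-ion masses of a peptide sum to approximately the precursor mass M, the discrete self-convolution of a well-behaved spectrum binned over 0..M concentrates intensity at the midpoint bin M of the full convolution. This is a simplified reading of the published method (no FFT shortcut, DC removal, or normalisation step), and the synthetic spectra and score definition below are invented for illustration.

```python
import numpy as np

def quality_score(spectrum):
    """Simplified symmetry score for a binned MS/MS spectrum.

    spectrum[i] = intensity at m/z bin i, with the array spanning
    0..M where M is the (binned) precursor mass. b-/y-ion pairs
    satisfy m_b + m_y ~= M, so the self-convolution of a good
    spectrum piles up at the midpoint bin M of the full convolution.
    A rough sketch of the idea, not the published algorithm.
    """
    conv = np.convolve(spectrum, spectrum)   # full convolution, length 2*len-1
    mid = len(spectrum) - 1                  # bin M: the midpoint
    total = conv.sum()
    return conv[mid] / total if total > 0 else 0.0

M = 2000
good = np.zeros(M + 1)
for p in (200, 500, 700, 900):               # b/y pairs at p and M - p
    good[p] = good[M - p] = 1.0

poor = np.zeros(M + 1)
for p in (200, 250, 500, 550, 700, 750, 900, 950):  # no complementary pairs
    poor[p] = 1.0

print(quality_score(good) > quality_score(poor))  # True
```

Only the paired spectrum deposits mass at the midpoint bin, so its score separates cleanly from the unpaired one.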
Rocha-Martins, Maurício; Njaine, Brian; Silveira, Mariana S
2012-01-01
Housekeeping genes have been commonly used as references to normalize gene expression and protein content data because of their presumed constitutive expression. In this paper, we challenge the widely held assumption that housekeeping genes are reliable controls for expression studies in the retina through the investigation of a panel of reference genes potentially suitable for analysis of different stages of retinal development. We applied statistical tools to combinations of retinal developmental stages to assess the most stable internal controls for quantitative RT-PCR (qRT-PCR). The stability of expression of seven putative reference genes (Actb, B2m, Gapdh, Hprt1, Mapk1, Ppia and Rn18s) was analyzed using the geNorm, BestKeeper and Normfinder software. In addition, several housekeeping genes were tested as loading controls for Western blot in the same sample panel, using ImageJ. Overall, for qRT-PCR the combination of Gapdh and Mapk1 showed the highest stability for most experimental sets. Actb was downregulated in more mature stages, while Rn18s and Hprt1 showed the highest variability. We normalized the expression of cyclin D1 using various reference genes and demonstrated that spurious results may arise from blind selection of internal controls. For Western blot, significant variation could be seen among four putative internal controls (β-actin, cyclophilin b, α-tubulin and lamin A/C), while MAPK1 was stably expressed. Putative housekeeping genes exhibit significant variation in both mRNA and protein content during retinal development. Our results showed that a distinct combination of internal controls fits each experimental set in the case of qRT-PCR and that MAPK1 is a reliable loading control for Western blot. These results indicate that biased study outcomes may follow the use of reference genes without prior validation for qRT-PCR and Western blot.
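The geNorm stability measure used above can be sketched: for each candidate gene, M is the average, over all other candidates, of the standard deviation across samples of the pairwise log2 expression ratios, and a low M marks a stable reference. The sketch below uses synthetic expression data and invented gene names; it is not the geNorm software.

```python
import numpy as np

def genorm_m_values(expr, gene_names):
    """geNorm-style stability measure M for candidate reference genes.

    expr: (genes x samples) array of expression levels (linear scale).
    For gene j, M_j is the average over all other genes k of the
    standard deviation across samples of log2(expr_j / expr_k).
    Lower M means more stable relative expression. A sketch of the
    published idea, not the geNorm software itself.
    """
    log_expr = np.log2(expr)
    n = expr.shape[0]
    M = np.empty(n)
    for j in range(n):
        sds = [np.std(log_expr[j] - log_expr[k]) for k in range(n) if k != j]
        M[j] = np.mean(sds)
    return dict(zip(gene_names, M))

rng = np.random.default_rng(0)
samples = 12
stable = 2.0 ** rng.normal(5.0, 0.05, samples)      # tight around 2^5
stable2 = 2.0 ** rng.normal(7.0, 0.05, samples)     # tight around 2^7
drifting = 2.0 ** np.linspace(4.0, 8.0, samples)    # strong developmental trend
m = genorm_m_values(np.vstack([stable, stable2, drifting]),
                    ["GeneA", "GeneB", "GeneC"])
print(min(m, key=m.get))   # one of the stable pair, "GeneA" or "GeneB"
```

The drifting gene mimics a "housekeeping" gene that is in fact regulated across development; its M value is driven up by the trend even though each individual measurement looks plausible.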
Simulations of the stratocumulus-topped boundary layer with a third-order closure model
NASA Technical Reports Server (NTRS)
Moeng, C. H.; Randall, D. A.
1984-01-01
The third-order closure model proposed by Andre et al. (1982), in which the time rate of change terms, the relaxation and rapid effects for the pressure-related terms, and the clipping approximation are included along with the quasi-normal closure, is used to study turbulence in a cloudy layer which is cooled radiatively from above. A spurious oscillation, strongest near the inversion, occurs. An analysis of the problem shows that the oscillation arises from the mean gradient and buoyancy terms of the triple moment equations; these terms are largest near the cloud top. The oscillation is physical, rather than computational. In nature the oscillation is effectively damped, by a mechanism which apparently is not included in our model. In the stably stratified layer just above the mixed layer top, turbulence can excite gravity waves, whose energy is radiated away. Because the closure assumption for the pressure terms does not take into account the transport of wave energy, the model generates spurious oscillations. Damping of the oscillations is possible by introducing diffusion terms into the triple moment equations. With a large enough choice for the diffusion coefficient, the oscillation is effectively eliminated. However, the results are quite sensitive to this ad hoc eddy coefficient.
Ill-posedness in modeling mixed sediment river morphodynamics
NASA Astrophysics Data System (ADS)
Chavarrías, Víctor; Stecca, Guglielmo; Blom, Astrid
2018-04-01
In this paper we analyze the Hirano active layer model used in mixed sediment river morphodynamics concerning its ill-posedness. Ill-posedness causes the solution to be unstable to short-wave perturbations. This implies that the solution presents spurious oscillations, the amplitude of which depends on the domain discretization. Ill-posedness not only produces physically unrealistic results but may also cause failure of numerical simulations. By considering a two-fraction sediment mixture we obtain analytical expressions for the mathematical characterization of the model. Using these we show that the ill-posed domain is larger than what was found in previous analyses, not only comprising cases of bed degradation into a substrate finer than the active layer but also in aggradational cases. Furthermore, by analyzing a three-fraction model we observe ill-posedness under conditions of bed degradation into a coarse substrate. We observe that oscillations in the numerical solution of ill-posed simulations grow until the model becomes well-posed, as the spurious mixing of the active layer sediment and substrate sediment acts as a regularization mechanism. Finally we conduct an eigenstructure analysis of a simplified vertically continuous model for mixed sediment for which we show that ill-posedness occurs in a wider range of conditions than the active layer model.
Network reconstruction via graph blending
NASA Astrophysics Data System (ADS)
Estrada, Rolando
2016-05-01
Graphs estimated from empirical data are often noisy and incomplete due to the difficulty of faithfully observing all the components (nodes and edges) of the true graph. This problem is particularly acute for large networks, where the number of components may far exceed available surveillance capabilities. Errors in the observed graph can render subsequent analyses invalid, so it is vital to develop robust methods that can minimize these observational errors. Errors in the observed graph may include missing and spurious components, as well as fused (multiple nodes are merged into one) and split (a single node is misinterpreted as many) nodes. Traditional graph reconstruction methods are only able to identify missing or spurious components (primarily edges, and to a lesser degree nodes), so we developed a novel graph blending framework that allows us to cast the full estimation problem as a simple edge addition/deletion problem. Armed with this framework, we systematically investigate the viability of various topological graph features, such as the degree distribution or the clustering coefficients, and existing graph reconstruction methods for tackling the full estimation problem. Our experimental results suggest that incorporating any topological feature as a source of information actually hinders reconstruction accuracy. We provide a theoretical analysis of this phenomenon and suggest several avenues for improving this estimation problem.
Problems and programming for analysis of IUE high resolution data for variability
NASA Technical Reports Server (NTRS)
Grady, C. A.
1981-01-01
Observations of variability in stellar winds provide an important probe of their dynamics. It is crucial, however, to know that any variability seen in a data set can be clearly attributed to the star and not to instrumental or data processing effects. In the course of analysis of IUE high resolution data of alpha Cam and other O, B and Wolf-Rayet stars, several effects were found which cause spurious variability or spurious spectral features in our data. Programming was developed to partially compensate for these effects using the Interactive Data Language (IDL) on the LASP PDP 11/34. Use of an interactive language such as IDL is particularly suited to the analysis of variability data, as it permits the use of efficient programs coupled with the judgement of the scientist at each stage of processing.
The rate of cis-trans conformation errors is increasing in low-resolution crystal structures.
Croll, Tristan Ian
2015-03-01
Cis-peptide bonds (with the exception of X-Pro) are exceedingly rare in native protein structures, yet a check for these is not currently included in the standard workflow for some common crystallography packages nor in the automated quality checks that are applied during submission to the Protein Data Bank. This appears to be leading to a growing rate of inclusion of spurious cis-peptide bonds in low-resolution structures both in absolute terms and as a fraction of solved residues. Most concerningly, it is possible for structures to contain very large numbers (>1%) of spurious cis-peptide bonds while still achieving excellent quality reports from MolProbity, leading to concerns that ignoring such errors is allowing software to overfit maps without producing telltale errors in, for example, the Ramachandran plot.
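The check the paper calls for amounts to measuring the peptide-bond omega dihedral, defined by the atoms CA(i), C(i), N(i+1), CA(i+1): omega near 180 degrees is trans, near 0 degrees is cis. A minimal geometric sketch follows; the coordinates are synthetic, the 30-degree tolerance is an illustrative choice, and a real validator would also treat proline and strongly twisted bonds separately.

```python
import numpy as np

def dihedral(a, b, c, d):
    """Signed dihedral angle in degrees defined by four points."""
    b1, b2, b3 = b - a, c - b, d - c
    n1, n2 = np.cross(b1, b2), np.cross(b2, b3)
    y = np.dot(np.cross(n1, n2), b2 / np.linalg.norm(b2))
    x = np.dot(n1, n2)
    return np.degrees(np.arctan2(y, x))

def peptide_is_cis(ca_i, c_i, n_next, ca_next, tol=30.0):
    """Flag a peptide bond as cis when |omega| < tol degrees.

    Omega is the dihedral CA(i)-C(i)-N(i+1)-CA(i+1); native bonds
    are overwhelmingly trans (omega near 180), so a small |omega|
    at a non-proline residue deserves scrutiny. Geometry check only,
    a sketch rather than a MolProbity-grade validator.
    """
    omega = dihedral(ca_i, c_i, n_next, ca_next)
    return abs(omega) < tol

# Synthetic planar coordinates for the four backbone atoms.
a = np.array([0.0, 1.0, 0.0])   # CA(i)
b = np.array([0.0, 0.0, 0.0])   # C(i)
c = np.array([1.0, 0.0, 0.0])   # N(i+1)
print(peptide_is_cis(a, b, c, np.array([1.0, -1.0, 0.0])))  # False (trans)
print(peptide_is_cis(a, b, c, np.array([1.0, 1.0, 0.0])))   # True (cis)
```

Running such a scan over every residue pair of a deposited model is cheap, which is the abstract's point: the check could easily sit in the standard refinement and deposition workflow.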
Martingales, nonstationary increments, and the efficient market hypothesis
NASA Astrophysics Data System (ADS)
McCauley, Joseph L.; Bassler, Kevin E.; Gunaratne, Gemunu H.
2008-06-01
We discuss the deep connection between nonstationary increments, martingales, and the efficient market hypothesis for stochastic processes x(t) with arbitrary diffusion coefficients D(x,t). We explain why a test for a martingale is generally a test for uncorrelated increments. We explain why martingales look Markovian at the level of both simple averages and 2-point correlations. But while a Markovian market has no memory to exploit and cannot be beaten systematically, a martingale admits memory that might be exploitable in higher order correlations. We also use the analysis of this paper to correct a misstatement of the ‘fair game’ condition in terms of serial correlations in Fama’s paper on the EMH. We emphasize that the use of the log increment as a variable in data analysis generates spurious fat tails and spurious Hurst exponents.
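The point that a martingale can have uncorrelated increments yet still carry exploitable memory in higher-order correlations can be illustrated with an ARCH-type process. This is an illustrative stand-in with arbitrary parameter values, not the processes analyzed in the paper.

```python
import numpy as np

def autocorr(x, lag=1):
    """Sample autocorrelation of a series at the given lag."""
    x = x - x.mean()
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

# Martingale increments with memory: an ARCH(1)-type process.
# dx_t = eps_t * sigma_t with sigma_t^2 = w + a * dx_{t-1}^2 and
# eps_t ~ N(0, 1). Then E[dx_t | past] = 0, so the cumulative sum
# is a martingale and the increments are uncorrelated, yet the
# squared increments are strongly correlated: memory lives in the
# higher-order structure.
rng = np.random.default_rng(42)
n, w, a = 100_000, 0.1, 0.5
dx = np.empty(n)
prev = 0.0
for t in range(n):
    sigma = np.sqrt(w + a * prev * prev)
    prev = dx[t] = sigma * rng.standard_normal()

print(abs(autocorr(dx)))   # near 0: increments look "fair game"
print(autocorr(dx * dx))   # clearly positive: volatility has memory
```

A serial-correlation test on the increments therefore passes even though the process is predictable at the level of second moments, which is the distinction the abstract draws between a martingale and a Markovian market.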
NASA Astrophysics Data System (ADS)
Lu, Xinhua; Mao, Bing; Dong, Bingjiang
2018-01-01
Xia et al. (2017) proposed a novel, fully implicit method for the discretization of the bed friction terms for solving the shallow-water equations. The friction terms contain h^(-7/3) (h denotes water depth), which may be extremely large, introducing machine error when h approaches zero. To address this problem, Xia et al. (2017) introduced auxiliary variables (their equations (37) and (38)) so that h^(-4/3) rather than h^(-7/3) is calculated, and solved a transformed equation (their equation (39)). The introduced auxiliary variables require extra storage. We analyzed the magnitude of the friction terms and found that, taken as a whole, these terms do not exceed the machine floating-point range, and we therefore propose a simple-to-implement technique that splits h^(-7/3) across different parts of the friction terms to avoid introducing machine error. This technique needs neither extra storage nor the solution of a transformed equation and is thus more efficient for simulations. We also show that the surface reconstruction method proposed by Xia et al. (2017) may lead to predictions with spurious wiggles because the reconstructed Riemann states may misrepresent the water gravitational effect.
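The splitting trick described above can be sketched numerically. Writing the Manning friction source as S_f = g n^2 q |q| / h^(7/3), evaluating h^(-7/3) directly exceeds double-precision range for very small depths, while distributing the power as (q/h)(|q|/h) h^(-1/3) keeps every intermediate bounded, since q/h is the velocity. The function names and parameter values below are illustrative, not the authors' code.

```python
import math

def friction_naive(q, h, n=0.03, g=9.81):
    """g * n^2 * q * |q| / h^(7/3), computed directly.

    h**(-7/3) hits the limits of double precision long before
    q*|q| does, so this form fails for very small depths.
    """
    return g * n * n * q * abs(q) * h ** (-7.0 / 3.0)

def friction_split(q, h, n=0.03, g=9.81):
    """Same quantity with the power of h split across the factors:

    S = g * n^2 * (q/h) * (|q|/h) * h^(-1/3).
    q/h is the velocity, which stays O(1) as a cell dries, so no
    intermediate result over- or underflows. A sketch of the
    splitting idea, with illustrative parameter values.
    """
    u = q / h
    return g * n * n * u * abs(u) * h ** (-1.0 / 3.0)

h = 1e-140          # a nearly dry cell
q = 1.0 * h         # velocity u = 1 m/s
try:
    s = friction_naive(q, h)
except OverflowError:
    s = float("inf")
print(math.isfinite(s))                      # False: direct form blows up
print(math.isfinite(friction_split(q, h)))   # True: split form is fine
```

For ordinary depths the two forms agree to rounding error; they only diverge in the drying limit, which is exactly where the machine-error argument in the comment applies.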
Low temperature electrical properties of some Pb-free solders
NASA Astrophysics Data System (ADS)
Kisiel, Ryszard; Pekala, Marek
2006-03-01
The electronics industry has been engaged in developing Pb-free technologies for more than ten years. However, not all properties of the new solders have yet been described. The aim of this paper is to present some electrical properties of a new series of Pb-free solders (eutectic SnAg, near-eutectic SnAgCu with and without Bi) in the low-temperature range 10 K to 273 K. The following parameters were analyzed: electrical resistivity, temperature coefficient of resistance, and thermoelectric power. For the Pb-free solders studied, the electrical resistivity above 50 K is a monotonically rising function of temperature. The electrical resistivity of the Bi-containing alloys is higher than that of the remaining ones. The thermoelectric power values at room temperature are about -8 μV/K to -6 μV/K for the Pb-free solders studied, higher than the typical value of -3 μV/K for SnPb solder. The relatively low absolute values, together with the smooth and weak temperature variation of the electrical resistivity of the lead-free solders, make low-temperature applications possible. The moderate values of thermoelectric power around and above room temperature show that, when the solders studied are applied, the temperature should be kept as uniform as possible in order to avoid spurious or noise voltages.
NASA Astrophysics Data System (ADS)
Sozzi, B.; Olivieri, M.; Mariani, P.; Giunti, C.; Zatti, S.; Porta, A.
2014-05-01
Owing to the rapid growth of cooled-detector sensitivity in recent years, temperature differences of 10-20 mK between adjacent objects can in principle be discerned in the image, provided the calibration algorithm (non-uniformity correction, NUC) can take into account and compensate every spatial noise source. To predict how robust the NUC algorithm is under all working conditions, modeling the flux impinging on the detector becomes essential to control and improve the quality of a properly calibrated image under all scene/ambient conditions, including every source of spurious signal. The available literature deals only with non-uniformity caused by pixel-to-pixel differences in detector parameters and by the difference between the reflection of the detector cold part and the housing at the operating temperature. These models do not explain the effects on the NUC results of vignetting, dynamic sources outside and inside the FOV, or reflected contributions from hot spots inside the housing (for example, a thermal reference away from the optical path). We propose a mathematical model in which: 1) the detector and the system (opto-mechanical configuration and scene) are treated separately and represented by two independent transfer functions; 2) on every pixel of the array, the amount of photonic signal coming from the different spurious sources is considered in order to evaluate the effect on residual spatial noise under dynamic operating conditions. This article also contains simulation results showing how this model can be used to predict the amount of spatial noise.
Genovar: a detection and visualization tool for genomic variants.
Jung, Kwang Su; Moon, Sanghoon; Kim, Young Jin; Kim, Bong-Jo; Park, Kiejung
2012-05-08
Along with single nucleotide polymorphisms (SNPs), copy number variation (CNV) is considered an important source of genetic variation associated with disease susceptibility. Despite the importance of CNV, the tools currently available for its analysis often produce false positive results due to limitations such as low resolution of array platforms, platform specificity, and the type of CNV. To resolve this problem, spurious signals must be separated from true signals by visual inspection. None of the previously reported CNV analysis tools supports both this function and the simultaneous visualization of array comparative genomic hybridization (aCGH) data and sequence alignments. The purpose of the present study was to develop a useful program for the efficient detection and visualization of CNV regions that enables the manual exclusion of erroneous signals. A JAVA-based stand-alone program called Genovar was developed. To ascertain whether a detected CNV region is a novel variant, Genovar compares the detected CNV regions with previously reported CNV regions using the Database of Genomic Variants (DGV, http://projects.tcag.ca/variation) and the Single Nucleotide Polymorphism Database (dbSNP). The current version of Genovar is capable of visualizing genomic data from sources such as the aCGH data file and sequence alignment format files. Genovar is freely accessible and provides a user-friendly graphic user interface (GUI) to facilitate the detection of CNV regions. The program also provides comprehensive information to help in the elimination of spurious signals by visual inspection, making Genovar a valuable tool for reducing false positive CNV results. http://genovar.sourceforge.net/.
Buhule, Olive D.; Minster, Ryan L.; Hawley, Nicola L.; Medvedovic, Mario; Sun, Guangyun; Viali, Satupaitea; Deka, Ranjan; McGarvey, Stephen T.; Weeks, Daniel E.
2014-01-01
Background: Batch effects in DNA methylation microarray experiments can lead to spurious results if not properly handled during the plating of samples. Methods: Two pilot studies examining the association of DNA methylation patterns across the genome with obesity in Samoan men were investigated for chip- and row-specific batch effects. For each study, the DNA of 46 obese men and 46 lean men was assayed using Illumina's Infinium HumanMethylation450 BeadChip. In the first study (Sample One), samples from obese and lean subjects were examined on separate chips. In the second study (Sample Two), the samples were balanced on the chips by lean/obese status, age group, and census region. We used the methylumi, wateRmelon, and limma R packages, as well as ComBat, to analyze the data. Principal component analysis and linear regression were, respectively, employed to identify the top principal components and to test for their association with the batches and lean/obese status. To identify differentially methylated positions (DMPs) between obese and lean males at each locus, we used a moderated t-test. Results: Chip effects were effectively removed from Sample Two but not Sample One. In addition, dramatic differences were observed between the two sets of DMP results. After “removing” batch effects with ComBat, Sample One had 94,191 probes differentially methylated at a q-value threshold of 0.05 while Sample Two had zero differentially methylated probes. The disparate results from Sample One and Sample Two likely arise from the confounding of lean/obese status with chip and row batch effects. Conclusion: Even the best possible statistical adjustments for batch effects may not completely remove them. Proper study design is vital for guarding against spurious findings due to such effects. PMID:25352862
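The batch-effect check described here (leading principal components tested for association with batch labels) can be sketched in a few lines. The data below are simulated, and the dimensions and effect size are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_probes = 24, 500

# Simulated methylation-like matrix with an additive chip (batch) offset.
batch = np.repeat([0, 1], n_samples // 2)
data = rng.standard_normal((n_samples, n_probes)) + 0.8 * batch[:, None]

# Leading principal component via SVD of the column-centred matrix.
centred = data - data.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)
pc1 = centred @ vt[0]

# Association of PC1 with the batch labels: if r_squared is large, the
# batch dominates the major axis of variation and confounding is a concern.
r_squared = float(np.corrcoef(pc1, batch)[0, 1] ** 2)
```

When the phenotype of interest is confounded with the batch, as in Sample One, no post hoc adjustment can cleanly separate the two sources of variation; this diagnostic only detects the problem.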
Marquis, Raymond; Biedermann, Alex; Cadola, Liv; Champod, Christophe; Gueissaz, Line; Massonnet, Geneviève; Mazzella, Williams David; Taroni, Franco; Hicks, Tacha
2016-09-01
In a recently published guideline for evaluative reporting in forensic science, the European Network of Forensic Science Institutes (ENFSI) recommended the use of the likelihood ratio for the measurement of the value of forensic results. As a device to communicate the probative value of the results, the ENFSI guideline mentions the possibility of defining and using a verbal scale, which should be unified within a forensic institution. This paper summarizes discussions held between scientists of our institution to develop and implement such a verbal scale. It intends to contribute to the general discussions likely to be faced by any forensic institution that engages in continuous monitoring and improvement of its evaluation and reporting format. We first present published arguments in favour of the use of such verbal qualifiers. We emphasise that verbal qualifiers do not replace the use of numbers to evaluate forensic findings, but are useful for communicating probative value, since the weight of evidence expressed as a likelihood ratio is still apprehended with difficulty both by forensic scientists, especially in the absence of hard data, and by the recipient of information. We further present arguments that support the development of the verbal scale that we propose. Recognising the limits of the use of such a verbal scale, we then discuss its disadvantages: it may lead to the spurious view according to which the value of the observations made in a given case is relative to other cases. Verbal qualifiers are also prone to misunderstandings and cannot be coherently combined with other evidence. We therefore recommend not using the verbal qualifier alone in a written statement.
While scientists should only report on the probability of the findings - and not on the probability of the propositions, which is the duty of the Court - we suggest showing examples to let the recipient of information understand how the scientific evidence affects the probabilities of the propositions. To avoid misunderstandings, we also advise mentioning in the statement what the results do not mean. Finally, we are of the opinion that if experts were able to coherently articulate numbers, and if recipients of information could properly handle such numbers, then verbal qualifiers could be abandoned completely. At that time, numerical expressions of probative value will be appropriately understood, like other numerical measures that most of us understand without the need of any further explanation, such as expressions for length or temperature. Copyright © 2016 The Chartered Society of Forensic Sciences. Published by Elsevier Ireland Ltd. All rights reserved.
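In code, a verbal scale of the kind discussed is just a monotone mapping from likelihood-ratio bands to fixed phrases. The band edges and wording below are hypothetical placeholders, not the scale adopted by the authors' institution or by ENFSI:

```python
# Hypothetical verbal scale: band edges and labels are illustrative only.
BANDS = [
    (10, "weak support"),
    (100, "moderate support"),
    (1_000, "moderately strong support"),
    (10_000, "strong support"),
    (1_000_000, "very strong support"),
]

def verbal_qualifier(lr: float) -> str:
    """Map a likelihood ratio greater than 1 to a verbal qualifier.

    The numerical LR should always accompany the phrase: the qualifier
    communicates the number, it does not replace it.
    """
    if lr <= 1.0:
        raise ValueError("scale shown for LR > 1; invert the LR otherwise")
    for upper, label in BANDS:
        if lr < upper:
            return label
    return "extremely strong support"
```

The fixed band edges make the paper's warning concrete: the phrase attached to a given LR is absolute, not relative to other cases, which is exactly the misreading the authors caution against.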
Solutions of the benchmark problems by the dispersion-relation-preserving scheme
NASA Technical Reports Server (NTRS)
Tam, Christopher K. W.; Shen, H.; Kurbatskii, K. A.; Auriault, L.
1995-01-01
The 7-point stencil Dispersion-Relation-Preserving (DRP) scheme of Tam and Webb is used to solve all six categories of the CAA benchmark problems. The purpose is to show that the scheme is capable of solving linear as well as nonlinear aeroacoustics problems accurately. Nonlinearities inevitably lead to the generation of spurious short-wavelength numerical waves. Often, these spurious waves would overwhelm the entire numerical solution. In this work, the spurious waves are removed by adding artificial selective damping terms to the discretized equations. Category 3 problems test radiation and outflow boundary conditions. In solving these problems, the radiation and outflow boundary conditions of Tam and Webb are used. These conditions are derived from the asymptotic solutions of the linearized Euler equations. Category 4 problems involve solid walls. Here, the wall boundary conditions for high-order schemes of Tam and Dong are employed. These conditions require the use of one ghost value per boundary point per physical boundary condition. In the second problem of this category, the governing equations, when written in cylindrical coordinates, are singular along the axis of the radial coordinate. The proper boundary conditions at the axis are derived by applying the limit r → 0 to the governing equations. The Category 5 problem deals with the numerical noise issue. In the present approach, the time-independent mean flow solution is computed first. Once the residual drops to the machine noise level, the incident sound wave is turned on gradually. The solution is marched in time until a time-periodic state is reached. No exact solution is known for the Category 6 problem. Because of this, the problem is formulated in two totally different ways, first as a scattering problem and then as a direct simulation problem. There is good agreement between the two numerical solutions, which offers confidence in the computed results.
Both formulations are solved as initial value problems. As such, no Kutta condition is required at the trailing edge of the airfoil.
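The role of artificial selective damping can be illustrated on a toy problem. Below, 1D linear advection is discretized with central differences; without damping, rounding noise in the short-wave modes grows without bound, while an added damping term keeps the solution bounded. The 3-point damping stencil and its coefficient are a simplified stand-in for the 7-point selective damping of the DRP scheme, chosen only to show the mechanism:

```python
import numpy as np

def advect(mu, steps=400, N=200, dx=1.0, dt=0.5):
    """March u_t + u_x = 0 with central differences in space, forward Euler
    in time, plus a 3-point artificial damping term of strength mu."""
    x = dx * np.arange(N)
    u = np.exp(-0.01 * (x - 50.0) ** 2)      # smooth initial pulse
    for _ in range(steps):
        dudx = (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)
        damp = mu * (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx
        u = u - dt * dudx + dt * damp
    return u

undamped = advect(mu=0.0)   # spurious short waves grow from rounding noise
damped = advect(mu=0.3)     # damping keeps the solution bounded
```

The undamped run is unstable precisely in the poorly resolved short waves, which is why the damping is designed to be selective: it should remove grid-scale oscillations while leaving the well-resolved long waves nearly untouched.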
Pitfalls in setting up genetic studies on preeclampsia.
Laivuori, Hannele
2013-04-01
This presentation will consider approaches to discovering susceptibility genes for a complex genetic disorder such as preeclampsia. The clinical disease presumably results from the additive effects of multiple sequence variants from the mother and the foetus, together with environmental factors. Disease heterogeneity and underpowered study designs are likely to be behind the non-reproducible results of candidate gene association studies. To avoid spurious findings, the sample size and characteristics of the study populations, as well as replication in an independent study population, should be an essential part of a study design. In family-based linkage studies, the relationship between genotype and phenotype may be modified by a variety of factors. The large number of families needed to discover genetic variants with modest effect sizes is difficult to attain. Moreover, the identification of underlying mutations has proven difficult. When pooling data or performing meta-analyses across different populations, disease and locus heterogeneity may become a major issue. The first genome-wide association studies (GWAS) have identified risk loci for preeclampsia. Adequately powered replication studies are critical in order to replicate the initial GWAS findings, and this approach requires rigorous multiple-testing correction. The expected effect sizes of individual sequence variants on preeclampsia are small, but this approach is likely to yield new clues to the pathogenesis. Rare variants, gene-gene and gene-environment interactions, as well as noncoding genetic variation and epigenetics, are expected to explain the missing heritability. Next-generation sequencing technologies will make large amounts of data on genomes and transcriptomes available. The complexity of these data poses a challenge. Different depths of coverage might be chosen depending on the design of the study, and validation of the results by different methods is mandatory.
In order to minimize disease heterogeneity in genetic studies of preeclampsia, identification of subtypes and intermediate phenotypes would be highly desirable. Copyright © 2013. Published by Elsevier B.V.
Seleson, Pablo; Du, Qiang; Parks, Michael L.
2016-08-16
The peridynamic theory of solid mechanics is a nonlocal reformulation of the classical continuum mechanics theory. At the continuum level, it has been demonstrated that classical (local) elasticity is a special case of peridynamics. Such a connection between these theories has not been extensively explored at the discrete level. This paper investigates the consistency between nearest-neighbor discretizations of linear elastic peridynamic models and finite difference discretizations of the Navier–Cauchy equation of classical elasticity. While nearest-neighbor discretizations in peridynamics have been numerically observed to present grid-dependent crack paths or spurious microcracks, this paper focuses on a different, analytical aspect of such discretizations. We demonstrate that, even in the absence of cracks, such discretizations may be problematic unless a proper selection of weights is used. Specifically, we demonstrate that using the standard meshfree approach in peridynamics, nearest-neighbor discretizations do not reduce, in general, to discretizations of corresponding classical models. We study nodal-based quadratures for the discretization of peridynamic models, and we derive quadrature weights that result in consistency between nearest-neighbor discretizations of peridynamic models and discretized classical models. The quadrature weights that lead to such consistency are, however, model-/discretization-dependent. We motivate the choice of those quadrature weights through a quadratic approximation of displacement fields. The stability of nearest-neighbor peridynamic schemes is demonstrated through a Fourier mode analysis. Finally, an approach based on a normalization of peridynamic constitutive constants at the discrete level is explored. This approach results in the desired consistency for one-dimensional models, but does not work in higher dimensions.
The results of the work presented in this paper suggest that even though nearest-neighbor discretizations should be avoided in peridynamic simulations involving cracks, such discretizations are viable, for example for verification or validation purposes, in problems characterized by smooth deformations. Furthermore, we demonstrate that better quadrature rules in peridynamics can be obtained based on the functional form of solutions.
Quantum nuclear pasta and nuclear symmetry energy
NASA Astrophysics Data System (ADS)
Fattoyev, F. J.; Horowitz, C. J.; Schuetrumpf, B.
2017-05-01
Complex and exotic nuclear geometries, collectively referred to as "nuclear pasta," are expected to appear naturally in dense nuclear matter found in the crusts of neutron stars and in supernova environments. The pasta geometries depend on the average baryon density, proton fraction, and temperature, and are critically important in the determination of many transport properties of matter in supernovae and the crusts of neutron stars. Using a set of self-consistent microscopic nuclear energy density functionals, we present the first results of large-scale quantum simulations of pasta phases at baryon densities 0.03 ≤ ρ ≤ 0.10 fm^-3, proton fractions 0.05 ≤ Yp ≤ 0.40, and zero temperature. The full quantum simulations, in particular, allow us to thoroughly investigate the role and impact of the nuclear symmetry energy on pasta configurations. We use the Sky3D code, which solves the Skyrme Hartree-Fock equations on a three-dimensional Cartesian grid. For the nuclear interaction we use the state-of-the-art UNEDF1 parametrization, which was introduced to study largely deformed nuclei and is hence suitable for studies of nuclear pasta. The density dependence of the nuclear symmetry energy is simulated by tuning two purely isovector observables that are insensitive to the currently available experimental data. We find that a minimum total number of nucleons A = 2000 is necessary to prevent the results from containing spurious shell effects and to minimize finite-size effects. We find that a variety of nuclear pasta geometries are present in the neutron star crust and that the result depends strongly on the nuclear symmetry energy. The impact of the nuclear symmetry energy is less pronounced as the proton fraction increases. Quantum nuclear pasta calculations at T = 0 MeV are shown to get easily trapped in metastable states, and possible remedies to avoid metastable solutions are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bian, Lei, E-mail: bianlei@pku.edu.cn; Pang, Gang, E-mail: 1517191281@qq.com; Tang, Shaoqiang, E-mail: maotang@pku.edu.cn
For the Schrödinger–Poisson system, we propose an ALmost EXact (ALEX) boundary condition to accurately treat the numerical boundaries. Being local in both space and time, the ALEX boundary conditions are demonstrated to be effective in suppressing spurious numerical reflections. Together with the Crank–Nicolson scheme, we simulate a resonant tunneling diode. The algorithm produces numerical results in excellent agreement with those in Mennemann et al. [1], yet at a much reduced complexity. Primary peaks in the wave function profile appear as a consequence of quantum resonance, and should be considered in selecting the cut-off wave number for numerical simulations.
Time-domain model of gyroklystrons with diffraction power input and output
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ginzburg, N. S., E-mail: ginzburg@appl.sci-nnov.ru; Rozental, R. M.; Sergeev, A. S.
A time-domain theory of gyroklystrons with diffraction input and output has been developed. The theory is based on the description of the wave excitation and propagation by a parabolic equation. The results of the simulations are in good agreement with the experimental studies of two-cavity gyroklystrons operating at the first and second cyclotron harmonics. Along with the basic characteristics of the amplification regimes, such as the gain and efficiency, the developed method makes it possible to define the conditions of spurious self-excitation and frequency-locking by an external signal.
Forms of concern: toward an intersubjective perspective.
Tolmacz, Rami
2013-09-01
The growing interest in the issue of concern, which appeared relatively late in psychoanalytical literature, resulted in several distinctions. Winnicott distinguished between concern as an expression of guilt and concern as a manifestation of joy, Brenman Pick distinguished between real concern and spurious concern, and Bowlby distinguished between sensitive and compulsive caregiving. The basic concepts of Buber's dialogical philosophy and intersubjective approaches in psychoanalysis have created fertile ground for the study of concern, and enabled us to conceptualize these distinctions in a way that has heretofore been lacking in psychoanalytical thought.
NASA Technical Reports Server (NTRS)
Oden, J. Tinsley
1995-01-01
Underintegrated methods are investigated with respect to their stability and convergence properties. The focus was on identifying regions where they work and regions where techniques such as hourglass viscosity and hourglass control can be used. The results obtained show that underintegrated methods typically lead to finite element stiffness matrices with spurious modes in the solution. However, problems exist (scalar elliptic boundary value problems) for which underintegration with hourglass control yields convergent solutions. Also, stress averaging in underintegrated stiffness calculations does not necessarily lead to stable or convergent stress states.
A Comparison of Some Difference Schemes for a Parabolic Problem of Zero-Coupon Bond Pricing
NASA Astrophysics Data System (ADS)
Chernogorova, Tatiana; Vulkov, Lubin
2009-11-01
This paper describes a comparison of some numerical methods for solving a convection-diffusion equation subject to dynamical boundary conditions, a problem which arises in zero-coupon bond pricing. The one-dimensional convection-diffusion equation is solved using difference schemes with weights, including standard schemes such as Samarskii's monotone scheme, the FTCS scheme, and the Crank-Nicolson method. The schemes are free of spurious oscillations and satisfy the positivity and maximum principles, as required of the financial, diffusive solution. Numerical results are compared with analytical solutions.
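The weighted ("scheme with weights") family mentioned here can be sketched on a model problem. With weight θ, θ = 1/2 gives Crank-Nicolson and θ = 1 the fully implicit scheme. The coefficients, grid, and initial data below are illustrative, not the bond-pricing problem of the paper; in particular, the dynamical boundary conditions are replaced by homogeneous Dirichlet ones for brevity:

```python
import numpy as np

# Model convection-diffusion problem u_t = a u_xx - b u_x on (0, 1) with
# homogeneous Dirichlet boundaries; a, b, grid, and time step are illustrative.
a, b = 0.1, 1.0
N, dt, steps = 50, 0.002, 200
dx = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)

def spatial_operator(n):
    """Central-difference matrix for a*u_xx - b*u_x on the interior nodes."""
    L = np.zeros((n, n))
    for i in range(n):
        if i > 0:
            L[i, i - 1] = a / dx**2 + b / (2.0 * dx)
        L[i, i] = -2.0 * a / dx**2
        if i < n - 1:
            L[i, i + 1] = a / dx**2 - b / (2.0 * dx)
    return L

def theta_scheme(theta):
    """Scheme with weight theta: 0.5 = Crank-Nicolson, 1.0 = fully implicit."""
    u = np.sin(np.pi * x[1:-1])              # interior initial data
    L = spatial_operator(N - 1)
    I = np.eye(N - 1)
    A = I - dt * theta * L
    B = I + dt * (1.0 - theta) * L
    for _ in range(steps):
        u = np.linalg.solve(A, B @ u)
    return u

u_cn = theta_scheme(0.5)        # Crank-Nicolson
u_impl = theta_scheme(1.0)      # fully implicit
```

With the cell Péclet number b·dx/(2a) kept below 1, the central-difference discretization stays free of the spurious oscillations the abstract refers to; larger Péclet numbers are where monotone (upwinded) schemes such as Samarskii's earn their keep.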
NASA Astrophysics Data System (ADS)
Sombun, S.; Steinheimer, J.; Herold, C.; Limphirat, A.; Yan, Y.; Bleicher, M.
2018-02-01
We study the dependence of the normalized moments of the net-proton multiplicity distributions on the definition of centrality in relativistic nuclear collisions at a beam energy of √s_NN = 7.7 GeV. Using the ultrarelativistic quantum molecular dynamics (UrQMD) model as event generator, we find that the centrality definition has a large effect on the extracted cumulant ratios. Furthermore, we find that the finite efficiency of the centrality determination introduces an additional systematic uncertainty. Finally, we quantitatively investigate the effects of event pile-up and other possible spurious effects which may change the measured proton number. We find that pile-up alone is not sufficient to describe the data and show that a random double counting of events, adding significantly to the measured proton number, affects mainly the higher-order cumulants in the most central collisions.
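The cumulant ratios at the centre of such analyses are simple functions of the central moments of the event-by-event multiplicity distribution. A self-contained sketch, with a Poisson sample standing in for the net-proton numbers (for a Poisson baseline all cumulants equal the mean, so the ratios are 1):

```python
import numpy as np

def cumulants(sample):
    """First four cumulants C1..C4 of a multiplicity sample."""
    mu = sample.mean()
    d = sample - mu
    c2 = np.mean(d**2)
    c3 = np.mean(d**3)
    c4 = np.mean(d**4) - 3.0 * c2**2
    return mu, c2, c3, c4

rng = np.random.default_rng(2)
# Poisson stand-in for an event-by-event proton number distribution:
# every cumulant equals the mean, so C2/C1 = C3/C2 = C4/C2 = 1.
sample = rng.poisson(20.0, size=200_000)
c1, c2, c3, c4 = cumulants(sample)
ratios = (c2 / c1, c3 / c2, c4 / c2)
```

Because the higher cumulants are differences of large moments, they are statistically fragile, which is why centrality definition, efficiency, and pile-up distortions of the sample propagate so strongly into C3 and C4.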
Computations of spray, fuel-air mixing, and combustion in a lean-premixed-prevaporized combustor
NASA Technical Reports Server (NTRS)
Dasgupta, A.; Li, Z.; Shih, T. I.-P.; Kundu, K.; Deur, J. M.
1993-01-01
A code was developed for computing the multidimensional flow, spray, combustion, and pollutant formation inside gas turbine combustors. The code developed is based on a Lagrangian-Eulerian formulation and utilizes an implicit finite-volume method. The focus of this paper is on the spray part of the code (both formulation and algorithm), and a number of issues related to the computation of sprays and fuel-air mixing in a lean-premixed-prevaporized combustor. The issues addressed include: (1) how grid spacings affect the diffusion of evaporated fuel, and (2) how spurious modes can arise through modelling of the spray in the Lagrangian computations. An upwind interpolation scheme is proposed to account for some effects of grid spacing on the artificial diffusion of the evaporated fuel. Also, some guidelines are presented to minimize errors associated with the spurious modes.
Robust Statistical Detection of Power-Law Cross-Correlation.
Blythe, Duncan A J; Nikulin, Vadim V; Müller, Klaus-Robert
2016-06-02
We show that widely used approaches in statistical physics incorrectly indicate the existence of power-law cross-correlations between financial stock market fluctuations measured over several years and the neuronal activity of the human brain lasting for only a few minutes. While such cross-correlations are nonsensical, no current methodology allows them to be reliably discarded, leaving researchers at greater risk when the spurious nature of cross-correlations is not clear from the unrelated origin of the time series and rather requires careful statistical estimation. Here we propose a theory and method (PLCC-test) which allows us to rigorously and robustly test for power-law cross-correlations, correctly detecting genuine and discarding spurious cross-correlations, thus establishing meaningful relationships between processes in complex physical systems. Our method reveals for the first time the presence of power-law cross-correlations between amplitudes of the alpha and beta frequency ranges of the human electroencephalogram.
Fleyer, Michael; Sherman, Alexander; Horowitz, Moshe; Namer, Moshe
2016-05-01
We experimentally demonstrate a wideband frequency-tunable optoelectronic oscillator (OEO) based on injection locking of the OEO to a tunable electronic oscillator. The OEO cavity does not contain a narrowband filter and its frequency can be tuned over a broad bandwidth of 1 GHz. The injection locking is based on minimizing the injected power by adjusting the frequency of one of the OEO cavity modes to be approximately equal to the frequency of the injected signal. The phase noise that is obtained in the injection-locked OEO is similar to that obtained in a long-cavity self-sustained OEO. Although the cavity length of the OEO was long, the spurious modes were suppressed due to the injection locking without the need to use a narrowband filter. The spurious level was significantly below that obtained in a self-sustained OEO after inserting a narrowband electronic filter with a Q-factor of 720 into the cavity.
Bulk Genotyping of Biopsies Can Create Spurious Evidence for Heterogeneity in Mutation Content.
Kostadinov, Rumen; Maley, Carlo C; Kuhner, Mary K
2016-04-01
When multiple samples are taken from the neoplastic tissues of a single patient, it is natural to compare their mutation content. This is often done by bulk genotyping of whole biopsies, but the chance that a mutation will be detected in bulk genotyping depends on its local frequency in the sample. When the underlying mutation count per cell is equal, homogeneous biopsies will have more high-frequency mutations, and thus more detectable mutations, than heterogeneous ones. Using simulations, we show that bulk genotyping of data simulated under a neutral model of somatic evolution generates strong spurious evidence for non-neutrality, because the pattern of tissue growth systematically generates differences in biopsy heterogeneity. Any experiment which compares mutation content across bulk-genotyped biopsies may therefore suggest mutation rate or selection intensity variation even when these forces are absent. We discuss computational and experimental approaches for resolving this problem.
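The mechanism is easy to reproduce in a toy simulation: give two biopsies the same number of mutations per cell, but let each mutation in the heterogeneous biopsy be confined to a subclone, so its bulk allele frequency is lower and may fall below the detection threshold. All frequencies and the threshold below are invented for illustration:

```python
import random

random.seed(3)

DETECTION_THRESHOLD = 0.2    # minimum allele frequency callable in bulk (assumed)
N_MUTATIONS = 100            # equal underlying mutation count per biopsy

def n_detected(frequencies):
    """Number of mutations whose bulk allele frequency is callable."""
    return sum(f >= DETECTION_THRESHOLD for f in frequencies)

# Homogeneous biopsy: every mutation is clonal (heterozygous, frequency 0.5).
homogeneous = [0.5] * N_MUTATIONS
# Heterogeneous biopsy: each mutation is confined to a random subclone,
# diluting its bulk allele frequency.
heterogeneous = [random.uniform(0.0, 0.5) for _ in range(N_MUTATIONS)]

detected_hom = n_detected(homogeneous)
detected_het = n_detected(heterogeneous)
# Fewer mutations are called in the heterogeneous biopsy despite an equal
# per-cell mutation load: spurious "heterogeneity" in the bulk comparison.
```

The difference between `detected_hom` and `detected_het` arises purely from the detection threshold, not from any difference in the underlying mutation process, which is the paper's point.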
On optimal infinite impulse response edge detection filters
NASA Technical Reports Server (NTRS)
Sarkar, Sudeep; Boyer, Kim L.
1991-01-01
The authors outline the design of an optimal, computationally efficient, infinite impulse response edge detection filter. The optimal filter is computed based on Canny's criteria of high signal-to-noise ratio and good localization, together with a criterion on the spurious response of the filter to noise. An expression for the width of the filter, appropriate for infinite-length filters, is incorporated directly into the expression for spurious responses. The three criteria are maximized using the variational method and nonlinear constrained optimization. The optimal filter parameters are tabulated for various values of the filter performance criteria. A complete methodology for implementing the optimal filter using approximating recursive digital filtering is presented. The approximating recursive digital filter is separable into two linear filters operating in two orthogonal directions. The implementation is very simple and computationally efficient, has a constant time of execution for different sizes of the operator, and is readily amenable to real-time hardware implementation.
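The constant-cost property of a recursive (IIR) implementation can be seen in a minimal 1D sketch: a first-order forward/backward exponential filter whose effective width is set by a parameter alpha without changing the per-sample operation count. This illustrates only the recursive-implementation idea, not the authors' optimal filter:

```python
import numpy as np

def recursive_smooth(signal, alpha):
    """First-order forward/backward exponential (IIR) smoother.

    Cost per sample is constant regardless of the effective filter
    width, which is controlled by alpha (smaller alpha = wider filter).
    """
    a = float(np.exp(-alpha))
    fwd = np.empty(len(signal))
    acc = float(signal[0])
    for i, s in enumerate(signal):         # causal pass
        acc = (1.0 - a) * s + a * acc
        fwd[i] = acc
    out = np.empty(len(signal))
    acc = fwd[-1]
    for i in range(len(fwd) - 1, -1, -1):  # anticausal pass
        acc = (1.0 - a) * fwd[i] + a * acc
        out[i] = acc
    return out

# Edges appear as extrema of the derivative of the smoothed signal.
step = np.concatenate([np.zeros(50), np.ones(50)])
smoothed = recursive_smooth(step, alpha=0.5)
edge_location = int(np.argmax(np.diff(smoothed)))
```

In 2D, separability means the same two passes are applied along rows and then along columns, keeping the total cost linear in the number of pixels for any filter width.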
Test Results of a 20 GHz, Low Noise Downconverter for USAT Applications
NASA Technical Reports Server (NTRS)
Fujikawa, Gene (Compiler); Svoboda, James S.
1996-01-01
A key component in the development of the Advanced Communications Technology Satellite (ACTS) ultra small aperture terminal (USAT) earth station is the low noise downconverter (LND). NASA Lewis Research Center has tested a version of an LND designed by Electrodyne Systems Corporation. A number of tests were conducted to characterize the radio frequency performance of the LND over temperature. The test results presented in this paper are frequency response, noise figure, gain, group delay, power transfer characteristics, image rejection, and spurious product suppression. The LND was one of several critical microwave subsystems developed and tested for the ACTS USAT earth stations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaustad, KL; Turner, DD
2009-05-30
This report provides a short description of the Atmospheric Radiation Measurement (ARM) Climate Research Facility (ACRF) microwave radiometer (MWR) RETrieval (MWRRET) value-added product (VAP) algorithm. This algorithm utilizes a complementary physical retrieval method and applies brightness temperature offsets to reduce spurious liquid water path (LWP) bias in clear skies, resulting in significantly improved precipitable water vapor (PWV) and LWP retrievals. We present a general overview of the technique, input parameters, and output products, and describe data quality checks. A more complete discussion of the theory and results is given in Turner et al. (2007b).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaustad, KL; Turner, DD; McFarlane, SA
2011-07-25
This report provides a short description of the Atmospheric Radiation Measurement (ARM) Climate Research Facility microwave radiometer (MWR) Retrieval (MWRRET) value-added product (VAP) algorithm. This algorithm utilizes a complementary physical retrieval method and applies brightness temperature offsets to reduce spurious liquid water path (LWP) bias in clear skies resulting in significantly improved precipitable water vapor (PWV) and LWP retrievals. We present a general overview of the technique, input parameters, output products, and describe data quality checks. A more complete discussion of the theory and results is given in Turner et al. (2007b).
Test results of a 20 GHz, low noise downconverter for USAT applications
NASA Technical Reports Server (NTRS)
Fujikawa, Gene; Svoboda, James S.
1995-01-01
A key component in the development of the Advanced Communications Technology Satellite (ACTS) ultra small aperture terminal (USAT) earth station is the low noise downconverter (LND). NASA Lewis Research Center (LeRC) has tested a version of an LND designed by Electrodyne Systems Corporation. A number of tests were conducted to characterize the radio frequency performance of the LND over temperature. The test results presented in this paper are frequency response, noise figure, gain, group delay, power transfer characteristics, image rejection, and spurious product suppression. The LND was one of several critical microwave subsystems developed and tested for the ACTS USAT earth stations.
A resolvable subfilter-scale model specific to large-eddy simulation of under-resolved turbulence
NASA Astrophysics Data System (ADS)
Zhou, Yong; Brasseur, James G.; Juneja, Anurag
2001-09-01
Large-eddy simulation (LES) of boundary-layer flows has serious deficiencies near the surface when a viscous sublayer either does not exist (rough walls) or is not practical to resolve (high Reynolds numbers). In previous work, we have shown that the near-surface errors arise from the poor performance of algebraic subfilter-scale (SFS) models at the first several grid levels, where integral scales are necessarily under-resolved and the turbulence is highly anisotropic. In under-resolved turbulence, eddy viscosity and similarity SFS models create a spurious feedback loop between predicted resolved-scale (RS) velocity and modeled SFS acceleration, and are unable to simultaneously capture SFS acceleration and RS-SFS energy flux. To break the spurious coupling in a dynamically meaningful manner, we introduce a new modeling strategy in which the grid-resolved subfilter velocity is estimated from a separate dynamical equation containing the essential inertial interactions between SFS and RS velocity. This resolved SFS (RSFS) velocity is then used as a surrogate for the complete SFS velocity in the SFS stress tensor. We test the RSFS model by comparing LES of highly under-resolved anisotropic buoyancy-generated homogeneous turbulence with a corresponding direct numerical simulation (DNS). The new model successfully suppresses the spurious feedback loop between RS velocity and SFS acceleration, and greatly improves model predictions of the anisotropic structure of SFS acceleration and resolved velocity fields. Unlike algebraic models, the RSFS model accurately captures SFS acceleration intensity and RS-SFS energy flux, even during the nonequilibrium transient, and properly partitions SFS acceleration between SFS stress divergence and SFS pressure force.
On the interpretation of synchronization in EEG hyperscanning studies: a cautionary note.
Burgess, Adrian P
2013-01-01
EEG hyperscanning is a method for studying two or more individuals simultaneously with the objective of elucidating how co-variations in their neural activity (i.e., hyperconnectivity) are influenced by their behavioral and social interactions. The aim of this study was to compare the performance of different hyperconnectivity measures using (i) simulated data, where the degree of coupling could be systematically manipulated, and (ii) individually recorded human EEG combined into pseudo-pairs of participants where no hyper-connections could exist. With simulated data we found that each of the most widely used measures of hyperconnectivity was biased and detected hyper-connections where none existed. With pseudo-pairs of human data we found spurious hyper-connections that arose because there were genuine similarities between the EEG recorded from different people independently but under the same experimental conditions. Specifically, there were systematic differences between experimental conditions in terms of the rhythmicity of the EEG that were common across participants. As any imbalance between experimental conditions in terms of stimulus presentation or movement may affect the rhythmicity of the EEG, this problem could apply in many hyperscanning contexts. Furthermore, as these spurious hyper-connections reflected real similarities between the EEGs, they were not Type-1 errors that could be overcome by some appropriate statistical control. However, some measures that have not previously been used in hyperconnectivity studies, notably the circular correlation coefficient (CCorr), were less susceptible to detecting spurious hyper-connections of this type. The reason for this advantage in performance is discussed and the use of the CCorr as an alternative measure of hyperconnectivity is advocated.
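The circular correlation coefficient (CCorr) the author advocates can be sketched as follows. This is a minimal illustration using the Fisher-Lee definition for two phase series; the variable names and test signals are ours, not taken from the study:

```python
import numpy as np

def circular_correlation(a, b):
    """Circular correlation coefficient (Fisher-Lee) between two phase
    series a and b, given in radians."""
    # Circular means: angle of the mean resultant vector
    abar = np.angle(np.mean(np.exp(1j * a)))
    bbar = np.angle(np.mean(np.exp(1j * b)))
    num = np.sum(np.sin(a - abar) * np.sin(b - bbar))
    den = np.sqrt(np.sum(np.sin(a - abar) ** 2) * np.sum(np.sin(b - bbar) ** 2))
    return num / den

rng = np.random.default_rng(0)
phases = rng.uniform(-np.pi, np.pi, 1000)
print(circular_correlation(phases, phases))                            # exactly 1
print(circular_correlation(phases, rng.uniform(-np.pi, np.pi, 1000)))  # near 0
```

Because the statistic works on deviations from the circular mean, a constant phase offset between the two series does not inflate the estimate, which is part of why it behaved differently from the standard hyperconnectivity measures in the study.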
On the estimation of phase synchronization, spurious synchronization and filtering
NASA Astrophysics Data System (ADS)
Rios Herrera, Wady A.; Escalona, Joaquín; Rivera López, Daniel; Müller, Markus F.
2016-12-01
Phase synchronization, viz., the adjustment of the instantaneous frequencies of two interacting self-sustained nonlinear oscillators, is frequently used for the detection of a possible interrelationship between empirical data recordings. In this context, the proper estimation of the instantaneous phase from a time series is a crucial aspect. The probability that numerical estimates provide a physically relevant meaning depends sensitively on the shape of its power spectral density. For this purpose, the power spectrum should be narrow banded, possessing only one prominent peak [M. Chavez et al., J. Neurosci. Methods 154, 149 (2006)]. If this condition is not fulfilled, band-pass filtering seems to be the appropriate technique for pre-processing data for a subsequent synchronization analysis. However, it was reported that band-pass filtering might induce spurious synchronization [L. Xu et al., Phys. Rev. E 73, 065201(R) (2006); J. Sun et al., Phys. Rev. E 77, 046213 (2008); and J. Wang and Z. Liu, EPL 102, 10003 (2013)], a statement that, without further specification, casts doubt on all measures that aim to quantify phase synchronization of broadband field data. We show, using signals derived from different test frameworks, that appropriate filtering does not induce spurious synchronization. Instead, filtering in the time domain tends to wash out existent phase interrelations between signals. Furthermore, we show that measures derived for the estimation of phase synchronization, like the mean phase coherence, are also useful for the detection of interrelations between time series that are not necessarily derived from coupled self-sustained nonlinear oscillators.
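As a concrete illustration of the measure named above, the mean phase coherence R = |⟨exp(i(φ₁ − φ₂))⟩| can be estimated from the analytic signal. This is a sketch with synthetic signals of our own choosing, not the test frameworks used in the paper:

```python
import numpy as np
from scipy.signal import hilbert

def mean_phase_coherence(x, y):
    """Mean phase coherence R = |<exp(i*(phi_x - phi_y))>|, with the
    instantaneous phases taken from the analytic (Hilbert) signal."""
    phi_x = np.angle(hilbert(x))
    phi_y = np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * (phi_x - phi_y))))

t = np.linspace(0.0, 10.0, 2000)
x = np.sin(2 * np.pi * 3.0 * t)
y = np.sin(2 * np.pi * 3.0 * t + 0.7)    # constant phase lag -> R near 1
rng = np.random.default_rng(1)
z = rng.standard_normal(t.size)          # unrelated broadband noise -> low R
print(mean_phase_coherence(x, y), mean_phase_coherence(x, z))
```

R lies in [0, 1]: 1 for perfectly locked phases, near 0 for independent ones; the Hilbert phase is physically meaningful only for narrow-band signals, which is precisely the point the abstract makes about pre-filtering.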
Ghost interactions in MEG/EEG source space: A note of caution on inter-areal coupling measures.
Palva, J Matias; Wang, Sheng H; Palva, Satu; Zhigalov, Alexander; Monto, Simo; Brookes, Matthew J; Schoffelen, Jan-Mathijs; Jerbi, Karim
2018-06-01
When combined with source modeling, magneto- (MEG) and electroencephalography (EEG) can be used to study long-range interactions among cortical processes non-invasively. Estimation of such inter-areal connectivity is nevertheless hindered by instantaneous field spread and volume conduction, which artificially introduce linear correlations and impair source separability in cortical current estimates. To overcome the inflating effects of linear source mixing inherent to standard interaction measures, alternative phase- and amplitude-correlation based connectivity measures, such as imaginary coherence and orthogonalized amplitude correlation have been proposed. Being by definition insensitive to zero-lag correlations, these techniques have become increasingly popular in the identification of correlations that cannot be attributed to field spread or volume conduction. We show here, however, that while these measures are immune to the direct effects of linear mixing, they may still reveal large numbers of spurious false positive connections through field spread in the vicinity of true interactions. This fundamental problem affects both region-of-interest-based analyses and all-to-all connectome mappings. Most importantly, beyond defining and illustrating the problem of spurious, or "ghost" interactions, we provide a rigorous quantification of this effect through extensive simulations. Additionally, we further show that signal mixing also significantly limits the separability of neuronal phase and amplitude correlations. We conclude that spurious correlations must be carefully considered in connectivity analyses in MEG/EEG source space even when using measures that are immune to zero-lag correlations. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
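The zero-lag insensitivity that motivates imaginary coherence is easy to demonstrate at the sensor level (a toy sketch of our own, not the paper's source-space simulations): two channels that see the same source with no delay show high coherence magnitude but a near-zero imaginary part:

```python
import numpy as np
from scipy.signal import csd

fs, n = 200.0, 20000
rng = np.random.default_rng(2)
t = np.arange(n) / fs
s = np.sin(2 * np.pi * 10.0 * t)             # common 10 Hz source
x = s + 0.5 * rng.standard_normal(n)         # "sensor" 1: zero-lag mixing
y = s + 0.5 * rng.standard_normal(n)         # "sensor" 2: zero-lag mixing

# Welch cross- and auto-spectra, then the complex coherency
f, sxy = csd(x, y, fs=fs, nperseg=400)
_, sxx = csd(x, x, fs=fs, nperseg=400)
_, syy = csd(y, y, fs=fs, nperseg=400)
coherency = sxy / np.sqrt(sxx * syy)

k = np.argmin(np.abs(f - 10.0))              # frequency bin of the source
# High coherence magnitude, but near-zero imaginary part at zero lag
print(abs(coherency[k]), abs(coherency[k].imag))
```

The paper's caution is that this immunity applies only to the *direct* effect of mixing; field spread around a genuine lagged interaction can still produce spurious "ghost" connections that the imaginary part does not remove.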
Describing excited state relaxation and localization in TiO 2 nanoparticles using TD-DFT
Berardo, Enrico; Hu, Han -Shi; van Dam, Hubertus J. J.; ...
2014-02-26
We have investigated the description of excited state relaxation in naked and hydrated TiO2 nanoparticles using Time-Dependent Density Functional Theory (TD-DFT) with three common hybrid exchange-correlation (XC) potentials: B3LYP, CAM-B3LYP and BHLYP. Use of TD-CAM-B3LYP and TD-BHLYP yields qualitatively similar results for all structures, which are also consistent with predictions of coupled cluster theory for small particles. TD-B3LYP, in contrast, is found to make rather different predictions, including apparent conical intersections for certain particles that are not observed with TD-CAM-B3LYP nor with TD-BHLYP. In line with our previous observations for vertical excitations, the issue with TD-B3LYP appears to be the inherent tendency of TD-B3LYP, and other XC potentials with no or a low percentage of Hartree-Fock-like exchange, to spuriously stabilize the energy of charge-transfer (CT) states. Even in the case of hydrated particles, for which vertical excitations are generally well described with all XC potentials, the use of TD-B3LYP appears to result in CT problems for certain particles. We hypothesize that the spurious stabilization of CT states by TD-B3LYP may even drive the excited state optimizations to different excited state geometries than those obtained using TD-CAM-B3LYP or TD-BHLYP. In conclusion, focusing on the TD-CAM-B3LYP and TD-BHLYP results, excited state relaxation in naked and hydrated TiO2 nanoparticles is predicted to be associated with a large Stokes' shift.
Buhule, Olive D; Minster, Ryan L; Hawley, Nicola L; Medvedovic, Mario; Sun, Guangyun; Viali, Satupaitea; Deka, Ranjan; McGarvey, Stephen T; Weeks, Daniel E
2014-01-01
Batch effects in DNA methylation microarray experiments can lead to spurious results if not properly handled during the plating of samples. Two pilot studies examining the association of DNA methylation patterns across the genome with obesity in Samoan men were investigated for chip- and row-specific batch effects. For each study, the DNA of 46 obese men and 46 lean men were assayed using Illumina's Infinium HumanMethylation450 BeadChip. In the first study (Sample One), samples from obese and lean subjects were examined on separate chips. In the second study (Sample Two), the samples were balanced on the chips by lean/obese status, age group, and census region. We used methylumi, watermelon, and limma R packages, as well as ComBat, to analyze the data. Principal component analysis and linear regression were, respectively, employed to identify the top principal components and to test for their association with the batches and lean/obese status. To identify differentially methylated positions (DMPs) between obese and lean males at each locus, we used a moderated t-test. Chip effects were effectively removed from Sample Two but not Sample One. In addition, dramatic differences were observed between the two sets of DMP results. After "removing" batch effects with ComBat, Sample One had 94,191 probes differentially methylated at a q-value threshold of 0.05 while Sample Two had zero differentially methylated probes. The disparate results from Sample One and Sample Two likely arise due to the confounding of lean/obese status with chip and row batch effects. Even the best possible statistical adjustments for batch effects may not completely remove them. Proper study design is vital for guarding against spurious findings due to such effects.
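The principal-component check for batch effects described above can be sketched in a few lines. Synthetic data stand in for methylation values here; the probe counts and effect size are arbitrary, chosen only to mimic a chip-confounded design:

```python
import numpy as np

rng = np.random.default_rng(42)
n_probes, n_per_chip = 500, 46
# Two chips, 46 samples each; chip 2 carries an additive technical shift
# on a subset of probes (a stand-in for a real batch effect)
chip = np.repeat([0, 1], n_per_chip)
data = rng.standard_normal((2 * n_per_chip, n_probes))
data[chip == 1, :100] += 1.5

# Top principal component via SVD of the column-centered matrix
centered = data - data.mean(axis=0)
u, sv, vt = np.linalg.svd(centered, full_matrices=False)
pc1 = u[:, 0] * sv[0]

# Association of PC1 with chip assignment (point-biserial correlation)
r = np.corrcoef(pc1, chip)[0, 1]
print(abs(r))  # close to 1: the top component is the batch, not biology
```

When, as in Sample One, case status is confounded with chip, this same component also separates obese from lean samples, and no post hoc adjustment can cleanly disentangle the two — which is the study's central point about design.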
NASA Astrophysics Data System (ADS)
Barabash, Sergey V.; Pramanik, Dipankar
2015-03-01
Development of low-leakage dielectrics for the semiconductor industry, together with many other areas of academic and industrial research, increasingly relies upon ab initio tunneling and transport calculations. Complex band structure (CBS) is a powerful formalism to establish the nature of tunneling modes, providing both a deeper understanding and a guided optimization of materials, with practical applications ranging from screening candidate dielectrics for lowest "ultimate leakage" to identifying charge-neutrality levels and Fermi level pinning. We demonstrate that CBS is prone to a particular type of spurious "phantom" solution, previously deemed true but irrelevant because of a very fast decay. We demonstrate that (i) in complex materials, phantom modes may exhibit very slow decay (appearing as leading tunneling terms, implying qualitative and huge quantitative errors), (ii) the phantom modes are spurious, (iii) unlike the pseudopotential "ghost" states, phantoms are an apparently unavoidable artifact of large numerical basis sets, (iv) a presumed increase in computational accuracy increases the number of phantoms, effectively corrupting the CBS results despite the higher accuracy achieved in resolving the true CBS modes and the real band structure, and (v) the phantom modes cannot be easily separated from the true CBS modes. We discuss implications for direct transport calculations. The strategy for dealing with the phantom states is discussed in the context of optimizing high-quality high-κ dielectric materials for decreased tunneling leakage.
Marmarelis, Vasilis Z.; Zanos, Theodoros P.; Berger, Theodore W.
2010-01-01
This paper presents a new modeling approach for neural systems with point-process (spike) inputs and outputs that utilizes Boolean operators (i.e. modulo 2 multiplication and addition that correspond to the logical AND and OR operations respectively, as well as the AND_NOT logical operation representing inhibitory effects). The form of the employed mathematical models is akin to a “Boolean-Volterra” model that contains the product terms of all relevant input lags in a hierarchical order, where terms of order higher than first represent nonlinear interactions among the various lagged values of each input point-process or among lagged values of various inputs (if multiple inputs exist) as they reflect on the output. The coefficients of this Boolean-Volterra model are also binary variables that indicate the presence or absence of the respective term in each specific model/system. Simulations are used to explore the properties of such models and the feasibility of their accurate estimation from short data-records in the presence of noise (i.e. spurious spikes). The results demonstrate the feasibility of obtaining reliable estimates of such models, with excitatory and inhibitory terms, in the presence of considerable noise (spurious spikes) in the outputs and/or the inputs in a computationally efficient manner. A pilot application of this approach to an actual neural system is presented in the companion paper (Part II). PMID:19517238
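A minimal sketch of the excitatory part of such a Boolean-Volterra model follows. The lag choices are our own toy example, and the inhibitory AND_NOT terms are omitted; the binary output is the logical OR of selected first-order lag terms and second-order AND terms:

```python
import numpy as np

def boolean_volterra_output(x, first_order_lags, pair_lags):
    """Sketch of an excitatory Boolean-Volterra model: the binary output at
    time t is the OR of selected lagged inputs and of AND-pairs of lags."""
    y = np.zeros(len(x), dtype=int)
    max_lag = max(first_order_lags + [l for pair in pair_lags for l in pair])
    for t in range(max_lag, len(x)):
        terms = [x[t - l] for l in first_order_lags]              # 1st order
        terms += [x[t - l1] & x[t - l2] for l1, l2 in pair_lags]  # 2nd order
        y[t] = int(any(terms))                                    # logical OR
    return y

rng = np.random.default_rng(5)
x = rng.integers(0, 2, 30)                 # binary input spike train
y = boolean_volterra_output(x, first_order_lags=[1], pair_lags=[(2, 3)])
```

Estimating which binary coefficients are present (the identification problem the paper studies) then amounts to a combinatorial search that must be robust to spurious spikes in x and y.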
Hurford, Amy
2009-05-20
Movement data are frequently collected using Global Positioning System (GPS) receivers, but recorded GPS locations are subject to errors. While past studies have suggested methods to improve location accuracy, mechanistic movement models utilize distributions of turning angles and directional biases, and these data present a new challenge in recognizing and reducing the effect of measurement error. I collected locations from a stationary GPS collar, analyzed a probabilistic model, and used Monte Carlo simulations to understand how measurement error affects measured turning angles and directional biases. Results from each of the three methods were in complete agreement: measurement error gives rise to a systematic bias whereby a stationary animal is most likely to be measured as turning 180 degrees or moving towards a fixed point in space. These spurious effects occur in GPS data when the measured distance between locations is <20 meters. Measurement error must be considered as a possible cause of 180 degree turning angles in GPS data. Consequences of failing to account for measurement error include predicting overly tortuous movement, predicting numerous returns to previously visited locations, and inaccurately predicting species range, core areas, and the frequency of crossing linear features. By understanding the effect of GPS measurement error, ecologists are able to disregard false signals and more accurately design conservation plans for endangered wildlife.
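The headline effect, that pure measurement error around a stationary point yields turning angles clustered near 180 degrees, is easy to reproduce (a sketch with an arbitrary error scale, not the collar data from the study):

```python
import numpy as np

rng = np.random.default_rng(0)
# A stationary "animal": every GPS fix is pure measurement error around (0, 0)
fixes = rng.normal(0.0, 5.0, size=(10000, 2))     # sigma = 5 m, illustrative

# Turning angle at each interior fix: change of heading between steps
steps = np.diff(fixes, axis=0)
headings = np.arctan2(steps[:, 1], steps[:, 0])
turns = np.angle(np.exp(1j * np.diff(headings)))  # wrapped into (-pi, pi]

# Independent errors make consecutive steps anti-correlated (they share the
# middle fix with opposite sign), so turns pile up near +/-180 degrees
frac_reversals = np.mean(np.abs(turns) > np.pi / 2)
print(frac_reversals)  # well above the 0.5 expected for uniform turning
```

The same anti-correlation disappears once true step lengths are large relative to the error scale, consistent with the <20 m threshold reported in the abstract.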
An auxiliary optimization method for complex public transit route network based on link prediction
NASA Astrophysics Data System (ADS)
Zhang, Lin; Lu, Jian; Yue, Xianfei; Zhou, Jialin; Li, Yunxuan; Wan, Qian
2018-02-01
Inspired by the missing (new) link prediction and the spurious existing link identification in link prediction theory, this paper establishes an auxiliary optimization method for public transit route network (PTRN) based on link prediction. First, link prediction applied to PTRN is described, and based on reviewing the previous studies, the summary indices set and its algorithms set are collected for the link prediction experiment. Second, through analyzing the topological properties of Jinan’s PTRN established by the Space R method, we found that this is a typical small-world network with a relatively large average clustering coefficient. This phenomenon indicates that the structural similarity-based link prediction will show a good performance in this network. Then, based on the link prediction experiment of the summary indices set, three indices with maximum accuracy are selected for auxiliary optimization of Jinan’s PTRN. Furthermore, these link prediction results show that the overall layout of Jinan’s PTRN is stable and orderly, except for a partial area that requires optimization and reconstruction. The above pattern conforms to the general pattern of the optimal development stage of PTRN in China. Finally, based on the missing (new) link prediction and the spurious existing link identification, we propose optimization schemes that can be used not only to optimize current PTRN but also to evaluate PTRN planning.
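The structural-similarity indices used in such experiments are simple to state; for example, the common-neighbors index scores a candidate link by how many neighbors its endpoints share. This is a toy sketch, not the paper's index set or the Jinan network:

```python
import itertools

def common_neighbors_scores(adj):
    """Rank the non-edges of an undirected graph (adjacency dict of sets)
    by the common-neighbors similarity index |N(u) & N(v)|."""
    scores = {}
    for u, v in itertools.combinations(sorted(adj), 2):
        if v not in adj[u]:                       # only score missing links
            scores[(u, v)] = len(adj[u] & adj[v])
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Toy transit-like network: two hub stops sharing many neighbors
adj = {
    "A": {"C", "D", "E"},
    "B": {"C", "D", "E"},
    "C": {"A", "B"},
    "D": {"A", "B"},
    "E": {"A", "B"},
}
print(common_neighbors_scores(adj)[0])   # ('A', 'B') is the top candidate
```

High-scoring non-edges suggest missing (new) links to add, while existing edges whose endpoints score poorly under the same index are candidates for the spurious-link identification the authors describe.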
Subsurface characterization with localized ensemble Kalman filter employing adaptive thresholding
NASA Astrophysics Data System (ADS)
Delijani, Ebrahim Biniaz; Pishvaie, Mahmoud Reza; Boozarjomehry, Ramin Bozorgmehry
2014-07-01
The ensemble Kalman filter (EnKF), a Monte Carlo sequential data assimilation method, has emerged as a promising tool for subsurface media characterization during the past decade. Due to the high computational cost of large ensembles, EnKF is limited to small ensemble sets in practice. This results in the appearance of spurious correlations in the covariance structure, leading to incorrect updates or possible divergence of the updated realizations. In this paper, a universal/adaptive thresholding method is presented to remove and/or mitigate the spurious correlation problem in the forecast covariance matrix. This method is then extended to regularize the Kalman gain directly. Four different thresholding functions have been considered for the forecast covariance and gain matrices: hard, soft, lasso, and Smoothly Clipped Absolute Deviation (SCAD) functions. Three benchmarks are used to evaluate the performance of these methods: a small 1D linear model and two 2D water flooding cases (in petroleum reservoirs) with different levels of heterogeneity/nonlinearity. It should be noted that besides the adaptive thresholding, the standard distance-dependent localization and bootstrap Kalman gain are also implemented for comparison purposes. We assessed each setup with different ensemble sets to investigate the sensitivity of each method to ensemble size. The results indicate that thresholding the forecast covariance yields more reliable performance than thresholding the Kalman gain. Among the thresholding functions, SCAD is more robust for both covariance and gain estimation. Our analyses emphasize that not all assimilation cycles require thresholding and that it should be performed judiciously during the early assimilation cycles. The proposed adaptive thresholding scheme outperforms the other methods for subsurface characterization of the underlying benchmarks.
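The idea of thresholding a small-ensemble forecast covariance can be sketched with the soft-threshold rule (hard, lasso, and SCAD variants differ only in the shrinkage function; the ensemble below is synthetic and the threshold value arbitrary):

```python
import numpy as np

def soft_threshold(c, tau):
    """Soft-threshold the off-diagonal covariance entries toward zero,
    keeping the diagonal (variances) intact."""
    shrunk = np.sign(c) * np.maximum(np.abs(c) - tau, 0.0)
    np.fill_diagonal(shrunk, np.diag(c))
    return shrunk

# A small ensemble: the sample covariance of *independent* state variables
# is nevertheless full of nonzero entries, i.e. spurious correlations
rng = np.random.default_rng(3)
ens = rng.standard_normal((20, 50))          # 20 members, 50 state variables
c = np.cov(ens, rowvar=False)
c_thr = soft_threshold(c, tau=0.3)

offdiag = ~np.eye(50, dtype=bool)
print(np.mean(c[offdiag] != 0), np.mean(c_thr[offdiag] != 0))
```

With 20 members the sampling noise on each correlation is roughly 1/sqrt(19) ≈ 0.23, so a threshold of this order suppresses most spurious entries while leaving strong, genuine correlations largely intact; an adaptive scheme chooses tau from the data rather than fixing it.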
NASA Astrophysics Data System (ADS)
Haines, B. J.; Bar-Sever, Y. E.; Bertiger, W.; Desai, S.; Owen, S.; Sibois, A.; Webb, F.
2007-12-01
Treating the GRACE tandem mission as an orbiting fiducial laboratory, we have developed new estimates of the phase and group-delay variations of the GPS transmitter antennas. Application of these antenna phase variation (APV) maps have shown great promise in reducing previously unexplained errors in our realization of GPS measurements from the TOPEX/POSEIDON (T/P; 1992--2005) and Jason-1 (2001--) missions. In particular, a 56 mm vertical offset in the solved-for position of the T/P receiver antenna is reduced to insignificance (less than 1 mm). For Jason-1, a spurious long-term (4-yr) drift in the daily antenna offset estimates is reduced from +3.7 to +0.1 mm/yr. Prior ground-based results, based on precise point positioning, also hint at the potential of the GRACE-based APV maps for scale determination, reducing the spurious scale rate by one half. In this paper, we report on the latest APV estimates from GRACE, and provide a further assessment of the impact of the APV maps on realizing the scale of the terrestrial reference frame (TRF) from GPS alone. To address this, we re-analyze over five years of data from a global (40+ station) ground network in a fiducial-free approach, using the new APV maps. A specialized multi-day GPS satellite orbit determination (OD) strategy is employed to better capitalize on dynamical constraints. The resulting estimates of TRF scale are compared to ITRF2005 in order to assess the quality of the solutions.
Hyper-cooling in the nocturnal boundary layer: the Ramdas paradox
NASA Astrophysics Data System (ADS)
Mukund, V.; Ponnulakshmi, V. K.; Singh, D. K.; Subramanian, G.; Sreenivas, K. R.
2010-12-01
Characterizing the interaction between turbulence and radiative processes is necessary for understanding the nocturnal atmospheric boundary layer. The subtle nature of the interaction is exemplified in a phenomenon called the 'Ramdas paradox' or the 'lifted temperature minimum' (LTM), involving preferential cooling near the Earth's surface. The prevailing explanation for the LTM (the VSN model, Vasudeva Murthy et al (1993 Phil. Trans. R. Soc. A 344 183-206)) invokes radiative exchange in a homogeneous nocturnal atmosphere to predict a large cooling of the near-surface air layers. It is shown here that the cooling predicted by the VSN model is spurious, and that any preferential cooling can occur only in a heterogeneous atmosphere. The underlying error is fundamental, and occurs to varying degrees in a wide class of radiative models, in a flux-emissivity formulation, the VSN model being a prominent example. We, for the first time, propose the correct flux-emissivity formulation that eliminates spurious cooling. Results from field observations and laboratory experiments presented here, however, show that the near-surface radiative cooling is real; near-surface cooling rates can be orders of magnitude higher than values elsewhere in the boundary layer. The results presented include the dependence of the LTM on turbulence, the surface emissivity and the thermal inertia of the ground. It is proposed that aerosols provide the heterogeneity needed for the preferential cooling mechanism. Turbulence, by determining the aerosol concentration distribution over the relevant length scales, plays a key role in the phenomenon. Experimental evidence is presented to support this hypothesis.
Global Asymptotic Behavior of Iterative Implicit Schemes
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sweby, P. K.
1994-01-01
The global asymptotic nonlinear behavior of some standard iterative procedures in solving nonlinear systems of algebraic equations arising from four implicit linear multistep methods (LMMs) in discretizing three models of 2 x 2 systems of first-order autonomous nonlinear ordinary differential equations (ODEs) is analyzed using the theory of dynamical systems. The iterative procedures include simple iteration and full and modified Newton iterations. The results are compared with standard Runge-Kutta explicit methods, a noniterative implicit procedure, and the Newton method of solving the steady part of the ODEs. Studies showed that aside from exhibiting spurious asymptotes, all four implicit LMMs can change the type and stability of the steady states of the differential equations (DEs). They also exhibit a drastic distortion but less shrinkage of the basin of attraction of the true solution than standard non-LMM explicit methods. The simple iteration procedure exhibits behavior similar to standard non-LMM explicit methods except that spurious steady-state numerical solutions cannot occur. The numerical basins of attraction of the noniterative implicit procedure mimic more closely the basins of attraction of the DEs and are more efficient than the three iterative implicit procedures for the four implicit LMMs. Contrary to popular belief, the initial data using the Newton method of solving the steady part of the DEs may not have to be close to the exact steady state for convergence. These results can be used as an explanation for possible causes and cures of slow convergence and nonconvergence of steady-state numerical solutions when using an implicit LMM time-dependent approach in computational fluid dynamics.
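One of the building blocks analyzed above, an implicit LMM step solved by full Newton iteration, can be sketched for a scalar autonomous ODE (backward Euler on the logistic equation; the step size and iteration counts are illustrative, and this benign setting does not exhibit the spurious asymptotes the paper studies):

```python
def implicit_euler_newton(f, dfdy, y0, dt, n_steps, newton_iters=8):
    """Backward Euler for y' = f(y); each time step solves the implicit
    equation g(y) = y - y_prev - dt*f(y) = 0 by full Newton iteration."""
    y = y0
    for _ in range(n_steps):
        y_prev = y
        y_new = y_prev                        # warm start from previous state
        for _ in range(newton_iters):
            g = y_new - y_prev - dt * f(y_new)
            y_new -= g / (1.0 - dt * dfdy(y_new))   # Newton update
        y = y_new
    return y

# Logistic equation y' = y(1 - y); the true stable steady state is y = 1
f = lambda y: y * (1.0 - y)
dfdy = lambda y: 1.0 - 2.0 * y
y_final = implicit_euler_newton(f, dfdy, y0=0.1, dt=0.5, n_steps=50)
print(y_final)  # converges to 1.0
```

The paper's point is that for larger step sizes, other LMMs, or other iteration schemes, the combined discrete dynamical system can acquire fixed points and basins of attraction that the underlying ODE does not possess.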
Edge Probability and Pixel Relativity-Based Speckle Reducing Anisotropic Diffusion.
Mishra, Deepak; Chaudhury, Santanu; Sarkar, Mukul; Soin, Arvinder Singh; Sharma, Vivek
2018-02-01
Anisotropic diffusion filters are one of the best choices for speckle reduction in the ultrasound images. These filters control the diffusion flux flow using local image statistics and provide the desired speckle suppression. However, inefficient use of edge characteristics results in either oversmooth image or an image containing misinterpreted spurious edges. As a result, the diagnostic quality of the images becomes a concern. To alleviate such problems, a novel anisotropic diffusion-based speckle reducing filter is proposed in this paper. A probability density function of the edges along with pixel relativity information is used to control the diffusion flux flow. The probability density function helps in removing the spurious edges and the pixel relativity reduces the oversmoothing effects. Furthermore, the filtering is performed in superpixel domain to reduce the execution time, wherein a minimum of 15% of the total number of image pixels can be used. For performance evaluation, 31 frames of three synthetic images and 40 real ultrasound images are used. In most of the experiments, the proposed filter shows a better performance as compared to the state-of-the-art filters in terms of the speckle region's signal-to-noise ratio and mean square error. It also shows a comparative performance for figure of merit and structural similarity measure index. Furthermore, in the subjective evaluation, performed by the expert radiologists, the proposed filter's outputs are preferred for the improved contrast and sharpness of the object boundaries. Hence, the proposed filtering framework is suitable to reduce the unwanted speckle and improve the quality of the ultrasound images.
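For readers unfamiliar with the family of filters being improved upon, one explicit step of classic Perona-Malik anisotropic diffusion can be sketched as follows (an illustrative baseline only; the paper's filter adds edge-probability and pixel-relativity terms and operates on superpixels, none of which is reproduced here):

```python
import numpy as np

def perona_malik_step(img, kappa=0.1, dt=0.2):
    """One explicit step of Perona-Malik diffusion: flux between each pixel
    and its 4 neighbors, attenuated by an edge-stopping function g."""
    # Neighbor differences (periodic borders via np.roll, fine for a toy demo)
    dn = np.roll(img, 1, axis=0) - img
    ds = np.roll(img, -1, axis=0) - img
    de = np.roll(img, 1, axis=1) - img
    dw = np.roll(img, -1, axis=1) - img
    g = lambda d: np.exp(-(d / kappa) ** 2)   # small flux across strong edges
    return img + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)

rng = np.random.default_rng(7)
noisy = np.zeros((64, 64))
noisy[:, 32:] = 1.0                           # a step edge
noisy += 0.05 * rng.standard_normal(noisy.shape)
smoothed = noisy
for _ in range(20):
    smoothed = perona_malik_step(smoothed)
# Noise in the flat regions shrinks while the edge is largely preserved
print(noisy[:, :20].std(), smoothed[:, :20].std())
```

The failure mode the abstract targets is visible in this scheme: when g is driven only by local gradients, speckle noise can masquerade as edges (spurious edges) or genuine edges can be diffused away (oversmoothing), motivating the edge-probability and relativity terms.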
Pedrini, Paolo; Bragalanti, Natalia; Groff, Claudio
2017-01-01
Recently developed methods that integrate multiple data sources arising from the same ecological processes have typically utilized structured data from well-defined sampling protocols (e.g., capture-recapture and telemetry). Despite this new methodological focus, the value of opportunistic data for improving inference about spatial ecological processes is unclear and, perhaps more importantly, no procedures are available to formally test whether parameter estimates are consistent across data sources and whether they are suitable for integration. Using data collected on the reintroduced brown bear population in the Italian Alps, a population of conservation importance, we combined data from three sources: traditional spatial capture-recapture data, telemetry data, and opportunistic data. We developed a fully integrated spatial capture-recapture (SCR) model that included a model-based test for data consistency to first compare model estimates using different combinations of data, and then, by acknowledging data-type differences, evaluate parameter consistency. We demonstrate that opportunistic data lends itself naturally to integration within the SCR framework and highlight the value of opportunistic data for improving inference about space use and population size. This is particularly relevant in studies of rare or elusive species, where the number of spatial encounters is usually small and where additional observations are of high value. In addition, our results highlight the importance of testing and accounting for inconsistencies in spatial information from structured and unstructured data so as to avoid the risk of spurious or averaged estimates of space use and consequently, of population size. Our work supports the use of a single modeling framework to combine spatially-referenced data while also accounting for parameter consistency. PMID:28973034
Ezzati, M; Saleh, H; Kammen, D M
2000-01-01
Acute and chronic respiratory diseases, which are causally linked to exposure to indoor air pollution in developing countries, are the leading cause of global morbidity and mortality. Efforts to develop effective intervention strategies and detailed quantification of the exposure-response relationship for indoor particulate matter require accurate estimates of exposure. We used continuous monitoring of indoor air pollution and individual time-activity budget data to construct detailed profiles of exposure for 345 individuals in 55 households in rural Kenya. Data for analysis were from two hundred ten 14-hour days of continuous real-time monitoring of concentrations of particulate matter ≤10 μm in aerodynamic diameter and the location and activities of household members. These data were supplemented by data on the spatial dispersion of pollution and from interviews. Young and adult women had not only the highest absolute exposure to particulate matter (2,795 and 4,898 μg/m³ average daily exposure concentrations, respectively) but also the largest exposure relative to that of males in the same age group (2.5 and 4.8 times, respectively). Exposure during brief high-intensity emission episodes accounts for 31-61% of the total exposure of household members who take part in cooking and 0-11% for those who do not. Simple models that neglect the spatial distribution of pollution within the home, intense emission episodes, and activity patterns underestimate exposure by 3-71% for different demographic subgroups, resulting in inaccurate and biased estimations. Health and intervention impact studies should therefore consider in detail the critical role of exposure patterns, including the short periods of intense emission, to avoid spurious assessments of risks and benefits. PMID:11017887
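The central accounting step, a time-activity-weighted average over microenvironment episodes, can be sketched as follows; the concentrations and hours are invented for illustration and are not the Kenyan study's data:

```python
def daily_exposure(concentrations, hours):
    """Time-weighted average exposure concentration (ug/m3):
    sum_i C_i * t_i / sum_i t_i over microenvironment episodes."""
    assert len(concentrations) == len(hours)
    return sum(c * t for c, t in zip(concentrations, hours)) / sum(hours)

# Hypothetical day for a cook: 1 h of intense cooking emissions,
# 5 h at indoor background, 8 h outdoors (illustrative values only).
episodes = [(20000.0, 1.0), (1500.0, 5.0), (100.0, 8.0)]
conc, hrs = zip(*episodes)
weighted = daily_exposure(conc, hrs)
# A simple model that folds the cooking hour into the indoor background,
# i.e. neglects the brief intense episode, underestimates the exposure:
naive = daily_exposure([1500.0, 100.0], [6.0, 8.0])
```

Here the episode-aware average is roughly 2,000 μg/m³ against a naive value of 700 μg/m³, which mirrors the abstract's point that neglecting short intense-emission periods biases exposure downward.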
Numerical solution of the electron transport equation
NASA Astrophysics Data System (ADS)
Woods, Mark
The electron transport equation has been solved many times for a variety of reasons. The main difficulty in its numerical solution is that it is a very stiff boundary value problem. The most common numerical methods for solving boundary value problems are symmetric collocation methods and shooting methods. Both types of methods can be applied to the electron transport equation only if the boundary conditions are altered with unrealistic assumptions, because otherwise they require too many points to be practical. Further, they produce oscillating and negative solutions, which are physically meaningless for the problem at hand. For these reasons, all numerical methods for this problem to date are somewhat unusual, having been designed to avoid the problem of extreme stiffness. This dissertation shows that there is no need to introduce spurious boundary conditions or invent other numerical methods for the electron transport equation. Rather, methods for very stiff boundary value problems already exist within the numerical analysis literature. We demonstrate one such method, in which the fast and slow modes of the boundary value problem are essentially decoupled. This allows an upwind finite difference method to be applied to each mode as appropriate, which greatly reduces the number of points needed in the mesh, and we demonstrate how this eliminates the need to define new boundary conditions. The method is verified by showing that, under certain restrictive assumptions, the electron transport equation has an exact solution that can be written as an integral. We show that the solution from the upwind method agrees with the quadrature evaluation of the exact solution, verifying that the upwind method properly solves the electron transport equation. Further, it is demonstrated that the output of the upwind method can be used to compute auroral light emissions.
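The decoupled-modes idea, integrating each mode in its own stable direction, can be illustrated on the simplest stiff model problem u' = -λu (a toy of mine, not the dissertation's transport scheme): an explicit step oscillates and goes negative once λh is large, exactly the pathology described above, while the implicit step, upwinded in the decay direction, stays positive and monotone for any step size.

```python
import numpy as np

def forward_euler(lam, h, n):
    """Explicit step u_{i+1} = (1 - lam*h) * u_i: oscillates and goes
    negative when lam*h > 1, and blows up when lam*h > 2."""
    u = np.empty(n + 1)
    u[0] = 1.0
    for i in range(n):
        u[i + 1] = (1.0 - lam * h) * u[i]
    return u

def backward_euler(lam, h, n):
    """Implicit step u_{i+1} = u_i / (1 + lam*h): positive and
    monotonically decaying for any step size h > 0."""
    u = np.empty(n + 1)
    u[0] = 1.0
    for i in range(n):
        u[i + 1] = u[i] / (1.0 + lam * h)
    return u

lam, h, n = 50.0, 0.1, 20   # lam*h = 5: far too stiff for an explicit step
bad = forward_euler(lam, h, n)
good = backward_euler(lam, h, n)
```

The same contrast, oscillating/negative versus monotone/positive solutions, is what motivates applying an upwind discretization separately to the fast and slow modes of the transport equation.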
The ADER-DG method for seismic wave propagation and earthquake rupture dynamics
NASA Astrophysics Data System (ADS)
Pelties, Christian; Gabriel, Alice; Ampuero, Jean-Paul; de la Puente, Josep; Käser, Martin
2013-04-01
We will present the Arbitrary high-order DERivatives Discontinuous Galerkin (ADER-DG) method for solving the combined elastodynamic wave propagation and dynamic rupture problem. The ADER-DG method enables high-order accuracy in space and time while being implemented on unstructured tetrahedral meshes. A tetrahedral element discretization provides rapid and automated mesh generation as well as geometrical flexibility. Features such as mesh coarsening and local time stepping schemes can be applied to reduce computational effort without introducing numerical artifacts. The method is well suited for parallelization and large-scale high-performance computing since only directly neighboring elements exchange information via numerical fluxes. The concept of fluxes is a key ingredient of the numerical scheme, as it governs the numerical dispersion and diffusion properties and allows the scheme to accommodate boundary conditions, empirical friction laws of dynamic rupture processes, or the combination of different element types and non-conforming mesh transitions. After introducing fault dynamics into the ADER-DG framework, we will demonstrate its specific advantages in benchmarking test scenarios provided by the SCEC/USGS Spontaneous Rupture Code Verification Exercise. An important result of the benchmark is that the ADER-DG method avoids spurious high-frequency contributions in the slip rate spectra and therefore does not require artificial Kelvin-Voigt damping, filtering or other modifications of the produced synthetic seismograms. To demonstrate the capabilities of the proposed scheme we simulate an earthquake scenario, inspired by the 1992 Landers earthquake, that includes branching and curved fault segments. Furthermore, topography is respected in the discretized model to capture the surface waves correctly.
The advanced geometrical flexibility combined with an enhanced accuracy will make the ADER-DG method a useful tool to study earthquake dynamics on complex fault systems in realistic rheologies.
Pharmacogenetics in the Brazilian Population
Suarez-Kurtz, Guilherme
2010-01-01
Brazil is the fifth largest country in the world and its present population, in excess of 190 million, is highly heterogeneous, as a result of centuries of admixture between Amerindians, Europeans, and Sub-Saharan Africans. The estimated individual proportions of biogeographical ancestry vary widely and continuously among Brazilians: most individuals, irrespective of self-identification as White, Brown or Black – the major categories of the Brazilian Census “race/color” system – have significant degrees of European and African ancestry, while a sizeable number also display Amerindian ancestry. These features have important pharmacogenetic (PGx) implications: first, extrapolation of PGx data from relatively well-defined ethnic groups is clearly not applicable to the majority of Brazilians; second, the frequency distribution of polymorphisms in pharmacogenes (e.g., CYP3A5, CYP2C9, GSTM1, ABCB1, GSTM3, VKORC1, etc.) varies continuously among Brazilians and is not captured by race/color self-identification; third, the intrinsic heterogeneity of the Brazilian population must be acknowledged in the design and interpretation of PGx studies in order to avoid spurious conclusions based on improper matching of study cohorts. The peculiarities of PGx in Brazilians are illustrated with data for different therapeutic groups, such as anticoagulants, HIV protease inhibitors and non-steroidal anti-inflammatory drugs, and the challenges and advantages created by population admixture for the study and implementation of PGx are discussed. PGx data for Amerindian groups and Brazilian-born, first-generation Japanese are presented to illustrate the rich diversity of the Brazilian population. Finally, I introduce the reader to the Brazilian Pharmacogenetic Network, or Refargen, a nation-wide consortium of research groups, with the mission to provide leadership in PGx research and education in Brazil, with a population health impact. PMID:21833165
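The matching problem the abstract warns about is the classic population-stratification confound. A minimal simulation (all frequencies and effect sizes are invented) shows how pooling two ancestral groups that differ in both allele frequency and trait mean manufactures a spurious allele-trait association that disappears within either group:

```python
import numpy as np

def stratification_demo(n=2000, seed=1):
    """Admixture confounding sketch: allele frequency AND trait mean
    both differ between two ancestral groups, so a pooled analysis
    shows a strong allele-trait correlation even though the genotype
    has no effect on the trait. Within a single group the correlation
    collapses toward zero. Numbers are purely illustrative."""
    rng = np.random.default_rng(seed)
    group = rng.integers(0, 2, n)                  # 0/1 ancestry label
    freq = np.where(group == 0, 0.1, 0.6)          # allele freq differs by group
    allele = rng.binomial(2, freq)                 # genotype 0/1/2
    trait = 10.0 * group + rng.standard_normal(n)  # trait differs by group,
                                                   # NOT by genotype
    pooled_r = np.corrcoef(allele, trait)[0, 1]
    within_r = np.corrcoef(allele[group == 0], trait[group == 0])[0, 1]
    return pooled_r, within_r
```

Proper cohort matching (or modeling individual ancestry proportions, as the abstract recommends for admixed Brazilians) is what removes the pooled, spurious signal.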
The invisible hand: how British American Tobacco precluded competition in Uzbekistan
Gilmore, Anna B; McKee, Martin; Collin, Jeff
2007-01-01
Background Tobacco industry documents provide a unique opportunity to explore the role transnational corporations (TNCs) played in shaping the poor outcomes of privatisation in the former Soviet Union (FSU). This paper examines British American Tobacco's (BAT's) business conduct in Uzbekistan, where large‐scale smuggling of BAT's cigarettes, BAT's reversal of tobacco control legislation and its human rights abuses of tobacco farmers have been documented previously. This paper focuses, instead, on BAT's attitude to competition, compares BAT's conduct with international standards and assesses its influence on the privatisation process. Methods Analysis of BAT documents released through litigation. Results BAT secured sole negotiator status, precluding the Uzbekistan government from initiating discussions with other parties. Recognising that a competitive tender would greatly increase the cost of investment, BAT went to great lengths to avoid one, ultimately securing President Karimov's support and negotiating a monopoly position in a closed deal. It simultaneously secured exclusion from the monopolies committee, ensuring freedom to set prices, on the basis of the spurious argument that competition would exist from imports. Other anticompetitive moves included bundling all three plants into the deal despite an intention to close two of them, exclusive dealing, and measures designed to prevent market entry by competitors. BAT also secured a large number of exemptions and privileges that further reduced the government's revenue, both one‐off and ongoing. Conclusions BAT's corporate misbehaviour included a wide range of anticompetitive practices, contravened the Organisation for Economic Co-operation and Development's and BAT's own business standards on competition, and restricted the revenue arising from privatisation. This suggests that TNCs have contributed to the failure of privatisation in the FSU.
Conducting open tenders and using enforceable codes to regulate corporate conduct would help deal with some of the problems identified. PMID:17652239
ISO Key Project: Exploring the full range of QUASAR/AGN properties
NASA Technical Reports Server (NTRS)
Wilkes, B.
1998-01-01
The PIA (PHOT Interactive Analysis) software was upgraded as new releases were made available by VILSPA. We have continued to analyze our data but, given the large number of still outstanding problems with the calibration and analysis (listed below), we remain unable to move forward on our scientific program. We have concentrated on observations with long (256 sec) exposure times to avoid the most extreme detector responsivity drift problems, which occur with a change in observed flux level, i.e., as one begins to observe a new target. There remain a significant number of problems with analyzing these data, including: (1) the default calibration source (FCS) observations early in the mission were too short and were affected by strong detector responsivity drifts; (2) the calibration of the FCS sources is not yet well understood, particularly for chopped observations (which includes most of ours); (3) the detector responsivity drift is not well understood, and models are only now becoming available for fitting chopped data; (4) charged particle hits on the detector cause transient responsivity drifts which need to be corrected; (5) the "flat-field" calibration of the long-wavelength (array) detectors C100 and C200 leaves significant residual structure and so needs to be improved; (6) the vignetting correction, which affects detected flux levels in the array detectors, is not yet available; (7) the intra-filter calibrations are not yet available; and (8) the background above 60 microns has a significant gradient, which results in spurious positive and negative "detections" in chopped observations. ISO observation planning, conferences and talks, ground-based observing and other grant-related activities are also briefly discussed.
Origin of the quasiparticle peak in the spectral density of Cr(001) surfaces
NASA Astrophysics Data System (ADS)
Peters, L.; Jacob, D.; Karolak, M.; Lichtenstein, A. I.; Katsnelson, M. I.
2017-12-01
In the spectral density of Cr(001) surfaces, a sharp resonance close to the Fermi level is observed in both experiment and theory. For the physical origin of this peak, two mechanisms have been proposed: a single-particle dz² surface state renormalized by electron-phonon coupling, and an orbital Kondo effect due to the degenerate dxz/dyz states. Despite several experimental and theoretical investigations, the origin is still under debate. In this work, we address this problem by two different approaches of the dynamical mean-field theory: first, by the spin-polarized T-matrix fluctuation exchange approximation, suitable for weakly and moderately correlated systems; second, by the noncrossing approximation derived in the limit of weak hybridization (i.e., for strongly correlated systems), which captures Kondo-type processes. Using recent continuous-time quantum Monte Carlo calculations as a benchmark, we find that the high-energy features of the spectrum, everything except the resonance, are captured within the spin-polarized T-matrix fluctuation exchange approximation. More precisely, the particle-particle processes provide the main contribution. For the noncrossing approximation, it appears that spin-polarized calculations suffer from spurious behavior at the Fermi level. We therefore turned to non-spin-polarized calculations to avoid this unphysical behavior. By employing two plausible starting hybridization functions, it is observed that the characteristics of the resonance depend crucially on the starting point. It appears that only one of these starting hybridizations could result in an orbital Kondo resonance in the presence of a strong magnetic field like that in the Cr(001) surface. It remains for a future investigation to first resolve the unphysical behavior within the spin-polarized noncrossing approximation and then check for an orbital Kondo resonance.
CAA for Jet Noise Physics: Issues and Recent Progress
NASA Technical Reports Server (NTRS)
Mankbadi, Reda
2001-01-01
Dr. Mankbadi summarized recent CAA results. Examples of the effect of various boundary condition schemes on the computed acoustic field, for a point source in a uniform flow, were shown, as were solutions showing the impact of inflow excitations. Results from a large eddy simulation, using a fourth-order MacCormack scheme with a Smagorinsky sub-grid turbulence model, were shown for a Mach 2.1 unheated jet; the computed solutions were free from spurious modes. Results were also shown for a Mach 1.4 jet using LES in the near field and the Kirchhoff method for the far field. Predicted flow field characteristics were in good agreement with data, and predicted far-field directivities were in qualitative agreement with experimental measurements.
An arc control and protection system for the JET lower hybrid antenna based on an imaging system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Figueiredo, J., E-mail: joao.figueiredo@jet.efda.org; Mailloux, J.; Kirov, K.
Arcs are the potentially most dangerous events related to Lower Hybrid (LH) antenna operation. If left uncontrolled they can produce damage and cause plasma disruption by impurity influx. To address this issue, an arc real-time control and protection imaging system for the Joint European Torus (JET) LH antenna has been implemented. The LH system is one of the additional heating systems at JET. It comprises 24 microwave generators (klystrons, operating at 3.7 GHz) providing up to 5 MW of heating and current drive to the JET plasma. This is done through an antenna composed of an array of waveguides facing the plasma. The protection system presented here is based primarily on an imaging arc detection and real-time control system. It has adapted the ITER-like wall hotspot protection system, using an identical CCD camera and real-time image processing unit. A filter has been installed to avoid saturation and spurious system triggers caused by ionization light. The antenna is divided into 24 Regions Of Interest (ROIs), each one corresponding to one klystron. If an arc precursor is detected in a ROI, power is reduced locally, avoiding subsequent potential damage and plasma disruption. The power is subsequently reinstated if, during a defined interval of time, image analysis confirms that arcing is not present. This system was successfully commissioned during the restart phase and the beginning of the 2013 scientific campaign. Since its installation and commissioning, arcs and related phenomena have been prevented. In this contribution we briefly describe the camera, image processing, and real-time control systems. Most importantly, we demonstrate that an LH antenna arc protection system based on CCD camera imaging works. Examples of both controlled and uncontrolled LH arc events and their consequences are shown.
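The published system's image-processing pipeline is not described at code level; purely as a schematic of the ROI logic (the threshold, frame geometry, and trip rule below are all hypothetical), the per-klystron decision could look like:

```python
import numpy as np

def arc_precursor_flags(frame, rois, threshold):
    """Flag each region of interest whose peak pixel intensity exceeds
    a trip threshold; in a protection loop, each flagged ROI would have
    its corresponding klystron's power reduced locally. Hypothetical
    minimal logic, not JET's actual pipeline."""
    flags = []
    for (r0, r1, c0, c1) in rois:
        flags.append(bool(frame[r0:r1, c0:c1].max() >= threshold))
    return flags

# Toy frame with one bright, arc-like spot in the third ROI.
frame = np.zeros((48, 96))
frame[10, 70] = 255.0
rois = [(0, 48, 0, 32), (0, 48, 32, 64), (0, 48, 64, 96)]
flags = arc_precursor_flags(frame, rois, threshold=200.0)
```

A real system would also need the ionization-light filtering and the timed re-instatement logic described above, which are omitted here.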
A STUDY OF THE INDIGOGENIC PRINCIPLE AND IN VITRO MACROPHAGE DIFFERENTIATION
and beta-glucuronidase activities. Moreover, there was a progressive increase in the densities of enzyme reactive centers. Indigo reaction product was...not observed over nuclei; lipid droplets and cell background were free from spurious precipitations. Both galactosidase and glucuronidase were
Tree demography dominates long-term growth trends inferred from tree rings.
Brienen, Roel J W; Gloor, Manuel; Ziv, Guy
2017-02-01
Understanding responses of forests to increasing CO2 and temperature is an important challenge, but no easy task. Tree rings are increasingly used to study such responses. In a recent study, van der Sleen et al. (2014) Nature Geoscience, 8, 4 used tree rings from 12 tropical tree species and found that, despite increases in intrinsic water use efficiency, no growth stimulation is observed. This challenges the idea that increasing CO2 would stimulate growth. Unfortunately, tree ring analysis can be plagued by biases, resulting in spurious growth trends. While their study evaluated several biases, it does not account for all of them. In particular, one bias may have seriously affected their results. Several of the species have recruitment patterns that are not uniform but clustered around one specific year. This results in spurious negative growth trends if growth rates are calculated in fixed size classes, as 'fast-growing' trees reach the sampling diameter earlier than slow growers, so fast growth rates tend to carry earlier calendar dates. We assessed the effect of this 'nonuniform age bias' on observed growth trends and find that van der Sleen's conclusion of a lack of growth stimulation does not hold. Growth trends are, at least partially, driven by underlying recruitment or age distributions. Species with more clustered age distributions show more negative growth trends, and simulations to estimate the effect of species' age distributions show growth trends close to those observed. Re-evaluation of the growth data and correction for the bias result in significant positive growth trends of 1-2% per decade for the full period, and 3-7% since 1950. These observations, however, should be taken cautiously, as multiple biases affect these trend estimates. In all, our results highlight that tree ring studies of long-term growth trends can be strongly influenced by biases if demographic processes are not carefully accounted for. © 2016 The Authors. 
Global Change Biology Published by John Wiley & Sons Ltd.
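The 'nonuniform age bias' argument can be reproduced in a few lines: give every tree the same recruitment year and a constant individual growth rate, so there is no true trend, then note that trees reach a fixed sampling diameter at a calendar year inversely proportional to their growth rate. A small Monte Carlo sketch (the distributional choices are mine, not the authors'):

```python
import numpy as np

def sampling_bias_demo(n=500, diameter=30.0, seed=2):
    """All trees recruit in year 0 and grow at constant individual
    rates, so there is NO true growth trend. Yet among trees measured
    at a fixed diameter, fast growers reach that diameter earlier, so
    growth rate correlates negatively with the calendar year at which
    the sampling size is reached: a spurious 'declining growth' trend."""
    rng = np.random.default_rng(seed)
    rate = rng.lognormal(mean=0.0, sigma=0.4, size=n)  # cm/yr, per tree
    year_reached = diameter / rate                     # year at sampling size
    return np.corrcoef(year_reached, rate)[0, 1]
```

The strongly negative correlation arises purely from the sampling design, which is the mechanism the abstract invokes to explain the apparent lack of growth stimulation.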
NASA Technical Reports Server (NTRS)
Leone, Frank A., Jr.
2015-01-01
A method is presented to represent the large-deformation kinematics of intraply matrix cracks and delaminations in continuum damage mechanics (CDM) constitutive material models. The method involves the additive decomposition of the deformation gradient tensor into 'crack' and 'bulk material' components. The response of the intact bulk material is represented by a reduced deformation gradient tensor, and the opening of an embedded cohesive interface is represented by a normalized cohesive displacement-jump vector. The rotation of the embedded interface is tracked as the material deforms and as the crack opens. The distribution of the total local deformation between the bulk material and the cohesive interface components is determined by minimizing the difference between the cohesive stress and the bulk material stress projected onto the cohesive interface. The improvements to the accuracy of CDM models that incorporate the presented method over existing approaches are demonstrated for a single element subjected to simple shear deformation and for a finite element model of a unidirectional open-hole tension specimen. The material model is implemented as a VUMAT user subroutine for the Abaqus/Explicit finite element software. The presented deformation gradient decomposition method reduces the artificial load transfer across matrix cracks subjected to large shearing deformations, and avoids the spurious secondary failure modes that often occur in analyses based on conventional progressive damage models.
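As a rough schematic of the additive split described above (not the paper's full VUMAT: the interface rotation tracking and the stress-matching partition of the deformation are omitted, and the smearing length h is a modeling assumption of the sketch):

```python
import numpy as np

def split_deformation_gradient(F, jump, normal, h):
    """Additive split of the total deformation gradient into a 'crack'
    part, built from a cohesive displacement jump smeared over a
    characteristic length h, and the remaining 'bulk' part:
        F_crack = outer(jump, normal) / h,   F_bulk = F - F_crack.
    Schematic of smeared-crack kinematics only; the paper additionally
    rotates the embedded interface with the deformation and determines
    the jump by minimizing the cohesive/bulk stress mismatch."""
    F = np.asarray(F, dtype=float)
    F_crack = np.outer(jump, normal) / h
    return F - F_crack, F_crack
```

With the crack opening carried by F_crack, the bulk response is evaluated on the reduced gradient F_bulk, which is what prevents artificial load transfer across widely opened matrix cracks.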
A moving control volume approach to computing hydrodynamic forces and torques on immersed bodies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nangia, Nishant; Johansen, Hans; Patankar, Neelesh A.
Here, we present a moving control volume (CV) approach to computing hydrodynamic forces and torques on complex geometries. The method requires surface and volumetric integrals over a simple and regular Cartesian box that moves with an arbitrary velocity to enclose the body at all times. The moving box is aligned with Cartesian grid faces, which makes the integral evaluation straightforward in an immersed boundary (IB) framework. Discontinuous and noisy derivatives of velocity and pressure at the fluid–structure interface are avoided and far-field (smooth) velocity and pressure information is used. We re-visit the approach to compute hydrodynamic forces and torques through force/torque balance equations in a Lagrangian frame that some of us took in a prior work (Bhalla et al., 2013 [13]). We prove the equivalence of the two approaches for IB methods, thanks to the use of Peskin's delta functions. Both approaches are able to suppress spurious force oscillations and are in excellent agreement, as expected theoretically. Test cases ranging from Stokes to high Reynolds number regimes are considered. We discuss regridding issues for the moving CV method in an adaptive mesh refinement (AMR) context. The proposed moving CV method is not limited to a specific IB method and can also be used, for example, with embedded boundary methods.
Automatic physical inference with information maximizing neural networks
NASA Astrophysics Data System (ADS)
Charnock, Tom; Lavaux, Guilhem; Wandelt, Benjamin D.
2018-04-01
Compressing large data sets to a manageable number of summaries that are informative about the underlying parameters vastly simplifies both frequentist and Bayesian inference. When only simulations are available, these summaries are typically chosen heuristically, so they may inadvertently miss important information. We introduce a simulation-based machine learning technique that trains artificial neural networks to find nonlinear functionals of data that maximize Fisher information: information maximizing neural networks (IMNNs). In test cases where the posterior can be derived exactly, likelihood-free inference based on automatically derived IMNN summaries produces nearly exact posteriors, showing that these summaries are good approximations to sufficient statistics. In a series of numerical examples of increasing complexity and astrophysical relevance we show that IMNNs are robustly capable of automatically finding optimal, nonlinear summaries of the data even in cases where linear compression fails: inferring the variance of Gaussian signal in the presence of noise, inferring cosmological parameters from mock simulations of the Lyman-α forest in quasar spectra, and inferring frequency-domain parameters from LISA-like detections of gravitational waveforms. In this final case, the IMNN summary outperforms linear data compression by avoiding the introduction of spurious likelihood maxima. We anticipate that the automatic physical inference method described in this paper will be essential to obtain both accurate and precise cosmological parameter estimates from complex and large astronomical data sets, including those from LSST and Euclid.
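IMNNs train a network so that its outputs maximize Fisher information; the quantity being maximized can be illustrated without any network by estimating, from simulations, the Gaussian-approximation Fisher information F ≈ (∂μ/∂θ)²/Var of a hand-chosen summary. The toy problem below (inferring the variance of Gaussian noise, one of the paper's test cases, with my own hand-rolled estimator, not the IMNN code) uses the sample second moment, which is close to sufficient, so F should approach the analytic n/(2θ²):

```python
import numpy as np

def summary_fisher(simulator, summary, theta, dtheta, n_sims=4000, seed=3):
    """Gaussian-approximation Fisher information of a scalar summary:
    F ~ (d mean/d theta)^2 / Var(summary), with the mean's derivative
    estimated by central finite differences over simulations. A minimal
    hand-rolled stand-in for the objective an IMNN maximizes."""
    rng = np.random.default_rng(seed)
    s_lo = np.array([summary(simulator(theta - dtheta, rng)) for _ in range(n_sims)])
    s_hi = np.array([summary(simulator(theta + dtheta, rng)) for _ in range(n_sims)])
    dmu = (s_hi.mean() - s_lo.mean()) / (2.0 * dtheta)
    var = 0.5 * (s_lo.var() + s_hi.var())
    return dmu ** 2 / var

# Toy problem: infer the variance theta of 10 zero-mean Gaussian draws.
# Analytic Fisher information is n / (2 * theta^2) = 5 at theta = 1.
n = 10
sim = lambda theta, rng: rng.standard_normal(n) * np.sqrt(theta)
second_moment = lambda x: np.mean(x ** 2)
F = summary_fisher(sim, second_moment, theta=1.0, dtheta=0.05)
```

A poorly chosen summary (say, the sample mean, whose distribution barely depends on the variance) would score a much lower F, which is exactly the signal the network training exploits.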
The Anxious and Ambivalent Partisan
Groenendyk, Eric
2016-01-01
Affective Intelligence Theory (AIT) asserts that anxiety reduces the effect of party identification on candidate preferences (Marcus, Neuman, and MacKuen 2000), but recent studies have raised doubts about this causal claim. Rather than functioning as a moderator of party identification, perhaps anxiety has a direct effect on preferences, or perhaps the relationship is reversed and preferences drive emotions (Ladd and Lenz 2008). Alternatively, Marcus et al.’s measure of anxiety may simply be capturing partisan ambivalence, so the posited relationship is spurious (Lavine, Johnston, and Steenbergen 2012). This paper addresses each of these questions by examining the effect of experimentally induced emotions on the types of considerations that came to mind when a national sample of adult Americans was asked what they liked and disliked about Barack Obama. By directly manipulating anxiety, this experiment avoids the causal ambiguity plaguing this debate and ascertains the true nature of the relationship between anxiety and ambivalence. Consistent with AIT, anxiety led respondents to recall more contemporary considerations, whereas enthusiasm brought to mind more long-standing considerations. Because the political context at the time of the study (fall 2013) was a very tumultuous time for the Obama administration, the increased accessibility of contemporary considerations led Democratic participants to experience more ambivalence in the anxiety condition. This effect was concentrated among those Democrats who were exposed to the most newspaper coverage. PMID:27274573
2016-01-01
Avoiding complementarity between primers when designing a PCR assay constitutes a central rule strongly anchored in the mind of the molecular scientist. 3’-complementarity will extend the primers during PCR elongation using one another as template, consequently disabling further possible involvement in traditional target amplification. However, a 5’-complementarity will leave the primers unchanged during PCR cycles, albeit sequestered to one another, therefore also suppressing target amplification. We show that 5’-complementarity between primers may be exploited in a new PCR method called COMplementary-Primer-Asymmetric (COMPAS)-PCR, using asymmetric primer concentrations to achieve target PCR amplification. Moreover, such a design may paradoxically reduce spurious non-target amplification by actively sequestering the limiting primer. The general principles were demonstrated using 5S rDNA direct repeats as target sequences to design a species-specific assay for identifying Salmo salar and Salmo trutta using almost fully complementary primers overlapping the same target sequence. Specificity was enhanced by using 3’-penultimate point mutations and the assay was further developed to enable identification of S. salar x S. trutta hybrids by High Resolution Melt analysis in a 35 min one-tube assay. This small paradigm shift, using highly complementary primers for PCR, should help develop robust assays that previously would not be considered. PMID:27783658
Hydrodynamic simulations with the Godunov smoothed particle hydrodynamics
NASA Astrophysics Data System (ADS)
Murante, G.; Borgani, S.; Brunino, R.; Cha, S.-H.
2011-10-01
We present results based on an implementation of the Godunov smoothed particle hydrodynamics (GSPH), originally developed by Inutsuka, in the GADGET-3 hydrodynamic code. We first review the derivation of the GSPH discretization of the equations of momentum and energy conservation, starting from the convolution of these equations with the interpolating kernel. The two most important aspects of the numerical implementation of these equations are (a) the appearance of fluid velocity and pressure obtained from the solution of the Riemann problem between each pair of particles, and (b) the absence of an artificial viscosity term. We carry out three different controlled hydrodynamical three-dimensional tests, namely the Sod shock tube, the development of Kelvin-Helmholtz instabilities in a shear-flow test and the 'blob' test describing the evolution of a cold cloud moving against a hot wind. The results of our tests confirm and extend in a number of aspects those recently obtained by Cha, Inutsuka & Nayakshin: (i) GSPH provides a much improved description of contact discontinuities, with respect to smoothed particle hydrodynamics (SPH), thus avoiding the appearance of spurious pressure forces; (ii) GSPH is able to follow the development of gas-dynamical instabilities, such as the Kelvin-Helmholtz and the Rayleigh-Taylor ones; (iii) as a result, GSPH describes the development of curl structures in the shear-flow test and the dissolution of the cold cloud in the 'blob' test.
Besides comparing the results of GSPH with those from standard SPH implementations, we also discuss in detail the effect on the performances of GSPH of changing different aspects of its implementation: choice of the number of neighbours, accuracy of the interpolation procedure to locate the interface between two fluid elements (particles) for the solution of the Riemann problem, order of the reconstruction for the assignment of variables at the interface, choice of the limiter to prevent oscillations of interpolated quantities in the solution of the Riemann Problem. The results of our tests demonstrate that GSPH is in fact a highly promising hydrodynamic scheme, also to be coupled to an N-body solver, for astrophysical and cosmological applications.
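The pairwise Riemann solution at the heart of GSPH can be illustrated in its simplest form. The sketch below uses a linearized (acoustic) Riemann solver for the interface pressure and velocity between a left and right state; this is only an illustrative stand-in, since the actual implementation details of the solver in the paper differ:

```python
import math

def acoustic_riemann(rho_l, u_l, p_l, rho_r, u_r, p_r, gamma=1.4):
    """Linearized (acoustic) Riemann solver: interface pressure and
    velocity between a 'left' and 'right' fluid state, the quantities
    a Godunov-type SPH scheme uses in place of artificial viscosity."""
    c_l = math.sqrt(gamma * p_l / rho_l)   # sound speeds
    c_r = math.sqrt(gamma * p_r / rho_r)
    C_l, C_r = rho_l * c_l, rho_r * c_r    # acoustic impedances
    p_star = (C_r * p_l + C_l * p_r - C_l * C_r * (u_r - u_l)) / (C_l + C_r)
    u_star = (C_l * u_l + C_r * u_r - (p_r - p_l)) / (C_l + C_r)
    return p_star, u_star
```

For identical left and right states the interface values reduce to the common state, while converging flow (u_l > u_r) raises the interface pressure — the dissipation mechanism that replaces artificial viscosity.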
On the interpretation of synchronization in EEG hyperscanning studies: a cautionary note
Burgess, Adrian P.
2013-01-01
EEG Hyperscanning is a method for studying two or more individuals simultaneously with the objective of elucidating how co-variations in their neural activity (i.e., hyperconnectivity) are influenced by their behavioral and social interactions. The aim of this study was to compare the performance of different hyper-connectivity measures using (i) simulated data, where the degree of coupling could be systematically manipulated, and (ii) individually recorded human EEG combined into pseudo-pairs of participants where no hyper-connections could exist. With simulated data we found that each of the most widely used measures of hyperconnectivity were biased and detected hyper-connections where none existed. With pseudo-pairs of human data we found spurious hyper-connections that arose because there were genuine similarities between the EEG recorded from different people independently but under the same experimental conditions. Specifically, there were systematic differences between experimental conditions in terms of the rhythmicity of the EEG that were common across participants. As any imbalance between experimental conditions in terms of stimulus presentation or movement may affect the rhythmicity of the EEG, this problem could apply in many hyperscanning contexts. Furthermore, as these spurious hyper-connections reflected real similarities between the EEGs, they were not Type-1 errors that could be overcome by some appropriate statistical control. However, some measures that have not previously been used in hyperconnectivity studies, notably the circular correlation co-efficient (CCorr), were less susceptible to detecting spurious hyper-connections of this type. The reason for this advantage in performance is discussed and the use of the CCorr as an alternative measure of hyperconnectivity is advocated. PMID:24399948
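The circular correlation coefficient advocated above has a compact closed form (the Jammalamadaka-SenGupta statistic). A minimal sketch for two phase-angle series, assuming angles in radians:

```python
import math

def circ_mean(angles):
    """Circular mean via the resultant vector."""
    s = sum(math.sin(a) for a in angles)
    c = sum(math.cos(a) for a in angles)
    return math.atan2(s, c)

def circ_corr(alpha, beta):
    """Circular correlation coefficient (CCorr) between two series of
    phase angles, Jammalamadaka-SenGupta form."""
    a_bar, b_bar = circ_mean(alpha), circ_mean(beta)
    num = sum(math.sin(a - a_bar) * math.sin(b - b_bar)
              for a, b in zip(alpha, beta))
    den = math.sqrt(sum(math.sin(a - a_bar) ** 2 for a in alpha) *
                    sum(math.sin(b - b_bar) ** 2 for b in beta))
    return num / den
```

Like the linear Pearson coefficient, it is +1 for identical phase series and -1 for sign-reversed ones; its advantage in the hyperscanning setting is robustness to condition-locked rhythmicity common across independently recorded participants.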
Shallow marine cloud topped boundary layer in atmospheric models
NASA Astrophysics Data System (ADS)
Janjic, Zavisa
2017-04-01
A common problem in many atmospheric models is excessive expansion over cold water of shallow marine planetary boundary layer (PBL) topped by a thin cloud layer. This phenomenon is often accompanied by spurious light precipitation. The "Cloud Top Entrainment Instability" (CTEI) was proposed as an explanation of the mechanism controlling this process in reality, thereby preventing the spurious enlargement of the cloudy area and widespread light precipitation observed in the models. A key element of this hypothesis is evaporative cooling at the PBL top. However, the CTEI hypothesis remains controversial. For example, a recent direct simulation experiment indicated that the evaporative cooling couldn't explain the break-up of the cloudiness as hypothesized by the CTEI. Here, it is shown that the cloud break-up can be achieved in numerical models by a further modification of the nonsingular implementation of the Mellor-Yamada Level 2.5 turbulence closure model (MYJ) developed at the National Centers for Environmental Prediction (NCEP) Washington. Namely, the impact of moist convective instability is included into the turbulent energy production/dissipation equation if (a) the stratification is stable, (b) the lifting condensation level (LCL) for a particle starting at a model level is below the next upper model level, and (c) there is enough turbulent kinetic energy so that, due to random vertical turbulent motions, a particle starting from a model level can reach its LCL. The criterion (c) should be sufficiently restrictive because otherwise the cloud cover can be completely removed. A real data example will be shown demonstrating the ability of the method to break the spurious cloud cover during the day, and also to allow its recovery overnight.
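The three gating criteria (a)-(c) reduce to a simple conjunction evaluated per model level. A schematic sketch; all names and the TKE threshold are illustrative assumptions, not the NCEP code:

```python
def include_moist_production(stratification_stable, z_lcl, z_next_level,
                             tke, tke_threshold):
    """Gate for adding moist convective instability to the turbulent
    energy production/dissipation equation, per criteria (a)-(c):
    (a) stable stratification, (b) the parcel's LCL below the next
    upper model level, (c) enough TKE for the parcel to reach its LCL."""
    return (stratification_stable            # (a)
            and z_lcl < z_next_level         # (b) heights in metres
            and tke > tke_threshold)         # (c)
```

As the abstract stresses, the threshold in (c) must be restrictive: set too low, the moist production term erodes the cloud cover entirely rather than merely breaking it up.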
Modeling of Shallow Marine Cloud Topped Boundary Layer
NASA Astrophysics Data System (ADS)
Janjic, Z.
2017-12-01
A common problem in many atmospheric models is excessive expansion over cold water of shallow marine planetary boundary layer (PBL) topped by a thin cloud layer. This phenomenon is often accompanied by spurious light precipitation. The "Cloud Top Entrainment Instability" (CTEI) was proposed as an explanation of the mechanism controlling this process and thus preventing the spurious enlargement of the cloudy area and widespread light precipitation observed in the models. A key element of this hypothesis is evaporative cooling at the PBL top. However, the CTEI hypothesis remains controversial. For example, a recent direct simulation experiment indicated that the evaporative cooling couldn't explain the break-up of the cloudiness as hypothesized by the CTEI. Here, it is shown that the cloud break-up can be achieved in numerical models by a further modification of the nonsingular implementation of the Mellor-Yamada Level 2.5 turbulence closure model (MYJ) developed at the National Centers for Environmental Prediction (NCEP) Washington. Namely, the impact of moist convective instability is included into the turbulent energy production/dissipation equation if (a) the stratification is stable, (b) the lifting condensation level (LCL) for a particle starting at a model level is below the next upper model level, and (c) there is enough turbulent kinetic energy so that, due to random vertical turbulent motions, a particle starting from a model level can reach its LCL. The criterion (c) should be sufficiently restrictive because otherwise the cloud cover can be completely removed. A real data example will be shown demonstrating the ability of the method to break the spurious cloud cover during the day, and also to allow its recovery overnight.
On the nature and correction of the spurious S-wise spiral galaxy winding bias in Galaxy Zoo 1
NASA Astrophysics Data System (ADS)
Hayes, Wayne B.; Davis, Darren; Silva, Pedro
2017-04-01
The Galaxy Zoo 1 catalogue displays a bias towards the S-wise winding direction in spiral galaxies, which has yet to be explained. The lack of an explanation confounds our attempts to verify the Cosmological Principle, and has spurred some debate as to whether a bias exists in the real Universe. The bias manifests not only in the obvious case of trying to decide if the universe as a whole has a winding bias, but also in the more insidious case of selecting which Galaxies to include in a winding direction survey. While the former bias has been accounted for in a previous image-mirroring study, the latter has not. Furthermore, the bias has never been corrected in the GZ1 catalogue, as only a small sample of the GZ1 catalogue was reexamined during the mirror study. We show that the existing bias is a human selection effect rather than a human chirality bias. In effect, the excess S-wise votes are spuriously 'stolen' from the elliptical and edge-on-disc categories, not the Z-wise category. Thus, when selecting a set of spiral galaxies by imposing a threshold T so that max (PS, PZ) > T or PS + PZ > T, we spuriously select more S-wise than Z-wise galaxies. We show that when a provably unbiased machine selects which galaxies are spirals independent of their chirality, the S-wise surplus vanishes, even if humans still determine the chirality. Thus, when viewed across the entire GZ1 sample (and by implication, the Sloan catalogue), the winding direction of arms in spiral galaxies as viewed from Earth is consistent with the flip of a fair coin.
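The selection effect — a chirality-neutral vote excess 'stolen' from the elliptical category tipping borderline galaxies over the spiral threshold — can be demonstrated with a toy population. A sketch with invented numbers; real GZ1 vote fractions differ:

```python
def select_spirals(galaxies, T=0.8):
    """GZ1-style spiral selection: keep galaxies whose larger
    winding-vote fraction exceeds the threshold T."""
    return [g for g in galaxies if max(g["PS"], g["PZ"]) > T]

# Symmetric population: equal numbers of true S-wise and Z-wise spirals
# sitting just below the threshold, plus a small S-wise vote excess
# taken from the elliptical category (the bias described above).
bias = 0.02
s_wise = [{"PS": 0.79 + bias, "PZ": 0.05} for _ in range(10)]
z_wise = [{"PS": 0.05 + bias, "PZ": 0.79} for _ in range(10)]

selected = select_spirals(s_wise + z_wise)
n_s = sum(g["PS"] > g["PZ"] for g in selected)  # S-wise galaxies selected
n_z = sum(g["PZ"] > g["PS"] for g in selected)  # Z-wise galaxies selected
```

Even though the underlying population is perfectly symmetric in chirality, the excess S-wise votes push only the S-wise galaxies past the threshold, so the selected sample shows a spurious S-wise surplus.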
Recent Global Warming as Observed by AIRS and Depicted in GISSTEMP and MERRA-2
NASA Technical Reports Server (NTRS)
Susskind, Joel; Lee, Jae; Iredell, Lena
2017-01-01
AIRS Version-6 monthly mean level-3 surface temperature products confirm the result, depicted in the GISSTEMP dataset, that the earth's surface temperature has been warming since early 2015, though not before that. AIRS is at a higher spatial resolution than GISSTEMP, and produces sharper spatial features which are otherwise in excellent agreement with those of GISSTEMP. Version-6 AO Ts anomalies are consistent with those of Version-6 AIRS/AMSU. Version-7 AO anomalies should be even more accurate, especially at high latitudes. ARCs of MERRA-2 Ts anomalies are spurious as a result of a discontinuity which occurred somewhere between 2007 and 2008. This decreases global mean trends.
Limitations of the method of complex basis functions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baumel, R.T.; Crocker, M.C.; Nuttall, J.
1975-08-01
The method of complex basis functions proposed by Rescigno and Reinhardt is applied to the calculation of the amplitude in a model problem which can be treated analytically. It is found for an important class of potentials, including some of infinite range and also the square well, that the method does not provide a converging sequence of approximations. However, in some cases, approximations of relatively low order might be close to the correct result. The method is also applied to S-wave e-H elastic scattering above the ionization threshold, and spurious 'convergence' to the wrong result is found. A procedure which might overcome the difficulties of the method is proposed.
Shardell, Michelle; Harris, Anthony D; El-Kamary, Samer S; Furuno, Jon P; Miller, Ram R; Perencevich, Eli N
2007-10-01
Quasi-experimental study designs are frequently used to assess interventions that aim to limit the emergence of antimicrobial-resistant pathogens. However, previous studies using these designs have often used suboptimal statistical methods, which may result in researchers making spurious conclusions. Methods used to analyze quasi-experimental data include 2-group tests, regression analysis, and time-series analysis, and they all have specific assumptions, data requirements, strengths, and limitations. An example of a hospital-based intervention to reduce methicillin-resistant Staphylococcus aureus infection rates and reduce overall length of stay is used to explore these methods.
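One of the stronger alternatives to naive 2-group tests for quasi-experimental data is segmented (interrupted time-series) regression: level, pre-existing trend, and a step change at the intervention are estimated jointly. A minimal sketch with a hand-rolled normal-equations solver rather than a statistics package; variable names and the single step-change specification are illustrative assumptions:

```python
def ols(X, y):
    """Least squares via normal equations and Gaussian elimination
    (adequate for a small, well-conditioned design matrix)."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for i in range(k):                       # forward elimination
        for j in range(i + 1, k):
            f = A[j][i] / A[i][i]
            A[j] = [a - f * c for a, c in zip(A[j], A[i])]
            b[j] -= f * b[i]
    beta = [0.0] * k
    for i in reversed(range(k)):             # back substitution
        beta[i] = (b[i] - sum(A[i][j] * beta[j]
                              for j in range(i + 1, k))) / A[i][i]
    return beta

def segmented_fit(y, t0):
    """Fit baseline level, time trend, and step change at intervention
    time t0: y_t = b0 + b1*t + b2*[t >= t0]."""
    X = [[1.0, float(t), 1.0 if t >= t0 else 0.0] for t in range(len(y))]
    return ols(X, y)
```

Fitting monthly infection rates this way separates a genuine intervention effect (b2) from a secular trend (b1) that a before/after 2-group test would misattribute to the intervention.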
Evaluation of a Mobile Phone for Aircraft GPS Interference
NASA Technical Reports Server (NTRS)
Nguyen, Truong X.
2004-01-01
Measurements of spurious emissions from a mobile phone are conducted in a reverberation chamber for the Global Positioning System (GPS) radio frequency band. This phone model was previously determined to have caused interference to several aircraft GPS receivers. Interference path loss (IPL) factors are applied to the emission data, and the outcome compared against GPS receiver susceptibility. The resulting negative safety margins indicate there are risks to aircraft GPS systems. The maximum emission level from the phone is also shown to be comparable with some laptop computer's emissions, implying that laptop computers can provide similar risks to aircraft GPS receivers.
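The safety-margin bookkeeping behind that conclusion is simple dB arithmetic. A sketch with invented numbers; the study's actual emission, IPL, and susceptibility values are not reproduced here:

```python
def safety_margin_db(emission_dbm, ipl_db, susceptibility_dbm):
    """Interference safety margin in dB. The emitted spurious level
    minus the interference path loss (IPL) gives the level reaching the
    GPS receiver; the margin is the receiver's susceptibility threshold
    minus that. A negative margin indicates a potential risk."""
    received_dbm = emission_dbm - ipl_db
    return susceptibility_dbm - received_dbm
```

For example, a hypothetical -60 dBm emission attenuated by 70 dB of path loss arrives at -130 dBm; against a -120 dBm susceptibility threshold that leaves a +10 dB margin, whereas a -45 dBm emission under the same path loss yields a -5 dB (negative) margin.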
8-PSK Signaling over non-linear satellite channels
NASA Technical Reports Server (NTRS)
Horan, Sheila B.; Caballero, Ruben B. Eng.
1996-01-01
Space agencies are under pressure to adopt more bandwidth-efficient communication methods because the allocated frequency bands are becoming increasingly congested. Budget reductions are another problem the space agencies must contend with: the constraint results in simpler spacecraft carrying fewer communication capabilities and in reduced staffing to capture data at the earth stations. It is therefore imperative that the most bandwidth-efficient communication methods be utilized. This thesis presents a study of 8-ary Phase Shift Keying (8PSK) modulation with respect to bandwidth, power efficiency, spurious emissions, and interference susceptibility over a non-linear satellite channel.
Speckle Interferometry at SOAR in 2016 and 2017
NASA Astrophysics Data System (ADS)
Tokovinin, Andrei; Mason, Brian D.; Hartkopf, William I.; Mendez, Rene A.; Horch, Elliott P.
2018-06-01
The results of speckle interferometric observations at the 4.1 m Southern Astrophysical Research Telescope in 2016 and 2017 are given, totaling 2483 measurements of 1570 resolved pairs and 609 non-resolutions. We describe briefly recent changes in the instrument and observing method and quantify the accuracy of the pixel scale and position angle calibration. Comments are given on 44 pairs resolved here for the first time. The orbital motion of the newly resolved subsystem BU 83 Aa,Ab roughly agrees with its 36-year astrometric orbit proposed by J. Dommanget. Most Tycho binaries examined here turned out to be spurious.
Algorithm For Hypersonic Flow In Chemical Equilibrium
NASA Technical Reports Server (NTRS)
Palmer, Grant
1989-01-01
Implicit, finite-difference, shock-capturing algorithm calculates inviscid, hypersonic flows in chemical equilibrium. Implicit formulation chosen because overcomes limitation on mathematical stability encountered in explicit formulations. For dynamical portion of problem, Euler equations written in conservation-law form in Cartesian coordinate system for two-dimensional or axisymmetric flow. For chemical portion of problem, equilibrium state of gas at each point in computational grid determined by minimizing local Gibbs free energy, subject to local conservation of molecules, atoms, ions, and total enthalpy. Major advantage: resulting algorithm naturally stable and captures strong shocks without help of artificial-dissipation terms to damp out spurious numerical oscillations.
NASA Technical Reports Server (NTRS)
Sohn, J. L.; Heinrich, J. C.
1990-01-01
The calculation of pressures when the penalty-function approximation is used in finite-element solutions of laminar incompressible flows is addressed. A Poisson equation for the pressure is formulated that involves third derivatives of the velocity field. The second derivatives appearing in the weak formulation of the Poisson equation are calculated from the C0 velocity approximation using a least-squares method. The present scheme is shown to be efficient, free of spurious oscillations, and accurate. Examples of applications are given and compared with results obtained using mixed formulations.
Heuristic algorithm for optical character recognition of Arabic script
NASA Astrophysics Data System (ADS)
Yarman-Vural, Fatos T.; Atici, A.
1996-02-01
In this paper, a heuristic method is developed for segmentation, feature extraction and recognition of the Arabic script. The study is part of a large project for the transcription of the documents in Ottoman Archives. A geometrical and topological feature analysis method is developed for segmentation and feature extraction stages. Chain code transformation is applied to main strokes of the characters which are then classified by the hidden Markov model (HMM) in the recognition stage. Experimental results indicate that the performance of the proposed method is impressive, provided that the thinning process does not yield spurious branches.
Accelerated stress testing of amorphous silicon solar cells
NASA Technical Reports Server (NTRS)
Stoddard, W. G.; Davis, C. W.; Lathrop, J. W.
1985-01-01
A technique for performing accelerated stress tests of large-area thin a-Si solar cells is presented. A computer-controlled short-interval test system employing low-cost ac-powered ELH illumination and a simulated a-Si reference cell (seven individually bandpass-filtered zero-biased crystalline PIN photodiodes) calibrated to the response of an a-Si control cell is described and illustrated with flow diagrams, drawings, and graphs. Preliminary results indicate that while most tests of a program developed for c-Si cells are applicable to a-Si cells, spurious degradation may appear in a-Si cells tested at temperatures above 130 C.
Ab initio method for calculating total cross sections
NASA Technical Reports Server (NTRS)
Bhatia, A. K.; Schneider, B. I.; Temkin, A.
1993-01-01
A method for calculating total cross sections without formally including nonelastic channels is presented. The idea is to use a one channel T-matrix variational principle with a complex correlation function. The derived T matrix is therefore not unitary. Elastic scattering is calculated from T-parallel-squared, but total scattering is derived from the imaginary part of T using the optical theorem. The method is applied to the spherically symmetric model of electron-hydrogen scattering. No spurious structure arises; results for sigma(el) and sigma(total) are in excellent agreement with calculations of Callaway and Oza (1984). The method has wide potential applicability.
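The elastic/total split the method relies on can be made concrete in a partial-wave picture. A sketch assuming the convention f(θ) = (1/k) Σ (2l+1) T_l P_l(cos θ), under which σ_el sums |T_l|² and the optical theorem makes σ_tot sum Im T_l; for a unitary, purely elastic T_l = e^{iδ} sin δ the two coincide, while a non-unitary T (as produced by the complex correlation function) puts the absorbed flux into σ_tot - σ_el:

```python
import cmath
import math

def cross_sections(t_l, k):
    """Elastic and total cross sections from partial-wave T-matrix
    elements t_l (list index = angular momentum l), wavenumber k.
    sigma_el from |T|^2; sigma_tot from Im T via the optical theorem."""
    pref = 4.0 * math.pi / k ** 2
    sigma_el = pref * sum((2 * l + 1) * abs(t) ** 2
                          for l, t in enumerate(t_l))
    sigma_tot = pref * sum((2 * l + 1) * t.imag
                           for l, t in enumerate(t_l))
    return sigma_el, sigma_tot

# Unitary elastic element: T = e^{i*delta} * sin(delta)
delta = 0.7
t_unitary = cmath.exp(1j * delta) * math.sin(delta)
```

Unitarity requires σ_tot >= σ_el; a T matrix violating that bound would signal the kind of spurious behaviour the paper warns about.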
Absolute spectrophotometry of Wolf-Rayet stars from 1200 to 7000 A - A cautionary tale
NASA Technical Reports Server (NTRS)
Garmany, C. D.; Conti, P. S.; Massey, P.
1984-01-01
It is demonstrated that absolute spectrophotometry of the continua of Wolf-Rayet stars may be obtained over the wavelength range 1200-7000 A using IUE and optical measurements. It is shown that the application of a 'standard' reddening law to the observed data gives spurious results in many cases. Additional UV extinction is apparently necessary and may well be circumstellar in origin. In such hot stars, only the long-wavelength 'tail' of the emergent stellar continuum is measured. The inadequacy of previous attempts to determine intrinsic continua and effective temperatures of Wolf-Rayet stars is pointed out.
The determination of surface albedo from meteorological satellites
NASA Technical Reports Server (NTRS)
Johnson, W. T.
1977-01-01
A surface albedo was determined from visible data collected by the NOAA-4 polar orbiting meteorological satellite. To filter out the major cause of atmospheric reflectivity, namely clouds, techniques were developed and applied to the data resulting in a map of global surface albedo. Neglecting spurious surface albedos for regions with persistent cloud cover, sun glint effects, insufficient reflected light and, at this time, some unresolved influences, the surface albedos retrieved from satellite data closely matched those of a global surface albedo map produced from surface and aircraft measurements and from characteristic albedos for land type and land use.
A Loss Tolerant Rate Controller for Reliable Multicast
NASA Technical Reports Server (NTRS)
Montgomery, Todd
1997-01-01
This paper describes the design, specification, and performance of a Loss Tolerant Rate Controller (LTRC) for use in controlling reliable multicast senders. The purpose of this rate controller is not to adapt to congestion (or loss) on a per loss report basis (such as per received negative acknowledgment), but instead to use loss report information and perceived state to decide more prudent courses of action for both the short and long term. The goal of this controller is to be responsive to congestion, but not overly reactive to spurious independent loss. Performance of the controller is verified through simulation results.
Orientation of doubly rotated quartz plates.
Sherman, J R
1989-01-01
A derivation from classical spherical trigonometry of equations to compute the orientation of doubly-rotated quartz blanks from Bragg X-ray data is discussed. These are usually derived by compact and efficient vector methods, which are reviewed briefly. They are solved by generating a quadratic equation with numerical coefficients. Two methods exist for performing the computation from measurements against two planes: a direct solution by a quadratic equation and a process of convergent iteration. Both have a spurious solution. Measurement against three lattice planes yields a set of three linear equations the solution of which is an unambiguous result.
In traditional watershed delineation and topographic modeling, surface depressions are generally treated as spurious features and simply removed from a digital elevation model (DEM) to enforce flow continuity of water across the topographic surface to the watershed outlets. In re...
Self-shielding printed circuit boards for high frequency amplifiers and transmitters
NASA Technical Reports Server (NTRS)
Galvin, D.
1969-01-01
Printed circuit boards retaining as much copper as possible provide electromagnetic shielding between stages of the high frequency amplifiers and transmitters. Oscillation is prevented, spurious output signals are reduced, and multiple stages are kept isolated from each other, both thermally and electrically.
Assessing Spurious Interaction Effects in Structural Equation Modeling
ERIC Educational Resources Information Center
Harring, Jeffrey R.; Weiss, Brandi A.; Li, Ming
2015-01-01
Several studies have stressed the importance of simultaneously estimating interaction and quadratic effects in multiple regression analyses, even if theory only suggests an interaction effect should be present. Specifically, past studies suggested that failing to simultaneously include quadratic effects when testing for interaction effects could…
NASA Astrophysics Data System (ADS)
Mahéo, Laurent; Grolleau, Vincent; Rio, Gérard
2009-11-01
To deal with dynamic and wave propagation problems, dissipative methods are often used to reduce the effects of the spurious oscillations induced by the spatial and time discretization procedures. Among the many dissipative methods available, the Tchamwa-Wielgosz (TW) explicit scheme is particularly useful because it damps out the spurious oscillations occurring in the highest frequency domain. The theoretical study performed here shows that the TW scheme is decentered to the right, and that the damping can be attributed to a nodal displacement perturbation. The FEM study carried out using instantaneous 1-D and 3-D compression loads shows that it is useful to display the damping versus the number of time steps in order to obtain a constant damping efficiency whatever the size of element used for the regular meshing. A study on the responses obtained with irregular meshes shows that the TW scheme is only slightly sensitive to the spatial discretization procedure used. To cite this article: L. Mahéo et al., C. R. Mecanique 337 (2009).
Long-term behaviour and cross-correlation water quality analysis of the River Elbe, Germany.
Lehmann, A; Rode, M
2001-06-01
This study analyses weekly data samples from the river Elbe at Magdeburg between 1984 and 1996 to investigate the changes in metabolism and water quality in the river Elbe since the German reunification in 1990. Modelling water quality variables by autoregressive component models and ARIMA models reveals the improvement of water quality due to the reduction of waste water emissions since 1990. The models are used to determine the long-term and seasonal behaviour of important water quality variables. Organic and heavy metal pollution parameters showed a significant decrease since 1990, however, no significant change of chlorophyll-a as a measure for primary production could be found. A new procedure for testing the significance of a sample correlation coefficient is discussed, which is able to detect spurious sample correlation coefficients without making use of time-consuming prewhitening. The cross-correlation analysis is applied to hydrophysical, biological, and chemical water quality variables of the river Elbe since 1984. Special emphasis is laid on the detection of spurious sample correlation coefficients.
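A standard way to screen for spurious sample correlations between autocorrelated series without time-consuming prewhitening is to deflate the nominal sample size. A sketch of the Bartlett-type effective-sample-size correction; the paper's own test procedure differs in detail:

```python
def lag1_autocorr(x):
    """Lag-1 sample autocorrelation of a series."""
    n = len(x)
    m = sum(x) / n
    num = sum((x[i] - m) * (x[i + 1] - m) for i in range(n - 1))
    den = sum((v - m) ** 2 for v in x)
    return num / den

def effective_n(n, r1a, r1b):
    """Bartlett-style effective sample size for correlating two
    AR(1)-like series: n shrinks when both series are positively
    autocorrelated, widening the significance threshold and flagging
    correlations that would otherwise look spuriously significant."""
    return n * (1 - r1a * r1b) / (1 + r1a * r1b)
```

With r1 = 0.5 in both series, 100 weekly samples carry only 60 effective observations, so a correlation coefficient must clear a correspondingly higher bar before being taken as evidence of a real association.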
Charge-Dissipative Electrical Cables
NASA Technical Reports Server (NTRS)
Kolasinski, John R.; Wollack, Edward J.
2004-01-01
Electrical cables that dissipate spurious static electric charges, in addition to performing their main functions of conducting signals, have been developed. These cables are intended for use in trapped-ion or ionizing-radiation environments, in which electric charges tend to accumulate within, and on the surfaces of, dielectric layers of cables. If the charging rate exceeds the dissipation rate, charges can accumulate in excessive amounts, giving rise to high-current discharges that can damage electronic circuitry and/or systems connected to it. The basic idea of design and operation of charge-dissipative electrical cables is to drain spurious charges to ground by use of lossy (slightly electrically conductive) dielectric layers, possibly in conjunction with drain wires and/or drain shields (see figure). In typical cases, the drain wires and/or drain shields could be electrically grounded via the connector assemblies at the ends of the cables, in any of the conventional techniques for grounding signal conductors and signal shields. In some cases, signal shields could double as drain shields.
Optoelectronic Terminal-Attractor-Based Associative Memory
NASA Technical Reports Server (NTRS)
Liu, Hua-Kuang; Barhen, Jacob; Farhat, Nabil H.
1994-01-01
Report presents theoretical and experimental study of optically and electronically addressable optical implementation of artificial neural network that performs associative recall. Shows by computer simulation that terminal-attractor-based associative memory can have perfect convergence in associative retrieval and increased storage capacity. Spurious states reduced by exploiting terminal attractors.
Construct Meaning in Multilevel Settings
ERIC Educational Resources Information Center
Stapleton, Laura M.; Yang, Ji Seung; Hancock, Gregory R.
2016-01-01
We present types of constructs, individual- and cluster-level, and their confirmatory factor analytic validation models when data are from individuals nested within clusters. When a construct is theoretically individual level, spurious construct-irrelevant dependency in the data may appear to signal cluster-level dependency; in such cases,…
NASA Technical Reports Server (NTRS)
Booth, Gary N.; Malinzak, R. Michael
1990-01-01
Treatment similar to dental polishing used to remove microfissures from metal parts without reworking adjacent surfaces. Any variety of abrasive tips attached to small motor used to grind spot treated. Configuration of grinding head must be compatible with configurations of motor and workpiece. Devised to eliminate spurious marks on welded parts.
Detecting Multifractal Properties in Asset Returns:
NASA Astrophysics Data System (ADS)
Lux, Thomas
It has become popular recently to apply the multifractal formalism of statistical physics (scaling analysis of structure functions and f(α) singularity spectrum analysis) to financial data. The outcome of such studies is a nonlinear shape of the structure function and a nontrivial behavior of the spectrum. Eventually, this literature has moved from basic data analysis to estimation of particular variants of multifractal models for asset returns via fitting of the empirical τ(q) and f(α) functions. Here, we reinvestigate earlier claims of multifractality using four long time series of important financial markets. Taking the recently proposed multifractal models of asset returns as our starting point, we show that the typical "scaling estimators" used in the physics literature are unable to distinguish between spurious and "true" multiscaling of financial data. Designing explicit tests for multiscaling, we can in no case reject the null hypothesis that the apparent curvature of both the scaling function and the Hölder spectrum are spuriously generated by the particular fat-tailed distribution of financial data. Given the well-known overwhelming evidence in favor of different degrees of long-term dependence in the powers of returns, we interpret this inability to reject the null hypothesis of multiscaling as a lack of discriminatory power of the standard approach rather than as a true rejection of multiscaling. However, the complete "failure" of the multifractal apparatus in this setting also raises the question whether results in other areas (like geophysics) suffer from similar shortcomings of the traditional methodology.
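The "scaling estimator" under scrutiny is, at bottom, a log-log regression of structure functions S_q(τ) = <|r(t+τ) - r(t)|^q> against the lag τ. A minimal pure-Python sketch (real analyses use many lags and moments, and — as the abstract argues — the slope alone cannot distinguish true from spurious multiscaling):

```python
import math

def structure_function(x, q, lag):
    """S_q(lag): mean absolute q-th power of increments at a given lag."""
    n = len(x) - lag
    return sum(abs(x[t + lag] - x[t]) ** q for t in range(n)) / n

def scaling_exponent(x, q, lags):
    """Estimate the scaling exponent tau(q) as the least-squares slope
    of log S_q(lag) versus log lag."""
    xs = [math.log(l) for l in lags]
    ys = [math.log(structure_function(x, q, l)) for l in lags]
    n = len(lags)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys)) /
            sum((a - mx) ** 2 for a in xs))
```

For a trivially monofractal series (here, a linear ramp, whose lag-τ increments equal τ exactly) the estimator returns slope q; apparent curvature of the fitted τ(q) in real data is exactly what the paper shows can be generated spuriously by fat-tailed distributions.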
NASA Astrophysics Data System (ADS)
Israel, Holger; Massey, Richard; Prod'homme, Thibaut; Cropper, Mark; Cordes, Oliver; Gow, Jason; Kohley, Ralf; Marggraf, Ole; Niemi, Sami; Rhodes, Jason; Short, Alex; Verhoeve, Peter
2015-10-01
Radiation damage to space-based charge-coupled device detectors creates defects which result in an increasing charge transfer inefficiency (CTI) that causes spurious image trailing. Most of the trailing can be corrected during post-processing, by modelling the charge trapping and moving electrons back to where they belong. However, such correction is not perfect - and damage is continuing to accumulate in orbit. To aid future development, we quantify the limitations of current approaches, and determine where imperfect knowledge of model parameters most degrades measurements of photometry and morphology. As a concrete application, we simulate 1.5 × 109 `worst-case' galaxy and 1.5 × 108 star images to test the performance of the Euclid visual instrument detectors. There are two separable challenges. If the model used to correct CTI is perfectly the same as that used to add CTI, 99.68 per cent of spurious ellipticity is corrected in our setup. This is because readout noise is not subject to CTI, but gets overcorrected during correction. Secondly, if we assume the first issue to be solved, knowledge of the charge trap density within Δρ/ρ = (0.0272 ± 0.0005) per cent and the characteristic release time of the dominant species to be known within Δτ/τ = (0.0400 ± 0.0004) per cent will be required. This work presents the next level of definition of in-orbit CTI calibration procedures for Euclid.
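The trap-density and release-time parameters being calibrated enter through the trailing model. A toy single-trap-species sketch — an illustration of the capture-and-exponential-release idea only, not the Euclid correction pipeline:

```python
import math

def add_cti_trail(column, trap_fraction=0.01, tau=2.0):
    """Toy single-species CTI model: during readout each pixel loses a
    fixed fraction of its charge to traps, which release it
    exponentially (characteristic time tau, in pixels) into the
    trailing pixels. trap_fraction stands in for the trap density rho;
    both parameter values here are invented."""
    n = len(column)
    out = [0.0] * n
    for i, q in enumerate(column):
        captured = trap_fraction * q
        out[i] += q - captured
        # exponential release into the trailing pixels
        released = [math.exp(-k / tau) - math.exp(-(k + 1) / tau)
                    for k in range(n - i - 1)]
        norm = sum(released) or 1.0
        for k, frac in enumerate(released):
            out[i + 1 + k] += captured * frac / norm
    return out
```

Correction amounts to inverting this forward model; the abstract's point is that percent-level errors in rho or tau leave residual trails large enough to bias weak-lensing photometry and morphology.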
A B-spline Galerkin method for the Dirac equation
NASA Astrophysics Data System (ADS)
Froese Fischer, Charlotte; Zatsarinny, Oleg
2009-06-01
The B-spline Galerkin method is first investigated for the simple eigenvalue problem, y'' = -λ²y, which can also be written as a pair of first-order equations y' = λz, z' = -λy. Expanding both y(r) and z(r) in the B basis results in many spurious solutions such as those observed for the Dirac equation. However, when y(r) is expanded in the B basis and z(r) in the dB/dr basis, solutions of the well-behaved second-order differential equation are obtained. From this analysis, we propose a stable method, the (B, B') basis, for the Dirac equation and evaluate its accuracy by comparing the computed and exact R-matrix for a wide range of nuclear charges Z and angular quantum numbers κ. When splines of the same order are used, many spurious solutions are found whereas none are found for splines of different order. Excellent agreement is obtained for the R-matrix and energies for bound states for low values of Z. For high Z, accuracy requires the use of a grid with many points near the nucleus. We demonstrate the accuracy of the bound-state wavefunctions by comparing integrals arising in hyperfine interaction matrix elements with exact analytic expressions. We also show that the Thomas-Reiche-Kuhn sum rule is not a good measure of the quality of the solutions obtained by the B-spline Galerkin method whereas the R-matrix is very sensitive to the appearance of pseudo-states.
Children Learn Spurious Associations in Their Math Textbooks: Examples from Fraction Arithmetic
ERIC Educational Resources Information Center
Braithwaite, David W.; Siegler, Robert S.
2018-01-01
Fraction arithmetic is among the most important and difficult topics children encounter in elementary and middle school mathematics. Braithwaite, Pyke, and Siegler (2017) hypothesized that difficulties learning fraction arithmetic often reflect reliance on associative knowledge--rather than understanding of mathematical concepts and procedures--to…
Spurious correlations and inference in landscape genetics
Samuel A. Cushman; Erin L. Landguth
2010-01-01
Reliable interpretation of landscape genetic analyses depends on statistical methods that have high power to identify the correct process driving gene flow while rejecting incorrect alternative hypotheses. Little is known about statistical power and inference in individual-based landscape genetics. Our objective was to evaluate the power of causal modelling with partial...
In traditional watershed delineation and topographic modeling, surface depressions are generally treated as spurious features and simply removed from a digital elevation model (DEM) to enforce flow continuity of water across the topographic surface to the watershed outlets. In re...
A Skew-Normal Mixture Regression Model
ERIC Educational Resources Information Center
Liu, Min; Lin, Tsung-I
2014-01-01
A challenge associated with traditional mixture regression models (MRMs), which rest on the assumption of normally distributed errors, is determining the number of unobserved groups. Specifically, even slight deviations from normality can lead to the detection of spurious classes. The current work aims to (a) examine how sensitive the commonly…
Drugs and Crime: An Empirically Based, Interdisciplinary Model
ERIC Educational Resources Information Center
Quinn, James F.; Sneed, Zach
2008-01-01
This article synthesizes neuroscience findings with long-standing criminological models and data into a comprehensive explanation of the relationship between drug use and crime. The innate factors that make some people vulnerable to drug use are conceptually similar to those that predict criminality, supporting a spurious reciprocal model of the…
47 CFR 2.1053 - Measurements required: Field strength of spurious radiation.
Code of Federal Regulations, 2012 CFR
2012-10-01
... operation. Curves or equivalent data shall be supplied showing the magnitude of each harmonic and other.... For equipment operating on frequencies below 890 MHz, an open field test is normally required, with... either impractical or impossible to make open field measurements (e.g. a broadcast transmitter installed...
Spurious Latent Classes in the Mixture Rasch Model
ERIC Educational Resources Information Center
Alexeev, Natalia; Templin, Jonathan; Cohen, Allan S.
2011-01-01
Mixture Rasch models have been used to study a number of psychometric issues such as goodness of fit, response strategy differences, strategy shifts, and multidimensionality. Although these models offer the potential for improving understanding of the latent variables being measured, under some conditions overextraction of latent classes may…
Use of Inappropriate and Inaccurate Conceptual Knowledge to Solve an Osmosis Problem.
ERIC Educational Resources Information Center
Zuckerman, June Trop
1995-01-01
Presents the correct solutions that two high school science students produced for an osmosis problem while relying on inaccurate and inappropriate conceptual knowledge. Identifies characteristics of the problem solvers, salient properties of the problem that could contribute to its misrepresentation, and the students' spurious correct answers. (27 references) (Author/MKR)
47 CFR 2.1053 - Measurements required: Field strength of spurious radiation.
Code of Federal Regulations, 2010 CFR
2010-10-01
.... For equipment operating on frequencies below 890 MHz, an open field test is normally required, with... either impractical or impossible to make open field measurements (e.g. a broadcast transmitter installed...
ERIC Educational Resources Information Center
Dodge, Tonya; Jaccard, James
2002-01-01
Compared sexual risk behavior of female athletes and nonathletes. Examined mediation, reverse mediation, spurious effects, and moderated causal models, using as potential mediators physical development, educational aspirations, self-esteem, attitudes toward pregnancy, involvement in a romantic relationship, age, ethnicity, and social class. Found…
Confronting Science: The Dilemma of Genetic Testing.
ERIC Educational Resources Information Center
Zallen, Doris T.
1997-01-01
Considers the opportunities and ethical issues involved in genetic testing. Reviews the history of genetics from the first discoveries of Gregor Mendel, through the spurious pseudo-science of eugenics, and up to the discovery of DNA by James Watson and Francis Crick. Explains how genetic tests are done. (MJP)
NASA Technical Reports Server (NTRS)
Kester, DO; Bontekoe, Tj. Romke
1994-01-01
In order to make the best high-resolution images of IRAS data it is necessary to incorporate any knowledge about the instrument into a model: the IRAS model. This is necessary since every remaining systematic effect will be amplified by any high-resolution technique into spurious artifacts in the images. Reducing the residuals to pure random noise is in fact a never-ending quest for better-quality results, which can only be achieved with better models. The Dutch high-resolution effort has resulted in HIRAS, which drives the MEMSYS5 algorithm and is specifically designed for IRAS image construction. A detailed description of HIRAS with many results is in preparation. In this paper we emphasize many of the instrumental effects incorporated in the IRAS model, including our improved 100 micron IRAS response functions.
IRIS Mariner 9 Data Revisited. 1; An Instrumental Effect
NASA Technical Reports Server (NTRS)
Formisano, V.; Grassi, D.; Piccioni, G.; Pearl, John; Bjoraker, G.; Conrath, B.; Hanel, R.
1999-01-01
Small spurious features are present in data from the Mariner 9 Infrared Interferometer Spectrometer (IRIS). These represent a low amplitude replication of the spectrum with a doubled wavenumber scale. This replication arises principally from an internal reflection of the interferogram at the input window. An algorithm is provided to correct for the effect, which is at the 2% level. We believe that the small error in the uncorrected spectra does not materially affect previous results; however, it may be significant for some future studies at short wavelengths. The IRIS spectra are also affected by a coding error in the original calibration that results in only positive radiances. This reduces the effectiveness of averaging spectra to improve the signal to noise ratio at small signal levels.
An optical mm-wave generation scheme by frequency octupling using a nested MMI
NASA Astrophysics Data System (ADS)
Shang, Lei; Wen, Aijun; Li, Bo; Wang, Tonggang; Chen, Yang; Li, Ming'an
2011-12-01
A novel method of filterless optical millimeter-wave (MMW) signal generation with frequency octupling via a nested multimode interference (MMI) coupler is proposed for radio-over-fiber systems. By accurately setting the DC bias voltage applied to the central arms of MMI-b and MMI-c, the optical carrier can be completely suppressed. The OSSR can be as high as about 58 dB without an optical filter, and the radio frequency spurious suppression ratio (RFSSR) exceeds 32 dB, which is, to our knowledge, the best result reported. Simulation results suggest that when the generated optical mm-wave signal is transmitted along standard single-mode fiber, the eye diagram remains open after transmission over 50 km of fiber.
Auto- and hetero-associative memory using a 2-D optical logic gate
NASA Technical Reports Server (NTRS)
Chao, Tien-Hsin
1989-01-01
An optical associative memory system suitable for both auto- and hetero-associative recall is demonstrated. This system utilizes Hamming distance as the similarity measure between a binary input and a memory image with the aid of a two-dimensional optical EXCLUSIVE OR (XOR) gate and a parallel electronics comparator module. Based on the Hamming distance measurement, this optical associative memory performs a nearest neighbor search and the result is displayed in the output plane in real-time. This optical associative memory is fast and noniterative and produces no output spurious states as compared with that of the Hopfield neural network model.
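The recall scheme above can be sketched as a software analogue (this is a toy model of the logic, not the optical system): XOR the binary input against every stored memory, count mismatched bits to get the Hamming distance, and return the nearest stored image.

```python
import numpy as np

# Minimal sketch of Hamming-distance associative recall, a software analogue
# of the optical XOR scheme: the XOR of probe and memory is the bitwise
# mismatch pattern, and its sum is the Hamming distance.
memories = np.array([
    [0, 1, 1, 0, 1, 0, 1, 1],
    [1, 1, 0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 1, 1, 0],
], dtype=np.uint8)

def recall(probe):
    """Nearest-neighbour recall over the stored binary memories."""
    distances = np.bitwise_xor(memories, probe).sum(axis=1)
    return memories[int(np.argmin(distances))]

# A probe equal to memory 1 with one corrupted bit still recalls memory 1.
probe = np.array([1, 1, 0, 0, 0, 1, 0, 0], dtype=np.uint8)
print(recall(probe))
```

Because the search is an exhaustive nearest-neighbour comparison rather than an iterative energy descent, there are no spurious attractor states, which is the contrast with the Hopfield model drawn in the abstract.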
NASA Technical Reports Server (NTRS)
Tai, Hsiang
2006-01-01
In a typical optical fiber Bragg grating (FBG) strain measurement, unless in an ideal static laboratory environment, vibration or other disturbance is often present, which creates spurious multiple peaks in the reflected spectrum and results in a non-unique determination of the strain value. In this report we investigate the origin of this phenomenon through physical arguments and simple numerical simulation. We postulate that the fiber gratings execute small-amplitude transverse vibrations, slightly and non-uniformly changing the optical path traversed by the reflected light. Ultimately, this causes the multi-peak reflected spectrum.
Quantum pattern recognition with multi-neuron interactions
NASA Astrophysics Data System (ADS)
Fard, E. Rezaei; Aghayar, K.; Amniat-Talab, M.
2018-03-01
We present a quantum neural network with multi-neuron interactions for pattern recognition tasks, formed by combining an extended classical Hopfield network with adiabatic quantum computation. This scheme can be used as an associative memory to retrieve partial patterns with any number of unknown bits. We also propose a preprocessing approach that classifies the pattern space S so as to suppress spurious patterns. The results of pattern clustering show that for pattern association, the number of weights (η) should equal the number of unknown bits in the input pattern (d). It is also remarkable that the associative memory function depends on the location of the unknown bits, in addition to d and the load parameter α.
Hypothesis testing in clinical trials.
Green, S B
2000-08-01
In designing and analyzing any clinical trial, two issues related to patient heterogeneity must be considered: (1) the effect of chance and (2) the effect of bias. These issues are addressed by enrolling adequate numbers of patients in the study and using randomization for treatment assignment. An "intention-to-treat" analysis of outcome data includes all individuals randomized and counted in the group to which they are randomized. There is an increased risk of spurious results with a greater number of subgroup analyses, particularly when these analyses are data derived. Factorial designs are sometimes appropriate and can lead to efficiencies by addressing more than one comparison of interventions in a single trial.
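The subgroup-analysis caution above can be quantified with elementary arithmetic: under the simplifying assumption of k independent tests at significance level alpha with no true effect anywhere, the probability of at least one spurious "significant" finding is 1 - (1 - alpha)^k.

```python
# Back-of-envelope illustration of the subgroup-analysis warning: with k
# independent tests at alpha = 0.05 and no true treatment effect, the chance
# of at least one spurious "significant" result grows quickly with k.
alpha = 0.05
for k in (1, 5, 10, 20):
    p_any = 1 - (1 - alpha) ** k
    print(f"{k:2d} subgroup tests -> P(at least one false positive) = {p_any:.3f}")
```

Real subgroup analyses are usually correlated rather than independent, so this is an upper-bound sketch of the effect, not a formula for any particular trial.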
IUE observations of variability in winds from hot stars
NASA Technical Reports Server (NTRS)
Grady, C. A.; Snow, T. P., Jr.
1981-01-01
Observations of variability in stellar winds or envelopes provide an important probe of their dynamics. For this purpose a number of O, B, Be, and Wolf-Rayet stars were repeatedly observed with the IUE satellite in high resolution mode. In the course of analysis, instrumental and data handling effects were found to introduce spurious variability in many of the spectra. Software was developed to partially compensate for these effects, but limitations remain on the type of variability that can be identified from IUE spectra. Within these constraints, preliminary results of multiple observations of two OB stars, one Wolf-Rayet star, and a Be star are discussed.
Magnetic storm effects in electric power systems and prediction needs
NASA Technical Reports Server (NTRS)
Albertson, V. D.; Kappenman, J. G.
1979-01-01
Geomagnetic field fluctuations produce spurious currents in electric power systems. These currents enter and exit through points remote from each other. The fundamental period of these currents is on the order of several minutes which is quasi-dc compared to the normal 60 Hz or 50 Hz power system frequency. Nearly all of the power systems problems caused by the geomagnetically induced currents result from the half-cycle saturation of power transformers due to simultaneous ac and dc excitation. The effects produced in power systems are presented, current research activity is discussed, and magnetic storm prediction needs of the power industry are listed.
NASA Astrophysics Data System (ADS)
Chen, Yung-Yu; Huang, Li-Chung; Wang, Wei-Shan; Lin, Yu-Ching; Wu, Tsung-Tsong; Sun, Jia-Hong; Esashi, Masayoshi
2013-04-01
Acoustic interference suppression of quartz crystal microbalance (QCM) sensor arrays utilizing phononic crystals is investigated in this paper. A square-lattice phononic crystal structure is designed to have a complete band gap covering the QCM's resonance frequency. The monolithic sensor array consisting of two QCMs separated by phononic crystals is fabricated by micromachining processes. As a result, 12 rows of phononic crystals with band gap boost insertion loss between the two QCMs by 20 dB and also reduce spurious modes. Accordingly, the phononic crystal is verified to be capable of suppressing the acoustic interference between adjacent QCMs in a sensor array.
Science, pseudoscience, and the frontline practitioner: the vaccination/autism debate.
White, Erina
2014-01-01
This article demonstrates how misinformation concerning autism and vaccinations was created and suggests that social workers may be perfectly poised to challenge pseudoscience interpretations. Utilizing social network theory, this article illustrates how erroneous research, mass media, and public opinion led to a decreased use of vaccinations in the United States and a seven-fold increase in measles outbreaks. It traces the dissemination of spurious research results and demonstrates how information was transmitted via a system of social network nodes and community ties. This article encourages social workers, as frontline knowledge brokers, to counter misinformation, which may lead to significant public health consequences.
Estimating the return on investment in disease management programs using a pre-post analysis.
Fetterolf, Donald; Wennberg, David; Devries, Andrea
2004-01-01
Disease management programs have become increasingly popular over the past 5-10 years. Recent increases in overall medical costs have precipitated new concerns about the cost-effectiveness of medical management programs that have extended to the program directors for these programs. Initial success of the disease management movement is being challenged on the grounds that reported results have been the result of the application of faulty, if intuitive, methodologies. This paper discusses the use of "pre-post" methodology approaches in the analysis of disease management programs, and areas where application of this approach can result in spurious results and incorrect financial outcome assessments. The paper includes a checklist of these items for use by operational staff working with the programs, and a comprehensive bibliography that addresses many of the issues discussed.
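One failure mode of the "pre-post" methodology discussed above, regression to the mean, can be demonstrated with a small simulation (all cost figures and distributions are invented for illustration): patients are enrolled *because* their baseline costs were high, so their costs fall at follow-up even when the program does nothing.

```python
import numpy as np

# Regression-to-the-mean sketch: per-patient cost is a stable level plus
# independent year-to-year noise. Selecting the top decile by baseline cost
# guarantees apparent "savings" at follow-up with NO intervention at all.
rng = np.random.default_rng(42)
n = 50_000
true_cost = rng.gamma(shape=2.0, scale=2_500.0, size=n)  # stable per-patient level
pre = true_cost + rng.normal(0.0, 2_000.0, n)            # noisy baseline year
post = true_cost + rng.normal(0.0, 2_000.0, n)           # noisy follow-up year

selected = pre > np.quantile(pre, 0.90)   # enroll the top 10% by baseline cost
savings = pre[selected].mean() - post[selected].mean()
print(f"apparent 'savings' per enrolled patient with no program effect: {savings:,.0f}")
```

This is exactly the kind of spurious financial outcome the checklist in the paper is meant to catch; a concurrent control group or a regression adjustment removes it.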
NASA Technical Reports Server (NTRS)
2005-01-01
A new all-electronic Particle Image Velocimetry technique that can efficiently map high speed gas flows has been developed in-house at the NASA Lewis Research Center. Particle Image Velocimetry is an optical technique for measuring the instantaneous two component velocity field across a planar region of a seeded flow field. A pulsed laser light sheet is used to illuminate the seed particles entrained in the flow field at two instances in time. One or more charged coupled device (CCD) cameras can be used to record the instantaneous positions of particles. Using the time between light sheet pulses and determining either the individual particle displacements or the average displacement of particles over a small subregion of the recorded image enables the calculation of the fluid velocity. Fuzzy logic minimizes the required operator intervention in identifying particles and computing velocity. Using two cameras that have the same view of the illumination plane yields two single exposure image frames. Two competing techniques that yield unambiguous velocity vector direction information have been widely used for reducing the single-exposure, multiple image frame data: (1) cross-correlation and (2) particle tracking. Correlation techniques yield averaged velocity estimates over subregions of the flow, whereas particle tracking techniques give individual particle velocity estimates. For the correlation technique, the correlation peak corresponding to the average displacement of particles across the subregion must be identified. Noise on the images and particle dropout result in misidentification of the true correlation peak. The subsequent velocity vector maps contain spurious vectors where the displacement peaks have been improperly identified. Typically these spurious vectors are replaced by a weighted average of the neighboring vectors, thereby decreasing the independence of the measurements. 
In this work, fuzzy logic techniques are used to determine the true correlation displacement peak even when it is not the maximum peak, hence maximizing the information recovery from the correlation operation, maintaining the number of independent measurements, and minimizing the number of spurious velocity vectors. Correlation peaks are correctly identified in both high and low seed density cases. The correlation velocity vector map can then be used as a guide for the particle-tracking operation. Again fuzzy logic techniques are used, this time to identify the correct particle image pairings between exposures to determine particle displacements, and thus the velocity. Combining these two techniques makes use of the higher spatial resolution available from the particle tracking. Particle tracking alone may not be possible in the high seed density images typically required for achieving good results from the correlation technique. This two-staged velocimetric technique can measure particle velocities with high spatial resolution over a broad range of seeding densities.
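The correlation step described above can be sketched in one dimension (a toy, noise-free analogue of the subregion correlation; it is precisely the noise and particle dropout mentioned in the text that make real peak identification hard and motivate the fuzzy-logic peak selection).

```python
import numpy as np

# 1-D sketch of correlation-based PIV: the average particle displacement
# between two exposures appears as the lag of the cross-correlation peak.
n, shift = 128, 7
rng = np.random.default_rng(1)
frame1 = np.zeros(n)
frame1[rng.choice(np.arange(10, n - 20), size=12, replace=False)] = 1.0  # "particles"
frame2 = np.roll(frame1, shift)                  # every particle moves +7 pixels

corr = np.correlate(frame2, frame1, mode="full")
displacement = int(np.argmax(corr)) - (n - 1)    # lag of the correlation peak
print(displacement)
```

With all particles sharing one displacement and no noise, the true peak is unambiguous; real images add noise peaks that can exceed the true one, which is where the fuzzy peak identification enters.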
ERIC Educational Resources Information Center
Gartrell, John; Marquez, Stephanie Amadeo
1995-01-01
Criticizes data analysis and interpretation in "The Bell Curve": Herrnstein and Murray do not actually study the "cognitive elite"; do not control for education when examining effects of cognitive ability on occupational outcomes; ignore cultural diversity within broad ethnic groups (Asian Americans, Latinos); ignore gender…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lauermann, M.; Weimann, C.; Palmer, R.
2014-05-27
We demonstrate a waveguide-based frequency shifter on the silicon photonic platform, enabling frequency shifts up to 10 GHz. The device is realized by silicon-organic hybrid (SOH) integration. Temporal shaping of the drive signal allows the suppression of spurious side-modes by more than 23 dB.
Infant Learning Is Influenced by Local Spurious Generalizations
ERIC Educational Resources Information Center
Gerken, LouAnn; Quam, Carolyn
2017-01-01
In previous work, 11-month-old infants were able to learn rules about the relation of the consonants in CVCV words from just four examples. The rules involved phonetic feature relations (same voicing or same place of articulation), and infants' learning was impeded when pairs of words allowed alternative possible generalizations (e.g. two words…
Republication of "A Simple--But Powerful--Power Simulation"
ERIC Educational Resources Information Center
Bolman, Lee; Deal, Terrence E.
2017-01-01
The authors write that the longer they study and work in organizations, the more they discover power to be one of the central issues which researchers and students must understand. Researchers who ignore power run the risk of spurious, irrelevant findings. Students who assume administrative positions without a proper understanding of power and how…
Applying Statistics in the Undergraduate Chemistry Laboratory: Experiments with Food Dyes.
ERIC Educational Resources Information Center
Thomasson, Kathryn; Lofthus-Merschman, Sheila; Humbert, Michelle; Kulevsky, Norman
1998-01-01
Describes several experiments to teach different aspects of the statistical analysis of data using household substances and a simple analysis technique. Each experiment can be performed in three hours. Students learn about treatment of spurious data, application of a pooled variance, linear least-squares fitting, and simultaneous analysis of dyes…
ERIC Educational Resources Information Center
Bauer, Daniel J.; Curran, Patrick J.
2004-01-01
Structural equation mixture modeling (SEMM) integrates continuous and discrete latent variable models. Drawing on prior research on the relationships between continuous and discrete latent variable models, the authors identify 3 conditions that may lead to the estimation of spurious latent classes in SEMM: misspecification of the structural model,…
The Seven Deadly Sins of World University Ranking: A Summary from Several Papers
ERIC Educational Resources Information Center
Soh, Kaycheng
2017-01-01
World university rankings use the weight-and-sum approach to process data. Although this seems to pass the common sense test, it has statistical problems. In recent years, seven such problems have been uncovered: spurious precision, weight discrepancies, assumed mutual compensation, indicator redundancy, inter-system discrepancy, negligence of…
Minimizing bias in biomass allometry: Model selection and log transformation of data
Joseph Mascaro; Flint Hughes; Amanda Uowolo; Stefan A. Schnitzer
2011-01-01
Nonlinear regression is increasingly used to develop allometric equations for forest biomass estimation (i.e., as opposed to the traditional approach of log-transformation followed by linear regression). Most statistical software packages, however, assume additive errors by default, violating a key assumption of allometric theory and possibly producing spurious models....
Channel One Online: Advertising Not Educating.
ERIC Educational Resources Information Center
Pasnik, Shelley
Rather than viewing Channel One's World Wide Web site as an authentic news bureau, as the organization claims, it is better understood as an advertising delivery system. The web site is an attempt to expand Channel One's reach into schools, taking advantage of unsuspecting teachers and students who might fall prey to spurious claims. This paper…
Should Children Have Best Friends?
ERIC Educational Resources Information Center
Healy, Mary
2017-01-01
An important theme in the philosophy of education community in recent years has been the way in which philosophy can be brought to illuminate and evaluate research findings from the landscape of policy and practice. Undoubtedly, some of these practices can be based on spurious evidence, yet have mostly been left unchallenged in both philosophical…
Retaining through Training Even for Older Workers
ERIC Educational Resources Information Center
Picchio, Matteo; van Ours, Jan C.
2013-01-01
This paper investigates whether on-the-job training has an effect on the employability of workers. Using data from the Netherlands we disentangle the true effect of training incidence from the spurious one determined by unobserved individual heterogeneity. We also take into account that there might be feedback from shocks in the employment status…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Creutz, Michael
Using the Sigma model to explore the lowest order pseudo-scalar spectrum with SU(3) breaking, this talk considers an additional exact "taste" symmetry to mimic species doubling. Rooting replicas of a valid approach such as Wilson fermions reproduces the desired physical spectrum. In contrast, extra symmetries of the rooted staggered approach leave spurious states and a flavor dependent taste multiplicity.
`Unlearning' has a stabilizing effect in collective memories
NASA Astrophysics Data System (ADS)
Hopfield, J. J.; Feinstein, D. I.; Palmer, R. G.
1983-07-01
Crick and Mitchison [1] have presented a hypothesis for the functional role of dream sleep involving an `unlearning' process. We have independently carried out mathematical and computer modelling of learning and `unlearning' in a collective neural network of 30-1,000 neurones. The model network has a content-addressable memory or `associative memory' which allows it to learn and store many memories. A particular memory can be evoked in its entirety when the network is stimulated by any adequate-sized subpart of the information of that memory [2]. But different memories of the same size are not equally easy to recall. Also, when memories are learned, spurious memories are also created and can also be evoked. Applying an `unlearning' process, similar to the learning processes but with a reversed sign and starting from a noise input, enhances the performance of the network in accessing real memories and in minimizing spurious ones. Although our model was not motivated by higher nervous function, our system displays behaviours which are strikingly parallel to those needed for the hypothesized role of `unlearning' in rapid eye movement (REM) sleep.
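A collective network of the kind described above can be sketched in a few lines. To keep the demo deterministic we store three mutually orthogonal patterns, for which Hebbian recall is provably exact; the spurious mixture states, and the sign-reversed unlearning step that damps them, are noted in comments only.

```python
import numpy as np

# Hopfield-style sketch: store three mutually orthogonal +/-1 patterns
# (rows of a Sylvester Hadamard matrix) with the Hebb rule. Orthogonality
# makes single-bit error correction exact, so the demo is deterministic.
H = np.array([[1]])
for _ in range(4):                      # build the 16x16 Hadamard matrix
    H = np.block([[H, H], [H, -H]])
patterns = H[1:4].astype(float)         # three orthogonal memories, N = 16
N = patterns.shape[1]

W = patterns.T @ patterns / N           # Hebb learning rule
np.fill_diagonal(W, 0.0)                # no self-connections

def update(state):
    return np.where(W @ state >= 0, 1.0, -1.0)   # synchronous sign update

probe = patterns[0].copy()
probe[3] *= -1                          # corrupt one bit
recalled = update(probe)
print(np.array_equal(recalled, patterns[0]))

# With many random (non-orthogonal) memories, mixtures of stored patterns
# also become attractors: the spurious memories of the abstract. `Unlearning'
# applies the Hebb update with reversed sign, W -= eta * s s^T / N, to states
# s reached from noise inputs, weakening those spurious attractors.
```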
Xiang, Baoqiang; Zhao, Ming; Held, Isaac M.; ...
2017-02-13
The severity of the double Intertropical Convergence Zone (DI) problem in climate models can be measured by a tropical precipitation asymmetry index (PAI), indicating whether tropical precipitation favors the Northern Hemisphere or the Southern Hemisphere. Examination of 19 Coupled Model Intercomparison Project phase 5 models reveals that the PAI is tightly linked to the tropical sea surface temperature (SST) bias. As one of the factors determining the SST bias, the asymmetry of tropical net surface heat flux in Atmospheric Model Intercomparison Project (AMIP) simulations is identified as a skillful predictor of the PAI change from an AMIP to a coupled simulation, with an intermodel correlation of 0.90. Using tropical top-of-atmosphere (TOA) fluxes, the correlations are lower but still strong. However, the extratropical asymmetries of surface and TOA fluxes in AMIP simulations cannot serve as useful predictors of the PAI change. Furthermore, this study suggests that the largest source of the DI bias is from the tropics and from atmospheric models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Deyu
A systematic route to go beyond the exact exchange plus random phase approximation (RPA) is to include a physical exchange-correlation kernel in the adiabatic-connection fluctuation-dissipation theorem. Previously [D. Lu, J. Chem. Phys. 140, 18A520 (2014)], we found that non-local kernels with a screening length depending on the local Wigner-Seitz radius, r_s(r), suffer an error associated with a spurious long-range repulsion in van der Waals bounded systems, which deteriorates the binding energy curve as compared to RPA. Here, we analyze the source of the error and propose to replace r_s(r) by a global, average r_s in the kernel. Exemplary studies with the Corradini, del Sole, Onida, and Palummo kernel show that while this change does not affect the already outstanding performance in crystalline solids, using an average r_s significantly reduces the spurious long-range tail in the exchange-correlation kernel in van der Waals bounded systems. Finally, when this method is combined with further corrections using local dielectric response theory, the binding energy of the Kr dimer is improved three times as compared to RPA.
Testing Gene-Gene Interactions in the Case-Parents Design
Yu, Zhaoxia
2011-01-01
The case-parents design has been widely used to detect genetic associations as it can prevent spurious association that could occur in population-based designs. When examining the effect of an individual genetic locus on a disease, logistic regressions developed by conditioning on parental genotypes provide complete protection from spurious association caused by population stratification. However, when testing gene-gene interactions, it is unknown whether conditional logistic regressions are still robust. Here we evaluate the robustness and efficiency of several gene-gene interaction tests that are derived from conditional logistic regressions. We found that in the presence of SNP genotype correlation due to population stratification or linkage disequilibrium, tests with incorrectly specified main-genetic-effect models can lead to inflated type I error rates. We also found that a test with fully flexible main genetic effects always maintains correct test size and its robustness can be achieved with negligible sacrifice of its power. When testing gene-gene interactions is the focus, the test allowing fully flexible main effects is recommended to be used. PMID:21778736
Zhou, Caigen; Zeng, Xiaoqin; Luo, Chaomin; Zhang, Huaguang
In this paper, local bipolar auto-associative memories are presented based on discrete recurrent neural networks with a class of gain type activation function. The weight parameters of neural networks are acquired by a set of inequalities without the learning procedure. The global exponential stability criteria are established to ensure the accuracy of the restored patterns by considering time delays and external inputs. The proposed methodology is capable of effectively overcoming spurious memory patterns and achieving memory capacity. The effectiveness, robustness, and fault-tolerant capability are validated by simulated experiments.
NASA Technical Reports Server (NTRS)
Howell, L. W.; Kennel, H. F.
1984-01-01
The Space Telescope (ST) is subjected to charged particle strikes in its space environment. ST's onboard fine guidance sensors utilize multiplier phototubes (PMT) for attitude determination. These tubes, when subjected to charged particle strikes, generate spurious photons in the form of Cerenkov radiation and fluorescence, which give rise to unwanted disturbances in the pointing of the telescope. A stochastic model is presented for the number of these spurious photons which strike the photocathode of the multiplier phototube and in turn produce the unwanted photon noise. The model is applicable to both galactic cosmic rays and charged particles trapped in the Earth's radiation belts. The model, which was programmed, allows easy adaptation to a wide range of particles and different multiplier phototube parameters. The probability density functions for photon noise caused by protons, alpha particles, and carbon nuclei were estimated using thousands of simulated strikes. These distributions are used as part of an overall ST dynamics simulation. The sensitivity of the density function to changes in the window parameters was also investigated.
NASA Technical Reports Server (NTRS)
Howell, L. W.; Kennel, H. F.
1986-01-01
The Space Telescope (ST) is subjected to charged particle strikes in its space environment. ST's onboard fine guidance sensors utilize multiplier phototubes (PMTs) for attitude determination. These tubes, when subjected to charged particle strikes, generate spurious photons in the form of Cerenkov radiation and fluorescence, which give rise to unwanted disturbances in the pointing of the telescope. A stochastic model for the number of these spurious photons which strike the photocathode of the multiplier phototube, and in turn produce the unwanted photon noise, is presented. The model is applicable to both galactic cosmic rays and charged particles trapped in the Earth's radiation belts. The model, which was programmed, allows for easy adaptation to a wide range of particles and different multiplier phototube parameters. The probability density functions for photon noise caused by protons, alpha particles, and carbon nuclei were estimated using thousands of simulated strikes. These distributions are used as part of an overall ST dynamics simulation. The sensitivity of the density function to changes in the window parameters was also investigated.
Discrete Velocity Models for Polyatomic Molecules Without Nonphysical Collision Invariants
NASA Astrophysics Data System (ADS)
Bernhoff, Niclas
2018-05-01
An important aspect of constructing discrete velocity models (DVMs) for the Boltzmann equation is to obtain the right number of collision invariants. Unlike for the Boltzmann equation, for DVMs there can appear extra collision invariants, so-called spurious collision invariants, in addition to the physical ones. A DVM with only physical collision invariants, and hence without spurious ones, is called normal. The construction of such normal DVMs has been studied extensively in the literature for single species, but also for binary mixtures and, recently, for multicomponent mixtures. In this paper, we address ways of constructing normal DVMs for polyatomic molecules (here modeled by assigning each molecule an internal energy, to account for non-translational energies, which can change during collisions), under the assumption that the set of allowed internal energies is finite. We present general algorithms for constructing such models, but we also give concrete examples of such constructions. This approach can also be combined with similar constructions for multicomponent mixtures to obtain multicomponent mixtures with polyatomic molecules, which is also briefly outlined. Chemical reactions can then be added as well.
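The normality condition described above (the space of collision invariants is exactly spanned by the physical ones) can be checked numerically for a toy model. The sketch below assumes a minimal four-velocity planar DVM with a single collision; it is not a model from the paper, just an illustration of counting invariants via matrix rank:

```python
import numpy as np

# Toy 2-D DVM: four velocities on the corners of the unit square.
V = np.array([(1, 1), (-1, -1), (1, -1), (-1, 1)])

# Single admissible collision: {v0, v1} <-> {v2, v3}, which conserves
# momentum (both pairs sum to (0,0)) and energy (all speeds are equal).
# A collision invariant phi must satisfy phi0 + phi1 - phi2 - phi3 = 0.
C = np.array([[1, 1, -1, -1]], dtype=float)

# Dimension of the space of collision invariants = nullity of C.
nullity = V.shape[0] - np.linalg.matrix_rank(C)

# Physical invariants restricted to these velocities: mass, vx, vy.
# (Energy is proportional to mass here, since all speeds coincide.)
phys = np.array([[1, 1, 1, 1],     # mass
                 [1, -1, 1, -1],   # x-momentum
                 [1, -1, -1, 1]])  # y-momentum
n_physical = np.linalg.matrix_rank(phys)

# The model is "normal" iff there are no spurious invariants.
is_normal = (nullity == n_physical)
```

Here the nullity (3) matches the span of mass and the two momentum components, so this toy DVM is normal; adding velocities without adding enough collisions would raise the nullity and introduce spurious invariants.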
To cut or not to cut? Assessing the modular structure of brain networks.
Chang, Yu-Teng; Pantazis, Dimitrios; Leahy, Richard M
2014-05-01
A wealth of methods has been developed to identify natural divisions of brain networks into groups or modules, with one of the most prominent being modularity. Compared with the popularity of methods to detect community structure, only a few methods exist to statistically control for spurious modules, relying almost exclusively on resampling techniques. It is well known that even random networks can exhibit high modularity because of incidental concentration of edges, even though they have no underlying organizational structure. Consequently, interpretation of community structure is confounded by the lack of principled and computationally tractable approaches to statistically control for spurious modules. In this paper we show that the modularity of random networks follows a transformed version of the Tracy-Widom distribution, providing for the first time a link between module detection and random matrix theory. We compute parametric formulas for the distribution of modularity for random networks as a function of network size and edge variance, and show that we can efficiently control for false positives in brain and other real-world networks. Copyright © 2014 Elsevier Inc. All rights reserved.
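The point that even unstructured random networks score high on modularity is easy to reproduce. A minimal sketch using networkx and greedy modularity optimization (an illustrative simulation; the paper's own null model is the parametric Tracy-Widom result, not this experiment):

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

# An Erdos-Renyi random graph has no planted community structure...
G = nx.gnp_random_graph(100, 0.05, seed=1)

# ...yet modularity optimization still finds a partition with clearly
# positive modularity, purely from incidental concentration of edges.
parts = greedy_modularity_communities(G)
Q = modularity(G, parts)
```

This is exactly why interpreting detected modules requires a statistical control such as the distribution derived in the paper, rather than the raw modularity value.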
Predation and the evolution of complex oviposition behaviour in Amazon rainforest frogs.
Magnusson, William E; Hero, Jean-Marc
1991-05-01
Terrestrial oviposition with free-living aquatic larvae is a common reproductive mode used by amphibians within the central Amazonian rainforest. We investigated the factors presently associated with diversity of microhabitats (waterbodies) that may be maintaining the diversity of reproductive modes. In particular, desiccation, predation by fish, competition with other anurans and water quality were examined in 11 waterbodies as possible forces leading to the evolution of terrestrial oviposition. Predation experiments demonstrated that fish generally do not eat anuran eggs, and that predacious tadpoles and dytiscid beetle larvae are voracious predators of anuran eggs. The percentage of species with terrestrial oviposition was only weakly correlated with the occurrence of pond drying, pH and oxygen concentration, suggesting that anurans in this tropical community are able to use the range of water quality available for egg development. There was a tendency for terrestrial oviposition to be associated with the number of species of tadpoles using the waterbody, but we consider this to be spurious as there was no obvious competitive mechanism that could result in this relationship. The percentage of species with terrestrial oviposition was significantly positively related to our index of egg predation pressure, and negatively related to our index of fish biomass. Egg predation pressure was also negatively related to the index of fish biomass. These results allow us to discount as improbable the hypothesis that predation by fish on anuran eggs was an important selective pressure leading to terrestrial oviposition in this community. The strong positive relationship between terrestrial oviposition and our index of egg predation pressure indicates that these predators have exerted, and are exerting, a significant selective pressure for terrestrial oviposition. 
The strong negative relationship between the occurrence of fish and the egg predators suggests the surprising conclusion that the presence of fish actually protects aquatic anuran eggs from predation in this tropical system, and allows aquatic oviposition to dominate only in those waterbodies with moderate to high densities of fish. Our results suggest that terrestrial oviposition is a "fixed predator avoidance" trait.
Wang, Ju; McClean, Phillip E; Lee, Rian; Goos, R Jay; Helms, Ted
2008-04-01
Association mapping is an alternative to mapping in a biparental population. A key to successful association mapping is to avoid spurious associations by controlling for population structure. Confirming the marker/trait association in an independent population is necessary for the implementation of the marker in other genetic studies. Two independent soybean populations consisting of advanced breeding lines representing the diversity within maturity groups 00, 0, and I were screened in multi-site, replicated field trials to discover molecular markers associated with iron deficiency chlorosis (IDC), a major yield-limiting factor in soybean. Lines with extreme phenotypes were initially screened to identify simple sequence repeat (SSR) markers putatively associated with IDC. Marker data collected from all lines were used to control for population structure and kinship relationships. Single-factor analysis of variance (SFA) and mixed linear model (MLM) analyses were used to discover marker/trait associations. The MLM analyses, which include population structure, kinship, or both factors, reduced the number of markers significantly associated with IDC by 50% compared with SFA. With the MLM approach, three markers were found to be associated with IDC in the first population. Two of these markers, Satt114 and Satt239, were also found to be associated with IDC in the second, confirmation population. For both populations, lines with the tolerant allele at both marker loci had significantly lower IDC scores than lines with one or no tolerant alleles.
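A single-factor ANOVA marker/trait scan of the kind referred to as SFA can be sketched as follows. The IDC scores and genotype groups below are entirely hypothetical; the real analysis additionally requires the structure- and kinship-aware MLM to filter out spurious hits:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)

# Hypothetical IDC scores (lower = more tolerant), grouped by the
# genotype class of one SSR marker. Group names are illustrative only.
tolerant_class = 2.0 + rng.normal(0, 0.5, size=40)
susceptible_class = 3.2 + rng.normal(0, 0.5, size=40)

# Single-factor ANOVA: does mean IDC score differ between genotype classes?
F, p = f_oneway(tolerant_class, susceptible_class)
```

In a genome-wide scan this test is run per marker; the abstract's point is that without the mixed-model correction, roughly half of the "significant" markers it flags can be artifacts of population structure.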
The bleeding time may be longer in children than in adults.
Sanders, J M; Holtkamp, C A; Buchanan, G R
1990-01-01
The bleeding time, the most frequently performed test reflecting in vivo platelet function, is the duration of blood flow from a standardized incision on the volar surface of the forearm. Normal values have been determined in adult subjects, but with the exception of neonates, data on the range of bleeding time values in pediatric patients are unavailable. Standard hematology textbooks imply that bleeding time values in children are similar to those of adults. We have reviewed our 9 years of experience with 137 children (mean age 6.5 years) who were referred for diagnostic evaluation of a bleeding disorder but whose history and physical examination were felt by us to be inconsistent with an abnormality of hemostasis. Bleeding time values in these individuals (mean 6.0 min, 95th percentile 9.0 min) were compared with those of 85 normal adult volunteers (mean 4.4 min, 95th percentile 6.5 min). The Simplate-I disposable device and vertical (perpendicular to elbow crease) incision direction were used in both groups. This difference between the pediatric and adult bleeding time values is statistically significant (p less than 0.0001). Neither age nor sex had a significant effect on the pediatric bleeding time measurements. We conclude that the bleeding time, when performed as described, is longer in children than in adults and that pediatric standards for bleeding time should be used in order to avoid a spurious diagnosis of a primary hemostatic disorder in some normal children.
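The reported child/adult difference can be roughly checked with a Welch two-sample test reconstructed from the summary statistics. The standard deviations below are back-calculated from the reported 95th percentiles under a normality assumption, so this is an illustrative approximation, not the authors' actual computation:

```python
from scipy.stats import ttest_ind_from_stats

# Abstract reports means and 95th percentiles but no SDs; assuming
# approximate normality, sd ~ (p95 - mean) / 1.645.
sd_children = (9.0 - 6.0) / 1.645   # ~1.82 min, n = 137, mean 6.0 min
sd_adults = (6.5 - 4.4) / 1.645     # ~1.28 min, n = 85,  mean 4.4 min

# Welch's t-test (unequal variances) from summary statistics.
t, p = ttest_ind_from_stats(mean1=6.0, std1=sd_children, nobs1=137,
                            mean2=4.4, std2=sd_adults, nobs2=85,
                            equal_var=False)
```

Even with these rough SDs the difference is overwhelming (p far below the 0.0001 quoted in the abstract), which supports using pediatric reference ranges.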
Development of a Large Scale, High Speed Wheel Test Facility
NASA Technical Reports Server (NTRS)
Kondoleon, Anthony; Seltzer, Donald; Thornton, Richard; Thompson, Marc
1996-01-01
Draper Laboratory, with its internal research and development budget, has for the past two years been funding a joint effort with the Massachusetts Institute of Technology (MIT) for the development of a large scale, high speed wheel test facility. This facility was developed to perform experiments and carry out evaluations on levitation and propulsion designs for MagLev systems currently under consideration. The facility was developed to rotate a large (2 meter) wheel which could operate with peripheral speeds of greater than 100 meters/second. The rim of the wheel was constructed of a non-magnetic, non-conductive composite material to avoid the generation of errors from spurious forces. A sensor package containing a multi-axis force and torque sensor, mounted to the base of the station, provides a signal of the lift and drag forces on the package being tested. Position tables mounted on the station allow for the introduction of errors in real time. A computer-controlled data acquisition system was developed around a Macintosh IIfx to record the test data and control the speed of the wheel. This paper describes the development of this test facility. A detailed description of the major components is presented. Recently completed tests of a novel electrodynamic (EDS) suspension system, developed by MIT as part of this joint effort, are described and presented. Adaptation of this facility for linear motor and other propulsion and levitation testing is described.
On the dynamics of a shock-bubble interaction
NASA Technical Reports Server (NTRS)
Quirk, James J.; Karni, Smadar
1994-01-01
We present a detailed numerical study of the interaction of a weak shock wave with an isolated cylindrical gas inhomogeneity. Such interactions have been studied experimentally in an attempt to elucidate the mechanisms whereby shock waves propagating through random media enhance mixing. Our study concentrates on the early phases of the interaction process, which are dominated by repeated refractions of acoustic fronts at the bubble interface. Specifically, we have reproduced two of the experiments performed by Haas and Sturtevant: an M(sub s) = 1.22 planar shock wave, moving through air, impinges on a cylindrical bubble which contains either helium or Refrigerant 22. These flows are modelled using the two-dimensional, compressible Euler equations for a two-component fluid (air-helium or air-Refrigerant 22). Although simulations of shock wave phenomena are now fairly commonplace, they are mostly restricted to single-component flows. Unfortunately, multi-component extensions of successful single-component schemes often suffer from spurious oscillations which are generated at material interfaces. Here we avoid such problems by employing a novel, nonconservative shock-capturing scheme. In addition, we have utilized a sophisticated adaptive mesh refinement algorithm which enables extremely high resolution simulations to be performed relatively cheaply. Thus we have been able to reproduce numerically all the intricate mechanisms that were observed experimentally (e.g., transitions from regular to irregular refraction, cusp formation and shock wave focusing, multi-shock and Mach shock structures, jet formation, etc.), and we can now present an updated description of the dynamics of a shock-bubble interaction.
Kaufhold, John P; Tsai, Philbert S; Blinder, Pablo; Kleinfeld, David
2012-08-01
A graph of tissue vasculature is an essential requirement to model the exchange of gases and nutrients between the blood and cells in the brain. Such a graph is derived from a vectorized representation of anatomical data, provides a map of all vessels as vertices and segments, and may include the location of nonvascular components, such as neuronal and glial somata. Yet vectorized data sets typically contain erroneous gaps, spurious endpoints, and spuriously merged strands. Current methods to correct such defects only address the issue of connecting gaps and further require manual tuning of parameters in a high dimensional algorithm. To address these shortcomings, we introduce a supervised machine learning method that (1) connects vessel gaps by "learned threshold relaxation"; (2) removes spurious segments by "learning to eliminate deletion candidate strands"; and (3) enforces consistency in the joint space of learned vascular graph corrections through "consistency learning." Human operators are only required to label individual objects they recognize in a training set and are not burdened with tuning parameters. The supervised learning procedure examines the geometry and topology of features in the neighborhood of each vessel segment under consideration. We demonstrate the effectiveness of these methods on four sets of microvascular data, each with >800^3 voxels, obtained with all-optical histology of mouse tissue and vectorization by state-of-the-art techniques in image segmentation. Through statistically validated sampling and analysis in terms of precision-recall curves, we find that learning with bagged boosted decision trees reduces equal-error rates for threshold relaxation by 5-21% and improves strand elimination performance by 18-57%. We benchmark generalization performance across datasets; while improvements vary between data sets, learning always leads to a useful reduction in error rates. 
Overall, learning is shown to more than halve the total error rate, and therefore, human time spent manually correcting such vectorizations. Copyright © 2012 Elsevier B.V. All rights reserved.
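"Bagged boosted decision trees" can be sketched by fitting one boosted ensemble per bootstrap resample and majority-voting their predictions. The dataset, ensemble sizes, and hyperparameters below are placeholders on synthetic data, not those of the paper:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for labeled graph-correction candidates.
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.33, random_state=0)

# Bagging written out explicitly: each boosted ensemble is trained on a
# bootstrap resample of the training set.
rng = np.random.default_rng(0)
models = []
for _ in range(5):
    idx = rng.integers(0, len(Xtr), size=len(Xtr))  # bootstrap sample
    m = GradientBoostingClassifier(n_estimators=50, random_state=0)
    models.append(m.fit(Xtr[idx], ytr[idx]))

# Majority vote across the bagged boosted ensembles.
votes = np.mean([m.predict(Xte) for m in models], axis=0)
acc = np.mean((votes > 0.5).astype(int) == yte)
```

Bagging reduces the variance of the individual boosted learners, which is plausibly why it helps on the noisy, human-labeled gap/strand candidates described above.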
Atmospheric Dispersion Effects in Weak Lensing Measurements
Plazas, Andrés Alejandro; Bernstein, Gary
2012-10-01
The wavelength dependence of atmospheric refraction causes elongation of finite-bandwidth images along the elevation vector, which produces spurious signals in weak gravitational lensing shear measurements unless this atmospheric dispersion is calibrated and removed to high precision. Because astrometric solutions and PSF characteristics are typically calibrated from stellar images, differences between the reference stars' spectra and the galaxies' spectra will leave residual errors in both the astrometric positions (dr) and in the second moment (width) of the wavelength-averaged PSF (dv) for galaxies. We estimate the level of dv that will induce spurious weak lensing signals in PSF-corrected galaxy shapes that exceed the statistical errors of the DES and the LSST cosmic-shear experiments. We also estimate the dr signals that will produce unacceptable spurious distortions after stacking of exposures taken at different airmasses and hour angles. We also calculate the errors in the griz bands, and find that dispersion systematics, uncorrected, are up to 6 and 2 times larger in g and r bands, respectively, than the requirements for the DES error budget, but can be safely ignored in i and z bands. For the LSST requirements, the factors are about 30, 10, and 3 in g, r, and i bands, respectively. We find that a simple correction linear in galaxy color is accurate enough to reduce dispersion shear systematics to insignificant levels in the r band for DES and the i band for LSST, but still as much as 5 times larger than the requirements for LSST r-band observations. More complex corrections will likely be able to reduce the systematic cosmic-shear errors below statistical errors for the LSST r band. But g-band effects remain large enough that it seems likely that induced systematics will dominate the statistical errors of both surveys, and cosmic-shear measurements should rely on the redder bands.
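The band dependence of the effect follows directly from refraction geometry: air's refractivity falls with wavelength, so a blue bandpass is smeared more along the elevation vector than a red one. A rough sketch, using an illustrative Cauchy-type refractivity formula (not a precision atmospheric model, and the band edges are approximate):

```python
import numpy as np

def n_minus_1(lam_um):
    # Rough dry-air refractivity vs. wavelength in microns;
    # coefficients are illustrative approximations.
    return 2.879e-4 * (1.0 + 0.00567 / lam_um**2)

def refraction_arcsec(lam_um, zenith_deg):
    # Plane-parallel atmosphere: R ~ (n - 1) * tan(z), converted to arcsec.
    return np.degrees(n_minus_1(lam_um) * np.tan(np.radians(zenith_deg))) * 3600

z = 40.0  # zenith angle, degrees

# Dispersion across a filter = refraction difference between its
# approximate blue and red edges (microns).
disp_g = refraction_arcsec(0.40, z) - refraction_arcsec(0.55, z)
disp_i = refraction_arcsec(0.69, z) - refraction_arcsec(0.82, z)
```

With these illustrative numbers the g-band smear is several times the i-band smear at the same airmass, consistent with the abstract's conclusion that the systematics are worst in g and negligible in the redder bands.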
Optoelectronic oscillator with improved phase noise and frequency stability
NASA Astrophysics Data System (ADS)
Eliyahu, Danny; Sariri, Kouros; Taylor, Joseph; Maleki, Lute
2003-07-01
In this paper we report on recent improvements in phase noise and frequency stability of a 10 GHz opto-electronic oscillator (OEO). In our OEO loop, the high-Q elements (the optical fiber and the narrow bandpass microwave filter) are thermally stabilized using resistive heaters and temperature controllers, keeping their temperature above ambient. The thermally stabilized free-running OEO demonstrates a short-term frequency stability of 0.02 ppm (over several hours) and a frequency vs. temperature slope of -0.1 ppm/°C (compared to -8.3 ppm/°C for a non-thermally-stabilized OEO). We obtained exceptional spectral purity, with a phase noise level of -143 dBc/Hz at 10 kHz offset frequency. We also describe the multi-loop configuration, which dramatically reduces the spurious level at offset frequencies related to the loop round-trip harmonic frequency. The multi-loop configuration has stronger mode selectivity due to interference between signals having different cavity lengths. A drop of the spurious level below -90 dBc was demonstrated. The effect of oscillator aging on the frequency stability was studied as well by recording the oscillator frequency (in a chamber) over several weeks. We observed a reversal in aging direction with logarithmic behavior of A ln(B t+1) - C ln(D t+1), where t is the time and A, B, C, D are constants. Initially, in the first several days, the positive aging dominates; later, the negative aging mechanism dominates. We conclude that this long-term aging model is consistent with the experimental results.
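The quoted aging law A ln(Bt+1) - C ln(Dt+1) is straightforward to fit to frequency-drift data. Below, synthetic noise-free data generated from assumed constants stand in for the weeks of recorded oscillator frequency:

```python
import numpy as np
from scipy.optimize import curve_fit

def aging(t, A, B, C, D):
    # Competing positive and negative logarithmic aging terms,
    # as in the model quoted in the abstract.
    return A * np.log(B * t + 1) - C * np.log(D * t + 1)

# Synthetic frequency-offset data (e.g. ppm) over ~40 days, from
# assumed constants; real values would come from the recorded data.
t = np.linspace(0, 40, 200)
true_params = (0.8, 0.9, 1.1, 0.05)
y = aging(t, *true_params)

popt, _ = curve_fit(aging, t, y, p0=(1, 1, 1, 0.1))
```

With B >> D, the first (positive) term dominates at small t and the second at large t, reproducing the observed reversal of aging direction.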
Quantifying asymmetry: ratios and alternatives.
Franks, Erin M; Cabo, Luis L
2014-08-01
Traditionally, the study of metric skeletal asymmetry has relied largely on univariate analyses, utilizing ratio transformations when the goal is comparing asymmetries in skeletal elements or populations of dissimilar dimensions. Under this approach, raw asymmetries are divided by a size marker, such as a bilateral average, in an attempt to produce size-free asymmetry indices. Henceforth, this will be referred to as "controlling for size" (see Smith: Curr Anthropol 46 (2005) 249-273). Ratios obtained in this manner often require further transformations to interpret the meaning and sources of asymmetry. This model frequently ignores the fundamental assumption of ratios: the relationship between the variables entered in the ratio must be isometric. Violations of this assumption can obscure existing asymmetries and render spurious results. In this study, we examined the performance of the classic indices in detecting and portraying the asymmetry patterns in four human appendicular bones and explored potential methodological alternatives. Examination of the ratio model revealed that it does not fulfill its intended goals in the bones examined, as the numerator and denominator are independent in all cases. The ratios also introduced strong biases in the comparisons between different elements and variables, generating spurious asymmetry patterns. Multivariate analyses strongly suggest that any transformation to control for overall size or variable range must be conducted before, rather than after, calculating the asymmetries. A combination of exploratory multivariate techniques, such as Principal Components Analysis, and confirmatory linear methods, such as regression and analysis of covariance, appear as a promising and powerful alternative to the use of ratios. © 2014 Wiley Periodicals, Inc.
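The classic ratio index and the regression-based alternative advocated above can be contrasted in a few lines. The paired measurements are hypothetical:

```python
import numpy as np

def asymmetry_index(left, right):
    # Classic size-normalized index: raw asymmetry over the bilateral mean.
    return 2.0 * (left - right) / (left + right)

# Hypothetical paired bone lengths (mm) for a small sample.
left = np.array([312.0, 298.5, 305.2, 321.8])
right = np.array([308.4, 299.1, 301.0, 318.2])

ai = asymmetry_index(left, right)  # the ratio approach

# The alternative: control for size *before* computing asymmetry,
# by regressing the raw difference on overall size and keeping residuals.
size = (left + right) / 2.0
slope, intercept = np.polyfit(size, left - right, 1)
residuals = (left - right) - (slope * size + intercept)
```

The ratio implicitly assumes the left-right difference scales isometrically with size; the regression makes that size relationship explicit and testable, which is the paper's core recommendation.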
Low validity of Google Trends for behavioral forecasting of national suicide rates.
Tran, Ulrich S; Andel, Rita; Niederkrotenthaler, Thomas; Till, Benedikt; Ajdacic-Gross, Vladeta; Voracek, Martin
2017-01-01
Recent research suggests that search volumes of the most popular search engine worldwide, Google, provided via Google Trends, could be associated with national suicide rates in the USA, UK, and some Asian countries. However, search volumes have mostly been studied in an ad hoc fashion, without controls for spurious associations. This study evaluated the validity and utility of Google Trends search volumes for behavioral forecasting of suicide rates in the USA, Germany, Austria, and Switzerland. Suicide-related search terms were systematically collected and respective Google Trends search volumes evaluated for availability. Time spans covered 2004 to 2010 (USA, Switzerland) and 2004 to 2012 (Germany, Austria). Temporal associations of search volumes and suicide rates were investigated with time-series analyses that rigorously controlled for spurious associations. The number and reliability of analyzable search volume data increased with country size. Search volumes showed various temporal associations with suicide rates. However, associations differed both across and within countries and mostly followed no discernable patterns. The total number of significant associations roughly matched the number of expected Type I errors. These results suggest that the validity of Google Trends search volumes for behavioral forecasting of national suicide rates is low. The utility and validity of search volumes for the forecasting of suicide rates depend on two key assumptions ("the population that conducts searches consists mostly of individuals with suicidal ideation", "suicide-related search behavior is strongly linked with suicidal behavior"). We discuss strands of evidence that these two assumptions are likely not met. Implications for future research with Google Trends in the context of suicide research are also discussed.
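The core pitfall (two independently trending series correlating strongly in their levels) and the simplest control (differencing before correlating) can be illustrated with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200

# Two *independent* series that both trend upward: stand-ins for a
# search-volume series and a suicide-rate series with common drift.
x = np.arange(n) * 0.5 + rng.normal(0, 1, n)
y = np.arange(n) * 0.3 + rng.normal(0, 1, n)

corr_levels = np.corrcoef(x, y)[0, 1]                     # inflated by shared trend
corr_diffs = np.corrcoef(np.diff(x), np.diff(y))[0, 1]    # trend removed
```

The level correlation is near 1 despite zero true association, while the differenced correlation hovers near 0: a minimal version of the spurious-association controls (differencing, prewhitening) applied in the time-series analyses above.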
Reducing orbital eccentricity of precessing black-hole binaries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buonanno, Alessandra; Taracchini, Andrea; Kidder, Lawrence E.
2011-05-15
Building initial conditions for generic binary black-hole evolutions which are not affected by initial spurious eccentricity remains a challenge for numerical-relativity simulations. This problem can be overcome by applying an eccentricity-removal procedure which consists of evolving the binary black hole for a couple of orbits, estimating the resulting eccentricity, and then restarting the simulation with corrected initial conditions. The presence of spins can complicate this procedure. As predicted by post-Newtonian theory, spin-spin interactions and precession prevent the binary from moving along an adiabatic sequence of spherical orbits, inducing oscillations in the radial separation and in the orbital frequency. For single-spin binary black holes these oscillations are a direct consequence of monopole-quadrupole interactions. However, spin-induced oscillations occur at approximately twice the orbital frequency, and therefore can be distinguished and disentangled from the initial spurious eccentricity, which occurs at approximately the orbital frequency. Taking this into account, we develop a new eccentricity-removal procedure based on the derivative of the orbital frequency and find that it is rather successful in reducing the eccentricity measured in the orbital frequency to values less than 10^-4 when moderate spins are present. We test this new procedure using numerical-relativity simulations of binary black holes with mass ratios 1.5 and 3, spin magnitude 0.5, and various spin orientations. The numerical simulations exhibit spin-induced oscillations in the dynamics at approximately twice the orbital frequency. Oscillations of similar frequency are also visible in the gravitational-wave phase and frequency of the dominant l=2, m=2 mode.
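A frequency-based eccentricity estimate can be sketched on a toy inspiral: fit a secular drift plus a sinusoid at the orbital frequency to the orbital-frequency time series, then read off e from the oscillation amplitude. All parameters and the e ~ amplitude / (2 omega) relation are illustrative, not from an actual numerical-relativity run:

```python
import numpy as np
from scipy.optimize import curve_fit

def omega_model(t, w0, adot, amp, W, phi):
    # Slow secular drift plus an eccentricity-induced oscillation
    # at (approximately) the orbital frequency.
    return w0 + adot * t + amp * np.sin(W * t + phi)

# Synthetic orbital-frequency track with a small injected eccentricity.
t = np.linspace(0, 1000, 5000)
w0, adot, e_true = 0.02, 1e-6, 5e-3
omega = omega_model(t, w0, adot, 2 * e_true * w0, w0, 0.3)

popt, _ = curve_fit(omega_model, t, omega, p0=(0.02, 1e-6, 1e-4, 0.02, 0.0))
e_est = abs(popt[2]) / (2 * popt[0])  # e ~ oscillation amplitude / (2 * omega_0)
```

Spin-induced oscillations near twice the orbital frequency would show up as a second sinusoid at ~2W; fitting only the W component is what lets the eccentricity be disentangled from them, as the procedure above exploits.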
Genetic Structure of the Han Chinese Population Revealed by Genome-wide SNP Variation
Chen, Jieming; Zheng, Houfeng; Bei, Jin-Xin; Sun, Liangdan; Jia, Wei-hua; Li, Tao; Zhang, Furen; Seielstad, Mark; Zeng, Yi-Xin; Zhang, Xuejun; Liu, Jianjun
2009-01-01
Population stratification is a potential problem for genome-wide association studies (GWAS), confounding results and causing spurious associations. Hence, understanding how allele frequencies vary across geographic regions or among subpopulations is an important prelude to analyzing GWAS data. Using over 350,000 genome-wide autosomal SNPs in over 6000 Han Chinese samples from ten provinces of China, our study revealed a one-dimensional “north-south” population structure and a close correlation between geography and the genetic structure of the Han Chinese. The north-south population structure is consistent with the historical migration pattern of the Han Chinese population. Metropolitan cities in China were, however, more diffused “outliers,” probably because of the impact of modern migration of peoples. At a very local scale within the Guangdong province, we observed evidence of population structure among dialect groups, probably on account of endogamy within these dialects. Via simulation, we show that empirical levels of population structure observed across modern China can cause spurious associations in GWAS if not properly handled. In the Han Chinese, geographic matching is a good proxy for genetic matching, particularly in validation and candidate-gene studies in which population stratification cannot be directly accessed and accounted for because of the lack of genome-wide data, with the exception of the metropolitan cities, where geographical location is no longer a good indicator of ancestral origin. Our findings are important for designing GWAS in the Chinese population, an activity that is expected to intensify greatly in the near future. PMID:19944401
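The way principal components expose population structure in genotype data can be sketched with a two-population caricature (an illustrative allele-frequency shift on simulated genotypes, not the Han Chinese data):

```python
import numpy as np

rng = np.random.default_rng(7)
n_per_pop, n_snps = 100, 500

# Two subpopulations with shifted allele frequencies at every SNP --
# a crude stand-in for a north/south gradient.
p_a = rng.uniform(0.2, 0.5, n_snps)
p_b = np.clip(p_a + 0.15, 0.0, 1.0)

geno_a = rng.binomial(2, p_a, size=(n_per_pop, n_snps))
geno_b = rng.binomial(2, p_b, size=(n_per_pop, n_snps))
G = np.vstack([geno_a, geno_b]).astype(float)

# PCA via SVD of the column-centered genotype matrix.
Gc = G - G.mean(axis=0)
U, S, Vt = np.linalg.svd(Gc, full_matrices=False)
pc1 = U[:, 0] * S[0]

# PC1 should separate the two subpopulations almost perfectly.
labels = np.array([0] * n_per_pop + [1] * n_per_pop)
side = (pc1 > 0).astype(int)
accuracy = max(np.mean(side == labels), 1 - np.mean(side == labels))
```

In a GWAS, including the top principal components (or a kinship matrix) as covariates absorbs exactly this axis of variation, which is how the spurious associations demonstrated in the paper's simulations are controlled.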
A 6.1 s isomer and rotational bands in ^192Os
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pakkanen, A.; Heikkinen, D.W.
A 6.1 ± 0.2 s activity has been observed when natural Os targets were bombarded with 14.5 MeV neutrons. The activity is assigned to the decay of a high-spin isomer in ^192Os at 2015.4 keV, which is depopulated by M2 and E3 transitions. Singles and coincidence gamma-ray spectra have allowed the identification of seven new states in ^192Os. Several of these levels have been placed in either the ground-state or gamma-vibrational bands, which are strongly mixed. Excitation energies and B(E2) ratios for these bands are compared with different theoretical models. (auth) It is shown that, if one uses single-particle energies from experiment and a delta residual interaction, it is not possible to obtain the energy of the giant dipole and spurious states of ^208Pb, and at the same time obtain reasonable results for the low-lying two-particle spectra of ^210Pb or ^210Po. Related to the above problem, the isobaric analog state of ^208Pb (in ^208Bi) comes much too low in calculations using realistic interactions. It is noted that the above difficulties can be overcome, phenomenologically at least, by adding to the effective interaction some long-range repulsive components. The Bansal-French and the Schiffer interactions are examples of these; however, the dipole-dipole component of the Schiffer interaction gives much too large a splitting between the dipole state and the spurious state. (auth)
NASA Astrophysics Data System (ADS)
Rausch, Kameron; Houchin, Scott; Cardema, Jason; Moy, Gabriel; Haas, Evan; De Luccia, Frank J.
2013-12-01
Suomi National Polar-orbiting Partnership (S-NPP) Visible Infrared Imaging Radiometer Suite (VIIRS) reflective bands are currently calibrated via weekly updates to look-up tables (LUTs) utilized by operational ground processing in the Joint Polar Satellite System Interface Data Processing Segment (IDPS). The parameters in these LUTs must be predicted ahead 2 weeks and cannot adequately track the dynamically varying response characteristics of the instrument. As a result, spurious "predict-ahead" calibration errors of the order of 0.1% or greater are routinely introduced into the calibrated reflectances and radiances produced by IDPS in sensor data records (SDRs). Spurious calibration errors of this magnitude adversely impact the quality of downstream environmental data records (EDRs) derived from VIIRS SDRs, such as Ocean Color/Chlorophyll, and cause increased striping and band-to-band radiometric calibration uncertainty in SDR products. A novel algorithm that fully automates reflective band calibration has been developed for implementation in IDPS in late 2013. Automating the reflective solar band (RSB) calibration is extremely challenging and represents a significant advancement over the manner in which RSB calibration has traditionally been performed in heritage instruments such as the Moderate Resolution Imaging Spectroradiometer. The automated algorithm applies calibration data almost immediately after their acquisition by the instrument from views of space and onboard calibration sources, thereby eliminating the predict-ahead errors associated with the current offline calibration process. This new algorithm, when implemented, will significantly improve the quality of VIIRS reflective band SDRs and consequently the quality of EDRs produced from these SDRs.