NASA Astrophysics Data System (ADS)
Mansfield, C. D.; Rutt, H. N.
2002-02-01
The possible generation of spurious results, arising from coincident absorption bands when infrared spectroscopic techniques are applied to the measurement of carbon isotope ratios in breath, has been re-examined. An earlier investigation, which approached the problem qualitatively, fulfilled its aim of providing an unambiguous assurance that 13C16O2/12C16O2 ratios can be confidently measured for isotopic breath tests using instruments based on infrared absorption. Although this conclusion still stands, subsequent quantitative investigation has revealed an important exception that necessitates strict adherence to sample collection protocol. The results show that the concentrations and decay rates of the coincident breath trace compounds acetonitrile and carbon monoxide, found in the breath sample of a heavy smoker, can produce spurious results. Hence, findings from this investigation justify the concern that breath trace compounds present a risk to the accurate measurement of carbon isotope ratios in breath when using broadband, non-dispersive, ground-state absorption infrared spectroscopy. The investigation provides recommendations on the length of smoking abstention required to avoid generating spurious results and reaffirms, through quantitative argument, the validity of using infrared absorption spectroscopy to measure CO2 isotope ratios in breath.
Spurious symptom reduction in fault monitoring
NASA Technical Reports Server (NTRS)
Shontz, William D.; Records, Roger M.; Choi, Jai J.
1993-01-01
Previous work accomplished on NASA's Faultfinder concept suggested that the concept was jeopardized by spurious symptoms generated in the monitoring phase. The purpose of the present research was to investigate methods of reducing the generation of spurious symptoms during in-flight engine monitoring. Two approaches for reducing spurious symptoms were investigated. A knowledge base of rules was constructed to filter known spurious symptoms and a neural net was developed to improve the expectation values used in the monitoring process. Both approaches were effective in reducing spurious symptoms individually. However, the best results were obtained using a hybrid system combining the neural net capability to improve expectation values with the rule-based logic filter.
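The two-stage architecture described above (a rule-based filter for known spurious symptoms, plus improved expectation values that gate symptom generation) can be sketched as follows. This is a toy illustration only: the symptom names, expectation values, and tolerances are invented, and the learned expectation model of the actual Faultfinder work is reduced here to a fixed table.

```python
# Toy sketch of the hybrid idea: stage 1 drops symptom patterns known to be
# spurious; stage 2 keeps a symptom only when the measurement deviates from
# its expectation by more than a tolerance. All names/values are invented.
def rule_filter(symptoms, known_spurious):
    return [s for s in symptoms if s["name"] not in known_spurious]

def threshold_filter(symptoms, expected, tol):
    return [
        s for s in symptoms
        if abs(s["value"] - expected[s["name"]]) > tol[s["name"]]
    ]

symptoms = [
    {"name": "egt_high", "value": 920.0},   # genuinely anomalous reading
    {"name": "n1_flutter", "value": 0.4},   # known spurious artifact
    {"name": "fuel_flow", "value": 101.0},  # within expectation
]
expected = {"egt_high": 880.0, "fuel_flow": 100.0}
tol = {"egt_high": 20.0, "fuel_flow": 5.0}

stage1 = rule_filter(symptoms, known_spurious={"n1_flutter"})
stage2 = threshold_filter(stage1, expected, tol)
print([s["name"] for s in stage2])
```

In the actual study the expectation table would be produced by the neural net rather than hard-coded, which is what made the hybrid outperform either stage alone.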
Direct Numerical Simulation of Low Capillary Number Pore Scale Flows
NASA Astrophysics Data System (ADS)
Esmaeilzadeh, S.; Soulaine, C.; Tchelepi, H.
2017-12-01
The arrangement of void spaces and the granular structure of a porous medium determine multiple macroscopic properties of the rock, such as porosity, capillary pressure, and relative permeability. Therefore, it is important to study the microscopic structure of reservoir pores and understand the dynamics of fluid displacement through them. One approach is direct numerical simulation of pore-scale flow, which requires a robust numerical tool for predicting fluid dynamics and a detailed understanding of the physical processes occurring at the pore scale. In pore-scale flows at low capillary number, Eulerian multiphase methods are well known to produce additional vorticity close to the interface. This is mainly due to discretization errors that lead to an imbalance between capillary pressure and surface tension forces, causing unphysical spurious currents. At the pore scale, these spurious currents can become significantly stronger than the average velocity in the phases and lead to unphysical displacement of the interface. In this work, we first investigate the capability of the algebraic Volume of Fluid (VOF) method in OpenFOAM for low-capillary-number pore-scale flow simulations. Afterward, we compare the VOF results with a Coupled Level-Set Volume of Fluid (CLSVOF) method and the Iso-Advector method. The former has been shown to reduce VOF's unphysical spurious currents in some cases, and both are known to capture interfaces more sharply than VOF. In conclusion, we investigate whether the use of CLSVOF or Iso-Advector leads to smaller spurious velocities and more accurate results for capillary-driven pore-scale multiphase flows. Keywords: Pore-scale multiphase flow, Capillary driven flows, Spurious currents, OpenFOAM
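A standard way to quantify the spurious currents the abstract describes is the static-droplet benchmark: the exact solution has zero velocity, so any residual motion is a discretization artifact, usually reported as a capillary number built from the maximum parasitic velocity. A minimal post-processing sketch (not OpenFOAM code; viscosity and surface tension values are illustrative):

```python
# Post-processing sketch: quantify spurious currents from a static-droplet
# test, where the exact solution is zero velocity everywhere and any
# residual motion is a numerical artifact.
import numpy as np

def spurious_capillary_number(u, v, mu, sigma):
    """Max spurious-current magnitude expressed as a capillary number.

    u, v  : velocity components on the grid (exact solution: zero)
    mu    : dynamic viscosity of the continuous phase (Pa s)
    sigma : surface tension coefficient (N/m)
    """
    u_max = np.max(np.sqrt(u**2 + v**2))
    return mu * u_max / sigma

# Synthetic example: parasitic velocities of order 1e-4 m/s on a 32x32 grid
rng = np.random.default_rng(0)
u = 1e-4 * rng.standard_normal((32, 32))
v = 1e-4 * rng.standard_normal((32, 32))
ca = spurious_capillary_number(u, v, mu=1e-3, sigma=0.07)
print(ca)
```

Tracking this number across schemes (VOF vs. CLSVOF vs. Iso-Advector) is one way to make the comparison in the abstract quantitative.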
Shardell, Michelle; Harris, Anthony D; El-Kamary, Samer S; Furuno, Jon P; Miller, Ram R; Perencevich, Eli N
2007-10-01
Quasi-experimental study designs are frequently used to assess interventions that aim to limit the emergence of antimicrobial-resistant pathogens. However, previous studies using these designs have often used suboptimal statistical methods, which may result in researchers making spurious conclusions. Methods used to analyze quasi-experimental data include 2-group tests, regression analysis, and time-series analysis, and they all have specific assumptions, data requirements, strengths, and limitations. An example of a hospital-based intervention to reduce methicillin-resistant Staphylococcus aureus infection rates and reduce overall length of stay is used to explore these methods.
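Of the methods the abstract names, time-series analysis of a quasi-experiment is often implemented as segmented (interrupted time-series) regression. A minimal sketch with synthetic monthly infection rates, assuming a single intervention point; the data and the effect size are invented for illustration:

```python
# Interrupted time-series (segmented regression) sketch on synthetic data:
# fit an intercept, a secular trend, a level change at the intervention,
# and a slope change after it, via ordinary least squares.
import numpy as np

rng = np.random.default_rng(1)
n = 48                        # 48 months of observations
t = np.arange(n)
intervention = 24             # intervention occurs at month 24
post = (t >= intervention).astype(float)

# Simulated truth: baseline trend plus a level drop of 3.0 post-intervention
y = 10.0 + 0.05 * t - 3.0 * post + rng.normal(0, 0.3, n)

# Design matrix: [intercept, trend, level change, slope change]
X = np.column_stack([np.ones(n), t, post, post * (t - intervention)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
level_change = beta[2]
print(round(level_change, 2))
```

A naive 2-group test on the same data would conflate the secular trend with the intervention effect, which is exactly the kind of spurious conclusion the paper warns about.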
Identification of Spurious Signals from Permeable Ffowcs Williams and Hawkings Surfaces
NASA Technical Reports Server (NTRS)
Lopes, Leonard V.; Boyd, David D., Jr.; Nark, Douglas M.; Wiedemann, Karl E.
2017-01-01
Integral forms of the permeable surface formulation of the Ffowcs Williams and Hawkings (FW-H) equation often require an input in the form of a near field Computational Fluid Dynamics (CFD) solution to predict noise in the near or far field from various types of geometries. The FW-H equation involves three source terms: two surface terms (monopole and dipole) and a volume term (quadrupole). Many solutions to the FW-H equation, such as several of Farassat's formulations, neglect the quadrupole term. Neglecting the quadrupole term in permeable surface formulations leads to inaccuracies called spurious signals. This paper explores the concept of spurious signals, explains how they are generated by specifying the acoustic and hydrodynamic surface properties individually, and provides methods to determine their presence, regardless of whether a correction algorithm is employed. A potential approach based on the equivalent sources method (ESM) and the sensitivity of Formulation 1A (Formulation S1A) is also discussed for the removal of spurious signals.
Chou, Chia-Chun; Kouri, Donald J
2013-04-25
We show that there exist spurious states for the sector two tensor Hamiltonian in multidimensional supersymmetric quantum mechanics. For one-dimensional supersymmetric quantum mechanics on an infinite domain, the sector one and sector two Hamiltonians have identical spectra with the exception of the sector-one ground state. For tensorial multidimensional supersymmetric quantum mechanics, there exist normalizable spurious states for the sector two Hamiltonian with energy equal to the sector-one ground-state energy. These spurious states are annihilated by the adjoint charge operator and hence do not correspond to physical states of the original Hamiltonian. The Hermitian property of the sector two Hamiltonian implies the orthogonality between spurious and physical states. In addition, we develop a method for constructing a specific form of the spurious states for any quantum system and also generate several spurious states for a two-dimensional anharmonic oscillator system and for the hydrogen atom.
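In standard SUSY-QM notation, the structure described above can be sketched as follows (for the unbroken case, where the sector-one ground-state energy is zero):

```latex
H_1 = Q^\dagger Q, \qquad H_2 = Q\,Q^\dagger .
```

If $H_1\psi = E\psi$ with $E>0$, then $Q\psi$ is a sector-two eigenstate with the same energy. Conversely, any normalizable $\phi$ with $Q^\dagger\phi = 0$ satisfies $H_2\phi = Q\,Q^\dagger\phi = 0$, so it is degenerate with the sector-one ground state yet is not the image $Q\psi$ of any physical state; these are the spurious states, and $\langle Q\psi,\phi\rangle = \langle\psi, Q^\dagger\phi\rangle = 0$ is the orthogonality noted in the abstract.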
Spurious cross-frequency amplitude-amplitude coupling in nonstationary, nonlinear signals
NASA Astrophysics Data System (ADS)
Yeh, Chien-Hung; Lo, Men-Tzung; Hu, Kun
2016-07-01
Recent studies of brain activities show that cross-frequency coupling (CFC) plays an important role in memory and learning. Many measures have been proposed to investigate the CFC phenomenon, including the correlation between the amplitude envelopes of two brain waves at different frequencies - cross-frequency amplitude-amplitude coupling (AAC). In this short communication, we describe how nonstationary, nonlinear oscillatory signals may produce spurious cross-frequency AAC. Utilizing the empirical mode decomposition, we also propose a new method for assessment of AAC that can potentially reduce the effects of nonlinearity and nonstationarity and, thus, help to avoid the detection of artificial AACs. We compare the performances of this new method and the traditional Fourier-based AAC method. We also discuss the strategies to identify potential spurious AACs.
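The traditional Fourier/Hilbert-based AAC measure mentioned above is simply the correlation between amplitude envelopes of two band-limited signals. A minimal sketch on synthetic data (in real use the two bands would come from filtered EEG; the frequencies and shared modulation here are illustrative):

```python
# Hilbert-based amplitude-amplitude coupling (AAC): correlate the amplitude
# envelopes of two band-limited signals. Synthetic example in which a shared
# slow modulation drives both a 6 Hz and a 40 Hz carrier, so AAC is high.
import numpy as np
from scipy.signal import hilbert

fs = 500.0                                          # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
common = 1.0 + 0.5 * np.sin(2 * np.pi * 0.5 * t)    # shared slow modulation
theta = common * np.sin(2 * np.pi * 6 * t)          # 6 Hz carrier
gamma = common * np.sin(2 * np.pi * 40 * t)         # 40 Hz carrier

env_theta = np.abs(hilbert(theta))                  # amplitude envelopes
env_gamma = np.abs(hilbert(gamma))
aac = np.corrcoef(env_theta, env_gamma)[0, 1]
print(round(aac, 3))
```

The paper's point is that a nonstationary, nonlinear oscillation can produce a high value of this statistic even without genuine coupling, which motivates the EMD-based alternative.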
Attenuation of spurious responses in electromechanical filters
Olsson, Roy H.; Hietala, Vincent M.
2018-04-10
A spur cancelling, electromechanical filter includes a first resonator having a first resonant frequency and one or more first spurious responses, and it also includes, electrically connected to the first resonator, a second resonator having a second resonant frequency and one or more second spurious responses. The first and second resonant frequencies are approximately identical, but the first resonator is physically non-identical to the second resonator. The difference between the resonators makes the respective spurious responses different. This allows for filters constructed from a cascade of these resonators to exhibit reduced spurious responses.
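The cancellation principle can be illustrated numerically: two band-pass responses share the same main resonance but place their spurious peaks at different frequencies, so the cascade (product of responses) preserves the passband while suppressing both spurs. The Lorentzian shapes, frequencies, and spur levels below are illustrative, not taken from the patent:

```python
# Toy cascade of two non-identical resonators: same main resonance,
# different spurious peaks. The product keeps the passband and knocks
# down both spurs.
import numpy as np

def lorentzian(f, f0, bw):
    return 1.0 / (1.0 + ((f - f0) / bw) ** 2)

f = np.linspace(0.5e9, 2.5e9, 2001)                       # Hz sweep
r1 = lorentzian(f, 1.5e9, 20e6) + 0.3 * lorentzian(f, 1.8e9, 10e6)
r2 = lorentzian(f, 1.5e9, 20e6) + 0.3 * lorentzian(f, 1.2e9, 10e6)
cascade = r1 * r2

def level_at(resp, freq):
    return resp[np.argmin(np.abs(f - freq))]

# Passband survives; each -10 dB spur drops to roughly the product level
print(level_at(cascade, 1.5e9), level_at(cascade, 1.8e9))
```

Because each resonator's spur lands where the other resonator is already far down its skirt, the cascade attenuates spurs multiplicatively while the shared resonance is attenuated only slightly.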
NASA Technical Reports Server (NTRS)
Robinson, Michael; Steiner, Matthias; Wolff, David B.; Ferrier, Brad S.; Kessinger, Cathy; Einaudi, Franco (Technical Monitor)
2000-01-01
The primary function of the TRMM Ground Validation (GV) Program is to create GV rainfall products that provide basic validation of satellite-derived precipitation measurements for select primary sites. A fundamental and extremely important step in creating high-quality GV products is radar data quality control. Quality control (QC) processing of TRMM GV radar data is based on some automated procedures, but the current QC algorithm is not fully operational and requires significant human interaction to assure satisfactory results. Moreover, the TRMM GV QC algorithm, even with continuous manual tuning, still cannot completely remove all types of spurious echoes. In an attempt to improve the current operational radar data QC procedures of the TRMM GV effort, an intercomparison of several QC algorithms has been conducted. This presentation will demonstrate how various radar data QC algorithms affect accumulated radar rainfall products. In all, six different QC algorithms will be applied to two months of WSR-88D radar data from Melbourne, Florida. Daily, five-day, and monthly accumulated radar rainfall maps will be produced for each quality-controlled data set. The QC algorithms will be evaluated and compared based on their ability to remove spurious echoes without removing significant precipitation. Strengths and weaknesses of each algorithm will be assessed based on their ability to mitigate both erroneous additions and reductions in rainfall accumulation, arising from spurious echo contamination and true precipitation removal, respectively. Contamination from individual spurious echo categories will be quantified to further diagnose the abilities of each radar QC algorithm. Finally, a cost-benefit analysis will be conducted to determine if a more automated QC algorithm is a viable alternative to the current, labor-intensive QC algorithm employed by TRMM GV.
The origin of spurious solutions in computational electromagnetics
NASA Technical Reports Server (NTRS)
Jiang, Bo-Nan; Wu, Jie; Povinelli, L. A.
1995-01-01
The origin of spurious solutions in computational electromagnetics, which violate the divergence equations, is deeply rooted in a misconception about the first-order Maxwell's equations and in an incorrect derivation and use of the curl-curl equations. The divergence equations must always be included in the first-order Maxwell's equations to maintain the ellipticity of the system in the space domain and to guarantee the uniqueness of the solution and the accuracy of the numerical solutions. The div-curl method and the least-squares method provide a rigorous derivation of the equivalent second-order Maxwell's equations and their boundary conditions. The node-based least-squares finite element method (LSFEM) is recommended for solving the first-order full Maxwell equations directly. Examples of numerical solutions by LSFEM for time-harmonic problems are given to demonstrate that the LSFEM is free of spurious solutions.
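The first-order time-harmonic system the abstract refers to, with the divergence equations retained, can be written as (standard notation; one common time convention assumed):

```latex
\nabla\times\mathbf{E} = i\omega\mu\,\mathbf{H}, \qquad
\nabla\times\mathbf{H} = -i\omega\epsilon\,\mathbf{E} + \mathbf{J}, \qquad
\nabla\cdot(\epsilon\mathbf{E}) = \rho, \qquad
\nabla\cdot(\mu\mathbf{H}) = 0 .
```

A least-squares finite element method then minimizes the sum of the squared $L^2$ residuals of all four equations, so the divergence constraints are enforced in the discrete problem rather than discarded; dropping them (as in naive curl-curl formulations) is what admits the divergence-violating spurious solutions.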
Beltman, Joost B; Urbanus, Jos; Velds, Arno; van Rooij, Nienke; Rohr, Jan C; Naik, Shalin H; Schumacher, Ton N
2016-04-02
Next generation sequencing (NGS) of amplified DNA is a powerful tool to describe genetic heterogeneity within cell populations that can both be used to investigate the clonal structure of cell populations and to perform genetic lineage tracing. For applications in which both abundant and rare sequences are biologically relevant, the relatively high error rate of NGS techniques complicates data analysis, as it is difficult to distinguish rare true sequences from spurious sequences that are generated by PCR or sequencing errors. This issue applies, for instance, to cellular barcoding strategies that aim to follow the number and type of offspring of single cells by supplying these with unique heritable DNA tags. Here, we use genetic barcoding data from the Illumina HiSeq platform to show that straightforward read threshold-based filtering of data is typically insufficient to filter out spurious barcodes. Importantly, we demonstrate that specific sequencing errors occur at an approximately constant rate across different samples that are sequenced in parallel. We exploit this observation by developing a novel approach to filter out spurious sequences. Application of our new method demonstrates its value in the identification of true sequences amongst spurious sequences in biological data sets.
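The key observation above, that error-derived barcodes appear at a roughly constant fraction of their parent's reads in every sample sequenced in parallel, suggests filters that go beyond a fixed read threshold. A deliberately simplified sketch (not the paper's actual algorithm; the barcodes and the 1% error fraction are invented):

```python
# Simplified cross-sample filter: a spurious barcode generated by PCR or
# sequencing error from an abundant parent shows up at a roughly constant
# fraction of the dominant barcode's reads in each sample. Keep a barcode
# only if, in at least one sample, it exceeds that expected error level.
def filter_spurious(counts_per_sample, error_fraction=0.01):
    """counts_per_sample: list of dicts mapping barcode -> read count."""
    keep = set()
    for counts in counts_per_sample:
        if not counts:
            continue
        top = max(counts.values())
        for bc, n in counts.items():
            if n > error_fraction * top:
                keep.add(bc)
    return keep

samples = [
    {"AAAA": 10000, "AAAT": 80, "CCCC": 500},    # AAAT ~ error of AAAA
    {"AAAA": 20000, "AAAT": 170, "CCCC": 900},
]
print(sorted(filter_spurious(samples)))
```

Note how a naive per-sample read threshold of, say, 100 reads would have kept AAAT in the second sample; using the cross-sample error-rate structure is what lets the filter reject it consistently.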
Chen, I L; Chen, J T; Kuo, S R; Liang, M T
2001-03-01
Integral equation methods have been widely used to solve interior eigenproblems and exterior acoustic problems (radiation and scattering). It was recently found that the real-part boundary element method (BEM) for the interior problem results in spurious eigensolutions if the singular (UT) or the hypersingular (LM) equation is used alone. The real-part BEM produces spurious solutions for interior problems much as the singular integral equation (UT method) produces fictitious solutions for the exterior problem. To solve this problem, a Combined Helmholtz Exterior integral Equation Formulation (CHEEF) method is proposed. Based on the CHEEF method, the spurious solutions can be filtered out if additional constraints from the exterior points are chosen carefully. Finally, two examples for the eigensolutions of circular and rectangular cavities are considered. The optimum numbers and proper positions for selecting the points in the exterior domain are studied analytically. Also, numerical experiments were designed to verify the analytical results. It is worth pointing out that the nodal line of the radiation mode of a circle can be rotated due to symmetry, while the nodal line of the rectangular cavity is in a fixed position.
Kevin S. McKelvey; Eric C. Lofroth; Jeffrey P. Copeland; Keith B. Aubry; Audrey J. Magoun
2010-01-01
The recent paper by Brodie and Post ("Nonlinear responses of wolverine populations to declining winter snowpack", Popul Ecol 52:279-287, 2010) reports conclusions that are unsupportable, in our opinion, due to both mis-interpretations of current knowledge regarding the wolverine's (Gulo gulo) association with snow, and the uncritical use of harvest data...
Strength and Microstructure of Ceramics
1989-11-01
processing defects (pores or inclusions), etc. Theoretically, flaws have been represented as scaled-down versions of large cracks, so that the... no spurious reflections, confirming that the defects were not microtwins. From the TEM evidence, along with corresponding observations of fault... Interfaces can be viewed as high-energy planar defects; as such, they represent favored sites for microcracks...
The consentaneous model of the financial markets exhibiting spurious nature of long-range memory
NASA Astrophysics Data System (ADS)
Gontis, V.; Kononovicius, A.
2018-09-01
It is widely accepted that there is strong persistence in the volatility of financial time series. The origin of the observed persistence, or long-range memory, is still an open problem, as the observed phenomenon could be a spurious effect. Earlier we proposed the consentaneous model of the financial markets based on non-linear stochastic differential equations. The consentaneous model successfully reproduces empirical probability and power spectral densities of volatility. This approach is qualitatively different from models built using fractional Brownian motion. In this contribution we investigate burst and inter-burst duration statistics of volatility in the financial markets employing the consentaneous model. Our analysis provides evidence that the empirical statistical properties of burst and inter-burst durations can be explained by non-linear stochastic differential equations driving the volatility in the financial markets. This serves as a strong argument that long-range memory in finance may be spurious in nature.
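The burst and inter-burst duration statistics mentioned above are obtained by thresholding the volatility series and measuring the lengths of excursions above (bursts) and below (inter-bursts) the threshold. A minimal sketch on a synthetic series; the threshold and the i.i.d. test signal are illustrative, not the consentaneous model:

```python
# Burst / inter-burst duration extraction: threshold a series and record
# run lengths above (bursts) and below (inter-bursts) the threshold.
import numpy as np

def excursion_durations(x, threshold):
    above = x > threshold
    bursts, gaps = [], []
    run, state = 0, above[0]
    for flag in above:
        if flag == state:
            run += 1
        else:
            (bursts if state else gaps).append(run)  # close previous run
            run, state = 1, flag
    (bursts if state else gaps).append(run)          # close final run
    return bursts, gaps

rng = np.random.default_rng(2)
vol = np.abs(rng.standard_normal(10_000))            # stand-in volatility
bursts, gaps = excursion_durations(vol, threshold=1.0)
print(len(bursts), float(np.mean(bursts)))
```

Comparing the empirical distribution of these durations against the model-generated one is the kind of test used to argue that apparent long-range memory can arise without fractional dynamics.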
NASA Astrophysics Data System (ADS)
Gibson, Angus H.; Hogg, Andrew McC.; Kiss, Andrew E.; Shakespeare, Callum J.; Adcroft, Alistair
2017-11-01
We examine the separate contributions to spurious mixing from horizontal and vertical processes in an ALE ocean model, MOM6, using reference potential energy (RPE). The RPE is a global diagnostic which changes only due to mixing between density classes. We extend this diagnostic to a sub-timestep timescale in order to individually separate contributions to spurious mixing through horizontal (tracer advection) and vertical (regridding/remapping) processes within the model. We both evaluate the overall spurious mixing in MOM6 against previously published output from other models (MOM5, MITgcm, and MPAS-O), and investigate impacts on the components of spurious mixing in MOM6 across a suite of test cases: a lock exchange, internal wave propagation, and a baroclinically unstable eddying channel. The split RPE diagnostic demonstrates that the spurious mixing in a lock exchange test case is dominated by horizontal tracer advection, due to the spatial variability in the velocity field. In contrast, the vertical component of spurious mixing dominates in an internal waves test case. MOM6 performs well in this test case owing to its quasi-Lagrangian implementation of ALE. Finally, the effects of model resolution are examined in a baroclinic eddies test case. In particular, the vertical component of spurious mixing dominates as horizontal resolution increases, an important consideration as global models evolve towards higher horizontal resolutions.
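The RPE diagnostic used above can be sketched in one dimension: adiabatically re-sort the density field into its stably stratified reference state and compute the potential energy of that state; any increase over time can only come from mixing between density classes. A minimal sketch assuming uniform cell volumes (real models weight by cell volume and use the full 3D field):

```python
# Reference potential energy (RPE) sketch: sort densities so the densest
# water sits at the bottom, then compute potential energy of that state.
# An RPE increase over time indicates (spurious or physical) mixing.
import numpy as np

def reference_pe(rho, dz, g=9.81):
    """rho: 1D array of cell densities (kg/m^3); dz: cell thickness (m)."""
    rho_sorted = np.sort(rho)[::-1]          # densest at the bottom
    z = (np.arange(rho.size) + 0.5) * dz     # cell-centre heights
    return g * dz * np.sum(rho_sorted * z)

# Mixing a stable two-layer column raises the RPE of the reference state
rho_before = np.array([1026.0] * 50 + [1024.0] * 50)   # dense below light
rho_after = np.full(100, 1025.0)                       # fully mixed
print(reference_pe(rho_after, 1.0) - reference_pe(rho_before, 1.0))
```

The paper's contribution is splitting this budget within a timestep, attributing RPE changes separately to the horizontal advection step and the vertical regridding/remapping step.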
A multidisciplinary approach to reducing spurious hyperkalemia in hospital outpatient clinics.
Loh, Tze Ping; Sethi, Sunil K
2015-10-01
To describe a multidisciplinary effort to investigate and reduce the occurrence of outpatient spurious hyperkalemia. Spurious hyperkalemia is a falsely elevated serum potassium result that does not reflect the in vivo condition of a person. A common practice of fist clenching/pumping during phlebotomy to improve vein visualisation is an under-appreciated cause of spurious hyperkalemia. Pre- and postinterventional study. Objective evidence of spurious hyperkalemia was sought by reviewing archived laboratory results. A literature review was undertaken to summarise known causes of spurious hyperkalemia and develop a best practice in phlebotomy. Subsequently, nurses from the Urology Clinic were interviewed, observed and surveyed to understand their phlebotomy workflow and identify potential areas of improvement by comparison with the best practice in phlebotomy. Unexplained (potentially spurious) hyperkalemia was defined as a serum potassium of >5.0 mmol/l in a patient without stage 5 chronic kidney disease or a haemolysed blood sample. Nurses from the Urology Clinic showed a significant knowledge gap regarding causes of spurious hyperkalemia when compared to the literature review. Direct observation revealed that patients were routinely asked to clench their fists, which may cause spurious hyperkalemia. Following these observations, several educational initiatives were administered to address the knowledge gap and stop fist clenching. The rate of unexplained hyperkalemia at the Urology Clinic fell from a baseline of 16.0% to 3.8% at 58 weeks after the intervention. A similar educational intervention was propagated to all 18 other specialist outpatient clinic locations, whose rate of unexplained hyperkalemia decreased from 5.4% to 3.7%. To ensure sustainability of the improvements, the existing phlebotomy standard operating protocol and the educational and competency testing materials at variance with the best practice were revised.
A simple intervention of avoiding fist clenching/pumping during phlebotomy produced a significant reduction in the rate of spurious hyperkalemia. © 2015 John Wiley & Sons Ltd.
47 CFR 2.1053 - Measurements required: Field strength of spurious radiation.
Code of Federal Regulations, 2011 CFR
2011-10-01
Title 47, Telecommunication, revised as of 2011-10-01. Federal Communications Commission, General Test Procedures, Certification. § 2.1053 Measurements required: Field strength of spurious radiation.
Gene Unprediction with Spurio: A tool to identify spurious protein sequences.
Höps, Wolfram; Jeffryes, Matt; Bateman, Alex
2018-01-01
We now have access to the sequences of tens of millions of proteins. These protein sequences are essential for modern molecular biology and computational biology. The vast majority of protein sequences are derived from gene prediction tools and have no experimental supporting evidence for their translation. Despite the increasing accuracy of gene prediction tools, there likely exist a large number of spurious protein predictions in the sequence databases. We have developed the Spurio tool to help identify spurious protein predictions in prokaryotes. Spurio searches the query protein sequence against a prokaryotic nucleotide database using tblastn and identifies homologous sequences. The tblastn matches are used to score the query sequence's likelihood of being a spurious protein prediction using a Gaussian process model. The most informative feature is the appearance of stop codons within the presumed translation of homologous DNA sequences. Benchmarking shows that the Spurio tool is able to distinguish spurious from true proteins. However, transposon proteins are prone to being predicted as spurious because of the frequency of degraded homologs found in the DNA sequence databases. Our initial experiments suggest that less than 1% of the proteins in the UniProtKB sequence database are likely to be spurious and that Spurio is able to identify over 60 times more spurious proteins than the AntiFam resource. The Spurio software and source code is available under an MIT license at the following URL: https://bitbucket.org/bateman-group/spurio.
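The most informative feature named above, in-frame stop codons within the presumed translation of homologous DNA, reduces to a simple codon scan. A minimal illustration (not Spurio's implementation; the sequences are made up, and a real pipeline would scan the tblastn-aligned frame of each homolog):

```python
# Count stop codons in the first reading frame of a DNA string. A genuine
# protein-coding region should have none in frame; homologs of a spurious
# prediction tend to accumulate them.
STOP_CODONS = {"TAA", "TAG", "TGA"}

def in_frame_stops(dna):
    """Number of stop codons in reading frame 1 of an uppercase DNA string."""
    return sum(
        1
        for i in range(0, len(dna) - len(dna) % 3, 3)
        if dna[i:i + 3] in STOP_CODONS
    )

print(in_frame_stops("ATGTAAGGGTGA"))   # codons ATG TAA GGG TGA
```

Spurio feeds statistics like this, aggregated over many homologs, into its Gaussian process model rather than applying a hard cutoff.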
Spurious-Mode Control of Same-Phase Drive-Type Ultrasonic Motor
NASA Astrophysics Data System (ADS)
Aoyagi, Manabu; Watanabe, Hiroyuki; Tomikawa, Yoshiro; Takano, Takehiro
2002-05-01
A same-phase drive-type ultrasonic motor requires a single power source for its operation. In particular, self-oscillation driving is useful for driving a small ultrasonic motor. This type of ultrasonic motor has a spurious mode close to the operation frequency on its stator vibrator. The spurious vibration mode affects the oscillation frequency of a self-oscillation drive circuit. Hence the spurious vibration mode should be restrained or moved away from the neighborhood of the operation frequency. In this paper, we report that an inductor connected at an electrical control terminal provided on standby electrodes for the reverse rotation operation controls only the spurious vibration mode. The effect of an inductor connected at the control terminal was clarified by the simulation of an equivalent circuit and some experiments.
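At the equivalent-circuit level, the control-terminal inductor works by forming a resonance with the capacitance it sees, which can be placed so that only the spurious branch is affected. A back-of-envelope sketch; the component values are assumed for illustration and are not from the paper:

```python
# LC resonance sketch: choosing the inductor L sets the frequency of the
# resonance formed with the control-terminal capacitance C0, which is how
# the spurious branch can be tuned away from the operating frequency.
import math

def resonant_frequency(L, C):
    """Resonant frequency (Hz) of an ideal LC pair."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

C0 = 1e-9                    # assumed 1 nF control-terminal capacitance
for L in (1e-3, 2e-3, 5e-3):
    print(f"L = {L*1e3:.0f} mH -> f = {resonant_frequency(L, C0)/1e3:.1f} kHz")
```

Sweeping L in this way mirrors the paper's equivalent-circuit simulation, which located the inductor value that moves the spurious mode without disturbing the operating mode.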
Spurious states in boson calculations — spectre or reality?
NASA Astrophysics Data System (ADS)
Navrátil, P.; Geyer, H. B.; Dobeš, J.; Dobaczewski, J.
1994-03-01
We discuss some prevailing misconceptions about the possibility that spurious states may in general contaminate boson calculations of fermion systems on either the phenomenological or microscopic level. Amongst other things we point out that the possible appearance of spurious states is not inherently a mapping problem, but rather linked to a choice of basis in the boson Fock space. This choice is mostly dictated by convenience or the aim to make direct contact with phenomenology. Furthermore, neither well established collectivity, nor the construction of boson operators in the Marumori or OAI fashion can as such serve as a guarantee against the appearance of spurious boson states. Within an SO(12) generalisation of the Ginocchio model where collective decoupling is complete, we illustrate how spurious states may appear in an IBM-type sdg-boson analysis. We also show how these states may be identified on the boson level. This enables us to present an example of an sdg-spectrum which, although it may be reasonably correlated with experimental data, nevertheless has the first few low lying states all spurious when interpreted from the microscopic point of view. We briefly speculate about the possibility that certain types of truncation may in fact automatically circumvent the appearance of spurious states.
Second ROSAT all-sky survey (2RXS) source catalogue
NASA Astrophysics Data System (ADS)
Boller, Th.; Freyberg, M. J.; Trümper, J.; Haberl, F.; Voges, W.; Nandra, K.
2016-04-01
Aims: We present the second ROSAT all-sky survey source catalogue, hereafter referred to as the 2RXS catalogue. This is the second publicly released ROSAT catalogue of point-like sources obtained from the ROSAT all-sky survey (RASS) observations performed with the position-sensitive proportional counter (PSPC) between June 1990 and August 1991, and is an extended and revised version of the bright and faint source catalogues. Methods: We used the latest version of the RASS processing to produce overlapping X-ray images of 6.4° × 6.4° sky regions. To create a source catalogue, a likelihood-based detection algorithm was applied to these, which accounts for the variable point-spread function (PSF) across the PSPC field of view. Improvements in the background determination compared to 1RXS were also implemented. X-ray control images showing the source and background extraction regions were generated, which were visually inspected. Simulations were performed to assess the spurious source content of the 2RXS catalogue. X-ray spectra and light curves were extracted for the 2RXS sources, with spectral and variability parameters derived from these products. Results: We obtained about 135 000 X-ray detections in the 0.1-2.4 keV energy band down to a likelihood threshold of 6.5, as adopted in the 1RXS faint source catalogue. Our simulations show that the expected spurious content of the catalogue is a strong function of detection likelihood, and the full catalogue is expected to contain about 30% spurious detections. A more conservative likelihood threshold of 9, on the other hand, yields about 71 000 detections with a 5% spurious fraction. We recommend thresholds appropriate to the scientific application. X-ray images and overlaid X-ray contour lines provide an additional user product to evaluate the detections visually, and we performed our own visual inspections to flag uncertain detections. 
Intra-day variability in the X-ray light curves was quantified based on the normalised excess variance and a maximum amplitude variability analysis. X-ray spectral fits were performed using three basic models, a power law, a thermal plasma emission model, and black-body emission. Thirty-two large extended regions with diffuse emission and embedded point sources were identified and excluded from the present analysis. Conclusions: The 2RXS catalogue provides the deepest and cleanest X-ray all-sky survey catalogue in advance of eROSITA. The catalogue is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/588/A103
Correlating quantum decoherence and material defects in a Josephson qubit
NASA Astrophysics Data System (ADS)
Hite, D. A.; McDermott, R.; Simmonds, R. W.; Cooper, K. B.; Steffen, M.; Nam, S.; Pappas, D. P.; Martinis, J. M.
2004-03-01
Superconducting tunnel junction devices are promising candidates for constructing quantum bits (qubits) for quantum computation because of their inherently low dissipation and ease of scalability by microfabrication. Recently, the Josephson phase qubit has been characterized spectroscopically as having spurious microwave resonators that couple to the qubit and act as a dominant source of decoherence. While the origin of these spurious resonances remains unknown, experimental evidence points to the material system of the tunnel barrier. Here, we focus on our materials research aimed at elucidating and eliminating these spurious resonators. In particular, we have studied the use of high quality Al films epitaxially grown on Si(111) as the base electrode of the tunnel junction. During each step in the Al/AlOx/Al trilayer growth, we have investigated the structure in situ by AES, AED and LEED. While tunnel junctions fabricated with these epitaxial base electrodes proved to have non-uniform and overly thin oxide barriers, their I-V characteristics showed a lowering of subgap currents by a factor of two. Transport measurements will be correlated with morphological structure for a number of devices fabricated with various degrees of crystalline quality.
NASA Astrophysics Data System (ADS)
Kawai, Soshi; Terashima, Hiroshi; Negishi, Hideyo
2015-11-01
This paper addresses issues in high-fidelity numerical simulations of transcritical turbulent flows at supercritical pressure. The proposed strategy builds on a tabulated look-up table method based on REFPROP database for an accurate estimation of non-linear behaviors of thermodynamic and fluid transport properties at the transcritical conditions. Based on the look-up table method we propose a numerical method that satisfies high-order spatial accuracy, spurious-oscillation-free property, and capability of capturing the abrupt variation in thermodynamic properties across the transcritical contact surface. The method introduces artificial mass diffusivity to the continuity and momentum equations in a physically-consistent manner in order to capture the steep transcritical thermodynamic variations robustly while maintaining spurious-oscillation-free property in the velocity field. The pressure evolution equation is derived from the full compressible Navier-Stokes equations and solved instead of solving the total energy equation to achieve the spurious pressure oscillation free property with an arbitrary equation of state including the present look-up table method. Flow problems with and without physical diffusion are employed for the numerical tests to validate the robustness, accuracy, and consistency of the proposed approach.
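The tabulated look-up approach described above amounts to precomputing properties on a grid and interpolating at run time. A minimal sketch of a bilinear look-up on a (pressure, temperature) grid; the grid and density values are synthetic placeholders, not REFPROP output, and a production table would be far denser near the critical point:

```python
# Bilinear property look-up in the spirit of a REFPROP-based table:
# precompute a property (here, density) on a (p, T) grid, then
# interpolate at query points.
import numpy as np

def bilinear_lookup(p_grid, t_grid, table, p, t):
    i = np.searchsorted(p_grid, p) - 1           # bracketing cell indices
    j = np.searchsorted(t_grid, t) - 1
    i = np.clip(i, 0, len(p_grid) - 2)
    j = np.clip(j, 0, len(t_grid) - 2)
    wp = (p - p_grid[i]) / (p_grid[i + 1] - p_grid[i])
    wt = (t - t_grid[j]) / (t_grid[j + 1] - t_grid[j])
    return ((1 - wp) * (1 - wt) * table[i, j]
            + wp * (1 - wt) * table[i + 1, j]
            + (1 - wp) * wt * table[i, j + 1]
            + wp * wt * table[i + 1, j + 1])

p_grid = np.array([5e6, 6e6, 7e6])               # Pa
t_grid = np.array([280.0, 300.0, 320.0])         # K
rho = np.array([[850.0, 800.0, 740.0],           # made-up densities, kg/m^3
                [870.0, 820.0, 760.0],
                [890.0, 840.0, 780.0]])
print(bilinear_lookup(p_grid, t_grid, rho, 5.5e6, 290.0))
```

The appeal for transcritical simulation is that the table replaces repeated, expensive equation-of-state evaluations while still resolving the steep property variations, provided the grid is fine enough where gradients are sharp.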
Method and apparatus for powering an electrodeless lamp with reduced radio frequency interference
Simpson, James E.
1999-01-01
An electrodeless lamp waveguide structure includes tuned absorbers for spurious RF signals. A lamp waveguide with an integral frequency selective attenuation includes resonant absorbers positioned within the waveguide to absorb spurious out-of-band RF energy. The absorbers have a negligible effect on energy at the selected frequency used to excite plasma in the lamp. In a first embodiment, one or more thin slabs of lossy magnetic material are affixed to the sidewalls of the waveguide at approximately one quarter wavelength of the spurious signal from an end wall of the waveguide. The positioning of the lossy material optimizes absorption of power from the spurious signal. In a second embodiment, one or more thin slabs of lossy magnetic material are used in conjunction with band rejection waveguide filter elements. In a third embodiment, one or more microstrip filter elements are tuned to the frequency of the spurious signal and positioned within the waveguide to couple and absorb the spurious signal's energy. All three embodiments absorb negligible energy at the selected frequency and so do not significantly diminish the energy efficiency of the lamp.
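The quarter-wavelength placement in the first embodiment is straightforward to compute. The sketch below uses invented numbers (a WR-340-like guide width and a 5.8 GHz spurious signal are assumptions, not values from the patent) and the standard TE10 guide-wavelength formula.

```python
import math

C = 299_792_458.0  # speed of light [m/s]

def guide_wavelength(f_hz, a_m):
    """Guide wavelength of the TE10 mode in a rectangular waveguide of
    broad-wall width a_m; returns None if f_hz is below cutoff."""
    lam0 = C / f_hz
    fc = C / (2.0 * a_m)            # TE10 cutoff frequency
    if f_hz <= fc:
        return None                  # the spurious signal does not propagate
    return lam0 / math.sqrt(1.0 - (fc / f_hz) ** 2)

# Hypothetical numbers: WR-340-like guide (a = 86.36 mm), 5.8 GHz spurious
# signal. The patent places the lossy slab approximately a quarter wavelength
# of the spurious signal from an end wall; inside a waveguide the relevant
# length is the guide wavelength rather than the free-space wavelength.
lam_g = guide_wavelength(5.8e9, 0.08636)
print("quarter-wave placement: %.1f mm" % (1e3 * lam_g / 4))
```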
Oscillations in Spurious States of the Associative Memory Model with Synaptic Depression
NASA Astrophysics Data System (ADS)
Murata, Shin; Otsubo, Yosuke; Nagata, Kenji; Okada, Masato
2014-12-01
The associative memory model is a typical neural network model that can store discretely distributed fixed-point attractors as memory patterns. When the network stores the memory patterns extensively, however, the model has other attractors besides the memory patterns. These attractors are called spurious memories. Both spurious states and memory states are in equilibrium, so there is little difference between their dynamics. Recent physiological experiments have shown that the short-term dynamic synapse called synaptic depression decreases its efficacy of transmission to postsynaptic neurons according to the activities of presynaptic neurons. Previous studies revealed that synaptic depression destabilizes the memory states when the number of memory patterns is finite. However, it is very difficult to study the dynamical properties of the spurious states if the number of memory patterns is proportional to the number of neurons. We investigate the effect of synaptic depression on spurious states by Monte Carlo simulation. The results demonstrate that synaptic depression does not affect the memory states but mainly destabilizes the spurious states and induces periodic oscillations.
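The spurious mixture states discussed above can be exhibited in a tiny deterministic example. The sketch below assumes the standard model without synaptic depression (Hebbian weights, synchronous sign updates); it stores three mutually orthogonal 8-neuron patterns and shows that their 3-pattern mixture is itself a fixed point, i.e. a spurious memory.

```python
import numpy as np

# Minimal sketch (assumptions: Hebbian learning, synchronous sign updates,
# no synaptic depression): three orthogonal patterns taken from Hadamard
# rows; the 3-pattern "mixture" state is a stable spurious attractor.
H = np.array([[1, 1], [1, -1]])
H8 = np.kron(np.kron(H, H), H)            # 8x8 Hadamard matrix
patterns = H8[[1, 2, 4]]                  # three mutually orthogonal patterns
N = 8
W = (patterns.T @ patterns).astype(float) / N
np.fill_diagonal(W, 0.0)                  # no self-coupling

def update(s):
    return np.sign(W @ s).astype(int)

mixture = np.sign(patterns.sum(axis=0)).astype(int)  # spurious mixture state
assert all((update(p) == p).all() for p in patterns)  # memories are fixed points
print("mixture is a fixed point:", (update(mixture) == mixture).all())
```

The mixture differs from every stored pattern yet satisfies the fixed-point condition, which is exactly why spurious states are hard to distinguish dynamically from memory states in the model without depression.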
Pewarchuk, W; VanderBoom, J; Blajchman, M A
1992-01-01
A patient blood sample with an unexpectedly high hemoglobin level, high hematocrit, low white blood cell count, and low platelet count was recognized as being spurious based on previously available data. Repeated testing of the original sample showed a gradual return of all parameters to expected levels. We provide evidence that the overfilling of blood collection vacuum tubes can lead to inadequate sample mixing and that, in combination with the settling of the cellular contents in the collection tubes, can result in spuriously abnormal hematological parameters as estimated by an automated method.
Acousto-optics bandwidth broadening in a Bragg cell based on arbitrary synthesized signal methods.
Peled, Itay; Kaminsky, Ron; Kotler, Zvi
2015-06-01
In this work, we present the advantages of driving a multichannel acousto-optical deflector (AOD) with a digitally synthesized multifrequency RF signal. We demonstrate a significant bandwidth broadening of ∼40% by providing well-tuned phase control of the array transducers. Moreover, using a multifrequency, complex signal, we manage to suppress the harmonic deflections and return most of the spurious energy to the main beam. This method allows us to operate the AOD with more than an octave of bandwidth with negligible spurious energy going to the harmonic beams and a total bandwidth broadening of over 70%.
Spurious Solutions Of Nonlinear Differential Equations
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sweby, P. K.; Griffiths, D. F.
1992-01-01
Report utilizes a nonlinear-dynamics approach to investigate possible sources of errors, slow convergence, and non-convergence of steady-state numerical solutions when a time-dependent approach is used for problems containing nonlinear source terms. Emphasizes implications for the development of algorithms in CFD and computational sciences in general. The main fundamental conclusion of the study is that the qualitative features of nonlinear differential equations are not, in general, adequately represented by their finite-difference discretizations, and vice versa.
The spurious response of microwave photonic mixer
NASA Astrophysics Data System (ADS)
Xiao, Yongchuan; Zhong, Guoshun; Qu, Pengfei; Sun, Lijun
2018-02-01
The microwave photonic mixer is a potential solution for wideband information systems owing to its ultra-wide operating bandwidth, high LO-to-RF isolation, intrinsic immunity to electromagnetic interference, and compatibility with existing microwave photonic transmission systems. The spurious response of a microwave photonic mixer that cascades a pair of Mach-Zehnder interferometric intensity modulators in series is simulated and analyzed in this paper. The low-order spurious products caused by the nonlinearity of the modulators are non-negligible, and proper IF frequency selection and accurate bias control are of great importance in mitigating the impact of spurious products.
The behavior of quantization spectra as a function of signal-to-noise ratio
NASA Technical Reports Server (NTRS)
Flanagan, M. J.
1991-01-01
An expression for the spectrum of quantization error in a discrete-time system whose input is a sinusoid plus white Gaussian noise is derived. This quantization spectrum consists of two components: a white-noise floor and spurious harmonics. The dithering effect of the input Gaussian noise in both components of the spectrum is considered. Quantitative results in a discrete Fourier transform (DFT) example show the behavior of spurious harmonics as a function of the signal-to-noise ratio (SNR). These results have strong implications for digital reception and signal analysis systems. At low SNRs, spurious harmonics decay exponentially on a log-log scale, and the resulting spectrum is white. As the SNR increases, the spurious harmonics figure prominently in the output spectrum. A useful expression is given that roughly bounds the magnitude of a spurious harmonic as a function of the SNR.
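The dithering effect described above is easy to reproduce numerically. The sketch below uses invented parameters (DFT size, bin, quantizer step): a sinusoid placed exactly on a DFT bin is quantized with and without added Gaussian noise; without noise the quantization error is periodic, so its energy sits entirely in spurious harmonic bins, while with noise the same error power spreads into a white floor.

```python
import numpy as np

# Illustrative sketch (synthetic parameters, not from the paper): quantization
# error spectrum of a sinusoid, with and without additive Gaussian noise.
rng = np.random.default_rng(0)
N, k0, step = 4096, 64, 0.25             # DFT size, signal bin, quantizer step
t = np.arange(N)
x = np.sin(2 * np.pi * k0 * t / N)       # sinusoid exactly on a DFT bin

def quant_error_spectrum(noise_sigma):
    y = x + noise_sigma * rng.standard_normal(N)
    e = step * np.round(y / step) - y     # quantization error
    return np.abs(np.fft.rfft(e))

def harmonic_fraction(spec):
    """Fraction of error energy falling in harmonic bins (multiples of k0)."""
    p = spec ** 2
    return p[::k0].sum() / p.sum()

no_dither = quant_error_spectrum(0.0)
dithered = quant_error_spectrum(step)     # low SNR: noise dithers the quantizer
print(harmonic_fraction(no_dither), harmonic_fraction(dithered))
```

At high SNR nearly all the error energy is in the spurious harmonics; at low SNR almost none of it is, which is the qualitative behavior the paper quantifies.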
Application of nomographs for analysis and prediction of receiver spurious response EMI
NASA Astrophysics Data System (ADS)
Heather, F. W.
1985-07-01
Spurious response EMI for the front end of a superheterodyne receiver follows a simple mathematical formula; however, applying the formula to predict test frequencies produces more data than can be evaluated. An analysis technique has been developed to graphically depict all receiver spurious responses using a nomograph and to permit selection of optimum test frequencies. The discussion includes the math model used to simulate a superheterodyne receiver, the implementation of the model in the computer program, the approach to test frequency selection, interpretation of the nomographs, analysis and prediction of receiver spurious response EMI from the nomographs, and application of the nomographs. In addition, figures are provided of sample applications. This EMI analysis and prediction technique greatly improves the Electromagnetic Compatibility (EMC) test engineer's ability to visualize the scope of receiver spurious response EMI testing and optimize test frequency selection.
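The "simple mathematical formula" referred to is the standard superheterodyne spurious-response relation |m·f_RF − n·f_LO| = f_IF, so f_RF = (n·f_LO ± f_IF)/m for RF and LO harmonic orders m and n. The sketch below enumerates these frequencies; the LO/IF values are illustrative, not from the paper.

```python
# Hedged sketch of the standard superheterodyne spurious-response relation
#   |m*f_RF - n*f_LO| = f_IF   =>   f_RF = (n*f_LO +/- f_IF) / m
# where m, n are the RF and LO harmonic orders. Values are illustrative.

def spurious_responses(f_lo, f_if, max_order=3):
    """Enumerate RF frequencies that can reach the IF via (m, n) mixing."""
    hits = []
    for m in range(1, max_order + 1):       # RF harmonic order
        for n in range(1, max_order + 1):   # LO harmonic order
            for sign in (+1, -1):
                f_rf = (n * f_lo + sign * f_if) / m
                if f_rf > 0:
                    hits.append((m, n, f_rf))
    return sorted(hits, key=lambda h: h[2])

# Example: 1 GHz LO, 70 MHz IF. The m=n=1 entries are the desired response
# and its image; everything else is a potential spurious response to test.
for m, n, f in spurious_responses(1.0e9, 70.0e6, max_order=2):
    print(f"m={m} n={n}  f_RF = {f/1e6:.1f} MHz")
```

Even at order 2 the list grows quickly, which is the data-volume problem the nomograph technique is designed to manage.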
Cancellation of spurious arrivals in Green's function extraction and the generalized optical theorem
Snieder, R.; Van Wijk, K.; Haney, M.; Calvert, R.
2008-01-01
The extraction of the Green's function by cross correlation of waves recorded at two receivers nowadays finds much application. We show that for an arbitrary small scatterer, the cross terms of scattered waves give an unphysical wave with an arrival time that is independent of the source position. This constitutes an apparent inconsistency because theory predicts that such spurious arrivals do not arise after integration over a complete source aperture. This puzzling inconsistency can be resolved for an arbitrary scatterer by integrating the contribution of all sources in the stationary phase approximation to show that the stationary phase contributions to the source integral cancel the spurious arrival by virtue of the generalized optical theorem. This work constitutes an alternative derivation of this theorem. When the source aperture is incomplete, the spurious arrival is not canceled and could be misinterpreted to be part of the Green's function. We give an example of how spurious arrivals provide information about the medium complementary to that given by the direct and scattered waves; the spurious waves can thus potentially be used to better constrain the medium. © 2008 The American Physical Society.
Modifications to WRF's dynamical core to improve the treatment of moisture for large-eddy simulations
Xiao, Heng; Endo, Satoshi; Wong, May; ...
2015-10-29
Yamaguchi and Feingold (2012) note that the cloud fields in their large-eddy simulations (LESs) of marine stratocumulus using the Weather Research and Forecasting (WRF) model exhibit a strong sensitivity to time stepping choices. In this study, we reproduce and analyze this sensitivity issue using two stratocumulus cases, one marine and one continental. Results show that (1) the sensitivity is associated with spurious motions near the moisture jump between the boundary layer and the free atmosphere, and (2) these spurious motions appear to arise from neglecting small variations in water vapor mixing ratio (qv) in the pressure gradient calculation in the acoustic sub-stepping portion of the integration procedure. We show that this issue is remedied in the WRF dynamical core by replacing the prognostic equation for the potential temperature θ with one for the moist potential temperature θm=θ(1+1.61qv), which allows consistent treatment of moisture in the calculation of pressure during the acoustic sub-steps. With this modification, the spurious motions and the sensitivity to the time stepping settings (i.e., the dynamic time step length and number of acoustic sub-steps) are eliminated in both of the example stratocumulus cases. In conclusion, this modification improves the applicability of WRF for LES applications, and possibly other models using similar dynamical core formulations, and also permits the use of longer time steps than in the original code.
Digitally Enhanced Heterodyne Interferometry
NASA Technical Reports Server (NTRS)
Shaddock, Daniel; Ware, Brent; Lay, Oliver; Dubovitsky, Serge
2010-01-01
Spurious interference limits the performance of many interferometric measurements. Digitally enhanced interferometry (DEI) improves measurement sensitivity by augmenting conventional heterodyne interferometry with pseudo-random noise (PRN) code phase modulation. DEI effectively changes the measurement problem from one of hardware (optics, electronics), which may deteriorate over time, to one of software (modulation, digital signal processing), which does not. DEI isolates interferometric signals based on their delay. Interferometric signals are effectively time-tagged by phase-modulating the laser source with a PRN code. DEI improves measurement sensitivity by exploiting the autocorrelation properties of the PRN to isolate only the signal of interest and reject spurious interference. The properties of the PRN code determine the degree of isolation.
Fleyer, Michael; Sherman, Alexander; Horowitz, Moshe; Namer, Moshe
2016-05-01
We experimentally demonstrate a wideband-frequency tunable optoelectronic oscillator (OEO) based on injection locking of the OEO to a tunable electronic oscillator. The OEO cavity does not contain a narrowband filter and its frequency can be tuned over a broad bandwidth of 1 GHz. The injection locking is based on minimizing the injected power by adjusting the frequency of one of the OEO cavity modes to be approximately equal to the frequency of the injected signal. The phase noise that is obtained in the injection-locked OEO is similar to that obtained in a long-cavity self-sustained OEO. Although the cavity length of the OEO was long, the spurious modes were suppressed due to the injection locking without the need to use a narrowband filter. The spurious level was significantly below that obtained in a self-sustained OEO after inserting a narrowband electronic filter with a Q-factor of 720 into the cavity.
NASA Astrophysics Data System (ADS)
Awai, Ikuo
A new comprehensive method to suppress the spurious modes in a BPF is proposed, taking the multi-strip resonator BPF as an example. It consists of simultaneously disturbing the resonant frequency, coupling coefficient, and external Q of the higher-order modes. The designed example shows an extraordinarily good out-of-band response in computer simulation.
Rosenberg, Noah A; Nordborg, Magnus
2006-07-01
In linkage disequilibrium mapping of genetic variants causally associated with phenotypes, spurious associations can potentially be generated by any of a variety of types of population structure. However, mathematical theory of the production of spurious associations has largely been restricted to population structure models that involve the sampling of individuals from a collection of discrete subpopulations. Here, we introduce a general model of spurious association in structured populations, appropriate whether the population structure involves discrete groups, admixture among such groups, or continuous variation across space. Under the assumptions of the model, we find that a single common principle--applicable to both the discrete and admixed settings as well as to spatial populations--gives a necessary and sufficient condition for the occurrence of spurious associations. Using a mathematical connection between the discrete and admixed cases, we show that in admixed populations, spurious associations are less severe than in corresponding mixtures of discrete subpopulations, especially when the variance of admixture across individuals is small. This observation, together with the results of simulations that examine the relative influences of various model parameters, has important implications for the design and analysis of genetic association studies in structured populations.
Spurious RF signals emitted by mini-UAVs
NASA Astrophysics Data System (ADS)
Schleijpen, Ric (H. M. A.); Voogt, Vincent; Zwamborn, Peter; van den Oever, Jaap
2016-10-01
This paper presents experimental work on the detection of spurious RF emissions of mini Unmanned Aerial Vehicles (mini-UAVs). Many recent events have shown that mini-UAVs can be considered a potential threat to civil security. For this reason, the detection of mini-UAVs has become of interest to the sensor community. The detection, classification and identification chain can take advantage of different sensor technologies. Apart from the signatures used by radar and electro-optical sensor systems, the UAV also emits RF signals. These RF signatures can be split into intentional signals for communication with the operator and unintentional RF signals emitted by the UAV. These unintentional or spurious RF emissions are very weak but could be used to discriminate potential UAV detections from false alarms. The goal of this research was to assess the potential of exploiting spurious emissions in the classification and identification chain of mini-UAVs. It was already known that spurious signals are very weak, but the focus was on the question of whether the emission pattern could be correlated with the behaviour of the UAV. In this paper, experimental examples of spurious RF emissions for different types of mini-UAVs, and their correlation with the electronic circuits in the UAVs, are shown.
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sweby, P. K.; Griffiths, D. F.
1990-01-01
Spurious stable as well as unstable steady state numerical solutions, spurious asymptotic numerical solutions of higher period, and even stable chaotic behavior can occur when finite difference methods are used to solve nonlinear differential equations (DE) numerically. The occurrence of spurious asymptotes is independent of whether the DE possesses a unique steady state or has additional periodic solutions and/or exhibits chaotic phenomena. The form of the nonlinear DEs and the type of numerical schemes are the determining factor. In addition, the occurrence of spurious steady states is not restricted to the time steps that are beyond the linearized stability limit of the scheme. In many instances, it can occur below the linearized stability limit. Therefore, it is essential for practitioners in computational sciences to be knowledgeable about the dynamical behavior of finite difference methods for nonlinear scalar DEs before the actual application of these methods to practical computations. It is also important to change the traditional way of thinking and practices when dealing with genuinely nonlinear problems. In the past, spurious asymptotes were observed in numerical computations but tended to be ignored because they all were assumed to lie beyond the linearized stability limits of the time step parameter delta t. As can be seen from the study, bifurcations to and from spurious asymptotic solutions and transitions to computational instability not only are highly scheme dependent and problem dependent, but also initial data and boundary condition dependent, and not limited to time steps that are beyond the linearized stability limit.
Mooss, N S
1987-01-01
In the basic Ayurveda texts Susruta, Caraka and Vagbhata, some quite specific salts (Lavanam) are described and their properties and actions enumerated. By comparing those accounts with present-day methods of preparation, conclusions have been drawn and evidently spurious methods are pointed out. The reported properties of Saindhava, Samudra, Vida, Sauvarcha, Romaka, Audbhida, Gutika, the Katu group, Krsna and Pamsuja Lavanas are discussed here in terms of their chemical constituents, and the authors thus establish their interconnections. PMID:22557573
Optoelectronic Terminal-Attractor-Based Associative Memory
NASA Technical Reports Server (NTRS)
Liu, Hua-Kuang; Barhen, Jacob; Farhat, Nabil H.
1994-01-01
Report presents theoretical and experimental study of optically and electronically addressable optical implementation of artificial neural network that performs associative recall. Shows by computer simulation that terminal-attractor-based associative memory can have perfect convergence in associative retrieval and increased storage capacity. Spurious states reduced by exploiting terminal attractors.
Analysis of spurious oscillation modes for the shallow water and Navier-Stokes equations
Walters, R.A.; Carey, G.F.
1983-01-01
The origin and nature of spurious oscillation modes that appear in mixed finite element methods are examined. In particular, the shallow water equations are considered and a modal analysis for the one-dimensional problem is developed. From the resulting dispersion relations we find that the spurious modes in elevation are associated with zero frequency and large wave number (wavelengths of the order of the nodal spacing) and consequently are zero-velocity modes. The spurious modal behavior is the result of the finite spatial discretization. By means of an artificial compressibility and limiting argument we are able to resolve the similar problem for the Navier-Stokes equations. The relationship of this simpler analysis to alternative consistency arguments is explained. This modal approach provides an explanation of the phenomenon in question and permits us to deduce the cause of the very complex behavior of spurious modes observed in numerical experiments with the shallow water equations and Navier-Stokes equations. Furthermore, this analysis is not limited to finite element formulations, but is also applicable to finite difference formulations. © 1983.
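A simpler finite-difference analogue shows the same kind of zero-frequency, grid-scale mode (this is an illustrative analogue, not the paper's finite-element calculation): for the linearized 1-D shallow water equations with centered differences on an unstaggered grid, the discrete dispersion relation is ω(k) = √(gH)·sin(kΔx)/Δx, so the shortest resolvable wave (wavelength 2Δx, kΔx = π) has ω = 0, a stationary spurious elevation mode.

```python
import numpy as np

# Discrete dispersion relation for linearized 1-D shallow water with centered
# differences on an unstaggered grid (parameters are illustrative):
#   omega(k) = sqrt(g*H) * sin(k*dx) / dx
# Long waves propagate at the correct speed sqrt(g*H); the 2-dx wave
# (k*dx = pi) has omega = 0, i.e. a spurious stationary grid-scale mode.
g, H, dx = 9.81, 10.0, 100.0
k = np.linspace(1e-6, np.pi / dx, 200)
omega = np.sqrt(g * H) * np.sin(k * dx) / dx
print("omega at k*dx = pi:", omega[-1])
```

Staggered (e.g. C-grid) arrangements or mixed interpolation spaces are the usual remedies, since they remove the null mode from the discrete operator.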
NASA Technical Reports Server (NTRS)
Susskind, Joel; Lee, Jae; Iredell, Lena
2016-01-01
ARCs of AIRS and MERRA-2 500 mb specific humidity agree very well in terms of spatial patterns, but MERRA-2 ARCs are larger in magnitude and show a spurious moistening globally and over Central Africa. AIRS and MERRA-2 fractional cloud cover ARCs agree less well with each other. MERRA-2 shows a spurious global mean increase in cloud cover that is not found in AIRS, including a large spurious cloud increase in Central Africa. AIRS and MERRA-2 ARCs of surface skin and surface air temperatures are all similar to each other in patterns. AIRS shows a small global warming over the 13 year period, while MERRA-2 shows a small global cooling. This difference results primarily from spurious MERRA-2 temperature trends at high latitudes and over Central Africa. These differences all contribute to the spurious negative global MERRA-2 OLR trend. AIRS Version-6 confirms that 2015 is the warmest year on record and that the Earth's surface is continuing to warm.
Power, Jonathan D; Barnes, Kelly A; Snyder, Abraham Z; Schlaggar, Bradley L; Petersen, Steven E
2011-01-01
Here, we demonstrate that subject motion produces substantial changes in the timecourses of resting state functional connectivity MRI (rs-fcMRI) data despite compensatory spatial registration and regression of motion estimates from the data. These changes cause systematic but spurious correlation structures throughout the brain. Specifically, many long-distance correlations are decreased by subject motion, whereas many short-distance correlations are increased. These changes in rs-fcMRI correlations do not arise from, nor are they adequately countered by, some common functional connectivity processing steps. Two indices of data quality are proposed, and a simple method to reduce motion-related effects in rs-fcMRI analyses is demonstrated that should be flexibly implementable across a variety of software platforms. We demonstrate how application of this technique impacts our own data, modifying previous conclusions about brain development. These results suggest the need for greater care in dealing with subject motion, and the need to critically revisit previous rs-fcMRI work that may not have adequately controlled for effects of transient subject movements. PMID:22019881
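One of the proposed data-quality indices, framewise displacement (FD), can be sketched directly: it sums the absolute backward differences of the six realignment parameters, converting rotations to millimeters of arc on a 50 mm sphere. The motion trace below is invented for illustration.

```python
import numpy as np

# Sketch of the framewise-displacement (FD) index in the spirit of this paper.
HEAD_RADIUS_MM = 50.0   # rotations are converted to arc length on this sphere

def framewise_displacement(params):
    """params: (T, 6) array of [dx, dy, dz, rot_x, rot_y, rot_z] per frame,
    translations in mm, rotations in radians. Returns FD per frame."""
    d = np.abs(np.diff(params, axis=0))
    d[:, 3:] *= HEAD_RADIUS_MM              # radians -> mm of arc
    return np.concatenate([[0.0], d.sum(axis=1)])

motion = np.array([[0.0, 0, 0, 0.00, 0, 0],
                   [0.1, 0, 0, 0.00, 0, 0],    # 0.1 mm translation
                   [0.1, 0, 0, 0.01, 0, 0]])   # 0.01 rad pitch
fd = framewise_displacement(motion)
print(fd)
```

In a scrubbing analysis, frames whose FD exceeds a chosen threshold (together with their temporal neighbors) are censored before computing correlations.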
Monochromatic radio frequency accelerating cavity
Giordano, S.
1984-02-09
A radio frequency resonant cavity having a fundamental resonant frequency and characterized by being free of spurious modes. A plurality of spaced electrically conductive bars are arranged in a generally cylindrical array within the cavity to define a chamber between the bars and an outer solid cylindrically shaped wall of the cavity. A first and second plurality of mode perturbing rods are mounted in two groups at determined random locations to extend radially and axially into the cavity thereby to perturb spurious modes and cause their fields to extend through passageways between the bars and into the chamber. At least one body of lossy material is disposed within the chamber to damp all spurious modes that do extend into the chamber thereby enabling the cavity to operate free of undesired spurious modes.
The critical angle in seismic interferometry
Van Wijk, K.; Calvert, A.; Haney, M.; Mikesell, D.; Snieder, R.
2008-01-01
Limitations with respect to the characteristics and distribution of sources are inherent to any field seismic experiment, but in seismic interferometry these lead to spurious waves. Instead of trying to eliminate, filter, or otherwise suppress spurious waves, crosscorrelations between receivers in a refraction experiment indicate that we can take advantage of spurious events for near-surface parameter extraction, for static corrections or near-surface imaging. We illustrate this with numerical examples and a field experiment from the CSM/Boise State University Geophysics Field Camp.
Spurious Excitations in Semiclassical Scattering Theory.
ERIC Educational Resources Information Center
Gross, D. H. E.; And Others
1980-01-01
Shows how through proper handling of the nonuniform motion of semiclassical coordinates spurious excitation terms are eliminated. An application to the problem of nuclear Coulomb excitation is presented as an example. (HM)
Upwind schemes and bifurcating solutions in real gas computations
NASA Technical Reports Server (NTRS)
Suresh, Ambady; Liou, Meng-Sing
1992-01-01
The area of high speed flow is seeing a renewed interest due to advanced propulsion concepts such as the National Aerospace Plane (NASP), Space Shuttle, and future civil transport concepts. Upwind schemes for solving such flows have become increasingly popular in the last decade due to their excellent shock capturing properties. In the first part of this paper, the authors present the extension of the Osher scheme to equilibrium and non-equilibrium gases. For simplicity, the source terms are treated explicitly. Computations based on the above scheme are presented to demonstrate the feasibility, accuracy and efficiency of the proposed scheme. One of the test problems is a Chapman-Jouguet detonation problem for which numerical solutions have been known to bifurcate into spurious weak detonation solutions on coarse grids. Results indicate that the numerical solution obtained depends on both the upwind scheme used and the limiter employed to obtain second order accuracy. For example, the Osher scheme gives the correct CJ solution when the superbee limiter is used, but gives the spurious solution when the Van Leer limiter is used. With the Roe scheme the spurious solution is obtained for all limiters.
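The limiter dependence reported above can be made concrete. Below are the textbook definitions of the two limiters named in the abstract (a sketch of the limiter functions only, not the paper's full upwind scheme), as functions of the slope ratio r.

```python
# Textbook flux-limiter definitions (a sketch, not the paper's full scheme).
def superbee(r):
    """Roe's superbee limiter."""
    return max(0.0, min(2.0 * r, 1.0), min(r, 2.0))

def van_leer(r):
    """Van Leer's smooth limiter."""
    return (r + abs(r)) / (1.0 + abs(r))

# Both limiters satisfy limiter(1) = 1 (second order on smooth data), but
# superbee takes larger values, making it the more compressive of the two.
for r in (0.25, 0.5, 1.0, 2.0, 4.0):
    print(r, superbee(r), van_leer(r))
```

Superbee's compressive behavior sharpens discontinuities, which is plausibly why it latches onto the correct CJ detonation while the more diffusive Van Leer limiter admits the spurious weak solution on coarse grids; the paper's results are the evidence for that sensitivity.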
NASA Astrophysics Data System (ADS)
Garcia-Allende, Pilar Beatriz; Conde, Olga M.; Madruga, Francisco J.; Cubillas, Ana M.; Lopez-Higuera, Jose M.
2008-03-01
A non-intrusive infrared sensor for the detection of spurious elements in an industrial raw material chain has been developed. The system extends to the whole near-infrared range of the spectrum a previously designed system based on the Vis-NIR range (400-1000 nm). It incorporates a hyperspectral imaging spectrograph able to simultaneously register the NIR reflected spectrum of the material under study at all points of an image line. The working material has been different tobacco leaf blends mixed with typical spurious elements of this field, such as plastics, cardboard, etc. Spurious elements are discriminated automatically by an artificial neural network able to perform the classification with a high degree of accuracy. Owing to the large amount of information involved in the process, Principal Component Analysis is first applied to remove data redundancy. By means of the extension to the whole NIR range of the spectrum, from 1000 to 2400 nm, the characterization of the material under test is greatly improved. The developed technique could be applied to the classification and discrimination of other materials and, as a consequence of its non-contact operation, is particularly suitable for food quality control.
Ion diffusion may introduce spurious current sources in current-source density (CSD) analysis.
Halnes, Geir; Mäki-Marttunen, Tuomo; Pettersen, Klas H; Andreassen, Ole A; Einevoll, Gaute T
2017-07-01
Current-source density (CSD) analysis is a well-established method for analyzing recorded local field potentials (LFPs), that is, the low-frequency part of extracellular potentials. Standard CSD theory is based on the assumption that all extracellular currents are purely ohmic, and thus neglects the possible impact from ionic diffusion on recorded potentials. However, it has previously been shown that in physiological conditions with large ion-concentration gradients, diffusive currents can evoke slow shifts in extracellular potentials. Using computer simulations, we here show that diffusion-evoked potential shifts can introduce errors in standard CSD analysis, and can lead to prediction of spurious current sources. Further, we here show that the diffusion-evoked prediction errors can be removed by using an improved CSD estimator which accounts for concentration-dependent effects. NEW & NOTEWORTHY Standard CSD analysis does not account for ionic diffusion. Using biophysically realistic computer simulations, we show that unaccounted-for diffusive currents can lead to the prediction of spurious current sources. This finding may be of strong interest for in vivo electrophysiologists doing extracellular recordings in general, and CSD analysis in particular. Copyright © 2017 the American Physiological Society.
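The "standard CSD theory" referred to above estimates the current-source density from the second spatial derivative of the LFP under the purely ohmic assumption. A minimal one-dimensional sketch (the electrode spacing, conductivity value, and function name are illustrative, not from the paper):

```python
import numpy as np

def standard_csd(lfp, h, sigma=0.3):
    """Standard (second-spatial-derivative) CSD estimate.

    lfp   : potentials at equally spaced depths, shape (n_depths,)
    h     : electrode spacing (m)
    sigma : assumed constant extracellular conductivity (S/m) -- the
            ohmic assumption that ionic diffusion can violate.
    Returns the CSD at the interior depths, shape (n_depths - 2,).
    """
    lfp = np.asarray(lfp, dtype=float)
    d2 = lfp[2:] - 2.0 * lfp[1:-1] + lfp[:-2]  # central second difference
    return -sigma * d2 / h**2
```

Any potential gradient not caused by ohmic currents (e.g. a diffusion-evoked shift) still enters this second difference, which is exactly how the spurious sources discussed above arise.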
Locating Microseism Sources Using Spurious Arrivals in Intercontinental Noise Correlations
NASA Astrophysics Data System (ADS)
Retailleau, Lise; Boué, Pierre; Stehly, Laurent; Campillo, Michel
2017-10-01
The accuracy of Green's functions retrieved from seismic noise correlations in the microseism frequency band is limited by the uneven distribution of microseism sources at the surface of the Earth. As a result, correlation functions are often biased as compared to the expected Green's functions, and they can include spurious arrivals. These spurious arrivals are seismic arrivals that are visible on the correlation and do not belong to the theoretical impulse response. In this article, we propose to use Rayleigh wave spurious arrivals detected on correlation functions computed between European and United States seismic stations to locate microseism sources in the Atlantic Ocean. We perform a slant stack on a time distance gather of correlations obtained from an array of stations that comprises a regional deployment and a distant station. The arrival times and the apparent slowness of the spurious arrivals lead to the location of their source, which is obtained through a grid search procedure. We discuss improvements in the location through this methodology as compared to classical back projection of microseism energy. This method is interesting because it only requires an array and a distant station on each side of an ocean, conditions that can be met relatively easily.
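The slant stack on a time-distance gather described above can be sketched as a tau-p transform: each correlation trace is shifted by its predicted moveout p·x and the gather is summed, so an arrival with apparent slowness p stacks coherently at its intercept time. A simplified illustration (array layout and units are assumptions, not the paper's processing chain):

```python
import numpy as np

def slant_stack(gather, offsets, dt, slownesses):
    """Slant stack (tau-p transform) of a time-distance gather.

    gather     : array (n_traces, n_samples) of correlation traces
    offsets    : array (n_traces,) of trace distances (km)
    dt         : sample interval (s)
    slownesses : trial apparent slownesses (s/km)
    Returns array (n_slownesses, n_samples): stack amplitude vs (p, tau).
    """
    n_tr, n_s = gather.shape
    t = np.arange(n_s) * dt
    out = np.zeros((len(slownesses), n_s))
    for i, p in enumerate(slownesses):
        for x, trace in zip(offsets, gather):
            # sample each trace at tau + p*x so arrivals with slowness p align
            out[i] += np.interp(t + p * x, t, trace, left=0.0, right=0.0)
    return out
```

The (p, tau) cell with the largest stack amplitude gives the apparent slowness and arrival time that feed the grid-search source location.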
NASA Astrophysics Data System (ADS)
Pholele, T. M.; Chuma, J. M.
2016-03-01
The effects of a conductor disc in a dielectric-loaded combline resonator on its spurious performance, unloaded quality factor (Qu), and coupling coefficients are analysed using the commercial electromagnetic software package CST Microwave Studio (CST MWS). The disc improves the spurious-free band but simultaneously degrades the Qu. The presence of the disc substantially improves the electric coupling, by a factor of 1.891 for an aperture opening of 12 mm, while it has an insignificant effect on the magnetic coupling.
Lippi, Giuseppe; Cervellin, Gianfranco; Mattiuzzi, Camilla
2013-01-01
Background: A number of preanalytical activities strongly influence sample quality, especially those related to sample collection. Since blood drawing through intravenous catheters is reported as a potential source of erythrocyte injury, we performed a critical review and meta-analysis of the risk of catheter-related hemolysis. Materials and methods: We performed a systematic search of PubMed, Web of Science and Scopus to estimate the risk of spurious hemolysis in blood samples collected from intravenous catheters. A meta-analysis with calculation of the odds ratio (OR) and relative risk (RR), along with the 95% confidence interval (95% CI), was carried out using a random-effects model. Results: Fifteen articles including 17 studies were finally selected. The total number of patients was 14,796 in 13 studies assessing catheter and evacuated tubes versus straight needle and evacuated tubes, and 1251 in 4 studies assessing catheter and evacuated tubes versus catheter and manual aspiration. A significant risk of hemolysis was found in studies assessing catheter and evacuated tubes versus straight needle and evacuated tubes (random-effects OR 3.4; 95% CI = 2.9–3.9 and random-effects RR 1.07; 95% CI = 1.06–1.08), as well as in studies assessing catheter and evacuated tubes versus catheter and manual aspiration of blood (OR 3.7; 95% CI = 2.7–5.1 and RR 1.32; 95% CI = 1.24–1.40). Conclusions: Sample collection through intravenous catheters is associated with a significantly higher risk of spurious hemolysis compared with standard blood drawing by straight needle, and this risk is further amplified when intravenous catheters are combined with primary evacuated blood tubes rather than manual aspiration. PMID:23894864
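For reference, the odds ratio and its 95% confidence interval reported in such meta-analyses are conventionally computed per 2x2 table with the Woolf (logit) method; a minimal sketch (the cell counts in the test are made up for illustration, not taken from the included studies):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and approximate 95% CI for a 2x2 table (Woolf logit method).

    a = hemolysed, catheter group      b = not hemolysed, catheter group
    c = hemolysed, needle group        d = not hemolysed, needle group
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

A random-effects pooled estimate, as used in the study, additionally weights each study's log(OR) by the inverse of its within-study plus between-study variance.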
García-González, Elena; González-Tarancón, Ricardo; Aramendía, Maite; Rello, Luis
2016-08-01
Monoclonal (M) components can interfere with the direct bilirubin (D-Bil) assay on the AU Beckman Coulter instrumentation and produce spurious results, such as D-Bil values greater than total bilirubin (T-Bil) or very low/negative D-Bil values. If properly detected, this interference may uncover undiagnosed patients with monoclonal gammopathy (MG). We investigated the interference rate on the D-Bil AU assay in serum samples known to contain M proteins along with their isotype and described the protocol set up in our laboratory to help with the diagnosis of MG based on D-Bil spurious results as first indication. During a period of 4 years, 15.4% (345 of 2235) of serum samples containing M immunoglobulins produced erroneous D-Bil results, although no clear relationship between the magnitude or isotype of the M component and interference could be found. In total 22 new patients were diagnosed with MG based on the analytical artefact with the D-Bil as first indication. The D-Bil interference from MG on the Beckman AU analysers needs to be made known to laboratories in order to prevent clinical confusion and/or additional workup to explain the origin of anomalous results. Although this information may not add to the management of existing patients with serum paraproteins, it can benefit patients that have not been diagnosed with MG by triggering follow up testing to determine if M components are present.
Kwon, Kun-Sup; Yoon, Won-Sang
2010-01-01
In this paper we propose a method of removing from the synthesizer output the spurious signals due to quasi-amplitude modulation and the superposition effect in a frequency-hopping synthesizer with direct digital frequency synthesizer (DDFS)-driven phase-locked loop (PLL) architecture, which has the advantages of high frequency resolution, fast transition time, and small size. There are spurious signals that depend on the normalized frequency of the DDFS, and they can be dominant if they occur within the PLL loop bandwidth. We suggest that such signals can be eliminated by purposefully creating frequency errors in the developed synthesizer.
A Dual Mode BPF with Improved Spurious Response Using DGS Cells Embedded on the Ground Plane of CPW
NASA Astrophysics Data System (ADS)
Weng, Min-Hang; Ye, Chang-Sin; Hung, Cheng-Yuan; Huang, Chun-Yueh
A novel dual mode bandpass filter (BPF) with improved spurious response is presented in this letter. To obtain low insertion loss, the coupling structure using the dual mode resonator and the feeding scheme using coplanar-waveguide (CPW) are constructed on the two sides of a dielectric substrate. A defected ground structure (DGS) is designed on the ground plane of the CPW to achieve the goal of spurious suppression of the filter. The filter has been investigated numerically and experimentally. Measured results show a good agreement with the simulated analysis.
The numerical dynamic for highly nonlinear partial differential equations
NASA Technical Reports Server (NTRS)
Lafon, A.; Yee, H. C.
1992-01-01
Problems associated with the numerical computation of highly nonlinear equations in computational fluid dynamics are set forth and analyzed in terms of the potential ranges of spurious behaviors. A reaction-convection equation with a nonlinear source term is employed to evaluate the effects related to spatial and temporal discretizations. The discretization of the source term is described according to several methods, and the various techniques are shown to have a significant effect on the stability of the spurious solutions. Traditional linearized stability analyses cannot provide the level of confidence required for accurate fluid dynamics computations, and the incorporation of nonlinear analysis is proposed. Nonlinear analysis based on nonlinear dynamical systems complements the conventional linear approach and is valuable in the analysis of hypersonic aerodynamics and combustion phenomena.
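A classic illustration of the spurious asymptotic behavior that linearized stability analysis misses is explicit Euler applied to a logistic source term u' = u(1-u): below the nonlinear stability limit the iteration converges to the true steady state u = 1, while a larger (still finite) time step locks onto a spurious oscillatory "solution". This toy example is in the spirit of, but not taken from, the paper's reaction-convection model problem:

```python
def euler_logistic(u0, dt, n):
    """Explicit Euler on u' = u(1-u); returns the iterate after n steps.

    The discrete map u <- u + dt*u*(1-u) is conjugate to the logistic map
    with parameter r = 1 + dt, so for dt > 2 the true steady state u = 1
    loses stability and spurious periodic orbits appear.
    """
    u = u0
    for _ in range(n):
        u = u + dt * u * (1.0 - u)
    return u
```

With dt = 0.1 the iterates settle on u = 1; with dt = 2.5 they remain bounded but never approach u = 1, a spurious dynamic invisible to linearized analysis about the continuous problem.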
Specimen Holder for Analytical Electron Microscopes
NASA Technical Reports Server (NTRS)
Clanton, U. S.; Isaacs, A. M.; Mackinnon, I.
1985-01-01
Reduces spectral contamination by spurious X-rays. The specimen holder, made of compressed carbon, securely retains a standard 3-mm-diameter electron microscope grid (disk) and absorbs backscattered electrons that would otherwise generate spurious X-rays. Since the holder is inexpensive, it can be dedicated to a single specimen when numerous samples are examined.
Achour, Brahim; Dantonio, Alyssa; Niosi, Mark; Novak, Jonathan J; Fallon, John K; Barber, Jill; Smith, Philip C; Rostami-Hodjegan, Amin; Goosen, Theunis C
2017-10-01
Quantitative characterization of UDP-glucuronosyltransferase (UGT) enzymes is valuable in glucuronidation reaction phenotyping, predicting metabolic clearance and drug-drug interactions using extrapolation exercises based on pharmacokinetic modeling. Different quantitative proteomic workflows have been employed to quantify UGT enzymes in various systems, with reports indicating large variability in expression, which cannot be explained by interindividual variability alone. To evaluate the effect of methodological differences on end-point UGT abundance quantification, eight UGT enzymes were quantified in 24 matched liver microsomal samples by two laboratories using stable isotope-labeled (SIL) peptides or quantitative concatemer (QconCAT) standard, and measurements were assessed against catalytic activity in seven enzymes (n = 59). There was little agreement between individual abundance levels reported by the two methods; only UGT1A1 showed strong correlation [Spearman rank-order correlation (Rs) = 0.73, P < 0.0001; R² = 0.30; n = 24]. SIL-based abundance measurements correlated well with enzyme activities, with correlations ranging from moderate for UGTs 1A6, 1A9, and 2B15 (Rs = 0.52–0.59, P < 0.0001; R² = 0.34–0.58; n = 59) to strong for UGTs 1A1, 1A3, 1A4, and 2B7 (Rs = 0.79–0.90, P < 0.0001; R² = 0.69–0.79). QconCAT-based data revealed generally poor correlation with activity, whereas moderate correlations were shown for UGTs 1A1, 1A3, and 2B7. Spurious abundance-activity correlations were identified in the cases of UGT1A4/2B4 and UGT2B7/2B15, which could be explained by correlations of protein expression between these enzymes. Consistent correlation of UGT abundance with catalytic activity, demonstrated by the SIL-based dataset, suggests that quantitative proteomic data should be validated against catalytic activity whenever possible. In addition, metabolic reaction phenotyping exercises should consider spurious abundance-activity correlations to avoid misleading conclusions. Copyright © 2017 by The American Society for Pharmacology and Experimental Therapeutics.
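The Spearman rank-order correlations (Rs) quoted above are simply Pearson correlations computed on ranks; a self-contained sketch using average ranks for ties (in practice one would use scipy.stats.spearmanr):

```python
import numpy as np

def spearman_rs(x, y):
    """Spearman rank-order correlation: Pearson correlation of the ranks."""
    def ranks(v):
        v = np.asarray(v, dtype=float)
        order = np.argsort(v)
        r = np.empty(len(v))
        r[order] = np.arange(1, len(v) + 1)
        # assign average ranks within groups of tied values
        for val in np.unique(v):
            mask = v == val
            r[mask] = r[mask].mean()
        return r
    rx, ry = ranks(x), ranks(y)
    return float(np.corrcoef(rx, ry)[0, 1])
```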
NASA Astrophysics Data System (ADS)
Gao, Lingli; Pan, Yudi
2018-05-01
The correct estimation of the seismic source signature is crucial to exploration geophysics. Based on seismic interferometry, the virtual real source (VRS) method provides a model-independent way for source signature estimation. However, when encountering multimode surface waves, which are commonly seen in the shallow seismic survey, strong spurious events appear in seismic interferometric results. These spurious events introduce errors in the virtual-source recordings and reduce the accuracy of the source signature estimated by the VRS method. In order to estimate a correct source signature from multimode surface waves, we propose a mode-separated VRS method. In this method, multimode surface waves are mode separated before seismic interferometry. Virtual-source recordings are then obtained by applying seismic interferometry to each mode individually. Therefore, artefacts caused by cross-mode correlation are excluded in the virtual-source recordings and the estimated source signatures. A synthetic example showed that a correct source signature can be estimated with the proposed method, while strong spurious oscillation occurs in the estimated source signature if we do not apply mode separation first. We also applied the proposed method to a field example, which verified its validity and effectiveness in estimating seismic source signature from shallow seismic shot gathers containing multimode surface waves.
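The seismic interferometry step underlying the VRS method is a cross-correlation of the recordings at two receivers; the spurious events discussed above appear when different surface-wave modes are correlated against each other. A minimal sketch of the correlation step itself (mode separation is not shown):

```python
import numpy as np

def noise_crosscorrelation(u_a, u_b):
    """Cross-correlation of two equal-length recordings via FFT.

    Returns (lags, cc) with cc[k] = sum_t u_a(t + lag) * u_b(t), so if
    u_b is u_a delayed by d samples the peak sits at lag = -d.
    Zero padding to 2n avoids circular wrap-around.
    """
    n = len(u_a)
    nfft = 2 * n
    spec = np.fft.rfft(u_a, nfft) * np.conj(np.fft.rfft(u_b, nfft))
    cc = np.fft.irfft(spec, nfft)
    cc = np.concatenate([cc[-(n - 1):], cc[:n]])  # lags -(n-1) .. n-1
    lags = np.arange(-(n - 1), n)
    return lags, cc
```

In the mode-separated VRS method, this correlation is applied per mode, so only same-mode energy contributes to the virtual-source recording.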
Zhou, Caigen; Zeng, Xiaoqin; Luo, Chaomin; Zhang, Huaguang
In this paper, local bipolar auto-associative memories are presented based on discrete recurrent neural networks with a class of gain type activation function. The weight parameters of neural networks are acquired by a set of inequalities without the learning procedure. The global exponential stability criteria are established to ensure the accuracy of the restored patterns by considering time delays and external inputs. The proposed methodology is capable of effectively overcoming spurious memory patterns and achieving memory capacity. The effectiveness, robustness, and fault-tolerant capability are validated by simulated experiments.
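The paper obtains its weights from a set of inequalities rather than a learning rule; as a generic illustration of bipolar auto-associative recall only (with hypothetical Hebbian weights and a hard sign activation in place of the paper's gain-type activation), the retrieval dynamics look like:

```python
import numpy as np

def hebbian_weights(patterns):
    """Illustrative Hebbian outer-product weights with zero diagonal."""
    P = np.asarray(patterns, dtype=float)
    W = P.T @ P
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, probe, n_iter=20):
    """Synchronous recall: iterate x <- sign(W x) until it stabilizes.

    probe is a bipolar (+1/-1) pattern, possibly corrupted; a spurious
    memory is a fixed point of this map that is not a stored pattern.
    """
    x = np.asarray(probe, dtype=float)
    for _ in range(n_iter):
        x_new = np.where(W @ x >= 0, 1.0, -1.0)
        if np.array_equal(x_new, x):
            break
        x = x_new
    return x
```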
NASA Astrophysics Data System (ADS)
Asghar, Haroon; McInerney, John G.
2017-09-01
We demonstrate an asymmetric dual-loop feedback scheme to suppress the external-cavity side-modes induced in self-mode-locked quantum-dash lasers by conventional single- and dual-loop feedback. In this letter, we achieved optimal suppression of spurious tones by optimizing the length of the second feedback delay. We observed that asymmetric dual-loop feedback, with a large (~8x) disparity in cavity lengths, eliminates all external-cavity side-modes and produces flat RF spectra close to the main peak, with low timing jitter compared to single-loop feedback. Significant reduction in RF linewidth and reduced timing jitter were also observed as a function of increased second feedback delay time. The experimental results based on this feedback configuration validate predictions of recently published numerical simulations. This asymmetric dual-loop feedback scheme provides simple, efficient and cost-effective stabilization of side-band-free optoelectronic oscillators based on mode-locked lasers.
47 CFR 2.1051 - Measurements required: Spurious emissions at antenna terminals.
Code of Federal Regulations, 2014 CFR
2014-10-01
... antenna terminals. 2.1051 Section 2.1051 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL... Procedures Certification § 2.1051 Measurements required: Spurious emissions at antenna terminals. The radio... checked at the equipment output terminals when properly loaded with a suitable artificial antenna. Curves...
NASA Technical Reports Server (NTRS)
Kast, J. W.
1975-01-01
We consider the design of a Kirkpatrick-Baez grazing-incidence X-ray telescope to be used in a scan of the sky and analyze the distribution of both properly reflected rays and spurious images over the field of view. To obtain maximum effective area over the field of view, it is necessary to increase the spacing between plates for a scanning telescope as compared to a pointing telescope. Spurious images are necessarily present in this type of lens, but they can be eliminated from the field of view by adding properly located baffles or collimators. Results of a computer design are presented.
Ghost interactions in MEG/EEG source space: A note of caution on inter-areal coupling measures.
Palva, J Matias; Wang, Sheng H; Palva, Satu; Zhigalov, Alexander; Monto, Simo; Brookes, Matthew J; Schoffelen, Jan-Mathijs; Jerbi, Karim
2018-06-01
When combined with source modeling, magneto- (MEG) and electroencephalography (EEG) can be used to study long-range interactions among cortical processes non-invasively. Estimation of such inter-areal connectivity is nevertheless hindered by instantaneous field spread and volume conduction, which artificially introduce linear correlations and impair source separability in cortical current estimates. To overcome the inflating effects of linear source mixing inherent to standard interaction measures, alternative phase- and amplitude-correlation based connectivity measures, such as imaginary coherence and orthogonalized amplitude correlation have been proposed. Being by definition insensitive to zero-lag correlations, these techniques have become increasingly popular in the identification of correlations that cannot be attributed to field spread or volume conduction. We show here, however, that while these measures are immune to the direct effects of linear mixing, they may still reveal large numbers of spurious false positive connections through field spread in the vicinity of true interactions. This fundamental problem affects both region-of-interest-based analyses and all-to-all connectome mappings. Most importantly, beyond defining and illustrating the problem of spurious, or "ghost" interactions, we provide a rigorous quantification of this effect through extensive simulations. Additionally, we further show that signal mixing also significantly limits the separability of neuronal phase and amplitude correlations. We conclude that spurious correlations must be carefully considered in connectivity analyses in MEG/EEG source space even when using measures that are immune to zero-lag correlations. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
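Imaginary coherence, one of the mixing-insensitive measures discussed above, keeps only the imaginary part of the complex coherency, which vanishes for purely instantaneous (zero-lag) mixing. A minimal epoch-averaged sketch (windowing and tapering omitted):

```python
import numpy as np

def imaginary_coherence(x_epochs, y_epochs):
    """Absolute imaginary part of the coherency, averaged across epochs.

    x_epochs, y_epochs : arrays of shape (n_epochs, n_samples)
    Returns one value per frequency bin. Zero-lag coupling produces a
    purely real cross-spectrum, so it contributes nothing here -- yet,
    as the paper shows, spurious "ghost" interactions near true sources
    can still survive.
    """
    X = np.fft.rfft(x_epochs, axis=1)
    Y = np.fft.rfft(y_epochs, axis=1)
    sxy = (X * np.conj(Y)).mean(axis=0)          # cross-spectrum
    sxx = (X * np.conj(X)).real.mean(axis=0)     # auto-spectra
    syy = (Y * np.conj(Y)).real.mean(axis=0)
    return np.abs(np.imag(sxy / np.sqrt(sxx * syy)))
```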
Spurious correlations and inference in landscape genetics
Samuel A. Cushman; Erin L. Landguth
2010-01-01
Reliable interpretation of landscape genetic analyses depends on statistical methods that have high power to identify the correct process driving gene flow while rejecting incorrect alternative hypotheses. Little is known about statistical power and inference in individual-based landscape genetics. Our objective was to evaluate the power of causal modelling with partial...
Drugs and Crime: An Empirically Based, Interdisciplinary Model
ERIC Educational Resources Information Center
Quinn, James F.; Sneed, Zach
2008-01-01
This article synthesizes neuroscience findings with long-standing criminological models and data into a comprehensive explanation of the relationship between drug use and crime. The innate factors that make some people vulnerable to drug use are conceptually similar to those that predict criminality, supporting a spurious reciprocal model of the…
Residual Negative Pressure in Vacuum Tubes Might Increase the Risk of Spurious Hemolysis.
Xiao, Tong-Tong; Zhang, Qiao-Xin; Hu, Jing; Ouyang, Hui-Zhen; Cai, Ying-Mu
2017-05-01
We planned a study to establish whether spurious hemolysis may occur when negative pressure remains in vacuum tubes. Four tubes with different vacuum levels (-54, -65, -74, and -86 kPa) were used to examine blood drawn from one healthy volunteer; the tubes were allowed to stand for different times (1, 2, 3, and 4 hours). The plasma was separated and immediately tested for free hemoglobin (FHb). Thirty patients were enrolled in a verification experiment. The degree of hemolysis observed was greater when the remaining negative pressure was higher. Significant differences were recorded in the verification experiment. The results suggest that residual negative pressure might increase the risk of spurious hemolysis.
Mode suppression means for gyrotron cavities
Chodorow, Marvin; Symons, Robert S.
1983-08-09
In a gyrotron electron tube of the gyro-klystron or gyro-monotron type, having a cavity supporting an electromagnetic mode with circular electric field, spurious resonances can occur in modes having noncircular electric field. These spurious resonances are damped and their frequencies shifted by a circular groove in the cavity parallel to the electric field.
NASA Astrophysics Data System (ADS)
Navas-Montilla, A.; Murillo, J.
2017-07-01
When designing a numerical scheme for the resolution of conservation laws, the selection of a particular source term discretization (STD) may seem irrelevant whenever it ensures convergence with mesh refinement, but it has a decisive impact on the solution. In the framework of the Shallow Water Equations (SWE), well-balanced STDs based on quiescent equilibrium are unable to converge to physically based solutions, which can be constructed considering energy arguments. Energy-based discretizations can be designed assuming dissipation or conservation, but in any case the STD procedure should not be merely based on ad hoc approximations. The STD proposed in this work is derived from the Generalized Hugoniot Locus obtained from the Generalized Rankine-Hugoniot conditions and the Integral Curve across the contact wave associated with the bed step. In any case, the STD must allow energy-dissipative solutions: steady and unsteady hydraulic jumps, for which some numerical anomalies have been documented in the literature. These anomalies are the incorrect positioning of steady jumps and the presence of a spurious spike of discharge inside the cell containing the jump. The former issue can be addressed by proposing a modification of the energy-conservative STD that ensures a correct dissipation rate across the hydraulic jump, whereas the latter is of greater complexity and cannot be fixed by simply choosing a suitable STD, as there are more variables involved. The spike of discharge is a well-known problem in the scientific community, also referred to as the slowly-moving shock anomaly; it is produced by a nonlinearity of the Hugoniot locus connecting the states at both sides of the jump. However, this issue seems to be more a feature than a problem when considering steady solutions of the SWE containing hydraulic jumps, and the presence of the spurious spike in the discharge has long been taken for granted as a feature of the solution.
Even though it does not disturb the rest of the solution in steady cases, in transient cases it produces a very undesirable shedding of spurious oscillations downstream that should be circumvented. Based on spike-reducing techniques (originally designed for the homogeneous Euler equations) that propose the construction of interpolated fluxes in the untrustworthy regions, we design a novel Roe-type scheme for the SWE with discontinuous topography that reduces the presence of the aforementioned spurious spike. The resulting spike-reducing method, in combination with the proposed STD, ensures accurate positioning of steady jumps and provides convergence with mesh refinement, which was not possible for previous methods that cannot avoid the spike.
A System for Video Surveillance and Monitoring CMU VSAM Final Report
1999-11-30
Keywords: motion-based skeletonization, neural network, spatio-temporal salience patterns inside image chips, spurious motion rejection, model-based... network of sensors with respect to the model coordinate system, computation of 3D geolocation estimates, and graphical display of object hypotheses... algorithms have been developed. The first uses view-dependent visual properties to train a neural network classifier to recognize four classes: single...
Identification of true EST alignments for recognising transcribed regions.
Ma, Chuang; Wang, Jia; Li, Lun; Duan, Mo-Jie; Zhou, Yan-Hong
2011-01-01
Transcribed regions can be determined by aligning Expressed Sequence Tags (ESTs) with genome sequences. The kernel of this strategy is to effectively distinguish true EST alignments from spurious ones. In this study, three measures, including Direction Check, Identity Check and Terminal Check, were introduced to more effectively eliminate spurious EST alignments. On the basis of these introduced measures and other widely used measures, a computational tool, named ESTCleanser, has been developed to identify true EST alignments for obtaining reliable transcribed regions. The performance of ESTCleanser has been evaluated on the well-annotated human ENCyclopedia of DNA Elements (ENCODE) regions using human ESTs in the dbEST database. The evaluation results show that the accuracy of ESTCleanser at the exon and intron levels is markedly better than that of UCSC-spliced EST alignments. This work should be helpful to EST-based research on finding new genes, complementing genome annotation, and recognising alternative splicing events and Single Nucleotide Polymorphisms (SNPs), etc.
Computations of spray, fuel-air mixing, and combustion in a lean-premixed-prevaporized combustor
NASA Technical Reports Server (NTRS)
Dasgupta, A.; Li, Z.; Shih, T. I.-P.; Kundu, K.; Deur, J. M.
1993-01-01
A code was developed for computing the multidimensional flow, spray, combustion, and pollutant formation inside gas turbine combustors. The code developed is based on a Lagrangian-Eulerian formulation and utilizes an implicit finite-volume method. The focus of this paper is on the spray part of the code (both formulation and algorithm), and a number of issues related to the computation of sprays and fuel-air mixing in a lean-premixed-prevaporized combustor. The issues addressed include: (1) how grid spacings affect the diffusion of evaporated fuel, and (2) how spurious modes can arise through modelling of the spray in the Lagrangian computations. An upwind interpolation scheme is proposed to account for some effects of grid spacing on the artificial diffusion of the evaporated fuel. Also, some guidelines are presented to minimize errors associated with the spurious modes.
On optimal infinite impulse response edge detection filters
NASA Technical Reports Server (NTRS)
Sarkar, Sudeep; Boyer, Kim L.
1991-01-01
The authors outline the design of an optimal, computationally efficient, infinite impulse response edge detection filter. The optimal filter is computed based on Canny's high signal to noise ratio, good localization criteria, and a criterion on the spurious response of the filter to noise. An expression for the width of the filter, which is appropriate for infinite-length filters, is incorporated directly in the expression for spurious responses. The three criteria are maximized using the variational method and nonlinear constrained optimization. The optimal filter parameters are tabulated for various values of the filter performance criteria. A complete methodology for implementing the optimal filter using approximating recursive digital filtering is presented. The approximating recursive digital filter is separable into two linear filters operating in two orthogonal directions. The implementation is very simple and computationally efficient, has a constant time of execution for different sizes of the operator, and is readily amenable to real-time hardware implementation.
Long-memory and the sea level-temperature relationship: a fractional cointegration approach.
Ventosa-Santaulària, Daniel; Heres, David R; Martínez-Hernández, L Catalina
2014-01-01
Through thermal expansion of oceans and melting of land-based ice, global warming is very likely contributing to the sea level rise observed during the 20th century. The amount by which further increases in global average temperature could affect sea level is known only with large uncertainties, due to the limited capacity of physics-based models to predict sea levels from global surface temperatures. Semi-empirical approaches have been implemented to estimate the statistical relationship between these two variables, providing an alternative measure on which to base potentially disrupting impacts on coastal communities and ecosystems. However, only a few of these semi-empirical applications have addressed the spurious inference that is likely to be drawn when one nonstationary process is regressed on another. Furthermore, it has been shown that spurious effects are not eliminated by stationary processes when these possess strong long memory. Our results indicate that both global temperature and sea level indeed present the characteristics of long-memory processes. Nevertheless, we find that these variables are fractionally cointegrated when sea-ice extent is incorporated as an instrumental variable for temperature, which in our estimations has a statistically significant positive impact on global sea level.
A Bayesian Scoring Technique for Mining Predictive and Non-Spurious Rules
Batal, Iyad; Cooper, Gregory; Hauskrecht, Milos
2015-01-01
Rule mining is an important class of data mining methods for discovering interesting patterns in data. The success of a rule mining method heavily depends on the evaluation function that is used to assess the quality of the rules. In this work, we propose a new rule evaluation score - the Predictive and Non-Spurious Rules (PNSR) score. This score relies on Bayesian inference to evaluate the quality of the rules and considers the structure of the rules to filter out spurious rules. We present an efficient algorithm for finding rules with high PNSR scores. The experiments demonstrate that our method is able to cover and explain the data with a much smaller rule set than existing methods. PMID:25938136
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lauermann, M.; Weimann, C.; Palmer, R.
2014-05-27
We demonstrate a waveguide-based frequency shifter on the silicon photonic platform, enabling frequency shifts of up to 10 GHz. The device is realized by silicon-organic hybrid (SOH) integration. Temporal shaping of the drive signal allows suppression of spurious side-modes by more than 23 dB.
Should Children Have Best Friends?
ERIC Educational Resources Information Center
Healy, Mary
2017-01-01
An important theme in the philosophy of education community in recent years has been the way in which philosophy can be brought to illuminate and evaluate research findings from the landscape of policy and practice. Undoubtedly, some of these practices can be based on spurious evidence, yet have mostly been left unchallenged in both philosophical…
Estimating and comparing microbial diversity in the presence of sequencing errors
Chiu, Chun-Huo
2016-01-01
Estimating and comparing microbial diversity are statistically challenging due to limited sampling and possible sequencing errors for low-frequency counts, producing spurious singletons. The inflated singleton count seriously affects statistical analysis and inferences about microbial diversity. Previous statistical approaches to tackle the sequencing errors generally require different parametric assumptions about the sampling model or about the functional form of frequency counts. Different parametric assumptions may lead to drastically different diversity estimates. We focus on nonparametric methods which are universally valid for all parametric assumptions and can be used to compare diversity across communities. We develop here a nonparametric estimator of the true singleton count to replace the spurious singleton count in all methods/approaches. Our estimator of the true singleton count is in terms of the frequency counts of doubletons, tripletons and quadrupletons, provided these three frequency counts are reliable. To quantify microbial alpha diversity for an individual community, we adopt the measure of Hill numbers (effective number of taxa) under a nonparametric framework. Hill numbers, parameterized by an order q that determines the measures’ emphasis on rare or common species, include taxa richness (q = 0), Shannon diversity (q = 1, the exponential of Shannon entropy), and Simpson diversity (q = 2, the inverse of Simpson index). A diversity profile which depicts the Hill number as a function of order q conveys all information contained in a taxa abundance distribution. Based on the estimated singleton count and the original non-singleton frequency counts, two statistical approaches (non-asymptotic and asymptotic) are developed to compare microbial diversity for multiple communities. (1) A non-asymptotic approach refers to the comparison of estimated diversities of standardized samples with a common finite sample size or sample completeness. 
This approach aims to compare diversity estimates for equally-large or equally-complete samples; it is based on the seamless rarefaction and extrapolation sampling curves of Hill numbers, specifically for q = 0, 1 and 2. (2) An asymptotic approach refers to the comparison of the estimated asymptotic diversity profiles. That is, this approach compares the estimated profiles for complete samples or samples whose size tends to be sufficiently large. It is based on statistical estimation of the true Hill number of any order q ≥ 0. In the two approaches, replacing the spurious singleton count by our estimated count, we can greatly remove the positive biases associated with diversity estimates due to spurious singletons and also make fair comparisons across microbial communities, as illustrated in our simulation results and in applying our method to analyze sequencing data from viral metagenomes. PMID:26855872
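The Hill-number profile described above has a standard closed form. As a minimal, illustrative sketch (not the authors' estimator code, and with no singleton correction applied), the effective number of taxa of order q can be computed from raw abundance counts:

```python
import math

def hill_number(counts, q):
    """Hill number (effective number of taxa) of order q from a list of
    taxa abundance counts. q = 0 gives richness, q = 1 the exponential of
    Shannon entropy, q = 2 the inverse Simpson index."""
    total = sum(counts)
    p = [c / total for c in counts if c > 0]
    if q == 1:
        # limit q -> 1: exponential of Shannon entropy
        return math.exp(-sum(pi * math.log(pi) for pi in p))
    return sum(pi ** q for pi in p) ** (1.0 / (1.0 - q))
```

For a perfectly even community the profile is flat at the taxa richness, while for a skewed abundance distribution the profile decreases with q, reflecting the growing emphasis on common taxa.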
Jennings, Wesley G; Park, MiRang; Richards, Tara N; Tomsich, Elizabeth; Gover, Angela; Powers, Ráchael A
2014-12-01
Child maltreatment is one of the most commonly examined risk factors for violence in dating relationships. Often referred to as the intergenerational transmission of violence or cycle of violence, a fair amount of research suggests that experiencing abuse during childhood significantly increases the likelihood of involvement in violent relationships later, but these conclusions are primarily based on correlational research designs. Furthermore, the majority of research linking childhood maltreatment and dating violence has focused on samples of young people from the United States. Considering these limitations, the current study uses a rigorous, propensity score matching approach to estimate the causal effect of experiencing child physical abuse on adult dating violence among a large sample of South Korean emerging adults. Results indicate that the link between child physical abuse and adult dating violence is spurious rather than causal. Study limitations and implications are discussed. Copyright © 2014 Elsevier Ltd. All rights reserved.
Alshaarawy, Omayma; Anthony, James C.
2016-01-01
Background In preclinical animal studies, evidence links cannabis smoking (CS) with hyperphagia, obesity, and insulin resistance. Nonetheless, in humans, CS might protect against type 2 diabetes mellitus (DM). Here, we offer epidemiological estimates from eight independent replications from (1) the National Health and Nutrition Examination Surveys, and (2) the National Surveys on Drug Use and Health (2005-12). Methods For each national survey participant, computer-assisted self-interviews assess CS and physician-diagnosed DM; NHANES provides additional biomarker values and a composite DM diagnosis. Regression analyses produce estimates of CS-DM associations. Meta-analyses summarize the replication estimates. Results Recently active CS and DM are inversely associated. The meta-analytic summary odds ratio is 0.7 (95% CI = 0.6, 0.8). Conclusions Current evidence is too weak for causal inference, but there now is a more stable evidence base for new lines of clinical translational research on a possibly protective (or spurious) CS-DM association suggested in prior research. PMID:25978795
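The meta-analytic summary odds ratio above pools the eight replication estimates. A minimal sketch of fixed-effect inverse-variance pooling on the log-odds scale (the authors' exact meta-analytic model is not specified here, and the inputs below are placeholders) could look like:

```python
import math

def pooled_odds_ratio(odds_ratios, std_errs):
    """Fixed-effect inverse-variance pooling: average the log odds ratios,
    weighting each estimate by the inverse of its squared standard error."""
    logs = [math.log(o) for o in odds_ratios]
    weights = [1.0 / se ** 2 for se in std_errs]
    pooled_log = sum(w * l for w, l in zip(weights, logs)) / sum(weights)
    return math.exp(pooled_log)
```

By construction the pooled estimate is pulled toward the more precise replications, which is why a summary such as 0.7 can be tighter than any single survey's estimate.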
Assessing nonlinear structures in real exchange rates using recurrence plot strategies
NASA Astrophysics Data System (ADS)
Belaire-Franch, Jorge; Contreras, Dulce; Tordera-Lledó, Lorena
2002-11-01
Purchasing power parity (PPP) is an important theory at the basis of a large number of economic models. However, the implication derived from the theory that real exchange rates must follow stationary processes is not conclusively supported by empirical studies. In a recent paper, Serletis and Gogas [Appl. Finance Econ. 10 (2000) 615] show evidence of deterministic chaos in several OECD exchange rates. As a consequence, PPP rejections could be spurious. In this work, we follow a two-stage testing procedure to test for nonlinearities and chaos in real exchange rates, using a new set of techniques designed by Webber and Zbilut [J. Appl. Physiol. 76 (1994) 965], called recurrence quantification analysis (RQA). Our conclusions differ slightly from Serletis and Gogas [Appl. Finance Econ. 10 (2000) 615], but they are also supportive of chaos for some exchange rates.
An auxiliary optimization method for complex public transit route network based on link prediction
NASA Astrophysics Data System (ADS)
Zhang, Lin; Lu, Jian; Yue, Xianfei; Zhou, Jialin; Li, Yunxuan; Wan, Qian
2018-02-01
Inspired by the missing (new) link prediction and the spurious existing link identification in link prediction theory, this paper establishes an auxiliary optimization method for public transit route network (PTRN) based on link prediction. First, link prediction applied to PTRN is described, and based on reviewing the previous studies, the summary indices set and its algorithms set are collected for the link prediction experiment. Second, through analyzing the topological properties of Jinan’s PTRN established by the Space R method, we found that this is a typical small-world network with a relatively large average clustering coefficient. This phenomenon indicates that the structural similarity-based link prediction will show a good performance in this network. Then, based on the link prediction experiment of the summary indices set, three indices with maximum accuracy are selected for auxiliary optimization of Jinan’s PTRN. Furthermore, these link prediction results show that the overall layout of Jinan’s PTRN is stable and orderly, except for a partial area that requires optimization and reconstruction. The above pattern conforms to the general pattern of the optimal development stage of PTRN in China. Finally, based on the missing (new) link prediction and the spurious existing link identification, we propose optimization schemes that can be used not only to optimize current PTRN but also to evaluate PTRN planning.
Some effects of horizontal discretization on linear baroclinic and symmetric instabilities
NASA Astrophysics Data System (ADS)
Barham, William; Bachman, Scott; Grooms, Ian
2018-05-01
The effects of horizontal discretization on linear baroclinic and symmetric instabilities are investigated by analyzing the behavior of the hydrostatic Eady problem in ocean models on the B and C grids. On the C grid a spurious baroclinic instability appears at small wavelengths. This instability does not disappear as the grid scale decreases; instead, it simply moves to smaller horizontal scales. The peak growth rate of the spurious instability is independent of the grid scale as the latter decreases. It is equal to cf/√Ri, where Ri is the balanced Richardson number, f is the Coriolis parameter, and c is a nondimensional constant that depends on the Richardson number. As the Richardson number increases, c increases towards an upper bound of approximately 1/2; for large Richardson numbers the spurious instability is faster than the Eady instability. To suppress the spurious instability it is recommended to use fourth-order centered tracer advection along with biharmonic viscosity and diffusion with coefficients (Δx)⁴f/(32√Ri) or larger, where Δx is the grid scale. On the B grid, the growth rates of baroclinic and symmetric instabilities are too small, and converge upwards towards the correct values as the grid scale decreases; no spurious instabilities are observed. In B grid models at eddy-permitting resolution, the reduced growth rate of baroclinic instability may contribute to partially-resolved eddies being too weak. On the C grid the growth rate of symmetric instability is better (larger) than on the B grid, and converges upwards towards the correct value as the grid scale decreases.
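The two formulas above are simple enough to encode directly; a sketch (function names are ours) is:

```python
import math

def spurious_growth_rate(f, Ri, c=0.5):
    """Peak growth rate c*f/sqrt(Ri) of the spurious C-grid baroclinic
    instability; c depends on Ri, with an upper bound near 1/2."""
    return c * f / math.sqrt(Ri)

def min_biharmonic_coefficient(dx, f, Ri):
    """Recommended minimum biharmonic viscosity/diffusion coefficient
    (dx**4) * f / (32 * sqrt(Ri)) for suppressing the instability."""
    return dx ** 4 * f / (32.0 * math.sqrt(Ri))
```

For example, with a mid-latitude f ≈ 1e-4 s⁻¹, Δx = 1 km, and Ri = 1, the recommended minimum coefficient is about 3.1e6 m⁴ s⁻¹.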
NASA Astrophysics Data System (ADS)
Giono, G.; Ishikawa, R.; Narukage, N.; Kano, R.; Katsukawa, Y.; Kubo, M.; Ishikawa, S.; Bando, T.; Hara, H.; Suematsu, Y.; Winebarger, A.; Kobayashi, K.; Auchère, F.; Trujillo Bueno, J.; Tsuneta, S.; Shimizu, T.; Sakao, T.; Cirtain, J.; Champey, P.; Asensio Ramos, A.; Štěpán, J.; Belluzzi, L.; Manso Sainz, R.; De Pontieu, B.; Ichimoto, K.; Carlsson, M.; Casini, R.; Goto, M.
2017-04-01
The Chromospheric Lyman-Alpha SpectroPolarimeter is a sounding rocket instrument designed to measure for the first time the linear polarization of the hydrogen Lyman-α line (121.6 nm). The instrument was successfully launched on 3 September 2015 and observations were conducted at the solar disc center and close to the limb during the five-minute flight. In this article, the disc center observations are used to provide an in-flight calibration of the instrument's spurious polarization. The derived in-flight spurious polarization is consistent with the spurious polarization levels determined during the pre-flight calibration, and a statistical analysis of the polarization fluctuations of solar origin is conducted to ensure a 0.014% precision on the spurious polarization. The combination of the pre-flight and in-flight polarization calibrations provides a complete picture of the instrument's response matrix, and a proper error-transfer method is used to confirm the achieved polarization accuracy. As a result, the unprecedented 0.1% polarization accuracy of the instrument in the vacuum ultraviolet is ensured by the polarization calibration.
A comprehensive comparison of network similarities for link prediction and spurious link elimination
NASA Astrophysics Data System (ADS)
Zhang, Peng; Qiu, Dan; Zeng, An; Xiao, Jinghua
2018-06-01
Identifying missing interactions in complex networks, known as link prediction, is realized by estimating the likelihood of the existence of a link between two nodes according to the observed links and the nodes' attributes. Similar approaches have also been employed to identify and remove spurious links in networks, which is crucial for improving the reliability of network data. In network science, the likelihood of two nodes having a connection depends strongly on their structural similarity. The key to addressing these two problems thus becomes how to objectively measure the similarity between nodes in networks. In the literature, numerous network similarity metrics have been proposed, and their accuracy has been discussed independently in previous works. In this paper, we systematically compare the accuracy of 18 similarity metrics in both link prediction and spurious link elimination when the observed networks are very sparse or contain inaccurate linking information. Interestingly, while some methods have high prediction accuracy, they tend to perform poorly in identifying spurious interactions. We further find that the methods can be classified into several clusters according to their behavior. This work is useful for guiding future use of these similarity metrics for different purposes.
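Three widely used structural-similarity scores of the kind benchmarked above (our own illustrative implementations over an adjacency map of node -> neighbor set, not the paper's code) are:

```python
def common_neighbors(adj, x, y):
    """Number of neighbors shared by nodes x and y."""
    return len(adj[x] & adj[y])

def jaccard(adj, x, y):
    """Shared neighbors normalized by the size of the neighborhood union."""
    union = adj[x] | adj[y]
    return len(adj[x] & adj[y]) / len(union) if union else 0.0

def resource_allocation(adj, x, y):
    """Common neighbors weighted inversely by their degree, so that
    low-degree intermediaries contribute more to the score."""
    return sum(1.0 / len(adj[z]) for z in adj[x] & adj[y])
```

Candidate node pairs with the highest scores are predicted as missing links; conversely, observed links whose endpoints score lowest are candidates for spurious-link elimination.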
Komsa, Darya N; Staroverov, Viktor N
2016-11-08
Standard density-functional approximations often incorrectly predict that heteronuclear diatomic molecules dissociate into fractionally charged atoms. We demonstrate that these spurious charges can be eliminated by adapting the shape-correction method for Kohn-Sham potentials that was originally introduced to improve Rydberg excitation energies [ Phys. Rev. Lett. 2012 , 108 , 253005 ]. Specifically, we show that if a suitably determined fraction of electron charge is added to or removed from a frontier Kohn-Sham orbital level, the approximate Kohn-Sham potential of a stretched molecule self-corrects by developing a semblance of step structure; if this potential is used to obtain the electron density of the neutral molecule, charge delocalization is blocked and spurious fractional charges disappear beyond a certain internuclear distance.
Predicting missing links and identifying spurious links via likelihood analysis
NASA Astrophysics Data System (ADS)
Pan, Liming; Zhou, Tao; Lü, Linyuan; Hu, Chin-Kun
2016-03-01
Real network data are often incomplete and noisy, a setting in which link prediction algorithms and spurious link identification algorithms can be applied. Thus far, however, a general method for transforming network organizing mechanisms into link prediction algorithms has been lacking. Here we use an algorithmic framework in which a network's probability is calculated according to a predefined structural Hamiltonian that takes into account the network organizing principles, and a non-observed link is scored by the conditional probability of adding the link to the observed network. Extensive numerical simulations show that the proposed algorithm has remarkably higher accuracy than the state-of-the-art methods in uncovering missing links and identifying spurious links in many complex biological and social networks. The method also finds applications in exploring the underlying network evolutionary mechanisms.
High order filtering methods for approximating hyperbolic systems of conservation laws
NASA Technical Reports Server (NTRS)
Lafon, F.; Osher, S.
1990-01-01
In the computation of discontinuous solutions of hyperbolic systems of conservation laws, the recently developed essentially non-oscillatory (ENO) schemes appear to be very useful. However, they are computationally costly compared to simple central difference methods. The filtering method developed here uses simple central differencing of arbitrarily high order accuracy, except where a novel local test indicates the development of spurious oscillations. At these points, the full ENO apparatus is used, maintaining the high order of accuracy but removing spurious oscillations. Numerical results indicate the success of the method. High order of accuracy was obtained in regions of smooth flow without spurious oscillations for a wide range of problems, with a significant speed-up, generally a factor of almost three, over the full ENO method.
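As a toy illustration of such a local oscillation test (not the paper's actual criterion), one could flag interior points where the numerical solution develops a sharp new extremum, and invoke the expensive ENO apparatus only there:

```python
def flag_oscillation_points(u, tol):
    """Flag interior grid points where a sharp local extremum suggests a
    spurious oscillation: the one-sided slopes change sign and the second
    difference is large. Illustrative only."""
    flags = [False] * len(u)
    for i in range(1, len(u) - 1):
        second_diff = u[i - 1] - 2.0 * u[i] + u[i + 1]
        slope_flip = (u[i] - u[i - 1]) * (u[i + 1] - u[i]) < 0.0
        flags[i] = slope_flip and abs(second_diff) > tol
    return flags
```

On smooth data no points are flagged and cheap central differencing is used everywhere, which is the source of the speed-up over applying ENO globally.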
ERIC Educational Resources Information Center
Lin, Chun-Wen
2017-01-01
Aristotle once said that moral deliberation is supposed to find reasons, which may lead us to renounce spurious moral beliefs in pursuit of authentic goodness. The aim of this study was to construct a theoretical model, based on Aristotle's deliberation theory framework, representing college students' attitudes toward volunteering in nursing…
Procop, Gary W; Taege, Alan J; Starkey, Colleen; Tungsiripat, Marisa; Warner, Diane; Schold, Jesse D; Yen-Lieberman, Belinda
2017-09-01
The processing of specimens often occurs in a central processing area within laboratories. We demonstrated that plasma centrifuged in the central laboratory but allowed to remain within the primary tube following centrifugation was associated with spuriously elevated HIV viral loads compared with recentrifugation of the plasma just prior to testing. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Walrand, Stephan; Hesse, Michel; Jamar, François; Lhommel, Renaud
2018-04-01
Our literature survey revealed a physical effect unknown to the nuclear medicine community, i.e. internal bremsstrahlung emission, and also the existence of long energy resolution tails in crystal scintillation. None of these effects has ever been modelled in PET Monte Carlo (MC) simulations. This study investigates whether these two effects could be at the origin of two unexplained observations in 90Y imaging by PET: the increasing tails in the radial profile of true coincidences, and the presence of spurious extrahepatic counts post radioembolization in non-TOF PET and their absence in TOF PET. These spurious extrahepatic counts hamper the microsphere delivery check in liver radioembolization. An acquisition of a 32P vial was performed on a GSO PET system. This is the ideal setup to study the impact of bremsstrahlung x-rays on the true coincidence rate when no positron emission and no crystal radioactivity are present. A MC simulation of the acquisition was performed using Gate-Geant4. MC simulations of non-TOF PET and TOF-PET imaging of a synthetic 90Y human liver radioembolization phantom were also performed. Internal bremsstrahlung and long energy resolution tails inclusion in MC simulations quantitatively predict the increasing tails in the radial profile. In addition, internal bremsstrahlung explains the discrepancy previously observed in bremsstrahlung SPECT between the measure of the 90Y bremsstrahlung spectrum and its simulation with Gate-Geant4. However the spurious extrahepatic counts in non-TOF PET mainly result from the failure of conventional random correction methods in such low count rate studies and poor robustness versus emission-transmission inconsistency. A novel proposed random correction method succeeds in cleaning the spurious extrahepatic counts in non-TOF PET. Two physical effects not considered up to now in nuclear medicine were identified to be at the origin of the unusual 90Y true coincidences radial profile. 
The removal of the spurious extrahepatic counts by TOF reconstruction was theoretically explained by its better robustness against emission-transmission inconsistency. A novel random correction method was proposed to overcome the issue in non-TOF PET. Further studies are needed to assess the robustness of this novel random correction method.
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sweby, P. K.
1995-01-01
The global asymptotic nonlinear behavior of 11 explicit and implicit time discretizations for four 2 x 2 systems of first-order autonomous nonlinear ordinary differential equations (ODEs) is analyzed. The objectives are to gain a basic understanding of the difference in the dynamics of numerics between the scalars and systems of nonlinear autonomous ODEs and to set a baseline global asymptotic solution behavior of these schemes for practical computations in computational fluid dynamics. We show how 'numerical' basins of attraction can complement the bifurcation diagrams in gaining more detailed global asymptotic behavior of time discretizations for nonlinear differential equations (DEs). We show how in the presence of spurious asymptotes the basins of the true stable steady states can be segmented by the basins of the spurious stable and unstable asymptotes. One major consequence of this phenomenon which is not commonly known is that this spurious behavior can result in a dramatic distortion and, in most cases, a dramatic shrinkage and segmentation of the basin of attraction of the true solution for finite time steps. Such distortion, shrinkage and segmentation of the numerical basins of attraction will occur regardless of the stability of the spurious asymptotes, and will occur for unconditionally stable implicit linear multistep methods. In other words, for the same (common) steady-state solution the associated basin of attraction of the DE might be very different from the discretized counterparts and the numerical basin of attraction can be very different from numerical method to numerical method. The results can be used as an explanation for possible causes of error, and slow convergence and nonconvergence of steady-state numerical solutions when using the time-dependent approach for nonlinear hyperbolic or parabolic PDEs.
A quantitative trait locus mixture model that avoids spurious LOD score peaks.
Feenstra, Bjarke; Skovgaard, Ib M
2004-01-01
In standard interval mapping of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. At any given location in the genome, the evidence of a putative QTL is measured by the likelihood ratio of the mixture model compared to a single normal distribution (the LOD score). This approach can occasionally produce spurious LOD score peaks in regions of low genotype information (e.g., widely spaced markers), especially if the phenotype distribution deviates markedly from a normal distribution. Such peaks are not indicative of a QTL effect; rather, they are caused by the fact that a mixture of normals always produces a better fit than a single normal distribution. In this study, a mixture model for QTL mapping that avoids the problems of such spurious LOD score peaks is presented. PMID:15238544
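The LOD statistic described above is a base-10 log-likelihood ratio. A minimal sketch comparing a two-genotype normal mixture to a single normal (parameter names are ours; the maximum-likelihood fitting step is omitted) is:

```python
import math

def norm_logpdf(x, mu, sigma):
    """Log density of a normal distribution."""
    return -0.5 * math.log(2.0 * math.pi * sigma ** 2) \
           - (x - mu) ** 2 / (2.0 * sigma ** 2)

def lod_score(phenos, qtl_probs, mu_a, mu_b, sigma, mu0, sigma0):
    """log10 likelihood ratio of a two-component normal mixture (component
    weights = per-individual QTL genotype probabilities) against a single
    normal; all parameters are assumed already fitted."""
    ll_mix = sum(
        math.log(p * math.exp(norm_logpdf(y, mu_a, sigma))
                 + (1.0 - p) * math.exp(norm_logpdf(y, mu_b, sigma)))
        for y, p in zip(phenos, qtl_probs))
    ll_one = sum(norm_logpdf(y, mu0, sigma0) for y in phenos)
    return (ll_mix - ll_one) / math.log(10.0)
```

When the two component means coincide with the single-normal mean (and variances match), the score is exactly zero; because a fitted mixture never fits worse than a single normal, the maximized score is non-negative, which is the root of the spurious peaks discussed above.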
Quantifying spatial distribution of spurious mixing in ocean models.
Ilıcak, Mehmet
2016-12-01
Numerical mixing is inevitable in ocean models due to tracer advection schemes. Until now, there has been no robust way to identify regions of spurious mixing in ocean models. We propose a new method to compute the spatial distribution of spurious diapycnal mixing in an ocean model. The new method is an extension of the available potential energy density method proposed by Winters and Barkan (2013). We test the new method in lock-exchange and baroclinic-eddies test cases, and can quantify both the amount and the location of numerical mixing. We find that high-shear areas are the main regions susceptible to numerical truncation errors. We also apply the new method to quantify the numerical mixing produced by different horizontal momentum closures, and conclude that Smagorinsky viscosity produces less numerical mixing than Leith viscosity for the same non-dimensional constant.
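A standard building block of such available-potential-energy diagnostics, the adiabatically sorted reference state, can be sketched as follows (equal-volume cells assumed; the authors' density-based extension is more elaborate):

```python
def background_potential_energy(densities, z_levels, g=9.81):
    """Potential energy (per unit volume times height) of the adiabatically
    resorted state: the densest parcel is assigned the lowest height. In a
    closed, adiabatic run any growth of this quantity between outputs
    indicates spurious (numerical) mixing."""
    rho_sorted = sorted(densities, reverse=True)   # densest first...
    z_sorted = sorted(z_levels)                    # ...to the lowest level
    return g * sum(r * z for r, z in zip(rho_sorted, z_sorted))
```

Because the reference state depends only on the sorted density distribution, it is invariant under reversible stirring; only irreversible (physical or numerical) mixing can raise it.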
DOE Office of Scientific and Technical Information (OSTI.GOV)
Habel, R.; Letardi, T.
1963-10-30
In some studies with scintillation chambers, the problem of discriminating between the events generated by one or more ionizing particles and a spontaneous shower between the gaps of the chamber is presented. One element of difference between the two events is the delay of the spurious scintillation with respect to that produced by the passage of a particle. The use of a fast shutter whose open time is of the order of the delay would provide a possible method for the discrimination between true and spurious events. The experimental apparatus used and the types of measurements made to determine if such a shutter arrangement would be feasible are described. (J.S.R.)
Smith, Geoff; Murray, Heather; Brennan, Stephen O
2013-01-01
Commonly used methods for assay of haemoglobin A(1c) (HbA(1c)) are susceptible to interference from the presence of haemoglobin variants. In many systems, the common variants can be identified but scientists and pathologists must remain vigilant for more subtle variants that may result in spuriously high or low HbA(1c) values. It is clearly important to recognize these events whether HbA(1c) is being used as a monitoring tool or, as is increasingly the case, for diagnostic purposes. We report a patient with a rare haemoglobin variant (Hb Sinai-Baltimore) that resulted in spuriously low values of HbA(1c) when assayed using ion exchange chromatography, and the steps taken to elucidate the nature of the variant.
Lahey, Benjamin B; Van Hulle, Carol A; D'Onofrio, Brian M; Rodgers, Joseph Lee; Waldman, Irwin D
2008-08-01
Recent studies suggest that most of what parents know about their adolescent offspring's whereabouts and companions is the result of youth disclosure, rather than information gained through active parental monitoring. This raises the possibility that parental knowledge is spuriously correlated with youth delinquency solely because the most delinquent youth disclose the least information to parents (because they have the most to hide). We tested this spurious association hypothesis using prospective data on offspring of a nationally representative sample of US women, controlling demographic and contextual covariates. In separate analyses, greater parental knowledge of their offspring's peer associations at both 12-13 years and at 14-15 years was associated with lower odds of being in the top 1 standard deviation of youth-reported delinquency at 16-17 years, controlling for delinquency at the earlier ages. The extent to which parents set limits on activities with peers at 14-15 years did not mediate or moderate the association between parental knowledge and delinquency, but it did independently predict future delinquency among adolescents living in high-risk neighborhoods. This suggests that the association between parental knowledge and future delinquency is not solely spurious; rather parental knowledge and limit setting are both meaningful predictors of future delinquency.
Godon, Alban; Genevieve, Franck; Marteau-Tessier, Anne; Zandecki, Marc
2012-01-01
Several situations lead to abnormal haemoglobin measurements or abnormal red blood cell (RBC) counts, including hyperlipemias, agglutinins and cryoglobulins, haemolysis, and elevated white blood cell (WBC) counts. Mean (red) cell volume may also be subject to spurious determination, because of agglutinins (mainly cold), high blood glucose levels, natremia, anticoagulant excess and, at times, technological considerations. An abnormality in one measured parameter eventually leads to abnormal calculated RBC indices: mean cell haemoglobin content is certainly the most important RBC parameter to consider, perhaps as important as the flags generated by the haematology analysers (HA) themselves. In many circumstances, several of the measured parameters from the cell blood count (CBC) may be altered, and the discovery of a spurious change in one parameter frequently means that the validity of the other parameters should be considered. Sensitive flags now allow the identification of several spurious counts, but only the most sophisticated HA have optimal flagging, and simpler ones, especially those without any WBC differential scattergram, do not share the same capacity to detect abnormal results. Reticulocytes are integrated into the CBC in many HA, and several situations may lead to abnormal counts, including abnormal gating, interference from intraerythrocytic particles, erythroblastosis, or high WBC counts.
Sources of spurious force oscillations from an immersed boundary method for moving-body problems
NASA Astrophysics Data System (ADS)
Lee, Jongho; Kim, Jungwoo; Choi, Haecheon; Yang, Kyung-Soo
2011-04-01
When a discrete-forcing immersed boundary method is applied to moving-body problems, it produces spurious force oscillations on the solid body. In the present study, we identify two sources of these force oscillations. One source is the spatial discontinuity in the pressure across the immersed boundary when a grid point located inside the solid body becomes a fluid point as the body moves. The addition of a mass source/sink together with momentum forcing, proposed by Kim et al. [J. Kim, D. Kim, H. Choi, An immersed-boundary finite volume method for simulations of flow in complex geometries, Journal of Computational Physics 171 (2001) 132-150], reduces the spurious force oscillations by alleviating this pressure discontinuity. The other source is the temporal discontinuity in the velocity at grid points where fluid becomes solid as the body moves. The magnitude of this velocity discontinuity decreases with decreasing grid spacing near the immersed boundary. Four moving-body problems are simulated by varying the grid spacing at a fixed computational time step and at a constant CFL number, respectively. It is found that the spurious force oscillations decrease with decreasing grid spacing and increasing computational time step size, but they depend more on the grid spacing than on the time step size.
Patounakis, George; Hill, Micah J
2018-06-01
The purpose of the current review is to describe the common pitfalls in design and statistical analysis of reproductive medicine studies. It serves to guide both authors and reviewers toward reducing the incidence of spurious statistical results and erroneous conclusions. The large amount of data gathered in IVF cycles leads to problems with multiplicity, multicollinearity, and overfitting of regression models. Furthermore, the use of the word 'trend' to describe nonsignificant results has increased in recent years. Finally, methods to accurately account for female age in infertility research models are becoming more common and necessary. The pitfalls of study design and analysis reviewed provide a framework for authors and reviewers to approach clinical research in the field of reproductive medicine. By providing a more rigorous approach to study design and analysis, the literature in reproductive medicine will have more reliable conclusions that can stand the test of time.
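The multiplicity pitfall can be made concrete with a short calculation (a minimal sketch; the 20-endpoint example is hypothetical, not a figure from the review):

```python
# Multiplicity: testing many endpoints at alpha = 0.05 inflates the chance
# of at least one spurious "significant" result across the analysis.
def familywise_error(n_tests, alpha=0.05):
    """P(at least one false positive) across independent tests."""
    return 1.0 - (1.0 - alpha) ** n_tests

p_any = familywise_error(20)                     # ~0.64 for 20 endpoints
p_bonf = familywise_error(20, alpha=0.05 / 20)   # Bonferroni keeps it below 0.05
```

With a Bonferroni-corrected threshold the family-wise rate stays under the nominal 5%, at the cost of power.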
Response to Comment on "Does the Earth Have an Adaptive Infrared IRIS?"
NASA Technical Reports Server (NTRS)
Bell, Thomas L.; Chou, Ming-Dah; Lindzen, Richard S.; Hou, Arthur Y.
2001-01-01
Harrison's (2001) Comment on the methodology of Lindzen et al. (2001) has prompted re-examination of several aspects of the study. Probably the most significant disagreement in our conclusions is due to our different approaches to minimizing the influence of long-time-scale variations in the variables A and T on the results. Given the strength of the annual cycle and the 20-month period covered by the data, we believe that removing monthly means is a better approach to minimizing the long-time-scale behavior of the data than removal of the linear trend, which might actually add spurious long-time-scale variability into the modified data. We have also indicated how our methods of establishing statistical significance differ. More definitive conclusions may only be possible after more data have been analyzed, but we feel that our results are robust enough to encourage further study of this phenomenon.
No Conclusive Evidence for Transits of Proxima b in MOST Photometry
NASA Astrophysics Data System (ADS)
Kipping, David M.; Cameron, Chris; Hartman, Joel D.; Davenport, James R. A.; Matthews, Jaymie M.; Sasselov, Dimitar; Rowe, Jason; Siverd, Robert J.; Chen, Jingjing; Sandford, Emily; Bakos, Gáspár Á.; Jordán, Andrés; Bayliss, Daniel; Henning, Thomas; Mancini, Luigi; Penev, Kaloyan; Csubry, Zoltan; Bhatti, Waqas; Da Silva Bento, Joao; Guenther, David B.; Kuschnig, Rainer; Moffat, Anthony F. J.; Rucinski, Slavek M.; Weiss, Werner W.
2017-03-01
The analysis of Proxima Centauri’s radial velocities recently led Anglada-Escudé et al. to claim the presence of a low-mass planet orbiting the Sun’s nearest star once every 11.2 days. Although the a priori probability that Proxima b transits its parent star is just 1.5%, the potential impact of such a discovery would be considerable. Independent of recent radial velocity efforts, we observed Proxima Centauri for 12.5 days in 2014 and 31 days in 2015 with the Microvariability and Oscillations of STars (MOST) space telescope. We report here that we cannot make a compelling case that Proxima b transits in our precise photometric time series. Imposing an informative prior on the period and phase, we do detect a candidate signal with the expected depth. However, perturbing the phase prior across 100 evenly spaced intervals reveals one strong false positive and one weaker instance. We estimate a false-positive rate of at least a few percent and a much higher false-negative rate of 20%-40%, likely caused by the very high flare rate of Proxima Centauri. Comparing our candidate signal to HATSouth ground-based photometry reveals that the signal is somewhat, but not conclusively, disfavored (1σ-2σ), leading us to argue that the signal is most likely spurious. We expect that infrared photometric follow-up could more conclusively test the existence of this candidate signal, owing to the suppression of flare activity and the impressive infrared brightness of the parent star.
Turner, T H; Renfroe, J B; Elm, J; Duppstadt-Delambo, A; Hinson, V K
2016-01-01
Ability to identify change is crucial for measuring response to interventions and tracking disease progression. Beyond psychometrics, investigations of Parkinson's disease with mild cognitive impairment (PD-MCI) must consider fluctuating medication, motor, and mental status. One solution is to employ 90% reliable change indices (RCIs) from test manuals to account for measurement error and practice effects. The current study examined the robustness of 90% RCIs for 19 commonly used executive function tests in 14 PD-MCI subjects assigned to the placebo arm of a 10-week randomized controlled trial of atomoxetine in PD-MCI. Using 90% RCIs, the typical participant showed spurious improvement on one measure and spurious decline on another. Reliability estimates from healthy adult standardization samples and PD-MCI were similar. In contrast to healthy adult samples, practice effects were minimal in this PD-MCI group. Separate 90% RCIs based on the PD-MCI sample did not further reduce the error rate. In the present study, application of 90% RCIs based on healthy adult standardization samples effectively reduced misidentification of change in a PD-MCI sample. Our findings support continued application of 90% RCIs when using executive function tests to assess change in neurological populations with fluctuating status.
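The 90% RCI the study applies is, in the usual Jacobson-Truax form, a band of ±1.645 standard errors of the difference around the expected practice effect (a sketch with hypothetical test parameters, not values from the trial):

```python
import math

def rci_90_band(sd, reliability, practice_effect=0.0):
    """90% reliable change band for a retest difference score."""
    sem = sd * math.sqrt(1.0 - reliability)   # standard error of measurement
    se_diff = math.sqrt(2.0) * sem            # SE of the difference of two scores
    half_width = 1.645 * se_diff              # 90% two-tailed critical value
    return (practice_effect - half_width, practice_effect + half_width)

# Hypothetical test: SD = 10, test-retest reliability 0.80, 1-point practice gain.
lo, hi = rci_90_band(10.0, 0.80, practice_effect=1.0)
# Retest differences inside (lo, hi) are consistent with measurement error
# plus practice alone; only changes outside the band count as genuine.
```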
2015-03-06
was formed by rounded ZrO2 grains containing W traces and covered by acicular H3BO3 crystals deriving from hydration of B2O3 after exposure to... TaSi2 grains tended to form large pockets as wide as 3-8 μm. Other spurious phases, formed upon decomposition of the additive, were identified as SiC
Spurious Numerical Solutions Of Differential Equations
NASA Technical Reports Server (NTRS)
Lafon, A.; Yee, H. C.
1995-01-01
Paper presents detailed study of spurious steady-state numerical solutions of differential equations that contain nonlinear source terms. Main objectives of this study are (1) to investigate how well numerical steady-state solutions of model nonlinear reaction/convection boundary-value problem mimic true steady-state solutions and (2) to relate findings of this investigation to implications for interpretation of numerical results from computational-fluid-dynamics algorithms and computer codes used to simulate reacting flows.
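The flavor of such spurious steady behavior can be reproduced in a few lines: explicit Euler applied to the logistic ODE u' = u(1 - u) admits, for large enough step size, a stable period-2 orbit that solves no differential equation (a generic textbook illustration, not the paper's reaction/convection problem):

```python
# Explicit Euler on u' = u(1 - u): the true steady states are u = 0 and u = 1,
# but for step size h > 2 the fixed point u = 1 of the discrete map loses
# stability and the iteration locks onto a spurious period-2 cycle.
def euler_step(u, h):
    return u + h * u * (1.0 - u)

def iterate(u0, h, n=2000):
    u = u0
    for _ in range(n):
        u = euler_step(u, h)
    return u

h = 2.2
a = iterate(0.3, h)      # one point of the attractor reached from u0 = 0.3
b = euler_step(a, h)     # the other point of the cycle
# a != b, yet euler_step(b, h) returns a: a stable 2-cycle of the scheme
# that mimics a steady oscillation absent from the continuous problem.
```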
Current Scenario of Spurious and Substandard Medicines in India: A Systematic Review
Khan, A. N.; Khar, R. K.
2015-01-01
Globally, every country is a victim of substandard or spurious drugs, which result in life-threatening issues, financial loss for consumers and manufacturers, and loss of trust in the health system. The aim of this enumerative review was to probe the extent of poor-quality drugs, their consequences for public health, and the preventive measures taken by the Indian pharmaceutical regulatory system. Government and non-government studies, literature and news were gathered from journals and authentic websites. All data from 2000 to 2013 were compiled and interpreted to reveal the real story of poor-quality drugs in India. For minimizing spurious/falsely-labelled/falsified/counterfeit drugs or not-of-standard-quality drugs, there is an urgent requirement for more stringent regulation and legal action against the problem. However, India has taken some preventive steps to fight against poor-quality drugs and to protect and promote public health. PMID:25767312
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piepel, Gregory F.
2013-08-01
This article discusses the paper "Experimental Design for Engineering Dimensional Analysis" by Albrecht et al. (2013, Technometrics). That paper provides an overview of engineering dimensional analysis (DA) for use in developing DA models. The paper proposes methods for generating model-robust experimental designs to support fitting DA models. The specific approach is to develop a design that maximizes the efficiency of a specified empirical model (EM) in the original independent variables, subject to a minimum efficiency for a DA model expressed in terms of dimensionless groups (DGs). This discussion article raises several issues and makes recommendations regarding the proposed approach. Also, the concept of spurious correlation is raised and discussed. Spurious correlation results from the response DG being calculated using several independent variables that are also used to calculate predictor DGs in the DA model.
Ordered delinquency: the "effects" of birth order on delinquency.
Cundiff, Patrick R
2013-08-01
Juvenile delinquency has long been associated with birth order in popular culture. While images of the middle child acting out for attention or the rebellious youngest child readily spring to mind, little research has attempted to explain why. Drawing from Adlerian birth order theory and Sulloway's born-to-rebel hypothesis, I examine the relationship between birth order and a variety of delinquent outcomes during adolescence. Following some recent research on birth order and intelligence, I use new methods that allow for the examination of between-individual and within-family differences to better address the potential spurious relationship. My findings suggest that contrary to popular belief, the relationship between birth order and delinquency is spurious. Specifically, I find that birth order effects on delinquency are spurious and largely products of the analytic methods used in previous tests of the relationship. The implications of this finding are discussed.
Apodization of spurs in radar receivers using multi-channel processing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doerry, Armin W.; Bickel, Douglas L.
The various technologies presented herein relate to identification and mitigation of spurious energies or signals (aka "spurs") in radar imaging. Spurious energy in received radar data can be a consequence of non-ideal component and circuit behavior. Such behavior can result from I/Q imbalance, nonlinear component behavior, additive interference (e.g. cross-talk, etc.), etc. The manifestation of the spurious energy in a radar image (e.g., a range-Doppler map) can be influenced by appropriate pulse-to-pulse phase modulation. Comparing multiple images which have been processed using the same data but with different signal paths and modulations enables identification of undesired spurs, with subsequent cropping or apodization of the undesired spurs from a radar image. Spurs can be identified by comparison with a threshold energy. Removal of an undesired spur enables enhanced identification of true targets in a radar image.
NASA Astrophysics Data System (ADS)
Ji, Xiaojun; Xiao, Qiang; Chen, Jing; Wang, Hualei; Omori, Tatsuya; Changjun, Ahn
2017-05-01
The propagation characteristics of surface acoustic waves (SAWs) on rotated Y-cut X-propagating 0.67Pb(Mg1/3Nb2/3)O3-0.33PbTiO3 (PMN-33%PT) substrate are theoretically analyzed. It is shown that, besides the existence of a shear horizontal (SH) SAW with an ultrahigh electromechanical coupling factor K2, a Rayleigh SAW also exists, causing a strong spurious response. The calculated results showed that the spurious Rayleigh SAW can be sufficiently suppressed by properly selecting the electrode material and thickness at an optimized rotation angle, while maintaining the large K2 of the SH SAW. A fractional -3 dB bandwidth of 47% is achievable for a ladder-type filter constructed from Au IDT/48°YX-PMN-33%PT resonators.
Enhancement of a 2D front-tracking algorithm with a non-uniform distribution of Lagrangian markers
NASA Astrophysics Data System (ADS)
Febres, Mijail; Legendre, Dominique
2018-04-01
The 2D front tracking method is enhanced to control the development of spurious velocities for non-uniform distributions of markers. The hybrid formulation of Shin et al. (2005) [7] is considered. A new tangent calculation is proposed for the calculation of the tension force at markers. A new reconstruction method is also proposed to manage non-uniform distributions of markers. We show that for both the static and the translating spherical drop test cases the spurious currents are reduced to machine precision. We also show that the ratio of the Lagrangian grid size Δs to the Eulerian grid size Δx has to satisfy Δs / Δx > 0.2 to ensure such a low level of spurious velocity. The method is found to provide very good agreement with benchmark test cases from the literature.
The role of spurious correlation in the development of a komatiite alteration model
NASA Astrophysics Data System (ADS)
Butler, John C.
1986-03-01
Current procedures for assessing the degree of alteration in komatiites stress the construction of variation diagrams in which ratios of molecular proportions of the oxides are the axes of reference. For example, it has been argued that unaltered komatiites related to each other by olivine fractionation will display a linear variation with a slope of 0.5 in the space defined by [SiO2/TiO2] and [(MgO+FeO)/TiO2]. Extensive metasomatism is expected to destroy such a consistent pattern. Previous workers have tended to make use of ratios that have a common denominator. It has been known for a long time that ratios formed from uncorrelated variables will be correlated (a so-called spurious correlation) if both ratios have a common denominator. The magnitude of this spurious correlation is a function of the coefficients of variation of the measured amounts of the variables. If the denominator component has a coefficient of variation that is larger than those of the numerator components, the spurious correlation will be close to unity; that is, there will be nearly a straight-line relationship. As a demonstration, a fictitious data set has been simulated so that the means and variances of SiO2, TiO2, and (MgO + FeO) match those of an observed data set but the components themselves are uncorrelated. A plot of (SiO2/TiO2) versus [(MgO + FeO)/TiO2] of these simulated data produces a distribution of points that appears every bit as convincing an illustration of the lack of significant metasomatism as does the plot of the observed data. The assessment of the strength of linear association is a test of the observed correlation against an expected value (the null value) of zero. When a spurious correlation arises as a result of the formulation of ratios with a common denominator, zero is clearly an inappropriate choice as the null. It can be argued that the spurious correlation is, in fact, a more suitable null value. 
An analysis of komatiites from Gorgona Island and the Barberton suite reveals that the strong linear association could have been produced by forming ratios from uncorrelated starting chemical components. Ratios without parts in common are to be preferred in the construction of petrogenetic models.
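The common-denominator effect is easy to reproduce numerically (a simulation in the spirit of the paper's fictitious data set; the means and coefficients of variation below are illustrative, not the observed komatiite values):

```python
# Three mutually uncorrelated "oxide" variables still produce strongly
# correlated ratios when both ratios share a common denominator whose
# coefficient of variation exceeds those of the numerators.
import random

random.seed(1)
n = 500
x = [random.gauss(46.0, 2.0) for _ in range(n)]  # SiO2-like numerator
y = [random.gauss(30.0, 3.0) for _ in range(n)]  # (MgO+FeO)-like numerator
z = [random.gauss(0.4, 0.1) for _ in range(n)]   # TiO2-like common denominator

def corr(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((p - ma) * (q - mb) for p, q in zip(a, b))
    va = sum((p - ma) ** 2 for p in a)
    vb = sum((q - mb) ** 2 for q in b)
    return cov / (va * vb) ** 0.5

r_raw = corr(x, y)                               # near zero by construction
r_ratio = corr([p / d for p, d in zip(x, z)],
               [q / d for q, d in zip(y, z)])    # strong spurious correlation
```

Pearson's approximation predicts the spurious correlation from the coefficients of variation alone; with the values above it is roughly 0.9, so the ratio plot looks convincingly linear despite zero underlying association.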
NASA Technical Reports Server (NTRS)
Knyazikhin, Yuri; Lewis, Philip; Disney, Mathias I.; Stenberg, Pauline; Mottus, Matti; Rautianinen, Miina; Kaufmann, Robert K.; Marshak, Alexander; Schull, Mitchell A.; Carmona, Pedro Latorre;
2013-01-01
Townsend et al. (1) agree that we explained that the apparent relationship (2) between foliar nitrogen (%N) and near-infrared (NIR) canopy reflectance was largely attributable to structure (which is in turn caused by variation in fraction of broadleaf canopy). Our conclusion that the observed correlation with %N was spurious (i.e., lacking a causal basis) is, thus, clearly justified: we demonstrated that structure explained the great majority of observed correlation, where the structural influence was derived precisely via reconciling the observed correlation with radiative-transfer theory. What this also suggests is that such correlations, although observed, do not uniquely provide information on canopy biochemical constituents.
Landscape community genomics: understanding eco-evolutionary processes in complex environments
Hand, Brian K.; Lowe, Winsor H.; Kovach, Ryan P.; Muhlfeld, Clint C.; Luikart, Gordon
2015-01-01
Extrinsic factors influencing evolutionary processes are often categorically lumped into interactions that are environmentally (e.g., climate, landscape) or community-driven, with little consideration of the overlap or influence of one on the other. However, genomic variation is strongly influenced by complex and dynamic interactions between environmental and community effects. Failure to consider both effects on evolutionary dynamics simultaneously can lead to incomplete, spurious, or erroneous conclusions about the mechanisms driving genomic variation. We highlight the need for a landscape community genomics (LCG) framework to help to motivate and challenge scientists in diverse fields to consider a more holistic, interdisciplinary perspective on the genomic evolution of multi-species communities in complex environments.
Issues in characterizing resting energy expenditure in obesity and after weight loss
Bosy-Westphal, Anja; Braun, Wiebke; Schautz, Britta; Müller, Manfred J.
2013-01-01
Limitations of current methods: Normalization of resting energy expenditure (REE) for body composition using the 2-compartment model of fat mass (FM) and fat-free mass (FFM) has inherent limitations for the interpretation of REE and may lead to erroneous conclusions when comparing people with a wide range of adiposity, as well as before and after substantial weight loss. Experimental objectives: We compared different methods of REE normalization: (1) for FFM and FM, (2) by the inclusion of %FM as a measure of adiposity, and (3) based on organ and tissue masses. Results were compared between healthy subjects with different degrees of adiposity as well as within subject before and after weight loss. Results: Normalizing REE from an “REE vs. FFM and FM equation” that (1) was derived in obese participants and applied to lean people or (2) was derived before weight loss and applied after weight loss leads to the erroneous conclusion of a lower metabolic rate (i) in lean persons and (ii) after weight loss. This is revealed by the normalization of REE for organ and tissue masses, which was not significantly different between lean and obese or between baseline and after weight loss. There is evidence for an increasing specific metabolic rate of FFM with increasing %FM that could be explained by a higher contribution of liver, kidney and heart mass to FFM in obesity. Using “REE vs. FFM and FM equations” specific for different levels of adiposity (%FM) eliminated differences in REE before and after weight loss in women. Conclusion: The most established method for normalization of REE based on FFM and FM may lead to spurious conclusions about metabolic rate in obesity and the phenomenon of weight loss-associated adaptive thermogenesis. Using %FM-specific REE prediction from FFM and FM in kg may improve the normalization of REE when subjects with wide differences in %FM are investigated. PMID:23532370
Elementary dispersion analysis of some mimetic discretizations on triangular C-grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Korn, P., E-mail: peter.korn@mpimet.mpg.de; Danilov, S.; A.M. Obukhov Institute of Atmospheric Physics, Moscow
2017-02-01
Spurious modes supported by triangular C-grids limit their application for modeling large-scale atmospheric and oceanic flows. Their behavior can be modified within a mimetic approach that generalizes the scalar product underlying the triangular C-grid discretization. The mimetic approach provides a discrete continuity equation which operates on an averaged combination of normal edge velocities instead of the normal edge velocities proper. An elementary analysis of the wave dispersion of the new discretization for Poincaré, Rossby and Kelvin waves shows that, although spurious Poincaré modes are preserved, their frequency tends to zero in the limit of small wavenumbers, which removes the divergence noise in this limit. However, the frequencies of spurious and physical modes become close on shorter scales, indicating that spurious modes can be excited unless high-frequency short-scale motions are effectively filtered in numerical codes. We argue that filtering by viscous dissipation is more efficient in the mimetic approach than in the standard C-grid discretization. Lumping of the mass matrices appearing with the velocity time derivative in the mimetic discretization only slightly reduces the accuracy of the wave dispersion and can be used in practice. Thus, the mimetic approach cures some difficulties of the traditional triangular C-grid discretization but may still need appropriately tuned viscosity to filter small scales and high frequencies in solutions of the full primitive equations when these are excited by nonlinear dynamics.
Entropy-Based Approach To Nonlinear Stability
NASA Technical Reports Server (NTRS)
Merriam, Marshal L.
1991-01-01
NASA technical memorandum suggests schemes for numerical solution of differential equations of flow can be made more accurate and robust by invoking the second law of thermodynamics. Proposes that, instead of using artificial viscosity to suppress such unphysical solutions as spurious numerical oscillations and nonlinear instabilities, one should formulate the equations so that the rate of production of entropy within each cell of the computational grid is nonnegative, as required by the second law.
Yamaguchi, Naohito
2013-01-01
The International Agency for Research on Cancer of World Health Organization announced in May 2011 the results of evaluation of carcinogenicity of radio-frequency electromagnetic field. In the overall evaluation, the radio-frequency electromagnetic field was classified as "possibly carcinogenic to humans", on the basis of the fact that the evidence provided by epidemiological studies and animal bioassays was limited. Regarding epidemiology, the results of the Interphone Study, an international collaborative case-control study, were of special importance, together with the results of a prospective cohort study in Denmark, case-control studies in several countries, and a case-case study in Japan. The evidence obtained was considered limited, because the increased risk observed in some studies was possibly spurious, caused by selection bias or recall bias as well as residual effects of confounding factors. Further research studies, such as large-scale multinational epidemiological studies, are crucially needed to establish a sound evidence base from which a more conclusive judgment can be made for the carcinogenicity of the radio-frequency electromagnetic field.
Assessing the Probability that a Finding Is Genuine for Large-Scale Genetic Association Studies
Kuo, Chia-Ling; Vsevolozhskaya, Olga A.; Zaykin, Dmitri V.
2015-01-01
Genetic association studies routinely involve massive numbers of statistical tests accompanied by P-values. Whole genome sequencing technologies increased the potential number of tested variants to tens of millions. The more tests are performed, the smaller the P-value required to be deemed significant. However, a small P-value is not equivalent to small chances of a spurious finding, and significance thresholds may fail to serve as efficient filters against false results. While the Bayesian approach can provide a direct assessment of the probability that a finding is spurious, its adoption in association studies has been slow, due in part to the ubiquity of P-values and the automated way they are, as a rule, produced by software packages. Attempts to design simple ways to convert an association P-value into the probability that a finding is spurious have been met with difficulties. The False Positive Report Probability (FPRP) method has gained increasing popularity. However, FPRP is not designed to estimate the probability for a particular finding, because it is defined for an entire region of hypothetical findings with P-values at least as small as the one observed for that finding. Here we propose a method that lets researchers extract the probability that a finding is spurious directly from a P-value. Considering the counterpart of that probability, we term this method POFIG: the Probability that a Finding is Genuine. Our approach shares FPRP's simplicity, but gives a valid probability that a finding is spurious given a P-value. In addition to straightforward interpretation, POFIG has desirable statistical properties. The POFIG average across a set of tentative associations provides an estimated proportion of false discoveries in that set. POFIGs are easily combined across studies and are immune to multiple testing and selection bias. We illustrate an application of POFIG method via analysis of GWAS associations with Crohn's disease. PMID:25955023
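For contrast, the FPRP-style quantity the abstract critiques can be written in a few lines (a Wacholder-style formula; the prior probability and power are hypothetical inputs, and this is not the POFIG estimator itself):

```python
# FPRP: probability that a "significant" association is spurious, given the
# significance threshold, statistical power, and prior probability that a
# tested variant is truly associated.
def fprp(alpha, power, prior_prob):
    """P(no true association | test significant at level alpha)."""
    false_pos = alpha * (1.0 - prior_prob)
    true_pos = power * prior_prob
    return false_pos / (false_pos + true_pos)

# GWAS-scale prior: 1 in 100,000 variants truly associated.  Even a result
# significant at 1e-4 with 80% power is almost certainly spurious.
risk = fprp(1e-4, 0.8, 1e-5)   # ~0.93
```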
The role of spurious correlation in the development of a komatiite alteration model
NASA Astrophysics Data System (ADS)
Butler, John C.
1986-11-01
Procedures for detecting alterations in komatiites are described. The research of Pearson (1897) on spurious correlation and of Chayes (1949, 1971) on ratio correlation is reviewed. The equations for the ratio correlation procedure are provided. The ratio correlation procedure is applied to the komatiites from Gorgona Island and the Barberton suite. Plots of the molecular proportion ratios of (FeO + MgO)/TiO2 versus SiO2/TiO2, and correlation coefficients for the komatiites are presented and analyzed.
Reasoning about energy in qualitative simulation
NASA Technical Reports Server (NTRS)
Fouche, Pierre; Kuipers, Benjamin J.
1992-01-01
While possible behaviors of a mechanism that are consistent with an incomplete state of knowledge can be predicted through qualitative modeling and simulation, spurious behaviors corresponding to no solution of any ordinary differential equation consistent with the model may be generated. The present method for energy-related reasoning eliminates an important source of spurious behaviors, as demonstrated by its application to a nonlinear, proportional-integral controller. It is shown that qualitative properties of such a system, such as stability and zero-offset control, are captured by the simulation.
Nonlinear dynamics and numerical uncertainties in CFD
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sweby, P. K.
1996-01-01
The application of nonlinear dynamics to improve the understanding of numerical uncertainties in computational fluid dynamics (CFD) is reviewed. Elementary examples in the use of dynamics to explain the nonlinear phenomena and spurious behavior that occur in numerics are given. The role of dynamics in understanding the long-time behavior of numerical integrations and the nonlinear stability, convergence, and reliability of using time-marching approaches for obtaining steady-state numerical solutions in CFD is explained. The study is complemented with examples of spurious behavior observed in CFD computations.
NASA Astrophysics Data System (ADS)
Ghods, M.; Lauer, M.; Upadhyay, S. R.; Grugel, R. N.; Tewari, S. N.; Poirier, D. R.
2018-04-01
Formation of spurious grains during directional solidification (DS) of Al-7 wt.% Si and Al-19 wt.% Cu alloys through an abrupt increase in cross-sectional area has been examined by experiments and by numerical simulations. Stray grains were observed in the Al-19 wt.% Cu samples and almost none in the Al-7 wt.% Si samples. The locations of the stray grains correlate well with regions where the numerical solutions indicate the solute-rich melt to be flowing up the thermal gradient faster than the isotherm velocity. It is proposed that spurious grain formation occurred by fragmentation of slender tertiary dendrite arms, enhanced by thermosolutal convection. In Al-7 wt.% Si, the dendrite fragments sink in the surrounding melt and get trapped in the dendritic array growing around them, and therefore they do not grow further. In the Al-19 wt.% Cu alloy, on the other hand, the dendrite fragments float in the surrounding melt and some find conducive thermal conditions for further growth and become stray grains.
On the inherent competition between valid and spurious inductive inferences in Boolean data
NASA Astrophysics Data System (ADS)
Andrecut, M.
Inductive inference is the process of extracting general rules from specific observations. This problem also arises in the analysis of biological networks, such as genetic regulatory networks, where the interactions are complex and the observations are incomplete. A typical task in these problems is to extract general interaction rules, as combinations of Boolean covariates, that explain a measured response variable. The inductive inference process can be considered as an incompletely specified Boolean function synthesis problem. This incompleteness of the problem will also generate spurious inferences, which are a serious threat to valid inductive inference rules. Using random Boolean data as a null model, here we attempt to measure the competition between valid and spurious inductive inference rules from a given data set. We formulate two greedy search algorithms, which synthesize a given Boolean response variable in a sparse disjunctive normal form and a sparse generalized algebraic normal form of the variables from the observation data, respectively, and we evaluate their performance numerically.
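The greedy synthesis step described above can be sketched as a set-cover-style search. The following is a minimal illustration, not the authors' algorithm: terms are conjunctions of at most `max_literals` literals, a term is admissible only if it fires on no negative observation, and on random Boolean data (the null model) any rule found is by construction spurious.

```python
import itertools
import random

def greedy_dnf(X, y, max_literals=2):
    """Greedily build a sparse DNF: repeatedly add the conjunction of at most
    `max_literals` literals that covers the most remaining positive rows
    while covering no negative rows."""
    n_vars = len(X[0])
    # a literal (i, v) means "x[i] == v"
    literals = [(i, v) for i in range(n_vars) for v in (0, 1)]
    uncovered = {r for r, label in enumerate(y) if label == 1}
    negatives = [r for r, label in enumerate(y) if label == 0]
    terms = []
    while uncovered:
        best, best_cover = None, set()
        for size in range(1, max_literals + 1):
            for term in itertools.combinations(literals, size):
                if any(all(X[r][i] == v for i, v in term) for r in negatives):
                    continue  # term wrongly fires on a negative row
                cover = {r for r in uncovered
                         if all(X[r][i] == v for i, v in term)}
                if len(cover) > len(best_cover):
                    best, best_cover = term, cover
        if best is None:
            break  # no consistent term left at this size
        terms.append(best)
        uncovered -= best_cover
    return terms

# On random Boolean data, any rule the search returns is a spurious inference:
random.seed(0)
X = [[random.randint(0, 1) for _ in range(6)] for _ in range(16)]
y = [random.randint(0, 1) for _ in range(16)]
rules = greedy_dnf(X, y, max_literals=3)
```

Counting how many random data sets yield nonempty `rules`, and at what term sizes, gives a crude null distribution against which rules found on real data can be compared.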
Portable Integrated Wireless Device Threat Assessment to Aircraft Radio Systems
NASA Technical Reports Server (NTRS)
Salud, Maria Theresa P.; Williams, Reuben A. (Technical Monitor)
2004-01-01
An assessment was conducted on multiple wireless local area network (WLAN) devices using the three wireless standards for spurious radiated emissions to determine their threat to aircraft radio navigation systems. The measurement process, data, and analysis are provided for devices tested using IEEE 802.11a, IEEE 802.11b, and Bluetooth, as well as data from portable laptops/tablet PCs and PDAs (a grouping known as PEDs). A comparison was made between wireless LAN devices and portable electronic devices. Spurious radiated emissions were investigated in the radio frequency bands for the following aircraft systems: Instrument Landing System Localizer and Glideslope, Very High Frequency (VHF) Communication, VHF Omnidirectional Range, Traffic Collision Avoidance System, Air Traffic Control Radar Beacon System, Microwave Landing System, and Global Positioning System. Since several of the contiguous navigation systems were grouped under one encompassing measurement frequency band, there were five measurement frequency bands in which spurious radiated emissions data were collected for the PEDs and WLAN devices. The report also provides a comparison between the emissions data and regulatory emission limits.
Relationship between sampling volume of primary serum tubes and spurious hemolysis.
Lippi, Giuseppe; Musa, Roberta; Battistelli, Luisita; Cervellin, Gianfranco
2012-01-01
We planned a study to establish whether spurious hemolysis may be present in low-volume tubes or partially filled tubes. Four serum tubes were collected in sequence from 20 healthy volunteers: a 4.0 mL, 13 x 75 mm discard tube; a 6.0 mL, 13 x 100 mm tube half-filled; a 4.0 mL, 13 x 75 mm tube full-draw; and a 6.0 mL, 13 x 100 mm tube full-draw. Serum was separated and immediately tested for hemolysis index (HI), potassium, aspartate aminotransferase (AST), and lactate dehydrogenase (LDH). The HI always remained below the limit of detection of the method (< 0.5 g/L) in all tubes. No statistically significant differences were recorded in any parameter except potassium, which increased by 0.10 mmol/L in 4 mL full-draw tubes. However, no clinically significant variation was recorded in any tube. The results suggest that all types of tubes tested might be used interchangeably in terms of risk of spurious hemolysis.
Fine-scale patterns of population stratification confound rare variant association tests.
O'Connor, Timothy D; Kiezun, Adam; Bamshad, Michael; Rich, Stephen S; Smith, Joshua D; Turner, Emily; Leal, Suzanne M; Akey, Joshua M
2013-01-01
Advances in next-generation sequencing technology have enabled systematic exploration of the contribution of rare variation to Mendelian and complex diseases. Although it is well known that population stratification can generate spurious associations with common alleles, its impact on rare variant association methods remains poorly understood. Here, we performed exhaustive coalescent simulations with demographic parameters calibrated from exome sequence data to evaluate the performance of nine rare variant association methods in the presence of fine-scale population structure. We find that all methods have an inflated spurious association rate for parameter values that are consistent with levels of differentiation typical of European populations. For example, at a nominal significance level of 5%, some test statistics have a spurious association rate as high as 40%. Finally, we empirically assess the impact of population stratification in a large data set of 4,298 European American exomes. Our results have important implications for the design, analysis, and interpretation of rare variant genome-wide association studies.
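The mechanism by which stratification generates spurious associations can be illustrated with a toy simulation (hypothetical parameters, not the paper's coalescent setup): two subpopulations differ in both allele frequency and disease prevalence, so pooling them produces a case-control allele-frequency difference even though genotype and phenotype are independent within each group.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 20000  # individuals per subpopulation

def simulate(freq, prevalence):
    # genotype (variant allele count) and phenotype are drawn independently,
    # so within each subpopulation there is truly no association
    g = rng.binomial(2, freq, size=n)
    y = rng.binomial(1, prevalence, size=n)
    return g, y

# two subpopulations differing in BOTH allele frequency and disease rate
g1, y1 = simulate(freq=0.05, prevalence=0.05)
g2, y2 = simulate(freq=0.30, prevalence=0.20)

def case_control_freq_diff(g, y):
    # difference in variant allele frequency between cases and controls
    return g[y == 1].mean() / 2 - g[y == 0].mean() / 2

within = max(abs(case_control_freq_diff(g1, y1)),
             abs(case_control_freq_diff(g2, y2)))
pooled = case_control_freq_diff(np.concatenate([g1, g2]),
                                np.concatenate([y1, y2]))
```

The pooled difference is large (cases are enriched for the high-frequency, high-prevalence subpopulation) while each within-group difference is sampling noise, which is exactly the confounding that rare variant burden tests inherit when fine-scale structure is ignored.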
Intragenic DNA methylation prevents spurious transcription initiation.
Neri, Francesco; Rapelli, Stefania; Krepelova, Anna; Incarnato, Danny; Parlato, Caterina; Basile, Giulia; Maldotti, Mara; Anselmi, Francesca; Oliviero, Salvatore
2017-03-02
In mammals, DNA methylation occurs mainly at CpG dinucleotides. Methylation of the promoter suppresses gene expression, but the functional role of gene-body DNA methylation in highly expressed genes has yet to be clarified. Here we show that, in mouse embryonic stem cells, Dnmt3b-dependent intragenic DNA methylation protects the gene body from spurious RNA polymerase II entry and cryptic transcription initiation. Using different genome-wide approaches, we demonstrate that this Dnmt3b function is dependent on its enzymatic activity and recruitment to the gene body by H3K36me3. Furthermore, the spurious transcripts can either be degraded by the RNA exosome complex or capped, polyadenylated, and delivered to the ribosome to produce aberrant proteins. Elongating RNA polymerase II therefore triggers an epigenetic crosstalk mechanism that involves SetD2, H3K36me3, Dnmt3b and DNA methylation to ensure the fidelity of gene transcription initiation, with implications for intragenic hypomethylation in cancer.
An Investigation and Interpretation of Selected Topics in Uncertainty Reasoning
1989-12-01
Characterizing secondary uncertainty as spurious evidence and including it in the inference process, it was shown that probability ratio graphs are a...in the inference process has great impact on the computational complexity of an inference process. An Investigation and Interpretation of...Systems," he outlines a five-step process that incorporates Bayesian reasoning in the development of the expert system rule base: 1. A group of
A Multi Agent System for Flow-Based Intrusion Detection
2013-03-01
Student t-test, as it is less likely to spuriously indicate significance because of the presence of outliers [128]. We use the MATLAB ranksum function [77...effectiveness of self-organization and “entangled hierarchies” for accomplishing scenario objectives. One of the interesting features of SOMAS is the ability...cross-validation and automatic model selection. It has interfaces for Java, Python, R, Splus, MATLAB, Perl, Ruby, and LabVIEW. Kernels: linear
Coval: Improving Alignment Quality and Variant Calling Accuracy for Next-Generation Sequencing Data
Kosugi, Shunichi; Natsume, Satoshi; Yoshida, Kentaro; MacLean, Daniel; Cano, Liliana; Kamoun, Sophien; Terauchi, Ryohei
2013-01-01
Accurate identification of DNA polymorphisms using next-generation sequencing technology is challenging because of a high rate of sequencing error and incorrect mapping of reads to reference genomes. Currently available short read aligners and DNA variant callers suffer from these problems. We developed the Coval software to improve the quality of short read alignments. Coval is designed to minimize the incidence of spurious alignment of short reads, by filtering mismatched reads that remained in alignments after local realignment and error correction of mismatched reads. The error correction is executed based on the base quality and allele frequency at the non-reference positions for an individual or pooled sample. We demonstrated the utility of Coval by applying it to simulated genomes and experimentally obtained short-read data of rice, nematode, and mouse. Moreover, we found an unexpectedly large number of incorrectly mapped reads in ‘targeted’ alignments, where the whole genome sequencing reads had been aligned to a local genomic segment, and showed that Coval effectively eliminated such spurious alignments. We conclude that Coval significantly improves the quality of short-read sequence alignments, thereby increasing the calling accuracy of currently available tools for SNP and indel identification. Coval is available at http://sourceforge.net/projects/coval105/. PMID:24116042
Linear stability analysis of collective neutrino oscillations without spurious modes
NASA Astrophysics Data System (ADS)
Morinaga, Taiki; Yamada, Shoichi
2018-01-01
Collective neutrino oscillations are induced by the presence of neutrinos themselves. As such, they are intrinsically nonlinear phenomena and are much more complex than linear counterparts such as the vacuum or Mikheyev-Smirnov-Wolfenstein oscillations. They obey integro-differential equations, for which it is also very challenging to obtain numerical solutions. If one focuses on the onset of collective oscillations, on the other hand, the equations can be linearized and the technique of linear analysis can be employed. Unfortunately, however, it is well known that such an analysis, when applied with discretizations of continuous angular distributions, suffers from the appearance of so-called spurious modes: unphysical eigenmodes of the discretized linear equations. In this paper, we analyze in detail the origin of these unphysical modes and present a simple solution to this annoying problem. We find that the spurious modes originate from the artificial production of pole singularities instead of a branch cut on the Riemann surface by the discretizations. The branching point singularities on the Riemann surface for the original nondiscretized equations can be recovered by approximating the angular distributions with polynomials and then performing the integrals analytically. We demonstrate for some examples that this simple prescription does remove the spurious modes. We also propose an even simpler method: a piecewise linear approximation to the angular distribution. It is shown that the same methodology is applicable to the multienergy case as well as to the dispersion relation approach that was proposed very recently.
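The pole-versus-branch-cut mechanism described above can be demonstrated with a generic toy dispersion function (a schematic stand-in, not the paper's equations): discretizing an angular integral of the form ∫ g(μ)/(μ − ω) dμ replaces the branch cut on [−1, 1] with N simple poles, and the discretized dispersion function then sweeps from +∞ to −∞ between adjacent nodes, acquiring a spurious real root in every inter-node interval.

```python
import numpy as np

def discrete_dispersion(omega, nodes, weights, g):
    # quadrature replaces the integral of g(mu)/(mu - omega) by a sum,
    # turning the branch cut on [-1, 1] into N simple poles at the nodes
    return 1.0 - np.sum(weights * g(nodes) / (nodes - omega))

def count_real_roots_between_nodes(N, g):
    nodes = np.linspace(-1.0, 1.0, N)        # uniform angular grid
    weights = np.full(N, 2.0 / N)            # crude uniform quadrature weights
    roots = 0
    for a, b in zip(nodes[:-1], nodes[1:]):
        # sample just inside each inter-node interval: D diverges to +inf
        # at the left pole and -inf at the right pole, so it crosses zero
        lo = discrete_dispersion(a + 1e-6 * (b - a), nodes, weights, g)
        hi = discrete_dispersion(b - 1e-6 * (b - a), nodes, weights, g)
        if lo * hi < 0:
            roots += 1
    return roots

g = lambda mu: 0.5 + 0.3 * mu**2   # a smooth, positive "angular distribution"
spurious_8 = count_real_roots_between_nodes(8, g)
spurious_32 = count_real_roots_between_nodes(32, g)
```

Refining the grid only multiplies the spurious roots (N − 1 of them), which is why the paper's remedy is to approximate the distribution by polynomials or piecewise linear segments and do the integral analytically rather than discretize it.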
Communication: An accurate global potential energy surface for the ground electronic state of ozone
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dawes, Richard, E-mail: dawesr@mst.edu, E-mail: hguo@unm.edu; Lolur, Phalgun; Li, Anyang
We report a new full-dimensional and global potential energy surface (PES) for the O + O2 → O3 ozone-forming reaction based on explicitly correlated multireference configuration interaction (MRCI-F12) data. It extends our previous [R. Dawes, P. Lolur, J. Ma, and H. Guo, J. Chem. Phys. 135, 081102 (2011)] dynamically weighted multistate MRCI calculations of the asymptotic region, which showed the widely found submerged reef along the minimum energy path to be the spurious result of an avoided crossing with an excited state. A spin-orbit correction was added, and the PES tends asymptotically to the recently developed long-range electrostatic model of Lepers et al. [J. Chem. Phys. 137, 234305 (2012)]. This PES features: (1) excellent equilibrium structural parameters, (2) good agreement with experimental vibrational levels, (3) accurate dissociation energy, and (4) most notably, a transition region without a spurious reef. The new PES is expected to allow insight into the still unresolved issues surrounding the kinetics, dynamics, and isotope signature of ozone.
Explaining the Relationship between Employment and Juvenile Delinquency.
Staff, Jeremy; Osgood, D Wayne; Schulenberg, John E; Bachman, Jerald G; Messersmith, Emily E
2010-11-28
Most criminological theories predict an inverse relationship between employment and crime, but teenagers' involvement in paid work during the school year is positively correlated with delinquency and substance use. Whether the work-delinquency association is causal or spurious has long been debated. This study estimates the effect of paid work on juvenile delinquency using longitudinal data from the national Monitoring the Future project. We address issues of spuriousness by using a two-level hierarchical model to estimate the relationships of within-individual changes in juvenile delinquency and substance use to those in paid work and other explanatory variables. We also disentangle effects of actual employment from preferences for employment to provide insight about the likely role of time-varying selection factors tied to employment, delinquency, school engagement, and leisure activities. Whereas causal effects of employment would produce differences based on whether and how many hours respondents worked, we found significantly higher rates of crime and substance use among non-employed youth who preferred intensive versus moderate work. Our findings suggest the relationship between high-intensity work and delinquency results from preexisting factors that lead youth to desire varying levels of employment.
Testing Gene-Gene Interactions in the Case-Parents Design
Yu, Zhaoxia
2011-01-01
The case-parents design has been widely used to detect genetic associations, as it can prevent the spurious association that could occur in population-based designs. When examining the effect of an individual genetic locus on a disease, logistic regressions developed by conditioning on parental genotypes provide complete protection from spurious association caused by population stratification. However, when testing gene-gene interactions, it is unknown whether conditional logistic regressions are still robust. Here we evaluate the robustness and efficiency of several gene-gene interaction tests that are derived from conditional logistic regressions. We found that in the presence of SNP genotype correlation due to population stratification or linkage disequilibrium, tests with incorrectly specified main-genetic-effect models can lead to inflated type I error rates. We also found that a test with fully flexible main genetic effects always maintains the correct test size, and that its robustness is achieved with a negligible sacrifice of power. When testing gene-gene interactions is the focus, we recommend the test that allows fully flexible main effects. PMID:21778736
My, T-H; Robin, O; Mhibik, O; Drag, C; Bretenaker, F
2009-03-30
The evolution of the spectrum of a singly resonant optical parametric oscillator based on an MgO-doped periodically poled stoichiometric lithium tantalate crystal is observed when the pump power is varied. The onset of cascade Raman lasing due to stimulated Raman scattering in the nonlinear crystal is analyzed. Spurious frequency doubling and sum-frequency generation phenomena are observed and understood. A strong reduction of the intracavity Raman scattering is obtained by a careful adjustment of the cavity losses.
Some Aspects of Nonlinear Dynamics and CFD
NASA Technical Reports Server (NTRS)
Yee, Helen C.; Merriam, Marshal (Technical Monitor)
1996-01-01
The application of nonlinear dynamics to improve the understanding of numerical uncertainties in computational fluid dynamics (CFD) is reviewed. Elementary examples in the use of dynamics to explain the nonlinear phenomena and spurious behavior that occur in numerics are given. The role of dynamics in the understanding of long time behavior of numerical integrations and the nonlinear stability, convergence, and reliability of using time-marching approaches for obtaining steady-state numerical solutions in CFD is explained. The study is complemented with examples of spurious behavior observed in CFD computations.
Consideration of Dynamical Balances
NASA Technical Reports Server (NTRS)
Errico, Ronald M.
2015-01-01
The quasi-balance of extra-tropical tropospheric dynamics is a fundamental aspect of nature. If an atmospheric analysis does not reflect such balance sufficiently well, the subsequent forecast will exhibit unrealistic behavior associated with spurious fast-propagating gravity waves. Even if these eventually damp, they can create poor background fields for a subsequent analysis or interact with moist physics to create spurious precipitation. The nature of this problem will be described along with the reasons for atmospheric balance and techniques for mitigating imbalances. Attention will be focused on fundamental issues rather than on recipes for various techniques.
Godolphin, W; Cameron, E C; Frohlich, J; Price, J D
1979-02-01
Patients on long-term hemodialysis via arteriovenous fistula received heparin when the fistula needle was inserted, before a sample of blood was obtained for chemical analysis. The resultant release of lipoprotein lipase activity in vivo and continued lipolytic activity in vitro sometimes produced sufficient free fatty acid to precipitate calcium soaps. The consequent spurious hypocalcemia was most frequently observed when the patients had chylomicronemia. This cause of apparent hypocalcemia was eliminated either by immediate analyses of the blood samples or by obtaining samples before systemic heparinization.
NASA Technical Reports Server (NTRS)
Heffley, R. K.; Jewell, W. F.; Whitbeck, R. F.; Schulman, T. M.
1980-01-01
The effects of spurious delays in real time digital computing systems are examined. Various sources of spurious delays are defined and analyzed using an extant simulator system as an example. A specific analysis procedure is set forth and four cases are viewed in terms of their time and frequency domain characteristics. Numerical solutions are obtained for three single rate one- and two-computer examples, and the analysis problem is formulated for a two-rate, two-computer example.
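The frequency-domain impact of such delays is simple to state: a pure transport delay τ leaves gain untouched and subtracts 360·f·τ degrees of phase, eroding stability margins in a piloted loop. A minimal sketch (illustrative numbers, not the report's cases):

```python
import math

def delay_frequency_response(tau, omega):
    """H(jw) = exp(-j*w*tau): unit magnitude, linear phase lag."""
    return complex(math.cos(-omega * tau), math.sin(-omega * tau))

def phase_lag_deg(tau, freq_hz):
    # a pure delay leaves gain unchanged but subtracts 360*f*tau degrees
    return 360.0 * freq_hz * tau

# e.g. a hypothetical 60 ms spurious frame delay at a 1.5 Hz crossover
h = delay_frequency_response(0.060, 2 * math.pi * 1.5)
lag = phase_lag_deg(0.060, 1.5)   # about 32.4 degrees of added phase lag
```

This is why the report distinguishes time-domain and frequency-domain characteristics: the delay is invisible in steady-state gain yet directly consumes phase margin at the crossover frequency.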
Optical Add-Drop Filters Based on Photonic Crystal Ring Resonators
2007-02-19
34 Appl. Phys. Lett. 81, 2499-2501 (2002). 17. V. Dinesh Kumar, T. Srinivas, A. Selvarajan, "Investigation of ring resonators in photonic crystal...No. 4 / OPTICS EXPRESS 1824 Kumar et al. [17], where a large single quasi-rectangular ring was introduced as the frequency-selective dropping element...were introduced by Kumar et al. as well, in order to suppress the counter-propagating modes which can cause spurious dips in the transmission spectrum
Fretellier, Nathalie; Poteau, Nathalie; Factor, Cécile; Mayer, Jean-François; Medina, Christelle; Port, Marc; Idée, Jean-Marc; Corot, Claire
2014-01-01
Objectives The purposes of this study were to evaluate the risk for analytical interference with gadolinium-based contrast agents (GBCAs) for the colorimetric measurement of serum iron (Fe3+) and to investigate the mechanisms involved. Materials and Methods Rat serum was spiked with several concentrations of all molecular categories of GBCAs, ligands, or “free” soluble gadolinium (Gd3+). Serum iron concentration was determined by 2 different colorimetric methods at pH 4.0 (with a Vitros DT60 analyzer or a Cobas Integra 400 analyzer). Secondly, the cause of interference was investigated by (a) adding free soluble Gd3+ or Mn2+ to serum in the presence of gadobenic acid or gadodiamide and (b) electrospray ionization mass spectrometry. Results Spurious decrease in serum Fe3+ concentration was observed with all linear GBCAs (only with the Vitros DT60 technique occurring at pH 4.0) but not with macrocyclic GBCAs or with free soluble Gd3+. Spurious hyposideremia was also observed with the free ligands present in the pharmaceutical solutions of the linear GBCAs gadopentetic acid and gadodiamide (ie, diethylene triamine pentaacetic acid and calcium-diethylene triamine pentaacetic acid bismethylamide, respectively), suggesting the formation of Fe-ligand chelate. Gadobenic acid-induced interference was blocked in a concentration-dependent fashion by adding a free soluble Gd3+ salt. Conversely, Mn2+, which has a lower affinity than Gd3+ and Fe3+ for the ligand of gadobenic acid (ie, benzyloxypropionic diethylenetriamine tetraacetic acid), was less effective (interference was only partially blocked), suggesting an Fe3+ versus Gd3+ transmetallation phenomenon at pH 4.0. Similar results were observed with gadodiamide. Mass spectrometry detected the formation of Fe-ligand with all linear GBCAs tested in the presence of Fe3+ and the disappearance of Fe-ligand after the addition of free soluble Gd3+. No Fe-ligand chelate was found in the case of the macrocyclic GBCA gadoteric acid. 
Conclusions Macrocyclic GBCAs induced no interference with colorimetric methods for iron determination, whereas negative interference was observed with linear GBCAs using a Vitros DT60 analyzer. This interference of linear GBCAs seems to be caused by the excess of ligand and/or an Fe3+ versus Gd3+ transmetallation phenomenon. PMID:24943092
NASA Technical Reports Server (NTRS)
Rapp, R.
1999-01-01
An expansion of a function initially given in 1deg cells was carried out to degree 360 by using 30' cells whose value was initially assigned to be the value of the 1deg cell in which it fell. The evaluation of point values of the function from the degree 360 expansion revealed spurious patterns attributed to the coefficients from degree 181 to 360. Expansion of the original function in 1deg cells to degree 180 showed no problems in the point evaluation. Mean 1deg values computed from both the degree 180 and degree 360 expansions showed close agreement with the original function. The artifacts could be removed if the 30' values were interpolated by spline procedures from adjacent 1deg cells. These results led to an examination of the gravity anomalies and geoid undulations from EGM96 in areas where 1deg values were "split up" to form 30' cells. The area considered was 75degS to 85degS, 100degE to 120degE, where the split-up cells were basically south of 81degS. A small, latitude-related, and possibly spurious effect might be detectable in anomaly variations in the region. These results suggest that point values of a function computed from a high-degree expansion may have spurious signals unless the cell size is compatible with the maximum degree of expansion. The spurious signals could be eliminated by using a spline interpolation procedure to obtain the 30' values from the 1deg values.
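The artifact has a simple one-dimensional analogue (a sketch, not the paper's spherical-harmonic analysis): copying each coarse-cell value into finer cells is piecewise-constant upsampling, which injects power above the coarse grid's resolution limit, while interpolation onto the fine grid (here linear, standing in for the spline procedure) largely avoids it.

```python
import numpy as np

rng = np.random.default_rng(1)
n_coarse = 180                       # say, 1-deg cells
coarse = np.cumsum(rng.standard_normal(n_coarse))   # a smooth-ish field

# "split-up" cells: each 1-deg value copied into two 30' cells
replicated = np.repeat(coarse, 2)
# interpolation alternative: values interpolated onto the fine grid
x_c = np.arange(n_coarse)
x_f = np.arange(2 * n_coarse) / 2.0
interpolated = np.interp(x_f, x_c, coarse)

def high_band_power(signal, cutoff):
    """Fraction of (mean-removed) spectral power above the cutoff bin."""
    spec = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
    return spec[cutoff:].sum() / spec.sum()

cut = n_coarse // 2   # frequencies beyond the coarse grid's resolution
p_rep = high_band_power(replicated, cut)
p_int = high_band_power(interpolated, cut)
```

The staircase discontinuities of the replicated series carry spurious high-frequency power, mirroring the spurious degree 181-360 coefficients; the interpolated series concentrates its power below the coarse resolution limit.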
Bandpass mismatch error for satellite CMB experiments I: estimating the spurious signal
NASA Astrophysics Data System (ADS)
Thuong Hoang, Duc; Patanchon, Guillaume; Bucher, Martin; Matsumura, Tomotake; Banerji, Ranajoy; Ishino, Hirokazu; Hazumi, Masashi; Delabrouille, Jacques
2017-12-01
Future Cosmic Microwave Background (CMB) satellite missions aim to use the B mode polarization to measure the tensor-to-scalar ratio r with a sensitivity σ_r ≲ 10^-3. Achieving this goal will not only require sufficient detector array sensitivity but also unprecedented control of all systematic errors inherent in CMB polarization measurements. Since polarization measurements derive from differences between observations at different times and from different sensors, detector response mismatches introduce leakages from intensity to polarization and thus lead to a spurious B mode signal. Because the expected primordial B mode polarization signal is dwarfed by the known unpolarized intensity signal, such leakages could contribute substantially to the final error budget for measuring r. Using simulations, we estimate the magnitude and angular spectrum of the spurious B mode signal resulting from bandpass mismatch between different detectors. It is assumed here that the detectors are calibrated, for example using the CMB dipole, so that their sensitivity to the primordial CMB signal has been perfectly matched. Consequently, the mismatch in the frequency bandpass shape between detectors introduces differences in the relative calibration of galactic emission components. We simulate this effect using a range of scanning patterns being considered for future satellite missions. We find that the spurious contribution to r from the reionization bump on large angular scales (l < 10) is ≈ 10^-3, assuming large detector arrays and 20 percent of the sky masked. We show how the amplitude of the leakage depends on the nonuniformity of the angular coverage in each pixel that results from the scan pattern.
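The leakage mechanism can be sketched with a toy band-integration model (hypothetical band shapes and spectral energy distributions, not the paper's simulation): calibrating two detectors on the CMB equalizes their CMB response exactly, but their differing bandpasses still weight a steeper foreground SED differently, so differencing the detectors leaks foreground intensity into the polarization estimate.

```python
import numpy as np

def band_response(freqs, bandpass, sed):
    """Rectangle-rule band integral of bandpass * sed on a uniform grid
    (the constant frequency step cancels in calibrated ratios)."""
    return np.sum(bandpass * sed(freqs))

freqs = np.linspace(120e9, 180e9, 500)      # a hypothetical 150 GHz band

# two detectors with slightly different band shapes
bp_a = np.exp(-0.5 * ((freqs - 148e9) / 12e9) ** 2)
bp_b = np.exp(-0.5 * ((freqs - 152e9) / 12e9) ** 2)

cmb = lambda f: np.ones_like(f)             # flat stand-in for the CMB SED
dust = lambda f: (f / 150e9) ** 1.6         # steeper foreground SED

# calibrate both detectors on the CMB: response to the CMB is now exactly 1
cal_a = 1.0 / band_response(freqs, bp_a, cmb)
cal_b = 1.0 / band_response(freqs, bp_b, cmb)

# after calibration the detectors still respond differently to dust,
# so differencing them leaks dust intensity into "polarization"
dust_gain_a = cal_a * band_response(freqs, bp_a, dust)
dust_gain_b = cal_b * band_response(freqs, bp_b, dust)
leakage = dust_gain_a - dust_gain_b         # spurious I -> P coupling
```

The leakage scales with the foreground brightness, not with the CMB, which is why the spurious B mode is dominated by galactic emission and depends on how the scan pattern averages detector pairs within each pixel.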
Spurious sea ice formation caused by oscillatory ocean tracer advection schemes
NASA Astrophysics Data System (ADS)
Naughten, Kaitlin A.; Galton-Fenzi, Benjamin K.; Meissner, Katrin J.; England, Matthew H.; Brassington, Gary B.; Colberg, Frank; Hattermann, Tore; Debernard, Jens B.
2017-08-01
Tracer advection schemes used by ocean models are susceptible to artificial oscillations: a form of numerical error whereby the advected field alternates between overshooting and undershooting the exact solution, producing false extrema. Here we show that these oscillations have undesirable interactions with a coupled sea ice model. When oscillations cause the near-surface ocean temperature to fall below the freezing point, sea ice forms for no reason other than numerical error. This spurious sea ice formation has significant and wide-ranging impacts on Southern Ocean simulations, including the disappearance of coastal polynyas, stratification of the water column, erosion of Winter Water, and upwelling of warm Circumpolar Deep Water. This significantly limits the model's suitability for coupled ocean-ice and climate studies. Using the terrain-following-coordinate ocean model ROMS (Regional Ocean Modelling System) coupled to the sea ice model CICE (Community Ice CodE) on a circumpolar Antarctic domain, we compare the performance of three different tracer advection schemes, as well as two levels of parameterised diffusion and the addition of flux limiters to prevent numerical oscillations. The upwind third-order advection scheme performs better than the centered fourth-order and Akima fourth-order advection schemes, with far fewer incidents of spurious sea ice formation. The latter two schemes are less problematic with higher parameterised diffusion, although some supercooling artifacts persist. Spurious supercooling was eliminated by adding flux limiters to the upwind third-order scheme. We present this comparison as evidence of the problematic nature of oscillatory advection schemes in sea ice formation regions, and urge other ocean/sea-ice modellers to exercise caution when using such schemes.
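The pathology is easy to reproduce in one dimension (a minimal sketch, not the ROMS/CICE configuration): a second-order, dispersive scheme advecting a temperature front undershoots the true minimum, the same mechanism that drives the modeled near-surface ocean below the freezing point, while a monotone first-order upwind scheme creates no new extrema.

```python
import numpy as np

def advect(scheme, n_steps=100, n=200, c=0.5):
    """Advect a warm patch on a periodic 1-D grid at Courant number c."""
    T = np.where((np.arange(n) > 40) & (np.arange(n) < 120), 1.0, 0.0)
    for _ in range(n_steps):
        Tm, Tp = np.roll(T, 1), np.roll(T, -1)
        if scheme == "lax-wendroff":   # 2nd order, oscillatory at fronts
            T = T - 0.5 * c * (Tp - Tm) + 0.5 * c**2 * (Tp - 2 * T + Tm)
        else:                          # 1st-order upwind: monotone
            T = T - c * (T - Tm)
    return T

T_lw = advect("lax-wendroff")
T_up = advect("upwind")

# the exact solution stays in [0, 1]; the oscillatory scheme undershoots 0,
# which in a coupled model would read as water below the freezing point
undershoot = T_lw.min()
```

Both schemes conserve the tracer exactly, so the undershoot is pure numerical oscillation; a flux limiter applied to the higher-order flux removes the false extrema, which is the remedy the paper adopts for the upwind third-order scheme.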
NASA Astrophysics Data System (ADS)
Park, Jong-Yeon; Stock, Charles A.; Yang, Xiaosong; Dunne, John P.; Rosati, Anthony; John, Jasmin; Zhang, Shaoqing
2018-03-01
Reliable estimates of historical and current biogeochemistry are essential for understanding past ecosystem variability and predicting future changes. Efforts to translate improved physical ocean state estimates into improved biogeochemical estimates, however, are hindered by high biogeochemical sensitivity to transient momentum imbalances that arise during physical data assimilation. Most notably, the breakdown of geostrophic constraints on data assimilation in equatorial regions can lead to spurious upwelling, resulting in excessive equatorial productivity and biogeochemical fluxes. This hampers efforts to understand and predict the biogeochemical consequences of El Niño and La Niña. We develop a strategy to robustly integrate an ocean biogeochemical model with an ensemble coupled-climate data assimilation system used for seasonal to decadal global climate prediction. Addressing spurious vertical velocities requires two steps. First, we find that tightening constraints on atmospheric data assimilation maintains a better equatorial wind stress and pressure gradient balance. This reduces spurious vertical velocities, but those remaining still produce substantial biogeochemical biases. The remainder is addressed by imposing stricter fidelity to model dynamics over data constraints near the equator. We determine an optimal choice of model-data weights that removed spurious biogeochemical signals while benefitting from off-equatorial constraints that still substantially improve equatorial physical ocean simulations. Compared to the unconstrained control run, the optimally constrained model reduces equatorial biogeochemical biases and markedly improves the equatorial subsurface nitrate concentrations and hypoxic area. The pragmatic approach described herein offers a means of advancing earth system prediction in parallel with continued data assimilation advances aimed at fully considering equatorial data constraints.
Silent inflow condition for turbulent boundary layers
NASA Astrophysics Data System (ADS)
Gloerfelt, X.; Robinet, J.-C.
2017-12-01
The generation of a turbulent inflow is a tricky problem. In the framework of aeroacoustics, another important constraint is that the numerical strategy used to reach a turbulent state must induce spurious noise that is lower than the acoustic field of interest. For the study of noise radiated directly by a turbulent boundary layer on a flat plate, this constraint is severe, since wall turbulence is a very inefficient source. To cope with this, a method based on transition by modal interaction, using a base flow with an inflection point, is proposed. The base flow must be a solution of the equations, so we use a profile behind a backward-facing step representative of experimental trip bands. A triad of resonant waves is selected by a local stability analysis of the linearized compressible equations and is added with a weak amplitude in the inlet plane. The compressible stability calculation allows the specification of the thermodynamic quantities at the inlet, which turns out to be fundamental to ensure a quiet inflow. A smooth transition is achieved with the rapid formation of Λ-shaped vortices in a staggered organization, as in subharmonic transition. The dominance of oblique waves promotes a rapid breakdown via the lift-up mechanism of low-speed streaks. The quality of the fully turbulent state is assessed, and the direct noise radiation from a turbulent boundary layer at Mach 0.5 is obtained with a very low level of spurious noise.
NASA Astrophysics Data System (ADS)
Haines, B. J.; Bar-Sever, Y. E.; Bertiger, W.; Desai, S.; Owen, S.; Sibois, A.; Webb, F.
2007-12-01
Treating the GRACE tandem mission as an orbiting fiducial laboratory, we have developed new estimates of the phase and group-delay variations of the GPS transmitter antennas. Application of these antenna phase variation (APV) maps has shown great promise in reducing previously unexplained errors in our realization of GPS measurements from the TOPEX/POSEIDON (T/P; 1992--2005) and Jason-1 (2001--) missions. In particular, a 56 mm vertical offset in the solved-for position of the T/P receiver antenna is reduced to insignificance (less than 1 mm). For Jason-1, a spurious long-term (4-yr) drift in the daily antenna offset estimates is reduced from +3.7 to +0.1 mm/yr. Prior ground-based results, based on precise point positioning, also hint at the potential of the GRACE-based APV maps for scale determination, reducing the spurious scale rate by one half. In this paper, we report on the latest APV estimates from GRACE and provide a further assessment of the impact of the APV maps on realizing the scale of the terrestrial reference frame (TRF) from GPS alone. To address this, we re-analyze over five years of data from a global (40+ station) ground network in a fiducial-free approach, using the new APV maps. A specialized multi-day GPS satellite orbit determination (OD) strategy is employed to better capitalize on dynamical constraints. The resulting estimates of TRF scale are compared to ITRF2005 in order to assess the quality of the solutions.
The introduction of spurious modes in a hole-coupled Fabry-Perot open resonator
NASA Technical Reports Server (NTRS)
Cook, Jerry D.; Long, Kenwyn J.; Heinen, Vernon O.; Stankiewicz, Norbert
1992-01-01
A hemispherical open resonator has previously been used to make relative comparisons of the surface resistivity of metallic thin-film samples in the submillimeter wavelength region. This resonator is fed from a far-infrared laser via a small coupling hole in the center of the concave spherical mirror. The experimental arrangement, while desirable as a coupling geometry for monitoring weak emissions from the cavity, can lead to the introduction of spurious modes into the cavity. Sources of these modes are identified, and a simple alteration of the experimental apparatus to eliminate such modes is suggested.
Parkes radio science system design and testing for Voyager Neptune encounter
NASA Technical Reports Server (NTRS)
Rebold, T. A.; Weese, J. F.
1989-01-01
The Radio Science System installed at Parkes, Australia for the Voyager Neptune encounter was specified to meet the same stringent requirements that were imposed upon the Deep Space Network Radio Science System. The system design and test methodology employed to meet these requirements at Parkes are described, and data showing the measured performance of the system are presented. The results indicate that the system operates with a comfortable margin on the requirements. There was a minor problem with frequency-dependent spurious signals which could not be fixed before the encounter. Test results characterizing these spurious signals are included.
2014-01-01
Expression quantitative trait loci (eQTL) mapping is a tool that can systematically identify genetic variation affecting gene expression. eQTL mapping studies have shown that certain genomic locations, referred to as regulatory hotspots, may affect the expression levels of many genes. Recently, studies have shown that various confounding factors may induce spurious regulatory hotspots. Here, we introduce a novel statistical method that effectively eliminates spurious hotspots while retaining genuine ones. Applied to simulated and real datasets, our method achieves greater sensitivity while retaining lower false discovery rates than previous methods. PMID:24708878
Seeing in the dark - I. Multi-epoch alchemy
NASA Astrophysics Data System (ADS)
Huff, Eric M.; Hirata, Christopher M.; Mandelbaum, Rachel; Schlegel, David; Seljak, Uroš; Lupton, Robert H.
2014-05-01
Weak lensing by large-scale structure is an invaluable cosmological tool given that most of the energy density of the concordance cosmology is invisible. Several large ground-based imaging surveys will attempt to measure this effect over the coming decade, but reliable control of the spurious lensing signal introduced by atmospheric turbulence and telescope optics remains a challenging problem. We address this challenge with a demonstration that point spread function (PSF) effects on measured galaxy shapes in the Sloan Digital Sky Survey (SDSS) can be corrected with existing analysis techniques. In this work, we co-add existing SDSS imaging on the equatorial stripe in order to build a data set with the statistical power to measure cosmic shear, while using a rounding kernel method to null out the effects of the anisotropic PSF. We build a galaxy catalogue from the combined imaging, characterize its photometric properties and show that the spurious shear remaining in this catalogue after the PSF correction is negligible compared to the expected cosmic shear signal. We identify a new source of systematic error in the shear-shear autocorrelations arising from selection biases related to masking. Finally, we discuss the circumstances in which this method is expected to be useful for upcoming ground-based surveys that have lensing as one of the science goals, and identify the systematic errors that can reduce its efficacy.
Dean, Roger T; Dunsmuir, William T M
2016-06-01
Many articles on perception, performance, psychophysiology, and neuroscience seek to relate pairs of time series through assessments of their cross-correlations. Most such series are individually autocorrelated: they do not comprise independent values. Given this situation, an unfounded reliance is often placed on cross-correlation as an indicator of relationships (e.g., referent vs. response, leading vs. following). Such cross-correlations can indicate spurious relationships, because of autocorrelation. Given these dangers, we here simulated how and why such spurious conclusions can arise, to provide an approach to resolving them. We show that when multiple pairs of series are aggregated in several different ways for a cross-correlation analysis, problems remain. Finally, even a genuine cross-correlation function does not answer key motivating questions, such as whether there are likely causal relationships between the series. Thus, we illustrate how to obtain a transfer function describing such relationships, informed by any genuine cross-correlations. We illustrate the confounds and the meaningful transfer functions by two concrete examples, one each in perception and performance, together with key elements of the R software code needed. The approach involves autocorrelation functions, the establishment of stationarity, prewhitening, the determination of cross-correlation functions, the assessment of Granger causality, and autoregressive model development. Autocorrelation also limits the interpretability of other measures of possible relationships between pairs of time series, such as mutual information. We emphasize that further complexity may be required as the appropriate analysis is pursued fully, and that causal intervention experiments will likely also be needed.
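The autocorrelation hazard described above is easy to reproduce. The following is a minimal Python sketch (not the article's code, which is in R; the AR(1) coefficient, series length, and replication count are arbitrary illustrative choices): two completely independent AR(1) series routinely show lag-0 cross-correlations far outside the nominal ±2/√n band, while prewhitening each series first restores the nominal false-positive rate.

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1(n, phi, rng):
    """Generate an AR(1) series x_t = phi * x_{t-1} + e_t."""
    x = np.zeros(n)
    e = rng.standard_normal(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + e[t]
    return x

def xcorr0(a, b):
    """Lag-0 cross-correlation of two series."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return np.mean(a * b)

def prewhiten(x, phi):
    """Remove the AR(1) structure: residuals e_t = x_t - phi * x_{t-1}."""
    return x[1:] - phi * x[:-1]

n, phi = 200, 0.9
thresh = 2 / np.sqrt(n)  # nominal white-noise significance band

# Cross-correlate many independent pairs: raw series exceed the band far
# more often than the nominal ~5% because each series is autocorrelated.
raw = [abs(xcorr0(ar1(n, phi, rng), ar1(n, phi, rng))) for _ in range(500)]
white = [abs(xcorr0(prewhiten(ar1(n, phi, rng), phi),
                    prewhiten(ar1(n, phi, rng), phi))) for _ in range(500)]

print("exceedance raw:", np.mean(np.array(raw) > thresh))
print("exceedance prewhitened:", np.mean(np.array(white) > thresh))
```

With φ = 0.9 the raw exceedance rate is typically around one half, versus roughly the nominal 5% after prewhitening, which is the spurious-relationship effect the article warns about.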
DOE Office of Scientific and Technical Information (OSTI.GOV)
Okura, Yuki; Petri, Andrea; May, Morgan
Weak gravitational lensing causes subtle changes in the apparent shapes of galaxies due to the bending of light by the gravity of foreground masses. By measuring the shapes of large numbers of galaxies (millions in recent surveys, up to tens of billions in future surveys) we can infer the parameters that determine cosmology. Imperfections in the detectors used to record images of the sky can introduce changes in the apparent shape of galaxies, which in turn can bias the inferred cosmological parameters. Here we consider the effect of two widely discussed sensor imperfections: tree-rings, due to impurity gradients which cause transverse electric fields in the charge-coupled devices (CCDs), and pixel-size variation, due to periodic CCD fabrication errors. These imperfections can be observed when the detectors are subject to uniform illumination (flat-field images). We develop methods to determine the spurious shear and convergence (due to the imperfections) from the flat-field images. We calculate how the spurious shear, when added to the lensing shear, will bias the determination of cosmological parameters. We apply our methods to candidate sensors of the Large Synoptic Survey Telescope (LSST) as a timely and important example, analyzing flat-field images recorded with LSST prototype CCDs in the laboratory. In conclusion, we find that the tree-rings and periodic pixel-size variation present in the LSST CCDs will introduce negligible bias to cosmological parameters determined from the lensing power spectrum, specifically w, Ω_m, and σ_8.
Patel, Harilal; Patel, Prakash; Modi, Nirav; Shah, Shaival; Ghoghari, Ashok; Variya, Bhavesh; Laddha, Ritu; Baradia, Dipesh; Dobaria, Nitin; Mehta, Pavak; Srinivas, Nuggehally R
2017-08-30
Because it avoids first-pass metabolism through direct and rapid absorption with improved permeability, the intranasal route represents a good alternative for extravascular drug administration. The aim of the study was to investigate the intranasal pharmacokinetics of two anti-migraine drugs (zolmitriptan and eletriptan) using retro-orbital sinus and jugular vein sampling sites. In a parallel study design, healthy male Sprague-Dawley (SD) rats aged between 8 and 12 weeks were divided into groups (n = 4 or 5/group). The animals in the individual groups received intranasal (~1.0 mg/kg) or oral (2.1 mg/kg) doses of either zolmitriptan or eletriptan. Serial blood sampling was performed from the jugular vein or retro-orbital site, and plasma samples were analyzed for drug concentrations using an LC-MS/MS assay. Standard pharmacokinetic parameters such as Tmax, Cmax, AUClast, AUC0-inf, and T1/2 were calculated, and statistical comparison of the derived parameters was performed using an unpaired t-test. After intranasal dosing, the mean Cmax and AUCinf of zolmitriptan/eletriptan were about 17-fold and 3-5-fold higher, respectively, for retro-orbital sampling than for jugular vein sampling, whereas after oral administration the parameters derived for both drugs were largely comparable between the two sampling sites, with statistically non-significant differences. In conclusion, assessing plasma levels after intranasal administration with retro-orbital sampling would yield spurious and misleading pharmacokinetics.
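For readers unfamiliar with the noncompartmental parameters named above, here is a minimal sketch of how Cmax, Tmax, AUClast, and the terminal half-life are derived from a concentration-time profile. The times and concentrations below are hypothetical illustrative values, not the study's data.

```python
import numpy as np

# Hypothetical concentration-time profile:
# times in hours, plasma concentrations in ng/mL.
t = np.array([0.0, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 12.0])
c = np.array([0.0, 45.0, 80.0, 95.0, 70.0, 40.0, 12.0, 4.0])

cmax = c.max()        # peak plasma concentration
tmax = t[c.argmax()]  # time of the peak

# AUC to the last measured sample, by the linear trapezoidal rule.
auc_last = np.sum(np.diff(t) * (c[:-1] + c[1:]) / 2)

# Terminal half-life from a log-linear regression over the last
# three (post-peak, declining) points: slope of ln(c) vs. t.
slope, _ = np.polyfit(t[-3:], np.log(c[-3:]), 1)
t_half = np.log(2) / -slope

print("Cmax:", cmax, "ng/mL; Tmax:", tmax, "h")
print("AUClast:", round(auc_last, 1), "ng*h/mL; T1/2:", round(t_half, 2), "h")
```

AUC0-inf would additionally extrapolate beyond the last sample by adding the term c_last/λz, where λz is the terminal elimination rate constant (the negated regression slope).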
NASA Astrophysics Data System (ADS)
Phillips, C. B.; Jerolmack, D. J.
2017-12-01
Understanding when coarse sediment begins to move in a river is essential for linking rivers to the evolution of mountainous landscapes. Unfortunately, the threshold of surface particle motion is notoriously difficult to measure in the field. However, recent studies have shown that the threshold of surface motion is empirically correlated with channel slope, a property that is easy to measure and readily available from the literature. These studies thoroughly examined the mechanistic underpinnings of the observed correlation and produced suitably complex models. Those models are difficult to implement for natural rivers using widely available data, and thus others have treated the empirical regression between slope and the threshold of motion as a predictive model. We note that none of the authors of the original studies exploring this correlation suggested their empirical regressions be used in a predictive fashion; nevertheless, these regressions have found their way into numerous recent studies, engendering potentially spurious conclusions. We demonstrate that there are two significant problems with using these empirical equations for prediction: (1) the empirical regressions are based on a limited sampling of the phase space of bed-load rivers, and (2) the empirical measurements of bankfull and critical shear stresses are paired. The upshot is that the relations' predictive capacity is limited to field sites drawn from the same region of the bed-load river phase space, and that the paired nature of the data introduces a spurious correlation when considering the ratio of bankfull to critical shear stress. Using a large compilation of bed-load river hydraulic geometry data, we demonstrate that the variation within independently measured values of the threshold of motion changes systematically with bankfull Shields stress, not channel slope.
Additionally, using several recent datasets, we highlight the pitfalls one can encounter when using simplistic empirical regressions to predict the threshold of motion, showing that while these concerns may seem subtle, the resulting implications can be substantial.
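The paired-ratio artifact noted above is a generic statistical effect, easy to demonstrate in a few lines. In the sketch below the lognormal "bankfull" and "critical" shear stress values are hypothetical stand-ins, independent by construction, so any correlation involving their ratio is purely spurious.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Hypothetical, mutually independent "bankfull" and "critical" shear
# stresses: there is no real relationship between them by construction.
tau_b = rng.lognormal(mean=0.0, sigma=0.5, size=n)
tau_c = rng.lognormal(mean=0.0, sigma=0.5, size=n)

def corr(a, b):
    """Pearson correlation coefficient."""
    return np.corrcoef(a, b)[0, 1]

# The raw variables are uncorrelated...
r_raw = corr(tau_b, tau_c)

# ...but the ratio tau_b / tau_c is strongly (spuriously) correlated
# with tau_c, simply because tau_c appears on both axes.
r_ratio = corr(tau_b / tau_c, tau_c)

print("corr(tau_b, tau_c):", round(r_raw, 3))
print("corr(tau_b/tau_c, tau_c):", round(r_ratio, 3))
```

For independent variables with equal log-variance, the log-space correlation of the ratio with its denominator is -1/√2 ≈ -0.71, a large effect arising from the shared measurement alone.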
Time-dependent density functional theory with twist-averaged boundary conditions
NASA Astrophysics Data System (ADS)
Schuetrumpf, B.; Nazarewicz, W.; Reinhard, P.-G.
2016-05-01
Background: Time-dependent density functional theory is widely used to describe excitations of many-fermion systems. In its many applications, a three-dimensional (3D) coordinate-space representation is used, and infinite-domain calculations are limited to a finite volume represented by a spatial box. For finite quantum systems (atoms, molecules, nuclei, hadrons), the commonly used periodic or reflecting boundary conditions introduce spurious quantization of the continuum states and artificial reflections from the boundary and, hence, an incorrect treatment of evaporated particles. Purpose: The finite-volume artifacts for finite systems can be practically cured by invoking an absorbing potential in a boundary region sufficiently far from the described system. However, such absorption cannot be applied in calculations of infinite matter (crystal electrons, quantum fluids, neutron star crust), which suffer from unphysical effects stemming from the finite computational box. Here, twist-averaged boundary conditions (TABC) have been used successfully to diminish the finite-volume effects. In this work, we extend TABC to time-dependent modes. Method: We use the 3D time-dependent density functional framework with the Skyrme energy density functional. The practical calculations are carried out for small- and large-amplitude electric dipole and quadrupole oscillations of 16O. We apply and compare three kinds of boundary conditions: periodic, absorbing, and twist-averaged. Results: Calculations employing absorbing boundary conditions (ABC) and TABC are superior to those based on periodic boundary conditions. For low-energy excitations, the TABC and ABC variants yield very similar results. With only four twist phases per spatial direction in TABC, one obtains an excellent reduction of spurious fluctuations. In the nonlinear regime, one has to deal with evaporated particles. 
In TABC, the floating nucleon gas remains in the box; the amount of nucleons in the gas is found to be roughly the same as the number of absorbed particles in ABC. Conclusion: We demonstrate that by using TABC, one can reduce finite-volume effects drastically without adding any additional parameters associated with absorption at large distances. Moreover, TABC are an obvious choice for time-dependent calculations for infinite systems. Since TABC calculations for different twists can be performed independently, the method is trivially adapted to parallel computing.
Johnson, Eric O; Hancock, Dana B; Levy, Joshua L; Gaddis, Nathan C; Saccone, Nancy L; Bierut, Laura J; Page, Grier P
2013-05-01
A great promise of publicly sharing genome-wide association data is the potential to create composite sets of controls. However, studies often use different genotyping arrays, and imputation to a common set of SNPs has shown substantial bias: a problem which has no broadly applicable solution. Based on the idea that using differing genotyped SNP sets as inputs creates differential imputation errors and thus bias in the composite set of controls, we examined the degree to which each of the following occurs: (1) imputation based on the union of genotyped SNPs (i.e., SNPs available on one or more arrays) results in bias, as evidenced by spurious associations (type 1 error) between imputed genotypes and arbitrarily assigned case/control status; (2) imputation based on the intersection of genotyped SNPs (i.e., SNPs available on all arrays) does not evidence such bias; and (3) imputation quality varies by the size of the intersection of genotyped SNP sets. Imputations were conducted in European Americans and African Americans with reference to HapMap phase II and III data. Imputation based on the union of genotyped SNPs across the Illumina 1M and 550v3 arrays showed spurious associations for 0.2% of SNPs: ~2,000 false positives per million SNPs imputed. Biases remained problematic for very similar arrays (550v1 vs. 550v3) and were substantial for dissimilar arrays (Illumina 1M vs. Affymetrix 6.0). In all instances, imputing based on the intersection of genotyped SNPs (as few as 30% of the total SNPs genotyped) eliminated such bias while still achieving good imputation quality.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kipping, David M.; Chen, Jingjing; Sandford, Emily
The analysis of Proxima Centauri’s radial velocities recently led Anglada-Escudé et al. to claim the presence of a low-mass planet orbiting the Sun’s nearest star once every 11.2 days. Although the a priori probability that Proxima b transits its parent star is just 1.5%, the potential impact of such a discovery would be considerable. Independent of recent radial velocity efforts, we observed Proxima Centauri for 12.5 days in 2014 and 31 days in 2015 with the Microvariability and Oscillations of STars (MOST) space telescope. We report here that we cannot make a compelling case that Proxima b transits in our precise photometric time series. Imposing an informative prior on the period and phase, we do detect a candidate signal with the expected depth. However, perturbing the phase prior across 100 evenly spaced intervals reveals one strong false positive and one weaker instance. We estimate a false-positive rate of at least a few percent and a much higher false-negative rate of 20%-40%, likely caused by the very high flare rate of Proxima Centauri. Comparing our candidate signal to HATSouth ground-based photometry reveals that the signal is somewhat, but not conclusively, disfavored (1σ-2σ), leading us to argue that the signal is most likely spurious. We expect that infrared photometric follow-up could more conclusively test the existence of this candidate signal, owing to the suppression of flare activity and the impressive infrared brightness of the parent star.
Modeling the Oxygen K Absorption in the Interstellar Medium: An XMM-Newton View of Sco X-1
NASA Technical Reports Server (NTRS)
Garcia, J.; Ramirez, J. M.; Kallman, T. R.; Witthoeft, M.; Bautista, M. A.; Mendoza, C.; Palmeri, P.; Quinet, P.
2011-01-01
We investigate the absorption structure of oxygen in the interstellar medium by analyzing XMM-Newton observations of the low-mass X-ray binary Sco X-1. We use simple models based on the O I atomic cross section from different sources to fit the data and evaluate the impact of the atomic data on the interpretation of astrophysical observations. We show that relatively small differences in the atomic calculations can yield spurious results. We also show that the most complete and accurate set of atomic cross sections successfully reproduces the observed data in the 21-24.5 Å wavelength region of the spectrum. Our fits indicate that the absorption is mainly due to neutral gas with an ionization parameter of ξ = 10^-4 erg cm s^-1 and an oxygen column density of N_O ≈ (8-10) × 10^17 cm^-2. Our models are able to reproduce both the K edge and the Kα absorption line of O I, which are the two main features in this region. We find no conclusive evidence for absorption by anything other than atomic oxygen.
Genetic Heterogeneity of Self-Reported Ancestry Groups in an Admixed Brazilian Population
Lins, Tulio C; Vieira, Rodrigo G; Abreu, Breno S; Gentil, Paulo; Moreno-Lima, Ricardo; Oliveira, Ricardo J; Pereira, Rinaldo W
2011-01-01
Background Population stratification is the main source of spurious results and poor reproducibility in genetic association findings. Population heterogeneity can be controlled for by grouping individuals in ethnic clusters; however, in admixed populations, there is evidence that such proxies do not provide efficient stratification control. The aim of this study was to evaluate the relation of self-reported with genetic ancestry and the statistical risk of grouping an admixed sample based on self-reported ancestry. Methods A questionnaire that included an item on self-reported ancestry was completed by 189 female volunteers from an admixed Brazilian population. Individual genetic ancestry was then determined by genotyping ancestry informative markers. Results Self-reported ancestry was classified as white, intermediate, and black. The mean difference among self-reported groups was significant for European and African, but not Amerindian, genetic ancestry. Pairwise fixation index analysis revealed a significant difference among groups. However, the increase in the chance of type 1 error was estimated to be 14%. Conclusions Self-reporting of ancestry was not an appropriate methodology to cluster groups in a Brazilian population, due to high variance at the individual level. Ancestry informative markers are more useful for quantitative measurement of biological ancestry. PMID:21498954
Correction of Population Stratification in Large Multi-Ethnic Association Studies
Serre, David; Montpetit, Alexandre; Paré, Guillaume; Engert, James C.; Yusuf, Salim; Keavney, Bernard; Hudson, Thomas J.; Anand, Sonia
2008-01-01
Background The vast majority of genetic risk factors for complex diseases have, taken individually, a small effect on the end phenotype. Population-based association studies therefore need very large sample sizes to detect significant differences between affected and non-affected individuals. Including thousands of affected individuals in a study requires recruitment in numerous centers, possibly from different geographic regions. Unfortunately such a recruitment strategy is likely to complicate the study design and to generate concerns regarding population stratification. Methodology/Principal Findings We analyzed 9,751 individuals representing three main ethnic groups - Europeans, Arabs and South Asians - that had been enrolled from 154 centers involving 52 countries for a global case/control study of acute myocardial infarction. All individuals were genotyped at 103 candidate genes using 1,536 SNPs selected with a tagging strategy that captures most of the genetic diversity in different populations. We show that relying solely on self-reported ethnicity is not sufficient to exclude population stratification and we present additional methods to identify and correct for stratification. Conclusions/Significance Our results highlight the importance of carefully addressing population stratification and of carefully “cleaning” the sample prior to analyses to obtain stronger signals of association and to avoid spurious results. PMID:18196181
Redshift data and statistical inference
NASA Technical Reports Server (NTRS)
Newman, William I.; Haynes, Martha P.; Terzian, Yervant
1994-01-01
Frequency histograms and the 'power spectrum analysis' (PSA) method, the latter developed by Yu & Peebles (1969), have been widely employed as techniques for establishing the existence of periodicities. We provide a formal analysis of these two classes of methods, including controlled numerical experiments, to better understand their proper use and application. In particular, we note that typical published applications of frequency histograms commonly employ far greater numbers of class intervals, or bins, than is advisable by statistical theory, sometimes giving rise to the appearance of spurious patterns. The PSA method generates a sequence of random numbers from observational data which, it is claimed, is exponentially distributed with unit mean and variance, essentially independent of the distribution of the original data. We show that the derived random process is nonstationary and produces a small but systematic bias in the usual estimates of the mean and variance. Although the derived variable may be reasonably described by an exponential distribution, the tail of the distribution is far removed from that of an exponential, thereby rendering statistical inference and confidence testing based on the tail of the distribution completely unreliable. Finally, we examine a number of astronomical examples wherein these methods have been used, giving rise to widespread acceptance of statistically unconfirmed conclusions.
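The over-binning effect described above can be illustrated directly: histogramming structureless uniform data with an excessive number of class intervals manufactures apparent peaks. A minimal sketch (the sample size and bin counts are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
data = rng.uniform(0.0, 1.0, size=n)  # no real structure by construction

def peak_contrast(data, bins):
    """Ratio of the fullest bin's count to the mean bin count:
    1.0 would be a perfectly flat histogram."""
    counts, _ = np.histogram(data, bins=bins)
    return counts.max() / counts.mean()

# A modest bin count yields a nearly flat histogram; 200 bins on only
# 500 points leaves ~2.5 counts per bin, so Poisson scatter alone
# produces dramatic-looking "peaks".
few = peak_contrast(data, 16)
many = peak_contrast(data, 200)
print("16 bins:", round(few, 2), " 200 bins:", round(many, 2))
```

The per-bin relative scatter grows as 1/√(expected count), so halving the bin width doubles the apparent "signal" of a chance fluctuation, which is why bin counts far beyond statistical guidelines invite spurious periodicities.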
Comments on using absolute spectrophotometry of Wolf-Rayet stars
NASA Technical Reports Server (NTRS)
Underhill, A. B.
1986-01-01
Garmany et al. (1984) have conducted a study involving spectrophotometric scans of 13 Wolf-Rayet stars. They found that the application of a 'standard' reddening law to the observed data gives spurious results in many cases. They also concluded that previous attempts to determine the intrinsic continua and the effective temperatures of Wolf-Rayet stars are inadequate. In the present study the conclusions of Garmany et al. are evaluated. According to this evaluation, Garmany et al. have not demonstrated, beyond a reasonable doubt, that the interstellar extinction law varies greatly from one Wolf-Rayet star to another. The procedure followed by Garmany et al. to find the apparent shape of the ultraviolet continuum of a Wolf-Rayet star is unsatisfactory for a number of reasons.
Community detection in complex networks using link prediction
NASA Astrophysics Data System (ADS)
Cheng, Hui-Min; Ning, Yi-Zi; Yin, Zhao; Yan, Chao; Liu, Xin; Zhang, Zhong-Yuan
2018-01-01
Community detection and link prediction are both of great significance in network analysis, which provide very valuable insights into topological structures of the network from different perspectives. In this paper, we propose a novel community detection algorithm with inclusion of link prediction, motivated by the question whether link prediction can be devoted to improving the accuracy of community partition. For link prediction, we propose two novel indices to compute the similarity between each pair of nodes, one of which aims to add missing links, and the other tries to remove spurious edges. Extensive experiments are conducted on benchmark data sets, and the results of our proposed algorithm are compared with two classes of baselines. In conclusion, our proposed algorithm is competitive, revealing that link prediction does improve the precision of community detection.
Independent bases on the spatial wavefunction of four-identical-particle systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiao, Shuyuan; Deng, Zhixuan; Chen, Hong
2013-12-15
We construct the independent bases on the spatial wavefunction of four-identical-particle systems, classified under the rotational group SO(3) and the permutation group S_4, using transformation coefficients that relate wavefunctions described in one set of internal coordinates to those in another. The basis functions for N ⩽ 2 are presented in explicit expressions based on the harmonic oscillator model. Such independent bases are expected to play a key role in the construction of the wavefunctions of five-quark states and in variational calculations of four-body systems. Our prescription avoids the spurious states and can be programmed for arbitrary N.
On Spurious Numerics in Solving Reactive Equations
NASA Technical Reports Server (NTRS)
Kotov, D. V; Yee, H. C.; Wang, W.; Shu, C.-W.
2013-01-01
The objective of this study is to gain a deeper understanding of the behavior of high-order shock-capturing schemes for problems with stiff source terms and discontinuities, and of the corresponding numerical prediction strategies. The studies by Yee et al. (2012) and Wang et al. (2012) focus only on solving the reactive system by the fractional step method using Strang splitting (Strang 1968). It is a common practice by developers in computational physics and engineering simulations to include a cut-off safeguard if densities fall outside the permissible range. Here we compare the spurious behavior of the same schemes when solving the fully coupled reactive system without Strang splitting vs. with Strang splitting. The comparison between the two procedures and the effects of a cut-off safeguard are the focus of the present study. The comparison of the performance of these schemes is largely based on the degree to which each method captures the correct location of the reaction front on coarse grids. Here 'coarse grids' means the standard mesh density required for accurate simulation of typical non-reacting flows of similar problem setup. It is remarked that, in order to resolve the sharp reaction front, local refinement beyond standard mesh density is still needed.
Karmon, Anatte; Sheiner, Eyal
2008-06-01
Preeclampsia is a major cause of maternal morbidity, although its precise etiology remains elusive. A number of studies suggest that urinary tract infection (UTI) during the course of gestation is associated with elevated risk for preeclampsia, while others have failed to prove such an association. In our medical center, pregnant women who were exposed to at least one UTI episode during pregnancy were 1.3 times more likely to have mild preeclampsia and 1.8 times more likely to have severe preeclampsia as compared to unexposed women. Our results are based on univariate analyses and are not adjusted for potential confounders. This editorial aims to discuss the relationship between urinary tract infection and preeclampsia, as well as examine the current problems regarding the interpretation of this association. Although the relationship between UTI and preeclampsia has been demonstrated in studies with various designs, carried out in a variety of settings, the nature of this association is unclear. By taking into account timeline, dose-response effects, treatment influences, and potential confounders, as well as by neutralizing potential biases, future studies may be able to clarify the relationship between UTI and preeclampsia by determining if it is causal, confounded, or spurious.
Explaining the Relationship between Employment and Juvenile Delinquency*
Staff, Jeremy; Osgood, D. Wayne; Schulenberg, John E.; Bachman, Jerald G.; Messersmith, Emily E.
2011-01-01
Most criminological theories predict an inverse relationship between employment and crime, but teenagers' involvement in paid work during the school year is positively correlated with delinquency and substance use. Whether the work-delinquency association is causal or spurious has long been debated. This study estimates the effect of paid work on juvenile delinquency using longitudinal data from the national Monitoring the Future project. We address issues of spuriousness by using a two-level hierarchical model to estimate the relationships of within-individual changes in juvenile delinquency and substance use to those in paid work and other explanatory variables. We also disentangle effects of actual employment from preferences for employment to provide insight about the likely role of time-varying selection factors tied to employment, delinquency, school engagement, and leisure activities. Whereas causal effects of employment would produce differences based on whether and how many hours respondents worked, we found significantly higher rates of crime and substance use among non-employed youth who preferred intensive versus moderate work. Our findings suggest the relationship between high-intensity work and delinquency results from preexisting factors that lead youth to desire varying levels of employment. PMID:21442045
Experimental and environmental factors affect spurious detection of ecological thresholds
Daily, Jonathan P.; Hitt, Nathaniel P.; Smith, David; Snyder, Craig D.
2012-01-01
Threshold detection methods are increasingly popular for assessing nonlinear responses to environmental change, but their statistical performance remains poorly understood. We simulated linear change in stream benthic macroinvertebrate communities and evaluated the performance of commonly used threshold detection methods based on model fitting (piecewise quantile regression [PQR]), data partitioning (nonparametric change point analysis [NCPA]), and a hybrid approach (significant zero crossings [SiZer]). We demonstrated that false detection of ecological thresholds (type I errors) and inferences on threshold locations are influenced by sample size, rate of linear change, and frequency of observations across the environmental gradient (i.e., sample-environment distribution, SED). However, the relative importance of these factors varied among statistical methods and between inference types. False detection rates were influenced primarily by user-selected parameters for PQR (τ) and SiZer (bandwidth) and secondarily by sample size (for PQR) and SED (for SiZer). In contrast, the location of reported thresholds was influenced primarily by SED. Bootstrapped confidence intervals for NCPA threshold locations revealed strong correspondence to SED. We conclude that the choice of statistical methods for threshold detection should be matched to experimental and environmental constraints to minimize false detection rates and avoid spurious inferences regarding threshold location.
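The type I error mode described above — a partitioning method reporting a threshold even when the underlying response is purely linear — can be illustrated with a minimal, hypothetical sketch (not the PQR/NCPA/SiZer implementations used in the study): a two-segment piecewise fit never has a larger residual error than a single line, so some "best" breakpoint always exists, and declaring it an ecological threshold requires a comparison against the linear null model.

```python
import numpy as np

rng = np.random.default_rng(42)
x = np.sort(rng.uniform(0, 10, 200))        # environmental gradient
y = 2.0 * x + rng.normal(0, 1.0, 200)       # purely linear response, no threshold

def sse_line(x, y):
    # residual sum of squares of an ordinary least-squares line
    A = np.column_stack([x, np.ones_like(x)])
    resid = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
    return float(resid @ resid)

sse_linear = sse_line(x, y)

# grid-search a "threshold": independent line fits on each side of a candidate breakpoint
best_sse, best_bp = np.inf, None
for i in range(10, len(x) - 10):
    sse = sse_line(x[:i], y[:i]) + sse_line(x[i:], y[i:])
    if sse < best_sse:
        best_sse, best_bp = sse, x[i]

# a breakpoint is always "found", even though the true response is linear
print(best_bp, sse_linear, best_sse)
```

Because the two-segment model nests the single line, best_sse <= sse_linear holds by construction; the location of the reported breakpoint largely tracks where observations are densest, mirroring the sample-environment distribution (SED) effect reported above.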
Impact of spurious shear on cosmological parameter estimates from weak lensing observables
Petri, Andrea; May, Morgan; Haiman, Zoltán; ...
2014-12-30
Residual errors in shear measurements, after corrections for instrument systematics and atmospheric effects, can impact cosmological parameters derived from weak lensing observations. Here we combine convergence maps from our suite of ray-tracing simulations with random realizations of spurious shear. This allows us to quantify the errors and biases of the triplet (Ωm, w, σ8) derived from the power spectrum (PS), as well as from three different sets of non-Gaussian statistics of the lensing convergence field: Minkowski functionals (MFs), low-order moments (LMs), and peak counts (PKs). Our main results are as follows: (i) We find an order of magnitude smaller biases from the PS than in previous work. (ii) The PS and LMs yield biases much smaller than the morphological statistics (MFs, PKs). (iii) For strictly Gaussian spurious shear with integrated amplitude as low as its current estimate of σ²sys ≈ 10⁻⁷, biases from the PS and LMs would be unimportant even for a survey with the statistical power of the Large Synoptic Survey Telescope. However, we find that for surveys larger than ≈100 deg², non-Gaussianity in the noise (not included in our analysis) will likely be important and must be quantified to assess the biases. (iv) The morphological statistics (MFs, PKs) introduce important biases even for Gaussian noise, which must be corrected in large surveys. The biases are in different directions in (Ωm, w, σ8) parameter space, allowing self-calibration by combining multiple statistics. Our results warrant follow-up studies with more extensive lensing simulations and more accurate spurious shear estimates.
Kasten, Florian H; Negahbani, Ehsan; Fröhlich, Flavio; Herrmann, Christoph S
2018-05-31
Amplitude modulated transcranial alternating current stimulation (AM-tACS) has recently been proposed as a possible solution to overcome the pronounced stimulation artifact encountered when recording brain activity during tACS. In theory, AM-tACS entails no power at its modulating frequency, thus avoiding the problem of spectral overlap between the brain signal of interest and the stimulation artifact. However, the current study demonstrates how weak non-linear transfer characteristics inherent to stimulation and recording hardware can reintroduce spurious artifacts at the modulation frequency. The input-output transfer functions (TFs) of different stimulation setups were measured. Setups included recordings of signal-generator and stimulator outputs and M/EEG phantom measurements. 6th-degree polynomial regression models were fitted to model the input-output TFs of each setup. The resulting TF models were applied to digitally generated AM-tACS signals to predict the frequency of spurious artifacts in the spectrum. All four setups measured for the study exhibited low-frequency artifacts at the modulation frequency and its harmonics when recording AM-tACS. Fitted TF models showed non-linear contributions significantly different from zero (all p < .05) and successfully predicted the frequency of artifacts observed in AM-signal recordings. Results suggest that even weak non-linearities of stimulation and recording hardware can lead to spurious artifacts at the modulation frequency and its harmonics. These artifacts were substantially larger than the alpha oscillations of a human subject in the MEG. The findings emphasize the need for more linear stimulation devices for AM-tACS and for careful analysis procedures that take low-frequency artifacts into account, to avoid confusing them with effects of AM-tACS on the brain.
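The mechanism reported above — a weak non-linearity demodulating the AM waveform and depositing power at the modulation frequency — can be reproduced with a toy numerical sketch (a hypothetical quadratic transfer function, not the 6th-degree models fitted in the study):

```python
import numpy as np

fs, T = 1000.0, 4.0                       # sample rate [Hz], duration [s]
t = np.arange(int(fs * T)) / fs
f_c, f_m, m = 220.0, 10.0, 1.0            # carrier, modulation frequency, depth

# ideal AM waveform: spectral lines only at f_c and f_c +/- f_m, none at f_m
am = (1 + m * np.sin(2 * np.pi * f_m * t)) * np.sin(2 * np.pi * f_c * t)

# weakly non-linear hardware: y = x + 0.05 x^2 (the quadratic term rectifies the envelope)
distorted = am + 0.05 * am**2

def power_at(sig, f):
    # magnitude of the DFT bin at frequency f (f * T is an integer here, so no leakage)
    spec = np.abs(np.fft.rfft(sig)) / len(sig)
    return float(spec[int(round(f * T))])

p_clean = power_at(am, f_m)
p_dist = power_at(distorted, f_m)
print(p_clean, p_dist)   # the quadratic term reintroduces a spurious component at f_m
```

Even this 5% quadratic term produces a baseband line exactly at the modulation frequency, which is why the abstract stresses linearity of the stimulation chain.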
High dynamic range electric field sensor for electromagnetic pulse detection.
Lin, Che-Yun; Wang, Alan X; Lee, Beom Suk; Zhang, Xingyu; Chen, Ray T
2011-08-29
We design a high dynamic range electric field sensor based on a domain-inverted electro-optic (E-O) polymer Y-fed directional coupler for electromagnetic wave detection. This electrode-less, all-optical, wideband electric field sensor is fabricated using standard processing for E-O polymer photonic devices. Experimental results demonstrate effective detection of electric fields from 16.7 V/m to 750 kV/m at a frequency of 1 GHz, and a spurious-free measurement range of 70 dB.
British media attacks on homeopathy: are they justified?
Vithoulkas, George
2008-04-01
Homeopathy is being attacked by the British media. These attacks draw support from irresponsible and unjustified claims by certain teachers of homeopathy. Such claims include the use of 'dream' and 'imaginative' methods for provings. For prescribing, some such teachers attempt to replace the laborious process of matching symptom picture and remedy with spurious theories based on 'signatures', sensations and other methods. Other irresponsible claims have also been made. These "new ideas" risk destroying the principles, theory, and practice of homeopathy.
Defect imaging for plate-like structures using diffuse field.
Hayashi, Takahiro
2018-04-01
Defect imaging with the scanning laser source (SLS) technique produces images of defects in a plate-like structure, but also spurious images caused by resonances and reverberations within the specimen. This study extended SLS defect imaging using diffuse-field concepts, by which the energy of laser-excited flexural waves can be estimated, in order to reduce the intensity of the spurious images. Experimental results for different excitation bandwidths and for specimens with different attenuation showed that clearer defect images are obtained with broadband chirp excitation and in specimens with low attenuation, both of which readily produce diffuse fields.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hatano, H.; Watanabe, T.
A new system was developed for the reciprocity calibration of acoustic emission transducers in Rayleigh-wave and longitudinal-wave sound fields. In order to reduce interference from spurious waves due to reflections and mode conversions, a large cylindrical block of forged steel was prepared as the transfer medium, and direct and spurious waves were discriminated on the basis of their arrival times. Frequency characteristics of velocity sensitivity to both the Rayleigh wave and the longitudinal wave were determined in the range of 50 kHz–1 MHz by means of electrical measurements, without the use of mechanical sound sources or reference transducers.
NASA Astrophysics Data System (ADS)
Yang, Xueming; Wu, Sihan; Xu, Jiangxin; Cao, Bingyang; To, Albert C.
2018-02-01
Although the AIREBO potential can describe the mechanical and thermal transport properties of carbon nanostructures well under normal conditions, previous studies have shown that it may overestimate the simulated mechanical properties of carbon nanostructures at extreme strains near fracture. It has remained unknown whether such overestimation also appears in the thermal transport of nanostructures. In this paper, the mechanical and thermal transport properties of graphene nanoribbons (GNRs) under extreme deformation conditions are studied by MD simulations using both the original and a modified AIREBO potential. Results show that the cutoff function of the original AIREBO potential produces an overestimation of the thermal conductivity at extreme strains near the fracture stage. Spurious heat conduction behavior appears; e.g., the thermal conductivity of GNRs does not decrease monotonically with increasing strain, and even shows a "V"-shaped, reversed and nonphysical trend. Phonon spectrum analysis shows that it also results in an artificial blue shift of the G peak and phonon stiffening of the optical phonon modes. The correlation between the spurious heat conduction behavior and the overestimation of mechanical properties near the fracture stage caused by the original AIREBO potential is explored and revealed.
Normal-inverse bimodule operation Hadamard transform ion mobility spectrometry.
Hong, Yan; Huang, Chaoqun; Liu, Sheng; Xia, Lei; Shen, Chengyin; Chu, Yannan
2018-10-31
In order to suppress or eliminate spurious peaks and improve the signal-to-noise ratio (SNR) of Hadamard transform ion mobility spectrometry (HT-IMS), a normal-inverse bimodule operation Hadamard transform ion mobility spectrometry (NIBOHT-IMS) technique was developed. In this novel technique, a normal and an inverse pseudo-random binary sequence (PRBS) are produced in sequential order by an ion gate controller and used to control the ion gate of the IMS, and the normal HT-IMS mobility spectrum and the inverse HT-IMS mobility spectrum are thereby obtained. A NIBOHT-IMS mobility spectrum is obtained by subtracting the inverse HT-IMS mobility spectrum from the normal HT-IMS mobility spectrum. Experimental results on the reactant ions demonstrate that the NIBOHT-IMS technique can significantly suppress or eliminate the spurious peaks and enhance the SNR. Furthermore, the gases CHCl3 and CH2Br2 were measured to evaluate the capability of detecting real samples. The results show that the NIBOHT-IMS technique is able to eliminate the spurious peaks and improve the SNR notably, not only for the detection of large ion signals but also for the detection of small ion signals.
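As background to the technique, the multiplexing step of a Hadamard transform measurement can be sketched as follows (a simplified ±1 Sylvester-Hadamard encoding with synthetic data; an actual HT-IMS instrument gates ions with a 0/1 PRBS, and the normal-inverse subtraction of the paper is not modeled here):

```python
import numpy as np

def sylvester_hadamard(n):
    # Sylvester construction: H_{2k} = [[H_k, H_k], [H_k, -H_k]], n a power of two
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

rng = np.random.default_rng(0)
n = 64
x = np.zeros(n)
x[[12, 30]] = [5.0, 2.0]                 # a sparse "mobility spectrum" with two ion peaks

H = sylvester_hadamard(n)
noise = rng.normal(0, 0.5, n)

# direct scan: one channel at a time, each measurement carries the full noise
direct = x + noise

# Hadamard multiplexing: measure +/-1-weighted sums of all channels, then invert
y = H @ x + noise                        # same per-measurement noise level
recovered = H.T @ y / n                  # H H^T = n I, so this inverts the encoding

err_direct = float(np.linalg.norm(direct - x))
err_hadamard = float(np.linalg.norm(recovered - x))
print(err_direct, err_hadamard)          # multiplexing averages the noise down
```

Because the encoding matrix is orthogonal up to a factor n, inversion averages the per-measurement noise over all channels; this is the SNR advantage that spurious peaks from imperfect gating can erode in practice.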
Yang, Ziheng; Zhu, Tianqi
2018-02-20
The Bayesian method is noted to produce spuriously high posterior probabilities for phylogenetic trees in analysis of large datasets, but the precise reasons for this overconfidence are unknown. In general, the performance of Bayesian selection of misspecified models is poorly understood, even though this is of great scientific interest since models are never true in real data analysis. Here we characterize the asymptotic behavior of Bayesian model selection and show that when the competing models are equally wrong, Bayesian model selection exhibits surprising and polarized behaviors in large datasets, supporting one model with full force while rejecting the others. If one model is slightly less wrong than the other, the less wrong model will eventually win when the amount of data increases, but the method may become overconfident before it becomes reliable. We suggest that this extreme behavior may be a major factor for the spuriously high posterior probabilities for evolutionary trees. The philosophical implications of our results to the application of Bayesian model selection to evaluate opposing scientific hypotheses are yet to be explored, as are the behaviors of non-Bayesian methods in similar situations.
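The polarized behavior described above is easy to reproduce in a toy setting (a hypothetical Gaussian example, not the phylogenetic analysis itself): data are drawn from N(0, 1), while the two candidate models fix the mean at +0.5 and -0.5. Both models are equally wrong, yet the log-Bayes factor performs an unbiased random walk whose magnitude grows like √n, so the posterior probability is typically pinned near 0 or 1 in large samples.

```python
import numpy as np

rng = np.random.default_rng(7)
delta = 0.5
x = rng.normal(0.0, 1.0, 100_000)   # true mean 0: both models below are equally wrong

# Models M1: N(+delta, 1) and M2: N(-delta, 1); the per-point log-likelihood
# ratio is log phi(x - delta) - log phi(x + delta) = 2 * delta * x
log_bf = 2 * delta * np.cumsum(x)   # log Bayes factor of M1 vs M2 after n points

# posterior probability of M1 under equal prior odds (clipped to avoid overflow)
post_m1 = 1.0 / (1.0 + np.exp(-np.clip(log_bf, -700, 700)))
print(post_m1[99], post_m1[-1])     # small n: often undecided; large n: near 0 or 1
```

The random walk never settles: a different seed can pin the posterior at the opposite extreme, which is exactly the overconfident, polarized support the abstract describes.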
Lu, Deyu
2016-08-05
A systematic route to go beyond the exact exchange plus random phase approximation (RPA) is to include a physical exchange-correlation kernel in the adiabatic-connection fluctuation-dissipation theorem. Previously [D. Lu, J. Chem. Phys. 140, 18A520 (2014)], we found that non-local kernels with a screening length depending on the local Wigner-Seitz radius, r_s(r), suffer an error associated with a spurious long-range repulsion in van der Waals bounded systems, which deteriorates the binding energy curve as compared to RPA. Here, we analyze the source of the error and propose to replace r_s(r) by a global, average r_s in the kernel. Exemplary studies with the Corradini, del Sole, Onida, and Palummo kernel show that while this change does not affect the already outstanding performance in crystalline solids, using an average r_s significantly reduces the spurious long-range tail in the exchange-correlation kernel in van der Waals bounded systems. Finally, when this method is combined with further corrections using local dielectric response theory, the binding energy of the Kr dimer is improved three times as compared to RPA.
NASA Astrophysics Data System (ADS)
Nemati, Maedeh; Shateri Najaf Abady, Ali Reza; Toghraie, Davood; Karimipour, Arash
2018-01-01
The incorporation of different equations of state into a single-component multiphase lattice Boltzmann model is considered in this paper. The original pseudopotential model is first detailed, and several cubic equations of state, the Redlich-Kwong, Redlich-Kwong-Soave, and Peng-Robinson, are then incorporated into the lattice Boltzmann model. The details of phase separation in these non-ideal single-component systems are presented by comparing the numerical simulations in terms of density ratios and spurious currents. The paper demonstrates that the scheme for the inter-particle interaction force term, as well as the method by which the force term is incorporated, matters for achieving more accurate and stable results. Among the available force-incorporation methods, the velocity shifting method is shown to give accurate and stable results. The Kupershtokh scheme also makes it possible to achieve large density ratios (up to 10⁴) and to reproduce the coexistence curve with high accuracy. A significant reduction of the spurious currents at the vapor-liquid interface is another observation. High density ratios and reduced spurious currents were obtained with the Redlich-Kwong-Soave and Peng-Robinson EOSs, in closer agreement with the Maxwell construction results.
VizieR Online Data Catalog: VLA-COSMOS 3 GHz Large Project (Smolcic+, 2017)
NASA Astrophysics Data System (ADS)
Smolcic, V.; Novak, M.; Bondi, M.; Ciliegi, P.; Mooley, K. P.; Schinnerer, E.; Zamorani, G.; Navarrete, F.; Bourke, S.; Karim, A.; Vardoulaki, E.; Leslie, S.; Delhaize, J.; Carilli, C. L.; Myers, S. T.; Baran, N.; Delvecchio, I.; Miettinen, O.; Banfield, J.; Balokovic, M.; Bertoldi, F.; Capak, P.; Frail, D. A.; Hallinan, G.; Hao, H.; Herrera Ruiz, N.; Horesh, A.; Ilbert, O.; Intema, H.; Jelic, V.; Klockner, H.-R.; Krpan, J.; Kulkarni, S. R.; McCracken, H.; Laigle, C.; Middleberg, E.; Murphy, E.; Sargent, M.; Scoville, N. Z.; Sheth, K.
2016-10-01
The catalog contains sources selected down to a 5σ (σ~2.3uJy/beam) threshold. This catalog can be used for statistical analyses, accompanied by the corrections given in the data & catalog release paper. All completeness & bias corrections and source counts presented in the paper were calculated using this sample. The total fraction of spurious sources in the COSMOS 2 sq.deg. is below 2.7% within this catalog. However, the fraction of spurious sources increases up to 24% at 5.0≤S/N<5.5; it is therefore advised to use S/N≥5.5 for single component sources (MULTI=0). The total fraction of spurious sources in the COSMOS 2 sq.deg. within such a selected sample is below 0.4%, and the fraction of spurious sources is below 3% even at the lowest S/N (=5.5).
Catalog Notes:
1. Maximum ID is 10966 although there are 10830 sources. Some IDs were removed by joining them into multi-component sources.
2. Peak surface brightness of sources [uJy/beam] is not reported, but can be obtained by multiplying SNR with RMS.
3. High NPIX usually indicates extended or very bright sources.
4. Reported positional errors on resolved and extended sources should be considered lower limits.
5. Multicomponent sources have errors and S/N column values set to -99.0
Additional data information:
Catalog date: 21-Mar-2016
Source extractor: BLOBCAT v1.2 (http://blobcat.sourceforge.net/)
Observations: 384 hours, VLA, S-band (2-4GHz), A+C array, 192 pointings
Imaging software: CASA v4.2.2 (https://casa.nrao.edu/)
Imaging algorithm: Multiscale multifrequency synthesis on single pointings
Mosaic size: 30000x30000 pixels (3.3 GB)
Pixel size: 0.2x0.2 arcsec2
Median rms noise in the COSMOS 2 sq.deg.: 2.3uJy/beam
Beam is circular with FWHM=0.75 arcsec
Bandwidth-smearing peak correction: 0% (no corrections applied)
Resolved criteria: Sint/Speak>1+6*snr^(-1.44)
Total area covered: 2.6 sq.deg. (1 data file).
Accurate indel prediction using paired-end short reads
2013-01-01
Background One of the major open challenges in next generation sequencing (NGS) is the accurate identification of structural variants such as insertions and deletions (indels). Current methods for indel calling assign scores to different types of evidence or counter-evidence for the presence of an indel, such as the number of split read alignments spanning the boundaries of a deletion candidate or reads that map within a putative deletion. Candidates with a score above a manually defined threshold are then predicted to be true indels. As a consequence, structural variants detected in this manner contain many false positives. Results Here, we present a machine learning based method which is able to discover and distinguish true from false indel candidates in order to reduce the false positive rate. Our method identifies indel candidates using a discriminative classifier based on features of split read alignment profiles and trained on true and false indel candidates that were validated by Sanger sequencing. We demonstrate the usefulness of our method with paired-end Illumina reads from 80 genomes of the first phase of the 1001 Genomes Project (http://www.1001genomes.org) in Arabidopsis thaliana. Conclusion In this work we show that indel classification is a necessary step to reduce the number of false positive candidates. We demonstrate that omitting this classification may lead to spurious biological interpretations. The software is available at: http://agkb.is.tuebingen.mpg.de/Forschung/SV-M/. PMID:23442375
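The classification step described above can be sketched generically (a hypothetical logistic-regression toy on simulated split-read features, not the published SV-M classifier or its validated training data): true deletions tend to show many breakpoint-spanning split reads and few reads mapping inside the deleted interval, while false candidates show the opposite pattern.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(n, true_indel):
    # hypothetical features: split reads spanning the breakpoint, reads inside the deletion
    if true_indel:
        split = rng.poisson(8, n)    # true deletions: many split reads, few internal reads
        inside = rng.poisson(1, n)
    else:
        split = rng.poisson(2, n)    # false candidates: the opposite pattern
        inside = rng.poisson(6, n)
    return np.column_stack([split, inside]).astype(float)

X = np.vstack([simulate(300, True), simulate(300, False)])
y = np.concatenate([np.ones(300), np.zeros(300)])

# plain logistic regression fitted by gradient descent (with intercept column)
Xb = np.column_stack([X, np.ones(len(X))])
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-np.clip(Xb @ w, -500, 500)))
    w -= 0.01 * Xb.T @ (p - y) / len(y)

pred = (1.0 / (1.0 + np.exp(-np.clip(Xb @ w, -500, 500))) > 0.5).astype(float)
accuracy = float((pred == y).mean())
print(accuracy)
```

With well-separated feature distributions the classifier removes most false candidates; in practice the decisive ingredient is the Sanger-validated training set the authors describe, not the particular classifier.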
Assessing the determinants of evolutionary rates in the presence of noise.
Plotkin, Joshua B; Fraser, Hunter B
2007-05-01
Although protein sequences are known to evolve at vastly different rates, little is known about what determines their rate of evolution. However, a recent study using principal component regression (PCR) has concluded that evolutionary rates in yeast are primarily governed by a single determinant related to translation frequency. Here, we demonstrate that noise in biological data can confound PCRs, leading to spurious conclusions. When equalizing noise levels across 7 predictor variables used in previous studies, we find no evidence that protein evolution is dominated by a single determinant. Our results indicate that a variety of factors--including expression level, gene dispensability, and protein-protein interactions--may independently affect evolutionary rates in yeast. More accurate measurements or more sophisticated statistical techniques will be required to determine which one, if any, of these factors dominates protein evolution.
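The confounding effect of unequal noise can be illustrated with a hypothetical simulation (not the yeast data): two predictors contribute equally to the response, but the one measured with more noise appears far less important, which is how a single "dominant" determinant can emerge spuriously.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000

x1 = rng.normal(0, 1, n)                 # e.g. expression level, measured precisely
x2 = rng.normal(0, 1, n)                 # e.g. dispensability, measured noisily
y = x1 + x2 + rng.normal(0, 0.5, n)      # both predictors matter equally

x1_obs = x1 + rng.normal(0, 0.2, n)      # low measurement noise
x2_obs = x2 + rng.normal(0, 2.0, n)      # high measurement noise

r1 = float(np.corrcoef(x1_obs, y)[0, 1])
r2 = float(np.corrcoef(x2_obs, y)[0, 1])
print(r1, r2)   # equal true effects, very different apparent ones
```

Measurement noise attenuates the observed correlation toward zero, so a principal component regression on such data will load on the precisely measured variable, exactly the artifact the abstract warns about; equalizing noise levels across predictors removes it.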
Quantifying entanglement in two-mode Gaussian states
NASA Astrophysics Data System (ADS)
Tserkis, Spyros; Ralph, Timothy C.
2017-12-01
Entangled two-mode Gaussian states are a key resource for quantum information technologies such as teleportation, quantum cryptography, and quantum computation, so quantification of Gaussian entanglement is an important problem. Entanglement of formation is unanimously considered a proper measure of quantum correlations, but for arbitrary two-mode Gaussian states no analytical form is currently known. In contrast, logarithmic negativity is a measure that is straightforward to calculate and so has been adopted by most researchers, even though it is a less faithful quantifier. In this work, we derive an analytical lower bound for entanglement of formation of generic two-mode Gaussian states, which becomes tight for symmetric states and for states with balanced correlations. We define simple expressions for entanglement of formation in physically relevant situations and use these to illustrate the problematic behavior of logarithmic negativity, which can lead to spurious conclusions.
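For context, the logarithmic negativity that the abstract contrasts with entanglement of formation has a simple closed form for any two-mode Gaussian state with covariance matrix σ written in 2×2 block form σ = [A, C; Cᵀ, B] (a standard result, stated here in the convention where the vacuum covariance is I/2; it is not specific to this paper):

```latex
E_{\mathcal{N}}(\sigma) = \max\!\left[0,\; -\ln\!\left(2\tilde{\nu}_-\right)\right],
\qquad
\tilde{\nu}_-^2 = \frac{\tilde{\Delta} - \sqrt{\tilde{\Delta}^2 - 4\det\sigma}}{2},
\qquad
\tilde{\Delta} = \det A + \det B - 2\det C,
```

where ν̃₋ is the smallest symplectic eigenvalue of the partially transposed state. Its ease of computation explains its popularity, while the lack of a comparable closed form for entanglement of formation motivates the analytical lower bound derived in the paper.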
Regression models for analyzing costs and their determinants in health care: an introductory review.
Gregori, Dario; Petrinco, Michele; Bo, Simona; Desideri, Alessandro; Merletti, Franco; Pagano, Eva
2011-06-01
This article aims to describe the various approaches in multivariable modelling of healthcare costs data and to synthesize the respective criticisms as proposed in the literature. We present regression methods suitable for the analysis of healthcare costs and then apply them to an experimental setting in cardiovascular treatment (COSTAMI study) and an observational setting in diabetes hospital care. We show how methods can produce different results depending on the degree of matching between the underlying assumptions of each method and the specific characteristics of the healthcare problem. The matching of healthcare cost models to the analytic objectives and characteristics of the data available to a study requires caution. The study results and interpretation can be heavily dependent on the choice of model with a real risk of spurious results and conclusions.
Describing excited state relaxation and localization in TiO2 nanoparticles using TD-DFT
Berardo, Enrico; Hu, Han -Shi; van Dam, Hubertus J. J.; ...
2014-02-26
We have investigated the description of excited state relaxation in naked and hydrated TiO2 nanoparticles using Time-Dependent Density Functional Theory (TD-DFT) with three common hybrid exchange-correlation (XC) potentials: B3LYP, CAM-B3LYP and BHLYP. Use of TD-CAM-B3LYP and TD-BHLYP yields qualitatively similar results for all structures, which are also consistent with predictions of coupled cluster theory for small particles. TD-B3LYP, in contrast, is found to make rather different predictions, including apparent conical intersections for certain particles that are not observed with TD-CAM-B3LYP nor with TD-BHLYP. In line with our previous observations for vertical excitations, the issue with TD-B3LYP appears to be the inherent tendency of TD-B3LYP, and other XC potentials with no or a low percentage of Hartree-Fock-like exchange, to spuriously stabilize the energy of charge-transfer (CT) states. Even in the case of hydrated particles, for which vertical excitations are generally well described with all XC potentials, the use of TD-B3LYP appears to result in CT problems for certain particles. We hypothesize that the spurious stabilization of CT states by TD-B3LYP may even drive the excited state optimizations to different excited state geometries than those obtained using TD-CAM-B3LYP or TD-BHLYP. In conclusion, focusing on the TD-CAM-B3LYP and TD-BHLYP results, excited state relaxation in naked and hydrated TiO2 nanoparticles is predicted to be associated with a large Stokes’ shift.
Buhule, Olive D.; Minster, Ryan L.; Hawley, Nicola L.; Medvedovic, Mario; Sun, Guangyun; Viali, Satupaitea; Deka, Ranjan; McGarvey, Stephen T.; Weeks, Daniel E.
2014-01-01
Background: Batch effects in DNA methylation microarray experiments can lead to spurious results if not properly handled during the plating of samples. Methods: Two pilot studies examining the association of DNA methylation patterns across the genome with obesity in Samoan men were investigated for chip- and row-specific batch effects. For each study, the DNA of 46 obese men and 46 lean men were assayed using Illumina's Infinium HumanMethylation450 BeadChip. In the first study (Sample One), samples from obese and lean subjects were examined on separate chips. In the second study (Sample Two), the samples were balanced on the chips by lean/obese status, age group, and census region. We used the methylumi, wateRmelon, and limma R packages, as well as ComBat, to analyze the data. Principal component analysis and linear regression were, respectively, employed to identify the top principal components and to test for their association with the batches and lean/obese status. To identify differentially methylated positions (DMPs) between obese and lean males at each locus, we used a moderated t-test. Results: Chip effects were effectively removed from Sample Two but not Sample One. In addition, dramatic differences were observed between the two sets of DMP results. After “removing” batch effects with ComBat, Sample One had 94,191 probes differentially methylated at a q-value threshold of 0.05 while Sample Two had zero differentially methylated probes. The disparate results from Sample One and Sample Two likely arise due to the confounding of lean/obese status with chip and row batch effects. Conclusion: Even the best possible statistical adjustments for batch effects may not completely remove them. Proper study design is vital for guarding against spurious findings due to such effects. PMID:25352862
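The confounded design of Sample One can be mimicked with a toy simulation (hypothetical data, not the Samoan methylation arrays): when case/control status coincides with chip assignment, the leading principal component recovers the chip effect, and no post hoc adjustment can separate it from the biology.

```python
import numpy as np

rng = np.random.default_rng(11)
n_samples, n_probes = 40, 500

# confounded design: all cases on chip 0, all controls on chip 1 (like Sample One)
batch = np.repeat([0, 1], n_samples // 2)
status = batch.copy()                      # lean/obese perfectly confounded with chip

data = rng.normal(0, 1, (n_samples, n_probes))
data += batch[:, None] * 1.5               # additive chip effect on every probe

# PCA via SVD of the centered data matrix
centered = data - data.mean(axis=0)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
pc1 = u[:, 0] * s[0]

r_batch = float(abs(np.corrcoef(pc1, batch)[0, 1]))
print(r_batch)   # PC1 is essentially the chip assignment
```

Since status equals batch here, any probe-wise obese-versus-lean test also captures the chip shift; balancing status across chips, as in Sample Two, is the only fix.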
Importance of elastic finite-size effects: Neutral defects in ionic compounds
Burr, P. A.; Cooper, M. W. D.
2017-09-15
Small system sizes are a well-known source of error in DFT calculations, yet computational constraints frequently dictate the use of small supercells, often as small as 96 atoms in oxides and compound semiconductors. In ionic compounds, electrostatic finite-size effects have been well characterized, but self-interaction of charge-neutral defects is often discounted or assumed to follow an asymptotic behavior and thus easily corrected with linear elastic theory. Here we show that elastic effects are also important in the description of defects in ionic compounds and can lead to qualitatively incorrect conclusions if inadequately small supercells are used; moreover, the spurious self-interaction does not follow the behavior predicted by linear elastic theory. Considering the exemplar cases of metal oxides with fluorite structure, we show that numerous previous studies, employing 96-atom supercells, misidentify the ground-state structure of (charge-neutral) Schottky defects. We show that the error is eliminated by employing larger cells (324, 768, and 1500 atoms), and careful analysis determines that elastic, not electrostatic, effects are responsible. The spurious self-interaction was also observed in non-oxide ionic compounds irrespective of the computational method used, thereby resolving long-standing discrepancies between DFT and force-field methods, previously attributed to the level of theory. The surprising magnitude of the elastic effects is a cautionary tale for defect calculations in ionic materials, particularly when employing computationally expensive methods (e.g., hybrid functionals) or when modeling large defect clusters. We propose two computationally practicable methods to test the magnitude of the elastic self-interaction in any ionic system. In commonly studied oxides, where electrostatic effects would be expected to be dominant, it is the elastic effects that dictate the need for larger supercells: greater than 96 atoms.
Importance of elastic finite-size effects: Neutral defects in ionic compounds
NASA Astrophysics Data System (ADS)
Burr, P. A.; Cooper, M. W. D.
2017-09-01
Small system sizes are a well-known source of error in density functional theory (DFT) calculations, yet computational constraints frequently dictate the use of small supercells, often as small as 96 atoms in oxides and compound semiconductors. In ionic compounds, electrostatic finite-size effects have been well characterized, but self-interaction of charge-neutral defects is often discounted or assumed to follow an asymptotic behavior and thus easily corrected with linear elastic theory. Here we show that elastic effects are also important in the description of defects in ionic compounds and can lead to qualitatively incorrect conclusions if inadequately small supercells are used; moreover, the spurious self-interaction does not follow the behavior predicted by linear elastic theory. Considering the exemplar cases of metal oxides with fluorite structure, we show that numerous previous studies, employing 96-atom supercells, misidentify the ground-state structure of (charge-neutral) Schottky defects. We show that the error is eliminated by employing larger cells (324, 768, and 1500 atoms), and careful analysis determines that elastic, not electrostatic, effects are responsible. The spurious self-interaction was also observed in nonoxide ionic compounds irrespective of the computational method used, thereby resolving long-standing discrepancies between DFT and force-field methods, previously attributed to the level of theory. The surprising magnitude of the elastic effects is a cautionary tale for defect calculations in ionic materials, particularly when employing computationally expensive methods (e.g., hybrid functionals) or when modeling large defect clusters. We propose two computationally practicable methods to test the magnitude of the elastic self-interaction in any ionic system. In commonly studied oxides, where electrostatic effects would be expected to be dominant, it is the elastic effects that dictate the need for larger supercells: greater than 96 atoms.
Stress evaluation in displacement-based 2D nonlocal finite element method
NASA Astrophysics Data System (ADS)
Pisano, Aurora Angela; Fuschi, Paolo
2018-06-01
The evaluation of the stress field within a nonlocal version of the displacement-based finite element method is addressed. With the aid of two numerical examples, it is shown that spurious oscillations of the computed nonlocal stresses arise at sections (or zones) of macroscopic inhomogeneity of the examined structures. It is also shown that this drawback, which renders the numerical stress solution unreliable, can be viewed as analogous to the so-called locking phenomenon in FEM, a subject widely debated in the early seventies. It is proved that a well-known remedy for locking, namely the reduced integration technique, can also be applied successfully in the nonlocal elasticity context.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlin, P W
1989-06-01
As part of US Department of Energy-sponsored research on wind energy, a Mod-O wind turbine was used to drive a variable-speed, wound-rotor induction generator. Energy resulting from the slip-frequency voltage in the generator rotor was rectified to dc, inverted back to utility-frequency ac, and injected into the power line. Spurious shifting frequencies displayed in the generator output by a spectrum analyzer were traced to ripple on the dc link. No resonances of any of these moving frequencies were observed despite the presence of a bank of power-factor-correcting capacitors.
Origin of Low-Energy Spurious Peaks in Spectroscopic Measurements With Silicon Detectors
Giacomini, Gabriele; Huber, Alan; Redus, Robert; ...
2017-11-13
We report that when an uncollimated radioactive X-ray source illuminates a silicon PIN sensor, some ionizing events are generated in the nonimplanted gap between the active area of the sensor and the guard rings (GRs). Carriers can be collected by floating electrodes, i.e., electron accumulation layers at the silicon/oxide interface and floating GRs. The crosstalk signals generated by these events create spurious peaks, replicas of the main peaks at either lower amplitude or of opposite polarity. We explain this phenomenon as crosstalk caused by charge collected on these floating electrodes, which can be analyzed by means of an extension of the Ramo theorem.
An efficient method to compute spurious end point contributions in PO solutions. [Physical Optics
NASA Technical Reports Server (NTRS)
Gupta, Inder J.; Burnside, Walter D.; Pistorius, Carl W. I.
1987-01-01
A method is given to compute the spurious endpoint contributions in the physical optics solution for electromagnetic scattering from conducting bodies. The method is applicable to general three-dimensional structures. The only information required to use the method is the radius of curvature of the body at the shadow boundary. Thus, the method is very efficient for numerical computations. As an illustration, the method is applied to several bodies of revolution to compute the endpoint contributions for backscattering in the case of axial incidence. It is shown that in high-frequency situations, the endpoint contributions obtained using the method are equal to the true endpoint contributions.
Application of the algebraic RNG model for transition simulation. [renormalization group theory
NASA Technical Reports Server (NTRS)
Lund, Thomas S.
1990-01-01
The algebraic form of the RNG model of Yakhot and Orszag (1986) is investigated as a transition model for the Reynolds-averaged boundary layer equations. It is found that the cubic equation for the eddy viscosity contains both a jump discontinuity and one spurious root. An as yet unpublished transformation to a quartic equation is shown to remove the numerical difficulties associated with the discontinuity, but only at the expense of merging the physical and spurious roots of the cubic. Jumps between the branches of the resulting multiple-valued solution are found to lead to oscillations in flat-plate transition calculations. Aside from the oscillations, the transition behavior is qualitatively correct.
Adaptive Numerical Dissipative Control in High Order Schemes for Multi-D Non-Ideal MHD
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sjoegreen, B.
2004-01-01
The goal is to extend our adaptive numerical dissipation control in high order filter schemes and our new divergence-free methods for ideal MHD to non-ideal MHD that includes viscosity and resistivity. The key idea consists of automatic detection of different flow features as distinct sensors to signal the appropriate type and amount of numerical dissipation/filter where needed, leaving the rest of the region free of numerical dissipation contamination. These scheme-independent detectors are capable of distinguishing shocks/shears, flame sheets, turbulent fluctuations and spurious high-frequency oscillations. The detection algorithm is based on an artificial compression method (ACM) (for shocks/shears) and redundant multi-resolution wavelets (WAV) (for all of the above types of flow feature). These filter approaches also provide a natural and efficient way to minimize the Div(B) numerical error. The filter scheme consists of sixth-order or higher non-dissipative spatial difference operators as the base scheme for the inviscid flux derivatives. If necessary, a small amount of high order linear dissipation is used to remove spurious high-frequency oscillations. For example, an eighth-order centered linear dissipation (AD8) might be included in conjunction with a spatially sixth-order base scheme. The inviscid difference operator is applied twice for the viscous flux derivatives. After the completion of a full time step of the base scheme, the solution is adaptively filtered by the product of a 'flow detector' and the 'nonlinear dissipative portion' of a high-resolution shock-capturing scheme. In addition, the scheme-independent wavelet flow detector can be used in conjunction with spatially compact, spectral or spectral element type base schemes.
The ACM and wavelet filter schemes using the dissipative portion of a second-order shock-capturing scheme with sixth-order spatial central base scheme for both the inviscid and viscous MHD flux derivatives and a fourth-order Runge-Kutta method are denoted.
Use of edge-based finite elements for solving three dimensional scattering problems
NASA Technical Reports Server (NTRS)
Chatterjee, A.; Jin, J. M.; Volakis, John L.
1991-01-01
Edge-based finite elements are free from the drawbacks associated with node-based vectorial finite elements and are, therefore, ideal for solving 3-D scattering problems. The finite element discretization using edge elements is validated by solving for the resonant frequencies of a closed, inhomogeneously filled metallic cavity. Great improvements in accuracy are observed over the classical node-based approach, with no penalty in computational time and with the expected absence of spurious modes. A performance comparison between edge-based tetrahedral and rectangular brick elements is carried out, and tetrahedral elements are found to be more accurate than rectangular bricks for a given storage requirement. A detailed formulation for the scattering problem, with various approaches for terminating the finite element mesh, is also presented.
Aland, Sebastian; Lowengrub, John; Voigt, Axel
2012-10-01
Colloid particles that are partially wetted by two immiscible fluids can become confined to fluid-fluid interfaces. At sufficiently high volume fractions, the colloids may jam and the interface may crystallize. The fluids together with the interfacial colloids form an emulsion with interesting material properties and offer an important route to new soft materials. A promising approach to simulate these emulsions was presented in Aland et al. [Phys. Fluids 23, 062103 (2011)], where a Navier-Stokes-Cahn-Hilliard model for the macroscopic two-phase fluid system was combined with a surface phase-field-crystal model for the microscopic colloidal particles along the interface. Unfortunately, this model leads to spurious velocities, which require very fine spatial and temporal resolution to simulate accurately and stably. In this paper we develop an improved Navier-Stokes-Cahn-Hilliard-surface phase-field-crystal model based on the principles of mass conservation and thermodynamic consistency. To validate our approach, we derive a sharp interface model and show agreement with the improved diffuse interface model. Using simple flow configurations, we show that the new model has much better properties and does not lead to spurious velocities. Finally, we demonstrate the solid-like behavior of the crystallized interface by simulating the fall of a solid ball through a colloid-laden multiphase fluid.
Pelletier, Mathew G; Viera, Joseph A; Wanjura, John; Holt, Greg
2010-01-01
The use of microwave imaging is becoming more prevalent for detection of interior hidden defects in manufactured and packaged materials. In applications for detection of hidden moisture, microwave tomography can be used to image the material and then perform an inverse calculation to derive an estimate of the variability of the hidden material, such as internal moisture, thereby alerting personnel to damaging levels of hidden moisture before material degradation occurs. One impediment to this type of imaging occurs when nearby objects create strong reflections that produce destructive and constructive interference at the receiver as the material is conveyed past the imaging antenna array. In an effort to remove the influence of local proximity reflectors, such as metal bale ties, research was conducted to develop an algorithm for removing their influence from the microwave images. This research effort produced a technique, based upon the use of ultra-wideband signals, for the removal of spurious reflections created by local proximity reflectors. This improvement enables accurate microwave measurements of moisture in products such as cotton bales, as well as of other physical properties such as density or material composition. The proposed algorithm was shown to reduce errors by a 4:1 ratio and is an enabling technology for imaging applications in the presence of metal bale ties.
Sul, Jae Hoon; Bilow, Michael; Yang, Wen-Yun; Kostem, Emrah; Furlotte, Nick; He, Dan; Eskin, Eleazar
2016-03-01
Although genome-wide association studies (GWASs) have discovered numerous novel genetic variants associated with many complex traits and diseases, those genetic variants typically explain only a small fraction of phenotypic variance. Factors that account for phenotypic variance include environmental factors and gene-by-environment interactions (GEIs). Recently, several studies have conducted genome-wide gene-by-environment association analyses and demonstrated important roles of GEIs in complex traits. One of the main challenges in these association studies is to control effects of population structure that may cause spurious associations. Many studies have analyzed how population structure influences statistics of genetic variants and developed several statistical approaches to correct for it. However, the impact of population structure on GEI statistics in GWASs has not been extensively studied, nor have methods been designed to correct GEI statistics for population structure. In this paper, we show both analytically and empirically that population structure may cause spurious GEIs, using both simulation and two GWAS datasets to support our finding. We propose a statistical approach based on mixed models to account for population structure in GEI statistics. We find that our approach effectively controls for population structure in statistics for GEIs as well as for genetic variants.
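One mechanism by which structure produces spurious GEIs is easy to simulate: if two subpopulations differ both in allele frequency and in how the environment acts on the phenotype, a non-causal variant can appear to interact with the environment. The sketch below is illustrative only; it uses a fixed-effect population covariate as a crude stand-in for the authors' mixed-model correction, and all parameter values are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
pop = rng.integers(0, 2, n)                        # two subpopulations
g = rng.binomial(2, np.where(pop == 1, 0.9, 0.1))  # allele frequency differs by pop
e = rng.standard_normal(n)                         # environmental exposure
# Phenotype: a population effect plus an environment effect that acts only in
# one subpopulation -- the tested variant g has NO effect and NO interaction.
y = 2.0 * pop + 0.5 * e * pop + 0.3 * rng.standard_normal(n)

def gxe_beta(cols):
    """Least-squares fit; returns the coefficient of the last column (g*e)."""
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[-1]

one = np.ones(n)
b_naive = gxe_beta([one, g, e, g * e])              # structure ignored
b_adj = gxe_beta([one, pop, e, pop * e, g, g * e])  # structure modelled
print(f"GxE estimate: naive={b_naive:.3f}, structure-adjusted={b_adj:.3f}")
```

Because g proxies population membership, the naive fit attributes the population-specific environment effect to a gene-by-environment interaction; once structure is modelled, the estimate collapses toward zero.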
NASA Astrophysics Data System (ADS)
De Filippis, G.; Noël, J. P.; Kerschen, G.; Soria, L.; Stephan, C.
2017-09-01
The introduction of the frequency-domain nonlinear subspace identification (FNSI) method in 2013 constitutes one in a series of recent attempts toward developing a realistic, first-generation framework applicable to complex structures. While this method showed promising capabilities when applied to academic structures, it is still confronted with a number of limitations that need to be addressed. In particular, the removal of nonphysical poles in the identified nonlinear models is a distinct challenge. In the present paper, it is proposed as a first contribution to operate directly on the identified state-space matrices to carry out spurious pole removal. A modal-space decomposition of the state and output matrices is examined to discriminate genuine from numerical poles, prior to estimating the extended input and feedthrough matrices. The final state-space model thus contains physical information only and naturally leads to nonlinear coefficients free of spurious variations. Besides spurious variations due to nonphysical poles, vibration modes lying outside the frequency band of interest may also produce drifts of the nonlinear coefficients. The second contribution of the paper is to include residual terms accounting for the existence of these modes. The proposed improved FNSI methodology is validated numerically and experimentally using a full-scale structure, the Morane-Saulnier Paris aircraft.
Finite element procedures for time-dependent convection-diffusion-reaction systems
NASA Technical Reports Server (NTRS)
Tezduyar, T. E.; Park, Y. J.; Deans, H. A.
1988-01-01
New finite element procedures based on the streamline-upwind/Petrov-Galerkin formulations are developed for time-dependent convection-diffusion-reaction equations. These procedures minimize spurious oscillations for convection-dominated and reaction-dominated problems. The results obtained for representative numerical examples are accurate with minimal oscillations. As a special application problem, the single-well chemical tracer test (a procedure for measuring oil remaining in a depleted field) is simulated numerically. The results show the importance of temperature effects on the interpreted value of residual oil saturation from such tests.
Time-domain model of gyroklystrons with diffraction power input and output
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ginzburg, N. S., E-mail: ginzburg@appl.sci-nnov.ru; Rozental, R. M.; Sergeev, A. S.
A time-domain theory of gyroklystrons with diffraction input and output has been developed. The theory is based on the description of the wave excitation and propagation by a parabolic equation. The results of the simulations are in good agreement with the experimental studies of two-cavity gyroklystrons operating at the first and second cyclotron harmonics. Along with the basic characteristics of the amplification regimes, such as the gain and efficiency, the developed method makes it possible to define the conditions of spurious self-excitation and frequency-locking by an external signal.
Dexter, Franklin; Ledolter, Johannes
2003-07-01
Surgeons using the same amount of operating room (OR) time differ in their achieved hospital contribution margins (revenue minus variable costs) by >1000%. Thus, to improve the financial return from perioperative facilities, OR strategic decisions should selectively focus additional OR capacity and capital purchasing on a few surgeons or subspecialties. These decisions use estimates of each surgeon's and/or subspecialty's contribution margin per OR hour. The estimates are subject to uncertainty (e.g., from outliers). We account for the uncertainties by using mean-variance portfolio analysis (i.e., quadratic programming). This method characterizes the problem of selectively expanding OR capacity based on the expected financial return and risk of different portfolios of surgeons. The assessment reveals whether the choices of which surgeons have their OR capacity expanded are sensitive to the uncertainties in the surgeons' contribution margins per OR hour. Thus, mean-variance analysis reduces the chance of making strategic decisions based on spurious information. We also assess the financial benefit of using mean-variance portfolio analysis when the planned expansion of OR capacity is well diversified over at least several surgeons or subspecialties. Our results show that, in such circumstances, there may be little benefit from further changing the portfolio to reduce its financial risk. Surgeon- and subspecialty-specific hospital financial data are uncertain, a fact that should be taken into account when making decisions about expanding operating room capacity. We show that mean-variance portfolio analysis can incorporate this uncertainty, thereby guiding operating room management decision-making and reducing the chance of a strategic decision being made based on spurious information.
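The mean-variance machinery invoked here reduces, in its simplest form, to a quadratic program whose minimum-variance solution is available in closed form, w = Σ⁻¹1 / (1ᵀΣ⁻¹1). The numbers below are illustrative only (not from the study), and the sketch omits the non-negativity and capacity constraints a real analysis would add:

```python
import numpy as np

# Hypothetical contribution margins per OR hour ($) for four surgeons, and the
# covariance of those estimates (uncertainty from outliers, case mix, etc.).
mu = np.array([1500.0, 1100.0, 900.0, 700.0])
cov = np.array([[9.0e4, 1.0e4, 0.0,   0.0],
                [1.0e4, 4.0e4, 5.0e3, 0.0],
                [0.0,   5.0e3, 2.5e4, 2.0e3],
                [0.0,   0.0,   2.0e3, 1.0e4]])

ones = np.ones(len(mu))
w_minvar = np.linalg.solve(cov, ones)   # unnormalized Sigma^-1 * 1
w_minvar /= ones @ w_minvar             # minimum-variance weights, summing to 1

exp_margin = w_minvar @ mu                    # expected margin per expanded hour
risk = np.sqrt(w_minvar @ cov @ w_minvar)     # standard deviation of that margin
print("weights:", w_minvar.round(3), "expected margin:", round(exp_margin),
      "risk:", round(risk))
```

Concentrating all expansion on the highest-margin surgeon maximizes expected return but at far greater variance; the trade-off between such portfolios is exactly what the frontier analysis in the abstract explores.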
Predictors of Start of Different Antidepressants in Patient Charts among Patients with Depression
Kim, Hyungjin Myra; Zivin, Kara; Choe, Hae Mi; Stano, Clare M.; Ganoczy, Dara; Walters, Heather; Valenstein, Marcia
2016-01-01
Background In usual psychiatric care, antidepressant treatments are selected based on physician and patient preferences rather than being randomly allocated, resulting in spurious associations between treatments and outcomes in observational studies. Objectives To identify factors recorded in electronic medical chart progress notes predictive of antidepressant selection among patients who had received a depression diagnosis. Methods This retrospective study sample consisted of 556 randomly selected Veterans Health Administration (VHA) patients diagnosed with depression from April 1, 1999 to September 30, 2004, stratified by antidepressant agent, geographic region, gender, and year of depression cohort entry. Predictors were obtained from administrative data, and additional variables were abstracted from electronic medical chart notes in the year prior to the start of the antidepressant in five categories: clinical symptoms and diagnoses, substance use, life stressors, behavioral/ideation measures (e.g., suicide attempts), and treatments received. Multinomial logistic regression analysis was used to assess the predictors associated with different antidepressant prescribing, and adjusted relative risk ratios (RRR) are reported. Results Of the administrative data-based variables, gender, age, illicit drug abuse or dependence, and number of psychiatric medications in the prior year were significantly associated with antidepressant selection. After adjusting for administrative data-based variables, sleep problems (RRR = 2.47) or marital issues (RRR = 2.64) identified in the charts were significantly associated with prescribing mirtazapine rather than sertraline; however, no other chart-based variables showed a significant association or an association of large magnitude. Conclusion Some chart data-based variables were predictive of antidepressant selection, but we neither found many nor found them highly predictive of antidepressant selection in patients treated for depression.
PMID:25943003
Battistoni, Andrea; Bencivenga, Filippo; Fioretto, Daniele; Masciovecchio, Claudio
2014-10-15
In this Letter, we present a simple method to avoid the well-known spurious contributions in the Brillouin light scattering (BLS) spectrum arising from the finite aperture of collection optics. The method relies on the use of special spatial filters able to select the scattered light with arbitrary precision around a given value of the momentum transfer (Q). We demonstrate the effectiveness of such filters by analyzing the BLS spectra of a reference sample as a function of scattering angle. This practical and inexpensive method could be an extremely useful tool to fully exploit the potentiality of Brillouin acoustic spectroscopy, as it will easily allow for effective Q-variable experiments with unparalleled luminosity and resolution.
Interplay of the Quality of Ciprofloxacin and Antibiotic Resistance in Developing Countries
Sharma, Deepali; Patel, Rahul P.; Zaidi, Syed Tabish R.; Sarker, Md. Moklesur Rahman; Lean, Qi Ying; Ming, Long C.
2017-01-01
Ciprofloxacin, a second-generation broad-spectrum fluoroquinolone, is active against both Gram-positive and Gram-negative bacteria. Ciprofloxacin has a high oral bioavailability and a large volume of distribution. It is used for the treatment of a wide range of infections, including urinary tract infections caused by susceptible bacteria. However, the availability and use of substandard and spurious oral ciprofloxacin formulations in developing countries are thought to have contributed toward increased risk of treatment failure and bacterial resistance. Therefore, quality control and bioequivalence studies of the commercially available oral ciprofloxacin formulations should be monitored. Appropriate actions should be taken against offending manufacturers in order to prevent the sale of substandard and spurious ciprofloxacin formulations. PMID:28871228
A Method for Constructing Informative Priors for Bayesian Modeling of Occupational Hygiene Data.
Quick, Harrison; Huynh, Tran; Ramachandran, Gurumurthy
2017-01-01
In many occupational hygiene settings, the demand for more accurate, more precise results is at odds with limited resources. To combat this, practitioners have begun using Bayesian methods to incorporate prior information into their statistical models in order to obtain more refined inference from their data. This is not without risk, however, as incorporating prior information that disagrees with the information contained in the data can lead to spurious conclusions, particularly if the prior is too informative. In this article, we propose a method for constructing informative prior distributions for normal and lognormal data that are intuitive to specify and robust to bias. To demonstrate the use of these priors, we walk practitioners through a step-by-step implementation using an illustrative example. We then conclude with recommendations for general use. © The Author 2017. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
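For lognormal exposure data, the trade-off between prior and data can be seen in the conjugate normal update for the log-scale mean. The sketch below is a simplification of the article's setting (the log-scale standard deviation is treated as known), and all numbers are invented:

```python
import numpy as np

def posterior_log_mean(x, prior_mean, prior_sd, sigma):
    """Conjugate posterior for the mean of log-exposures, assuming a known
    log-scale sd `sigma` and a normal prior on the mean. Returns posterior
    mean and sd; precisions (1/variance) simply add."""
    logs = np.log(x)
    n = len(logs)
    prec = 1.0 / prior_sd**2 + n / sigma**2
    mean = (prior_mean / prior_sd**2 + logs.sum() / sigma**2) / prec
    return mean, np.sqrt(1.0 / prec)

rng = np.random.default_rng(5)
x = rng.lognormal(mean=1.0, sigma=0.5, size=8)   # 8 simulated exposure samples

m_weak, _ = posterior_log_mean(x, prior_mean=0.0, prior_sd=10.0, sigma=0.5)
m_strong, _ = posterior_log_mean(x, prior_mean=-1.0, prior_sd=0.05, sigma=0.5)
print(f"weak prior -> {m_weak:.2f}; overconfident wrong prior -> {m_strong:.2f}")
```

With prior_sd=0.05 the prior precision (400) swamps the data precision (8/0.25 = 32), so the posterior sits near the prior mean of -1 even though the data are centered near 1: precisely the "too informative" risk the abstract warns about.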
NASA Astrophysics Data System (ADS)
Tan, Kuan Yen; Partanen, Matti; Lake, Russell E.; Govenius, Joonas; Masuda, Shumpei; Möttönen, Mikko
2017-05-01
Quantum technology promises revolutionary applications in information processing, communications, sensing and modelling. However, efficient on-demand cooling of the functional quantum degrees of freedom remains challenging in many solid-state implementations, such as superconducting circuits. Here we demonstrate direct cooling of a superconducting resonator mode using voltage-controllable electron tunnelling in a nanoscale refrigerator. This result is revealed by a decreased electron temperature at a resonator-coupled probe resistor, even for an elevated electron temperature at the refrigerator. Our conclusions are verified by control experiments and by good quantitative agreement between theory and experimental observations at various operation voltages and bath temperatures. In the future, we aim to remove spurious dissipation introduced by our refrigerator and to decrease the operational temperature. Such an ideal quantum-circuit refrigerator has potential applications in the initialization of quantum electric devices. In the superconducting quantum computer, for example, fast and accurate reset of the quantum memory is needed.
Re-evaluation of colorimetric Cl- data from natural waters with DOC
Norton, S.A.; Handlet, M.J.; Kahl, J.S.; Peters, N.E.
1996-01-01
Colorimetric Cl- data from natural solutions that contain dissolved organic carbon (DOC) may be biased high. We evaluated aquatic Cl- concentrations in ecosystem compartments at the Bear Brook Watershed, Maine, and from lakes in Maine, using ion chromatography and colorimetry. DOC imparts a positive interference on colorimetric Cl- results proportional to DOC concentrations at approximately 0.8 μeq Cl-/L per mg DOC/L. The interference is not a function of Cl- concentration. The resulting bias in concentrations of Cl- may be 50% or more of typical environmental values for Cl- in areas remote from atmospheric deposition of marine aerosols. Such biased data in the literature appear to have led to spurious conclusions about recycling of Cl- by forests, the usefulness of Cl- as a conservative tracer in watershed studies, and calculations of elemental budgets, ion balance, charge density of DOC, and dry deposition factors.
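Because the stated interference is linear in DOC and independent of Cl- concentration, a first-order correction is straightforward. A minimal sketch (function and variable names are mine; the 0.8 μeq Cl-/L per mg DOC/L slope is the value reported above):

```python
def correct_colorimetric_cl(cl_measured_ueq_l, doc_mg_l, slope=0.8):
    """Subtract the DOC-driven positive bias from a colorimetric Cl- reading.

    cl_measured_ueq_l : colorimetric Cl- result (ueq/L)
    doc_mg_l          : dissolved organic carbon (mg/L)
    slope             : interference, ~0.8 ueq Cl-/L per mg DOC/L (per the study)
    """
    return cl_measured_ueq_l - slope * doc_mg_l

# A dilute sample with 10 mg/L DOC that reads 20 ueq/L colorimetrically:
corrected = correct_colorimetric_cl(20.0, 10.0)
print(corrected)  # 12.0 -- the DOC bias was 40% of the raw reading
```

This illustrates the magnitude of the problem in low-Cl-, high-DOC waters: the bias can rival or exceed the true signal, consistent with the 50%-or-more figure in the abstract.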
The Funding of Long-Term Care in Canada: What Do We Know, What Should We Know?
Grignon, Michel; Spencer, Byron G
2018-06-01
Long-term care is a growing component of health care spending, but how much is spent and who bears the cost are uncertain, and the measures vary depending on the source used. We drew on regularly published series and ad hoc publications to compile preferred estimates of the share of long-term care spending in total health care spending, the private share of long-term care spending, and the share of residential care within long-term care. For each series, we compared estimates obtainable from published sources (CIHI [Canadian Institute for Health Information] and OECD [Organization for Economic Cooperation and Development]) with our preferred estimates. We conclude that using published series without adjustment would lead to spurious conclusions on the level and evolution of spending on long-term care in Canada, as well as on the distribution of costs between private and public funders and between residential and home care.
Numerical shockwave anomalies in presence of hydraulic jumps in the SWE with variable bed elevation.
NASA Astrophysics Data System (ADS)
Navas-Montilla, Adrian; Murillo, Javier
2017-04-01
When solving the shallow water equations, appropriate numerical solvers must allow energy-dissipative solutions in the presence of steady and unsteady hydraulic jumps. Hydraulic jumps are present in surface flows and may produce significant morphological changes. Unfortunately, it has been documented that some numerical anomalies may appear. These anomalies are the incorrect positioning of steady jumps and the presence of a spurious spike of discharge inside the cell containing the jump, produced by a non-linearity of the Hugoniot locus connecting the states at both sides of the jump. Therefore, this problem remains unresolved in the context of Godunov's schemes applied to shallow flows. This issue is usually ignored as it does not affect the solution in steady cases. However, it produces undesirable spurious oscillations in transient cases that can lead to misleading conclusions when moving to realistic scenarios. Using spike-reducing techniques based on the construction of interpolated fluxes, it is possible to define numerical methods, including discontinuous topography, that reduce the presence of the aforementioned numerical anomalies. References: T. W. Roberts, The behavior of flux difference splitting schemes near slowly moving shock waves, J. Comput. Phys. 90 (1990) 141-160. Y. Stiriba, R. Donat, A numerical study of postshock oscillations in slowly moving shock waves, Comput. Math. with Appl. 46 (2003) 719-739. E. Johnsen, S. K. Lele, Numerical errors generated in simulations of slowly moving shocks, Center for Turbulence Research, Annual Research Briefs (2008) 1-12. D. W. Zaide, P. L. Roe, Flux functions for reducing numerical shockwave anomalies, ICCFD7, Big Island, Hawaii (2012) 9-13. D. W. Zaide, Numerical Shockwave Anomalies, PhD thesis, Aerospace Engineering and Scientific Computing, University of Michigan, 2012. A. Navas-Montilla, J. Murillo, Energy balanced numerical schemes with very high order: the Augmented Roe Flux ADER scheme. Application to the shallow water equations, J. Comput. Phys. 290 (2015) 188-218. A. Navas-Montilla, J. Murillo, Asymptotically and exactly energy balanced augmented flux-ADER schemes with application to hyperbolic conservation laws with geometric source terms, J. Comput. Phys. 317 (2016) 108-147. J. Murillo, A. Navas-Montilla, A comprehensive explanation and exercise of the source terms in hyperbolic systems using Roe type solutions: application to the 1D-2D shallow water equations, Advances in Water Resources 98 (2016) 70-96.
Data-driven region-of-interest selection without inflating Type I error rate.
Brooks, Joseph L; Zoumpoulaki, Alexia; Bowman, Howard
2017-01-01
In ERP and other large multidimensional neuroscience data sets, researchers often select regions of interest (ROIs) for analysis. The method of ROI selection can critically affect the conclusions of a study by causing the researcher to miss effects in the data or to detect spurious effects. In practice, to avoid inflating Type I error rate (i.e., false positives), ROIs are often based on a priori hypotheses or independent information. However, this can be insensitive to experiment-specific variations in effect location (e.g., latency shifts) reducing power to detect effects. Data-driven ROI selection, in contrast, is nonindependent and uses the data under analysis to determine ROI positions. Therefore, it has potential to select ROIs based on experiment-specific information and increase power for detecting effects. However, data-driven methods have been criticized because they can substantially inflate Type I error rate. Here, we demonstrate, using simulations of simple ERP experiments, that data-driven ROI selection can indeed be more powerful than a priori hypotheses or independent information. Furthermore, we show that data-driven ROI selection using the aggregate grand average from trials (AGAT), despite being based on the data at hand, can be safely used for ROI selection under many circumstances. However, when there is a noise difference between conditions, using the AGAT can inflate Type I error and should be avoided. We identify critical assumptions for use of the AGAT and provide a basis for researchers to use, and reviewers to assess, data-driven methods of ROI localization in ERP and other studies. © 2016 Society for Psychophysiological Research.
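The key property of the AGAT, that pooling trials across conditions keeps ROI selection independent of the condition contrast under the null, can be checked with a small simulation. This is a simplified single-timepoint sketch, not the authors' procedure; the waveform and noise parameters are invented:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(7)
n_sims, n_trials, n_time = 500, 30, 100
erp = np.sin(np.linspace(0, np.pi, n_time))    # waveform shared by both conditions
false_pos = 0
for _ in range(n_sims):
    # Null simulation: conditions differ only in noise realizations, with EQUAL
    # noise levels (the assumption the abstract flags as critical for the AGAT).
    a = erp + rng.standard_normal((n_trials, n_time))
    b = erp + rng.standard_normal((n_trials, n_time))
    agat = np.vstack([a, b]).mean(axis=0)      # aggregate grand average (AGAT)
    roi = np.argmax(np.abs(agat))              # data-driven ROI from the AGAT
    p = ttest_ind(a[:, roi], b[:, roi]).pvalue
    false_pos += p < 0.05
rate = false_pos / n_sims
print(f"Type I error rate with AGAT-based ROI selection: {rate:.3f}")
```

Because the pooled average is blind to condition labels, selecting the ROI from it does not bias the subsequent between-condition test, and the false-positive rate stays near the nominal 5%; repeating the exercise with unequal noise between conditions is where, per the abstract, the AGAT breaks down.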
A skeleton family generator via physics-based deformable models.
Krinidis, Stelios; Chatzis, Vassilios
2009-01-01
This paper presents a novel approach for object skeleton family extraction. The introduced technique utilizes a 2-D physics-based deformable model that parameterizes the objects shape. Deformation equations are solved exploiting modal analysis, and proportional to model physical characteristics, a different skeleton is produced every time, generating, in this way, a family of skeletons. The theoretical properties and the experiments presented demonstrate that obtained skeletons match to hand-labeled skeletons provided by human subjects, even in the presence of significant noise and shape variations, cuts and tears, and have the same topology as the original skeletons. In particular, the proposed approach produces no spurious branches without the need of any known skeleton pruning method.
van Iterson, Maarten; van Zwet, Erik W; Heijmans, Bastiaan T
2017-01-27
We show that epigenome- and transcriptome-wide association studies (EWAS and TWAS) are prone to significant inflation and bias of test statistics, an unrecognized phenomenon introducing spurious findings if left unaddressed. Neither GWAS-based methodology nor state-of-the-art confounder adjustment methods completely remove bias and inflation. We propose a Bayesian method to control bias and inflation in EWAS and TWAS based on estimation of the empirical null distribution. Using simulations and real data, we demonstrate that our method maximizes power while properly controlling the false positive rate. We illustrate the utility of our method in large-scale EWAS and TWAS meta-analyses of age and smoking.
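A minimal related diagnostic, not the authors' Bayesian empirical-null method itself, is the classic genomic inflation factor lambda, which compares the median observed test statistic against its theoretical null median. It detects the variance inflation described above, though unlike the empirical-null approach it cannot capture bias (a shifted mean of the test statistics).

```python
import numpy as np

def inflation_lambda(z):
    """Genomic inflation factor: median observed chi-square divided by
    the median of the chi-square(1 df) null distribution (~0.4549).
    lambda ~ 1 means well-calibrated tests; lambda >> 1 signals inflation."""
    return np.median(z ** 2) / 0.4549

rng = np.random.default_rng(1)
z_null = rng.normal(0.0, 1.0, 100_000)   # well-calibrated test statistics
z_infl = rng.normal(0.0, 1.2, 100_000)   # variance-inflated statistics
lam_null = inflation_lambda(z_null)      # close to 1.0
lam_infl = inflation_lambda(z_infl)      # close to 1.44 (= 1.2**2)
```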
Rusterholz, Thomas; Achermann, Peter; Dürr, Roland; Koenig, Thomas; Tarokh, Leila
2017-06-01
Investigating functional connectivity between brain networks has become an area of interest in neuroscience. Several methods for investigating connectivity have recently been developed; however, these techniques need to be applied with care. We demonstrate that global field synchronization (GFS), a global measure of phase alignment in the EEG as a function of frequency, must be applied with signal processing principles in mind in order to yield valid results. Multichannel EEG (27 derivations) was analyzed for GFS based on the complex spectrum derived by the fast Fourier transform (FFT). We examined the effect of window functions on GFS, in particular of non-rectangular windows. Applying a rectangular window when calculating the FFT revealed high GFS values for high frequencies (>15 Hz) that were highly correlated (r=0.9) with spectral power in the lower frequency range (0.75-4.5 Hz) and tracked the depth of sleep. This turned out to be spurious synchronization. With a non-rectangular window (Tukey or Hanning window), this high-frequency synchronization vanished. Both GFS and power density spectra differed significantly between rectangular and non-rectangular windows. Previous papers using GFS typically did not specify the applied window and may have used a rectangular window function. However, the demonstrated impact of the window function raises questions about the validity of some previous findings at higher frequencies. We demonstrated that it is crucial to apply an appropriate window function when determining synchronization measures based on a spectral approach, in order to avoid spurious synchronization in the beta/gamma range. Copyright © 2017 Elsevier B.V. All rights reserved.
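The leakage mechanism behind this spurious synchronization is easy to reproduce: a strong low-frequency oscillation that does not fall exactly on an FFT bin leaks power into high-frequency bins under a rectangular window, while a Hanning window suppresses that leakage by orders of magnitude. The sampling rate, frequencies, and window length below are illustrative choices, not those of the study.

```python
import numpy as np

fs, n = 128.0, 256
t = np.arange(n) / fs
# Strong low-frequency oscillation off the FFT bin grid (2.3 Hz),
# mimicking high-amplitude slow waves during deep sleep.
x = np.sin(2 * np.pi * 2.3 * t)

freqs = np.fft.rfftfreq(n, 1 / fs)
spec_rect = np.abs(np.fft.rfft(x)) ** 2                  # rectangular window
spec_hann = np.abs(np.fft.rfft(x * np.hanning(n))) ** 2  # Hanning window

hi = freqs > 15.0  # beta/gamma range where the spurious GFS appeared
leak_rect = spec_rect[hi].sum()  # substantial leaked power
leak_hann = spec_hann[hi].sum()  # orders of magnitude smaller
```

Because the leaked high-frequency components are phase-locked to the slow wave on every channel, a phase-alignment measure such as GFS reads them as synchronization.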
Edge Probability and Pixel Relativity-Based Speckle Reducing Anisotropic Diffusion.
Mishra, Deepak; Chaudhury, Santanu; Sarkar, Mukul; Soin, Arvinder Singh; Sharma, Vivek
2018-02-01
Anisotropic diffusion filters are one of the best choices for speckle reduction in ultrasound images. These filters control the diffusion flux flow using local image statistics and provide the desired speckle suppression. However, inefficient use of edge characteristics results in either an oversmoothed image or an image containing misinterpreted spurious edges. As a result, the diagnostic quality of the images becomes a concern. To alleviate such problems, a novel anisotropic diffusion-based speckle reducing filter is proposed in this paper. A probability density function of the edges, along with pixel relativity information, is used to control the diffusion flux flow. The probability density function helps in removing the spurious edges, and the pixel relativity reduces the oversmoothing effects. Furthermore, the filtering is performed in the superpixel domain to reduce the execution time, wherein as few as 15% of the total number of image pixels can be used. For performance evaluation, 31 frames of three synthetic images and 40 real ultrasound images are used. In most of the experiments, the proposed filter shows better performance than the state-of-the-art filters in terms of the speckle region's signal-to-noise ratio and mean square error. It also shows comparable performance for figure of merit and structural similarity measure index. Furthermore, in the subjective evaluation performed by expert radiologists, the proposed filter's outputs are preferred for the improved contrast and sharpness of the object boundaries. Hence, the proposed filtering framework is suitable to reduce unwanted speckle and improve the quality of ultrasound images.
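To make the diffusion-flux idea concrete, here is the classic Perona-Malik scheme that the speckle-reducing family builds on. This is a baseline sketch, not the edge-probability/pixel-relativity filter of the paper: the diffusion coefficient shrinks toward zero where the local gradient is large, so flat speckle is smoothed while strong boundaries are preserved. All parameters below are illustrative.

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
    """Classic Perona-Malik anisotropic diffusion (baseline, not the
    paper's filter). g = exp(-(|grad|/kappa)^2) gates the flux: g ~ 1
    in flat regions (smoothing), g ~ 0 across strong edges (preserved).
    Periodic boundaries via np.roll, adequate for this sketch."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)
    for _ in range(n_iter):
        # Differences to the four neighbors.
        dn = np.roll(u, 1, 0) - u
        ds = np.roll(u, -1, 0) - u
        de = np.roll(u, 1, 1) - u
        dw = np.roll(u, -1, 1) - u
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

rng = np.random.default_rng(2)
clean = np.zeros((32, 32))
clean[:, 16:] = 1.0                            # a single vertical edge
noisy = clean + rng.normal(0, 0.05, clean.shape)
smooth = perona_malik(noisy)                   # noise down, edge intact
```

The paper's contribution can be read as replacing the simple `g(d)` gate with one driven by an edge probability density and pixel relativity, which is what removes spurious edges without oversmoothing.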
Nicodemus, Kristin K; Malley, James D; Strobl, Carolin; Ziegler, Andreas
2010-02-27
Random forests (RF) have been increasingly used in applications such as genome-wide association and microarray studies where predictor correlation is frequently observed. Recent work on permutation-based variable importance measures (VIMs) used in RF has come to apparently contradictory conclusions. We present an extended simulation study to synthesize results. When predictor correlation was present and predictors were associated with the outcome (HA), the unconditional RF VIM attributed a higher share of importance to correlated predictors, while under the null hypothesis that no predictors are associated with the outcome (H0) the unconditional RF VIM was unbiased. Conditional VIMs showed a decrease in VIM values for correlated predictors versus the unconditional VIMs under HA and were unbiased under H0. Scaled VIMs were clearly biased under both HA and H0. Unconditional unscaled VIMs are a computationally tractable choice for large datasets and are unbiased under the null hypothesis. Whether the observed increased VIMs for correlated predictors may be considered a "bias" - because they do not directly reflect the coefficients in the generating model - or a beneficial attribute of these VIMs depends on the application. For example, in genetic association studies, where correlation between markers may help to localize the functionally relevant variant, the increased importance of correlated predictors may be an advantage. On the other hand, we show examples where this increased importance may result in spurious signals.
NASA Astrophysics Data System (ADS)
Hurtado-Cardador, Manuel; Urrutia-Fucugauchi, Jaime
2006-12-01
Since 1947 Petroleos Mexicanos (Pemex) has conducted oil exploration projects using potential field methods. Geophysical exploration companies under contract with Pemex carried out gravity anomaly surveys that were referred to different floating datums. Each survey comprises observations of gravity stations along highways, roads and trails at intervals of about 500 m. At present, 265 separate gravimeter surveys that cover 60% of the Mexican territory (mainly in the oil producing regions of Mexico) are available. This gravity database represents the largest, highest-spatial-resolution gravity information available and has consequently been used in the geophysical data compilations for the Mexico and North America gravity anomaly maps. Regional integration of gravimeter surveys generates gradients and spurious anomalies in the Bouguer anomaly maps at the boundaries of the connected surveys due to the different gravity base stations utilized. The main objective of this study is to refer all gravimeter surveys from Pemex to a single new first-order gravity base station network, in order to eliminate these gradients and spurious anomalies. A second objective is to establish a network of permanent gravity base stations (BGP), referred to a single base from the World Gravity System. Four regional loops of BGP covering eight States of Mexico were established to support the tie of local gravity base stations from each of the gravimeter surveys located in the vicinity of these loops. The third objective is to add the gravity constants, measured and calculated, for each of the 265 gravimeter surveys to their corresponding files in the Pemex and Instituto Mexicano del Petroleo database. The gravity base used as the common datum is station SILAG 9135-49 (Latin American System of Gravity), located in the National Observatory of Tacubaya in Mexico City.
We present the results of the installation of a new gravity base network in northeastern Mexico, reference of the 43 gravimeter surveys to the new network, the regional compilation of Bouguer gravity data and a new updated Bouguer gravity anomaly map for northeastern Mexico.
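The re-referencing step itself reduces to a single additive shift per survey: once the offset between a survey's old floating base and the new first-order network is measured, adding that offset to every station removes the artificial steps at survey boundaries. Survey names, station values, and offsets below are hypothetical.

```python
def tie_to_common_datum(surveys, base_offsets):
    """Re-reference floating gravimeter surveys to one common base.

    surveys:      {name: [station gravity values in mGal, old datum]}
    base_offsets: {name: offset in mGal, g_base_new - g_base_old},
                  obtained by re-occupying each survey's base station
                  from the new first-order network.
    One additive shift per survey removes the spurious gradients at
    the boundaries between connected surveys."""
    return {name: [g + base_offsets[name] for g in vals]
            for name, vals in surveys.items()}

surveys = {"A": [978123.4, 978125.1], "B": [978110.0, 978111.7]}
offsets = {"A": -0.8, "B": +12.6}   # B's floating datum was 12.6 mGal low
tied = tie_to_common_datum(surveys, offsets)
```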
Dynamics of Numerics & Spurious Behaviors in CFD Computations. Revised
NASA Technical Reports Server (NTRS)
Yee, Helen C.; Sweby, Peter K.
1997-01-01
The global nonlinear behavior of finite discretizations for constant time steps and fixed or adaptive grid spacings is studied using tools from dynamical systems theory. Detailed analysis of commonly used temporal and spatial discretizations for simple model problems is presented. The role of dynamics in the understanding of long time behavior of numerical integration and the nonlinear stability, convergence, and reliability of using time-marching approaches for obtaining steady-state numerical solutions in computational fluid dynamics (CFD) is explored. The study is complemented with examples of spurious behavior observed in steady and unsteady CFD computations. The CFD examples were chosen to illustrate non-apparent spurious behavior that was difficult to detect without extensive grid and temporal refinement studies and some knowledge from dynamical systems theory. Studies revealed the various possible dangers of misinterpreting numerical simulation of realistic complex flows that are constrained by available computing power. In large scale computations where the physics of the problem under study is not well understood and numerical simulations are the only viable means of solution, extreme care must be taken in both computation and interpretation of the numerical data. The goal of this paper is to explore the important role that dynamical systems theory can play in the understanding of the global nonlinear behavior of numerical algorithms and to aid the identification of the sources of numerical uncertainties in CFD.
Smoothed-particle-hydrodynamics modeling of dissipation mechanisms in gravity waves.
Colagrossi, Andrea; Souto-Iglesias, Antonio; Antuono, Matteo; Marrone, Salvatore
2013-02-01
The smoothed-particle-hydrodynamics (SPH) method has been used to study the evolution of free-surface Newtonian viscous flows specifically focusing on dissipation mechanisms in gravity waves. The numerical results have been compared with an analytical solution of the linearized Navier-Stokes equations for Reynolds numbers in the range 50-5000. We found that a correct choice of the number of neighboring particles is of fundamental importance in order to obtain convergence towards the analytical solution. This number has to increase with higher Reynolds numbers in order to prevent the onset of spurious vorticity inside the bulk of the fluid, leading to an unphysical overdamping of the wave amplitude. This generation of spurious vorticity strongly depends on the specific kernel function used in the SPH model.
On the effect of using the Shapiro filter to smooth winds on a sphere
NASA Technical Reports Server (NTRS)
Takacs, L. L.; Balgovind, R. C.
1984-01-01
Spatial differencing schemes that are neither enstrophy conserving nor implicitly damping require global filtering of short waves to eliminate the build-up of energy in the shortest wavelengths due to aliasing. Takacs and Balgovind (1983) have shown that filtering on a sphere with a latitude dependent damping function will cause spurious vorticity and divergence source terms to occur if care is not taken to ensure the irrotationality of the gradients of the stream function and velocity potential. Using a shallow water model with fourth-order energy-conserving spatial differencing, it is found that using a 16th-order Shapiro (1979) filter on the winds and heights to control nonlinear instability also creates spurious source terms when the winds are filtered in the meridional direction.
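A one-dimensional sketch of the filter family in question may help: a Shapiro filter of order 2p removes the shortest resolvable (2Δx) wave exactly while leaving well-resolved long waves nearly untouched, which is why it is used to control aliasing. This sketch uses periodic boundaries and p=8, matching the 16th-order filter mentioned above; it does not reproduce the spherical-geometry issues that generate the spurious source terms.

```python
import numpy as np

def shapiro_filter(u, p=8):
    """Shapiro filter of order 2p: u_f = [1 - (-delta^2/4)^p] u,
    where delta^2 u_i = u_{i+1} - 2 u_i + u_{i-1} (periodic BCs).
    The 2*dx wave is an eigenvector with response exactly zero."""
    d = u.copy()
    for _ in range(p):
        d = -(np.roll(d, -1) - 2 * d + np.roll(d, 1)) / 4.0
    return u - d

x = np.arange(64)
two_dx = (-1.0) ** x                     # shortest resolvable (2*dx) wave
long_wave = np.sin(2 * np.pi * x / 64)   # well-resolved long wave
killed = shapiro_filter(two_dx)          # identically zero
kept = shapiro_filter(long_wave)         # essentially unchanged
```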
The Osher scheme for non-equilibrium reacting flows
NASA Technical Reports Server (NTRS)
Suresh, Ambady; Liou, Meng-Sing
1992-01-01
An extension of the Osher upwind scheme to nonequilibrium reacting flows is presented. Owing to the presence of source terms, the Riemann problem is no longer self-similar and therefore its approximate solution becomes tedious. With simplicity in mind, a linearized approach which avoids an iterative solution is used to define the intermediate states and sonic points. The source terms are treated explicitly. Numerical computations are presented to demonstrate the feasibility, efficiency and accuracy of the proposed method. The test problems include a ZND (Zeldovich-Neumann-Doring) detonation problem for which spurious numerical solutions which propagate at mesh speed have been observed on coarse grids. With the present method, a change of limiter causes the solution to change from the physically correct CJ detonation solution to the spurious weak detonation solution.
Summation rules for a fully nonlocal energy-based quasicontinuum method
NASA Astrophysics Data System (ADS)
Amelang, J. S.; Venturini, G. N.; Kochmann, D. M.
2015-09-01
The quasicontinuum (QC) method coarse-grains crystalline atomic ensembles in order to bridge the scales from individual atoms to the micro- and mesoscales. A crucial cornerstone of all QC techniques, summation or quadrature rules efficiently approximate the thermodynamic quantities of interest. Here, we investigate summation rules for a fully nonlocal, energy-based QC method to approximate the total Hamiltonian of a crystalline atomic ensemble by a weighted sum over a small subset of all atoms in the crystal lattice. Our formulation does not conceptually differentiate between atomistic and coarse-grained regions and thus allows for seamless bridging without domain-coupling interfaces. We review traditional summation rules and discuss their strengths and weaknesses with a focus on energy approximation errors and spurious force artifacts. Moreover, we introduce summation rules which produce no residual or spurious force artifacts in centrosymmetric crystals in the large-element limit under arbitrary affine deformations in two dimensions (and marginal force artifacts in three dimensions), while allowing us to seamlessly bridge to full atomistics. Through a comprehensive suite of examples with spatially non-uniform QC discretizations in two and three dimensions, we compare the accuracy of the new scheme to various previous ones. Our results confirm that the new summation rules exhibit significantly smaller force artifacts and energy approximation errors. Our numerical benchmark examples include the calculation of elastic constants from completely random QC meshes and the inhomogeneous deformation of aggressively coarse-grained crystals containing nano-voids. In the elastic regime, we directly compare QC results to those of full atomistics to assess global and local errors in complex QC simulations. 
Going beyond elasticity, we illustrate the performance of the energy-based QC method with the new second-order summation rule by the help of nanoindentation examples with automatic mesh adaptation. Overall, our findings provide guidelines for the selection of summation rules for the fully nonlocal energy-based QC method.
Aland, Sebastian; Lowengrub, John; Voigt, Axel
2013-01-01
Colloid particles that are partially wetted by two immiscible fluids can become confined to fluid-fluid interfaces. At sufficiently high volume fractions, the colloids may jam and the interface may crystallize. The fluids together with the interfacial colloids form an emulsion with interesting material properties and offer an important route to new soft materials. A promising approach to simulate these emulsions was presented in Aland et al. [Phys. Fluids 23, 062103 (2011)], where a Navier-Stokes-Cahn-Hilliard model for the macroscopic two-phase fluid system was combined with a surface phase-field-crystal model for the microscopic colloidal particles along the interface. Unfortunately this model leads to spurious velocities which require very fine spatial and temporal resolutions to accurately and stably simulate. In this paper we develop an improved Navier-Stokes-Cahn-Hilliard-surface phase-field-crystal model based on the principles of mass conservation and thermodynamic consistency. To validate our approach, we derive a sharp interface model and show agreement with the improved diffuse interface model. Using simple flow configurations, we show that the new model has much better properties and does not lead to spurious velocities. Finally, we demonstrate the solid-like behavior of the crystallized interface by simulating the fall of a solid ball through a colloid-laden multiphase fluid. PMID:23214691
Zhang, Shuzeng; Li, Xiongbing; Jeong, Hyunjo; Hu, Hongwei
2018-05-12
Angle beam wedge transducers are widely used in nonlinear Rayleigh wave experiments as they can generate Rayleigh waves easily and produce high-intensity nonlinear waves for detection. When such a transducer is used, spurious harmonics (source nonlinearity) and wave diffraction may occur and will affect the measurement results, so it is essential to fully understand its acoustic nature. This paper experimentally investigates the nonlinear Rayleigh wave beam fields generated and received by angle beam wedge transducers, in which the theoretical predictions are based on the acoustic model developed previously for angle beam wedge transducers [S. Zhang, et al., Wave Motion, 67, 141-159, (2016)]. The source of the spurious harmonics is fully characterized by scrutinizing the nonlinear Rayleigh wave behavior in various materials with different driving voltages. Furthermore, it is shown that the attenuation coefficients for both fundamental and second harmonic Rayleigh waves can be extracted by comparing the measurements with the predictions when the experiments are conducted at many locations along the propagation path. A technique is developed to evaluate the material nonlinearity by making appropriate corrections for source nonlinearity, diffraction and attenuation. The nonlinear parameters of three aluminum alloy specimens - Al 2024, Al 6061 and Al 7075 - are measured, and the results indicate that the measurements can be significantly improved using the proposed method. Copyright © 2018. Published by Elsevier B.V.
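In such experiments the quantity extracted from the measured harmonic amplitudes is often a relative nonlinearity parameter of the form A2/(A1^2 x): the second-harmonic amplitude normalized by the squared fundamental and the propagation distance. The sketch below shows only this normalization step; the amplitudes and distances are hypothetical, and the full method additionally corrects for source nonlinearity, diffraction, and attenuation before this ratio is meaningful.

```python
def relative_beta(a1, a2, x):
    """Relative acoustic nonlinearity parameter, beta' = A2 / (A1^2 * x).
    Used comparatively between specimens measured with the same setup;
    the absolute beta requires additional wavenumber factors."""
    return a2 / (a1 ** 2 * x)

# Hypothetical corrected amplitudes (arbitrary units) at x = 40 mm:
beta_specimen_1 = relative_beta(a1=1.00, a2=0.0080, x=40.0)
beta_specimen_2 = relative_beta(a1=1.00, a2=0.0052, x=40.0)
# A larger beta' indicates a more nonlinear (e.g. more damaged) specimen.
```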
Pelletier, Mathew G.; Viera, Joseph A.; Wanjura, John; Holt, Greg
2010-01-01
The use of microwave imaging is becoming more prevalent for detection of interior hidden defects in manufactured and packaged materials. In applications for detection of hidden moisture, microwave tomography can be used to image the material and then perform an inverse calculation to derive an estimate of the variability of the hidden material, such as internal moisture, thereby alerting personnel to damaging levels of moisture before material degradation occurs. One impediment to this type of imaging occurs when nearby objects create strong reflections that produce destructive and constructive interference at the receiver as the material is conveyed past the imaging antenna array. In an effort to remove the influence of reflectors such as metal bale ties, research was conducted to develop an algorithm for removal of the influence of local proximity reflectors from the microwave images. This research effort produced a technique, based upon the use of ultra-wideband signals, for the removal of spurious reflections created by local proximity reflectors. This improvement enables accurate microwave measurements of moisture in such products as cotton bales, as well as of other physical properties such as density or material composition. The proposed algorithm was shown to reduce errors by a 4:1 ratio and is an enabling technology for imaging applications in the presence of metal bale ties. PMID:22163668
Lilienfeld, Scott O; Ritschel, Lorie A; Lynn, Steven Jay; Cautin, Robin L; Latzman, Robert D
2014-07-01
The past 40 years have generated numerous insights regarding errors in human reasoning. Arguably, clinical practice is the domain of applied psychology in which acknowledging and mitigating these errors is most crucial. We address one such set of errors here, namely, the tendency of some psychologists and other mental health professionals to assume that they can rely on informal clinical observations to infer whether treatments are effective. We delineate four broad, underlying cognitive impediments to accurately evaluating improvement in psychotherapy-naive realism, confirmation bias, illusory causation, and the illusion of control. We then describe 26 causes of spurious therapeutic effectiveness (CSTEs), organized into a taxonomy of three overarching categories: (a) the perception of client change in its actual absence, (b) misinterpretations of actual client change stemming from extratherapeutic factors, and (c) misinterpretations of actual client change stemming from nonspecific treatment factors. These inferential errors can lead clinicians, clients, and researchers to misperceive useless or even harmful psychotherapies as effective. We (a) examine how methodological safeguards help to control for different CSTEs, (b) delineate fruitful directions for research on CSTEs, and (c) consider the implications of CSTEs for everyday clinical practice. An enhanced appreciation of the inferential problems posed by CSTEs may narrow the science-practice gap and foster a heightened appreciation of the need for the methodological safeguards afforded by evidence-based practice. © The Author(s) 2014.
NASA Astrophysics Data System (ADS)
Nangia, Nishant; Patankar, Neelesh A.; Bhalla, Amneet P. S.
2017-11-01
Fictitious domain methods for simulating fluid-structure interaction (FSI) have been gaining popularity in the past few decades because of their robustness in handling arbitrarily moving bodies. Often the transient net hydrodynamic forces and torques on the body are desired quantities for these types of simulations. In past studies using immersed boundary (IB) methods, force measurements are contaminated with spurious oscillations due to evaluation of possibly discontinuous spatial velocity or pressure gradients within or on the surface of the body. Based on an application of the Reynolds transport theorem, we present a moving control volume (CV) approach to computing the net forces and torques on a moving body immersed in a fluid. The approach is shown to be accurate for a wide array of FSI problems, including flow past stationary and moving objects, Stokes flow, and high Reynolds number free-swimming. The approach only requires far-field (smooth) velocity and pressure information, thereby suppressing spurious force oscillations and eliminating the need for any filtering. The proposed moving CV method is not limited to a specific IB method and is straightforward to implement within an existing parallel FSI simulation software. This work is supported by NSF (Award Numbers SI2-SSI-1450374, SI2-SSI-1450327, and DGE-1324585), the US Department of Energy, Office of Science, ASCR (Award Number DE-AC02-05CH11231), and NIH (Award Number HL117163).
Tropf, Felix C; Mandemakers, Jornt J
2017-02-01
A large body of literature has demonstrated a positive relationship between education and age at first birth. However, this relationship may be partly spurious because of family background factors that cannot be controlled for in most research designs. We investigate the extent to which education is causally related to later age at first birth in a large sample of female twins from the United Kingdom (N = 2,752). We present novel estimates using within-identical twin and biometric models. Our findings show that one year of additional schooling is associated with about one-half year later age at first birth in ordinary least squares (OLS) models. This estimate reduced to only a 1.5-month later age at first birth for the within-identical twin model controlling for all shared family background factors (genetic and family environmental). Biometric analyses reveal that it is mainly influences of the family environment-not genetic factors-that cause spurious associations between education and age at first birth. Last, using data from the Office for National Statistics, we demonstrate that only 1.9 months of the 2.74 years of fertility postponement for birth cohorts 1944-1967 could be attributed to educational expansion based on these estimates. We conclude that the rise in educational attainment alone cannot explain differences in fertility timing between cohorts.
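The within-identical-twin design reduces to a simple estimator: regress the twin-pair difference in age at first birth on the pair difference in schooling, so that everything shared within a pair (genes and rearing environment) differences out. The synthetic data below are illustrative, with the true within-pair effect set to roughly the 1.5 months per school year reported above; the confounding structure is an assumption for the demonstration.

```python
import numpy as np

def within_twin_effect(edu, afb):
    """Within-pair fixed-effects estimate via differencing.
    edu, afb: arrays of shape (pairs, 2), one column per twin.
    OLS through the origin on pair differences: shared family
    background cancels in the subtraction."""
    d_edu = edu[:, 0] - edu[:, 1]
    d_afb = afb[:, 0] - afb[:, 1]
    return (d_edu * d_afb).sum() / (d_edu ** 2).sum()

rng = np.random.default_rng(3)
n = 1000
family = rng.normal(0, 2.0, (n, 1))        # shared confounder per pair
edu = rng.normal(12, 2, (n, 2)) + family   # schooling, confounded
# True within-pair effect: 0.125 yr/yr (~1.5 months per school year);
# the 0.8*family term would bias a naive pooled OLS upward.
afb = 20 + 0.125 * edu + 0.8 * family + rng.normal(0, 1, (n, 2))
slope = within_twin_effect(edu, afb)       # recovers ~0.125, not the
                                           # inflated pooled estimate
```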
Thermal and dynamic range characterization of a photonics-based RF amplifier
NASA Astrophysics Data System (ADS)
Noque, D. F.; Borges, R. M.; Muniz, A. L. M.; Bogoni, A.; Cerqueira S., Arismar, Jr.
2018-05-01
This work reports a thermal and dynamic range characterization of an ultra-wideband photonics-based RF amplifier for microwave and mm-wave bands of future 5G optical-wireless networks. The proposed technology applies the four-wave mixing nonlinear effect to provide RF amplification in analog and digital radio-over-fiber systems. The experimental analysis from 300 kHz to 50 GHz takes into account different figures of merit, such as RF gain, spurious-free dynamic range and RF output power stability as a function of temperature. The thermal characterization from -10 to +70 °C demonstrates a 27 dB flat photonics-assisted RF gain over the entire frequency range under real operational conditions of a base station, illustrating the feasibility of the photonics-assisted RF amplifier for 5G networks.
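For readers unfamiliar with the figure of merit, spurious-free dynamic range (SFDR) arithmetic is simple: measured on a spectrum analyzer it is the gap between the carrier and the strongest spur, and for a third-order-limited system it follows the two-thirds rule from the input intercept point and noise floor. All dBm values below are hypothetical, not the paper's measurements.

```python
def sfdr_db(fundamental_dbm, largest_spur_dbm):
    """SFDR as read off a spectrum analyzer: carrier power minus the
    power of the strongest spur or intermodulation product."""
    return fundamental_dbm - largest_spur_dbm

def sfdr_from_ip3(iip3_dbm, noise_floor_dbm):
    """Two-thirds rule for third-order-limited SFDR. Relevant here
    because a four-wave-mixing amplifier's third-order products are
    typically what set this limit."""
    return (2.0 / 3.0) * (iip3_dbm - noise_floor_dbm)

gap = sfdr_db(fundamental_dbm=0.0, largest_spur_dbm=-55.0)   # 55 dB
dyn = sfdr_from_ip3(iip3_dbm=20.0, noise_floor_dbm=-130.0)   # 100 dB
```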
Performance of the SIR-B digital image processing subsystem
NASA Technical Reports Server (NTRS)
Curlander, J. C.
1986-01-01
A ground-based system to generate digital SAR image products has been developed and implemented in support of the SIR-B mission. This system is designed to achieve maximum throughput while meeting strict image fidelity criteria. Its capabilities include: automated radiometric and geometric correction of the output imagery; high-precision absolute location without tiepoint registration; filtering of the raw data to remove spurious signals from alien radars; and automated cataloging to maintain a full set of radar and image records. The image production facility routinely produces over 80 image frames per week in support of the SIR-B science investigators.
Heiens, R A; Pleshko, L P
1997-01-01
The present article applies the customer loyalty classification framework developed by Dick and Basu (1994) to the health care industry. Based on a two factor classification, consisting of repeat patronage and relative attitude, four categories of patient loyalty are proposed and examined, including true loyalty, latent loyalty, spurious loyalty, and no loyalty. Data is collected and the four patient loyalty categories are profiled and compared on the basis of perceived risk, product class importance, provider decision importance, provider awareness, provider consideration, number of providers visited, and self-reported loyalty.
Investigation of electrical noise in selenium-immersed thermistor bolometers
NASA Technical Reports Server (NTRS)
Tarpley, J. L.; Sarmiento, P. D.
1980-01-01
The selenium-immersed thermistor bolometer IR detector failed due to spurious and escalating electrical noise outbursts over time at elevated temperatures during routine ground-based testing in a space-simulated environment. Spectrographic analysis of failed bolometers revealed selenium-pure zones in the insulating selenium-arsenic (Se-As) glass film which surrounds the active sintered Mn, Ni, Co oxide flake. The selenium-pure film was identified as a potentially serious failure mechanism. Significant changes were instituted in the manufacturing techniques, along with more stringent process controls, which eliminated the selenium-pure film and successfully produced 22 study bolometers.
NASA Technical Reports Server (NTRS)
Wernet, Mark P.
1995-01-01
Particle Image Velocimetry provides a means of measuring the instantaneous 2-component velocity field across a planar region of a seeded flowfield. In this work, only two-camera, single-exposure images are considered, where both cameras have the same view of the illumination plane. Two competing techniques which yield unambiguous velocity vector direction information have been widely used for reducing the single-exposure, multiple-image data: cross-correlation and particle tracking. Correlation techniques yield averaged velocity estimates over subregions of the flow, whereas particle tracking techniques give individual particle velocity estimates. The correlation technique requires identification of the correlation peak on the correlation plane corresponding to the average displacement of particles across the subregion. Noise on the images and particle dropout contribute to spurious peaks on the correlation plane, leading to misidentification of the true correlation peak. The subsequent velocity vector maps contain spurious vectors where the displacement peaks have been improperly identified. Typically these spurious vectors are replaced by a weighted average of the neighboring vectors, thereby decreasing the independence of the measurements. In this work, fuzzy logic techniques are used to determine the true correlation displacement peak even when it is not the maximum peak on the correlation plane, hence maximizing the information recovery from the correlation operation, maintaining the number of independent measurements, and minimizing the number of spurious velocity vectors. Correlation peaks are correctly identified in both high and low seed density cases. The correlation velocity vector map can then be used as a guide for the particle tracking operation. Again fuzzy logic techniques are used, this time to identify the correct particle image pairings between exposures to determine particle displacements, and thus velocity.
The advantage of this technique is the improved spatial resolution which is available from the particle tracking operation. Particle tracking alone may not be possible in the high seed density images typically required for achieving good results from the correlation technique. This two staged approach offers a velocimetric technique capable of measuring particle velocities with high spatial resolution over a broad range of seeding densities.
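The correlation stage described above can be sketched with an FFT-based cross-correlation of two interrogation windows: the peak of the correlation plane gives the average particle displacement. In this noise-free sketch the global maximum is the true peak; the paper's fuzzy-logic machinery addresses exactly the realistic case where it is not.

```python
import numpy as np

def correlation_displacement(img_a, img_b):
    """Average displacement between two single-exposure interrogation
    windows, from the peak of their FFT-based cross-correlation plane.
    Taking the global maximum is the step where image noise and
    particle dropout can select a spurious peak instead."""
    a = img_a - img_a.mean()
    b = img_b - img_b.mean()
    corr = np.fft.irfft2(np.conj(np.fft.rfft2(a)) * np.fft.rfft2(b),
                         s=a.shape)
    iy, ix = np.unravel_index(np.argmax(corr), corr.shape)
    ny, nx = a.shape
    # Map wrap-around indices to signed shifts.
    dy = iy - ny if iy > ny // 2 else iy
    dx = ix - nx if ix > nx // 2 else ix
    return dy, dx

rng = np.random.default_rng(4)
frame_a = rng.random((64, 64))                    # stand-in particle image
frame_b = np.roll(frame_a, (3, -2), axis=(0, 1))  # known shift (+3, -2)
dy, dx = correlation_displacement(frame_a, frame_b)
```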
Hyper-Ramsey spectroscopy of optical clock transitions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yudin, V. I.; Taichenachev, A. V.; Oates, C. W.
2010-07-15
We present nonstandard optical Ramsey schemes that use pulses individually tailored in duration, phase, and frequency to cancel spurious frequency shifts related to the excitation itself. In particular, the field shifts and their uncertainties can be radically suppressed (by two to four orders of magnitude) in comparison with the usual Ramsey method (using two equal pulses) as well as with single-pulse Rabi spectroscopy. Atom interferometers and optical clocks based on two-photon transitions, heavily forbidden transitions, or magnetically induced spectroscopy could significantly benefit from this method. In the latter case, these frequency shifts can be suppressed considerably below a fractional level of 10^-17. Moreover, our approach opens the door for high-precision optical clocks based on direct frequency comb spectroscopy.
NASA Technical Reports Server (NTRS)
Xue, W.-M.; Atluri, S. N.
1985-01-01
In this paper, all possible forms of mixed-hybrid finite element methods that are based on multi-field variational principles are examined as to the conditions for existence, stability, and uniqueness of their solutions. The reasons as to why certain 'simplified hybrid-mixed methods' in general, and the so-called 'simplified hybrid-displacement method' in particular (based on the so-called simplified variational principles), become unstable, are discussed. A comprehensive discussion of the 'discrete' BB-conditions, and the rank conditions, of the matrices arising in mixed-hybrid methods, is given. Some recent studies aimed at the assurance of such rank conditions, and the related problem of the avoidance of spurious kinematic modes, are presented.
Assessing Low-Intensity Relationships in Complex Networks
Spitz, Andreas; Gimmler, Anna; Stoeck, Thorsten; Zweig, Katharina Anna; Horvát, Emőke-Ágnes
2016-01-01
Many large network data sets are noisy and contain links representing low-intensity relationships that are difficult to differentiate from random interactions. This is especially relevant for high-throughput data from systems biology and large-scale ecological data, but also for Web 2.0 data on human interactions. In these networks with missing and spurious links, it is possible to refine the data based on the principle of structural similarity, which assesses the shared neighborhood of two nodes. By using similarity measures to globally rank all possible links and choosing the top-ranked pairs, true links can be validated, missing links inferred, and spurious observations removed. While many similarity measures have been proposed to this end, there is no general consensus on which one to use. In this article, we first contribute a set of benchmarks for complex networks from three different settings (e-commerce, systems biology, and social networks) and thus enable a quantitative performance analysis of classic node similarity measures. Based on this, we then propose a new methodology for link assessment called z* that assesses the statistical significance of the number of common neighbors of a node pair by comparison with the expected value in a suitably chosen random graph model, and which is a consistently top-performing algorithm for all benchmarks. In addition to a global ranking of links, we also use this method to identify the most similar neighbors of each single node in a local ranking, thereby showing the versatility of the method in two distinct scenarios and augmenting its applicability. Finally, we perform an exploratory analysis on an oceanographic plankton data set and find that the distribution of microbes follows similar biogeographic rules as those of macroorganisms, a result that rejects the global dispersal hypothesis for microbes. PMID:27096435
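The core idea of significance-based link assessment can be sketched in a few lines. The abstract's z* statistic uses a suitably chosen random graph model; the sketch below substitutes a much simpler Erdős-Rényi null with matched edge density (an assumption for illustration, not the paper's model), and the toy graph is invented.

```python
import math

def common_neighbor_zscore(adj, u, v):
    """Z-score of the observed number of common neighbors of u and v
    against an Erdos-Renyi null model with the same edge density -- a
    simplified stand-in for the z* statistic described in the abstract."""
    n = len(adj)
    m = sum(len(nbrs) for nbrs in adj.values()) / 2
    p = 2 * m / (n * (n - 1))            # edge probability of the null model
    observed = len(adj[u] & adj[v])
    mean = (n - 2) * p * p               # a third node links to both w.p. p^2
    var = (n - 2) * p * p * (1 - p * p)
    return (observed - mean) / math.sqrt(var)

# Toy graph: a tight cluster {0,1,2,3} plus two loosely attached nodes.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (3, 4), (4, 5)]
adj = {i: set() for i in range(6)}
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

z_in = common_neighbor_zscore(adj, 0, 1)    # intra-cluster pair
z_out = common_neighbor_zscore(adj, 4, 5)   # bridge pair, no shared neighbors
print(z_in > z_out)  # True: the cluster link is better supported
```

Ranking all candidate pairs by such a score, rather than by the raw common-neighbor count, is what allows low-intensity but significant links to be separated from random interactions.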
Corrected simulations for one-dimensional diffusion processes with naturally occurring boundaries.
Shafiey, Hassan; Gan, Xinjun; Waxman, David
2017-11-01
To simulate a diffusion process, a usual approach is to discretize the time in the associated stochastic differential equation. This is the approach used in the Euler method. In the present work we consider a one-dimensional diffusion process where the terms occurring, within the stochastic differential equation, prevent the process entering a region. The outcome is a naturally occurring boundary (which may be absorbing or reflecting). A complication occurs in a simulation of this situation. The term involving a random variable, within the discretized stochastic differential equation, may take a trajectory across the boundary into a "forbidden region." The naive way of dealing with this problem, which we refer to as the "standard" approach, is simply to reset the trajectory to the boundary, based on the argument that crossing the boundary actually signifies achieving the boundary. In this work we show, within the framework of the Euler method, that such resetting introduces a spurious force into the original diffusion process. This force may have a significant influence on trajectories that come close to a boundary. We propose a corrected numerical scheme, for simulating one-dimensional diffusion processes with naturally occurring boundaries. This involves correcting the standard approach, so that an exact property of the diffusion process is precisely respected. As a consequence, the proposed scheme does not introduce a spurious force into the dynamics. We present numerical test cases, based on exactly soluble one-dimensional problems with one or two boundaries, which suggest that, for a given value of the discrete time step, the proposed scheme leads to substantially more accurate results than the standard approach. Alternatively, the standard approach needs considerably more computation time to obtain a comparable level of accuracy to the proposed scheme, because the standard approach requires a significantly smaller time step.
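The "standard" resetting approach criticized in the abstract is easy to state in code. The sketch below is a minimal illustration of that naive scheme, not the authors' corrected one; the drift and noise functions (a CIR-like square-root diffusion with a natural boundary at zero) are assumptions chosen for the example.

```python
import numpy as np

def euler_with_reset(x0, drift, noise, dt, steps, lower=0.0, rng=None):
    """Euler discretization of dX = a(X) dt + b(X) dW with the 'standard'
    fix for a natural boundary at `lower`: any step that lands in the
    forbidden region is simply reset to the boundary. (This is the naive
    approach the abstract shows introduces a spurious force.)"""
    if rng is None:
        rng = np.random.default_rng(0)
    x = x0
    path = [x]
    for _ in range(steps):
        dw = rng.normal(0.0, np.sqrt(dt))
        x = x + drift(x) * dt + noise(x) * dw
        if x < lower:          # trajectory crossed into the forbidden region
            x = lower          # reset to the boundary
        path.append(x)
    return np.array(path)

# Example: a square-root diffusion, whose noise amplitude vanishes at zero,
# so zero is a naturally occurring boundary.
drift = lambda x: 0.5 * (1.0 - x)              # mean reversion toward 1
noise = lambda x: 0.8 * np.sqrt(max(x, 0.0))
path = euler_with_reset(x0=0.05, drift=drift, noise=noise, dt=0.01, steps=2000)
print(path.min() >= 0.0)  # True: resetting keeps the path non-negative
```

The resets keep trajectories in the allowed region, but each reset adds a small outward displacement that a genuine sample path would not have; accumulated over many near-boundary steps, this is the spurious force the corrected scheme removes.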
NASA Astrophysics Data System (ADS)
Kohjiro, Satoshi; Hirayama, Fuminori
2018-07-01
A novel approach, frequency-domain cascading microwave multiplexers (MW-Mux), has been proposed and its basic operation has been demonstrated to increase the number of pixels, U, multiplexed in a readout line of MW-Mux for superconducting detector arrays. This method is an alternative to the challenging development of wideband, large-power, and spurious-free room-temperature (300 K) electronics. The readout system for U pixels consists of four main parts: (1) multiplexer chips connected in series that contain U superconducting resonators in total; (2) a cryogenic high-electron-mobility transistor amplifier (HEMT); (3) a 300 K microwave frequency comb generator based on N (≡ U/M) parallel units of digital-to-analog converters (DAC); and (4) N parallel units of 300 K analog-to-digital converters (ADC). Here, M is the number of tones each DAC produces and each ADC handles. The output signal of U detectors multiplexed at the cryogenic stage is transmitted through a cable to room temperature and divided into N processors, each of which handles M pixels. Due to the reduction factor of 1/N, U is no longer dominated by the 300 K electronics but can be increased up to the potential value determined by either the bandwidth or the spurious-free power of the HEMT. Based on experimental results on the prototype system with N = 2 and M = 3, neither excess inter-pixel crosstalk nor excess noise has been observed in comparison with a conventional MW-Mux. This indicates that the frequency-domain cascading MW-Mux provides full (100%) usage of the HEMT band by assigning N 300 K bands on the frequency axis without inter-band gaps.
Problems and programming for analysis of IUE high resolution data for variability
NASA Technical Reports Server (NTRS)
Grady, C. A.
1981-01-01
Observations of variability in stellar winds provide an important probe of their dynamics. It is crucial, however, to know that any variability seen in a data set can be clearly attributed to the star and not to instrumental or data processing effects. In the course of analysis of IUE high resolution data of alpha Cam and other O, B, and Wolf-Rayet stars, several effects were found which cause spurious variability or spurious spectral features in our data. Programming was developed to partially compensate for these effects using the Interactive Data Language (IDL) on the LASP PDP 11/34. Use of an interactive language such as IDL is particularly suited to analysis of variability data, as it permits use of efficient programs coupled with the judgement of the scientist at each stage of processing.
NASA Astrophysics Data System (ADS)
Katori, Makoto
1988-12-01
A new scheme of the coherent-anomaly method (CAM) is proposed to study critical phenomena in models for which a mean-field description gives a spurious first-order phase transition. A canonical series of mean-field-type approximations is constructed so that the spurious discontinuity vanishes asymptotically as the approximate critical temperature approaches the true value. The true values of the critical exponents β and γ are related to the coherent-anomaly exponents defined among the classical approximations. The formulation is demonstrated in the two-dimensional q-state Potts models for q = 3 and 4. The result shows that the present method enables us to estimate the critical exponents with high accuracy using the data of the cluster-mean-field approximations.
Radio Frequency Compatibility of an RFID Tag on Glideslope Navigation Receivers
NASA Technical Reports Server (NTRS)
Nguyen, Truong X.; Mielnik, John J.
2008-01-01
A process is demonstrated to show compatibility between a radio frequency identification (RFID) tag and an aircraft glideslope (GS) radio receiver. The particular tag chosen was previously shown to have significant peak spurious emission levels that far exceeded the emission limits in the GS aeronautical band. The spurious emissions are emulated in the study by capturing the RFID fundamental transmission and playing back the signal in the GS band. The signal capturing and playback are achieved with a vector signal generator and a spectrum analyzer that can output the in-phase and quadrature (IQ) components. The simulated interference signal is combined with a desired GS signal before being injected into a GS receiver's antenna port for interference threshold determination. Minimum desired propagation loss values to avoid interference are then computed and compared against actual propagation losses for several aircraft.
Portable Wireless Device Threat Assessment for Aircraft Navigation Radios
NASA Technical Reports Server (NTRS)
Nguyen, Truong X.; Koppen, Sandra V.; Ely, Jay J.; Williams, Reuben A.; Smith, Laura J.; Salud, Maria Theresa P.
2004-01-01
This paper addresses the concern that Wireless Local Area Network (WLAN) devices and two-way radios may cause electromagnetic interference to aircraft navigation radio systems. Spurious radiated emissions from various IEEE 802.11a, 802.11b, and Bluetooth devices are characterized using reverberation chambers. The results are compared with baseline emissions from standard laptop computers and personal digital assistants (PDAs) that are currently allowed for use on aircraft. The results indicate that the WLAN devices tested are not more of a threat to aircraft navigation radios than standard laptop computers and PDAs in most aircraft bands. In addition, spurious radiated emission data from seven pairs of two-way radios are provided. These two-way radios emit at much higher levels in the bands considered. A description of the measurement process, device modes of operation, and the measurement results are reported.
The rate of cis-trans conformation errors is increasing in low-resolution crystal structures.
Croll, Tristan Ian
2015-03-01
Cis-peptide bonds (with the exception of X-Pro) are exceedingly rare in native protein structures, yet a check for these is not currently included in the standard workflow for some common crystallography packages nor in the automated quality checks that are applied during submission to the Protein Data Bank. This appears to be leading to a growing rate of inclusion of spurious cis-peptide bonds in low-resolution structures both in absolute terms and as a fraction of solved residues. Most concerningly, it is possible for structures to contain very large numbers (>1%) of spurious cis-peptide bonds while still achieving excellent quality reports from MolProbity, leading to concerns that ignoring such errors is allowing software to overfit maps without producing telltale errors in, for example, the Ramachandran plot.
Martingales, nonstationary increments, and the efficient market hypothesis
NASA Astrophysics Data System (ADS)
McCauley, Joseph L.; Bassler, Kevin E.; Gunaratne, Gemunu H.
2008-06-01
We discuss the deep connection between nonstationary increments, martingales, and the efficient market hypothesis for stochastic processes x(t) with arbitrary diffusion coefficients D(x,t). We explain why a test for a martingale is generally a test for uncorrelated increments. We explain why martingales look Markovian at the level of both simple averages and 2-point correlations. But while a Markovian market has no memory to exploit and cannot be beaten systematically, a martingale admits memory that might be exploitable in higher order correlations. We also use the analysis of this paper to correct a misstatement of the ‘fair game’ condition in terms of serial correlations in Fama’s paper on the EMH. We emphasize that the use of the log increment as a variable in data analysis generates spurious fat tails and spurious Hurst exponents.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiao, Heng; Endo, Satoshi; Wong, May
Yamaguchi and Feingold (2012) note that the cloud fields in their Weather Research and Forecasting (WRF) large-eddy simulations (LESs) of marine stratocumulus exhibit a strong sensitivity to time stepping choices. In this study, we reproduce and analyze this sensitivity issue using two stratocumulus cases, one marine and one continental. Results show that (1) the sensitivity is associated with spurious motions near the moisture jump between the boundary layer and the free atmosphere, and (2) these spurious motions appear to arise from neglecting small variations in water vapor mixing ratio (qv) in the pressure gradient calculation in the acoustic substepping portion of the integration procedure. We show that this issue is remedied in the WRF dynamical core by replacing the prognostic equation for the potential temperature θ with one for the moist potential temperature θm = θ(1 + 1.61 qv), which allows consistent treatment of moisture in the calculation of pressure during the acoustic substeps. With this modification, the spurious motions and the sensitivity to the time stepping settings (i.e., the dynamic time step length and number of acoustic substeps) are eliminated in both of the example stratocumulus cases. This modification improves the applicability of WRF for LES applications, and possibly other models using similar dynamical core formulations, and also permits the use of longer time steps than in the original code.
Portable Wireless LAN Device and Two-way Radio Threat Assessment for Aircraft Navigation Radios
NASA Technical Reports Server (NTRS)
Nguyen, Truong X.; Koppen, Sandra V.; Ely, Jay J.; Williams, Reuben A.; Smith, Laura J.; Salud, Maria Theresa P.
2003-01-01
Measurement processes, data, and analysis are provided to address the concern that Wireless Local Area Network devices and two-way radios may cause electromagnetic interference to aircraft navigation radio systems. A radiated emission measurement process is developed and spurious radiated emissions from various devices are characterized using reverberation chambers. Spurious radiated emissions in aircraft radio frequency bands from several wireless network devices are compared with baseline emissions from standard laptop computers and personal digital assistants. In addition, spurious radiated emission data in aircraft radio frequency bands from seven pairs of two-way radios are provided. A description of the measurement process, device modes of operation, and the measurement results are reported. Aircraft interference path loss measurements were conducted on four Boeing 747 and Boeing 737 aircraft for several aircraft radio systems. The measurement approach is described and the path loss results are compared with existing data from reference documents, standards, and NASA partnerships. In-band on-channel interference thresholds are compiled from an existing reference document. Using these data, a risk assessment is provided for interference from wireless network devices and two-way radios to aircraft systems, including Localizer, Glideslope, Very High Frequency Omnidirectional Range, Microwave Landing System, and Global Positioning System. The report compares the interference risks associated with emissions from wireless network devices and two-way radios against those from standard laptops and personal digital assistants. Existing receiver interference threshold references are identified as requiring more data for better interference risk assessments.
Massive black hole and gas dynamics in galaxy nuclei mergers - I. Numerical implementation
NASA Astrophysics Data System (ADS)
Lupi, Alessandro; Haardt, Francesco; Dotti, Massimo
2015-01-01
Numerical effects are known to plague adaptive mesh refinement (AMR) codes when treating massive particles, e.g. those representing massive black holes (MBHs). In an evolving background, they can experience strong, spurious perturbations and then follow unphysical orbits. We study by means of numerical simulations the dynamical evolution of a pair of MBHs in the rapidly and violently evolving gaseous and stellar background that follows a galaxy major merger. We confirm that spurious numerical effects alter the MBH orbits in AMR simulations, and show that the numerical issues are ultimately due to a drop in the spatial resolution during the simulation, drastically reducing the accuracy of the gravitational force computation. We therefore propose a new refinement criterion suited for massive particles, able to solve in a fast and precise way for their orbits in highly dynamical backgrounds. The new refinement criterion we designed forces the region around each massive particle to remain at the maximum allowed resolution, independent of the local gas density. Such maximally resolved regions then follow the MBHs along their orbits, effectively avoiding all spurious effects caused by resolution changes. Our suite of high-resolution AMR hydrodynamic simulations, including different prescriptions for the sub-grid gas physics, shows that the new refinement implementation has the advantage of not altering the physical evolution of the MBHs, accounting for all the non-trivial physical processes taking place in violent dynamical scenarios, such as the final stages of a galaxy major merger.
Felice, Carmelo J; Madrid, Rossana E; Valentinuzzi, Max E
2005-01-01
Background In Impedance Microbiology, the time during which the measuring equipment is connected to the bipolar cells is rather long, usually between 6 and 24 hrs for microorganisms with duplication times on the order of less than one hour and concentrations ranging from 10^1 to 10^7 [CFU/ml]. Under these conditions, the electrode-electrolyte interface impedance may show a slow drift of about 2%/hr. By and large, growth curves superimposed on such drift do not stabilize, are less reproducible, and keep on distorting the temporal reactive or resistive records throughout the measurement due to interface changes, in turn originated in bacterial activity. This problem has been found when growth curves were obtained by means of impedance analyzers or with impedance bridges using different types of operational amplifiers. Methods Suspecting that the input circuitry was the culprit of the deleterious effect, we used (a) ultra-low bias current amplifiers, (b) isolating relays for the selection of cells, and (c) a shorter connection time, so that the relays were kept open after the readings, to bring down such spurious drift to a negligible value. Bacterial growth curves were obtained in order to test their quality. Results It was demonstrated that the drift decreases tenfold when the circuit remained connected to the cell for only a short time between measurements, so that the distortion became truly negligible. Improvement due to better input amplifiers was not as good as that from reducing the connection time. Moreover, temperature effects were insignificant with a regulation of ± 0.2 [°C]. Frequency had no influence either. Conclusion The drift originated either at the dc input bias offset current (Ios) of the integrated circuits, or in discrete transistors connected directly to the electrodes immersed in the cells, depending on the particular circuit arrangement. Reduction of the connection time was the best countermeasure. PMID:15796776
Finch, Megan L.; Passman, Adam M.; Strauss, Robyn P.; Yeoh, George C.; Callus, Bernard A.
2015-01-01
The Yes-associated protein (YAP) is a potent transcriptional co-activator that functions as a nuclear effector of the Hippo signaling pathway. YAP is oncogenic and its activity is linked to its cellular abundance and nuclear localisation. Activation of the Hippo pathway restricts YAP nuclear entry via its phosphorylation by Lats kinases and consequent cytoplasmic retention bound to 14-3-3 proteins. We examined YAP expression in liver progenitor cells (LPCs) and surprisingly found that transformed LPCs did not show an increase in YAP abundance compared to the non-transformed LPCs from which they were derived. We then sought to ascertain whether nuclear YAP was more abundant in transformed LPCs. We used an antibody that we confirmed was specific for YAP by immunoblotting to determine YAP’s sub-cellular localisation by immunofluorescence. This antibody showed diffuse staining for YAP within the cytosol and nuclei, but, noticeably, it showed intense staining of the nucleoli of LPCs. This staining was non-specific, as shRNA treatment of cells abolished YAP expression to undetectable levels by Western blot yet the nucleolar staining remained. Similar spurious YAP nucleolar staining was also seen in mouse embryonic fibroblasts and mouse liver tissue, indicating that this antibody is unsuitable for immunological applications to determine YAP sub-cellular localisation in mouse cells or tissues. Interestingly nucleolar staining was not evident in D645 cells suggesting the antibody may be suitable for use in human cells. Given the large body of published work on YAP in recent years, many of which utilise this antibody, this study raises concerns regarding its use for determining sub-cellular localisation. From a broader perspective, it serves as a timely reminder of the need to perform appropriate controls to ensure the validity of published data. PMID:25658431
Tree demography dominates long-term growth trends inferred from tree rings.
Brienen, Roel J W; Gloor, Manuel; Ziv, Guy
2017-02-01
Understanding responses of forests to increasing CO 2 and temperature is an important challenge, but no easy task. Tree rings are increasingly used to study such responses. In a recent study, van der Sleen et al. (2014) Nature Geoscience, 8, 4 used tree rings from 12 tropical tree species and find that despite increases in intrinsic water use efficiency, no growth stimulation is observed. This challenges the idea that increasing CO 2 would stimulate growth. Unfortunately, tree ring analysis can be plagued by biases, resulting in spurious growth trends. While their study evaluated several biases, it does not account for all. In particular, one bias may have seriously affected their results. Several of the species have recruitment patterns, which are not uniform, but clustered around one specific year. This results in spurious negative growth trends if growth rates are calculated in fixed size classes, as 'fast-growing' trees reach the sampling diameter earlier compared to slow growers and thus fast growth rates tend to have earlier calendar dates. We assessed the effect of this 'nonuniform age bias' on observed growth trends and find that van der Sleen's conclusions of a lack of growth stimulation do not hold. Growth trends are - at least partially - driven by underlying recruitment or age distributions. Species with more clustered age distributions show more negative growth trends, and simulations to estimate the effect of species' age distributions show growth trends close to those observed. Re-evaluation of the growth data and correction for the bias result in significant positive growth trends of 1-2% per decade for the full period, and 3-7% since 1950. These observations, however, should be taken cautiously as multiple biases affect these trend estimates. In all, our results highlight that tree ring studies of long-term growth trends can be strongly influenced by biases if demographic processes are not carefully accounted for. © 2016 The Authors. 
Global Change Biology Published by John Wiley & Sons Ltd.
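The "nonuniform age bias" described in the abstract can be reproduced with a toy simulation. The sketch below is an invented illustration, not the authors' analysis: all trees share one recruitment year and constant growth rates with no real trend, yet sampling within a fixed size class produces an apparent negative trend.

```python
import numpy as np

rng = np.random.default_rng(1)

# All trees recruited in the same calendar year (a clustered age
# distribution) with lifetime-constant growth rates and NO real trend.
n = 500
recruit_year = 1900
growth = rng.uniform(0.1, 0.6, size=n)     # diameter growth, cm/year

# Fixed size class starting at 10 cm diameter: each tree enters the class
# when it reaches 10 cm, so fast growers enter in earlier calendar years.
entry_year = recruit_year + 10.0 / growth

# Apparent "trend": correlation between the calendar year at which growth
# is measured in the fixed size class and the measured growth rate.
r = np.corrcoef(entry_year, growth)[0, 1]
print(r < 0)  # True: a spurious negative growth trend with no real decline
```

Because fast growers reach the sampling diameter earlier, fast growth rates carry earlier calendar dates, mimicking a decline over time even though no tree's growth changed.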
Personal use of hair dyes and the risk of bladder cancer: results of a meta-analysis.
Huncharek, Michael; Kupelnick, Bruce
2005-01-01
OBJECTIVE: This study examined the methodology of observational studies that explored an association between personal use of hair dye products and the risk of bladder cancer. METHODS: Data were pooled from epidemiological studies using a general variance-based meta-analytic method that employed confidence intervals. The outcome of interest was a summary relative risk (RR) reflecting the risk of bladder cancer development associated with use of hair dye products vs. non-use. Sensitivity analyses were performed to explain any observed statistical heterogeneity and to explore the influence of specific study characteristics on the summary estimate of effect. RESULTS: Initially combining homogeneous data from six case-control studies and one cohort study yielded a non-significant RR of 1.01 (0.92, 1.11), suggesting no association between hair dye use and bladder cancer development. Sensitivity analyses examining the influence of hair dye type, color, and study design on this suspected association showed that uncontrolled confounding and design limitations contributed to a spurious non-significant summary RR. The sensitivity analyses yielded statistically significant RRs ranging from 1.22 (1.11, 1.51) to 1.50 (1.30, 1.98), indicating that personal use of hair dye products increases bladder cancer risk by 22% to 50% vs. non-use. CONCLUSION: The available epidemiological data suggest an association between personal use of hair dye products and increased risk of bladder cancer. PMID:15736329
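The variance-based pooling named in the abstract can be sketched with inverse-variance weighting on the log scale. This is a minimal sketch of the general method, not the authors' exact implementation, and the study numbers below are hypothetical.

```python
import math

def pool_relative_risks(studies):
    """Inverse-variance (general variance-based) pooling of relative risks.
    Each study is (RR, ci_low, ci_high) with a 95% CI; the SE of log RR is
    recovered from the CI width."""
    z95 = 1.959964
    num = den = 0.0
    for rr, lo, hi in studies:
        se = (math.log(hi) - math.log(lo)) / (2 * z95)
        w = 1.0 / se ** 2                  # inverse-variance weight
        num += w * math.log(rr)
        den += w
    mean_log = num / den
    se_pooled = math.sqrt(1.0 / den)
    ci = (math.exp(mean_log - z95 * se_pooled),
          math.exp(mean_log + z95 * se_pooled))
    return math.exp(mean_log), ci

# Hypothetical study results (illustrative numbers only).
studies = [(1.10, 0.90, 1.34), (0.95, 0.80, 1.13), (1.05, 0.85, 1.30)]
rr, (lo, hi) = pool_relative_risks(studies)
print(round(rr, 2), lo < 1.0 < hi)
```

A pooled CI that straddles 1.0, as here, is exactly the "non-significant summary RR" situation that the sensitivity analyses in the study probe further by stratifying on dye type, color, and design.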
Perceived Stress and Mortality in a Taiwanese Older Adult Population
Vasunilashorn, Sarinnapha; Glei, Dana A.; Weinstein, Maxine; Goldman, Noreen
2015-01-01
Perceived stress is associated with poor health outcomes including negative affect, increased susceptibility to the common cold, and cardiovascular disease; the consequences of perceived stress for mortality, however, have received less attention. This study characterizes the relationship between perceived stress and 11-year mortality in a population of Taiwanese adults aged 53+. Using the Survey of Health and Living Status of the Near Elderly and Elderly of Taiwan, we calculated a composite measure of perceived stress based on six items pertaining to the health, financial situation, and occupation of the respondents and their families. Proportional hazard models were used to determine whether perceived stress predicted mortality. After adjusting for sociodemographic factors only, we found that a one standard deviation increase in perceived stress was associated with a 19% increase in all-cause mortality risk during the 11-year follow-up period (HR=1.19, 95% CI 1.13–1.26). The relationship was greatly attenuated when perceptions of stress regarding health were excluded, and was not significant after adjusting for medical conditions, mobility limitations, and depressive symptoms. We conclude that the association between perceived stress and mortality is explained by an individual's current health; however, our data do not allow us to distinguish between two possible interpretations of this conclusion: a) the relationship between perceived stress and mortality is spurious, or b) poor health acts as the mediator. PMID:23869432
A low complexity, low spur digital IF conversion circuit for high-fidelity GNSS signal playback
NASA Astrophysics Data System (ADS)
Su, Fei; Ying, Rendong
2016-01-01
A low-complexity, high-efficiency, and low-spur digital intermediate frequency (IF) conversion circuit is discussed in this paper. This circuit is a key element in a high-fidelity GNSS signal playback instrument. We analyze the spur performance of a finite state machine (FSM) based numerically controlled oscillator (NCO); by optimizing the control algorithm, an FSM-based NCO with 3 quantization stages achieves 65 dB spurious-free dynamic range (SFDR) within the range of the seventh harmonic. Compared with a traditional lookup-table-based NCO design of the same SFDR performance, the logic resources required to implement the NCO are reduced to 1/3. The proposed design method can be extended to IF conversion systems requiring good SFDR over higher harmonic components by increasing the number of quantization stages.
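The paper's FSM-based architecture and its quantization stages are not reproduced here; the sketch below instead shows the generic mechanism an NCO designer trades against: a phase-accumulator NCO whose phase is truncated before the sine is evaluated, which creates spurs, and an SFDR measurement from an FFT. All parameter values are illustrative assumptions.

```python
import numpy as np

def nco(n_samples, acc_bits, phase_bits, fcw):
    """Toy phase-accumulator NCO (not the paper's FSM design): the
    accumulator is truncated to phase_bits before the sine is evaluated,
    and this phase truncation is what creates spurs."""
    acc, mask = 0, (1 << acc_bits) - 1
    out = np.empty(n_samples)
    for i in range(n_samples):
        addr = acc >> (acc_bits - phase_bits)          # truncated phase
        out[i] = np.sin(2.0 * np.pi * addr / (1 << phase_bits))
        acc = (acc + fcw) & mask
    return out

def sfdr_db(x):
    """Spurious-free dynamic range: carrier line vs. largest other line."""
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    k = int(np.argmax(spec))
    spur = np.delete(spec, range(k - 2, k + 3)).max()  # drop carrier lobes
    return 20.0 * np.log10(spec[k] / spur)

N, ACC = 4096, 24
fcw = 37 * (1 << ACC) // N        # bin-aligned tone at FFT bin 37

coarse = sfdr_db(nco(N, ACC, 6, fcw))    # 6-bit phase: strong spurs
fine = sfdr_db(nco(N, ACC, 12, fcw))     # 12-bit phase: far cleaner
print(f"SFDR: 6-bit phase {coarse:.1f} dB, 12-bit phase {fine:.1f} dB")
```

The comparison shows why finer phase quantization (or, in the paper, a cleverer FSM control algorithm) buys SFDR at the cost of hardware resources.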
Carbonell, Felix; Bellec, Pierre; Shmuel, Amir
2011-01-01
The influence of the global average signal (GAS) on functional magnetic resonance imaging (fMRI) based resting-state functional connectivity is a matter of ongoing debate. The global average fluctuations increase the correlation between functional systems beyond the correlation that reflects their specific functional connectivity. Hence, removal of the GAS is a common practice for facilitating the observation of network-specific functional connectivity. This strategy relies on the implicit assumption of a linear-additive model according to which global fluctuations, irrespective of their origin, and network-specific fluctuations are superposed. However, removal of the GAS introduces spurious negative correlations between functional systems, bringing into question the validity of previous findings of negative correlations between fluctuations in the default-mode and the task-positive networks. Here we present an alternative method for estimating global fluctuations, immune to the complications associated with the GAS. Principal component analysis was applied to resting-state fMRI time series. A global-signal effect estimator was defined as the principal component (PC) that correlated best with the GAS. The mean correlation coefficient between our proposed PC-based global effect estimator and the GAS was 0.97±0.05, demonstrating that our estimator successfully approximated the GAS. In 66 out of 68 runs, the PC that showed the highest correlation with the GAS was the first PC. Since PCs are orthogonal, our method provides an estimator of the global fluctuations that is uncorrelated with the remaining, network-specific fluctuations. Moreover, unlike the regression of the GAS, the regression of the PC-based global effect estimator does not introduce spurious anti-correlations beyond the decrease in seed-based correlation values allowed by the assumed additive model. After regressing this PC-based estimator out of the original time series, we observed robust anti-correlations between resting-state fluctuations in the default-mode and the task-positive networks. We conclude that resting-state global fluctuations and network-specific fluctuations are uncorrelated, supporting a Resting-State Linear-Additive Model. In addition, we conclude that the network-specific resting-state fluctuations of the default-mode and task-positive networks show artifact-free anti-correlations.
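The estimator described above can be sketched on synthetic data (a minimal illustration under assumed signal strengths, not the study's fMRI pipeline): build time series as a shared global fluctuation plus voxel-specific noise, run PCA, pick the PC that best matches the GAS, and regress it out.

```python
import numpy as np

rng = np.random.default_rng(42)
T, V = 200, 50                      # time points, voxels (illustrative sizes)

# Synthetic data: one global fluctuation shared by all voxels,
# plus voxel-specific ("network") fluctuations
global_sig = rng.standard_normal(T)
data = global_sig[:, None] + 0.5 * rng.standard_normal((T, V))

gas = data.mean(axis=1)             # global average signal (GAS)

# PCA via SVD of the time-centered data
centered = data - data.mean(axis=0)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
pcs = u * s                         # PC time courses

# Estimator: the PC most correlated with the GAS
corrs = [abs(np.corrcoef(pcs[:, k], gas)[0, 1]) for k in range(pcs.shape[1])]
best = int(np.argmax(corrs))
print(f"best PC = {best}, |corr with GAS| = {corrs[best]:.3f}")

# Regress the PC-based estimator out of each voxel's time series;
# residuals are exactly orthogonal to the estimator
g = pcs[:, best]
beta = centered.T @ g / (g @ g)
cleaned = centered - np.outer(g, beta)
```

As in the abstract, the selected PC here is the first one and tracks the GAS almost perfectly, while the regression leaves the voxel-specific fluctuations orthogonal to the global estimator.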
Auto- and hetero-associative memory using a 2-D optical logic gate
NASA Technical Reports Server (NTRS)
Chao, Tien-Hsin
1989-01-01
An optical associative memory system suitable for both auto- and hetero-associative recall is demonstrated. This system utilizes Hamming distance as the similarity measure between a binary input and a memory image, with the aid of a two-dimensional optical EXCLUSIVE OR (XOR) gate and a parallel electronic comparator module. Based on the Hamming distance measurement, this optical associative memory performs a nearest-neighbor search and the result is displayed in the output plane in real time. This optical associative memory is fast and noniterative, and produces no spurious output states, in contrast to the Hopfield neural network model.
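The recall scheme can be emulated in software (a sketch only: the optical XOR gate and comparator hardware are replaced by bitwise operations, and the images and labels below are made up):

```python
import numpy as np

def recall(query, memories, outputs=None):
    """Nearest-neighbour recall by Hamming distance. XOR flags the
    disagreeing pixels (the job of the optical XOR gate); their count is
    the Hamming distance, and the closest stored image (auto-) or its
    paired output (hetero-associative) is returned."""
    distances = [int(np.bitwise_xor(query, m).sum()) for m in memories]
    winner = int(np.argmin(distances))
    return memories[winner] if outputs is None else outputs[winner]

# Three stored 4x4 binary images
m_blank = np.zeros((4, 4), dtype=np.uint8)
m_diag = np.eye(4, dtype=np.uint8)
m_full = np.ones((4, 4), dtype=np.uint8)
memories = [m_blank, m_diag, m_full]

# A noisy version of the diagonal image (one flipped pixel)
query = m_diag.copy()
query[0, 1] ^= 1

auto = recall(query, memories)                          # auto-associative
hetero = recall(query, memories, ["blank", "diag", "full"])
print(hetero)  # diag
```

Because recall returns an exact stored memory (or its paired output), no spurious mixture states can appear, which is the contrast with Hopfield-style iterative dynamics noted in the abstract.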
Geometrically derived difference formulae for the numerical integration of trajectory problems
NASA Technical Reports Server (NTRS)
Mcleod, R. J. Y.; Sanz-Serna, J. M.
1981-01-01
The term 'trajectory problem' is taken to include problems that can arise, for instance, in connection with contour plotting, or in the application of continuation methods, or during phase-plane analysis. Geometrical techniques are used to construct difference methods for these problems to produce in turn explicit and implicit circularly exact formulae. Based on these formulae, a predictor-corrector method is derived which, when compared with a closely related standard method, shows improved performance. It is found that this latter method produces spurious limit cycles, and this behavior is partly analyzed. Finally, a simple variable-step algorithm is constructed and tested.
Alcohol advertising bans and alcohol abuse.
Young, D J
1993-07-01
Henry Saffer [Saffer (1991) Journal of Health Economics 10, 65-79] concludes that bans on broadcast advertising for alcoholic beverages reduce total alcohol consumption, motor vehicle fatalities, and cirrhosis deaths. A reexamination of his data and procedures reveals a number of flaws. First, there is evidence of reverse causation: countries with low consumption/death rates tend to adopt advertising bans, creating a (spurious) negative correlation between bans and consumption/death rates. Second, even this correlation largely disappears when the estimates are corrected for serial correlation. Third, estimates based on the components of consumption--spirits, beer and wine--mostly indicate that bans are associated with increased consumption.
A spectral, quasi-cylindrical and dispersion-free Particle-In-Cell algorithm
Lehe, Remi; Kirchen, Manuel; Andriyash, Igor A.; ...
2016-02-17
We propose a spectral Particle-In-Cell (PIC) algorithm based on the combination of a Hankel transform and a Fourier transform. For physical problems that have close-to-cylindrical symmetry, this algorithm can be much faster than full 3D PIC algorithms. In addition, unlike standard finite-difference PIC codes, the proposed algorithm is free of spurious numerical dispersion in vacuum. This algorithm is benchmarked in several situations of interest for laser-plasma interactions. These benchmarks show that it avoids a number of numerical artifacts that would otherwise affect the physics in a standard PIC algorithm, including the zero-order numerical Cherenkov effect.
Patorno, Elisabetta; Patrick, Amanda R; Garry, Elizabeth M; Schneeweiss, Sebastian; Gillet, Victoria G; Bartels, Dorothee B; Masso-Gonzalez, Elvira; Seeger, John D
2014-11-01
Recent years have witnessed a growing body of observational literature on the association between glucose-lowering treatments and cardiovascular disease. However, many of the studies are based on designs or analyses that inadequately address the methodological challenges involved. We reviewed recent observational literature on the association between glucose-lowering medications and cardiovascular outcomes and assessed the design and analysis methods used, with a focus on their ability to address specific methodological challenges. We describe and illustrate these methodological issues and their impact on observed associations, providing examples from the reviewed literature. We suggest approaches that may be employed to manage these methodological challenges. From the evaluation of 81 publications of observational investigations assessing the association between glucose-lowering treatments and cardiovascular outcomes, we identified the following methodological challenges: 1) handling of temporality in administrative databases; 2) handling of risks that vary with time and treatment duration; 3) definitions of the exposure risk window; 4) handling of exposures that change over time; and 5) handling of confounding by indication. Most of these methodological challenges may be suitably addressed through application of appropriate methods. Observational research plays an increasingly important role in the evaluation of the clinical effects of diabetes treatment. Implementation of appropriate research methods holds the promise of reducing the potential for spurious findings and the risk that the spurious findings will mislead the medical community about risks and benefits of diabetes medications.
Genovar: a detection and visualization tool for genomic variants.
Jung, Kwang Su; Moon, Sanghoon; Kim, Young Jin; Kim, Bong-Jo; Park, Kiejung
2012-05-08
Along with single nucleotide polymorphisms (SNPs), copy number variation (CNV) is considered an important source of genetic variation associated with disease susceptibility. Despite the importance of CNV, the tools currently available for its analysis often produce false positive results due to limitations such as low resolution of array platforms, platform specificity, and the type of CNV. To resolve this problem, spurious signals must be separated from true signals by visual inspection. None of the previously reported CNV analysis tools supports this function together with the simultaneous visualization of array comparative genomic hybridization (aCGH) data and sequence alignment. The purpose of the present study was to develop a useful program for the efficient detection and visualization of CNV regions that enables the manual exclusion of erroneous signals. A JAVA-based stand-alone program called Genovar was developed. To ascertain whether a detected CNV region is a novel variant, Genovar compares the detected CNV regions with previously reported CNV regions using the Database of Genomic Variants (DGV, http://projects.tcag.ca/variation) and the Single Nucleotide Polymorphism Database (dbSNP). The current version of Genovar is capable of visualizing genomic data from sources such as the aCGH data file and sequence alignment format files. Genovar is freely accessible and provides a user-friendly graphic user interface (GUI) to facilitate the detection of CNV regions. The program also provides comprehensive information to help in the elimination of spurious signals by visual inspection, making Genovar a valuable tool for reducing false positive CNV results. http://genovar.sourceforge.net/.
ZEEMAN DOPPLER MAPS: ALWAYS UNIQUE, NEVER SPURIOUS?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stift, Martin J.; Leone, Francesco
Numerical models of atomic diffusion in magnetic atmospheres of ApBp stars predict abundance structures that differ from the empirical maps derived with (Zeeman) Doppler mapping (ZDM). An in-depth analysis of this apparent disagreement investigates the detectability by means of ZDM of a variety of abundance structures, including (warped) rings predicted by theory, but also complex spot-like structures. Even when spectra of high signal-to-noise ratio are available, it can prove difficult or altogether impossible to correctly recover shapes, positions, and abundances of a mere handful of spots, notwithstanding the use of all four Stokes parameters and an exactly known field geometry; the recovery of (warped) rings can be equally challenging. Inversions of complex abundance maps that are based on just one or two spectral lines usually permit multiple solutions. It turns out that it can by no means be guaranteed that any of the regularization functions in general use for ZDM (maximum entropy or Tikhonov) will lead to a true abundance map instead of some spurious one. Attention is drawn to the need for a study that would elucidate the relation between the stratified, field-dependent abundance structures predicted by diffusion theory on the one hand, and empirical maps obtained by means of “canonical” ZDM, i.e., with mean atmospheres and unstratified abundances, on the other hand. Finally, we point out difficulties arising from the three-dimensional nature of the atomic diffusion process in magnetic ApBp star atmospheres.
Klipstein, P C
2018-07-11
For 2D topological insulators with strong electron-hole hybridization, such as HgTe/CdTe quantum wells, the widely used 4 × 4 k · p Hamiltonian based on the first electron and heavy hole sub-bands yields an equal number of physical and spurious solutions, for both the bulk states and the edge states. For symmetric bands and zero wave vector parallel to the sample edge, the mid-gap bulk solutions are identical to the edge solutions. In all cases, the physical edge solution is exponentially localized to the boundary and has been shown previously to satisfy standard boundary conditions for the wave function and its derivative, even in the limit of an infinite wall potential. The same treatment is now extended to the case of narrow sample widths, where for each spin direction, a gap appears in the edge state dispersions. For widths greater than 200 nm, this gap is less than half of the value reported for open boundary conditions, which are called into question because they include a spurious wave function component. The gap in the edge state dispersions is also calculated for weakly hybridized quantum wells such as InAs/GaSb/AlSb. In contrast to the strongly hybridized case, the edge states at the zone center only have pure exponential character when the bands are symmetric and when the sample has certain characteristic width values.
NASA Astrophysics Data System (ADS)
Israel, Holger; Massey, Richard; Prod'homme, Thibaut; Cropper, Mark; Cordes, Oliver; Gow, Jason; Kohley, Ralf; Marggraf, Ole; Niemi, Sami; Rhodes, Jason; Short, Alex; Verhoeve, Peter
2015-10-01
Radiation damage to space-based charge-coupled device detectors creates defects which result in an increasing charge transfer inefficiency (CTI) that causes spurious image trailing. Most of the trailing can be corrected during post-processing, by modelling the charge trapping and moving electrons back to where they belong. However, such correction is not perfect, and damage is continuing to accumulate in orbit. To aid future development, we quantify the limitations of current approaches, and determine where imperfect knowledge of model parameters most degrades measurements of photometry and morphology. As a concrete application, we simulate 1.5 × 10^9 `worst-case' galaxy and 1.5 × 10^8 star images to test the performance of the Euclid visual instrument detectors. There are two separable challenges. If the model used to correct CTI is perfectly the same as that used to add CTI, 99.68 per cent of spurious ellipticity is corrected in our setup; the residual arises because readout noise is not subject to CTI, but gets overcorrected during correction. Secondly, assuming the first issue to be solved, the charge trap density will need to be known within Δρ/ρ = (0.0272 ± 0.0005) per cent and the characteristic release time of the dominant species within Δτ/τ = (0.0400 ± 0.0004) per cent. This work presents the next level of definition of in-orbit CTI calibration procedures for Euclid.
Avoiding pitfalls in estimating heritability with the common options approach
Danchin, Etienne; Wajnberg, Eric; Wagner, Richard H.
2014-01-01
In many circumstances, heritability estimates are subject to two potentially interacting pitfalls: the spatial and the regression to the mean (RTM) fallacies. The spatial fallacy occurs when the set of potential movement options differs among individuals according to where individuals depart. The RTM fallacy occurs when extreme measurements are followed by measurements that are closer to the mean. We simulated data from the largest published heritability study of a behavioural trait, colony size choice, to examine the operation of the two fallacies. We found that spurious heritabilities are generated under a wide range of conditions both in experimental and correlative estimates of heritability. Classically designed cross-foster experiments can actually increase the frequency of spurious heritabilities. Simulations showed that experiments providing all individuals with the identical set of options, such as by fostering all offspring in the same breeding location, are immune to the two pitfalls. PMID:24865284
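The regression-to-the-mean (RTM) fallacy at the heart of this abstract is easy to reproduce with a toy simulation (a sketch under assumed parameters, unrelated to the study's colony-size data): when a stable trait is measured twice with noise, individuals selected for being extreme on the first measurement are, on average, closer to the mean on the second, even though nothing about them changed.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

true_value = rng.normal(0.0, 1.0, n)            # stable underlying trait
meas1 = true_value + rng.normal(0.0, 1.0, n)    # noisy first measurement
meas2 = true_value + rng.normal(0.0, 1.0, n)    # noisy second measurement

# Select individuals that look extreme on the first measurement
extreme = meas1 > 2.0
print(f"mean at t1: {meas1[extreme].mean():.2f}, "
      f"at t2: {meas2[extreme].mean():.2f}")
```

The second-measurement mean of the selected group drops toward zero: a naive analysis that interprets this drop (or its absence) as a trait effect generates exactly the kind of spurious estimate the simulations in the paper are designed to expose.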
NASA Astrophysics Data System (ADS)
Sombun, S.; Steinheimer, J.; Herold, C.; Limphirat, A.; Yan, Y.; Bleicher, M.
2018-02-01
We study the dependence of the normalized moments of the net-proton multiplicity distributions on the definition of centrality in relativistic nuclear collisions at a beam energy of √(s_NN) = 7.7 GeV. Using the Ultra-relativistic Quantum Molecular Dynamics (UrQMD) model as event generator, we find that the centrality definition has a large effect on the extracted cumulant ratios. Furthermore, we find that the finite efficiency for the determination of the centrality introduces an additional systematic uncertainty. Finally, we quantitatively investigate the effects of event pile-up and other possible spurious effects which may change the measured proton number. We find that pile-up alone is not sufficient to describe the data, and show that a random double counting of events, adding significantly to the measured proton number, affects mainly the higher-order cumulants in most central collisions.
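The cumulant ratios discussed here can be computed directly from event-wise multiplicities. A minimal sketch (a Poisson toy sample, not UrQMD output): for a Poisson distribution all cumulants equal the mean, so the normalized ratios are 1, and deviations from 1 are what signal non-Poissonian fluctuations.

```python
import numpy as np

def cumulants(x):
    """First four cumulants of a sample of event-wise multiplicities."""
    x = np.asarray(x, dtype=float)
    m = x.mean()
    d = x - m
    c2 = (d**2).mean()
    c3 = (d**3).mean()
    c4 = (d**4).mean() - 3.0 * c2**2
    return m, c2, c3, c4

rng = np.random.default_rng(1)
events = rng.poisson(20.0, size=1_000_000)   # toy proton multiplicities

c1, c2, c3, c4 = cumulants(events)
print(f"C2/C1 = {c2/c1:.3f}  C3/C2 = {c3/c2:.3f}  C4/C2 = {c4/c2:.3f}")
```

Effects like pile-up or double counting modify the sampled multiplicities and hence shift these ratios, most visibly in the higher-order cumulants, which is why they matter for the measurements discussed above.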
Opportunities for shear energy scaling in bulk acoustic wave resonators.
Jose, Sumy; Hueting, Raymond J E
2014-10-01
An important energy loss contribution in bulk acoustic wave resonators is formed by so-called shear waves, which are transversal waves that propagate vertically through the devices with a horizontal motion. In this work, we report for the first time scaling of the shear-confined spots, i.e., spots containing a high concentration of shear wave displacement, controlled by the frame region width at the edge of the resonator. We also demonstrate a novel methodology to arrive at an optimum frame region width for spurious mode suppression and shear wave confinement. This methodology makes use of dispersion curves obtained from finite-element method (FEM) eigenfrequency simulations for arriving at an optimum frame region width. The frame region optimization is demonstrated for solidly mounted resonators employing several shear wave optimized reflector stacks. Finally, the FEM simulation results are compared with measurements for resonators with Ta2O5/SiO2 stacks showing suppression of the spurious modes.
Robust Statistical Detection of Power-Law Cross-Correlation.
Blythe, Duncan A J; Nikulin, Vadim V; Müller, Klaus-Robert
2016-06-02
We show that widely used approaches in statistical physics incorrectly indicate the existence of power-law cross-correlations between financial stock market fluctuations measured over several years and the neuronal activity of the human brain lasting for only a few minutes. While such cross-correlations are nonsensical, no current methodology allows them to be reliably discarded, leaving researchers at greater risk when the spurious nature of cross-correlations is not clear from the unrelated origin of the time series and rather requires careful statistical estimation. Here we propose a theory and method (PLCC-test) which allows us to rigorously and robustly test for power-law cross-correlations, correctly detecting genuine and discarding spurious cross-correlations, thus establishing meaningful relationships between processes in complex physical systems. Our method reveals for the first time the presence of power-law cross-correlations between amplitudes of the alpha and beta frequency ranges of the human electroencephalogram.
Radio Frequency Compatibility of an RFID Tag on Glideslope Navigation Receivers
NASA Technical Reports Server (NTRS)
Nguyen, Truong X.; Mielnik, John J.
2008-01-01
A process is demonstrated to show compatibility between a radio frequency identification (RFID) tag and an aircraft glideslope (GS) radio receiver. The particular tag chosen was previously shown to have significant spurious emission levels that exceeded the emission limit in the GS aeronautical band. The spurious emissions are emulated in the study by capturing the RFID fundamental transmission and playing back the signal in the GS band. The signal capturing and playback are achieved with a vector signal generator and a spectrum analyzer that can output the in-phase and quadrature (IQ) components. The simulated interference signal is combined with a GS signal before being injected into a GS receiver's antenna port for interference threshold determination. Minimum desired propagation loss values to avoid interference are then computed and compared against actual propagation losses for several aircraft.
Computational Aeroacoustics: An Overview
NASA Technical Reports Server (NTRS)
Tam, Christopher K. W.
2003-01-01
An overview of recent advances in computational aeroacoustics (CAA) is presented. CAA algorithms must be neither dispersive nor dissipative: they should propagate waves supported by the Euler equations with the correct group velocities. Computation domains are inevitably finite in size. To avoid the reflection of acoustic and other outgoing waves at the boundaries of the computation domain, special boundary conditions must be imposed in the boundary region. These boundary conditions either absorb all the outgoing waves without reflection or allow the waves to exit smoothly. High-order schemes invariably support spurious short waves. These spurious waves tend to pollute the numerical solution and must be selectively damped or filtered out. All these issues and relevant computation methods are briefly reviewed. Jet screech tones are known to have caused structural fatigue in military combat aircraft. Numerical simulation of the jet screech phenomenon is presented as an example of a successful application of CAA.
Bulk Genotyping of Biopsies Can Create Spurious Evidence for Heterogeneity in Mutation Content.
Kostadinov, Rumen; Maley, Carlo C; Kuhner, Mary K
2016-04-01
When multiple samples are taken from the neoplastic tissues of a single patient, it is natural to compare their mutation content. This is often done by bulk genotyping of whole biopsies, but the chance that a mutation will be detected in bulk genotyping depends on its local frequency in the sample. When the underlying mutation count per cell is equal, homogeneous biopsies will have more high-frequency mutations, and thus more detectable mutations, than heterogeneous ones. Using simulations, we show that bulk genotyping of data simulated under a neutral model of somatic evolution generates strong spurious evidence for non-neutrality, because the pattern of tissue growth systematically generates differences in biopsy heterogeneity. Any experiment which compares mutation content across bulk-genotyped biopsies may therefore suggest mutation rate or selection intensity variation even when these forces are absent. We discuss computational and experimental approaches for resolving this problem.
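The detection-frequency effect can be shown with a deliberately simple toy model (a hedged sketch with invented frequencies and threshold, not the paper's neutral-growth simulations): two biopsies carry the same true number of mutations, but in the heterogeneous biopsy the mutations sit in small subclones whose frequencies fall below the bulk assay's detection threshold.

```python
import numpy as np

rng = np.random.default_rng(3)
n_mut = 100                 # same true mutation count in each biopsy
detection_threshold = 0.20  # bulk assay calls a mutation above this frequency

# Homogeneous biopsy: every mutation is (near-)clonal
freq_homogeneous = rng.uniform(0.8, 1.0, n_mut)
# Heterogeneous biopsy: mutations spread across many small subclones
freq_heterogeneous = rng.uniform(0.0, 0.4, n_mut)

detected_hom = int((freq_homogeneous > detection_threshold).sum())
detected_het = int((freq_heterogeneous > detection_threshold).sum())
print(f"detected: homogeneous = {detected_hom}, heterogeneous = {detected_het}")
```

Comparing the two detected counts would suggest the biopsies differ in mutation content, even though by construction they do not: the spurious signal the abstract describes.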
Challenges of Big Data Analysis.
Fan, Jianqing; Han, Fang; Liu, Han
2014-06-01
Big Data bring new opportunities to modern society and challenges to data scientists. On the one hand, Big Data hold great promise for discovering subtle population patterns and heterogeneities that are not possible with small-scale data. On the other hand, the massive sample size and high dimensionality of Big Data introduce unique computational and statistical challenges, including scalability and storage bottlenecks, noise accumulation, spurious correlation, incidental endogeneity, and measurement errors. These challenges are distinctive and require a new computational and statistical paradigm. This article gives an overview of the salient features of Big Data and how these features drive a paradigm change in statistical and computational methods as well as computing architectures. We also provide various new perspectives on Big Data analysis and computation. In particular, we emphasize the viability of the sparsest solution in high-confidence sets and point out that the exogeneity assumptions in most statistical methods for Big Data cannot be validated due to incidental endogeneity. They can lead to wrong statistical inferences and consequently wrong scientific conclusions.
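The spurious-correlation phenomenon named in this abstract can be demonstrated in a few lines (a sketch under assumed sample sizes): with many independent predictors and few samples, the maximum absolute sample correlation with a completely unrelated target grows with dimensionality, so some variable always looks predictive.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50  # small sample size

def max_abs_corr(p):
    """Largest |sample correlation| between a random target and p
    independent predictors that are, by construction, unrelated to it."""
    y = rng.standard_normal(n)
    X = rng.standard_normal((n, p))
    yc = y - y.mean()
    Xc = X - X.mean(axis=0)
    r = Xc.T @ yc / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc))
    return float(np.abs(r).max())

few, many = max_abs_corr(10), max_abs_corr(10_000)
print(f"max |corr|: p = 10 -> {few:.2f}, p = 10000 -> {many:.2f}")
```

The high-dimensional run turns up much stronger "correlations" than the low-dimensional one despite all predictors being pure noise, which is why naive variable screening on Big Data can mislead.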
Disease Ecology, Biodiversity, and the Latitudinal Gradient in Income
Bonds, Matthew H.; Dobson, Andrew P.; Keenan, Donald C.
2012-01-01
While most of the world is thought to be on long-term economic growth paths, more than one-sixth of the world is roughly as poor today as their ancestors were hundreds of years ago. The majority of the extremely poor live in the tropics. The latitudinal gradient in income is highly suggestive of underlying biophysical drivers, of which disease conditions are an especially salient example. However, conclusions have been confounded by the simultaneous causality between income and disease, in addition to potentially spurious relationships. We use a simultaneous equations model to estimate the relative effects of vector-borne and parasitic diseases (VBPDs) and income on each other, controlling for other factors. Our statistical model indicates that VBPDs have systematically affected economic development, evident in contemporary levels of per capita income. The burden of VBPDs is, in turn, determined by underlying ecological conditions. In particular, the model predicts it to rise as biodiversity falls. Through these positive effects on human health, the model thus identifies measurable economic benefits of biodiversity. PMID:23300379
Ensemble analyses improve signatures of tumour hypoxia and reveal inter-platform differences
2014-01-01
Background The reproducibility of transcriptomic biomarkers across datasets remains poor, limiting clinical application. We and others have suggested that this is in part caused by differential error structure between datasets, and their incomplete removal by pre-processing algorithms. Methods To test this hypothesis, we systematically assessed the effects of pre-processing on biomarker classification using 24 different pre-processing methods and 15 distinct signatures of tumour hypoxia in 10 datasets (2,143 patients). Results We confirm strong pre-processing effects for all datasets and signatures, and find that these differ between microarray versions. Importantly, exploiting different pre-processing techniques in an ensemble technique improved classification for a majority of signatures. Conclusions Assessing biomarkers using an ensemble of pre-processing techniques shows clear value across multiple diseases, datasets and biomarkers. Importantly, ensemble classification improves biomarkers with initially good results but does not result in spuriously improved performance for poor biomarkers. While further research is required, this approach has the potential to become a standard for transcriptomic biomarkers. PMID:24902696
On the Mathematical Consequences of Binning Spike Trains.
Cessac, Bruno; Le Ny, Arnaud; Löcherbach, Eva
2017-01-01
We initiate a mathematical analysis of hidden effects induced by binning spike trains of neurons. Assuming that the original spike train has been generated by a discrete Markov process, we show that binning generates a stochastic process that is no longer Markov but is instead a variable-length Markov chain (VLMC) with unbounded memory. We also show that the law of the binned raster is a Gibbs measure in the DLR (Dobrushin-Lanford-Ruelle) sense coined in mathematical statistical mechanics. This allows the derivation of several important consequences on statistical properties of binned spike trains. In particular, we introduce the DLR framework as a natural setting to mathematically formalize anticipation, that is, to tell "how good" our nervous system is at making predictions. In a probabilistic sense, this corresponds to conditioning a process on its future, and we discuss how binning may affect our conclusions on this ability. We finally comment on the possible consequences of binning in the detection of spurious phase transitions or in the detection of incorrect evidence of criticality.
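For concreteness, the binning operation under analysis is the standard one (a sketch of the common convention, assumed here rather than taken from the paper's formalism): a 0/1 spike train is cut into windows of fixed length, and a bin is 1 if and only if it contains at least one spike.

```python
import numpy as np

def bin_spike_train(spikes, bin_size):
    """Bin a 0/1 spike train: a bin is 1 iff it contains >= 1 spike.
    Trailing samples that do not fill a whole bin are dropped."""
    spikes = np.asarray(spikes)
    n_bins = len(spikes) // bin_size
    trimmed = spikes[: n_bins * bin_size]
    return trimmed.reshape(n_bins, bin_size).max(axis=1)

spikes = np.array([0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1])
binned = bin_spike_train(spikes, 3)
print(binned)  # [1 0 1 1]
```

It is this many-to-one collapse of within-bin patterns that destroys the Markov property: two binned rasters that look identical can come from original trains with different continuation statistics, producing the unbounded-memory VLMC structure described above.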
Challenges of Big Data Analysis
Fan, Jianqing; Han, Fang; Liu, Han
2014-01-01
Big Data bring new opportunities to modern society and challenges to data scientists. On the one hand, Big Data hold great promise for discovering subtle population patterns and heterogeneities that are not possible with small-scale data. On the other hand, the massive sample size and high dimensionality of Big Data introduce unique computational and statistical challenges, including scalability and storage bottlenecks, noise accumulation, spurious correlation, incidental endogeneity, and measurement errors. These challenges are distinctive and require a new computational and statistical paradigm. This article gives an overview of the salient features of Big Data and how these features drive changes in statistical and computational methods as well as computing architectures. We also provide various new perspectives on Big Data analysis and computation. In particular, we emphasize the viability of the sparsest solution in a high-confidence set and point out that the exogeneity assumptions in most statistical methods for Big Data cannot be validated due to incidental endogeneity. These assumptions can lead to wrong statistical inferences and consequently wrong scientific conclusions. PMID:25419469
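The spurious-correlation challenge can be made concrete with a short simulation (sample size, dimension, and seed below are arbitrary choices of ours): with thousands of candidate features and only tens of samples, some feature correlates strongly with a completely unrelated response by chance alone.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 2000                        # few samples, many features
y = rng.standard_normal(n)             # response
X = rng.standard_normal((n, p))        # features, independent of y by construction

# Empirical correlation of each feature column with y.
Xc = X - X.mean(axis=0)
yc = y - y.mean()
corrs = Xc.T @ yc / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc))

# The best-looking feature appears strongly related despite independence.
max_abs_corr = np.abs(corrs).max()
```

With n = 50 and p = 2000 the maximum absolute correlation is typically around 0.5, even though every feature is independent of the response.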
In defence of pedagogy: a critique of the notion of andragogy.
Darbyshire, P
1993-10-01
Malcolm Knowles' theory of andragogy has gained increasing acceptance among nurse educators. Andragogy is espoused as a progressive educational theory, adopted as a theoretical underpinning for curricula and is even considered to be synonymous with a variety of teaching techniques and strategies such as 'problem-based' and 'self-directed' learning. This paper offers a critique of the notion of andragogy which maintains that the distinction created between andragogy and pedagogy is spurious and based upon assumptions which are untenable. It is argued that andragogy has been uncritically accepted within nursing education in much the same way that the nursing process and models of nursing were in their day. Finally, it is claimed that true pedagogy has far more radical, powerful and transformative possibilities for nursing education.
High-Order Semi-Discrete Central-Upwind Schemes for Multi-Dimensional Hamilton-Jacobi Equations
NASA Technical Reports Server (NTRS)
Bryson, Steve; Levy, Doron; Biegel, Bran R. (Technical Monitor)
2002-01-01
We present high-order semi-discrete central-upwind numerical schemes for approximating solutions of multi-dimensional Hamilton-Jacobi (HJ) equations. This scheme is based on the use of fifth-order central interpolants like those developed in [1], in the fluxes presented in [3]. These interpolants use the weighted essentially nonoscillatory (WENO) approach to avoid spurious oscillations near singularities, and become "central-upwind" in the semi-discrete limit. This scheme provides numerical approximations whose error is as much as an order of magnitude smaller than those in previous WENO-based fifth-order methods [2, 1]. These results are discussed via examples in one, two and three dimensions. We also present explicit N-dimensional formulas for the fluxes, discuss their monotonicity and the connection between this method and that in [2].
A resolvable subfilter-scale model specific to large-eddy simulation of under-resolved turbulence
NASA Astrophysics Data System (ADS)
Zhou, Yong; Brasseur, James G.; Juneja, Anurag
2001-09-01
Large-eddy simulation (LES) of boundary-layer flows has serious deficiencies near the surface when a viscous sublayer either does not exist (rough walls) or is not practical to resolve (high Reynolds numbers). In previous work, we have shown that the near-surface errors arise from the poor performance of algebraic subfilter-scale (SFS) models at the first several grid levels, where integral scales are necessarily under-resolved and the turbulence is highly anisotropic. In under-resolved turbulence, eddy viscosity and similarity SFS models create a spurious feedback loop between predicted resolved-scale (RS) velocity and modeled SFS acceleration, and are unable to simultaneously capture SFS acceleration and RS-SFS energy flux. To break the spurious coupling in a dynamically meaningful manner, we introduce a new modeling strategy in which the grid-resolved subfilter velocity is estimated from a separate dynamical equation containing the essential inertial interactions between SFS and RS velocity. This resolved SFS (RSFS) velocity is then used as a surrogate for the complete SFS velocity in the SFS stress tensor. We test the RSFS model by comparing LES of highly under-resolved anisotropic buoyancy-generated homogeneous turbulence with a corresponding direct numerical simulation (DNS). The new model successfully suppresses the spurious feedback loop between RS velocity and SFS acceleration, and greatly improves model predictions of the anisotropic structure of SFS acceleration and resolved velocity fields. Unlike algebraic models, the RSFS model accurately captures SFS acceleration intensity and RS-SFS energy flux, even during the nonequilibrium transient, and properly partitions SFS acceleration between SFS stress divergence and SFS pressure force.
On the interpretation of synchronization in EEG hyperscanning studies: a cautionary note.
Burgess, Adrian P
2013-01-01
EEG Hyperscanning is a method for studying two or more individuals simultaneously with the objective of elucidating how co-variations in their neural activity (i.e., hyperconnectivity) are influenced by their behavioral and social interactions. The aim of this study was to compare the performance of different hyper-connectivity measures using (i) simulated data, where the degree of coupling could be systematically manipulated, and (ii) individually recorded human EEG combined into pseudo-pairs of participants where no hyper-connections could exist. With simulated data we found that each of the most widely used measures of hyperconnectivity were biased and detected hyper-connections where none existed. With pseudo-pairs of human data we found spurious hyper-connections that arose because there were genuine similarities between the EEG recorded from different people independently but under the same experimental conditions. Specifically, there were systematic differences between experimental conditions in terms of the rhythmicity of the EEG that were common across participants. As any imbalance between experimental conditions in terms of stimulus presentation or movement may affect the rhythmicity of the EEG, this problem could apply in many hyperscanning contexts. Furthermore, as these spurious hyper-connections reflected real similarities between the EEGs, they were not Type-1 errors that could be overcome by some appropriate statistical control. However, some measures that have not previously been used in hyperconnectivity studies, notably the circular correlation co-efficient (CCorr), were less susceptible to detecting spurious hyper-connections of this type. The reason for this advantage in performance is discussed and the use of the CCorr as an alternative measure of hyperconnectivity is advocated.
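The circular correlation coefficient advocated above has a standard estimator (the Fisher-Lee form); a minimal sketch with our own variable names, not necessarily the exact estimator used in the study, is:

```python
import numpy as np

def circ_corr(alpha, beta):
    """Circular correlation coefficient between two arrays of phase
    angles (radians): sine deviations are taken about the circular
    mean of each series, then correlated."""
    a0 = np.angle(np.exp(1j * alpha).mean())   # circular mean of alpha
    b0 = np.angle(np.exp(1j * beta).mean())    # circular mean of beta
    sa, sb = np.sin(alpha - a0), np.sin(beta - b0)
    return (sa * sb).sum() / np.sqrt((sa**2).sum() * (sb**2).sum())

rng = np.random.default_rng(2)
phases = rng.uniform(-np.pi, np.pi, 500)
perfect = circ_corr(phases, phases)                          # identical series
unrelated = circ_corr(phases, rng.uniform(-np.pi, np.pi, 500))
```

Identical phase series give a coefficient of 1, while independent series give a value near 0, which is the behavior that makes the measure less prone to spurious hyper-connections.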
On the estimation of phase synchronization, spurious synchronization and filtering
NASA Astrophysics Data System (ADS)
Rios Herrera, Wady A.; Escalona, Joaquín; Rivera López, Daniel; Müller, Markus F.
2016-12-01
Phase synchronization, viz., the adjustment of instantaneous frequencies of two interacting self-sustained nonlinear oscillators, is frequently used for the detection of a possible interrelationship between empirical data recordings. In this context, the proper estimation of the instantaneous phase from a time series is a crucial aspect. The probability that numerical estimates provide a physically relevant meaning depends sensitively on the shape of the signal's power spectral density. For this purpose, the power spectrum should be narrow-banded, possessing only one prominent peak [M. Chavez et al., J. Neurosci. Methods 154, 149 (2006)]. If this condition is not fulfilled, band-pass filtering seems to be the adequate technique for pre-processing data for a subsequent synchronization analysis. However, it was reported that band-pass filtering might induce spurious synchronization [L. Xu et al., Phys. Rev. E 73, 065201(R) (2006); J. Sun et al., Phys. Rev. E 77, 046213 (2008); and J. Wang and Z. Liu, EPL 102, 10003 (2013)], a statement that, without further specification, casts uncertainty over all measures that aim to quantify phase synchronization of broadband field data. We show, using signals derived from different test frameworks, that appropriate filtering does not induce spurious synchronization. Instead, filtering in the time domain tends to wash out existing phase interrelations between signals. Furthermore, we show that measures derived for the estimation of phase synchronization, like the mean phase coherence, are also useful for the detection of interrelations between time series that are not necessarily derived from coupled self-sustained nonlinear oscillators.
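The mean phase coherence mentioned at the end can be sketched directly on instantaneous-phase series (in practice the phases would first be extracted from band-passed data, e.g. via the Hilbert transform; the signal parameters below are purely illustrative):

```python
import numpy as np

def mean_phase_coherence(phi_x, phi_y):
    """Mean phase coherence R = |<exp(i*(phi_x - phi_y))>| for two
    instantaneous-phase series in radians. R = 1 means perfect phase
    locking; R near 0 means no consistent phase relation."""
    return np.abs(np.exp(1j * (phi_x - phi_y)).mean())

rng = np.random.default_rng(3)
t = np.linspace(0.0, 10.0, 2000)
# Two weakly noisy oscillators with a fixed phase lag of 0.5 rad.
phi_x = 2 * np.pi * 3 * t + 0.05 * rng.standard_normal(t.size)
phi_y = 2 * np.pi * 3 * t + 0.5 + 0.05 * rng.standard_normal(t.size)
locked = mean_phase_coherence(phi_x, phi_y)
unlocked = mean_phase_coherence(phi_x, rng.uniform(0, 2 * np.pi, t.size))
```

A constant phase lag still yields R close to 1, since the measure is sensitive to the stability of the phase difference, not its value.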
Duplex Interrogation by a Direct DNA Repair Protein in Search of Base Damage
Yi, Chengqi; Chen, Baoen; Qi, Bo; Zhang, Wen; Jia, Guifang; Zhang, Liang; Li, Charles J.; Dinner, Aaron R.; Yang, Cai-Guang; He, Chuan
2012-01-01
ALKBH2 is a direct DNA repair dioxygenase guarding the mammalian genome against N1-methyladenine, N3-methylcytosine, and 1,N6-ethenoadenine damage. A prerequisite for repair is to identify these lesions in the genome. Here we present crystal structures of ALKBH2 bound to different duplex DNAs. Together with computational and biochemical analyses, our results suggest that DNA interrogation by ALKBH2 displays two novel features: i) ALKBH2 probes base-pair stability and detects base pairs with reduced stability; ii) ALKBH2 neither has nor needs a “damage-checking site”, a feature that is critical for preventing spurious base cleavage in several glycosylases. The demethylation mechanism of ALKBH2 ensures that only cognate lesions are oxidized and reversed to normal bases, and that a flipped, non-substrate base remains intact in the active site. Overall, the combination of duplex interrogation and oxidation chemistry allows ALKBH2 to detect and process diverse lesions efficiently and correctly. PMID:22659876
A monolithic Lagrangian approach for fluid-structure interaction problems
NASA Astrophysics Data System (ADS)
Ryzhakov, P. B.; Rossi, R.; Idelsohn, S. R.; Oñate, E.
2010-11-01
The current work presents a monolithic method for the solution of fluid-structure interaction problems involving flexible structures and free-surface flows. The technique presented is based upon the utilization of a Lagrangian description for both the fluid and the structure. A linear displacement-pressure interpolation pair is used for the fluid, whereas the structure utilizes a standard displacement-based formulation. A slight fluid compressibility is assumed, which makes it possible to relate the mechanical pressure to the local volume variation. The method described features a global pressure condensation which in turn enables the definition of a purely displacement-based linear system of equations. A matrix-free technique is used for the solution of this linear system, leading to an efficient implementation. The result is a robust method that can deal with FSI problems involving arbitrary variations in the shape of the fluid domain. The method is completely free of spurious added-mass effects.
Optimizing the specificity of nucleic acid hybridization.
Zhang, David Yu; Chen, Sherry Xi; Yin, Peng
2012-01-22
The specific hybridization of complementary sequences is an essential property of nucleic acids, enabling diverse biological and biotechnological reactions and functions. However, the specificity of nucleic acid hybridization is compromised for long strands, except near the melting temperature. Here, we analytically derived the thermodynamic properties of a hybridization probe that would enable near-optimal single-base discrimination and perform robustly across diverse temperature, salt and concentration conditions. We rationally designed 'toehold exchange' probes that approximate these properties, and comprehensively tested them against five different DNA targets and 55 spurious analogues with energetically representative single-base changes (replacements, deletions and insertions). These probes produced discrimination factors between 3 and 100+ (median, 26). Without retuning, our probes function robustly from 10 °C to 37 °C, from 1 mM Mg(2+) to 47 mM Mg(2+), and with nucleic acid concentrations from 1 nM to 5 µM. Experiments with RNA also showed effective single-base change discrimination.
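The intuition behind near-optimal single-base discrimination can be illustrated with a simplified two-state hybridization model; the unit effective concentration and the 2 kcal/mol mismatch penalty below are our illustrative assumptions, not the paper's numbers:

```python
import math

R, T = 1.987e-3, 298.0               # gas constant (kcal/mol/K), 25 C

def frac_bound(dG):
    """Equilibrium bound fraction in a two-state model with unit
    effective concentration: K = exp(-dG/RT), f = K/(1+K)."""
    K = math.exp(-dG / (R * T))
    return K / (1.0 + K)

def discrimination(dG, ddG=2.0):
    """Yield ratio of the correct target vs. a single-base change
    that destabilizes binding by ddG kcal/mol."""
    return frac_bound(dG) / frac_bound(dG + ddG)

tuned = discrimination(0.0)          # probe tuned so dG ~ 0
overstable = discrimination(-10.0)   # strongly bound probe: both yields ~ 1
```

In this toy model a probe tuned to near-zero standard free energy discriminates a single-base change by an order of magnitude, while an over-stabilized probe binds both sequences almost completely, consistent with the design principle described above.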
NASA Astrophysics Data System (ADS)
Miyake, Y.; Cully, C. M.; Usui, H.; Nakashima, H.
2013-12-01
In order to increase the accuracy and reliability of in situ measurements made by scientific spacecraft, it is imperative to develop a comprehensive understanding of spacecraft-plasma interactions. In space environments, not only spacecraft charging but also surrounding plasma disturbances, such as those caused by wake formation, may interfere directly with in situ measurements. Self-consistent solutions of such phenomena are necessary to assess their effects on scientific spacecraft systems. As our recent activity, we work on the modeling and simulation of the Cluster double-probe instrument in tenuous and cold streaming plasmas [1]. Double-probe electric field sensors are often deployed using wire booms with radii much less than typical Debye lengths of magnetospheric plasmas (millimeters compared to tens of meters). However, in the tenuous and cold streaming plasmas seen in the polar cap and lobe regions, the wire booms have a high positive potential due to photoelectron emission and can strongly scatter approaching ions. Consequently, an electrostatic wake formed behind the spacecraft is further enhanced by the presence of the wire booms. We reproduce this process for the case of the Cluster satellite by performing plasma particle-in-cell (PIC) simulations [2], which include the effects of both the spacecraft body and the wire booms in a simultaneous manner, on modern supercomputers. The simulations reveal that the effective thickness of the booms for the Cluster Electric Field and Wave (EFW) instrument is magnified from its real thickness (2.2 millimeters) to several meters when the spacecraft potential is at 30-40 volts. Such booms enhance the wake electric field magnitude by a factor of about 2, depending on the spacecraft potential, and play a principal role in explaining the in situ Cluster EFW data showing sinusoidal spurious electric fields with amplitudes of about 10 mV/m. The boom effects are quantified by comparing PIC simulations with and without wire booms. 
The paper also reports some recent progress of ongoing PIC simulation research that focuses on spurious electric field generation in subsonic ion flows. Our preliminary simulation results revealed that: (1) there is no apparent wake signature behind the spacecraft in such a condition, but (2) a spurious electric field with an amplitude over 1 mV/m is observed in the direction of the flow vector. The observed field amplitude is sometimes comparable to the convection electric field (a few mV/m) associated with the flow. Our analysis also confirmed that the spurious field is caused by a weakly asymmetric potential pattern created by the ion flow. We will present a parametric study of such spurious fields for various conditions of plasma flows. [References] [1] Miyake, Y., C. M. Cully, H. Usui, and H. Nakashima (2013), Plasma particle simulations of wake formation behind a spacecraft with thin wire booms, submitted to J. Geophys. Res. [2] Miyake, Y., and H. Usui (2009), New electromagnetic particle simulation code for the analysis of spacecraft-plasma interactions, Phys. Plasmas, 16, 062904, doi:10.1063/1.3147922.
Optimized norm-conserving Hartree-Fock pseudopotentials
NASA Astrophysics Data System (ADS)
Walter, Eric J.; Al-Saidi, Wissam A.
2006-03-01
We report soft Hartree-Fock-based pseudopotentials obtained using the optimized pseudopotential method. The spurious long-range tail due to the nonlocality of the exchange potential is removed using a self-consistent damping mechanism, as employed in exact-exchange and recent Hartree-Fock pseudopotentials. The binding energies of several dimers computed using these pseudopotentials within a planewave Hartree-Fock code show good agreement with all-electron results. A. M. Rappe, K. M. Rabe, E. Kaxiras, and J. D. Joannopoulos, Phys. Rev. B 41, 1227 (1990). E. Engel, A. Höck, R. N. Schmid, R. M. Dreizler, and N. Chetty, Phys. Rev. B 64, 125111 (2001). J. R. Trail and R. J. Needs, J. Chem. Phys. 122, 014112 (2005).
Highly linear ring modulator from hybrid silicon and lithium niobate.
Chen, Li; Chen, Jiahong; Nagy, Jonathan; Reano, Ronald M
2015-05-18
We present a highly linear ring modulator from the bonding of ion-sliced x-cut lithium niobate onto a silicon ring resonator. The third-order intermodulation distortion spurious-free dynamic range is measured to be 98.1 dB Hz(2/3) and 87.6 dB Hz(2/3) at 1 GHz and 10 GHz, respectively. The linearity is comparable to a reference lithium niobate Mach-Zehnder interferometer modulator operating at quadrature, and over an order of magnitude greater than that of silicon ring modulators based on the plasma dispersion effect. Compact modulators for analog optical links that exploit the second-order susceptibility of lithium niobate on the silicon platform are envisioned.
NASA Astrophysics Data System (ADS)
Ngamga, Eulalie Joelle; Bialonski, Stephan; Marwan, Norbert; Kurths, Jürgen; Geier, Christian; Lehnertz, Klaus
2016-04-01
We investigate the suitability of selected measures of complexity based on recurrence quantification analysis and recurrence networks for an identification of pre-seizure states in multi-day, multi-channel, invasive electroencephalographic recordings from five epilepsy patients. We employ several statistical techniques to avoid spurious findings due to various influencing factors and due to multiple comparisons and observe precursory structures in three patients. Our findings indicate a high congruence among measures in identifying seizure precursors and emphasize the current notion of seizure generation in large-scale epileptic networks. A final judgment of the suitability for field studies, however, requires evaluation on a larger database.
NASA Astrophysics Data System (ADS)
Haghani Hassan Abadi, Reza; Fakhari, Abbas; Rahimian, Mohammad Hassan
2018-03-01
In this paper, we propose a multiphase lattice Boltzmann model for numerical simulation of ternary flows at high density and viscosity ratios free from spurious velocities. The proposed scheme, which is based on the phase-field modeling, employs the Cahn-Hilliard theory to track the interfaces among three different fluid components. Several benchmarks, such as the spreading of a liquid lens, binary droplets, and head-on collision of two droplets in binary- and ternary-fluid systems, are conducted to assess the reliability and accuracy of the model. The proposed model can successfully simulate both partial and total spreadings while reducing the parasitic currents to the machine precision.
A STUDY OF THE INDIGOGENIC PRINCIPLE AND IN VITRO MACROPHAGE DIFFERENTIATION
and beta-glucuronidase activities. Moreover, there was a progressive increase in the densities of enzyme reactive centers. Indigo reaction product was...not observed over nuclei; lipid droplets and cell background were free from spurious precipitations. Both galactosidase and glucuronidase were
Broadband active electrically small superconductor antennas
NASA Astrophysics Data System (ADS)
Kornev, V. K.; Kolotinskiy, N. V.; Sharafiev, A. V.; Soloviev, I. I.; Mukhanov, O. A.
2017-10-01
A new type of broadband active electrically small antenna (ESA) based on superconducting quantum arrays (SQAs) has been proposed and developed. These antennas are capable of providing both sensing and amplification of broadband electromagnetic signals with a very high spurious-free dynamic range (SFDR)—up to 100 dB (and even more)—with high sensitivity. The frequency band can range up to tens of gigahertz, depending on the Josephson junction characteristic frequency set by fabrication. In this paper we review theoretical and experimental studies of SQAs and SQA-based antenna prototypes of both transformer and transformer-less types. The ESA prototypes evaluated were fabricated using a standard Nb process with a critical current density of 4.5 kA cm⁻². Measured device characteristics, design issues and comparative analysis of various ESA types, as well as requirements for interfaces, are reviewed and discussed.
Wilson, Jesse W.; Park, Jong Kang; Warren, Warren S.
2015-01-01
The lock-in amplifier is a critical component in many different types of experiments, because of its ability to reduce spurious or environmental noise components by restricting detection to a single frequency and phase. One example application is pump-probe microscopy, a multiphoton technique that leverages excited-state dynamics for imaging contrast. With this application in mind, we present here the design and implementation of a high-speed lock-in amplifier on the field-programmable gate array (FPGA) coprocessor of a data acquisition board. The most important advantage is the inherent ability to filter signals based on more complex modulation patterns. As an example, we use the flexibility of the FPGA approach to enable a novel pump-probe detection scheme based on spread-spectrum communications techniques. PMID:25832238
Beam based measurement of beam position monitor electrode gains
NASA Astrophysics Data System (ADS)
Rubin, D. L.; Billing, M.; Meller, R.; Palmer, M.; Rendina, M.; Rider, N.; Sagan, D.; Shanks, J.; Strohman, C.
2010-09-01
Low emittance tuning at the Cornell Electron Storage Ring (CESR) test accelerator depends on precision measurement of vertical dispersion and transverse coupling. The CESR beam position monitors (BPMs) consist of four button electrodes, instrumented with electronics that allow acquisition of turn-by-turn data. The response to the beam will vary among the four electrodes due to differences in electronic gain and/or misalignment. This variation in the response of the BPM electrodes will couple real horizontal offset to apparent vertical position, and introduce spurious measurements of coupling and vertical dispersion. To alleviate this systematic effect, a beam based technique to measure the relative response of the four electrodes has been developed. With typical CESR parameters, simulations show that turn-by-turn BPM data can be used to determine electrode gains to within ~0.1%.
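How an electrode gain error couples a real horizontal offset into an apparent vertical position can be sketched with the usual difference-over-sum position estimate; the button layout, labels, and gain numbers below are our illustrative assumptions, not the CESR instrumentation:

```python
# Difference-over-sum beam position from four button signals.
# Assumed layout: A top-left, B top-right, C bottom-right, D bottom-left.
def positions(A, B, C, D):
    s = A + B + C + D
    x = ((B + C) - (A + D)) / s   # horizontal, arbitrary units
    y = ((A + B) - (C + D)) / s   # vertical, arbitrary units
    return x, y

# A purely horizontal offset gives y = 0 with ideal electrodes...
x0, y0 = positions(0.9, 1.1, 1.1, 0.9)

# ...but a 2% gain error on one button leaks the horizontal offset
# into a spurious vertical reading -- the systematic effect the
# beam-based calibration is designed to remove.
x1, y1 = positions(0.9 * 1.02, 1.1, 1.1, 0.9)
```

With the hypothetical 2% gain error the apparent vertical position shifts by a few tenths of a percent of the aperture even though the beam never moved vertically, which is why sub-0.1% gain knowledge matters for dispersion and coupling measurements.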
An empirical study using permutation-based resampling in meta-regression
2012-01-01
Background In meta-regression, as the number of trials in the analyses decreases, the risk of false positives or false negatives increases. This is partly due to the assumption of normality that may not hold in small samples. Creation of a distribution from the observed trials using permutation methods to calculate P values may allow for fewer spurious findings. Permutation has not been empirically tested in meta-regression. The objective of this study was to perform an empirical investigation to explore the differences in results for meta-analyses on a small number of trials using standard large sample approaches versus permutation-based methods for meta-regression. Methods We isolated a sample of randomized controlled clinical trials (RCTs) for interventions that have a small number of trials (herbal medicine trials). Trials were then grouped by herbal species and condition and assessed for methodological quality using the Jadad scale, and data were extracted for each outcome. Finally, we performed meta-analyses on the primary outcome of each group of trials and meta-regression for methodological quality subgroups within each meta-analysis. We used large sample methods and permutation methods in our meta-regression modeling. We then compared final models and final P values between methods. Results We collected 110 trials across 5 intervention/outcome pairings and 5 to 10 trials per covariate. When applying large sample methods and permutation-based methods in our backwards stepwise regression, the covariates in the final models were identical in all cases. The P values for the covariates in the final model were larger in 78% (7/9) of the cases for permutation and identical for 22% (2/9) of the cases. Conclusions We present empirical evidence that permutation-based resampling may not change final models when using backwards stepwise regression, but may increase P values in meta-regression of multiple covariates for a relatively small number of trials. PMID:22587815
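The permutation idea tested above can be sketched for a single-covariate association (unweighted, unlike real meta-regression, which weights trials by precision; the function name, counts, and seeds are illustrative):

```python
import numpy as np

def permutation_pvalue(x, y, n_perm=999, seed=0):
    """Two-sided permutation p-value for association between x and y:
    shuffle y relative to x, recompute |correlation| each time, and
    count shuffles at least as extreme as the observed statistic."""
    rng = np.random.default_rng(seed)
    obs = abs(np.corrcoef(x, y)[0, 1])
    hits = sum(
        abs(np.corrcoef(x, rng.permutation(y))[0, 1]) >= obs
        for _ in range(n_perm)
    )
    return (hits + 1) / (n_perm + 1)   # add-one rule avoids a p-value of zero

x = np.arange(10, dtype=float)                             # e.g. a quality score
y = 2 * x + np.random.default_rng(1).standard_normal(10)   # strongly related outcome
p_strong = permutation_pvalue(x, y)
```

Because the null distribution is built from the observed trials themselves, no normality assumption is needed, which is the appeal in small meta-analyses; the cost, as the study found, is typically somewhat larger P values.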
Shallow marine cloud topped boundary layer in atmospheric models
NASA Astrophysics Data System (ADS)
Janjic, Zavisa
2017-04-01
A common problem in many atmospheric models is excessive expansion over cold water of a shallow marine planetary boundary layer (PBL) topped by a thin cloud layer. This phenomenon is often accompanied by spurious light precipitation. The "Cloud Top Entrainment Instability" (CTEI) was proposed as an explanation of the mechanism controlling this process in reality, thereby preventing the spurious enlargement of the cloudy area and widely spread light precipitation observed in the models. A key element of this hypothesis is evaporative cooling at the PBL top. However, the CTEI hypothesis remains controversial. For example, a recent direct simulation experiment indicated that the evaporative cooling couldn't explain the break-up of the cloudiness as hypothesized by the CTEI. Here, it is shown that the cloud break-up can be achieved in numerical models by a further modification of the nonsingular implementation of the Mellor-Yamada Level 2.5 turbulence closure model (MYJ) developed at the National Centers for Environmental Prediction (NCEP), Washington. Namely, the impact of moist convective instability is included in the turbulent energy production/dissipation equation if (a) the stratification is stable, (b) the lifting condensation level (LCL) for a particle starting at a model level is below the next upper model level, and (c) there is enough turbulent kinetic energy so that, due to random vertical turbulent motions, a particle starting from a model level can reach its LCL. The criterion (c) should be sufficiently restrictive because otherwise the cloud cover can be completely removed. A real data example will be shown demonstrating the ability of the method to break the spurious cloud cover during the day, but also to allow its recovery overnight.
Modeling of Shallow Marine Cloud Topped Boundary Layer
NASA Astrophysics Data System (ADS)
Janjic, Z.
2017-12-01
A common problem in many atmospheric models is excessive expansion over cold water of a shallow marine planetary boundary layer (PBL) topped by a thin cloud layer. This phenomenon is often accompanied by spurious light precipitation. The "Cloud Top Entrainment Instability" (CTEI) was proposed as an explanation of the mechanism controlling this process and thus preventing the spurious enlargement of the cloudy area and widely spread light precipitation observed in the models. A key element of this hypothesis is evaporative cooling at the PBL top. However, the CTEI hypothesis remains controversial. For example, a recent direct simulation experiment indicated that the evaporative cooling couldn't explain the break-up of the cloudiness as hypothesized by the CTEI. Here, it is shown that the cloud break-up can be achieved in numerical models by a further modification of the nonsingular implementation of the Mellor-Yamada Level 2.5 turbulence closure model (MYJ) developed at the National Centers for Environmental Prediction (NCEP), Washington. Namely, the impact of moist convective instability is included in the turbulent energy production/dissipation equation if (a) the stratification is stable, (b) the lifting condensation level (LCL) for a particle starting at a model level is below the next upper model level, and (c) there is enough turbulent kinetic energy so that, due to random vertical turbulent motions, a particle starting from a model level can reach its LCL. The criterion (c) should be sufficiently restrictive because otherwise the cloud cover can be completely removed. A real data example will be shown demonstrating the ability of the method to break the spurious cloud cover during the day, but also to allow its recovery overnight.
Methodological Caveats in the Detection of Coordinated Replay between Place Cells and Grid Cells.
Trimper, John B; Trettel, Sean G; Hwaun, Ernie; Colgin, Laura Lee
2017-01-01
At rest, hippocampal "place cells," neurons with receptive fields corresponding to specific spatial locations, reactivate in a manner that reflects recently traveled trajectories. These "replay" events have been proposed as a mechanism underlying memory consolidation, or the transfer of a memory representation from the hippocampus to neocortical regions associated with the original sensory experience. Accordingly, it has been hypothesized that hippocampal replay of a particular experience should be accompanied by simultaneous reactivation of corresponding representations in the neocortex and in the entorhinal cortex, the primary interface between the hippocampus and the neocortex. Recent studies have reported that coordinated replay may occur between hippocampal place cells and medial entorhinal cortex grid cells, cells with multiple spatial receptive fields. Assessing replay in grid cells is problematic, however, as the cells exhibit regularly spaced spatial receptive fields in all environments and, therefore, coordinated replay between place cells and grid cells may be detected by chance. In the present report, we adapted analytical approaches utilized in recent studies of grid cell and place cell replay to determine the extent to which coordinated replay is spuriously detected between grid cells and place cells recorded from separate rats. For a subset of the employed analytical methods, coordinated replay was detected spuriously in a significant proportion of cases in which place cell replay events were randomly matched with grid cell firing epochs of equal duration. More rigorous replay evaluation procedures and minimum spike count requirements greatly reduced the amount of spurious findings. These results provide insights into aspects of place cell and grid cell activity during rest that contribute to false detection of coordinated replay. 
The results further emphasize the need for careful controls and rigorous methods when testing the hypothesis that place cells and grid cells exhibit coordinated replay.
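The controls described above can be sketched in a few lines. This is an illustrative toy, not the authors' analysis pipeline: the rank-order `replay_score`, the cell-identity shuffle in `replay_p_value`, and the minimum spike count of 5 are all assumed choices.

```python
import numpy as np

def _ranks(a):
    # rank positions of the entries of a (no tie handling, for brevity)
    r = np.empty(len(a))
    r[np.argsort(a)] = np.arange(len(a))
    return r

def replay_score(spike_times, field_positions):
    # rank-order (Spearman) correlation between the order in which cells
    # fire during a candidate event and the order of their place fields
    return np.corrcoef(_ranks(spike_times), _ranks(field_positions))[0, 1]

def replay_p_value(spike_times, field_positions, n_shuffle=1000,
                   min_spikes=5, seed=0):
    # reject events with too few spikes, then compare the observed score
    # against a cell-identity shuffle null distribution
    if len(spike_times) < min_spikes:
        return None
    rng = np.random.default_rng(seed)
    obs = abs(replay_score(spike_times, field_positions))
    null = np.array([abs(replay_score(spike_times,
                                      rng.permutation(field_positions)))
                     for _ in range(n_shuffle)])
    return float(np.mean(null >= obs))

# a perfectly ordered candidate event should survive the shuffle control
p = replay_p_value(np.arange(8.0), np.arange(8.0))
```

The minimum spike count matters because very short events can reach high rank-order correlations by chance, which is one of the failure modes the abstract describes.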
On the nature and correction of the spurious S-wise spiral galaxy winding bias in Galaxy Zoo 1
NASA Astrophysics Data System (ADS)
Hayes, Wayne B.; Davis, Darren; Silva, Pedro
2017-04-01
The Galaxy Zoo 1 catalogue displays a bias towards the S-wise winding direction in spiral galaxies, which has yet to be explained. The lack of an explanation confounds our attempts to verify the Cosmological Principle, and has spurred some debate as to whether a bias exists in the real Universe. The bias manifests not only in the obvious case of trying to decide if the Universe as a whole has a winding bias, but also in the more insidious case of selecting which galaxies to include in a winding direction survey. While the former bias has been accounted for in a previous image-mirroring study, the latter has not. Furthermore, the bias has never been corrected in the GZ1 catalogue, as only a small sample of the GZ1 catalogue was re-examined during the mirror study. We show that the existing bias is a human selection effect rather than a human chirality bias. In effect, the excess S-wise votes are spuriously 'stolen' from the elliptical and edge-on-disc categories, not the Z-wise category. Thus, when selecting a set of spiral galaxies by imposing a threshold T so that max (PS, PZ) > T or PS + PZ > T, we spuriously select more S-wise than Z-wise galaxies. We show that when a provably unbiased machine selects which galaxies are spirals independent of their chirality, the S-wise surplus vanishes, even if humans still determine the chirality. Thus, when viewed across the entire GZ1 sample (and by implication, the Sloan catalogue), the winding direction of arms in spiral galaxies as viewed from Earth is consistent with the flip of a fair coin.
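The 'stolen votes' selection effect can be demonstrated with a toy Monte Carlo. All numbers here (vote fractions, the S-wise excess `eps`, the threshold `T`) are invented for illustration; this is not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
is_spiral = rng.random(n) < 0.3          # true spirals, chirality a fair coin
s_wise = rng.random(n) < 0.5

ps = np.empty(n)
pz = np.empty(n)
ps[is_spiral] = np.where(s_wise[is_spiral], 0.8, 0.1)
pz[is_spiral] = np.where(s_wise[is_spiral], 0.1, 0.8)

# ellipticals: low winding-vote fractions, but a small S-wise excess
# "stolen" from the elliptical vote category
eps = 0.02
ell = ~is_spiral
ps[ell] = rng.normal(0.15 + eps, 0.05, ell.sum())
pz[ell] = rng.normal(0.15, 0.05, ell.sum())

# select "spirals" by thresholding the winding votes
T = 0.25
selected = np.maximum(ps, pz) > T
n_s = int(np.sum(selected & (ps > pz)))
n_z = int(np.sum(selected & (pz > ps)))
# n_s exceeds n_z even though the true spirals are balanced in
# expectation: the surplus comes entirely from misselected ellipticals
```

A chirality-blind selection of which galaxies count as spirals removes the surplus, which is the machine-selection remedy the abstract describes.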
Reducing orbital eccentricity of precessing black-hole binaries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buonanno, Alessandra; Taracchini, Andrea; Kidder, Lawrence E.
2011-05-15
Building initial conditions for generic binary black-hole evolutions which are not affected by initial spurious eccentricity remains a challenge for numerical-relativity simulations. This problem can be overcome by applying an eccentricity-removal procedure which consists of evolving the binary black hole for a couple of orbits, estimating the resulting eccentricity, and then restarting the simulation with corrected initial conditions. The presence of spins can complicate this procedure. As predicted by post-Newtonian theory, spin-spin interactions and precession prevent the binary from moving along an adiabatic sequence of spherical orbits, inducing oscillations in the radial separation and in the orbital frequency. For single-spin binary black holes these oscillations are a direct consequence of monopole-quadrupole interactions. However, spin-induced oscillations occur at approximately twice the orbital frequency, and therefore can be distinguished and disentangled from the initial spurious eccentricity which occurs at approximately the orbital frequency. Taking this into account, we develop a new eccentricity-removal procedure based on the derivative of the orbital frequency and find that it is rather successful in reducing the eccentricity measured in the orbital frequency to values less than 10^-4 when moderate spins are present. We test this new procedure using numerical-relativity simulations of binary black holes with mass ratios 1.5 and 3, spin magnitude 0.5, and various spin orientations. The numerical simulations exhibit spin-induced oscillations in the dynamics at approximately twice the orbital frequency. Oscillations of similar frequency are also visible in the gravitational-wave phase and frequency of the dominant l=2, m=2 mode.
A balanced Kalman filter ocean data assimilation system with application to the South Australian Sea
NASA Astrophysics Data System (ADS)
Li, Yi; Toumi, Ralf
2017-08-01
In this paper, an Ensemble Kalman Filter (EnKF) based regional ocean data assimilation system has been developed and applied to the South Australian Sea. This system consists of the data assimilation algorithm provided by the NCAR Data Assimilation Research Testbed (DART) and the Regional Ocean Modelling System (ROMS). We describe the first implementation of the physical balance operator (temperature-salinity, hydrostatic and geostrophic balance) in DART, to reduce the spurious waves which may be introduced during the data assimilation process. The effect of the balance operator is validated both in an idealised shallow-water model and in a realistic ROMS case study. In the shallow-water model, the geostrophic balance operator eliminates spurious ageostrophic waves and produces a better sea surface height (SSH) and velocity analysis and forecast. Its impact increases as the sea surface height and wind stress increase. In the real case, satellite-observed sea surface temperature (SST) and SSH are assimilated in the South Australian Sea with a 50-member ensemble using the Ensemble Adjustment Kalman Filter (EAKF). Assimilating SSH and SST enhances the estimation of SSH and SST, respectively, over the entire domain. Assimilation with the balance operator produces a more realistic simulation of surface currents and of the subsurface temperature profile. The largest improvement is obtained when only SSH is assimilated with the balance operator. A case study with a storm suggests that the benefit of the balance operator is of particular importance under high wind stress conditions. Implementing the balance operator could be of general benefit to ocean data assimilation systems.
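For readers unfamiliar with the EAKF, its scalar update can be sketched as follows. This is the textbook deterministic square-root update for a single directly observed variable, not the DART implementation (which additionally handles observation operators, localization, and inflation); the prior statistics and observation values are illustrative.

```python
import numpy as np

def eakf_update(ensemble, obs, obs_var):
    """Ensemble Adjustment Kalman Filter update for one directly observed
    scalar: shift the ensemble mean to the posterior mean and contract the
    spread deterministically (no perturbed observations)."""
    prior_mean = np.mean(ensemble)
    prior_var = np.var(ensemble, ddof=1)
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    post_mean = post_var * (prior_mean / prior_var + obs / obs_var)
    contraction = np.sqrt(post_var / prior_var)
    return post_mean + contraction * (ensemble - prior_mean)

# a 50-member prior ensemble, matching the ensemble size in the abstract
rng = np.random.default_rng(1)
prior = rng.normal(20.0, 1.0, 50)   # e.g. SST in deg C (illustrative)
analysis = eakf_update(prior, obs=21.0, obs_var=0.5)
```

The analysis mean lies between the prior mean and the observation, and the ensemble spread shrinks by exactly the Kalman factor; this deterministic contraction is what distinguishes the EAKF from perturbed-observation EnKF variants.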
Hospital comorbidity bias and the concept of schizophrenia.
Bak, Maarten; Drukker, Marjan; van Os, Jim; Delespaul, Philippe
2005-10-01
The comorbidity bias predicts that if disease definition is based on observations of patients in the hospital, spurious comorbidity of psychopathological dimensions that increase the probability of hospital admission will be included in the disease concept, whereas comorbid dimensions that are not associated with admission will be excluded. The direction of any dimensional comorbidity bias in psychotic illness was assessed in a longitudinal analysis of the psychopathology of patients assessed both inside and outside the hospital. Four hundred and eighty patients with broadly defined psychotic disorders were assessed between one and nine times (median two times) over a 5-year period with, amongst others, the Brief Psychiatric Rating Scale. Dimensional comorbidities between positive symptoms, negative symptoms, depression/anxiety, and manic excitement were compared, in addition to their associations with current and future admission status. Higher levels of psychopathology in all symptom domains were associated with both current and future hospital admissions. Associations between the positive, negative, and manic symptom domains were higher for patients in the hospital than for patients outside the hospital, in particular, between positive symptoms and manic excitement (beta=0.28, p<0.001). However, associations between depression and other symptom domains were higher in out-patients as compared to in-patients (positive symptoms and depression, beta=-0.26; p<0.002). The current analyses suggest that, to the extent that disease concepts of psychosis do not take into account effects of dimensional comorbidity biases occasioned by differential psychopathology according to treatment setting, "florid" psychotic psychopathology may be overrepresented, whereas depressive symptoms may be spuriously excluded.
Magnetic fields in central stars of planetary nebulae?
NASA Astrophysics Data System (ADS)
Jordan, S.; Bagnulo, S.; Werner, K.; O'Toole, S. J.
2012-06-01
Context. Most planetary nebulae have bipolar or other non-spherically symmetric shapes. Magnetic fields in the central star may be responsible for this lack of symmetry, but observational studies published to date have reported contradictory results. Aims: We search for correlations between a magnetic field and departures from the spherical geometry of the envelopes of planetary nebulae. Methods: We determine the magnetic fields from spectropolarimetric observations of ten central stars of planetary nebulae. The results of the analysis of the observations of four stars were previously presented and discussed in the literature, while the observations of six stars, plus additional measurements of a star previously observed, are presented here for the first time. Results: All our determinations of the magnetic field in central stars of planetary nebulae are consistent with null results. Our field measurements have a typical error bar of 150-300 G. Previous spurious field detections using data acquired with FORS1 (FOcal Reducer and low dispersion Spectrograph) of the Unit Telescope 1 (UT1) of the Very Large Telescope (VLT) were probably due to the use of different wavelength calibration solutions for frames obtained at different position angles of the retarder waveplate. Conclusions: There is currently no observational evidence of magnetic fields with a strength of the order of hundreds of gauss or higher in the central stars of planetary nebulae. Based on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere, Chile, under programme ID 072.D-0089 (PI = Jordan) and 075.D-0289 (PI = Jordan).
Ideal evolution of magnetohydrodynamic turbulence when imposing Taylor-Green symmetries.
Brachet, M E; Bustamante, M D; Krstulovic, G; Mininni, P D; Pouquet, A; Rosenberg, D
2013-01-01
We investigate the ideal and incompressible magnetohydrodynamic (MHD) equations in three space dimensions for the development of potentially singular structures. The methodology consists in implementing the fourfold symmetries of the Taylor-Green vortex generalized to MHD, leading to substantial computer time and memory savings at a given resolution; we also use a regridding method that allows for lower-resolution runs at early times, with no loss of spectral accuracy. One magnetic configuration is examined at an equivalent resolution of 6144^3 points and three different configurations on grids of 4096^3 points. At the highest resolution, two different current and vorticity sheet systems are found to collide, producing two successive accelerations in the development of small scales. At the latest time, a convergence of magnetic field lines to the location of maximum current is probably leading locally to a strong bending and directional variability of such lines. A novel analytical method, based on sharp analysis inequalities, is used to assess the validity of the finite-time singularity scenario. This method allows one to rule out spurious singularities by evaluating the rate at which the logarithmic decrement of the analyticity-strip method goes to zero. The result is that the finite-time singularity scenario cannot be ruled out, and the singularity time could be somewhere between t=2.33 and t=2.70. More robust conclusions will require higher resolution runs and grid-point interpolation measurements of maximum current and vorticity.
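The analyticity-strip diagnostic mentioned above fits the exponential tail of the energy spectrum, E(k) ≈ C k^(-n) exp(-2 δ k), and tracks the logarithmic decrement δ(t); a finite-time singularity is only admissible if δ reaches zero. A minimal sketch of that fit on a synthetic spectrum (the assumed spectral form and all numbers are illustrative):

```python
import numpy as np

def log_decrement(k, E):
    # linear least-squares fit of log E = log C - n log k - 2 delta k
    A = np.column_stack([np.ones_like(k), np.log(k), k])
    coef, *_ = np.linalg.lstsq(A, np.log(E), rcond=None)
    return -coef[2] / 2.0

# synthetic spectrum with known decrement delta = 0.05
k = np.arange(1.0, 101.0)
E = 3.0 * k ** -4.0 * np.exp(-2.0 * 0.05 * k)
delta = log_decrement(k, E)   # recovers 0.05
```

In practice δ is fitted at each output time and extrapolated; a spurious singularity is ruled out when δ(t) decays too slowly to vanish in finite time.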
In traditional watershed delineation and topographic modeling, surface depressions are generally treated as spurious features and simply removed from a digital elevation model (DEM) to enforce flow continuity of water across the topographic surface to the watershed outlets. In re...
Self-shielding printed circuit boards for high frequency amplifiers and transmitters
NASA Technical Reports Server (NTRS)
Galvin, D.
1969-01-01
Printed circuit boards retaining as much copper as possible provide electromagnetic shielding between stages of the high frequency amplifiers and transmitters. Oscillation is prevented, spurious output signals are reduced, and multiple stages are kept isolated from each other, both thermally and electrically.
Assessing Spurious Interaction Effects in Structural Equation Modeling
ERIC Educational Resources Information Center
Harring, Jeffrey R.; Weiss, Brandi A.; Li, Ming
2015-01-01
Several studies have stressed the importance of simultaneously estimating interaction and quadratic effects in multiple regression analyses, even if theory only suggests an interaction effect should be present. Specifically, past studies suggested that failing to simultaneously include quadratic effects when testing for interaction effects could…
NASA Astrophysics Data System (ADS)
Mahéo, Laurent; Grolleau, Vincent; Rio, Gérard
2009-11-01
To deal with dynamic and wave propagation problems, dissipative methods are often used to reduce the effects of the spurious oscillations induced by the spatial and time discretization procedures. Among the many dissipative methods available, the Tchamwa-Wielgosz (TW) explicit scheme is particularly useful because it damps out the spurious oscillations occurring in the highest frequency domain. The theoretical study performed here shows that the TW scheme is decentered to the right, and that the damping can be attributed to a nodal displacement perturbation. The FEM study carried out using instantaneous 1-D and 3-D compression loads shows that it is useful to display the damping versus the number of time steps in order to obtain a constant damping efficiency whatever the size of element used for the regular meshing. A study on the responses obtained with irregular meshes shows that the TW scheme is only slightly sensitive to the spatial discretization procedure used. To cite this article: L. Mahéo et al., C. R. Mecanique 337 (2009).
Evaluation of Mobile Phone Interference With Aircraft GPS Navigation Systems
NASA Technical Reports Server (NTRS)
Pace, Scott; Oria, A. J.; Guckian, Paul; Nguyen, Truong X.
2004-01-01
This report compiles and analyzes tests that were conducted to measure cell phone spurious emissions in the Global Positioning System (GPS) radio frequency band that could affect the navigation system of an aircraft. The cell phone in question had, as reported to the FAA (Federal Aviation Administration), caused interference to several GPS receivers on-board a small single engine aircraft despite being compliant with data filed at the time with the FCC by the manufacturer. NASA (National Aeronautics and Space Administration) and industry tests show that while there is an emission in the 1575 MHz GPS band due to a specific combination of amplifier output impedance and load impedance that induces instability in the power amplifier, these spurious emissions (i.e., not the intentional transmit signal) are similar to those measured on non-intentionally transmitting devices such as, for example, laptop computers. Additional testing on a wide sample of different commercial cell phones did not result in any emission in the 1575 MHz GPS Band above the noise floor of the measurement receiver.
Long-term behaviour and cross-correlation water quality analysis of the River Elbe, Germany.
Lehmann, A; Rode, M
2001-06-01
This study analyses weekly data samples from the river Elbe at Magdeburg between 1984 and 1996 to investigate the changes in metabolism and water quality in the river Elbe since the German reunification in 1990. Modelling water quality variables by autoregressive component models and ARIMA models reveals the improvement of water quality due to the reduction of waste water emissions since 1990. The models are used to determine the long-term and seasonal behaviour of important water quality variables. Organic and heavy metal pollution parameters showed a significant decrease since 1990, however, no significant change of chlorophyll-a as a measure for primary production could be found. A new procedure for testing the significance of a sample correlation coefficient is discussed, which is able to detect spurious sample correlation coefficients without making use of time-consuming prewhitening. The cross-correlation analysis is applied to hydrophysical, biological, and chemical water quality variables of the river Elbe since 1984. Special emphasis is laid on the detection of spurious sample correlation coefficients.
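One common way to flag spurious sample correlations between autocorrelated series without prewhitening is to deflate the nominal sample size by the product of the two series' autocorrelation functions (a Pyper-and-Peterman-style correction). The paper's actual test statistic is not reproduced in the abstract, so the sketch below is only indicative; the AR(1) series are synthetic.

```python
import numpy as np

def autocorr(x, k):
    # lag-k sample autocorrelation
    x = np.asarray(x, float) - np.mean(x)
    return float(np.sum(x[:-k] * x[k:]) / np.sum(x * x))

def effective_n(x, y, max_lag=None):
    # deflate the nominal sample size by the summed product of the two
    # series' autocorrelation functions
    n = len(x)
    max_lag = max_lag or n // 5
    s = sum(autocorr(x, k) * autocorr(y, k) for k in range(1, max_lag + 1))
    return n / (1.0 + 2.0 * s)

# two independent but strongly autocorrelated AR(1) series
rng = np.random.default_rng(3)
n = 500
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.9 * x[t - 1] + rng.standard_normal()
    y[t] = 0.9 * y[t - 1] + rng.standard_normal()
n_eff = effective_n(x, y)   # far smaller than the nominal n = 500
```

Testing a sample correlation against `n_eff` rather than `n` degrees of freedom guards against declaring significance for correlations that arise purely from shared persistence.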
Charge-Dissipative Electrical Cables
NASA Technical Reports Server (NTRS)
Kolasinski, John R.; Wollack, Edward J.
2004-01-01
Electrical cables that dissipate spurious static electric charges, in addition to performing their main functions of conducting signals, have been developed. These cables are intended for use in trapped-ion or ionizing-radiation environments, in which electric charges tend to accumulate within, and on the surfaces of, dielectric layers of cables. If the charging rate exceeds the dissipation rate, charges can accumulate in excessive amounts, giving rise to high-current discharges that can damage electronic circuitry and/or systems connected to it. The basic idea of design and operation of charge-dissipative electrical cables is to drain spurious charges to ground by use of lossy (slightly electrically conductive) dielectric layers, possibly in conjunction with drain wires and/or drain shields (see figure). In typical cases, the drain wires and/or drain shields could be electrically grounded via the connector assemblies at the ends of the cables, in any of the conventional techniques for grounding signal conductors and signal shields. In some cases, signal shields could double as drain shields.
Examining the relationship between religiosity and self-control as predictors of prison deviance.
Kerley, Kent R; Copes, Heith; Tewksbury, Richard; Dabney, Dean A
2011-12-01
The relationship between religiosity and crime has been the subject of much empirical debate and testing over the past 40 years. Some investigators have argued that observed relationships between religion and crime may be spurious because of self-control, arousal, or social control factors. The present study offers the first investigation of religiosity, self-control, and deviant behavior in the prison context. We use survey data from a sample of 208 recently paroled male inmates to test the impact of religiosity and self-control on prison deviance. The results indicate that two of the three measures of religiosity may be spurious predictors of prison deviance after accounting for self-control. Participation in religious services is the only measure of religiosity to significantly reduce the incidence of prison deviance when controlling for demographic factors, criminal history, and self-control. We conclude with implications for future studies of religiosity, self-control, and deviance in the prison context.
Incorrect likelihood methods were used to infer scaling laws of marine predator search behaviour.
Edwards, Andrew M; Freeman, Mervyn P; Breed, Greg A; Jonsen, Ian D
2012-01-01
Ecologists are collecting extensive data concerning movements of animals in marine ecosystems. Such data need to be analysed with valid statistical methods to yield meaningful conclusions. We demonstrate methodological issues in two recent studies that reached similar conclusions concerning movements of marine animals (Nature 451:1098; Science 332:1551). The first study analysed vertical movement data to conclude that diverse marine predators (Atlantic cod, basking sharks, bigeye tuna, leatherback turtles and Magellanic penguins) exhibited "Lévy-walk-like behaviour", close to a hypothesised optimal foraging strategy. By reproducing the original results for the bigeye tuna data, we show that the likelihood of tested models was calculated from residuals of regression fits (an incorrect method), rather than from the likelihood equations of the actual probability distributions being tested. This resulted in erroneous Akaike Information Criteria, and the testing of models that do not correspond to valid probability distributions. We demonstrate how this led to overwhelming support for a model that has no biological justification and that is statistically spurious because its probability density function goes negative. Re-analysis of the bigeye tuna data, using standard likelihood methods, overturns the original result and conclusion for that data set. The second study observed Lévy walk movement patterns by mussels. We demonstrate several issues concerning the likelihood calculations (including the aforementioned residuals issue). Re-analysis of the data rejects the original Lévy walk conclusion. We consequently question the claimed existence of scaling laws of the search behaviour of marine predators and mussels, since such conclusions were reached using incorrect methods. 
We discourage the suggested potential use of "Lévy-like walks" when modelling consequences of fishing and climate change, and caution that any resulting advice to managers of marine ecosystems would be problematic. For reproducibility and future work we provide R source code for all calculations.
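The central statistical point, that AIC must be computed from the log-likelihood of a genuine probability density rather than from regression residuals, can be illustrated with closed-form maximum-likelihood fits of a Pareto-type power law versus an exponential. The estimators below are standard (e.g. the Clauset-style exponent estimate); the simulated data set is purely illustrative.

```python
import numpy as np

def aic_exponential(x, xmin):
    # MLE for a shifted exponential: lambda_hat = 1 / mean(x - xmin)
    lam = 1.0 / np.mean(x - xmin)
    loglik = np.sum(np.log(lam) - lam * (x - xmin))
    return 2 * 1 - 2 * loglik          # k = 1 free parameter

def aic_powerlaw(x, xmin):
    # closed-form MLE for the Pareto exponent:
    # alpha_hat = 1 + n / sum(log(x / xmin))
    n = len(x)
    alpha = 1.0 + n / np.sum(np.log(x / xmin))
    loglik = n * np.log((alpha - 1.0) / xmin) - alpha * np.sum(np.log(x / xmin))
    return 2 * 1 - 2 * loglik

# synthetic data drawn from a genuine power law with exponent 2.5
rng = np.random.default_rng(42)
xmin = 1.0
data = xmin * (1.0 - rng.random(5000)) ** (-1.0 / 1.5)
aic_pl = aic_powerlaw(data, xmin)
aic_ex = aic_exponential(data, xmin)   # higher AIC: exponential loses
```

Because both AIC values come from true likelihoods of normalized densities, the comparison is meaningful; computing "likelihoods" from regression residuals, as critiqued above, can hand the win to a model that is not even a valid probability distribution.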
NASA Astrophysics Data System (ADS)
Manning, Ellen M.; Cole, Andrew A.
2017-11-01
We examine the biases inherent to chemical abundance distributions when targets are selected from the red giant branch (RGB), using simulated giant branches created from isochrones. We find that even when stars are chosen from the entire colour range of RGB stars and over a broad range of magnitudes, the relative numbers of stars of different ages and metallicities, integrated over all stellar types, are not accurately represented in the giant branch sample. The result is that metallicity distribution functions derived from RGB star samples require a correction before they can be fitted by chemical evolution models. We derive simple correction factors for over- and under-represented populations for the limiting cases of single-age populations with a broad range of metallicities and of continuous star formation at constant metallicity; an important general conclusion is that intermediate-age populations (≈1-4 Gyr) are over-represented in RGB samples. We apply our models to the case of the Large Magellanic Cloud bar and show that the observed metallicity distribution underestimates the true number of metal-poor stars by more than 25 per cent; as a result, the inferred importance of gas flows in chemical evolution models could potentially be overestimated. The age- and metallicity-dependences of RGB lifetimes require careful modelling if they are not to lead to spurious conclusions about the chemical enrichment history of galaxies.
NASA Astrophysics Data System (ADS)
Abel, Rafael; Boening, Claus
2015-04-01
Current practice in the atmospheric forcing of ocean model simulations can lead to unphysical behaviours. The problem lies in the bulk formulation of the turbulent air-sea fluxes in conjunction with a prescribed, and unresponsive, atmospheric state as given, e.g., by reanalysis products. This forcing formulation corresponds to assuming an atmosphere with infinite heat capacity, and effectively damps SST anomalies even on basin scales. It thus curtails an important negative feedback between meridional ocean heat transport and SST in the North Atlantic, rendering simulations of the AMOC in such models excessively sensitive to details in the freshwater fluxes. As a consequence, such simulations are known for spurious drift behaviours which can only partially be controlled by introducing some (and sometimes strong) unphysical restoring of sea surface salinity. There have been several suggestions during the last 20 years for at least partially alleviating the problem by including some simplified model of the atmospheric boundary layer (AML) which allows a feedback of SST anomalies on the near-surface air temperature and humidity needed to calculate the surface fluxes. We here present simulations with a simple, only thermally active AML formulation (based on the 'CheapAML' proposed by Deremble et al., 2013) implemented in a global model configuration based on NEMO (ORCA05). In a suite of experiments building on the CORE-bulk forcing methodology, we examine some general features of the AML solutions (in which only the winds are prescribed) in comparison to solutions with a prescribed atmospheric state. The focus is on the North Atlantic, where we find that the adaptation of the atmospheric temperature to the simulated ocean state can lead to strong local modifications in the surface heat fluxes in frontal regions (e.g., the 'Northwest Corner').
We particularly assess the potential of the AML-forcing concept for obtaining AMOC-simulations with reduced spurious drift, without employing the traditional remedy of salinity restoring.
NASA Astrophysics Data System (ADS)
Harpsøe, K. B. W.; Jørgensen, U. G.; Andersen, M. I.; Grundahl, F.
2012-06-01
Context. The EMCCD is a type of CCD that delivers fast readout times and negligible readout noise, making it an ideal detector for high frame rate applications which improve resolution, like lucky imaging or shift-and-add. This improvement in resolution can potentially improve the photometry of faint stars in extremely crowded fields significantly by alleviating crowding. Alleviating crowding is a prerequisite for observing gravitational microlensing in main sequence stars towards the galactic bulge. However, the photometric stability of this device has not been assessed. The EMCCD has sources of noise not found in conventional CCDs, and new methods for handling these must be developed. Aims: We aim to investigate how the normal photometric reduction steps from conventional CCDs should be adjusted to be applicable to EMCCD data. One complication is that a bias frame cannot be obtained conventionally, as the output from an EMCCD is not normally distributed. Also, the readout process generates spurious charges in any CCD, but in EMCCD data these charges are visible, unlike in data from a conventional CCD. Furthermore, we aim to eliminate the photon waste associated with lucky imaging by combining this method with shift-and-add. Methods: A simple probabilistic model for the dark output of an EMCCD is developed. Fitting this model with the expectation-maximization algorithm allows us to estimate the bias, readout noise, amplification, and spurious charge rate per pixel and thus correct for these phenomena. To investigate the stability of the photometry, corrected frames of a crowded field are reduced with a point spread function (PSF) fitting photometry package, where a lucky image is used as a reference. Results: We find that it is possible to develop an algorithm that elegantly reduces EMCCD data and produces stable photometry at the 1% level in an extremely crowded field. Based on observation with the Danish 1.54 m telescope at ESO La Silla Observatory.
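As a rough illustration of the fitting strategy, the dark output can be modelled as a mixture of Gaussian readout noise around the bias and an additive tail from amplified spurious charges, estimated with expectation-maximization. This is a deliberate simplification of the paper's model (which treats the EM-register amplification statistics properly); the exponential tail, the update equations, and all numbers are assumptions.

```python
import numpy as np

def em_emccd_dark(x, n_iter=200):
    """Crude EM fit of dark EMCCD output: with probability (1 - p) a pixel
    is pure readout noise N(mu, sigma^2); with probability p it also holds
    a spurious charge, approximated here as mu + Exponential(gain)."""
    mu, sigma = np.median(x), np.std(x)
    gain, p = 2.0 * np.std(x), 0.05
    for _ in range(n_iter):
        d = x - mu
        norm_pdf = np.exp(-0.5 * (d / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
        exp_pdf = np.where(d > 0.0,
                           np.exp(-np.clip(d, 0.0, None) / gain) / gain, 0.0)
        # E-step: responsibility of the spurious-charge component
        r = p * exp_pdf / np.maximum(p * exp_pdf + (1.0 - p) * norm_pdf, 1e-300)
        # M-step: reweighted parameter updates
        p = np.mean(r)
        gain = np.sum(r * np.clip(d, 0.0, None)) / max(np.sum(r), 1e-12)
        keep = 1.0 - r
        mu = np.sum(keep * x) / np.sum(keep)
        sigma = np.sqrt(np.sum(keep * (x - mu) ** 2) / np.sum(keep))
    return mu, sigma, gain, p

# synthetic dark frame: bias 100 ADU, readout sigma 5, gain 50, 2% spurious
rng = np.random.default_rng(0)
x = 100.0 + 5.0 * rng.standard_normal(20000)
spur = rng.random(20000) < 0.02
x[spur] += rng.exponential(50.0, spur.sum())
mu, sigma, gain, p = em_emccd_dark(x)
```

The recovered bias, readout noise, gain, and spurious-charge rate can then be used per pixel to correct science frames, which is the role the fitted model plays in the paper's reduction.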
NASA Astrophysics Data System (ADS)
Du, X.; Landecker, T. L.; Robishaw, T.; Gray, A. D.; Douglas, K. A.; Wolleben, M.
2016-11-01
Measurement of the brightness temperature of extended radio emission demands knowledge of the gain (or aperture efficiency) of the telescope and measurement of the polarized component of the emission requires correction for the conversion of unpolarized emission from sky and ground to apparently polarized signal. Radiation properties of the John A. Galt Telescope at the Dominion Radio Astrophysical Observatory were studied through analysis and measurement in order to provide absolute calibration of a survey of polarized emission from the entire northern sky from 1280 to 1750 MHz, and to understand the polarization performance of the telescope. Electromagnetic simulation packages CST and GRASP-10 were used to compute complete radiation patterns of the telescope in all Stokes parameters, and thereby to establish gain and aperture efficiency. Aperture efficiency was also evaluated using geometrical optics and ray tracing analysis and was measured based on the known flux density of Cyg A. Measured aperture efficiency varied smoothly with frequency between values of 0.49 and 0.54; GRASP-10 yielded values 6.5% higher but with closely similar variation with frequency. Overall error across the frequency band is 3%, but values at any two frequencies are relatively correct to ~1%. Dominant influences on aperture efficiency are the illumination taper of the feed radiation pattern and the shadowing of the reflector from the feed by the feed-support struts. A model of emission from the ground was developed based on measurements and on empirical data obtained from remote sensing of the Earth from satellite-borne telescopes. This model was convolved with the computed antenna response to estimate conversion of ground emission into spurious polarized signal. The computed spurious signal is comparable to measured values, but is not accurate enough to be used to correct observations. A simpler model, in which the ground is considered as an unpolarized emitter with a brightness temperature of ~240 K, is shown to have useful accuracy when compared to measurements.
Global Warming Estimation From Microwave Sounding Unit
NASA Technical Reports Server (NTRS)
Prabhakara, C.; Iacovazzi, R., Jr.; Yoo, J.-M.; Dalu, G.
1998-01-01
Microwave Sounding Unit (MSU) Ch 2 data sets, collected from sequential, polar-orbiting, Sun-synchronous National Oceanic and Atmospheric Administration operational satellites, contain systematic calibration errors that are coupled to the diurnal temperature cycle over the globe. Since these coupled errors in MSU data differ between successive satellites, it is necessary to make compensatory adjustments to these multisatellite data sets in order to determine long-term global temperature change. With the aid of the observations during overlapping periods of successive satellites, we can determine such adjustments and use them to account for the coupled errors in the long-term time series of MSU Ch 2 global temperature. In turn, these adjusted MSU Ch 2 data sets can be used to yield a global temperature trend. In a pioneering study, Spencer and Christy (SC) (1990) developed a procedure to derive the global temperature trend from MSU Ch 2 data. However, that procedure can leave unaccounted residual errors in the time series of the deduced temperature anomalies, which could lead to a spurious long-term temperature trend. In the present study, we have developed a method that avoids the shortcomings of the SC procedure: the magnitude of the coupled errors is not determined explicitly; instead, based on some assumptions, these errors are eliminated in three separate steps. Based on our analysis, we find there is a global warming of 0.23+/-0.12 K between 1980 and 1991.
Also, in this study, the time series of global temperature anomalies constructed by removing the global mean annual temperature cycle compares favorably with a similar time series obtained from conventional observations of temperature.
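The overlap-based adjustment between successive satellites can be sketched as follows. This is a minimal illustration, assuming a simple mean-offset correction over the common flying period; the function name and the splice strategy are assumptions, not the paper's three-step error-elimination procedure:

```python
import numpy as np

def merge_with_overlap(series_a, series_b, overlap):
    """Adjust series_b by the mean offset observed while both
    satellites fly together, then splice the two records.

    series_a, series_b : 1-D arrays of monthly Ch 2 anomalies
    overlap            : slice indexing the common flying period

    A sketch of overlap-based bias adjustment; real MSU merging must
    also handle errors coupled to the diurnal cycle.
    """
    offset = np.mean(series_a[overlap] - series_b[overlap])
    return np.concatenate([series_a[:overlap.stop],
                           series_b[overlap.stop:] + offset])
```

Chaining this pairwise over a sequence of satellites yields one continuous record from which a trend can be estimated.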
Carbonell, Felix; Bellec, Pierre
2011-01-01
The influence of the global average signal (GAS) on functional-magnetic resonance imaging (fMRI)–based resting-state functional connectivity is a matter of ongoing debate. The global average fluctuations increase the correlation between functional systems beyond the correlation that reflects their specific functional connectivity. Hence, removal of the GAS is a common practice for facilitating the observation of network-specific functional connectivity. This strategy relies on the implicit assumption of a linear-additive model according to which global fluctuations, irrespective of their origin, and network-specific fluctuations are super-positioned. However, removal of the GAS introduces spurious negative correlations between functional systems, bringing into question the validity of previous findings of negative correlations between fluctuations in the default-mode and the task-positive networks. Here we present an alternative method for estimating global fluctuations, immune to the complications associated with the GAS. Principal components analysis was applied to resting-state fMRI time-series. A global-signal effect estimator was defined as the principal component (PC) that correlated best with the GAS. The mean correlation coefficient between our proposed PC-based global effect estimator and the GAS was 0.97±0.05, demonstrating that our estimator successfully approximated the GAS. In 66 out of 68 runs, the PC that showed the highest correlation with the GAS was the first PC. Since PCs are orthogonal, our method provides an estimator of the global fluctuations, which is uncorrelated to the remaining, network-specific fluctuations. Moreover, unlike the regression of the GAS, the regression of the PC-based global effect estimator does not introduce spurious anti-correlations beyond the decrease in seed-based correlation values allowed by the assumed additive model.
After regressing this PC-based estimator out of the original time-series, we observed robust anti-correlations between resting-state fluctuations in the default-mode and the task-positive networks. We conclude that resting-state global fluctuations and network-specific fluctuations are uncorrelated, supporting a Resting-State Linear-Additive Model. In addition, we conclude that the network-specific resting-state fluctuations of the default-mode and task-positive networks show artifact-free anti-correlations. PMID:22444074
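The PC-based estimator described above can be sketched in a few lines: take the principal component whose time course correlates best with the GAS, then regress it out of every voxel. Function names and shapes are illustrative assumptions; real pipelines add preprocessing such as detrending and motion regression:

```python
import numpy as np

def pc_global_estimator(ts):
    """Return the PC time course that correlates best with the
    global average signal (GAS), plus its index.

    ts : (time, voxels) resting-state array.
    """
    gas = ts.mean(axis=1)                    # global average signal
    X = ts - ts.mean(axis=0)                 # center each voxel over time
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    pcs = U * s                              # PC time courses
    corrs = [abs(np.corrcoef(pc, gas)[0, 1]) for pc in pcs.T]
    best = int(np.argmax(corrs))
    return pcs[:, best], best

def regress_out(ts, reg):
    """Remove a single regressor from every voxel time series."""
    r = reg - reg.mean()
    r /= np.linalg.norm(r)
    return ts - np.outer(r, ts.T @ r)        # subtract projection per voxel
```

Because PCs are mutually orthogonal, the removed component is uncorrelated with the remaining PCs, which is the property the abstract relies on to avoid introducing spurious anti-correlations.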
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iliopoulos, AS; Sun, X; Pitsianis, N
Purpose: To address and lift the limited degree of freedom (DoF) of globally bilinear motion components such as those based on principal components analysis (PCA), for encoding and modeling volumetric deformation motion. Methods: We provide a systematic approach to obtaining a multi-linear decomposition (MLD) and associated motion model from deformation vector field (DVF) data. We had previously introduced MLD for capturing multi-way relationships between DVF variables, without being restricted by the bilinear component format of PCA-based models. PCA-based modeling is commonly used for encoding patient-specific deformation as per planning 4D-CT images, and aiding on-board motion estimation during radiotherapy. However, the bilinear space-time decomposition inherently limits the DoF of such models by the small number of respiratory phases. While this limit is not reached in model studies using analytical or digital phantoms with low-rank motion, it compromises modeling power in the presence of relative motion, asymmetries, hysteresis, etc., which are often observed in patient data. Specifically, a low-DoF model will spuriously couple incoherent motion components, compromising its adaptability to on-board deformation changes. By the multi-linear format of extracted motion components, MLD-based models can encode higher-DoF deformation structure. Results: We conduct mathematical and experimental comparisons between PCA- and MLD-based models. A set of temporally-sampled analytical trajectories provides a synthetic, high-rank DVF; trajectories correspond to respiratory and cardiac motion factors, including different relative frequencies and spatial variations. Additionally, a digital XCAT phantom is used to simulate a lung lesion deforming incoherently with respect to the body, which adheres to a simple respiratory trend. In both cases, coupling of incoherent motion components due to a low model DoF is clearly demonstrated.
Conclusion: Multi-linear decomposition can enable decoupling of distinct motion factors in high-rank DVF measurements. This may improve motion model expressiveness and adaptability to on-board deformation, aiding model-based image reconstruction for target verification. NIH Grant No. R01-184173.
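The DoF ceiling of bilinear PCA models can be seen in a toy sketch: fitting PCA to flattened DVFs gives at most (phases − 1) usable components, regardless of how rich the underlying motion is. The function name and array shapes are illustrative assumptions:

```python
import numpy as np

def pca_motion_model(dvfs, n_components):
    """Fit a PCA motion model to 4D-CT deformation data.

    dvfs : (phases, 3 * voxels) array, one flattened DVF per phase.
    Returns (mean_dvf, components, singular_values). The model's DoF
    can never exceed phases - 1, which is the limitation the
    abstract's multi-linear decomposition is meant to lift.
    """
    mean = dvfs.mean(axis=0)
    _, s, Vt = np.linalg.svd(dvfs - mean, full_matrices=False)
    return mean, Vt[:n_components], s[:n_components]
```

With only two independent motion factors present in the data, all singular values beyond the second vanish, so any additional incoherent factor would be forced onto the same few components.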
Utilizing soil polypedons to improve model performance for digital soil mapping
USDA-ARS?s Scientific Manuscript database
Most digital soil mapping approaches that use point data to develop relationships with covariate data intersect sample locations with one raster pixel regardless of pixel size. Resulting models are subject to spurious values in covariate data which may limit model performance. An alternative approac...
Construct Meaning in Multilevel Settings
ERIC Educational Resources Information Center
Stapleton, Laura M.; Yang, Ji Seung; Hancock, Gregory R.
2016-01-01
We present types of constructs, individual- and cluster-level, and their confirmatory factor analytic validation models when data are from individuals nested within clusters. When a construct is theoretically individual level, spurious construct-irrelevant dependency in the data may appear to signal cluster-level dependency; in such cases,…
Digital Correlation In Laser-Speckle Velocimetry
NASA Technical Reports Server (NTRS)
Gilbert, John A.; Mathys, Donald R.
1992-01-01
Periodic recording helps to eliminate spurious results. Improved digital-correlation process extracts velocity field of two-dimensional flow from laser-speckle images of seed particles distributed sparsely in flow. Method, which involves digital correlation of images recorded at unequal intervals, is completely automated and has potential to be fastest yet.
NASA Technical Reports Server (NTRS)
Booth, Gary N.; Malinzak, R. Michael
1990-01-01
Treatment similar to dental polishing used to remove microfissures from metal parts without reworking adjacent surfaces. Any variety of abrasive tips attached to small motor used to grind spot treated. Configuration of grinding head must be compatible with configurations of motor and workpiece. Devised to eliminate spurious marks on welded parts.
Microresonator electrode design
Olsson, III, Roy H.; Wojciechowski, Kenneth; Branch, Darren W.
2016-05-10
A microresonator with an input electrode and an output electrode patterned thereon is described. The input electrode includes a series of stubs that are configured to isolate acoustic waves, such that the waves are not reflected into the microresonator. Such design results in reduction of spurious modes corresponding to the microresonator.
Haynes, S E
1983-10-01
It is widely known that linear restrictions involve bias. What is not known is that some linear restrictions are especially dangerous for hypothesis testing. For some, the expected value of the restricted coefficient does not lie between (among) the true unconstrained coefficients, which implies that the estimate is not a simple average of these coefficients. In this paper, the danger is examined regarding the additive linear restriction almost universally imposed in statistical research--the restriction of symmetry. Symmetry implies that the response of the dependent variable to a unit decrease in an explanatory variable is identical, but of opposite sign, to the response to a unit increase. The 1st section of the paper demonstrates theoretically that a coefficient restricted by symmetry (unlike coefficients embodying other additive restrictions) is not a simple average of the unconstrained coefficients because the relevant interacted variables are inversely correlated by definition. The next section shows that, under the restriction of symmetry, fertility in Finland from 1885-1925 appears to respond in a prolonged manner to infant mortality (significant and positive with a lag of 4-6 years), suggesting a response to expected deaths. However, unconstrained estimates indicate that this finding is spurious. When the restriction is relaxed, the dominant response is rapid (significant and positive with a lag of 1-2 years) and stronger for declines in mortality, supporting an asymmetric response to actual deaths. For 2 reasons, the danger of the symmetry restriction may be especially pervasive. 1st, unlike most other linear constraints, symmetry is passively imposed merely by ignoring the possibility of asymmetry.
2nd, models in a wide range of fields--including macroeconomics (e.g., demand for money, consumption, and investment models, and the Phillips curve), international economics (e.g., intervention models of central banks), and labor economics (e.g., sticky wage models)--predict asymmetry. The conclusion of the study is that, to avoid spurious hypothesis testing, empirical research should systematically test for asymmetry, especially when predicted by theory.
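Testing for asymmetry amounts to splitting changes in the explanatory variable into increases and decreases and letting each get its own coefficient, instead of the single coefficient the symmetry restriction forces. A minimal sketch, with assumed function names (the paper's Finnish fertility application would add lags and controls):

```python
import numpy as np

def asymmetric_terms(x):
    """Split period-to-period changes in x into increases and
    decreases, to be used as two regressors in place of the single
    difference dx that the symmetry restriction would impose."""
    dx = np.diff(x)
    return np.where(dx > 0, dx, 0.0), np.where(dx < 0, dx, 0.0)

def ols(y, *regressors):
    """Ordinary least squares with an intercept."""
    X = np.column_stack([np.ones(len(y)), *regressors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta
```

An equality test on the two estimated coefficients (e.g. a Wald or F test) is then a direct test of the symmetry restriction.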
Identification of pathogen genomic variants through an integrated pipeline
2014-01-01
Background Whole-genome sequencing represents a powerful experimental tool for pathogen research. We present methods for the analysis of small eukaryotic genomes, including a streamlined system (called Platypus) for finding single nucleotide and copy number variants as well as recombination events. Results We have validated our pipeline using four sets of Plasmodium falciparum drug resistant data containing 26 clones from 3D7 and Dd2 background strains, identifying an average of 11 single nucleotide variants per clone. We also identify 8 copy number variants with contributions to resistance, and report for the first time that all analyzed amplification events are in tandem. Conclusions The Platypus pipeline provides malaria researchers with a powerful tool to analyze short read sequencing data. It provides an accurate way to detect SNVs using known software packages, and a novel methodology for detection of CNVs, though it does not currently support detection of small indels. We have validated that the pipeline detects known SNVs in a variety of samples while filtering out spurious data. We bundle the methods into a freely available package. PMID:24589256
Lattice calculation of electric dipole moments and form factors of the nucleon
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abramczyk, M.; Aoki, S.; Blum, T.
In this paper, we analyze commonly used expressions for computing the nucleon electric dipole form factors (EDFF) $$F_3$$ and moments (EDM) on a lattice and find that they lead to spurious contributions from the Pauli form factor $$F_2$$ due to inadequate definition of these form factors when parity mixing of lattice nucleon fields is involved. Using chirally symmetric domain wall fermions, we calculate the proton and the neutron EDFF induced by the CP-violating quark chromo-EDM interaction using the corrected expression. In addition, we calculate the electric dipole moment of the neutron using a background electric field that respects time translation invariance and boundary conditions, and we find that it decidedly agrees with the new formula but not the old formula for $$F_3$$. In conclusion, we analyze some selected lattice results for the nucleon EDM and observe that after the correction is applied, they either agree with zero or are substantially reduced in magnitude, thus reconciling their difference from phenomenological estimates of the nucleon EDM.
Lattice calculation of electric dipole moments and form factors of the nucleon
Abramczyk, M.; Aoki, S.; Blum, T.; ...
2017-07-10
In this paper, we analyze commonly used expressions for computing the nucleon electric dipole form factors (EDFF) $$F_3$$ and moments (EDM) on a lattice and find that they lead to spurious contributions from the Pauli form factor $$F_2$$ due to inadequate definition of these form factors when parity mixing of lattice nucleon fields is involved. Using chirally symmetric domain wall fermions, we calculate the proton and the neutron EDFF induced by the CP-violating quark chromo-EDM interaction using the corrected expression. In addition, we calculate the electric dipole moment of the neutron using a background electric field that respects time translation invariance and boundary conditions, and we find that it decidedly agrees with the new formula but not the old formula for $$F_3$$. In conclusion, we analyze some selected lattice results for the nucleon EDM and observe that after the correction is applied, they either agree with zero or are substantially reduced in magnitude, thus reconciling their difference from phenomenological estimates of the nucleon EDM.
Experimental validation of the Achromatic Telescopic Squeezing (ATS) scheme at the LHC
NASA Astrophysics Data System (ADS)
Fartoukh, S.; Bruce, R.; Carlier, F.; Coello De Portugal, J.; Garcia-Tabares, A.; Maclean, E.; Malina, L.; Mereghetti, A.; Mirarchi, D.; Persson, T.; Pojer, M.; Ponce, L.; Redaelli, S.; Salvachua, B.; Skowronski, P.; Solfaroli, M.; Tomas, R.; Valuch, D.; Wegscheider, A.; Wenninger, J.
2017-07-01
The Achromatic Telescopic Squeezing scheme offers new techniques to deliver unprecedentedly small beam spot sizes at the interaction points of the ATLAS and CMS experiments of the LHC, while perfectly controlling the chromatic properties of the corresponding optics (linear and non-linear chromaticities, off-momentum beta-beating, spurious dispersion induced by the crossing bumps). The first series of beam tests with ATS optics were achieved during the LHC Run I (2011/2012) for a first validation of the basics of the scheme at small intensity. In 2016, a new generation of higher-performance ATS optics was developed and more extensively tested in the machine, still with probe beams for optics measurement and correction at β* = 10 cm, but also with a few nominal bunches to establish first collisions at nominal β* (40 cm) and beyond (33 cm), and to analyse the robustness of these optics in terms of collimation and machine protection. The paper will highlight the most relevant and conclusive results which were obtained during this second series of ATS tests.
A study of ferromagnetic signals in SrTiO{sub 3} nanoparticles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kovacs, P.; Des Roches, B.; Crandles, D. A.
It has been suggested that ferromagnetism may be a universal feature of nanoparticles related to particle size. We study this claim for the case of commercially produced SrTiO{sub 3} nanoparticles purchased from Alfa-Aesar. Both loosely-packed nanoparticle samples and pellets formed using uniaxial pressure were studied. Both loose and pressed samples were annealed in either air or in vacuum of 5×10{sup −6} Torr at 600, 800 and 1000°C. Then x-ray diffraction and SQUID measurements were made on the resulting samples. It was found that annealed loose powder samples always had a linear diamagnetic magnetization versus field response, while their pressed pellet counterparts exhibited a ferromagnetic hysteresis component in addition to the linear diamagnetic signal. Williamson-Hall analysis reveals that the particle size in pressed pellet samples increases with annealing temperature but does not change significantly in loose powder samples. The main conclusion is that the act of pressing pellets in a die introduces a spurious ferromagnetic signal into SQUID measurements.
Children Learn Spurious Associations in Their Math Textbooks: Examples from Fraction Arithmetic
ERIC Educational Resources Information Center
Braithwaite, David W.; Siegler, Robert S.
2018-01-01
Fraction arithmetic is among the most important and difficult topics children encounter in elementary and middle school mathematics. Braithwaite, Pyke, and Siegler (2017) hypothesized that difficulties learning fraction arithmetic often reflect reliance on associative knowledge--rather than understanding of mathematical concepts and procedures--to…
In traditional watershed delineation and topographic modeling, surface depressions are generally treated as spurious features and simply removed from a digital elevation model (DEM) to enforce flow continuity of water across the topographic surface to the watershed outlets. In re...
A well-posed optimal spectral element approximation for the Stokes problem
NASA Technical Reports Server (NTRS)
Maday, Y.; Patera, A. T.; Ronquist, E. M.
1987-01-01
A method is proposed for the spectral element simulation of incompressible flow. This method constitutes a well-posed optimal approximation of the steady Stokes problem with no spurious modes in the pressure. The resulting method is analyzed, and numerical results are presented for a model problem.
A Skew-Normal Mixture Regression Model
ERIC Educational Resources Information Center
Liu, Min; Lin, Tsung-I
2014-01-01
A challenge associated with traditional mixture regression models (MRMs), which rest on the assumption of normally distributed errors, is determining the number of unobserved groups. Specifically, even slight deviations from normality can lead to the detection of spurious classes. The current work aims to (a) examine how sensitive the commonly…
47 CFR 2.1053 - Measurements required: Field strength of spurious radiation.
Code of Federal Regulations, 2012 CFR
2012-10-01
... operation. Curves or equivalent data shall be supplied showing the magnitude of each harmonic and other.... For equipment operating on frequencies below 890 MHz, an open field test is normally required, with... either impractical or impossible to make open field measurements (e.g. a broadcast transmitter installed...
In traditional watershed delineation and topographic modelling, surface depressions are generally treated as spurious features and simply removed from a digital elevation model (DEM) to enforce flow continuity of water across the topographic surface to the watershed outlets. In r...
Spurious Latent Classes in the Mixture Rasch Model
ERIC Educational Resources Information Center
Alexeev, Natalia; Templin, Jonathan; Cohen, Allan S.
2011-01-01
Mixture Rasch models have been used to study a number of psychometric issues such as goodness of fit, response strategy differences, strategy shifts, and multidimensionality. Although these models offer the potential for improving understanding of the latent variables being measured, under some conditions overextraction of latent classes may…
Siblings and Gender Differences in African-American College Attendance
ERIC Educational Resources Information Center
Loury, Linda Datcher
2004-01-01
Differences in college enrollment growth rates for African-American men and women have resulted in a large gender gap in college attendance. This paper shows that, controlling for spurious correlation with unobserved variables, having more college-educated older siblings raises rather than lowers the likelihood of college attendance for…
Use of Inappropriate and Inaccurate Conceptual Knowledge to Solve an Osmosis Problem.
ERIC Educational Resources Information Center
Zuckerman, June Trop
1995-01-01
Presents correct solutions to an osmosis problem of two high school science students who relied on inaccurate and inappropriate conceptual knowledge. Identifies characteristics of the problem solvers, salient properties of the problem that could contribute to the problem misrepresentation, and spurious correct answers. (27 references) (Author/MKR)
47 CFR 2.1053 - Measurements required: Field strength of spurious radiation.
Code of Federal Regulations, 2010 CFR
2010-10-01
.... For equipment operating on frequencies below 890 MHz, an open field test is normally required, with... either impractical or impossible to make open field measurements (e.g. a broadcast transmitter installed...
ERIC Educational Resources Information Center
Dodge, Tonya; Jaccard, James
2002-01-01
Compared sexual risk behavior of female athletes and nonathletes. Examined mediation, reverse mediation, spurious effects, and moderated causal models, using as potential mediators physical development, educational aspirations, self-esteem, attitudes toward pregnancy, involvement in a romantic relationship, age, ethnicity, and social class. Found…
Confronting Science: The Dilemma of Genetic Testing.
ERIC Educational Resources Information Center
Zallen, Doris T.
1997-01-01
Considers the opportunities and ethical issues involved in genetic testing. Reviews the history of genetics from the first discoveries of Gregor Mendel, through the spurious pseudo-science of eugenics, and up to the discovery of DNA by James Watson and Francis Crick. Explains how genetic tests are done. (MJP)
Unipolar Terminal-Attractor Based Neural Associative Memory with Adaptive Threshold
NASA Technical Reports Server (NTRS)
Liu, Hua-Kuang (Inventor); Barhen, Jacob (Inventor); Farhat, Nabil H. (Inventor); Wu, Chwan-Hwa (Inventor)
1996-01-01
A unipolar terminal-attractor based neural associative memory (TABAM) system with adaptive threshold for perfect convergence is presented. By adaptively setting the threshold values for the dynamic iteration for the unipolar binary neuron states with terminal-attractors for the purpose of reducing the spurious states in a Hopfield neural network for associative memory and using the inner-product approach, perfect convergence and correct retrieval is achieved. Simulation is completed with a small number of stored states (M) and a small number of neurons (N) but a large M/N ratio. An experiment with optical exclusive-OR logic operation using LCTV SLMs shows the feasibility of optoelectronic implementation of the models. A complete inner-product TABAM is implemented using a PC for calculation of adaptive threshold values to achieve a unipolar TABAM (UIT) in the case where there is no crosstalk, and a crosstalk model (CRIT) in the case where crosstalk corrupts the desired state.
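The inner-product recall with an adaptive threshold can be sketched as below. This is loosely in the spirit of the TABAM described above, but it is an illustrative sketch with assumed names: the terminal-attractor dynamics and the optical implementation are omitted, and the adaptive threshold is simply the mean of the recall field:

```python
import numpy as np

def recall(memories, probe, iters=20):
    """Unipolar inner-product associative recall with an adaptive
    threshold.

    memories : (M, N) stored unipolar {0, 1} patterns
    probe    : (N,) noisy unipolar input
    """
    state = probe.astype(float)
    for _ in range(iters):
        overlaps = memories @ state                  # inner products with stored states
        total = overlaps.sum()
        if total == 0:
            break
        field = (overlaps / total) @ memories        # overlap-weighted recall field
        theta = field.mean()                         # adaptive threshold
        new_state = (field > theta).astype(float)
        if np.array_equal(new_state, state):         # converged to a fixed point
            break
        state = new_state
    return state
```

With well-separated stored patterns, a probe corrupted by a few flipped bits settles onto the nearest stored state instead of a spurious mixture.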
Unipolar terminal-attractor based neural associative memory with adaptive threshold
NASA Technical Reports Server (NTRS)
Liu, Hua-Kuang (Inventor); Barhen, Jacob (Inventor); Farhat, Nabil H. (Inventor); Wu, Chwan-Hwa (Inventor)
1993-01-01
A unipolar terminal-attractor based neural associative memory (TABAM) system with adaptive threshold for perfect convergence is presented. By adaptively setting the threshold values for the dynamic iteration for the unipolar binary neuron states with terminal-attractors for the purpose of reducing the spurious states in a Hopfield neural network for associative memory and using the inner product approach, perfect convergence and correct retrieval is achieved. Simulation is completed with a small number of stored states (M) and a small number of neurons (N) but a large M/N ratio. An experiment with optical exclusive-OR logic operation using LCTV SLMs shows the feasibility of optoelectronic implementation of the models. A complete inner-product TABAM is implemented using a PC for calculation of adaptive threshold values to achieve a unipolar TABAM (UIT) in the case where there is no crosstalk, and a crosstalk model (CRIT) in the case where crosstalk corrupts the desired state.
Weinberg, C R
1995-01-01
Retrospective assessment of exposure to radon remains the greatest challenge in epidemiologic efforts to assess lung cancer risk associated with residential exposure. An innovative technique based on measurement of alpha-emitting, long-lived daughters embedded by recoil into household glass may one day provide improved radon dosimetry. Particulate air pollution is known, however, to retard the plate-out of radon daughters. This would be expected to result in a differential effect on dosimetry, where the calibration curve relating the actual historical radon exposure to the remaining alpha-activity in the glass would be different in historically smoky and nonsmoky environments. The resulting "measurement confounding" can distort inferences about the effect of radon and can also produce spurious evidence for synergism between radon exposure and cigarette smoking. PMID:8605854
Numerical model of a graphene component for the sensing of weak electromagnetic signals
NASA Astrophysics Data System (ADS)
Nasswettrova, A.; Fiala, P.; Nešpor, D.; Drexler, P.; Steinbauer, M.
2015-05-01
The paper discusses a numerical model and provides an analysis of a graphene coaxial line suitable for sub-micron sensors of magnetic fields. In relation to the presented concept, the target areas and disciplines include biology, medicine, prosthetics, and microscopic solutions for modern actuators or SMART elements. The proposed numerical model is based on an analysis of a periodic structure with high repeatability, and it exploits a graphene polymer having a basic dimension in nanometers. The model simulates the actual random motion in the structure as the source of spurious signals and considers the pulse propagation along the structure; furthermore, the model also examines whether and how the pulse will be distorted at the beginning of the line, given the various ending versions. The results of the analysis are necessary for further use of the designed sensing devices based on graphene structures.
[Comparative quality measurements part 3: funnel plots].
Kottner, Jan; Lahmann, Nils
2014-02-01
Comparative quality measurements between organisations or institutions are common. Quality measures need to be standardised and risk adjusted. Random error must also be taken adequately into account. Rankings without consideration of the precision lead to flawed interpretations and enhance "gaming". Application of confidence intervals is one possibility to take chance variation into account. Funnel plots are modified control charts based on Statistical Process Control (SPC) theory. The quality measures are plotted against their sample size. Warning and control limits that are 2 or 3 standard deviations from the center line are added. With increasing group size the precision increases, and so the control limits form a funnel. Data points within the control limits are considered to show common cause variation; data points outside them, special cause variation that warrants investigation, without resorting to spurious rankings. Funnel plots offer data-based information about how to evaluate institutional performance within quality management contexts.
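For a proportion-type quality indicator, the funnel's limits can be sketched with the usual normal approximation. This is a minimal sketch with an assumed function name; exact funnel plots often use binomial quantiles instead:

```python
import math

def funnel_limits(p_bar, n, z=3.0):
    """Lower/upper limits for a proportion indicator at group size n.

    p_bar : overall (center-line) proportion
    z     : 2.0 for warning limits, 3.0 for control limits
    Limits are clipped to [0, 1]; they narrow as n grows, tracing
    the funnel shape.
    """
    se = math.sqrt(p_bar * (1.0 - p_bar) / n)
    return max(0.0, p_bar - z * se), min(1.0, p_bar + z * se)
```

Plotting each institution's observed proportion against its group size, with these limit curves overlaid, then flags only the points falling outside the funnel.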
NASA Astrophysics Data System (ADS)
Almuhammadi, Khaled; Selvakumaran, Lakshmi; Alfano, Marco; Yang, Yang; Bera, Tushar Kanti; Lubineau, Gilles
2015-12-01
Electrical impedance tomography (EIT) is a low-cost, fast and effective structural health monitoring technique that can be used on carbon fiber reinforced polymers (CFRP). Electrodes are a key component of any EIT system and as such they should feature low resistivity as well as high robustness and reproducibility. Surface preparation is required prior to bonding of electrodes. Currently this task is mostly carried out by traditional sanding. However this is a time consuming procedure which can also induce damage to surface fibers and lead to spurious electrode properties. Here we propose an alternative processing technique based on the use of pulsed laser irradiation. The processing parameters that result in selective removal of the electrically insulating resin with minimum surface fiber damage are identified. A quantitative analysis of the electrical contact resistance is presented and the results are compared with those obtained using sanding.
Can Invalid Bioactives Undermine Natural Product-Based Drug Discovery?
2015-01-01
High-throughput biology has contributed a wealth of data on chemicals, including natural products (NPs). Recently, attention was drawn to certain, predominantly synthetic, compounds that are responsible for disproportionate percentages of hits but are false actives. Spurious bioassay interference led to their designation as pan-assay interference compounds (PAINS). NPs lack comparable scrutiny, which this study aims to rectify. Systematic mining of 80+ years of the phytochemistry and biology literature, using the NAPRALERT database, revealed that only 39 compounds represent the NPs most reported by occurrence, activity, and distinct activity. Over 50% are not explained by phenomena known for synthetic libraries, and all had manifold ascribed bioactivities, designating them as invalid metabolic panaceas (IMPs). Cumulative distributions of ∼200,000 NPs uncovered that NP research follows power-law characteristics typical for behavioral phenomena. Projection into occurrence–bioactivity–effort space produces the hyperbolic black hole of NPs, where IMPs populate the high-effort base. PMID:26505758
Infrared dim target detection based on visual attention
NASA Astrophysics Data System (ADS)
Wang, Xin; Lv, Guofang; Xu, Lizhong
2012-11-01
Accurate and fast detection of infrared (IR) dim targets has very important meaning for infrared precise guidance, early warning, video surveillance, etc. Based on human visual attention mechanisms, an automatic detection algorithm for infrared dim targets is presented. After analyzing the characteristics of infrared dim target images, the method first designs Difference of Gaussians (DoG) filters to compute the saliency map. Then the salient regions where potential targets may exist are extracted by searching through the saliency map with a control mechanism of winner-take-all (WTA) competition and inhibition-of-return (IOR). At last, these regions are identified by the characteristics of the dim IR targets, so the true targets are detected and the spurious objects are rejected. The experiments are performed on some real-life IR images, and the results prove that the proposed method has satisfying detection effectiveness and robustness. Meanwhile, it has high detection efficiency and can be used for real-time detection.
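The center-surround step of such a detector can be sketched as a Difference-of-Gaussians filter. A minimal sketch assuming separable Gaussian blurs implemented directly (function names and the σ values are assumptions; a full system would add the WTA/IOR search over the map):

```python
import numpy as np

def dog_saliency(img, sigma_c=1.0, sigma_s=3.0):
    """Difference-of-Gaussians saliency map: a narrow (center) blur
    minus a wide (surround) blur, which boosts small bright targets
    and suppresses smooth background."""
    def gaussian_blur(a, sigma):
        r = int(3 * sigma)
        x = np.arange(-r, r + 1)
        k = np.exp(-x**2 / (2 * sigma**2))
        k /= k.sum()
        # separable convolution: rows first, then columns
        a = np.apply_along_axis(lambda m: np.convolve(m, k, 'same'), 1, a)
        return np.apply_along_axis(lambda m: np.convolve(m, k, 'same'), 0, a)
    return gaussian_blur(img, sigma_c) - gaussian_blur(img, sigma_s)
```

A dim point target produces a sharp positive peak in the map, so thresholding or a winner-take-all search over the map locates candidate regions.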
Use of Acoustic Emission to Monitor Progressive Damage Accumulation in KEVLAR® 49 Composites
NASA Astrophysics Data System (ADS)
Waller, J. M.; Andrade, E.; Saulsberry, R. L.
2010-02-01
Acoustic emission (AE) data acquired during intermittent load hold tensile testing of epoxy impregnated Kevlar® 49 (K/Ep) composite strands were analyzed to monitor progressive damage during the approach to tensile failure. Insight into the progressive damage of K/Ep strands was gained by monitoring AE event rate and energy. Source location based on energy attenuation and arrival time data was used to discern between significant AE attributable to microstructural damage and spurious AE attributable to noise. One of the significant findings was the observation of increasing violation of the Kaiser effect (Felicity ratio <1.0) with damage accumulation. The efficacy of three different intermittent load hold stress schedules that allowed the Felicity ratio to be determined analytically is discussed.
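The Felicity ratio used above compares the load at which significant AE resumes on reload to the previous maximum load. A minimal sketch, assuming (as an illustration only) that "significant" AE onset is the load of the min_events-th event in the ramp:

```python
def felicity_ratio(event_loads, prev_max_load, min_events=5):
    """Felicity ratio for one reload cycle.

    event_loads   : loads at which AE events occurred during the ramp
    prev_max_load : maximum load reached in the previous cycle
    Returns None when too few events occur to call the AE significant.
    Ratios below 1.0 violate the Kaiser effect and indicate
    accumulating damage; choosing the onset criterion is the hard
    part in practice.
    """
    if len(event_loads) < min_events:
        return None
    return sorted(event_loads)[min_events - 1] / prev_max_load
```

Tracking this ratio cycle by cycle gives the trend toward increasing Kaiser-effect violation that the study reports as failure approaches.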
Spatial recurrence analysis: A sensitive and fast detection tool in digital mammography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prado, T. L.; Galuzio, P. P.; Lopes, S. R.
Efficient diagnostics of breast cancer requires fast digital mammographic image processing. Many breast lesions, both benign and malignant, are barely visible to the untrained eye and require accurate and reliable methods of image processing. We propose a new method of digital mammographic image analysis that meets both needs. It uses the concept of spatial recurrence as the basis of a spatial recurrence quantification analysis, which is the spatial extension of the well-known time recurrence analysis. The recurrence-based quantifiers are able to evidence breast lesions as well as the best standard image processing methods available, but with a better control over the spurious fragments in the image.
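The simplest spatial-recurrence quantifier, the recurrence rate, can be sketched as the fraction of pixel pairs in a patch whose gray levels are within a threshold of each other. This is an illustrative sketch (function name and the gray-level distance are assumptions; the paper's quantifiers are richer):

```python
import numpy as np

def recurrence_rate(window, eps):
    """Recurrence rate of an image patch: fraction of pixel pairs
    whose gray levels differ by less than eps. Homogeneous patches
    score near 1.0; textured or lesion-bearing patches score lower."""
    x = np.asarray(window, dtype=float).ravel()
    R = np.abs(x[:, None] - x[None, :]) < eps   # recurrence matrix
    return R.mean()
```

Scanning a mammogram with such windowed quantifiers produces a map in which lesions stand out against the homogeneous background.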
Accurate finite difference methods for time-harmonic wave propagation
NASA Technical Reports Server (NTRS)
Harari, Isaac; Turkel, Eli
1994-01-01
Finite difference methods for solving problems of time-harmonic acoustics are developed and analyzed. Multidimensional inhomogeneous problems with variable, possibly discontinuous, coefficients are considered, accounting for the effects of employing nonuniform grids. A weighted-average representation is less sensitive to transitions in wave resolution (due to variable wave numbers or nonuniform grids) than the standard pointwise representation. Further enhancement in method performance is obtained by basing the stencils on generalizations of Padé approximation, or generalized definitions of the derivative, reducing spurious dispersion, anisotropy and reflection, and by improving the representation of source terms. The resulting schemes have fourth-order accurate local truncation error on uniform grids and third order in the nonuniform case. Guidelines for discretization pertaining to grid orientation and resolution are presented.
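The baseline the authors improve upon is the standard pointwise scheme, which can be sketched for the 1D Helmholtz equation u'' + k²u = 0. This toy (the wavenumber, grid sizes, and boundary data are arbitrary choices) shows the second-order behaviour of the 3-point stencil, not the paper's fourth-order weighted/Padé stencils:

```python
import numpy as np

def helmholtz_1d(k, n):
    # Standard pointwise 3-point stencil for u'' + k^2 u = 0 on [0, 1],
    # Dirichlet data u(0) = 0, u(1) = sin(k); exact solution u(x) = sin(k x).
    h = 1.0 / (n + 1)
    x = np.linspace(0.0, 1.0, n + 2)
    main = np.full(n, -2.0 / h**2 + k**2)
    off = np.full(n - 1, 1.0 / h**2)
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    b = np.zeros(n)
    b[-1] = -np.sin(k) / h**2          # known boundary value moved to the RHS
    u = np.zeros(n + 2)
    u[-1] = np.sin(k)
    u[1:-1] = np.linalg.solve(A, b)
    return x, u

k = 10.0
x, u = helmholtz_1d(k, 200)
err = np.max(np.abs(u - np.sin(k * x)))
print(err)
```

The residual error here is dominated by spurious numerical dispersion, exactly the effect the higher-order stencils in the paper are designed to reduce; refining the grid shrinks it at the expected second-order rate.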
Remote defect imaging for plate-like structures based on the scanning laser source technique
NASA Astrophysics Data System (ADS)
Hayashi, Takahiro; Maeda, Atsuya; Nakao, Shogo
2018-04-01
In defect imaging with the scanning laser source technique, the use of a fixed receiver enables stable measurement of the flexural waves generated by the laser at multiple rastering points. This study discusses defect imaging by remote measurement using a laser Doppler vibrometer as the receiver. Narrow-band burst waves were generated by modulating the laser pulse trains of a fiber laser to enhance the signal-to-noise ratio in the frequency domain. Averaging three images obtained at three different frequencies suppressed spurious distributions due to resonance. The experimental system equipped with these newly devised means enabled us to visualize defects and adhesive objects in plate-like structures such as a plate with complex geometries and a branch pipe.
NASA Technical Reports Server (NTRS)
Anders, E.
1978-01-01
Several objections are raised to the contention of Delano and Ringwood (1978) that the siderophiles in the lunar highlands are mainly of indigenous rather than meteoritic origin. It is argued that the rejection of 29 pristine lunar rocks characterized by low siderophilic abundances, plutonic textures and high age on the supposition that they are impact melts is unjustified by petrographic evidence. It is further contended that the approach used by Delano and Ringwood leads to spurious excesses of Au, Ni and volatiles, which disappear when the highland composition is based on pristine lunar rocks rather than undercorrected breccias. Large, systematic depletions relative to terrestrial oceanic tholeiites are revealed by other derivations of abundances in lunar highland materials.
Pernice, W H; Payne, F P; Gallagher, D F
2007-09-03
We present a novel numerical scheme for the simulation of the field enhancement by metal nano-particles in the time domain. The algorithm is based on a combination of the finite-difference time-domain method and the pseudo-spectral time-domain method for dispersive materials. The hybrid solver leads to an efficient subgridding algorithm that does not suffer from spurious field spikes as do FDTD schemes. Simulation of the field enhancement by gold particles shows the expected exponential field profile. The enhancement factors are computed for single particles and particle arrays. Due to the geometry conforming mesh the algorithm is stable for long integration times and thus suitable for the simulation of resonance phenomena in coupled nano-particle structures.
Are the low-lying isovector 1+ states scissors vibrations?
NASA Astrophysics Data System (ADS)
Faessler, A.
At the Technische Hochschule in Darmstadt, the group of Richter and coworkers found in 1983/84 low-lying isovector 1+ states in deformed rare-earth nuclei. Such states had been predicted in the generalized Bohr-Mottelson model and in the interacting boson model no. 2 (IBA2). In the generalized Bohr-Mottelson model one allows for separate proton and neutron quadrupole deformations. If one includes only static proton and neutron deformations, the generalized Bohr-Mottelson model reduces to the two-rotor model. It describes the excitation energy of these states in good agreement with the data but overestimates the magnetic dipole transition probabilities by a factor of 5. In the interacting boson model (IBA2), where only the outermost nucleons participate in the excitation, the magnetic dipole transition probability is overestimated by only a factor of 2. The excessive collectivity in both models results from the fact that they concentrate the whole strength of the scissors vibrations in one state. A microscopic description is needed to describe the spreading of the scissors strength over several states. For a microscopic determination of these scissors states one uses the Quasi-particle Random Phase Approximation (QRPA). This approach, however, has a serious difficulty. Since the calculation rotates the nucleus into the intrinsic system, the state corresponding to the rotation of the whole nucleus is a spurious state. The usual procedure to remove this spuriosity invokes the Thouless theorem, which states that an operator commuting with the total Hamiltonian (here the total angular momentum, corresponding to a rotation of the whole system) produces the spurious state when applied to the ground state; it further states that this spurious state lies at zero excitation energy (it is degenerate with the ground state) and is orthogonal to all physical states.
Thus the usual approach is to vary the quadrupole-quadrupole force strength so that one state lies at zero excitation energy and to identify that state with the spurious one. This procedure assumes that the total angular momentum commutes with the total Hamiltonian. But this is not the case, since the total Hamiltonian contains a deformed Saxon-Woods potential. One therefore has to ensure explicitly that the spurious state is removed. In our approach this is done by introducing a Lagrange multiplier for each excited state and requiring that these states be orthogonal to the spurious state, which is explicitly constructed by applying the total angular momentum operator to the ground state. To reduce the number of free parameters in the Hamiltonian we take the Saxon-Woods potential for the deformed nuclei from the literature (with minor adjustments) and determine the proton-proton, neutron-neutron and proton-neutron quadrupole force constants by requiring that the Hamiltonian commute with the total angular momentum in the (QRPA) ground state. This yields equations fixing all three coupling constants of the quadrupole-quadrupole force, allowing even for isospin symmetry violation. The spin-spin force is taken from the Reid soft-core potential. A possible spin-quadrupole force was taken from the work of Soloviev, but it turns out not to be important. The calculation shows that the strength of the scissors vibrations is spread over many states. The main 1+ state at around 3 MeV has an overlap of the order of 14% with the scissors state. About 50% of the scissors state is spread over the physical states up to an excitation energy of 6 MeV; the rest is distributed over higher-lying states. The expectation value of the many-body Hamiltonian in the scissors vibrational state corresponds to an excitation energy of roughly 7 MeV above the ground state. The results also support the experimental finding that these states are mainly orbital excitations and not very collective.
Normally only one proton and one neutron particle-hole pair participate with a large amplitude in forming these states, but those protons and neutrons that are excited perform scissors-type vibrations.
Optimizing the specificity of nucleic acid hybridization
Zhang, David Yu; Chen, Sherry Xi; Yin, Peng
2014-01-01
The specific hybridization of complementary sequences is an essential property of nucleic acids, enabling diverse biological and biotechnological reactions and functions. However, the specificity of nucleic acid hybridization is compromised for long strands, except near the melting temperature. Here, we analytically derived the thermodynamic properties of a hybridization probe that would enable near-optimal single-base discrimination and perform robustly across diverse temperature, salt and concentration conditions. We rationally designed ‘toehold exchange’ probes that approximate these properties, and comprehensively tested them against five different DNA targets and 55 spurious analogues with energetically representative single-base changes (replacements, deletions and insertions). These probes produced discrimination factors between 3 and 100+ (median, 26). Without retuning, our probes function robustly from 10 °C to 37 °C, from 1 mM Mg2+ to 47 mM Mg2+, and with nucleic acid concentrations from 1 nM to 5 μM. Experiments with RNA also showed effective single-base change discrimination. PMID:22354435
Enthalpy-Based Thermal Evolution of Loops: III. Comparison of Zero-Dimensional Models
NASA Technical Reports Server (NTRS)
Cargill, P. J.; Bradshaw, Stephen J.; Klimchuk, James A.
2012-01-01
Zero-dimensional (0D) hydrodynamic models provide a simple and quick way to study the thermal evolution of coronal loops subjected to time-dependent heating. This paper presents a comparison of a number of 0D models that have been published in the past and is intended as a guide for those interested in either using the old models or developing new ones. The principal difference between the models is the way the exchange of mass and energy between corona, transition region, and chromosphere is treated as plasma cycles into and out of a loop during a heating-cooling cycle. It is shown that models based on the principles of mass and energy conservation can give satisfactory results at some stages of the loop evolution or, in the case of the Enthalpy-Based Thermal Evolution of Loops (EBTEL) model, at all stages. Empirical models can lead to low coronal densities, spurious delays between the peak density and temperature, and, for short heating pulses, overly short loop lifetimes.
Skeleton-based tracing of curved fibers from 3D X-ray microtomographic imaging
NASA Astrophysics Data System (ADS)
Huang, Xiang; Wen, Donghui; Zhao, Yanwei; Wang, Qinghui; Zhou, Wei; Deng, Daxiang
A skeleton-based fiber tracing algorithm is described and applied to a specific fibrous material, porous metal fiber sintered sheet (PMFSS), featuring high porosity and curved fibers. The skeleton segments are first categorized according to the connectivity of the skeleton paths. Spurious segments such as fiber bonds are detected making extensive use of distance transform (DT) values. Single fibers are then traced and reconstructed by consecutively choosing the connecting skeleton segment pairs with the most similar orientations and radii. Moreover, to reduce misconnections caused by the tracing order, a multilevel tracing strategy is proposed. The fibrous network is finally reconstructed by dilating single fibers according to the DT values. Based on the traced single fibers, morphological information on fiber length, radius, orientation, and tortuosity is quantitatively analyzed and compared with our previous results (Wang et al., 2013). In addition, the number of bonds per fiber is assessed for the first time. The methodology described in this paper can be extended to other fibrous materials with adapted parameters.
Alternative dimensional reduction via the density matrix
NASA Astrophysics Data System (ADS)
de Carvalho, C. A.; Cornwall, J. M.; da Silva, A. J.
2001-07-01
We give graphical rules, based on earlier work for the functional Schrödinger equation, for constructing the density matrix for scalar and gauge fields in equilibrium at finite temperature T. More useful is a dimensionally reduced effective action (DREA) constructed from the density matrix by further functional integration over the arguments of the density matrix coupled to a source. The DREA is an effective action in one less dimension which may be computed order by order in perturbation theory or by dressed-loop expansions; it encodes all thermal matrix elements. We term the DREA procedure alternative dimensional reduction, to distinguish it from the conventional dimensionally reduced field theory (DRFT) which applies at infinite T. The DREA is useful because it gives a dimensionally reduced theory usable at any T including infinity, where it yields the DRFT, and because it does not and cannot have certain spurious infinities which sometimes occur in the density matrix itself or the conventional DRFT; these come from ln T factors at infinite temperature. The DREA can be constructed to all orders (in principle) and the only regularizations needed are those which control the ultraviolet behavior of the zero-T theory. An example of spurious divergences in the DRFT occurs in d = 2+1 φ4 theory dimensionally reduced to d = 2. We study this theory and show that the rules for the DREA replace these "wrong" divergences in physical parameters by calculable powers of ln T; we also compute the phase transition temperature of this φ4 theory in one-loop order. Our density-matrix construction is equivalent to a construction of the Landau-Ginzburg "coarse-grained free energy" from a microscopic Hamiltonian.
DS Sentry: an acquisition ASIC for smart, micro-power sensing applications
NASA Astrophysics Data System (ADS)
Liobe, John; Fiscella, Mark; Moule, Eric; Balon, Mark; Bocko, Mark; Ignjatovic, Zeljko
2011-06-01
Unattended ground monitoring that combines seismic and acoustic information can be a highly valuable tool in intelligence gathering; however there are several prerequisites for this approach to be viable. The first is high sensitivity as well as the ability to discriminate real threats from noise and other spurious signals. By combining ground sensing with acoustic and image monitoring this requirement may be achieved. Moreover, the DS Sentry® provides innate spurious signal rejection by the "active-filtering" technique employed as well as embedding some basic statistical analysis. Another primary requirement is spatial and temporal coverage. The ideal is uninterrupted, long-term monitoring of an area. Therefore, sensors should be densely deployed and consume very little power. Furthermore, sensors must be inexpensive and easily deployed to allow dense placements in critical areas. The ADVIS DS Sentry®, which is a fully-custom integrated circuit that enables smart, micro-power monitoring of dynamic signals, is the foundation of the proposed system. The core premise behind this technology is the use of an ultra-low power front-end for active monitoring of dynamic signals in conjunction with a high-resolution, ΣΔ-based analog-to-digital converter, which utilizes a novel noise rejection technique and is only employed when a potential threat has been detected. The DS Sentry® can be integrated with seismic accelerometers and microphones and user-programmed to continuously monitor for signals with specific signatures such as impacts, footsteps, excavation noise, vehicle-induced ground vibrations, or speech, while consuming only microwatts of power. This will enable up to several years of continuous monitoring on a single small battery while concurrently mitigating false threats.
Tong, Jonathan; Mao, Oliver; Goldreich, Daniel
2013-01-01
Two-point discrimination is widely used to measure tactile spatial acuity. The validity of the two-point threshold as a spatial acuity measure rests on the assumption that two points can be distinguished from one only when the two points are sufficiently separated to evoke spatially distinguishable foci of neural activity. However, some previous research has challenged this view, suggesting instead that two-point task performance benefits from an unintended non-spatial cue, allowing spuriously good performance at small tip separations. We compared the traditional two-point task to an equally convenient alternative task in which participants attempt to discern the orientation (vertical or horizontal) of two points of contact. We used precision digital readout calipers to administer two-interval forced-choice versions of both tasks to 24 neurologically healthy adults, on the fingertip, finger base, palm, and forearm. We used Bayesian adaptive testing to estimate the participants’ psychometric functions on the two tasks. Traditional two-point performance remained significantly above chance levels even at zero point separation. In contrast, two-point orientation discrimination approached chance as point separation approached zero, as expected for a valid measure of tactile spatial acuity. Traditional two-point performance was so inflated at small point separations that 75%-correct thresholds could be determined on all tested sites for fewer than half of participants. The 95%-correct thresholds on the two tasks were similar, and correlated with receptive field spacing. In keeping with previous critiques, we conclude that the traditional two-point task provides an unintended non-spatial cue, resulting in spuriously good performance at small spatial separations. Unlike two-point discrimination, two-point orientation discrimination rigorously measures tactile spatial acuity. We recommend the use of two-point orientation discrimination for neurological assessment. 
PMID:24062677
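The logic of the critique can be illustrated with a toy psychometric function: if a non-spatial cue lifts performance above chance even at zero separation, a 75%-correct threshold can become undefined. All parameter values below are invented for illustration and are not the paper's fitted values:

```python
import numpy as np

def p_correct(sep, threshold, slope, floor=0.5):
    # Two-interval forced-choice psychometric function rising from
    # `floor` (chance = 0.5) toward 1.0 as tip separation grows.
    return floor + (1.0 - floor) * (1.0 - np.exp(-(sep / threshold) ** slope))

seps = np.linspace(0.0, 10.0, 101)
valid = p_correct(seps, threshold=3.0, slope=2.0)                 # chance at sep = 0
inflated = p_correct(seps, threshold=3.0, slope=2.0, floor=0.8)   # non-spatial cue

def threshold_75(seps, p):
    # 75%-correct threshold exists only if performance starts below 0.75
    if p[0] >= 0.75:
        return None
    return seps[np.where(p >= 0.75)[0][0]]

print(threshold_75(seps, valid), threshold_75(seps, inflated))
```

This mirrors the reported finding: with the traditional two-point task performance was so inflated at small separations that 75%-correct thresholds were often undeterminable, whereas orientation discrimination approached chance at zero separation.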
Evaluating Heterogeneous Conservation Effects of Forest Protection in Indonesia
Shah, Payal; Baylis, Kathy
2015-01-01
Establishing legal protection for forest areas is the most common policy used to limit forest loss. This article evaluates the effectiveness of seven Indonesian forest protected areas introduced between 1999 and 2012. Specifically, we explore how the effectiveness of these parks varies over space. Protected areas have mixed success in preserving forest, and it is important for conservationists to understand where they work and where they do not. Observed differences in the estimated treatment effect of protection may be driven by several factors. Indonesia is particularly diverse, with the landscape, forest and forest threats varying greatly from region to region, and this diversity may drive differences in the effectiveness of protected areas in conserving forest. However, the observed variation may also be spurious and arise from differing degrees of bias in the estimated treatment effect over space. In this paper, we use a difference-in-differences approach comparing treated observations and matched controls to estimate the effect of each protected area. We then distinguish the true variation in protected area effectiveness from spurious variation driven by several sources of estimation bias. Based on our most flexible method, which allows the data generating process to vary across space, we find that the national average effect of protection preserves an additional 1.1% of forest cover; however, the effects of individual parks range from a decrease of 3.4% to an increase of 5.3%, and most parks differ from the national average. Potential biases may affect estimates in two parks, but results consistently show that Sebangau National Park is more effective, while two parks are substantially less able to protect forest cover than the national average. PMID:26039754
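The difference-in-differences estimator at the core of the park-level estimates can be sketched as follows. The numbers are synthetic, and the paper's actual specification (matched controls, spatially varying data-generating process) is considerably richer:

```python
import numpy as np

def did(treated_pre, treated_post, control_pre, control_post):
    # DiD: change in treated units minus change in matched controls,
    # which nets out common time trends affecting both groups.
    return ((np.mean(treated_post) - np.mean(treated_pre))
            - (np.mean(control_post) - np.mean(control_pre)))

# synthetic forest-cover shares (%): protection slows loss by ~1 point
treated_pre  = np.array([80.0, 82.0, 78.0])
treated_post = np.array([78.0, 80.0, 76.0])   # treated lose 2 points
control_pre  = np.array([81.0, 79.0, 80.0])
control_post = np.array([78.0, 76.0, 77.0])   # controls lose 3 points
effect = did(treated_pre, treated_post, control_pre, control_post)
print(effect)
```

Running this estimator separately per park, then testing whether the park-level effects genuinely differ from the national average or merely reflect differing estimation bias, is the spatial-heterogeneity question the paper addresses.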
An efficient link prediction index for complex military organization
NASA Astrophysics Data System (ADS)
Fan, Changjun; Liu, Zhong; Lu, Xin; Xiu, Baoxin; Chen, Qing
2017-03-01
Quality of information is crucial for decision-makers who must judge battlefield situations and design the best operation plans; however, real intelligence data are often incomplete and noisy. Missing-link prediction methods and spurious-link identification algorithms can be applied if the complex military organization is modeled as a complex network in which nodes represent functional units and edges denote communication links. Traditional link prediction methods usually work well on homogeneous networks but rarely on heterogeneous ones, and a military network is a typical heterogeneous network, with different types of nodes and edges. In this paper, we propose a combined link prediction index that considers both node-type effects and structural similarities, and demonstrate that it is remarkably superior to all 25 existing similarity-based methods, both in predicting missing links and in identifying spurious links, on a real military network dataset. We also investigate the algorithms' robustness in noisy environments and find that mistaken information is more misleading than incomplete information in military settings, in contrast to recommendation systems; our method maintains the best performance under small noise. Since real military network intelligence must first be carefully checked owing to its significance, with link prediction methods then adopted to purify the network of the remaining latent noise, the method proposed here is applicable in real situations. Finally, since the FINC-E model, used here to describe complex military organizations, also suits many other social organizations, such as criminal networks and business organizations, our method has prospects in these areas for tasks like detecting underground relationships between terrorists or predicting potential business markets for decision-makers.
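A minimal similarity-based scorer in the spirit described: common-neighbour counts weighted by a node-type compatibility table. The typing, the weights, and the toy graph are all invented for illustration; the paper's combined index over the FINC-E model is more elaborate:

```python
import itertools

def weighted_cn_scores(adj, node_type, type_weight):
    # score(u, v) = w(type_u, type_v) * |common neighbours of u and v|,
    # combining structural similarity with node-type effects.
    scores = {}
    for u, v in itertools.combinations(sorted(adj), 2):
        if v in adj[u]:
            continue  # only score non-edges (candidate missing links)
        w = type_weight.get(frozenset((node_type[u], node_type[v])), 1.0)
        scores[(u, v)] = w * len(adj[u] & adj[v])
    return scores

adj = {
    "HQ": {"S1", "S2"},      # headquarters node
    "S1": {"HQ", "U1"},      # sensor nodes
    "S2": {"HQ", "U1"},
    "U1": {"S1", "S2"},      # firing-unit node
}
node_type = {"HQ": "command", "S1": "sensor", "S2": "sensor", "U1": "unit"}
type_weight = {frozenset(("command", "unit")): 2.0,    # hypothetical weights
               frozenset(("sensor", "sensor")): 0.5}
scores = weighted_cn_scores(adj, node_type, type_weight)
print(scores)
```

High-scoring non-edges are candidate missing links; symmetrically, existing edges with very low scores would be candidates for spurious links.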
NASA Astrophysics Data System (ADS)
Paredes Mellone, O. A.; Bianco, L. M.; Ceppi, S. A.; Goncalves Honnicke, M.; Stutz, G. E.
2018-06-01
A study of the background radiation in inelastic X-ray scattering (IXS) and X-ray emission spectroscopy (XES), based on an analytical model, is presented. The calculation model considers spurious radiation originating from elastic and inelastic scattering processes along the beam paths of a Johann-type spectrometer. The dependence of the background radiation intensity on the medium of the beam paths (air and helium), the analysed energy, and the radius of the Rowland circle was studied. The present study shows that both for IXS and XES experiments the background radiation is dominated by spurious radiation owing to scattering processes along the sample-analyser beam path. For IXS experiments the spectral distribution of the main component of the background radiation shows a weak linear dependence on the energy in most cases. In the case of XES, a strong non-linear behaviour of the background radiation intensity was predicted for energy analysis very close to the back-diffraction condition, with a rapid increase in intensity as the analyser Bragg angle approaches π/2. The contribution of the analyser-detector beam path is significantly weaker and resembles the spectral distribution of the measured spectra. Present results show that for usual experimental conditions no appreciable structures are introduced by the background radiation into the measured spectra, in either IXS or XES experiments. The usefulness of properly calculating the background profile is demonstrated in a background-subtraction procedure for a real experimental situation. The calculation model was able to simulate with high accuracy the energy dependence of the background radiation intensity measured in a particular XES experiment with air beam paths.
A spurious warming trend in the NMME equatorial Pacific SST hindcasts
NASA Astrophysics Data System (ADS)
Shin, Chul-Su; Huang, Bohua
2017-06-01
Using seasonal hindcasts of six models participating in the North American Multimodel Ensemble project, the trend of the predicted sea surface temperature (SST) in the tropical Pacific for 1982-2014 at each lead month, and its evolution with lead month, are investigated for all individual models. Since the coupled models are initialized with observed ocean, atmosphere, and land states from observation-based reanalyses, some using their own data assimilation processes, one would expect the observed SST trend to be reasonably well captured in their seasonal predictions. However, although the observed SST features a weak cooling trend for the 33-year period, with a La Niña-like spatial pattern in the tropical central-eastern Pacific all year round, all models having a time-dependent realistic concentration of greenhouse gases (GHG) display a warming trend in the equatorial Pacific that amplifies as the lead time increases. These models' behavior is nearly independent of the starting month of the hindcasts, although the growth rate of the trend varies with lead month. This key characteristic of the forecast SST trend in the equatorial Pacific is also identified in the NCAR CCSM3 hindcasts, which use the GHG concentration of a fixed year. This suggests that global warming forcing may not play a significant role in generating the spurious warming trend of the coupled models' SST hindcasts in the tropical Pacific. This model SST trend in the tropical central-eastern Pacific, opposite to the observed one, causes a developing El Niño-like warming bias in the forecast SST that peaks in boreal winter. Implications for seasonal prediction are discussed.
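Trend-versus-lead diagnostics of this kind reduce to a least-squares slope computed per lead month; a sketch with synthetic numbers, where the amplification of the warming rate with lead time is invented to mimic the reported behaviour:

```python
import numpy as np

def trend_per_decade(years, sst):
    # least-squares linear trend of an SST series, in K per decade
    slope = np.polyfit(years, sst, 1)[0]
    return 10.0 * slope

years = np.arange(1982, 2015)
# synthetic hindcast SST anomalies whose warming trend grows with lead time
trends = []
for lead in range(1, 7):
    sst = 0.002 * lead * (years - years[0])
    trends.append(trend_per_decade(years, sst))
print(np.round(trends, 3))
```

Plotting such per-lead trends against the observed (weakly cooling) trend is how a lead-dependent spurious warming drift becomes visible.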
ERIC Educational Resources Information Center
Gartrell, John; Marquez, Stephanie Amadeo
1995-01-01
Criticizes data analysis and interpretation in "The Bell Curve": Herrnstein and Murray do not actually study the "cognitive elite"; do not control for education when examining effects of cognitive ability on occupational outcomes; ignore cultural diversity within broad ethnic groups (Asian Americans, Latinos); and ignore gender…
Infant Learning Is Influenced by Local Spurious Generalizations
ERIC Educational Resources Information Center
Gerken, LouAnn; Quam, Carolyn
2017-01-01
In previous work, 11-month-old infants were able to learn rules about the relation of the consonants in CVCV words from just four examples. The rules involved phonetic feature relations (same voicing or same place of articulation), and infants' learning was impeded when pairs of words allowed alternative possible generalizations (e.g. two words…
Republication of "A Simple--But Powerful--Power Simulation"
ERIC Educational Resources Information Center
Bolman, Lee; Deal, Terrence E.
2017-01-01
The authors write that the longer they study and work in organizations, the more they discover power to be one of the central issues which researchers and students must understand. Researchers who ignore power run the risk of spurious, irrelevant findings. Students who assume administrative positions without a proper understanding of power and how…
Applying Statistics in the Undergraduate Chemistry Laboratory: Experiments with Food Dyes.
ERIC Educational Resources Information Center
Thomasson, Kathryn; Lofthus-Merschman, Sheila; Humbert, Michelle; Kulevsky, Norman
1998-01-01
Describes several experiments to teach different aspects of the statistical analysis of data using household substances and a simple analysis technique. Each experiment can be performed in three hours. Students learn about treatment of spurious data, application of a pooled variance, linear least-squares fitting, and simultaneous analysis of dyes…
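Treatment of a suspect ("spurious") datum, the first topic listed above, is commonly taught with Dixon's Q test. A sketch; the 90%-confidence critical value for five replicates (≈0.64) is quoted from standard tables as an assumption and should be checked against the table actually used in class:

```python
def dixon_q(values):
    # Q = (gap between the suspect extreme value and its nearest
    #      neighbour) / (total range of the data)
    s = sorted(values)
    gap = max(s[1] - s[0], s[-1] - s[-2])   # suspect is one of the extremes
    return gap / (s[-1] - s[0])

absorbances = [0.189, 0.167, 0.187, 0.183, 0.186]  # 0.167 looks spurious
q = dixon_q(absorbances)
Q_CRIT_90_N5 = 0.642   # tabulated critical value, n = 5, 90% confidence (assumed)
print(q, q > Q_CRIT_90_N5)
```

Since Q exceeds the critical value here, the low reading would be rejected before computing the pooled statistics.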
ERIC Educational Resources Information Center
Bauer, Daniel J.; Curran, Patrick J.
2004-01-01
Structural equation mixture modeling (SEMM) integrates continuous and discrete latent variable models. Drawing on prior research on the relationships between continuous and discrete latent variable models, the authors identify 3 conditions that may lead to the estimation of spurious latent classes in SEMM: misspecification of the structural model,…
A patient with serum creatinine of 61 mg/dl
Sriram, S.; Srinivas, S.; Naveen, P. S. R.
2017-01-01
Spurious elevation of serum creatinine by the Jaffe assay is known to occur due to a variety of substances. This can result in subjecting the patient to invasive and complicated procedures such as dialysis. We report a rare case of false elevation of this renal parameter following exposure to an organic solvent. PMID:28182048
The Seven Deadly Sins of World University Ranking: A Summary from Several Papers
ERIC Educational Resources Information Center
Soh, Kaycheng
2017-01-01
World university rankings use the weight-and-sum approach to process data. Although this seems to pass the common-sense test, it has statistical problems. In recent years, seven such problems have been uncovered: spurious precision, weight discrepancies, assumed mutual compensation, indicator redundancy, inter-system discrepancy, negligence of…
The Length of a Pestle: A Class Exercise in Measurement and Statistical Analysis.
ERIC Educational Resources Information Center
O'Reilly, James E.
1986-01-01
Outlines the simple exercise of measuring the length of an object as a concrete paradigm of the entire process of making chemical measurements and treating the resulting data. Discusses the procedure, significant figures, measurement error, spurious data, rejection of results, precision and accuracy, and student responses. (TW)
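The data-treatment steps in such an exercise (mean, standard deviation, confidence interval) reduce to a few lines. The measurements and the Student's t value for five measurements at 95% confidence (≈2.776) are illustrative assumptions:

```python
import statistics

lengths_cm = [20.12, 20.15, 20.11, 20.14, 20.13]  # five repeated measurements
n = len(lengths_cm)
mean = statistics.mean(lengths_cm)
s = statistics.stdev(lengths_cm)   # sample standard deviation (precision)
t95 = 2.776                        # Student's t, 4 d.o.f., 95% (assumed value)
ci = t95 * s / n ** 0.5            # 95% confidence half-width on the mean
print(f"{mean:.3f} ± {ci:.3f} cm")
```

The spread quantifies precision, while accuracy would be judged by comparing the interval against an independently known reference length.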
Minimizing bias in biomass allometry: Model selection and log transformation of data
Joseph Mascaro; Flint Hughes; Amanda Uowolo; Stefan A. Schnitzer
2011-01-01
Nonlinear regression is increasingly used to develop allometric equations for forest biomass estimation (i.e., as opposed to the traditional approach of log-transformation followed by linear regression). Most statistical software packages, however, assume additive errors by default, violating a key assumption of allometric theory and possibly producing spurious models....
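One issue behind the log-transformation debate is the classic back-transformation bias: exponentiating a log-scale fit underestimates the arithmetic mean of biomass unless a correction (e.g. Baskerville's exp(MSE/2)) is applied. A sketch under lognormal assumptions, with all parameters invented:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500
log_x = rng.uniform(0.0, 3.0, n)          # e.g. log stem diameter
sigma = 0.5
log_y = 1.0 + 0.8 * log_x + rng.normal(0.0, sigma, n)  # allometry in log space

# traditional approach: linear least squares on log-transformed data
b, a = np.polyfit(log_x, log_y, 1)
resid = log_y - (a + b * log_x)
mse = np.sum(resid ** 2) / (n - 2)

cf = np.exp(mse / 2.0)          # Baskerville correction factor, always > 1
naive = np.exp(a + b * log_x)   # biased-low back-transform
corrected = cf * naive
print(round(float(cf), 3))
```

Fitting in original units with multiplicative (rather than additive) errors, as the abstract advocates checking, avoids this correction step but requires the software's error model to be set explicitly.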
Channel One Online: Advertising Not Educating.
ERIC Educational Resources Information Center
Pasnik, Shelley
Rather than viewing Channel One's World Wide Web site as an authentic news bureau, as the organization claims, it is better understood as an advertising delivery system. The web site is an attempt to expand Channel One's reach into schools, taking advantage of unsuspecting teachers and students who might fall prey to spurious claims. This paper…
High Order Finite Difference Methods with Subcell Resolution for 2D Detonation Waves
NASA Technical Reports Server (NTRS)
Wang, W.; Shu, C. W.; Yee, H. C.; Sjogreen, B.
2012-01-01
In simulating hyperbolic conservation laws in conjunction with an inhomogeneous stiff source term, if the solution is discontinuous, spurious numerical results may be produced due to different time scales of the transport part and the source term. This numerical issue often arises in combustion and high speed chemical reacting flows.
The Influence of Being under the Influence: Alcohol Effects on Adolescent Violence
ERIC Educational Resources Information Center
Felson, Richard B.; Teasdale, Brent; Burchfield, Keri B.
2008-01-01
The authors examine the relationship between intoxication, chronic alcohol use, and violent behavior using data from the National Longitudinal Study of Adolescent Health. The authors introduce a method for disentangling spuriousness from the causal effects of situational variables. Their results suggest that drinkers are much more likely to commit…
Retaining through Training Even for Older Workers
ERIC Educational Resources Information Center
Picchio, Matteo; van Ours, Jan C.
2013-01-01
This paper investigates whether on-the-job training has an effect on the employability of workers. Using data from the Netherlands we disentangle the true effect of training incidence from the spurious one determined by unobserved individual heterogeneity. We also take into account that there might be feedback from shocks in the employment status…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Creutz, Michael
Using the Sigma model to explore the lowest order pseudo-scalar spectrum with SU(3) breaking, this talk considers an additional exact "taste" symmetry to mimic species doubling. Rooting replicas of a valid approach such as Wilson fermions reproduces the desired physical spectrum. In contrast, extra symmetries of the rooted staggered approach leave spurious states and a flavor dependent taste multiplicity.
ERIC Educational Resources Information Center
Porter, Kristin E.
2018-01-01
Researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple testing procedures (MTPs) are statistical…
J.E. Jakes; C.R. Frihart; J.F. Beecher; R.J. Moon; P.J. Resto; Z.H. Melgarejo; O.M. Saurez; H. Baumgart; A.A. Elmustafa; D.S. Stone
2009-01-01
Whenever a nanoindent is placed near an edge, such as the free edge of the specimen or a heterophase interface intersecting the surface, the elastic discontinuity associated with the edge produces artifacts in the load-depth data. Unless properly handled in the data analysis, the artifacts can produce spurious results that obscure any real trends in properties as...
`Unlearning' has a stabilizing effect in collective memories
NASA Astrophysics Data System (ADS)
Hopfield, J. J.; Feinstein, D. I.; Palmer, R. G.
1983-07-01
Crick and Mitchison [1] have presented a hypothesis for the functional role of dream sleep involving an `unlearning' process. We have independently carried out mathematical and computer modelling of learning and `unlearning' in a collective neural network of 30-1,000 neurones. The model network has a content-addressable memory or `associative memory' which allows it to learn and store many memories. A particular memory can be evoked in its entirety when the network is stimulated by any adequate-sized subpart of the information of that memory [2]. But different memories of the same size are not equally easy to recall. Also, when memories are learned, spurious memories are also created and can also be evoked. Applying an `unlearning' process, similar to the learning processes but with a reversed sign and starting from a noise input, enhances the performance of the network in accessing real memories and in minimizing spurious ones. Although our model was not motivated by higher nervous function, our system displays behaviours which are strikingly parallel to those needed for the hypothesized role of `unlearning' in rapid eye movement (REM) sleep.
Xiang, Baoqiang; Zhao, Ming; Held, Isaac M.; ...
2017-02-13
The severity of the double Intertropical Convergence Zone (DI) problem in climate models can be measured by a tropical precipitation asymmetry index (PAI), indicating whether tropical precipitation favors the Northern Hemisphere or the Southern Hemisphere. Examination of 19 Coupled Model Intercomparison Project phase 5 models reveals that the PAI is tightly linked to the tropical sea surface temperature (SST) bias. As one of the factors determining the SST bias, the asymmetry of tropical net surface heat flux in Atmospheric Model Intercomparison Project (AMIP) simulations is identified as a skillful predictor of the PAI change from an AMIP to a coupled simulation, with an intermodel correlation of 0.90. Using tropical top-of-atmosphere (TOA) fluxes, the correlations are lower but still strong. However, the extratropical asymmetries of surface and TOA fluxes in AMIP simulations cannot serve as useful predictors of the PAI change. Furthermore, this study suggests that the largest source of the DI bias is from the tropics and from atmospheric models.
Xu, D Z; Deitch, E A; Sittig, K; Qi, L; McDonald, J C
1988-01-01
Mononuclear cells isolated by density gradient centrifugation from the peripheral blood of burn patients, but not healthy volunteers, are contaminated with large numbers of nonmononuclear cells. These contaminating leukocytes could cause artifactual alterations in standard in vitro tests of lymphocyte function. Thus, we compared the in vitro blastogenic responses of density gradient purified leukocytes and T-cell purified lymphocytes from 13 burn patients to mitogenic (PHA) and antigenic stimuli. The mitogenic and antigenic responses of the patients' density gradient purified leukocytes were impaired compared to healthy volunteers (p < 0.01). However, when the contaminating nonlymphocytes were removed, the patients' cells responded normally to both stimuli. Thus, density gradient purified mononuclear cells from burn patients are contaminated by leukocytes that are not phenotypically or functionally lymphocytes. Since the lymphocytes from burn patients respond normally to PHA and alloantigens after the contaminating nonlymphocyte cell population has been removed, it appears that in vitro assays of lymphocyte function using density gradient purified leukocytes may give spurious results. PMID:2973771
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Deyu
A systematic route to go beyond the exact exchange plus random phase approximation (RPA) is to include a physical exchange-correlation kernel in the adiabatic-connection fluctuation-dissipation theorem. Previously [D. Lu, J. Chem. Phys. 140, 18A520 (2014)], we found that non-local kernels with a screening length depending on the local Wigner-Seitz radius, r_s(r), suffer an error associated with a spurious long-range repulsion in van der Waals bounded systems, which deteriorates the binding energy curve as compared to RPA. Here, we analyze the source of the error and propose to replace r_s(r) by a global, average r_s in the kernel. Exemplary studies with the Corradini, del Sole, Onida, and Palummo kernel show that while this change does not affect the already outstanding performance in crystalline solids, using an average r_s significantly reduces the spurious long-range tail in the exchange-correlation kernel in van der Waals bounded systems. Finally, when this method is combined with further corrections using local dielectric response theory, the binding energy of the Kr dimer is improved three times as compared to RPA.
NASA Technical Reports Server (NTRS)
Howell, L. W.; Kennel, H. F.
1984-01-01
The Space Telescope (ST) is subjected to charged particle strikes in its space environment. ST's onboard fine guidance sensors utilize multiplier phototubes (PMT) for attitude determination. These tubes, when subjected to charged particle strikes, generate spurious photons in the form of Cerenkov radiation and fluorescence which give rise to unwanted disturbances in the pointing of the telescope. A stochastic model for the number of these spurious photons which strike the photocathode of the multiplier phototube and in turn produce the unwanted photon noise is presented. The model is applicable to both galactic cosmic rays and charged particles trapped in the Earth's radiation belts. The model, which was programmed, allows for easy adaptation to a wide range of particles and different multiplier phototube parameters. The probability density functions for photon noise caused by protons, alpha particles, and carbon nuclei were estimated using thousands of simulated strikes. These distributions are used as part of an overall ST dynamics simulation. The sensitivity of the density function to changes in the window parameters was also investigated.
NASA Technical Reports Server (NTRS)
Howell, L. W.; Kennel, H. F.
1986-01-01
The Space Telescope (ST) is subjected to charged particle strikes in its space environment. ST's onboard fine guidance sensors utilize multiplier phototubes (PMT) for attitude determination. These tubes, when subjected to charged particle strikes, generate spurious photons in the form of Cerenkov radiation and fluorescence which give rise to unwanted disturbances in the pointing of the telescope. A stochastic model for the number of these spurious photons which strike the photocathodes of the multiplier phototube and in turn produce the unwanted photon noise is presented. The model is applicable to both galactic cosmic rays and charged particles trapped in the Earth's radiation belts. The model, which was programmed, allows for easy adaptation to a wide range of particles and different multiplier phototube parameters. The probability density functions for photon noise caused by protons, alpha particles, and carbon nuclei were estimated using thousands of simulated strikes. These distributions are used as part of an overall ST dynamics simulation. The sensitivity of the density function to changes in the window parameters was also investigated.
Discrete Velocity Models for Polyatomic Molecules Without Nonphysical Collision Invariants
NASA Astrophysics Data System (ADS)
Bernhoff, Niclas
2018-05-01
An important aspect of constructing discrete velocity models (DVMs) for the Boltzmann equation is to obtain the right number of collision invariants. Unlike for the Boltzmann equation, for DVMs there can appear extra collision invariants, so-called spurious collision invariants, in addition to the physical ones. A DVM with only physical collision invariants, and hence without spurious ones, is called normal. The construction of such normal DVMs has been studied extensively in the literature for single species, but also for binary mixtures and, recently, extensively for multicomponent mixtures. In this paper, we address ways of constructing normal DVMs for polyatomic molecules (here represented by assigning each molecule an internal energy, to account for non-translational energies, which can change during collisions), under the assumption that the set of allowed internal energies is finite. We present general algorithms for constructing such models, but we also give concrete examples of such constructions. This approach can also be combined with similar constructions for multicomponent mixtures to obtain multicomponent mixtures with polyatomic molecules, which is also briefly outlined. Chemical reactions can then be added as well.
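The invariant-counting at the heart of this construction reduces to linear algebra: a collision invariant must be conserved by every collision, so the invariants form the null space of a matrix with one row per collision. A minimal numpy sketch for the classical planar four-velocity model, an illustrative example rather than one of the paper's polyatomic constructions:

```python
import numpy as np

# Velocities of the planar 4-velocity model: the corners of a square.
V = np.array([(1, 1), (1, -1), (-1, 1), (-1, -1)], dtype=float)

# The only nontrivial collision: (1,1)+(-1,-1) <-> (1,-1)+(-1,1).
# An invariant phi must satisfy phi0 + phi3 - phi1 - phi2 = 0,
# i.e. lie in the null space of this one-row collision matrix.
C = np.array([[1.0, -1.0, -1.0, 1.0]])

n_invariants = V.shape[0] - np.linalg.matrix_rank(C)
print(n_invariants)  # 3

# The physical invariants (mass, x- and y-momentum) already span a
# 3-dimensional space here (energy is redundant: all speeds are equal),
# so the model is "normal": it has no spurious invariants.
phys = np.array([np.ones(4), V[:, 0], V[:, 1]])
assert np.linalg.matrix_rank(phys) == 3
assert np.allclose(C @ phys.T, 0.0)  # each physical invariant is conserved
```

If the collision were removed, the null space would grow to dimension 4, and the extra invariant would be spurious in exactly the paper's sense.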
To cut or not to cut? Assessing the modular structure of brain networks.
Chang, Yu-Teng; Pantazis, Dimitrios; Leahy, Richard M
2014-05-01
A wealth of methods has been developed to identify natural divisions of brain networks into groups or modules, with one of the most prominent being modularity. Compared with the popularity of methods to detect community structure, only a few methods exist to statistically control for spurious modules, relying almost exclusively on resampling techniques. It is well known that even random networks can exhibit high modularity because of incidental concentration of edges, even though they have no underlying organizational structure. Consequently, interpretation of community structure is confounded by the lack of principled and computationally tractable approaches to statistically control for spurious modules. In this paper we show that the modularity of random networks follows a transformed version of the Tracy-Widom distribution, providing for the first time a link between module detection and random matrix theory. We compute parametric formulas for the distribution of modularity for random networks as a function of network size and edge variance, and show that we can efficiently control for false positives in brain and other real-world networks. Copyright © 2014 Elsevier Inc. All rights reserved.
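The premise that structureless random networks still score high modularity is easy to reproduce numerically. The sketch below applies a simple spectral bipartition to Erdős–Rényi graphs; it illustrates the phenomenon only and is not the authors' Tracy-Widom test:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_graph(n, p):
    """Symmetric Erdos-Renyi adjacency matrix with no self-loops."""
    upper = np.triu(rng.random((n, n)) < p, 1).astype(float)
    return upper + upper.T

def spectral_modularity(A):
    """Newman modularity of a two-group spectral sign partition."""
    k = A.sum(axis=1)
    two_m = k.sum()
    B = A - np.outer(k, k) / two_m        # modularity matrix
    _, eigvecs = np.linalg.eigh(B)
    s = np.sign(eigvecs[:, -1])           # split by the leading eigenvector
    s[s == 0] = 1.0
    return float(s @ B @ s) / (2.0 * two_m)   # Q = s^T B s / (4m)

# Purely random graphs with no planted communities: modularity is still
# clearly positive, from incidental concentration of edges alone.
q_null = [spectral_modularity(random_graph(60, 0.1)) for _ in range(50)]
print(round(min(q_null), 2), round(max(q_null), 2))
```

Every draw lands well above zero, which is precisely why a null distribution (parametric, as in this paper, or resampled) is needed before interpreting a module as real.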
Structator: fast index-based search for RNA sequence-structure patterns
2011-01-01
Background The secondary structure of RNA molecules is intimately related to their function and often more conserved than the sequence. Hence, the important task of searching databases for RNAs requires matching sequence-structure patterns. Unfortunately, current tools for this task have, in the best case, a running time that is only linear in the size of sequence databases. Furthermore, established index data structures for fast sequence matching, like suffix trees or arrays, cannot benefit from the complementarity constraints introduced by the secondary structure of RNAs. Results We present a novel method and readily applicable software for time efficient matching of RNA sequence-structure patterns in sequence databases. Our approach is based on affix arrays, a recently introduced index data structure, preprocessed from the target database. Affix arrays support bidirectional pattern search, which is required for efficiently handling the structural constraints of the pattern. Structural patterns like stem-loops can be matched inside out, such that the loop region is matched first and then the pairing bases on the boundaries are matched consecutively. This allows exploiting base pairing information for search space reduction and leads to an expected running time that is sublinear in the size of the sequence database. The incorporation of a new chaining approach in the search of RNA sequence-structure patterns enables the description of molecules folding into complex secondary structures with multiple ordered patterns. The chaining approach removes spurious matches from the set of intermediate results, in particular of patterns with little specificity. In benchmark experiments on the Rfam database, our method runs up to two orders of magnitude faster than previous methods. Conclusions The presented method's sublinear expected running time makes it well suited for RNA sequence-structure pattern matching in large sequence databases.
RNA molecules containing several stem-loop substructures can be described by multiple sequence-structure patterns and their matches are efficiently handled by a novel chaining method. Beyond our algorithmic contributions, we provide with Structator a complete and robust open-source software solution for index-based search of RNA sequence-structure patterns. The Structator software is available at http://www.zbh.uni-hamburg.de/Structator. PMID:21619640
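The inside-out matching strategy described above can be illustrated without the affix-array index: anchor the loop sequence first, then extend the stem outward while the flanking bases pair. A toy sketch using plain string scanning; the pattern and sequence are invented:

```python
# Toy inside-out matcher for a stem-loop pattern: find a fixed loop
# sequence, then grow the stem outwards while the bases Watson-Crick
# pair (G-U wobble pairs included, as is usual for RNA).
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def stem_loops(seq, loop, min_stem):
    """Return (start, end) spans of loop occurrences flanked by >= min_stem pairs."""
    hits = []
    i = seq.find(loop)
    while i != -1:
        left, right, stem = i - 1, i + len(loop), 0
        while left >= 0 and right < len(seq) and (seq[left], seq[right]) in PAIRS:
            left, right, stem = left - 1, right + 1, stem + 1
        if stem >= min_stem:
            hits.append((left + 1, right))   # span of the full hairpin
        i = seq.find(loop, i + 1)
    return hits

#           stem    loop    stem
seq = "AA" + "GCG" + "UUCG" + "CGC" + "AA"
print(stem_loops(seq, "UUCG", min_stem=3))  # [(2, 12)]
```

Matching the loop first prunes the search exactly as the abstract describes: only positions where the low-specificity stem can actually pair are ever extended.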
2013-01-01
Background Low birth weight is associated with an increased adult metabolic disease risk. It is widely discussed that poor intra-uterine conditions could induce long-lasting epigenetic modifications, leading to systemic changes in regulation of metabolic genes. To address this, we acquire genome-wide DNA methylation profiles from saliva DNA in a unique cohort of 17 monozygotic monochorionic female twins very discordant for birth weight. We examine if adverse prenatal growth conditions experienced by the smaller co-twins lead to long-lasting DNA methylation changes. Results Overall, co-twins show very similar genome-wide DNA methylation profiles. Since observed differences are almost exclusively caused by variable cellular composition, an original marker-based adjustment strategy was developed to eliminate such variation at affected CpGs. Among adjusted and unchanged CpGs, 3,153 are differentially methylated between the heavy and light co-twins at nominal significance, of which 45 show appreciable absolute mean β-value differences. Deep bisulfite sequencing of eight such loci reveals that differences remain in the range of technical variation, arguing against a reproducible biological effect. Analysis of methylation in repetitive elements using methylation-dependent primer extension assays also indicates no significant intra-pair differences. Conclusions Severe intra-uterine growth differences observed within these monozygotic twins are not associated with long-lasting DNA methylation differences in cells composing saliva, detectable with up-to-date technologies. Additionally, our results indicate that uneven cell type composition can lead to spurious results and should be addressed in epigenomic studies. PMID:23706164
NASA Astrophysics Data System (ADS)
Zhao, T.; Wang, J.; Dai, A.
2015-12-01
Many multi-decadal atmospheric reanalysis products are available now, but their consistencies and reliability are far from perfect. In this study, atmospheric precipitable water (PW) from the NCEP/NCAR, NCEP/DOE, MERRA, JRA-55, JRA-25, ERA-Interim, ERA-40, CFSR and 20CR reanalyses is evaluated against homogenized radiosonde observations over China during 1979-2012 (1979-2001 for ERA-40). Results suggest that the PW biases in the reanalyses are within ˜20% for most of northern and eastern China, but the reanalyses underestimate the observed PW by 20%-40% over western China, and by ˜60% over the southwestern Tibetan Plateau. The newer-generation reanalyses (e.g., JRA-25, JRA-55, CFSR and ERA-Interim) have smaller root-mean-square error (RMSE) than the older-generation ones (NCEP/NCAR, NCEP/DOE and ERA-40). Most of the reanalyses reproduce well the observed PW climatology and interannual variations over China. However, few reanalyses capture the observed long-term PW changes, primarily because they show spurious wet biases before about 2002. This deficiency results mainly from the discontinuities contained in reanalysis RH fields in the mid-lower troposphere due to the wet bias in older radiosonde records that are assimilated into the reanalyses. An empirical orthogonal function (EOF) analysis revealed two leading modes that represent the long-term PW changes and ENSO-related interannual variations with robust spatial patterns. The reanalysis products, especially the MERRA and JRA-25, roughly capture these EOF modes, which account for over 50% of the total variance. The results show that even during the post-1979 satellite era, discontinuities in radiosonde data can still induce large spurious long-term changes in reanalysis PW and other related fields. Thus, more efforts are needed to remove spurious changes in input data for future long-term reanalyses.
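The EOF analysis mentioned here is, computationally, an SVD of the centered space-time data matrix. A minimal sketch on synthetic data with one planted spatial pattern (illustrative only, not the radiosonde PW records):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic anomaly field: n_time samples of a field on n_space points,
# built from one dominant spatial pattern plus noise.
n_time, n_space = 120, 40
pattern = np.sin(np.linspace(0.0, np.pi, n_space))      # planted EOF1
amplitude = rng.standard_normal(n_time)                 # its time series
X = np.outer(amplitude, pattern) + 0.1 * rng.standard_normal((n_time, n_space))
X -= X.mean(axis=0)                                     # anomalies

# EOFs are the right singular vectors; explained variance comes from
# the squared singular values.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
explained = s**2 / np.sum(s**2)
eof1, pc1 = Vt[0], U[:, 0] * s[0]

corr = abs(np.corrcoef(eof1, pattern)[0, 1])  # recovered vs planted pattern
print(round(float(explained[0]), 2), round(float(corr), 3))
```

The leading mode absorbs nearly all the variance and reproduces the planted pattern, mirroring how the paper's two leading modes account for over 50% of the PW variance.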
Kaufhold, John P; Tsai, Philbert S; Blinder, Pablo; Kleinfeld, David
2012-08-01
A graph of tissue vasculature is an essential requirement to model the exchange of gasses and nutriments between the blood and cells in the brain. Such a graph is derived from a vectorized representation of anatomical data, provides a map of all vessels as vertices and segments, and may include the location of nonvascular components, such as neuronal and glial somata. Yet vectorized data sets typically contain erroneous gaps, spurious endpoints, and spuriously merged strands. Current methods to correct such defects only address the issue of connecting gaps and further require manual tuning of parameters in a high dimensional algorithm. To address these shortcomings, we introduce a supervised machine learning method that (1) connects vessel gaps by "learned threshold relaxation"; (2) removes spurious segments by "learning to eliminate deletion candidate strands"; and (3) enforces consistency in the joint space of learned vascular graph corrections through "consistency learning." Human operators are only required to label individual objects they recognize in a training set and are not burdened with tuning parameters. The supervised learning procedure examines the geometry and topology of features in the neighborhood of each vessel segment under consideration. We demonstrate the effectiveness of these methods on four sets of microvascular data, each with >800(3) voxels, obtained with all optical histology of mouse tissue and vectorization by state-of-the-art techniques in image segmentation. Through statistically validated sampling and analysis in terms of precision recall curves, we find that learning with bagged boosted decision trees reduces equal-error error rates for threshold relaxation by 5-21% and strand elimination performance by 18-57%. We benchmark generalization performance across datasets; while improvements vary between data sets, learning always leads to a useful reduction in error rates. 
Overall, learning is shown to more than halve the total error rate, and therefore, the human time spent manually correcting such vectorizations. Copyright © 2012 Elsevier B.V. All rights reserved.
Kaufhold, John P.; Tsai, Philbert S.; Blinder, Pablo; Kleinfeld, David
2012-01-01
A graph of tissue vasculature is an essential requirement to model the exchange of gasses and nutriments between the blood and cells in the brain. Such a graph is derived from a vectorized representation of anatomical data, provides a map of all vessels as vertices and segments, and may include the location of nonvascular components, such as neuronal and glial somata. Yet vectorized data sets typically contain erroneous gaps, spurious endpoints, and spuriously merged strands. Current methods to correct such defects only address the issue of connecting gaps and further require manual tuning of parameters in a high dimensional algorithm. To address these shortcomings, we introduce a supervised machine learning method that (1) connects vessel gaps by “learned threshold relaxation”; (2) removes spurious segments by “learning to eliminate deletion candidate strands”; and (3) enforces consistency in the joint space of learned vascular graph corrections through “consistency learning.” Human operators are only required to label individual objects they recognize in a training set and are not burdened with tuning parameters. The supervised learning procedure examines the geometry and topology of features in the neighborhood of each vessel segment under consideration. We demonstrate the effectiveness of these methods on four sets of microvascular data, each with > 800³ voxels, obtained with all optical histology of mouse tissue and vectorization by state-of-the-art techniques in image segmentation. Through statistically validated sampling and analysis in terms of precision recall curves, we find that learning with bagged boosted decision trees reduces equal error rates for threshold relaxation by 5 to 21% and improves strand elimination performance by 18 to 57%. We benchmark generalization performance across datasets; while improvements vary between data sets, learning always leads to a useful reduction in error rates.
Overall, learning is shown to more than halve the total error rate, and therefore, human time spent manually correcting such vectorizations. PMID:22854035
Atmospheric Dispersion Effects in Weak Lensing Measurements
Plazas, Andrés Alejandro; Bernstein, Gary
2012-10-01
The wavelength dependence of atmospheric refraction causes elongation of finite-bandwidth images along the elevation vector, which produces spurious signals in weak gravitational lensing shear measurements unless this atmospheric dispersion is calibrated and removed to high precision. Because astrometric solutions and PSF characteristics are typically calibrated from stellar images, differences between the reference stars' spectra and the galaxies' spectra will leave residual errors in both the astrometric positions (dr) and in the second moment (width) of the wavelength-averaged PSF (dv) for galaxies. We estimate the level of dv that will induce spurious weak lensing signals in PSF-corrected galaxy shapes that exceed the statistical errors of the DES and the LSST cosmic-shear experiments. We also estimate the dr signals that will produce unacceptable spurious distortions after stacking of exposures taken at different airmasses and hour angles. We also calculate the errors in the griz bands, and find that dispersion systematics, uncorrected, are up to 6 and 2 times larger in g and r bands, respectively, than the requirements for the DES error budget, but can be safely ignored in i and z bands. For the LSST requirements, the factors are about 30, 10, and 3 in g, r, and i bands, respectively. We find that a simple correction linear in galaxy color is accurate enough to reduce dispersion shear systematics to insignificant levels in the r band for DES and the i band for LSST, but still as much as 5 times larger than the requirements for LSST r-band observations. More complex corrections will likely be able to reduce the systematic cosmic-shear errors below statistical errors for LSST r band. But g-band effects remain large enough that it seems likely that induced systematics will dominate the statistical errors of both surveys, and cosmic-shear measurements should rely on the redder bands.
Methodological Caveats in the Detection of Coordinated Replay between Place Cells and Grid Cells
Trimper, John B.; Trettel, Sean G.; Hwaun, Ernie; Colgin, Laura Lee
2017-01-01
At rest, hippocampal “place cells,” neurons with receptive fields corresponding to specific spatial locations, reactivate in a manner that reflects recently traveled trajectories. These “replay” events have been proposed as a mechanism underlying memory consolidation, or the transfer of a memory representation from the hippocampus to neocortical regions associated with the original sensory experience. Accordingly, it has been hypothesized that hippocampal replay of a particular experience should be accompanied by simultaneous reactivation of corresponding representations in the neocortex and in the entorhinal cortex, the primary interface between the hippocampus and the neocortex. Recent studies have reported that coordinated replay may occur between hippocampal place cells and medial entorhinal cortex grid cells, cells with multiple spatial receptive fields. Assessing replay in grid cells is problematic, however, as the cells exhibit regularly spaced spatial receptive fields in all environments and, therefore, coordinated replay between place cells and grid cells may be detected by chance. In the present report, we adapted analytical approaches utilized in recent studies of grid cell and place cell replay to determine the extent to which coordinated replay is spuriously detected between grid cells and place cells recorded from separate rats. For a subset of the employed analytical methods, coordinated replay was detected spuriously in a significant proportion of cases in which place cell replay events were randomly matched with grid cell firing epochs of equal duration. More rigorous replay evaluation procedures and minimum spike count requirements greatly reduced the amount of spurious findings. These results provide insights into aspects of place cell and grid cell activity during rest that contribute to false detection of coordinated replay. 
The results further emphasize the need for careful controls and rigorous methods when testing the hypothesis that place cells and grid cells exhibit coordinated replay. PMID:28824388
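The control the authors advocate, measuring how often coordinated replay is detected between cells that were never recorded together, is essentially a shuffle test. A generic sketch of that logic on toy activation orderings (rank-order correlation against event shuffles; this is not the authors' replay-scoring pipeline):

```python
import random

random.seed(7)

def spearman(x, y):
    """Spearman rank correlation for sequences without ties."""
    n = len(x)
    rx = {v: r for r, v in enumerate(sorted(x))}
    ry = {v: r for r, v in enumerate(sorted(y))}
    d2 = sum((rx[a] - ry[b]) ** 2 for a, b in zip(x, y))
    return 1.0 - 6.0 * d2 / (n * (n**2 - 1))

def shuffle_p_value(place_order, grid_order, n_shuffles=2000):
    """Fraction of shuffled orderings that match the observed one as well."""
    observed = abs(spearman(place_order, grid_order))
    count = 0
    for _ in range(n_shuffles):
        shuffled = grid_order[:]
        random.shuffle(shuffled)
        if abs(spearman(place_order, shuffled)) >= observed:
            count += 1
    return count / n_shuffles

# Toy "replay" event: cell activation times during one candidate event.
place_order = [0.1, 0.3, 0.5, 0.8, 0.9, 1.2, 1.5]
grid_order = [0.2, 0.4, 0.6, 0.7, 1.0, 1.1, 1.6]  # same ordering: coordinated
print(shuffle_p_value(place_order, grid_order) < 0.01)  # True
```

With few spikes per event the shuffle distribution is coarse, which is one reason the paper finds that minimum spike count requirements reduce spurious detections.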
Link prediction in the network of global virtual water trade
NASA Astrophysics Data System (ADS)
Tuninetti, Marta; Tamea, Stefania; Laio, Francesco; Ridolfi, Luca
2016-04-01
Through the international food-trade, water resources are 'virtually' transferred from the country of production to the country of consumption. The international food-trade, thus, implies a network of virtual water flows from exporting to importing countries (i.e., nodes). Given the dynamical behavior of the network, where food-trade relations (i.e., links) are created and dismissed every year, link prediction becomes a challenge. In this study, we propose a novel methodology for link prediction in the virtual water network. The model aims at identifying the main factors (among 17 different variables) driving the creation of a food-trade relation between any two countries, over the period 1986 to 2011. Furthermore, the model can be exploited to investigate the network configuration in the future, under different possible (climatic and demographic) scenarios. The model grounds the existence of a link between any two nodes on the link weight (i.e., the virtual water flow): a link exists when the nodes exchange a minimum (fixed) volume of virtual water. Starting from a set of potential links between any two nodes, we fit the associated virtual water flows (both the real and the null ones) by means of multivariate linear regressions. Then, links with estimated flows higher than a minimum value (i.e., threshold) are considered active links, while the others are non-active ones. The discrimination between active and non-active links through the threshold introduces an error (called link-prediction error) because some real links are lost (i.e., missed links) and some non-existing links (i.e., spurious links) are inevitably introduced in the network. The major drivers are those significantly minimizing the link-prediction error. Once the structure of the unweighted virtual water network is known, we again apply linear regressions to assess the major factors driving the fluxes traded along (modelled) active links.
Results indicate that, on the one hand, population and fertilizer use, together with link properties (such as the distance between nodes), are the major factors driving link creation; on the other hand, population, distance, and gross domestic product are essential to model the flux entity. The results are promising since the model is able to correctly predict 85% of the 16,422 food-trade links (15% are missed), by spuriously adding to the real network only 5% of non-existing links. The link-prediction error, evaluated as the sum of the percentage of missed and spurious links, is around 20% and it is constant over the study period. Only 0.01% of the global virtual water flow is traded along missed links and an even lower flow is added by the spurious links (0.003%).
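The fit-then-threshold construction can be sketched generically: regress flows on candidate drivers, declare a link active when the fitted flow clears the threshold, and score missed versus spurious links. The drivers and data below are synthetic stand-ins, not the study's 17 variables:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic country pairs: flows driven by two illustrative variables
# (stand-ins for drivers such as population and distance).
n = 2000
pop = rng.lognormal(0.0, 1.0, n)              # e.g. scaled population product
dist = rng.uniform(0.5, 5.0, n)               # e.g. distance between countries
flow = 2.0 * pop - 1.5 * dist + rng.normal(0.0, 1.0, n)

threshold = 1.0                               # minimum virtual water volume
real_active = flow > threshold                # the "real" links

# Fit flows with a multivariate linear regression, then threshold the fits.
X = np.column_stack([np.ones(n), pop, dist])
beta, *_ = np.linalg.lstsq(X, flow, rcond=None)
pred_active = X @ beta > threshold            # modelled active links

missed = np.mean(real_active & ~pred_active)     # real links lost
spurious = np.mean(~real_active & pred_active)   # non-existing links added
print(f"missed {missed:.1%}, spurious {spurious:.1%}")
```

Errors concentrate on pairs whose flow sits near the threshold, which is why the study reports the sum of missed and spurious link percentages as its link-prediction error.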
Identifying synonymy between relational phrases using word embeddings.
Nguyen, Nhung T H; Miwa, Makoto; Tsuruoka, Yoshimasa; Tojo, Satoshi
2015-08-01
Many text mining applications in the biomedical domain benefit from automatic clustering of relational phrases into synonymous groups, since it alleviates the problem of spurious mismatches caused by the diversity of natural language expressions. Most of the previous work that has addressed this task of synonymy resolution uses similarity metrics between relational phrases based on textual strings or dependency paths, which, for the most part, ignore the context around the relations. To overcome this shortcoming, we employ a word embedding technique to encode relational phrases. We then apply the k-means algorithm on top of the distributional representations to cluster the phrases. Our experimental results show that this approach outperforms state-of-the-art statistical models including latent Dirichlet allocation and Markov logic networks. Copyright © 2015 Elsevier Inc. All rights reserved.
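The clustering pipeline (encode each relational phrase as the average of its word vectors, then run k-means on the phrase vectors) can be sketched with toy two-dimensional vectors. Real applications would use embeddings trained on biomedical text; the vectors and phrases below are invented:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy word vectors; in practice these come from a trained embedding model.
word_vec = {
    "inhibits": np.array([1.0, 0.1]), "suppresses": np.array([0.9, 0.2]),
    "blocks": np.array([1.1, 0.0]), "activates": np.array([-1.0, 0.1]),
    "stimulates": np.array([-0.9, 0.0]), "induces": np.array([-1.1, 0.2]),
    "the": np.array([0.0, 0.05]), "activity": np.array([0.0, -0.05]),
    "of": np.array([0.0, 0.0]),
}

def phrase_vector(phrase):
    """Average of the phrase's word vectors (the encoding step)."""
    return np.mean([word_vec[w] for w in phrase.split()], axis=0)

def kmeans(X, k, iters=50):
    """Plain Lloyd's algorithm; returns a cluster label per row of X."""
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

phrases = ["inhibits the activity of", "suppresses", "blocks",
           "activates", "stimulates", "induces"]
X = np.vstack([phrase_vector(p) for p in phrases])
labels = kmeans(X, k=2)
# The three "inhibit"-like phrases land in one cluster, the rest in the other.
print(labels[0] == labels[1] == labels[2], labels[3] == labels[4] == labels[5])
```

Grouping near-synonymous phrases this way is what lets a downstream matcher treat "inhibits the activity of" and "suppresses" as one relation instead of a spurious mismatch.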
IMNN: Information Maximizing Neural Networks
NASA Astrophysics Data System (ADS)
Charnock, Tom; Lavaux, Guilhem; Wandelt, Benjamin D.
2018-04-01
This software trains artificial neural networks to find non-linear functionals of data that maximize Fisher information: information maximizing neural networks (IMNNs). Compressing large data sets vastly simplifies both frequentist and Bayesian inference, but important information may be inadvertently missed. Likelihood-free inference based on automatically derived IMNN summaries produces summaries that are good approximations to sufficient statistics. IMNNs are robustly capable of automatically finding optimal, non-linear summaries of the data even in cases where linear compression fails: inferring the variance of a Gaussian signal in the presence of noise, inferring cosmological parameters from mock simulations of the Lyman-α forest in quasar spectra, and inferring frequency-domain parameters from LISA-like detections of gravitational waveforms. In this final case, the IMNN summary outperforms linear data compression by avoiding the introduction of spurious likelihood maxima.
NASA Technical Reports Server (NTRS)
Barbely, Natasha L.; Sim, Ben W.; Kitaplioglu, Cahit; Goulding, Pat, II
2010-01-01
Difficulties in obtaining full-scale rotor low-frequency noise measurements in wind tunnels, caused by residual sound reflections from non-ideal anechoic wall treatments, are addressed. Examples from the Boeing-SMART rotor test in the National Full-Scale Aerodynamics Complex (NFAC) 40- by 80-Foot Wind Tunnel facility demonstrated that these reflections introduced distortions in the measured acoustic time histories that are not representative of free-field rotor noise radiation. A simplified reflection analysis, based on the method of images, is used to examine the sound measurement quality in such a "less-than-anechoic" environment. Predictions of reflection-adjusted acoustic time histories are qualitatively shown to account for some of the spurious fluctuations observed in wind tunnel noise measurements.
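The method-of-images idea in this abstract can be sketched as follows: a wall reflection is modelled as an image source behind the wall, so the measured signal is the free-field signal plus a delayed, attenuated copy. All geometry, frequencies, and amplitudes below are invented for illustration.

```python
# Sketch of a reflection-contaminated measurement via the method of images:
# measured = free-field signal + delayed, attenuated copy from an image
# source behind the reflecting wall.
import numpy as np

fs = 10_000                          # sample rate, Hz
t = np.arange(0, 0.1, 1 / fs)
c = 343.0                            # speed of sound, m/s
r_direct, r_image = 10.0, 14.0       # path lengths to real/image source, m

free_field = np.sin(2 * np.pi * 40 * t)   # a low-frequency rotor harmonic
delay = (r_image - r_direct) / c          # extra travel time of reflection, s
gain = r_direct / r_image                 # spherical spreading attenuation
shift = int(round(delay * fs))

reflected = np.zeros_like(free_field)
reflected[shift:] = gain * free_field[:-shift]
measured = free_field + reflected    # distorted "less-than-anechoic" signal
```

Subtracting a predicted `reflected` term of this kind from the measurement is, in spirit, how reflection-adjusted time histories can account for the spurious fluctuations.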
Use of Acoustic Emission to Monitor Progressive Damage Accumulation in Kevlar (R) 49 Composites
NASA Technical Reports Server (NTRS)
Waller, Jess M.; Saulsberry, Regor L.; Andrade, Eduardo
2009-01-01
Acoustic emission (AE) data acquired during intermittent load hold tensile testing of epoxy impregnated Kevlar(Registered Trademark) 49 (K/Ep) composite strands were analyzed to monitor progressive damage during the approach to tensile failure. Insight into the progressive damage of K/Ep strands was gained by monitoring AE event rate and energy. Source location based on energy attenuation and arrival time data was used to discern between significant AE attributable to microstructural damage and spurious AE attributable to noise. One of the significant findings was the observation of increasing violation of the Kaiser effect (Felicity ratio < 1.0) with damage accumulation. The efficacy of three different intermittent load hold stress schedules that allowed the Felicity ratio to be determined analytically is discussed.
Numerical investigation of shock induced bubble collapse in water
NASA Astrophysics Data System (ADS)
Apazidis, N.
2016-04-01
A semi-conservative, stable, interphase-capturing numerical scheme for shock propagation in heterogeneous systems is applied to the problem of shock propagation in liquid-gas systems. The scheme is based on the volume-fraction formulation of the equations of motion for liquid and gas phases with separate equations of state. The semi-conservative formulation of the governing equations ensures the absence of spurious pressure oscillations at the material interphases between liquid and gas. Interaction of a planar shock in water with a single spherical bubble as well as twin adjacent bubbles is investigated. Several stages of the interaction process are considered, including focusing of the transmitted shock within the deformed bubble, creation of a water-hammer shock as well as generation of high-speed liquid jet in the later stages of the process.
Improved Electromechanical Infrared Sensor
NASA Technical Reports Server (NTRS)
Kenny, Thomas W.; Kaiser, William J.
1994-01-01
Proposed electromechanical infrared detector improved version of device described in "Micromachined Electron-Tunneling Infrared Detectors" (NPO-18413). Fabrication easier, and undesired sensitivity to acceleration reduced. In devices, diaphragms and other components made of micromachined silicon, and displacements of diaphragms measured by electron tunneling displacement transducer {see "Micromachined Tunneling Accelerometer" (NPO-18513)}. Improved version offers enhanced frequency response and less spurious response to acceleration.
Extracting Spurious Latent Classes in Growth Mixture Modeling with Nonnormal Errors
ERIC Educational Resources Information Center
Guerra-Peña, Kiero; Steinley, Douglas
2016-01-01
Growth mixture modeling is generally used for two purposes: (1) to identify mixtures of normal subgroups and (2) to approximate oddly shaped distributions by a mixture of normal components. Often in applied research this methodology is applied to both of these situations indistinctly: using the same fit statistics and likelihood ratio tests. This…
Mountainous Coasts: A change to the GFS post codes will remove a persistent, spurious high pressure system. The National Centers for Environmental Prediction (NCEP) will upgrade the GFS post processor. The primary effort behind this upgrade will be to unify the post-processing code for the North American Mesoscale (NAM) model and the GFS into
Curious or spurious correlations within a national-scale forest inventory?
Christopher W. Woodall; James A. Westfall
2012-01-01
Foresters are increasingly required to assess trends not only in traditional forest attributes (e.g., growing-stock volumes), but also across suites of forest health indicators and site/climate variables. Given the tenuous relationship between correlation and causality within extremely large datasets, the goal of this study was to use a nationwide annual forest...
Using the Graded Response Model to Control Spurious Interactions in Moderated Multiple Regression
ERIC Educational Resources Information Center
Morse, Brendan J.; Johanson, George A.; Griffeth, Rodger W.
2012-01-01
Recent simulation research has demonstrated that using simple raw score to operationalize a latent construct can result in inflated Type I error rates for the interaction term of a moderated statistical model when the interaction (or lack thereof) is proposed at the latent variable level. Rescaling the scores using an appropriate item response…
ERIC Educational Resources Information Center
Macizo, Pedro; Van Petten, Cyma; O'Rourke, Polly L.
2012-01-01
Many multisyllabic words contain shorter words that are not semantic units, like the CAP in HANDICAP and the DURA ("hard") in VERDURA ("vegetable"). The spaces between printed words identify word boundaries, but spurious identification of these embedded words is a potentially greater challenge for spoken language comprehension, a challenge that is…
ERIC Educational Resources Information Center
Porter, Kristin E.
2016-01-01
In education research and in many other fields, researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple…
NASA Technical Reports Server (NTRS)
Halverson, Peter G.; Loya, Frank M.
2004-01-01
This paper describes heterodyne displacement metrology gauge signal processing methods that achieve satisfactory robustness against low signal strength and spurious signals, and good long-term stability. We have a proven displacement-measuring approach that is useful not only to space-optical projects at JPL, but also to the wider field of distance measurements.
ERIC Educational Resources Information Center
Monden, Christiaan W. S.
2010-01-01
The association between educational attainment and self-assessed health is well established but the mechanisms that explain this association are not fully understood yet. It is likely that part of the association is spurious because (genetic and non-genetic) characteristics of a person's family of origin simultaneously affect one's educational…
Mirror, Mirror on the Wall: A Closer Look at the Top Ten in University Rankings
ERIC Educational Resources Information Center
Cheng, Soh Kay
2011-01-01
Notwithstanding criticisms and discussions on methodological grounds, much attention has been and still will be paid to university rankings for various reasons. The present paper uses published information of the 10 top-ranking universities of the world and demonstrates the problem of spurious precision. In view of the problem of measurement error…
ERIC Educational Resources Information Center
Soh, Kaycheng
2014-01-01
In PISA 2009, Finland and Singapore were both ranked high among the participating nations and have caught much attention internationally. However, a secondary analysis of the means for Reading achievement show that the differences are rather small and are attributable to spurious precision. Hence, the two nations should be considered as being on…
High-Temperature Hall-Effect Apparatus
NASA Technical Reports Server (NTRS)
Wood, C.; Lockwood, R. A.; Chemielewski, A. B.; Parker, J. B.; Zoltan, A.
1985-01-01
Compact furnace minimizes thermal gradients and electrical noise. Semiautomatic Hall-effect apparatus takes measurements on refractory semiconductors at temperatures as high as 1,100 degrees C. Intended especially for use with samples of high conductivity and low charge-carrier mobility that exhibit low signal-to-noise ratios, apparatus carefully constructed to avoid spurious electromagnetic and thermoelectric effects that further degrade measurements.
Temperature Dependence Of Single-Event Effects
NASA Technical Reports Server (NTRS)
Coss, James R.; Nichols, Donald K.; Smith, Lawrence S.; Huebner, Mark A.; Soli, George A.
1990-01-01
Report describes experimental study of effects of temperature on vulnerability of integrated-circuit memories and other electronic logic devices to single-event effects - spurious bit flips or latch-up in logic state caused by impacts of energetic ions. Involved analysis of data on 14 different device types. In most cases examined, vulnerability to these effects increased or remained constant with temperature.
ERIC Educational Resources Information Center
Law, Dennis C. S.; Meyer, Jan H. F.
2011-01-01
The present study aims to analyse the complex relationships between the relevant constructs of students' demographic background, perceptions, learning patterns and (proxy measures of) learning outcomes in order to delineate the possible direct, indirect, or spurious effects among them. The analytical methodology is substantively framed against the…
How large is the Upper Indus Basin? The pitfalls of auto-delineation using DEMs
NASA Astrophysics Data System (ADS)
Khan, Asif; Richards, Keith S.; Parker, Geoffrey T.; McRobie, Allan; Mukhopadhyay, Biswajit
2014-02-01
Extraction of watershed areas from Digital Elevation Models (DEMs) is increasingly required in a variety of environmental analyses. It is facilitated by the availability of DEMs based on remotely sensed data, and by Geographical Information System (GIS) software. However, accurate delineation depends on the quality of the DEM and the methodology adopted. This paper considers automated and supervised delineation in a case study of the Upper Indus Basin (UIB), Pakistan, for which published estimates of the basin area show significant disagreement, ranging from 166,000 to 266,000 km2. Automated delineation used ArcGIS Archydro and hydrology tools applied to three good quality DEMs (two from SRTM data with 90 m resolution, and one from 30 m resolution ASTER data). Automatic delineation defined a basin area of c. 440,000 km2 for the UIB, but included a large area of internal drainage in the western Tibetan Plateau. It is shown that discrepancies between different estimates reflect differences in the initial extent of the DEM used for watershed delineation, and the unchecked effect of iterative pit-filling of the DEM (going beyond the filling of erroneous pixels to filling entire closed basins). For the UIB we have identified critical points where spurious addition of catchment area has arisen, and use Google Earth to examine the geomorphology adjacent to these points, and also examine the basin boundary data provided by the HydroSHEDS database. We show that the Pangong Tso watershed and some other areas in the western Tibetan plateau are not part of the UIB, but are areas of internal drainage. Our best estimate of the area of the Upper Indus Basin (at Besham Qila) is 164,867 km2 based on the SRTM DEM, and 164,853 km2 using the ASTER DEM. This matches the catchment area measured by WAPDA SWHP.
An important lesson from this investigation is that one should not rely on automated delineation, as iterative pit-filling can produce spurious drainage networks and basins, when there are areas of internal drainage nearby.
NASA Astrophysics Data System (ADS)
Amelang, Jeff
The quasicontinuum (QC) method was introduced to coarse-grain crystalline atomic ensembles in order to bridge the scales from individual atoms to the micro- and mesoscales. Though many QC formulations have been proposed with varying characteristics and capabilities, a crucial cornerstone of all QC techniques is the concept of summation rules, which attempt to efficiently approximate the total Hamiltonian of a crystalline atomic ensemble by a weighted sum over a small subset of atoms. In this work we propose a novel, fully-nonlocal, energy-based formulation of the QC method with support for legacy and new summation rules through a general energy-sampling scheme. Our formulation does not conceptually differentiate between atomistic and coarse-grained regions and thus allows for seamless bridging without domain-coupling interfaces. Within this structure, we introduce a new class of summation rules which leverage the affine kinematics of this QC formulation to most accurately integrate thermodynamic quantities of interest. By comparing this new class of summation rules to commonly-employed rules through analysis of energy and spurious force errors, we find that the new rules produce no residual or spurious force artifacts in the large-element limit under arbitrary affine deformation, while allowing us to seamlessly bridge to full atomistics. We verify that the new summation rules exhibit significantly smaller force artifacts and energy approximation errors than all comparable previous summation rules through a comprehensive suite of examples with spatially non-uniform QC discretizations in two and three dimensions. Due to the unique structure of these summation rules, we also use the new formulation to study scenarios with large regions of free surface, a class of problems previously out of reach of the QC method. 
Lastly, we present the key components of a high-performance, distributed-memory realization of the new method, including a novel algorithm for supporting unparalleled levels of deformation. Overall, this new formulation and implementation allows us to efficiently perform simulations containing an unprecedented number of degrees of freedom with low approximation error.
Common-sense chemistry: The use of assumptions and heuristics in problem solving
NASA Astrophysics Data System (ADS)
Maeyer, Jenine Rachel
Students experience difficulty learning and understanding chemistry at higher levels, often because of cognitive biases stemming from common sense reasoning constraints. These constraints can be divided into two categories: assumptions (beliefs held about the world around us) and heuristics (the reasoning strategies or rules used to build predictions and make decisions). A better understanding and characterization of these constraints are of central importance in the development of curriculum and teaching strategies that better support student learning in science. It was the overall goal of this thesis to investigate student reasoning in chemistry, specifically to better understand and characterize the assumptions and heuristics used by undergraduate chemistry students. To achieve this, two mixed-methods studies were conducted, each with quantitative data collected using a questionnaire and qualitative data gathered through semi-structured interviews. The first project investigated the reasoning heuristics used when ranking chemical substances based on the relative value of a physical or chemical property, while the second study characterized the assumptions and heuristics used when making predictions about the relative likelihood of different types of chemical processes. Our results revealed that heuristics for cue selection and decision-making played a significant role in the construction of answers during the interviews. Many study participants relied frequently on one or more of the following heuristics to make their decisions: recognition, representativeness, one-reason decision-making, and arbitrary trend. These heuristics allowed students to generate answers in the absence of requisite knowledge, but often led students astray. When characterizing assumptions, our results indicate that students relied on intuitive, spurious, and valid assumptions about the nature of chemical substances and processes in building their responses. 
In particular, many interviewees seemed to view chemical reactions as macroscopic reassembling processes where favorability was related to the perceived ease with which reactants broke apart or products formed. Students also expressed spurious chemical assumptions based on the misinterpretation and overgeneralization of periodicity and electronegativity. Our findings suggest the need to create more opportunities for college chemistry students to monitor their thinking, develop and apply analytical ways of reasoning, and evaluate the effectiveness of shortcut reasoning procedures in different contexts.
CODE's new solar radiation pressure model for GNSS orbit determination
NASA Astrophysics Data System (ADS)
Arnold, D.; Meindl, M.; Beutler, G.; Dach, R.; Schaer, S.; Lutz, S.; Prange, L.; Sośnica, K.; Mervart, L.; Jäggi, A.
2015-08-01
The Empirical CODE Orbit Model (ECOM) of the Center for Orbit Determination in Europe (CODE), which was developed in the early 1990s, is widely used in the International GNSS Service (IGS) community. For a rather long time, spurious spectral lines have been known to exist in geophysical parameters, in particular in the Earth Rotation Parameters (ERPs) and in the estimated geocenter coordinates, which could recently be attributed to the ECOM. These effects grew gradually with the increasing influence of the GLONASS system in recent years in the CODE analysis, which is based on a rigorous combination of GPS and GLONASS since May 2003. In a first step we show that the problems associated with the ECOM are to the largest extent caused by the GLONASS, which was reaching full deployment by the end of 2011. GPS-only, GLONASS-only, and combined GPS/GLONASS solutions using the observations in the years 2009-2011 of a global network of 92 combined GPS/GLONASS receivers were analyzed for this purpose. In a second step we review direct solar radiation pressure (SRP) models for GNSS satellites. We demonstrate that only even-order short-period harmonic perturbations acting along the Sun-satellite direction occur for GPS and GLONASS satellites, and only odd-order perturbations acting along the direction perpendicular to both the Sun-satellite vector and the spacecraft's solar panel axis. Based on this insight we assess in the third step the performance of four candidate orbit models for the future ECOM. The geocenter coordinates, the ERP differences w.r.t. the IERS 08 C04 series of ERPs, the misclosures for the midnight epochs of the daily orbital arcs, and scale parameters of Helmert transformations for station coordinates serve as quality criteria. The old and updated ECOM are validated in addition with satellite laser ranging (SLR) observations and by comparing the orbits to those of the IGS and other analysis centers.
Based on all tests, we present a new extended ECOM which substantially reduces the spurious signals in the geocenter coordinate (by about a factor of 2-6), reduces the orbit misclosures at the day boundaries by about 10 %, slightly improves the consistency of the estimated ERPs with those of the IERS 08 C04 Earth rotation series, and substantially reduces the systematics in the SLR validation of the GNSS orbits.
Probabilistic multi-catalogue positional cross-match
NASA Astrophysics Data System (ADS)
Pineau, F.-X.; Derriere, S.; Motch, C.; Carrera, F. J.; Genova, F.; Michel, L.; Mingo, B.; Mints, A.; Nebot Gómez-Morán, A.; Rosen, S. R.; Ruiz Camuñas, A.
2017-01-01
Context. Catalogue cross-correlation is essential to building large sets of multi-wavelength data, whether it be to study the properties of populations of astrophysical objects or to build reference catalogues (or timeseries) from survey observations. Nevertheless, resorting to automated processes with limited sets of information available on large numbers of sources detected at different epochs with various filters and instruments inevitably leads to spurious associations. We need both statistical criteria to select detections to be merged as unique sources, and statistical indicators helping in achieving compromises between completeness and reliability of selected associations. Aims: We lay the foundations of a statistical framework for multi-catalogue cross-correlation and cross-identification based on explicit simplified catalogue models. A proper identification process should rely on both astrometric and photometric data. Under some conditions, the astrometric part and the photometric part can be processed separately and merged a posteriori to provide a single global probability of identification. The present paper addresses almost exclusively the astrometrical part and specifies the proper probabilities to be merged with photometric likelihoods. Methods: To select matching candidates in n catalogues, we used the Chi (or, indifferently, the Chi-square) test with 2(n-1) degrees of freedom. We thus call this cross-match a χ-match. In order to use Bayes' formula, we considered exhaustive sets of hypotheses based on combinatorial analysis. The volume of the χ-test domain of acceptance - a 2(n-1)-dimensional acceptance ellipsoid - is used to estimate the expected numbers of spurious associations. We derived priors for those numbers using a frequentist approach relying on simple geometrical considerations. Likelihoods are based on standard Rayleigh, χ and Poisson distributions that we normalized over the χ-test acceptance domain. 
We validated our theoretical results by generating and cross-matching synthetic catalogues. Results: The results we obtain do not depend on the order used to cross-correlate the catalogues. We applied the formalism described in the present paper to build the multi-wavelength catalogues used for the science cases of the Astronomical Resource Cross-matching for High Energy Studies (ARCHES) project. Our cross-matching engine is publicly available through a multi-purpose web interface. In a longer term, we plan to integrate this tool into the CDS XMatch Service.
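The χ-match criterion described above can be sketched for the simplest case of n = 2 catalogues, where the test has 2(n-1) = 2 degrees of freedom: two detections are candidate matches when their error-normalised squared separation falls inside the chi-square acceptance domain. Positions, errors, and the completeness level below are invented for illustration.

```python
# Toy chi-square positional match for two catalogues (2 degrees of freedom).
# Small-angle approximation: coordinates treated as a flat 2-D plane.
import numpy as np
from scipy.stats import chi2

def chi2_match(pos_a, sig_a, pos_b, sig_b, completeness=0.997):
    """Accept the pair if the chi-square statistic lies within the quantile."""
    var = sig_a**2 + sig_b**2                # combined positional variance
    stat = np.sum((pos_a - pos_b)**2 / var)  # chi-square statistic, 2 dof
    return stat <= chi2.ppf(completeness, df=2)

a = np.array([10.000, 20.000])      # (x, y) position, arbitrary units
b = np.array([10.0003, 20.0002])    # a nearby detection in the other catalogue
sigma = np.array([0.0005, 0.0005])  # 1-sigma positional errors per axis
print(chi2_match(a, sigma, b, sigma))
```

Raising `completeness` enlarges the acceptance ellipsoid, trading reliability for completeness; the volume of that ellipsoid is what the paper uses to estimate the expected number of spurious associations.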
6Li in metal-poor halo stars: real or spurious?
NASA Astrophysics Data System (ADS)
Steffen, M.; Cayrel, R.; Bonifacio, P.; Ludwig, H.-G.; Caffau, E.
2010-03-01
The presence of convective motions in the atmospheres of metal-poor halo stars leads to systematic asymmetries of the emergent spectral line profiles. Since such line asymmetries are very small, they can be safely ignored for standard spectroscopic abundance analysis. However, when it comes to the determination of the 6Li/7Li isotopic ratio, q(Li)=n(6Li)/n(7Li), the intrinsic asymmetry of the 7Li line must be taken into account, because its signature is essentially indistinguishable from the presence of a weak 6Li blend in the red wing of the 7Li line. In this contribution we quantity the error of the inferred 6Li/7Li isotopic ratio that arises if the convective line asymmetry is ignored in the fitting of the λ6707 Å lithium blend. Our conclusion is that 6Li/7Li ratios derived by Asplund et al. (2006), using symmetric line profiles, must be reduced by typically Δq(Li) ≈ 0.015. This diminishes the number of certain 6Li detections from 9 to 4 stars or less, casting some doubt on the existence of a 6Li plateau.
A happy conclusion to the SALT image quality saga
NASA Astrophysics Data System (ADS)
Crause, Lisa A.; O'Donoghue, Darragh E.; O'Connor, James E.; Strumpfer, Francois; Strydom, Ockert J.; Sass, Craig; du Plessis, Charl A.; Wiid, Eben; Love, Jonathan; Brink, Janus D.; Wilkinson, Martin; Coetzee, Chris
2012-09-01
Images obtained with the Southern African Large Telescope (SALT) during its commissioning phase showed degradation due to a large focus gradient and a variety of other optical aberrations. An extensive forensic investigation eventually traced the problem to the mechanical interface between the telescope and the secondary optics that form the Spherical Aberration Corrector (SAC). The SAC was brought down from the telescope in 2009 April, the problematic interface was replaced and the four corrector mirrors were optically tested and re-aligned. The surface figures of the SAC mirrors were confirmed to be within specification and a full system test following the re-alignment process yielded a RMS wavefront error of just 0.15 waves. The SAC was re-installed on the tracker in 2010 August and aligned with respect to the payload and primary mirror. Subsequent on-sky tests produced alarming results which were due to spurious signals being sent to the tracker by the auto-collimator, the instrument responsible for controlling the attitude of the SAC with respect to the primary mirror. Once this minor issue was resolved, we obtained uniform 1.1 arcsecond star images over the full 10 arcminute field of view of the telescope.
Bridging the Gap for High-Coherence, Strongly Coupled Superconducting Qubits
NASA Astrophysics Data System (ADS)
Yoder, Jonilyn; Kim, David; Baldo, Peter; Day, Alexandra; Fitch, George; Holihan, Eric; Hover, David; Samach, Gabriel; Weber, Steven; Oliver, William
Crossovers can play a critical role in increasing superconducting qubit device performance, as long as device coherence can be maintained even with the increased fabrication and circuit complexity. Specifically, crossovers can (1) enable a fully-connected ground plane, which reduces spurious modes and crosstalk in the circuit, and (2) increase coupling strength between qubits by facilitating interwoven qubit loops with large mutual inductances. Here we will describe our work at MIT Lincoln Laboratory to integrate superconducting air bridge crossovers into the fabrication of high-coherence capacitively-shunted superconducting flux qubits. We will discuss our process flow for patterning air bridges by resist reflow, and we will describe implementation of air bridges within our circuits. This research was funded in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA) and by the Assistant Secretary of Defense for Research and Engineering under Air Force Contract No. FA8721-05-C-0002. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of ODNI, IARPA, or the US Government.
Gerhart, James; Hall, Brian; Rajan, Kumar B.; Vechiu, Catalina; Canetti, Daphna; Hobfoll, Stevan E.
2017-01-01
Background and Objective: This study tested three alternative explanations for research indicating a positive, but heterogeneous relationship between self-reported posttraumatic growth (PTG) and posttraumatic stress symptoms (PSS): (a) the third-variable hypothesis that the relationship between PTG and PSS is a spurious one driven by positive relationships with resource loss, (b) the growth over time hypothesis that the relationship between PTG and PSS is initially a positive one, but becomes negative over time, and (c) the moderator hypothesis that resource loss moderates the relationship between PTG and PSS such that PTG is associated with lower levels of PSS as loss increases. Design and Method: A nationally representative sample (N = 1622) of Israelis was assessed at 3 time points during a period of ongoing violence. PTG, resource loss, and the interaction between PTG and loss were examined as lagged predictors of PSS to test the proposed hypotheses. Results: Results were inconsistent with all 3 hypotheses, showing that PTG positively predicted subsequent PSS when accounting for main and interactive effects of loss. Conclusions: Our results suggest that self-reported PTG is a meaningful but counterintuitive predictor of poorer mental health following trauma. PMID:27575750
Critical assessment and ramifications of a purported marine trophic cascade
NASA Astrophysics Data System (ADS)
Grubbs, R. Dean; Carlson, John K.; Romine, Jason G.; Curtis, Tobey H.; McElroy, W. David; McCandless, Camilla T.; Cotton, Charles F.; Musick, John A.
2016-02-01
When identifying potential trophic cascades, it is important to clearly establish the trophic linkages between predators and prey with respect to temporal abundance, demographics, distribution, and diet. In the northwest Atlantic Ocean, the depletion of large coastal sharks was thought to trigger a trophic cascade whereby predation release resulted in increased cownose ray abundance, which then caused increased predation on and subsequent collapse of commercial bivalve stocks. These claims were used to justify the development of a predator-control fishery for cownose rays, the “Save the Bay, Eat a Ray” fishery, to reduce predation on commercial bivalves. A reexamination of data suggests declines in large coastal sharks did not coincide with purported rapid increases in cownose ray abundance. Likewise, the increase in cownose ray abundance did not coincide with declines in commercial bivalves. The lack of temporal correlations coupled with published diet data suggest the purported trophic cascade is lacking the empirical linkages required of a trophic cascade. Furthermore, the life history parameters of cownose rays suggest they have low reproductive potential and their populations are incapable of rapid increases. Hypothesized trophic cascades should be closely scrutinized as spurious conclusions may negatively influence conservation and management decisions.
Gao, Lin; Zhang, Tongsheng; Wang, Jue; Stephen, Julia
2014-01-01
When connectivity analysis is carried out for event related EEG and MEG, the presence of strong spatial correlations from spontaneous activity in background may mask the local neuronal evoked activity and lead to spurious connections. In this paper, we hypothesized PCA decomposition could be used to diminish the background activity and further improve the performance of connectivity analysis in event related experiments. The idea was tested using simulation, where we found that for the 306-channel Elekta Neuromag system, the first 4 PCs represent the dominant background activity, and the source connectivity pattern after preprocessing is consistent with the true connectivity pattern designed in the simulation. Improving signal to noise of the evoked responses by discarding the first few PCs demonstrates increased coherences at major physiological frequency bands when removing the first few PCs. Furthermore, the evoked information was maintained after PCA preprocessing. In conclusion, it is demonstrated that the first few PCs represent background activity, and PCA decomposition can be employed to remove it to expose the evoked activity for the channels under investigation. Therefore, PCA can be applied as a preprocessing approach to improve neuronal connectivity analysis for event related data. PMID:22918837
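The preprocessing step described above (decompose the channel-by-time data with PCA, discard the first few components carrying the dominant background activity, and reconstruct) can be sketched as follows. The simulated data, channel count, and number of discarded components are illustrative stand-ins, not the 306-channel Elekta Neuromag configuration.

```python
# Sketch of PCA background removal for event-related sensor data:
# zero out the leading principal component(s) and reconstruct the signal.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n_channels, n_times = 64, 500
t = np.linspace(0, 1, n_times)

# Strong ongoing background oscillation plus a small transient evoked response.
background = 5 * np.outer(rng.normal(size=n_channels),
                          np.sin(2 * np.pi * 10 * t))
evoked = np.outer(rng.normal(size=n_channels),
                  np.exp(-((t - 0.3) ** 2) / 0.002))
data = background + evoked + 0.1 * rng.normal(size=(n_channels, n_times))

pca = PCA()
scores = pca.fit_transform(data.T)         # time samples x components
scores[:, :1] = 0.0                        # discard the first PC (background)
cleaned = pca.inverse_transform(scores).T  # back to channels x time
```

Connectivity measures (e.g. coherence) would then be computed on `cleaned` rather than `data`, so that shared background activity does not produce spurious connections.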
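The PCA-based background suppression described above can be sketched in a few lines. The channel count, the number of discarded components, and the synthetic signals below are illustrative assumptions, not the authors' exact pipeline:

```python
import numpy as np

def remove_background_pcs(data, n_discard=4):
    """Project out the first n_discard principal components, which are
    assumed to carry the dominant background activity.
    data: (n_channels, n_samples); returns the mean-removed,
    background-suppressed data."""
    centered = data - data.mean(axis=1, keepdims=True)
    # SVD of the centered data: columns of U are spatial PCs
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)
    # Zero the leading singular values and reconstruct
    s_clean = s.copy()
    s_clean[:n_discard] = 0.0
    return U @ np.diag(s_clean) @ Vt

# Toy example: a strong rank-one "background" plus one weak evoked channel
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000)
background = np.outer(rng.normal(size=8), np.sin(2 * np.pi * 10 * t))
evoked = np.zeros((8, 1000))
evoked[2] = 0.1 * np.sin(2 * np.pi * 40 * t)
cleaned = remove_background_pcs(background + evoked, n_discard=1)
```

Here the rank-one background dominates the first principal component, so discarding it leaves mainly the weak 40 Hz evoked signal.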
Critical assessment and ramifications of a purported marine trophic cascade
Grubbs, R. Dean; Carlson, John K; Romine, Jason G.; Curtis, Tobey H; McElroy, W. David; McCandless, Camilla T; Cotton, Charles F; Musick, John A.
2016-01-01
When identifying potential trophic cascades, it is important to clearly establish the trophic linkages between predators and prey with respect to temporal abundance, demographics, distribution, and diet. In the northwest Atlantic Ocean, the depletion of large coastal sharks was thought to trigger a trophic cascade whereby predation release resulted in increased cownose ray abundance, which then caused increased predation on and subsequent collapse of commercial bivalve stocks. These claims were used to justify the development of a predator-control fishery for cownose rays, the “Save the Bay, Eat a Ray” fishery, to reduce predation on commercial bivalves. A reexamination of data suggests declines in large coastal sharks did not coincide with purported rapid increases in cownose ray abundance. Likewise, the increase in cownose ray abundance did not coincide with declines in commercial bivalves. The lack of temporal correlations coupled with published diet data suggest the purported trophic cascade is lacking the empirical linkages required of a trophic cascade. Furthermore, the life history parameters of cownose rays suggest they have low reproductive potential and their populations are incapable of rapid increases. Hypothesized trophic cascades should be closely scrutinized as spurious conclusions may negatively influence conservation and management decisions. PMID:26876514
Narth, Christophe; Lagardère, Louis; Polack, Étienne; Gresh, Nohad; Wang, Qiantao; Bell, David R; Rackers, Joshua A; Ponder, Jay W; Ren, Pengyu Y; Piquemal, Jean-Philip
2016-02-15
We propose a general coupling of the Smooth Particle Mesh Ewald (SPME) approach for distributed multipoles to a short-range charge penetration correction modifying the charge-charge, charge-dipole, and charge-quadrupole energies. Such an approach significantly improves electrostatics when compared to ab initio values and has been calibrated on Symmetry-Adapted Perturbation Theory reference data. Various neutral molecular dimers have been tested, and results on the complexes of mono- and divalent cations with a water ligand are also provided. Transferability of the correction is addressed in the context of the implementation of the AMOEBA and SIBFA polarizable force fields in the TINKER-HP software. As the choices of the multipolar distribution are discussed, conclusions are drawn for future penetration-corrected polarizable force fields, highlighting the mandatory need for non-spurious procedures to obtain well-balanced and physically meaningful distributed moments. Finally, scalability and parallelism of the short-range corrected SPME approach are addressed, demonstrating that the damping function is computationally affordable and accurate for molecular dynamics simulations of complex bio- or bioinorganic systems in periodic boundary conditions. Copyright © 2016 Wiley Periodicals, Inc.
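A charge penetration correction of the general kind described damps the point-charge interaction at short range, where overlapping electron clouds make the bare Coulomb form too strong in magnitude, while recovering the point-charge limit at long range. The exponential damping form and the alpha value below are illustrative assumptions, not the paper's calibrated correction:

```python
import math

def coulomb(q1, q2, r):
    """Bare point-charge interaction energy (atomic units)."""
    return q1 * q2 / r

def damped_coulomb(q1, q2, r, alpha=2.0):
    """Charge-charge energy with a short-range penetration damping:
    the (1 - exp(-alpha*r)) factor is a common illustrative choice that
    attenuates the interaction as the charge clouds overlap."""
    return q1 * q2 * (1.0 - math.exp(-alpha * r)) / r
```

At large separations the damped energy matches the bare Coulomb value; at short range its magnitude is reduced, which is the qualitative effect a calibrated penetration correction provides.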
Trophic cascade facilitates coral recruitment in a marine reserve
Mumby, Peter J.; Harborne, Alastair R.; Williams, Jodene; Kappel, Carrie V.; Brumbaugh, Daniel R.; Micheli, Fiorenza; Holmes, Katherine E.; Dahlgren, Craig P.; Paris, Claire B.; Blackwell, Paul G.
2007-01-01
Reduced fishing pressure and weak predator–prey interactions within marine reserves can create trophic cascades that increase the number of grazing fishes and reduce the coverage of macroalgae on coral reefs. Here, we show that the impacts of reserves extend beyond trophic cascades and enhance the process of coral recruitment. Increased fish grazing, primarily driven by reduced fishing, was strongly negatively correlated with macroalgal cover and resulted in a 2-fold increase in the density of coral recruits within a Bahamian reef system. Our conclusions are robust because four alternative hypotheses that may generate a spurious correlation between grazing and coral recruitment were tested and rejected. Grazing appears to influence the density and community structure of coral recruits, but no detectable influence was found on the overall size-frequency distribution, community structure, or cover of corals. We interpret this absence of pattern in the adult coral community as symptomatic of the impact of a recent disturbance event that masks the recovery trajectories of individual reefs. Marine reserves are not a panacea for conservation but can facilitate the recovery of corals from disturbance and may help sustain the biodiversity of organisms that depend on a complex three-dimensional coral habitat. PMID:17488824
Greenhill, Lisa M; Carmichael, K Paige
2014-01-01
In April 2011, a nationwide survey of all 28 US veterinary schools was conducted to determine the comfort level (college climate) of veterinary medical students with people from whom they are different. The original hypothesis was that some historically underrepresented students, especially those who may exhibit differences from the predominant race, ethnicity, religion, gender, or sexual orientation, experience a less welcoming college climate. Nearly half of all US students responded to the survey, allowing investigators to make conclusions from the resulting data at a 99% CI with an error rate of less than 2% using Fowler's sample-size formula. Valuable information was captured despite a few study limitations, such as occasional spurious data reporting and little ability to respond in an open-ended manner (most questions had a finite number of allowed responses). The data suggest that while overall the majority of the student population is comfortable in American colleges, some individuals who are underrepresented in veterinary medicine (URVM) may not feel the same level of acceptance or inclusivity on veterinary school campuses. Further examination of these data sets may explain some of the unacceptably lower retention rates of some of these URVM students on campuses.
Sarafis, Pavlos; Tsounis, Andreas; Malliarou, Maria; Lahana, Eleni
2013-12-20
While medical ethics places a high value on providing truthful information to patients, disclosure practices are far from the norm in many countries. Breaking bad news remains a major problem that health care professionals face in their everyday clinical practice. Through a review of the relevant literature, an attempt is made to examine worldwide trends in this issue. Various electronic databases were searched by the authors, and through systematic selection 51 scientific articles were identified on which this literature review is based. Many parameters lead to the concealment of truth. Factors related to doctors, patients, and their close environment still maintain a strong resistance against disclosure of diagnosis and prognosis in terminally ill patients, while cultural influences lead to different approaches in various countries. Withholding the truth is mainly based on the fear of causing despair to patients. However, fostering a spurious hope hides the danger of its total loss, and it can disturb the patient-doctor relationship.
NASA Astrophysics Data System (ADS)
Marras, Simone; Kopera, Michal A.; Constantinescu, Emil M.; Suckale, Jenny; Giraldo, Francis X.
2018-04-01
The high-order numerical solution of the non-linear shallow water equations is susceptible to Gibbs oscillations in the proximity of strong gradients. In this paper, we tackle this issue by presenting a shock capturing model based on the numerical residual of the solution. Via numerical tests, we demonstrate that the model removes the spurious oscillations in the proximity of strong wave fronts while preserving their strength. Furthermore, for coarse grids, it prevents energy from building up at small wave-numbers. When applied to the continuity equation to stabilize the water surface, the addition of the shock capturing scheme does not affect mass conservation. We found that our model improves the continuous and discontinuous Galerkin solutions alike in the proximity of sharp fronts propagating on wet surfaces. In the presence of wet/dry interfaces, however, the model needs to be enhanced with an inundation scheme, which we do not address in this paper.
A supertree of early tetrapods.
Ruta, Marcello; Jeffery, Jonathan E; Coates, Michael I
2003-01-01
A genus-level supertree for early tetrapods is built using a matrix representation of 50 source trees. The analysis of all combined trees delivers a long-stemmed topology in which most taxonomic groups are assigned to the tetrapod stem. A second analysis, which excludes source trees superseded by more comprehensive studies, supports a deep phylogenetic split between lissamphibian and amniote total groups. Instances of spurious groups are rare in both analyses. The results of the pruned second analysis are mostly comparable with those of a recent, character-based and large-scale phylogeny of Palaeozoic tetrapods. Outstanding areas of disagreement include the branching sequence of lepospondyls and the content of the amniote crown group, in particular the placement of diadectomorphs as stem diapsids. Supertrees are unsurpassed in their ability to summarize relationship patterns from multiple independent topologies. Therefore, they might be used as a simple test of the degree of corroboration of nodes in the contributory analyses. However, we urge caution in using them as a replacement for character-based cladograms and for inferring macroevolutionary patterns. PMID:14667343
Ancestry estimation and control of population stratification for sequence-based association studies.
Wang, Chaolong; Zhan, Xiaowei; Bragg-Gresham, Jennifer; Kang, Hyun Min; Stambolian, Dwight; Chew, Emily Y; Branham, Kari E; Heckenlively, John; Fulton, Robert; Wilson, Richard K; Mardis, Elaine R; Lin, Xihong; Swaroop, Anand; Zöllner, Sebastian; Abecasis, Gonçalo R
2014-04-01
Estimating individual ancestry is important in genetic association studies where population structure leads to false positive signals, although assigning ancestry remains challenging with targeted sequence data. We propose a new method for the accurate estimation of individual genetic ancestry, based on direct analysis of off-target sequence reads, and implement our method in the publicly available LASER software. We validate the method using simulated and empirical data and show that the method can accurately infer worldwide continental ancestry when used with sequencing data sets with whole-genome shotgun coverage as low as 0.001×. For estimates of fine-scale ancestry within Europe, the method performs well with coverage of 0.1×. On an even finer scale, the method improves discrimination between exome-sequenced study participants originating from different provinces within Finland. Finally, we show that our method can be used to improve case-control matching in genetic association studies and to reduce the risk of spurious findings due to population structure.
NASA Astrophysics Data System (ADS)
Kioseoglou, George; Hanbicki, Aubrey T.; Sullivan, James M.; van't Erve, Olaf M. J.; Li, Connie H.; Erwin, Steven C.; Mallory, Robert; Yasar, Mesut; Petrou, Athos; Jonker, Berend T.
2004-11-01
The use of carrier spin in semiconductors is a promising route towards new device functionality and performance. Ferromagnetic semiconductors (FMSs) are promising materials in this effort. An n-type FMS that can be epitaxially grown on a common device substrate is especially attractive. Here, we report electrical injection of spin-polarized electrons from an n-type FMS, CdCr2Se4, into an AlGaAs/GaAs-based light-emitting diode structure. An analysis of the electroluminescence polarization based on quantum selection rules provides a direct measure of the sign and magnitude of the injected electron spin polarization. The sign reflects minority rather than majority spin injection, consistent with our density-functional-theory calculations of the CdCr2Se4 conduction-band edge. This approach confirms the exchange-split band structure and spin-polarized carrier population of an FMS, and demonstrates a litmus test for these FMS hallmarks that discriminates against spurious contributions from magnetic precipitates.
Rostoks, Nils; Ramsay, Luke; MacKenzie, Katrin; Cardle, Linda; Bhat, Prasanna R.; Roose, Mikeal L.; Svensson, Jan T.; Stein, Nils; Varshney, Rajeev K.; Marshall, David F.; Graner, Andreas; Close, Timothy J.; Waugh, Robbie
2006-01-01
Genomewide association studies depend on the extent of linkage disequilibrium (LD), the number and distribution of markers, and the underlying structure in populations under study. Outbreeding species generally exhibit limited LD, and consequently, a very large number of markers are required for effective whole-genome association genetic scans. In contrast, several of the world's major food crops are self-fertilizing inbreeding species with narrow genetic bases and theoretically extensive LD. Together these are predicted to result in a combination of low resolution and a high frequency of spurious associations in LD-based studies. However, inbred elite plant varieties represent a unique human-induced pseudooutbreeding population that has been subjected to strong selection for advantageous alleles. By assaying 1,524 genomewide SNPs we demonstrate that, after accounting for population substructure, the level of LD exhibited in elite northwest European barley, a typical inbred cereal crop, can be effectively exploited to map traits by using whole-genome association scans with several hundred to thousands of biallelic SNPs. PMID:17085595
The Inevitability of Assessing Reasons in Debates about Conscientious Objection in Medicine.
Card, Robert F
2017-01-01
This article first critically reviews the major philosophical positions in the literature on conscientious objection and finds that they possess significant flaws. A substantial number of these problems stem from the fact that these views fail to assess the reasons offered by medical professionals in support of their objections. This observation is used to motivate the reasonability view, one part of which states: A practitioner who lodges a conscientious refusal must publicly state his or her objection as well as the reasoned basis for the objection and have these subjected to critical evaluation before a conscientious exemption can be granted (the reason-giving requirement). It is then argued that when defenders of the other philosophical views attempt to avoid granting an accommodation to spurious objections based on discrimination, empirically mistaken beliefs, or other unjustified biases, they are implicitly committed to the reason-giving requirement. This article concludes that based on these considerations, a reason-giving position such as the reasonability view possesses a decisive advantage in this debate.
Immigrant community integration in world cities
Lamanna, Fabio; Lenormand, Maxime; Salas-Olmedo, María Henar; Romanillos, Gustavo; Gonçalves, Bruno
2018-01-01
As a consequence of the accelerated globalization process, today major cities all over the world are characterized by increasing multiculturalism. The integration of immigrant communities may be affected by social polarization and spatial segregation. How are these dynamics evolving over time? To what extent are the different policies launched to tackle these problems working? These are critical questions traditionally addressed by studies based on surveys and census data. Such sources are safe to avoid spurious biases, but the data collection becomes intensive and rather expensive work. Here, we conduct a comprehensive study on immigrant integration in 53 world cities by introducing an innovative approach: an analysis of the spatio-temporal communication patterns of immigrant and local communities based on language detection in Twitter and on novel metrics of spatial integration. We quantify the Power of Integration of cities (their capacity to spatially integrate diverse cultures) and characterize the relations between different cultures when acting as hosts or immigrants. PMID:29538383
Visualization of Pulsar Search Data
NASA Astrophysics Data System (ADS)
Foster, R. S.; Wolszczan, A.
1993-05-01
The search for periodic signals from rotating neutron stars, or pulsars, has been a computationally taxing problem for astronomers for more than twenty-five years. Over this time interval, increases in computational capability have allowed ever more sensitive searches covering a larger parameter space. The volume of input data and the general presence of radio frequency interference typically produce numerous spurious signals. Visualization of the search output and enhanced real-time processing of significant candidate events allow the pulsar searcher to optimally process and search for new radio pulsars. The pulsar search algorithm and visualization system presented in this paper currently run on serial RISC-based workstations, a traditional vector-based supercomputer, and a massively parallel computer. The serial software algorithm and its modifications for massively parallel computing are described. Four successive searches for millisecond-period radio pulsars using the Arecibo telescope at 430 MHz have resulted in the successful detection of new long-period and millisecond-period radio pulsars.
A stabilized element-based finite volume method for poroelastic problems
NASA Astrophysics Data System (ADS)
Honório, Hermínio T.; Maliska, Clovis R.; Ferronato, Massimiliano; Janna, Carlo
2018-07-01
The coupled equations of Biot's poroelasticity, consisting of stress equilibrium and fluid mass balance in deforming porous media, are numerically solved. The governing partial differential equations are discretized by an Element-based Finite Volume Method (EbFVM), which can be used in three dimensional unstructured grids composed of elements of different types. One of the difficulties for solving these equations is the numerical pressure instability that can arise when undrained conditions take place. In this paper, a stabilization technique is developed to overcome this problem by employing an interpolation function for displacements that considers also the pressure gradient effect. The interpolation function is obtained by the so-called Physical Influence Scheme (PIS), typically employed for solving incompressible fluid flows governed by the Navier-Stokes equations. Classical problems with analytical solutions, as well as three-dimensional realistic cases are addressed. The results reveal that the proposed stabilization technique is able to eliminate the spurious pressure instabilities arising under undrained conditions at a low computational cost.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chandra, Anirban; Patra, Puneet Kumar; Bhattacharya, Baidurya, E-mail: baidurya@civil.iitkgp.ernet.in
A nanomechanical resonator based sensor works by detecting small changes in the natural frequency of the device in the presence of external agents. In this study, we address the length- and temperature-dependent sensitivity of precompressed armchair Boron-Nitride nanotubes towards their use as sensors. The vibrational data, obtained using molecular dynamics simulations, are analyzed for frequency content through the fast Fourier transform. As the temperature of the system rises, the vibrational spectrum becomes noisy, and the modal frequencies show a red-shift irrespective of the length of the nanotube, suggesting that nanotube based sensors calibrated at a particular temperature may not function desirably at other temperatures. Temperature-induced noise becomes increasingly pronounced with decreasing nanotube length. For the shorter nanotube at higher temperatures, we observe multiple closely spaced peaks near the natural frequency, which create a masking effect and reduce the sensitivity of detection. However, longer nanotubes do not show these spurious frequencies and are considerably more sensitive than the shorter ones.
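The frequency-content analysis described (FFT of the vibrational trace, then reading off modal peaks) can be sketched as follows. The sampling rate and the synthetic 50 Hz mode are illustrative choices, not values from the study:

```python
import numpy as np

def dominant_frequency(signal, dt):
    """Return the frequency (Hz) of the strongest spectral peak
    in a real-valued time series sampled at interval dt (s)."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=dt)
    spectrum[0] = 0.0          # ignore the DC component
    return freqs[np.argmax(spectrum)]

# Synthetic "vibrational" trace: one 50 Hz mode plus noise
rng = np.random.default_rng(1)
dt = 1e-3
t = np.arange(0.0, 1.0, dt)
trace = np.sin(2 * np.pi * 50 * t) + 0.2 * rng.normal(size=t.size)
f0 = dominant_frequency(trace, dt)
```

A noisier spectrum, as reported at higher temperatures, shows up here as additional peaks competing with the modal line, which is exactly what degrades peak-picking sensitivity.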
Zhao, Wen; Ma, Hong; Zhang, Hua; Jin, Jiang; Dai, Gang; Hu, Lin
2017-01-01
The cognitive radio wireless sensor network (CR-WSN) is attracting increasing attention for its capacity to automatically extract broadband instantaneous radio environment information. Obtaining sufficient linearity and spurious-free dynamic range (SFDR) is a significant premise for guaranteeing sensing performance, which, however, usually suffers from the nonlinear distortion introduced by the broadband radio frequency (RF) front-end in the sensor node. Moreover, unlike other existing methods, the joint effect of non-constant group delay distortion and nonlinear distortion is discussed, and its corresponding solution is provided in this paper. After that, a nonlinearity mitigation architecture based on best delay searching is proposed. Finally, verification experiments, both on simulated signals and on signals from real-world measurements, are conducted and discussed. The achieved results demonstrate that with best delay searching, nonlinear distortion can be alleviated significantly and, in this way, spectrum sensing performance is more reliable and accurate. PMID:28956860
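The "best delay searching" idea, aligning two signal paths before estimating and removing the distortion, can be illustrated with a brute-force lag scan. This is a generic sketch of delay estimation by cross-correlation, not the paper's architecture:

```python
import numpy as np

def best_delay(reference, distorted, max_lag):
    """Return the integer lag (in samples) that best aligns `distorted`
    with `reference`, found by scanning the cross-correlation over
    candidate lags in [-max_lag, max_lag]."""
    best_lag, best_score = 0, -np.inf
    n = len(reference)
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = reference[:n - lag], distorted[lag:]
        else:
            a, b = reference[-lag:], distorted[:lag]
        score = float(np.dot(a, b))   # overlap correlation at this lag
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# A reference waveform and a copy delayed by 3 samples
rng = np.random.default_rng(2)
reference = rng.normal(size=500)
distorted = np.concatenate([np.zeros(3), reference])[:500]
lag = best_delay(reference, distorted, max_lag=10)
```

Once the delay is known, the two paths can be time-aligned so that a subsequent nonlinearity model is fitted on samples that actually correspond to each other.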
A High Order Finite Difference Scheme with Sharp Shock Resolution for the Euler Equations
NASA Technical Reports Server (NTRS)
Gerritsen, Margot; Olsson, Pelle
1996-01-01
We derive a high-order finite difference scheme for the Euler equations that satisfies a semi-discrete energy estimate, and present an efficient strategy for the treatment of discontinuities that leads to sharp shock resolution. The formulation of the semi-discrete energy estimate is based on a symmetrization of the Euler equations that preserves the homogeneity of the flux vector, a canonical splitting of the flux derivative vector, and the use of difference operators that satisfy a discrete analogue to the integration by parts procedure used in the continuous energy estimate. Around discontinuities or sharp gradients, refined grids are created on which the discrete equations are solved after adding a newly constructed artificial viscosity. The positioning of the sub-grids and computation of the viscosity are aided by a detection algorithm which is based on a multi-scale wavelet analysis of the pressure grid function. The wavelet theory provides easy to implement mathematical criteria to detect discontinuities, sharp gradients and spurious oscillations quickly and efficiently.
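The wavelet-based detection criterion can be illustrated in its simplest form: one level of Haar detail coefficients on a grid function, where a large coefficient flags a discontinuity. The jump location and threshold below are illustrative, not the paper's detection algorithm:

```python
import numpy as np

def haar_details(u):
    """One level of Haar wavelet detail coefficients; large magnitudes
    flag cells containing sharp gradients or discontinuities."""
    pairs = u[: len(u) // 2 * 2].reshape(-1, 2)
    return (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)

# Smooth background plus one jump, placed inside coefficient 16
# (between samples 32 and 33) so a single Haar pair straddles it
x = np.linspace(0.0, 1.0, 64)
u = np.where(x < 0.515, 1.0, 0.0) + 0.01 * np.sin(2.0 * np.pi * x)
details = haar_details(u)
flagged = np.flatnonzero(np.abs(details) > 0.5)
```

The smooth sinusoidal part produces detail coefficients near zero, while the jump produces one coefficient of magnitude about 1/sqrt(2), so thresholding cleanly separates sharp features from smooth regions, the property the detection algorithm exploits to place sub-grids and viscosity.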
Multi-octave analog photonic link with improved second- and third-order SFDRs
NASA Astrophysics Data System (ADS)
Tan, Qinggui; Gao, Yongsheng; Fan, Yangyu; He, You
2018-03-01
The second- and third-order spurious-free dynamic ranges (SFDRs) are two key performance indicators for a multi-octave analog photonic link (APL). Linearization methods for either the second- or third-order intermodulation distortion (IMD2 or IMD3) have been intensively studied, but simultaneous suppression of both has rarely been reported. In this paper, we propose an APL with improved second- and third-order SFDRs for multi-octave applications, based on two parallel DPMZM-based sub-APLs. The IMD3 in each sub-APL is suppressed by properly biasing the DPMZM, and the IMD2 is suppressed by balanced detection of the two sub-APLs. The experiment demonstrates significant suppression ratios for both the IMD2 and IMD3 after linearization in the proposed link, and the measured second- and third-order SFDRs with the operating frequency from 6 to 40 GHz are above 91 dB·Hz^(1/2) and 116 dB·Hz^(2/3), respectively.
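For reference, SFDR is read off a spectrum as the gap between the carrier and the strongest spurious line. A minimal sketch with synthetic power levels follows; the noise-bandwidth normalization that yields the Hz^(1/2) and Hz^(2/3) units above is omitted:

```python
def sfdr_db(spectrum_dbm, fundamental_bin):
    """Spurious-free dynamic range in dB: carrier power minus the power
    of the strongest remaining spectral line (the worst spur)."""
    spurs = [p for i, p in enumerate(spectrum_dbm) if i != fundamental_bin]
    return spectrum_dbm[fundamental_bin] - max(spurs)

# Carrier at -10 dBm in bin 1; worst intermodulation spur at -75 dBm
spectrum = [-120.0, -10.0, -120.0, -75.0, -120.0]
dynamic_range = sfdr_db(spectrum, fundamental_bin=1)
```

Suppressing IMD2 and IMD3, as the proposed link does, lowers the worst spur and therefore directly widens this gap.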
Fast online and index-based algorithms for approximate search of RNA sequence-structure patterns
2013-01-01
Background It is well known that the search for homologous RNAs is more effective if both sequence and structure information is incorporated into the search. However, current tools for searching with RNA sequence-structure patterns cannot fully handle mutations occurring on both these levels or are simply not fast enough for searching large sequence databases because of the high computational costs of the underlying sequence-structure alignment problem. Results We present new fast index-based and online algorithms for approximate matching of RNA sequence-structure patterns supporting a full set of edit operations on single bases and base pairs. Our methods efficiently compute semi-global alignments of structural RNA patterns and substrings of the target sequence whose costs satisfy a user-defined sequence-structure edit distance threshold. For this purpose, we introduce a new computing scheme to optimally reuse the entries of the required dynamic programming matrices for all substrings and combine it with a technique for avoiding the alignment computation of non-matching substrings. Our new index-based methods exploit suffix arrays preprocessed from the target database and achieve running times that are sublinear in the size of the searched sequences. To support the description of RNA molecules that fold into complex secondary structures with multiple ordered sequence-structure patterns, we use fast algorithms for the local or global chaining of approximate sequence-structure pattern matches. The chaining step removes spurious matches from the set of intermediate results, in particular of patterns with little specificity. In benchmark experiments on the Rfam database, our improved online algorithm is faster than the best previous method by up to a factor of 45. Our best new index-based algorithm achieves a speedup factor of 560. Conclusions The presented methods achieve considerable speedups compared to the best previous method. This, together with the expected sublinear running time of the presented index-based algorithms, allows for the first time approximate matching of RNA sequence-structure patterns in large sequence databases. Beyond the algorithmic contributions, we provide RaligNAtor, a robust and well-documented open-source software package implementing the algorithms presented in this manuscript. The RaligNAtor software is available at http://www.zbh.uni-hamburg.de/ralignator. PMID:23865810
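The sequence-level core of the problem, semi-global matching of a pattern against any substring of the target under an edit-distance threshold, can be sketched with classic dynamic programming. This simplified version scores single-base edits only; the base-pair (structure) operations and the index/chaining machinery of RaligNAtor are omitted:

```python
def min_edit_substring(pattern, text):
    """Smallest edit distance between `pattern` and any substring of
    `text` (semi-global alignment: text before and after the match is
    free). Classic O(|pattern| * |text|) dynamic programming."""
    n = len(text)
    prev = [0] * (n + 1)                      # row 0: free leading text
    for i in range(1, len(pattern) + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if pattern[i - 1] == text[j - 1] else 1
            curr[j] = min(prev[j - 1] + cost,  # match / mismatch
                          prev[j] + 1,         # gap in text
                          curr[j - 1] + 1)     # gap in pattern
        prev = curr
    return min(prev)                           # free trailing text
```

A match is reported wherever this distance stays within the user-defined threshold; the full method additionally reuses matrix entries across substrings and skips provably non-matching regions.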
Can the Lorenz-Gauge Potentials Be Considered Physical Quantities?
ERIC Educational Resources Information Center
Heras, Jose A.; Fernandez-Anaya, Guillermo
2010-01-01
Two results support the idea that the scalar and vector potentials in the Lorenz gauge can be considered to be physical quantities: (i) they separately satisfy the properties of causality and propagation at the speed of light and do not imply spurious terms and (ii) they can naturally be written in a manifestly covariant form. In this paper we…
ERIC Educational Resources Information Center
Dussault, Frederic; Brendgen, Mara; Vitaro, Frank; Wanner, Brigitte; Tremblay, Richard E.
2011-01-01
Background: Research shows high co-morbidity between gambling problems and depressive symptoms, but the directionality of this link is unclear. Moreover, the co-occurrence of gambling problems and depressive symptoms could be spurious and explained by common underlying risk factors such as impulsivity and socio-family risk. The goals of the…
ERIC Educational Resources Information Center
Savolainen, Jukka; Mason, W. Alex; Hughes, Lorine A.; Ebeling, Hanna; Hurtig, Tuula M.; Taanila, Anja M.
2015-01-01
There are strong reasons to assume that early onset of puberty accelerates coital debut among adolescent girls. Although many studies support this assumption, evidence regarding the putative causal processes is limited and inconclusive. In this research, longitudinal data from the 1986 Northern Finland Birth Cohort Study (N = 2,596) were used to…
Principles of Air Defense and Air Vehicle Penetration
2000-03-01
Range: For reliable detection, the target signal must reach some minimum or threshold value called S... When internal noise is the only interfer...analyze air defense and air vehicle penetration. Unique expected-value models are developed with frequent numerical examples. Radar...penetrator in the presence of spurious returns from internal and external noise will be discussed. Tracking: With sufficient sensor information to determine
New Students' Peer Integration and Exposure to Deviant Peers: Spurious Effects of School Moves?
ERIC Educational Resources Information Center
Siennick, Sonja E.; Widdowson, Alex O.; Ragan, Daniel T.
2017-01-01
School moves during adolescence predict lower peer integration and higher exposure to delinquent peers. Yet mobility and peer problems have several common correlates, so differences in movers' and non-movers' social adjustment may be due to selection rather than causal effects of school moves. Drawing on survey and social network data from a…
Reduction of Photodiode Nonlinearities by Adaptive Biasing
2016-10-14
Approved for public release; distribution is unlimited. Meredith N. Hutchinson; Nicholas J. Frigo. Photonics Technology Branch, Optical Sciences... Keywords: fiber optics; analog photonics. RF photonic links impress information...to nonlinearities. These spurious tones masquerade as signals and impair the performance of the photonic link. Earlier research has shown the
Speckle averaging system for laser raster-scan image projection
Tiszauer, Detlev H.; Hackel, Lloyd A.
1998-03-17
The viewers' perception of laser speckle in a laser-scanned image projection system is modified or eliminated by the addition of an optical deflection system that effectively presents a new speckle realization at each point on the viewing screen to each viewer for every scan across the field. The speckle averaging is accomplished without introduction of spurious imaging artifacts. 5 figs.
Low-Noise Implantable Electrode
NASA Technical Reports Server (NTRS)
Lund, G. F.
1982-01-01
New implantable electrocardiogram electrode much less sensitive than previous designs to spurious biological potentials. Designed in novel "pocket" configuration, new electrode is intended as sensor for radiotelemetry of biological parameters in experiments on unrestrained subjects. Electrode is essentially squashed cylinder that admits body fluid into interior. Cylinder and electrical lead are made of stainless steel. Spot welding and crimping are used for assembly, rather than soldering.
On the dynamics of some grid adaption schemes
NASA Technical Reports Server (NTRS)
Sweby, Peter K.; Yee, Helen C.
1994-01-01
The dynamics of a one-parameter family of mesh equidistribution schemes coupled with finite difference discretisations of linear and nonlinear convection-diffusion model equations is studied numerically. It is shown that, when time marched to steady state, the grid adaption not only influences the stability and convergence rate of the overall scheme, but can also introduce spurious dynamics to the numerical solution procedure.
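The core operation in such mesh equidistribution schemes can be sketched in a few lines. The following is a generic, illustrative implementation (not the specific one-parameter family studied in the paper): mesh points are redistributed so that an arc-length-like monitor function, with a free weighting parameter, is equidistributed over the intervals.

```python
import numpy as np

def equidistribute(x, u, alpha=1.0):
    """Redistribute mesh points x so that the monitor function
    w = sqrt(1 + alpha * u_x^2) is equidistributed over the mesh
    intervals. alpha is the scheme's free parameter (illustrative)."""
    ux = np.gradient(u, x)
    w = np.sqrt(1.0 + alpha * ux**2)
    # Cumulative integral of the monitor function (trapezoidal rule).
    W = np.concatenate([[0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))])
    # Invert: place the new points at equal increments of W.
    targets = np.linspace(0.0, W[-1], len(x))
    return np.interp(targets, W, x)

# A steep front at x = 0.5: the adapted mesh should cluster points there.
x = np.linspace(0.0, 1.0, 41)
u = np.tanh(50 * (x - 0.5))
x_new = equidistribute(x, u, alpha=100.0)
```

In a time-marched setting this redistribution is applied each step (or coupled to the PDE solver), which is exactly where the spurious dynamics studied in the paper can enter.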
Oblique radiation lateral open boundary conditions for a regional climate atmospheric model
NASA Astrophysics Data System (ADS)
Cabos Narvaez, William; De Frutos Redondo, Jose Antonio; Perez Sanz, Juan Ignacio; Sein, Dmitry
2013-04-01
The prescription of lateral boundary conditions represents a very important issue for limited-area regional atmospheric models. The ill-posed nature of the open boundary conditions makes it necessary to devise schemes that filter spurious wave reflections at the boundaries, and it is desirable to have one boundary condition per variable. On the other hand, because of the essentially hyperbolic nature of the equations solved in state-of-the-art atmospheric models, external data are required only for inward boundary fluxes. These circumstances make radiation lateral boundary conditions a good choice for filtering spurious wave reflections. Here we apply the adaptive oblique radiation modification proposed by Miyakoda and Rosati to each of the prognostic variables of the REMO regional atmospheric model and compare it with the more common normal radiation condition used in REMO. In the proposed scheme, special attention is paid to the estimation of the radiation phase speed, which is essential for detecting the direction of boundary fluxes. One difference from the classical scheme is that, for outward propagation, the adaptive nudging imposed at the boundaries minimizes under- and over-specification problems while adequately incorporating the external information.
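The phase-speed estimate at the heart of radiation boundary conditions can be sketched as follows. This is a generic Orlanski-style estimator from one-sided differences (illustrative only, not REMO's actual discretization); the clipping to [0, dx/dt] enforces outward-only, stable propagation.

```python
import numpy as np

def radiation_phase_speed(phi_b, phi_bm1, phi_b_prev, dt, dx):
    """Orlanski-style local phase-speed estimate at an open boundary,
    c = -phi_t / phi_x, from one-sided differences, clipped to the
    stable outward range [0, dx/dt]. Illustrative form only."""
    phi_t = (phi_b - phi_b_prev) / dt        # backward difference in time
    phi_x = (phi_b - phi_bm1) / dx           # one-sided difference in space
    if abs(phi_x) < 1e-12:
        return 0.0
    return float(np.clip(-phi_t / phi_x, 0.0, dx / dt))

# Check against an analytic rightward-traveling wave phi = sin(x - c*t).
c, dt, dx, xb, t = 0.5, 0.01, 0.01, 1.0, 1.0
c_est = radiation_phase_speed(np.sin(xb - c * t),
                              np.sin(xb - dx - c * t),
                              np.sin(xb - c * (t - dt)), dt, dx)
```

A positive estimate signals outward propagation (radiate the field out); a clipped-to-zero estimate signals an inward flux, where external data would be imposed instead.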
Pulsation Properties of Carbon and Oxygen Red Giants
NASA Astrophysics Data System (ADS)
Percy, J. R.; Huang, D. J.
2015-07-01
We have used up to 12 decades of AAVSO visual observations, and the AAVSO VSTAR software package, to determine new and/or improved periods of 5 pulsating biperiodic carbon (C-type) red giants and 12 pulsating biperiodic oxygen (M-type) red giants. We have also determined improved periods for 43 additional C-type red giants, in part to search for more biperiodic C-type stars, and also for 46 M-type red giants. For a small sample of the biperiodic C-type and M-type stars, we have used wavelet analysis to determine the time scales of the cycles of amplitude increase and decrease. The C-type and M-type stars do not differ significantly in their period ratios (first overtone to fundamental). There is a marginal difference in the lengths of their amplitude cycles. The most important result of this study is that, because of the semiregularity of these stars and the presence of alias, harmonic, and spurious periods, the periods which we and others derive for these stars, especially the smaller-amplitude ones, must be determined and interpreted with great care and caution. For instance, spurious periods of one year can produce an apparent excess of stars at that period in the period distribution.
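The one-year spurious periods mentioned above follow from the standard alias relation for seasonally sampled data: a true frequency f combines with the yearly sampling frequency to produce apparent peaks at |f ± k/yr|. A small illustrative calculation (the helper function is hypothetical, for demonstration only):

```python
import numpy as np

YEAR = 365.25  # days; seasonal gaps imprint a 1/yr frequency on the data

def alias_periods(p_true, k=1):
    """Candidate alias periods produced by yearly sampling gaps:
    f_alias = |f_true +/- k/YEAR|. Hypothetical helper for illustration;
    a real period search should inspect the spectral window directly."""
    f = 1.0 / p_true
    fy = k / YEAR
    return tuple(1.0 / abs(f + s * fy) for s in (+1.0, -1.0))

# A genuine 300-day pulsation can masquerade at these alias periods:
p_plus, p_minus = alias_periods(300.0)
```

For a 300-day pulsator this gives alias peaks near 165 d and 1680 d, neither of which is a physical period of the star.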
Suppressing Ghost Diffraction in E-Beam-Written Gratings
NASA Technical Reports Server (NTRS)
Wilson, Daniel; Backlund, Johan
2009-01-01
A modified scheme for electron-beam (E-beam) writing used in the fabrication of convex or concave diffraction gratings makes it possible to suppress the ghost diffraction heretofore exhibited by such gratings. Ghost diffraction is a spurious component of diffraction caused by a spurious component of grating periodicity as described below. The ghost diffraction orders appear between the main diffraction orders and are typically more intense than is the diffuse scattering from the grating. At such high intensity, ghost diffraction is the dominant source of degradation of grating performance. The pattern of a convex or concave grating is established by electron-beam writing in a resist material coating a substrate that has the desired convex or concave shape. Unfortunately, as a result of the characteristics of electrostatic deflectors used to control the electron beam, it is possible to expose only a small field - typically between 0.5 and 1.0 mm wide - at a given fixed position of the electron gun relative to the substrate. To make a grating larger than the field size, it is necessary to move the substrate to make it possible to write fields centered at different positions, so that the larger area is synthesized by "stitching" the exposed fields.
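The ghost-order mechanism can be illustrated numerically: a small periodic phase error at the field-stitching frequency acts like weak phase modulation of the grating, producing sidebands (ghost orders) between the main diffraction orders. The numbers below are illustrative, not an actual grating design.

```python
import numpy as np

# Carrier grating at f_g cycles per window; stitching fields repeating
# f_s times per window add a small phase ripple of amplitude eps.
N = 4096
x = np.arange(N) / N
f_g, f_s = 64, 8          # grating and stitching spatial frequencies
eps = 0.05                # stitching-induced phase error (radians)

phase = 2 * np.pi * f_g * x + eps * np.sin(2 * np.pi * f_s * x)
field = np.exp(1j * phase)            # unit-amplitude phase grating
spec = np.abs(np.fft.fft(field)) / N  # far-field order amplitudes
```

The spectrum shows the main order at bin f_g and ghost sidebands at f_g ± f_s with amplitude roughly eps/2, i.e. far above the diffuse background, matching the observation that ghost orders typically dominate scatter.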
ADOLESCENT WORK INTENSITY, SCHOOL PERFORMANCE, AND ACADEMIC ENGAGEMENT.
Staff, Jeremy; Schulenberg, John E; Bachman, Jerald G
2010-07-01
Teenagers working over 20 hours per week perform worse in school than youth who work less. There are two competing explanations for this association: (1) that paid work takes time and effort away from activities that promote achievement, such as completing homework, preparing for examinations, getting help from parents and teachers, and participating in extracurricular activities; and (2) that the relationship between paid work and school performance is spurious, reflecting preexisting differences between students in academic ability, motivation, and school commitment. Using longitudinal data from the ongoing national Monitoring the Future project, this research examines the impact of teenage employment on school performance and academic engagement during the 8th, 10th, and 12th grades. We address issues of spuriousness by using a two-level hierarchical model to estimate the relationships of within-individual changes in paid work to changes in school performance and other school-related measures. Unlike prior research, we also compare youth school performance and academic orientation when they are actually working in high-intensity jobs to when they are jobless and wish to work intensively. Results indicate that the mere wish for intensive work corresponds with academic difficulties in a manner similar to actual intensive work.
AUTOCLASSIFICATION OF THE VARIABLE 3XMM SOURCES USING THE RANDOM FOREST MACHINE LEARNING ALGORITHM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farrell, Sean A.; Murphy, Tara; Lo, Kitty K., E-mail: s.farrell@physics.usyd.edu.au
In the current era of large surveys and massive data sets, autoclassification of astrophysical sources using intelligent algorithms is becoming increasingly important. In this paper we present the catalog of variable sources in the Third XMM-Newton Serendipitous Source catalog (3XMM) autoclassified using the Random Forest machine learning algorithm. We used a sample of manually classified variable sources from the second data release of the XMM-Newton catalogs (2XMMi-DR2) to train the classifier, obtaining an accuracy of ∼92%. We also evaluated the effectiveness of identifying spurious detections using a sample of spurious sources, achieving an accuracy of ∼95%. Manual investigation of a random sample of classified sources confirmed these accuracy levels and showed that the Random Forest machine learning algorithm is highly effective at automatically classifying 3XMM sources. Here we present the catalog of classified 3XMM variable sources. We also present three previously unidentified unusual sources that were flagged as outlier sources by the algorithm: a new candidate supergiant fast X-ray transient, a 400 s X-ray pulsar, and an eclipsing 5 hr binary system coincident with a known Cepheid.
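The spurious-detection classification step can be sketched with scikit-learn. The features and labels below are synthetic stand-ins (the actual 3XMM feature set and training sample differ); the sketch only shows the train/holdout-accuracy workflow the paper describes.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic per-source features (e.g. variability amplitude, hardness
# ratio, flux); real detections and spurious ones drawn from shifted
# distributions so the two classes are separable but overlapping.
n = 1000
X_real = rng.normal(loc=0.0, scale=1.0, size=(n, 3))
X_spur = rng.normal(loc=2.0, scale=1.0, size=(n, 3))
X = np.vstack([X_real, X_spur])
y = np.array([0] * n + [1] * n)   # 1 = spurious detection

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))  # holdout accuracy
```

On real catalogs, the trained forest's per-class probabilities can also flag outliers, which is how the unusual sources above were surfaced.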
Energy Models for One-Carrier Transport in Semiconductor Devices
NASA Technical Reports Server (NTRS)
Jerome, Joseph W.; Shu, Chi-Wang
1991-01-01
Moment models of carrier transport, derived from the Boltzmann equation, made possible the simulation of certain key effects through such realistic assumptions as energy dependent mobility functions. This type of global dependence permits the observation of velocity overshoot in the vicinity of device junctions, not discerned via classical drift-diffusion models, which are primarily local in nature. It was found that a critical role is played in the hydrodynamic model by the heat conduction term. When ignored, the overshoot is inappropriately damped. When the standard choice of the Wiedemann-Franz law is made for the conductivity, spurious overshoot is observed. Agreement with Monte-Carlo simulation in this regime required empirical modification of this law, or nonstandard choices. Simulations of the hydrodynamic model in one and two dimensions, as well as simulations of a newly developed energy model, the RT model, are presented. The RT model, intermediate between the hydrodynamic and drift-diffusion model, was developed to eliminate the parabolic energy band and Maxwellian distribution assumptions, and to reduce the spurious overshoot with physically consistent assumptions. The algorithms employed for both models are the essentially non-oscillatory shock capturing algorithms. Some mathematical results are presented and contrasted with the highly developed state of the drift-diffusion model.
Bagarinao, Epifanio; Tsuzuki, Erina; Yoshida, Yukina; Ozawa, Yohei; Kuzuya, Maki; Otani, Takashi; Koyama, Shuji; Isoda, Haruo; Watanabe, Hirohisa; Maesawa, Satoshi; Naganawa, Shinji; Sobue, Gen
2018-01-01
The stability of the MRI scanner throughout a given study is critical in minimizing hardware-induced variability in the acquired imaging data set. However, MRI scanners do malfunction at times, which could generate image artifacts and would require the replacement of a major component such as its gradient coil. In this article, we examined the effect of low intensity, randomly occurring hardware-related noise due to a faulty gradient coil on brain morphometric measures derived from T1-weighted images and resting state networks (RSNs) constructed from resting state functional MRI. We also introduced a method to detect and minimize the effect of the noise associated with a faulty gradient coil. Finally, we assessed the reproducibility of these morphometric measures and RSNs before and after gradient coil replacement. Our results showed that gradient coil noise, even at relatively low intensities, could introduce a large number of voxels exhibiting spurious significant connectivity changes in several RSNs. However, censoring the affected volumes during the analysis could minimize, if not completely eliminate, these spurious connectivity changes and could lead to reproducible RSNs even after gradient coil replacement. PMID:29725294
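The volume-censoring idea can be sketched with a simple outlier rule: flag time points whose frame-to-frame global signal change is anomalously large, then exclude them before connectivity analysis. The threshold scheme below is an illustrative stand-in, not the paper's actual detection method.

```python
import numpy as np

def censor_volumes(ts, z_thresh=4.0):
    """Return a boolean keep-mask over volumes (rows of ts, shape
    [time, voxels]). Volumes whose frame-to-frame global-signal change
    is a z-score outlier (a crude proxy for random gradient-coil
    spikes) are flagged for censoring. Illustrative rule only."""
    g = ts.mean(axis=1)                       # global signal per volume
    d = np.abs(np.diff(g, prepend=g[0]))      # frame-to-frame change
    z = (d - d.mean()) / (d.std() + 1e-12)
    return z < z_thresh

rng = np.random.default_rng(1)
T, V = 200, 50
ts = rng.normal(size=(T, V))
ts[60] += 15.0    # simulated hardware spikes corrupting single volumes
ts[130] += 12.0

keep = censor_volumes(ts)
clean = ts[keep]  # censored time series used for connectivity analysis
```

Note that a frame-difference rule flags both the spike volume and the frame after it (the signal jumps up and then back down), which is why censoring typically removes a small neighborhood around each artifact.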
Some conservation issues for the dynamical cores of NWP and climate models
NASA Astrophysics Data System (ADS)
Thuburn, J.
2008-03-01
The rationale for designing atmospheric numerical model dynamical cores with certain conservation properties is reviewed. The conceptual difficulties associated with the multiscale nature of realistic atmospheric flow, and its lack of time-reversibility, are highlighted. A distinction is made between robust invariants, which are conserved or nearly conserved in the adiabatic and frictionless limit, and non-robust invariants, which are not conserved in the limit even though they are conserved by exactly adiabatic frictionless flow. For non-robust invariants, a further distinction is made between processes that directly transfer some quantity from large to small scales, and processes involving a cascade through a continuous range of scales; such cascades may either be explicitly parameterized, or handled implicitly by the dynamical core numerics, accepting the implied non-conservation. An attempt is made to estimate the relative importance of different conservation laws. It is argued that satisfactory model performance requires spurious sources of a conservable quantity to be much smaller than any true physical sources; for several conservable quantities the magnitudes of the physical sources are estimated in order to provide benchmarks against which any spurious sources may be measured.
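The idea of measuring a scheme's spurious source against a physical benchmark can be made concrete in one dimension. The sketch below (a minimal illustration, not any particular dynamical core) advects a tracer with first-order upwinding in both flux form and advective form on a periodic grid: the flux form conserves total mass to machine precision, while the advective form exhibits a spurious mass source whenever the velocity varies in space.

```python
import numpy as np

def step_flux_form(q, u, dt, dx):
    """Conservative (flux-form) upwind step, periodic BCs, u > 0.
    The flux differences telescope, so sum(q)*dx is conserved exactly."""
    F = u * q
    return q - dt / dx * (F - np.roll(F, 1))

def step_advective_form(q, u, dt, dx):
    """Non-conservative (advective-form) upwind step: q_t + u q_x = 0.
    With spatially varying u this introduces a spurious mass source."""
    return q - dt / dx * u * (q - np.roll(q, 1))

N = 100
x = np.linspace(0.0, 1.0, N, endpoint=False)
dx, dt = 1.0 / N, 0.002                  # max CFL = 1.5*dt/dx = 0.3
u = 1.0 + 0.5 * np.sin(2 * np.pi * x)    # spatially varying velocity
q = np.exp(-100 * (x - 0.5) ** 2)        # initial tracer blob
m0 = q.sum() * dx                        # initial total mass

qf, qa = q.copy(), q.copy()
for _ in range(200):
    qf = step_flux_form(qf, u, dt, dx)
    qa = step_advective_form(qa, u, dt, dx)

err_flux = abs(qf.sum() * dx - m0)  # spurious mass source, flux form
err_adv = abs(qa.sum() * dx - m0)   # spurious mass source, advective form
```

Comparing err_adv with a physically motivated benchmark source, as the paper advocates, is what distinguishes an acceptable spurious source from a damaging one.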