High-Precision Half-Life Measurement for the Superallowed β+ Emitter 22Mg
NASA Astrophysics Data System (ADS)
Dunlop, Michelle
2017-09-01
High-precision measurements of the Ft values for superallowed Fermi beta transitions between 0+ isobaric analogue states allow for stringent tests of the electroweak interaction. These transitions provide an experimental probe of the Conserved-Vector-Current hypothesis, the most precise determination of the up-down element of the Cabibbo-Kobayashi-Maskawa matrix, and set stringent limits on the existence of scalar currents in the weak interaction. To calculate the Ft values, several theoretical corrections must be applied to the experimental data, some of which have large model-dependent variations. Precise experimental determinations of the ft values can be used to help constrain the different models. The uncertainty in the 22Mg superallowed Ft value is dominated by the uncertainty in the experimental ft value. The adopted half-life of 22Mg is determined from two measurements which disagree with one another, resulting in the inflation of the weighted-average half-life uncertainty by a factor of 2. The 22Mg half-life was measured with a precision of 0.02% via direct β counting at TRIUMF's ISAC facility, improving the precision of the world-average half-life by more than a factor of 3.
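The uncertainty inflation mentioned here follows the usual convention for discrepant data: the weighted-average uncertainty is scaled by sqrt(χ²/ν) when the input measurements disagree. A minimal Python sketch of that procedure, using made-up half-life numbers rather than the actual 22Mg data:

```python
import numpy as np

def weighted_average(values, errors):
    """Inverse-variance weighted mean with scale-factor inflation.

    If the inputs are mutually inconsistent (chi^2/nu > 1), the uncertainty
    of the mean is inflated by sqrt(chi^2/nu), the convention referred to
    for the discrepant half-life measurements in the abstract above.
    """
    values = np.asarray(values, dtype=float)
    errors = np.asarray(errors, dtype=float)
    w = 1.0 / errors**2
    mean = np.sum(w * values) / np.sum(w)
    err = np.sqrt(1.0 / np.sum(w))
    nu = len(values) - 1
    chi2 = np.sum(w * (values - mean) ** 2)
    scale = np.sqrt(chi2 / nu) if nu > 0 and chi2 > nu else 1.0
    return mean, err * scale, scale

# Illustrative numbers only (two discrepant measurements, not real data):
t_half, sigma, s = weighted_average([3875.5, 3873.0], [1.2, 1.0])
print(f"t1/2 = {t_half:.1f} +/- {sigma:.1f} ms (scale factor {s:.2f})")
```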
Precision experiments on mirror transitions at Notre Dame
NASA Astrophysics Data System (ADS)
Brodeur, Maxime; TwinSol Collaboration
2016-09-01
Thanks to extensive experimental efforts that led to a precise determination of important experimental quantities of superallowed pure Fermi transitions, we now have a very precise value for Vud that leads to a stringent test of the CKM matrix unitarity. Despite this achievement, measurements in other systems remain relevant, as conflicting results could uncover unknown systematic effects or even new physics. One such system is the superallowed mixed mirror transition, which can help refine the theoretical corrections used for pure Fermi transitions and improve the accuracy of Vud. However, as a corrected Ft-value determination from these systems requires the more challenging determination of the Fermi to Gamow-Teller mixing ratio, only five transitions, spanning from 19Ne to 37Ar, are currently fully characterized. To rectify the situation, a program of precision experiments on mirror transitions, including precision half-life measurements and, in the future, determinations of the Fermi to Gamow-Teller mixing ratio, has started at the University of Notre Dame. This work is supported in part by the National Science Foundation.
USDA-ARS's Scientific Manuscript database
Controlling for spatial variability is important in high-throughput phenotyping studies that enable large numbers of genotypes to be evaluated across time and space. In the current study, we compared the efficacy of different experimental designs and spatial models in the analysis of canopy spectral...
Superallowed Fermi β-Decay Studies with SCEPTAR and the 8π Gamma-Ray Spectrometer
NASA Astrophysics Data System (ADS)
Koopmans, K. A.
2005-04-01
The 8π Gamma-Ray Spectrometer, operating at TRIUMF in Vancouver, Canada, is a high-precision instrument for detecting the decay radiations from exotic nuclei. In 2003, a new beta-scintillating array called SCEPTAR was installed within the 8π Spectrometer. With these two systems, precise measurements of half-lives and branching ratios can be made, specifically on certain nuclei which exhibit superallowed Fermi 0+ → 0+ β decay. These data can be used to determine the value of δC, an isospin-symmetry-breaking (Coulomb) correction factor, to good precision. As this correction factor is currently one of the leading sources of error in the unitarity test of the CKM matrix, a precise determination of its value could help to eliminate any possible "trivial" explanation of the seeming departure of current experimental data from Standard Model predictions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Majhi, S.K., E-mail: tpskm@iacs.res.in; Mukhopadhyay, A., E-mail: aditi_mukhopadhyay@baylor.edu; Ward, B.F.L., E-mail: bfl_ward@baylor.edu
2014-11-15
We present a phenomenological study of the current status of the application of our approach of exact amplitude-based resummation in quantum field theory to precision QCD calculations, by realistic MC event generator methods, as needed for precision LHC physics. We discuss recent results as they relate to the interplay of the attendant IR-improved DGLAP-CS theory of one of us and the precision of exact NLO matrix-element matched parton shower MCs in the Herwig6.5 environment, as determined by comparison to recent LHC experimental observations on single heavy gauge boson production and decay. The level of agreement between the new theory and the data continues to be a reason for optimism. In the spirit of completeness, we discuss as well other approaches to the same theoretical predictions that we make here from the standpoint of physical precision, with an eye toward the (sub-)1% QCD⊗EW total theoretical precision regime for LHC physics. Highlights: • Using LHC data, we show that IR-improved DGLAP-CS kernels with exact NLO Shower/ME matching improve MC precision. • We discuss other possible approaches in comparison with ours. • We propose experimental tests to discriminate between competing approaches.
A novel comparison of Møller and Compton electron-beam polarimeters
Magee, J. A.; Narayan, A.; Jones, D.; ...
2017-01-19
We have performed a novel comparison between electron-beam polarimeters based on Møller and Compton scattering. A sequence of electron-beam polarization measurements was performed at low beam currents (< 5 μA) during the Q_weak experiment in Hall C at Jefferson Lab. These low-current measurements were bracketed by the regular high-current (180 μA) operation of the Compton polarimeter. All measurements were found to be consistent within experimental uncertainties of 1% or less, demonstrating that electron polarization does not depend significantly on the beam current. This result lends confidence to the common practice of applying Møller measurements made at low beam currents to physics experiments performed at higher beam currents. Here, the agreement between two polarimetry techniques based on independent physical processes sets an important benchmark for future precision asymmetry measurements that require sub-1% precision in polarimetry.
Conceptual and Preliminary Design of a Low-Cost Precision Aerial Delivery System
2016-06-01
test results. It includes an analysis of the failure modes encountered during flight experimentation, methodology used for conducting coordinate...and experimentation. Additionally, the current and desired end state of the research is addressed. Finally, this chapter outlines the methodology...preliminary design phases are utilized to investigate and develop a potentially low-cost alternative to existing systems. Using an Agile methodology
The Too-Much-Precision Effect.
Loschelder, David D; Friese, Malte; Schaerer, Michael; Galinsky, Adam D
2016-12-01
Past research has suggested a fundamental principle of price precision: The more precise an opening price, the more it anchors counteroffers. The present research challenges this principle by demonstrating a too-much-precision effect. Five experiments (involving 1,320 experts and amateurs in real-estate, jewelry, car, and human-resources negotiations) showed that increasing the precision of an opening offer had positive linear effects for amateurs but inverted-U-shaped effects for experts. Anchor precision backfired because experts saw too much precision as reflecting a lack of competence. This negative effect held unless first movers gave rationales that boosted experts' perception of their competence. Statistical mediation and experimental moderation established the critical role of competence attributions. This research disentangles competing theoretical accounts (attribution of competence vs. scale granularity) and qualifies two putative truisms: that anchors affect experts and amateurs equally, and that more precise prices are linearly more potent anchors. The results refine current theoretical understanding of anchoring and have significant implications for everyday life.
NASA Astrophysics Data System (ADS)
Lehner, Christoph
2018-03-01
In this talk I present the current status of a precise first-principles calculation of the quark connected, quark disconnected, and leading QED and strong isospin-breaking contributions to the leading-order hadronic vacuum polarization by the RBC and UKQCD collaborations. The lattice data are also combined with experimental e+e- scattering data; consistency between the two datasets is checked, and a combined result with a smaller error than either the lattice data or the e+e- scattering data individually is presented.
Testing the Standard Model by precision measurement of the weak charges of quarks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ross Young; Roger Carlini; Anthony Thomas
In a global analysis of the latest parity-violating electron scattering measurements on nuclear targets, we demonstrate a significant improvement in the experimental knowledge of the weak neutral-current lepton-quark interactions at low energy. The precision of this new result, combined with earlier atomic parity-violation measurements, limits the magnitude of possible contributions from physics beyond the Standard Model, setting a model-independent lower bound on the scale of new physics at ~1 TeV.
Testing the standard model by precision measurement of the weak charges of quarks.
Young, R D; Carlini, R D; Thomas, A W; Roche, J
2007-09-21
In a global analysis of the latest parity-violating electron scattering measurements on nuclear targets, we demonstrate a significant improvement in the experimental knowledge of the weak neutral-current lepton-quark interactions at low energy. The precision of this new result, combined with earlier atomic parity-violation measurements, places tight constraints on the size of possible contributions from physics beyond the standard model. Consequently, this result improves the lower bound on the scale of relevant new physics to approximately 1 TeV.
High spatial precision nano-imaging of polarization-sensitive plasmonic particles
NASA Astrophysics Data System (ADS)
Liu, Yunbo; Wang, Yipei; Lee, Somin Eunice
2018-02-01
Precise polarimetric imaging of polarization-sensitive nanoparticles is essential for resolving their accurate spatial positions beyond the diffraction limit. However, conventional technologies currently suffer from beam deviation errors which cannot be corrected beyond the diffraction limit. To overcome this issue, we experimentally demonstrate a spatially stable nano-imaging system for polarization-sensitive nanoparticles. In this study, we show that by integrating a voltage-tunable imaging variable polarizer with optical microscopy, we are able to suppress beam deviation errors. We expect that this nano-imaging system should allow for acquisition of accurate positional and polarization information from individual nanoparticles in applications where real-time, high precision spatial information is required.
Hu, Chao; Wang, Qianxin; Wang, Zhongyuan; Hernández Moraleda, Alberto
2018-01-01
Currently, five new-generation BeiDou (BDS-3) experimental satellites are working in orbit and broadcast B1I, B3I, and other new signals. Precise satellite orbit determination of the BDS-3 is essential for the future global services of the BeiDou system. However, BDS-3 experimental satellites are mainly tracked by the international GNSS Monitoring and Assessment Service (iGMAS) network. Under the current constraints of the limited data sources and poor data quality of iGMAS, this study proposes an improved cycle-slip detection and repair algorithm, which is based on a polynomial prediction of ionospheric delays. The improved algorithm takes the correlation of ionospheric delays into consideration to accurately estimate and repair cycle slips in the iGMAS data. Moreover, two methods of BDS-3 experimental satellite orbit determination, namely, normal equation stacking (NES) and step-by-step (SS), are designed to strengthen orbit estimations and to make full use of the BeiDou observations in different tracking networks. In addition, a method to improve computational efficiency based on a matrix eigenvalue decomposition algorithm is derived in the NES. Then, one year of BDS-3 experimental satellite precise orbit determinations was conducted based on the iGMAS and Multi-GNSS Experiment (MGEX) networks. Furthermore, the orbit accuracies were analyzed from the discrepancies of overlapping arcs and satellite laser ranging (SLR) residuals. The results showed that the average three-dimensional root-mean-square errors (3D RMS) of one-day overlapping arcs for BDS-3 experimental satellites (C31, C32, C33, and C34) acquired by NES and SS are 31.0, 36.0, 40.3, and 50.1 cm, and 34.6, 39.4, 43.4, and 55.5 cm, respectively; the RMS of SLR residuals are 55.1, 49.6, 61.5, and 70.9 cm and 60.5, 53.6, 65.8, and 73.9 cm, respectively. Finally, one month of observations was used in four schemes of BDS-3 experimental satellite orbit determination to further investigate the reliability and advantages of the improved methods. It was found that the scheme combining the improved cycle-slip detection and repair algorithm with NES was optimal, improving the accuracy of the BDS-3 experimental satellite orbits by 34.07%, 41.05%, 72.29%, and 74.33%, respectively, compared with the widely used strategy. Therefore, the improved methods for the BDS-3 experimental satellites proposed in this study are very beneficial for the determination of new-generation BeiDou satellite precise orbits. PMID:29724062
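As a side note, the 3D RMS quoted for overlapping arcs is simply the root mean square of the three-dimensional position differences between two consecutive orbit solutions over their common epochs. A small Python sketch (array shapes, epoch count, and noise level are illustrative assumptions, not values from the study):

```python
import numpy as np

def overlap_3d_rms(arc_a, arc_b):
    """3D RMS of position differences over an orbit-overlap interval.

    arc_a, arc_b : (n, 3) arrays of satellite positions [m] at common epochs,
    taken from two consecutive daily solutions.  This is the comparison
    statistic quoted above (here in metres; the abstract reports cm).
    """
    d = np.asarray(arc_a, float) - np.asarray(arc_b, float)
    return np.sqrt(np.mean(np.sum(d**2, axis=1)))

# Toy example: two solutions differing by ~0.3 m of noise at 288 epochs
rng = np.random.default_rng(2)
ref = rng.normal(size=(288, 3)) * 2.0e7
print(overlap_3d_rms(ref, ref + rng.normal(scale=0.3, size=ref.shape)))
```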
g-Factor of heavy ions: a new access to the fine structure constant.
Shabaev, V M; Glazov, D A; Oreshkina, N S; Volotka, A V; Plunien, G; Kluge, H-J; Quint, W
2006-06-30
A possibility for a determination of the fine structure constant in experiments on the bound-electron g-factor is examined. It is found that studying a specific difference of the g-factors of B- and H-like ions of the same spinless isotope in the Pb region to the currently accessible experimental accuracy of 7 × 10⁻¹⁰ would lead to a determination of the fine structure constant to an accuracy which is better than that of the currently accepted value. Further improvements of the experimental and theoretical accuracy could provide a value of the fine structure constant which is several times more precise than the currently accepted one.
Lattice Calculations and the Muon Anomalous Magnetic Moment
NASA Astrophysics Data System (ADS)
Marinković, Marina Krstić
2017-07-01
The anomalous magnetic moment of the muon, a_μ = (g_μ - 2)/2, is one of the most precisely measured quantities in particle physics and provides a stringent test of the Standard Model. The planned improvements of the experimental precision at Fermilab and at J-PARC propel further reduction of the theoretical uncertainty of a_μ. The hope is that the efforts on both sides will help resolve the current discrepancy between the experimental measurement of a_μ and its theoretical prediction, and potentially gain insight into new physics. The dominant sources of uncertainty in the theoretical prediction of a_μ are the errors of the hadronic contributions. I will discuss recent progress on the determination of hadronic contributions to a_μ from lattice calculations.
A precision analogue integrator system for heavy current measurement in MFDC resistance spot welding
NASA Astrophysics Data System (ADS)
Xia, Yu-Jun; Zhang, Zhong-Dian; Xia, Zhen-Xin; Zhu, Shi-Liang; Zhang, Rui
2016-02-01
In order to control and monitor the quality of middle-frequency direct current (MFDC) resistance spot welding (RSW), precision measurement of the welding current up to 100 kA is required, for which Rogowski coils are the only viable current transducers at present. Thus, a highly accurate analogue integrator is the key to restoring the converted signals collected from the Rogowski coils. Previous studies emphasised that integration drift is a major factor that influences the performance of analogue integrators, but capacitive leakage error also has a significant impact on the result, especially in long-time pulse integration. In this article, new methods of measuring and compensating capacitive leakage error are proposed to fabricate a precision analogue integrator system for MFDC RSW. A voltage holding test is carried out to measure the integration error caused by capacitive leakage, and an original integrator with a feedback adder is designed to compensate capacitive leakage error in real time. The experimental results and statistical analysis show that the new analogue integrator system constrains both drift and capacitive leakage error, an effect that is robust across different output-signal voltage levels. The total integration error is limited within ±0.09 mV s⁻¹, or 0.005% s⁻¹ of full scale, at a 95% confidence level, which makes it possible to achieve precision measurement of the welding current of MFDC RSW with Rogowski coils of the 0.1% accuracy class.
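For context, a Rogowski coil outputs v(t) = M·dI/dt, so the welding current is recovered by integrating v/M; drift appears when any offset is integrated along with the signal. The sketch below is a rough digital counterpart of that reconstruction, with pre-pulse baseline subtraction standing in for the drift compensation; all values, the mutual inductance, and the synthetic pulse are illustrative assumptions, not the authors' circuit:

```python
import numpy as np

def reconstruct_current(v, fs, mutual_inductance, n_baseline):
    """Recover I(t) from a Rogowski-coil voltage v(t) = M * dI/dt.

    The offset estimated from the first n_baseline (pre-pulse) samples is
    subtracted so it does not integrate into a drift, then the signal is
    integrated and scaled by the mutual inductance M.  Sketch only.
    """
    v = np.asarray(v, dtype=float)
    v = v - v[:n_baseline].mean()                    # offset/drift compensation
    return np.cumsum(v) / (fs * mutual_inductance)   # I(t) in amperes

# Synthetic check: a 100 kA current pulse seen through an assumed M = 0.1 uH coil
fs = 1.0e6                                           # sampling rate [Hz] (assumed)
t = np.arange(0.0, 0.2, 1.0 / fs)
i_true = 100e3 * np.sin(2 * np.pi * 5 * (t - 0.02)) * (t > 0.02)
M = 1.0e-7                                           # mutual inductance [H] (assumed)
v_coil = M * np.gradient(i_true, t) + 1e-5           # small offset mimics drift
i_rec = reconstruct_current(v_coil, fs, M, n_baseline=int(0.02 * fs))
```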
The effect of conductor permeability on electric current transducers
NASA Astrophysics Data System (ADS)
Mirzaei, M.; Ripka, P.; Chirtsov, A.; Kaspar, P.; Vyhnanek, J.
2018-04-01
In this paper, experimental work and theoretical analysis are presented to analyze the influence of the conductor permeability on the precision of yokeless current sensors. The results of the finite-element method (FEM) fit the measured field values around the conductor well. Finally, we evaluate the difference in the magnetic field distribution around non-magnetic and magnetic conductors. The calculated values show that the permeability of the ferromagnetic conductor significantly affects the reading of electric current sensors, even at DC.
Detection of non-Gaussian fluctuations in a quantum point contact.
Gershon, G; Bomze, Yu; Sukhorukov, E V; Reznikov, M
2008-07-04
An experimental study of current fluctuations through a tunable transmission barrier, a quantum point contact, is reported. We measure the probability distribution function of transmitted charge with precision sufficient to extract the first three cumulants. To obtain the intrinsic quantities, corresponding to voltage-biased barrier, we employ a procedure that accounts for the response of the external circuit and the amplifier. The third cumulant, obtained with a high precision, is found to agree with the prediction for the statistics of transport in the non-Poissonian regime.
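The first three cumulants correspond to the mean, the variance, and the third central moment of the transmitted-charge distribution. A small Python sketch of extracting them from a sample of charge counts; the Poisson check at the end is purely illustrative, not the experiment's data:

```python
import numpy as np

def first_three_cumulants(q):
    """First three cumulants of a sample of transmitted charges.

    For any distribution, c1 is the mean, c2 the variance, and c3 the
    third central moment, i.e. the quantities the measurement above
    extracts from the charge-counting statistics.
    """
    q = np.asarray(q, dtype=float)
    c1 = q.mean()
    d = q - c1
    return c1, np.mean(d**2), np.mean(d**3)

# Illustrative check with Poisson statistics, where c1 = c2 = c3:
rng = np.random.default_rng(0)
print(first_three_cumulants(rng.poisson(lam=50.0, size=1_000_000)))
```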
What can we learn from noise? — Mesoscopic nonequilibrium statistical physics —
KOBAYASHI, Kensuke
2016-01-01
Mesoscopic systems — small electric circuits working in the quantum regime — offer us a unique experimental stage to explore quantum transport in a tunable and precise way. The purpose of this Review is to show how they can contribute to statistical physics. We introduce the significance of fluctuation, or equivalently noise, as noise measurement enables us to address the fundamental aspects of a physical system. The significance of the fluctuation theorem (FT) in statistical physics is noted. We explain what information can be deduced from current noise measurements in mesoscopic systems. As an important application of noise measurement to statistical physics, we describe our experimental work on the current and current noise in an electron interferometer, which is the first experimental test of the FT in the quantum regime. Our attempt will shed new light on the research field of mesoscopic quantum statistical physics. PMID:27477456
Progress Towards a High-Precision Infrared Spectroscopic Survey of the H_3^+ Ion
NASA Astrophysics Data System (ADS)
Perry, Adam J.; Hodges, James N.; Markus, Charles R.; Kocheril, G. Stephen; Jenkins, Paul A., II; McCall, Benjamin J.
2015-06-01
The trihydrogen cation, H_3^+, represents one of the most important and fundamental molecular systems. Having only two electrons and three nuclei, H_3^+ is the simplest polyatomic system and is a key testing ground for the development of new techniques for calculating potential energy surfaces and predicting molecular spectra. Corrections that go beyond the Born-Oppenheimer approximation, including adiabatic, non-adiabatic, relativistic, and quantum electrodynamic corrections are becoming more feasible to calculate. As a result, experimental measurements performed on the H_3^+ ion serve as important benchmarks which are used to test the predictive power of new computational methods. By measuring many infrared transitions with precision at the sub-MHz level it is possible to construct a list of the most highly precise experimental rovibrational energy levels for this molecule. Until recently, only a select handful of infrared transitions of this molecule have been measured with high precision (˜ 1 MHz). Using the technique of Noise Immune Cavity Enhanced Optical Heterodyne Velocity Modulation Spectroscopy, we are aiming to produce the largest high-precision spectroscopic dataset for this molecule to date. Presented here are the current results from our survey along with a discussion of the combination differences analysis used to extract the experimentally determined rovibrational energy levels. O. Polyansky, et al., Phil. Trans. R. Soc. A (2012), 370, 5014. M. Pavanello, et al., J. Chem. Phys. (2012), 136, 184303. L. Diniz, et al., Phys. Rev. A (2013), 88, 032506. L. Lodi, et al., Phys. Rev. A (2014), 89, 032505. J. Hodges, et al., J. Chem. Phys (2013), 139, 164201.
Schmidt, Susanne; Seiberl, Wolfgang; Schwirtz, Ansgar
2015-01-01
Ergonomic design requirements are needed to develop optimum vehicle interfaces for the driver. The majority of the current specifications consider only anthropometric conditions and subjective evaluations of comfort. This paper examines specific biomechanical aspects to improve the current ergonomic requirements. Therefore, a study involving 40 subjects was carried out to gain more knowledge about steering movements while driving a car. Five different shoulder-elbow joint configurations were analyzed using a driving simulator to find the optimum driving posture with respect to steering precision and steering velocity. To this end, a 20 s precision test and a test assessing maximum steering velocity over a range of 90° of steering motion were conducted. The results show that driving precision, as well as maximum steering velocity, is significantly increased in mid-positions (elbow angles of 95° and 120°) compared to more flexed (70°) or extended (145° and 160°) postures. We conclude that driver safety can be enhanced by implementing these data in the automotive design process because faster and highly precise steering can be important during evasive actions and in accident situations. In addition, subjective comfort ratings, analyzed with questionnaires, confirmed the experimental results. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.
HTS flux concentrator for non-invasive sensing of charged particle beams
NASA Astrophysics Data System (ADS)
Hao, L.; Gallop, J. C.; Macfarlane, J. C.; Carr, C.; Donaldson, G. B.
2001-12-01
The principle of the superconducting cryogenic current comparator (CCC) is applied to the non-invasive sensing of charged-particle beams (ions, electrons). With the use of HTS components it is feasible to envisage applications, for example, in precision mass spectrometry and real-time monitoring of ion-beam implantation currents. Recent simulations and experimental measurements of the flux concentration ratio, frequency response and linearity of a prototype HTS-CCC operating at 77 K are described.
HTS cryogenic current comparator for non-invasive sensing of charged-particle beams
NASA Astrophysics Data System (ADS)
Hao, L.; Gallop, J. C.; Macfarlane, J. C.; Carr, C.
2002-03-01
The principle of the superconducting cryogenic direct-current comparator (CCC) is applied to the non-invasive sensing of charged-particle beams (ions, electrons). With the use of HTS components it is feasible to envisage applications, for example, in precision mass spectrometry, in real-time monitoring of ion-beam implantation currents and for the determination of the Faraday fundamental constant. We have developed a novel current concentrating technique using HTS thick-film material, to increase the sensitivity of the CCC. Recent simulations and experimental measurements of the flux and current concentration ratios, frequency response and linearity of a prototype HTS-CCC operating at 77 K are described.
Recent progress of laser spectroscopy experiments on antiprotonic helium
NASA Astrophysics Data System (ADS)
Hori, Masaki
2018-03-01
The Atomic Spectroscopy and Collisions Using Slow Antiprotons (ASACUSA) collaboration is currently carrying out laser spectroscopy experiments on antiprotonic helium (p̄He+) atoms at CERN's Antiproton Decelerator facility. Two-photon spectroscopic techniques have been employed to reduce the Doppler width of the measured resonance lines and determine the atomic transition frequencies to a fractional precision of 2.3 to 5 parts in 10⁹. More recently, single-photon spectroscopy of buffer-gas-cooled atoms has reached a similar precision. By comparing the results with three-body quantum electrodynamics calculations, the antiproton-to-electron mass ratio was determined; it agrees with the known proton-to-electron mass ratio to a precision of 8×10⁻¹⁰. The high-quality antiproton beam provided by the future Extra Low Energy Antiproton Ring (ELENA) facility should enable further improvements in the experimental precision. This article is part of the Theo Murphy meeting issue 'Antiproton physics in the ELENA era'.
Developing the Precision Magnetic Field for the E989 Muon g-2 Experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Matthias W.
The experimental value of (g-2)_μ historically has been and contemporarily remains an important probe into the Standard Model and proposed extensions. Previous measurements of (g-2)_μ exhibit a persistent statistical tension with calculations using the Standard Model, implying that the theory may be incomplete and constraining possible extensions. The Fermilab Muon g-2 experiment, E989, endeavors to increase the precision over previous experiments by a factor of four and probe more deeply into the tension with the Standard Model. The (g-2)_μ experimental implementation measures two spin precession frequencies defined by the magnetic field: proton precession and muon precession. The value of (g-2)_μ is derived from a relationship between the two frequencies. The precision of the magnetic field measurements and the overall magnetic field uniformity achieved over the muon storage volume are thus two undeniably important aspects of the experiment in minimizing uncertainty. The current thesis details the methods employed to achieve the magnetic field goals and the results of that effort.
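One way to see why the field matters so much: in the simplified relation used in earlier storage-ring analyses, a_μ follows from the ratio of the two precession frequencies as a_μ = R/(λ - R), with R = ω_a/ω_p and λ = μ_μ/μ_p. The Python sketch below uses toy frequency numbers, not measured values, and omits the additional corrections applied in the E989 analysis:

```python
def a_mu_from_frequencies(omega_a, omega_p, mu_mu_over_mu_p=3.183345142):
    """Classic combination of the two measured precession frequencies:
    a_mu = R / (lambda - R), R = omega_a / omega_p, lambda = mu_mu / mu_p.

    Simplified relation from earlier storage-ring analyses; treat it only
    as an illustration of how field (proton) and muon precession enter.
    Frequency units cancel in the ratio.
    """
    r = omega_a / omega_p
    return r / (mu_mu_over_mu_p - r)

# Toy numbers with roughly the right ratio (not measurement values):
print(a_mu_from_frequencies(omega_a=229_077.0, omega_p=61_790_000.0))
```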
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perin, A.; Casas-Cubillos, J.; Pezzetti, M.
2014-01-29
The 600 A and 120 A circuits of the inner triplet magnets of the Large Hadron Collider are powered by resistive gas-cooled current leads. The current solution for controlling the gas flow of these leads has shown severe operability limitations. In order to allow a more precise and more reliable control of the cooling gas flow, new flowmeters will be installed during the first long shutdown of the LHC. Because of the high level of radiation in the area next to the current leads, the flowmeters will be installed in shielded areas located up to 50 m away from the current leads. With the control valves located next to the current leads, this configuration leads to long piping between the valves and the flowmeters. In order to determine its dynamic behaviour, the proposed system was simulated with a numerical model and validated with experimental measurements performed on a dedicated test bench.
MEASUREMENT AND PRECISION, EXPERIMENTAL VERSION.
ERIC Educational Resources Information Center
Harvard Univ., Cambridge, MA. Harvard Project Physics.
THIS DOCUMENT IS AN EXPERIMENTAL VERSION OF A PROGRAMED TEXT ON MEASUREMENT AND PRECISION. PART I CONTAINS 24 FRAMES DEALING WITH PRECISION AND SIGNIFICANT FIGURES ENCOUNTERED IN VARIOUS MATHEMATICAL COMPUTATIONS AND MEASUREMENTS. PART II BEGINS WITH A BRIEF SECTION ON EXPERIMENTAL DATA, COVERING SUCH POINTS AS (1) ESTABLISHING THE ZERO POINT, (2)…
Cosmic ray astroparticle physics: current status and future perspectives
NASA Astrophysics Data System (ADS)
Donato, Fiorenza
2017-02-01
The data we are receiving from galactic cosmic rays are reaching an unprecedented precision, over very wide energy ranges. Nevertheless, many problems are still open, while new ones seem to appear when data happen to be redundant. We will discuss some paths to possible progress in the theoretical modeling and experimental exploration of the galactic cosmic radiation.
Waves plus currents at a right angle: The rippled bed case
NASA Astrophysics Data System (ADS)
Faraci, C.; Foti, E.; Musumeci, R. E.
2008-07-01
The present paper deals with wave-plus-current flow over a fixed rippled bed. More precisely, modifications of the current profiles due to the superimposition of orthogonal cylindrical waves have been investigated experimentally. Since the experimental setup permitted only the wave-dominated regime to be investigated (i.e., the regime where the orbital velocity is larger than the current velocity), a numerical k-ε turbulence closure model has also been developed in order to study a wider range of parameters, including the current-dominated regime (i.e., where the current velocity is larger than the wave orbital one). In both cases a different response with respect to the flat-bed case has been found. Indeed, in the flat-bed case laminar wave boundary layers in a wave-dominated regime induce a decrease in bottom shear stresses, while a rippled bed behaves as a macroroughness, which causes the wave boundary layer to become turbulent and therefore the current velocity near the bottom to be smaller than in the case of current only, with a consequent increase in the current bottom roughness.
Current and Future Research at DANCE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jandel, M.; Baramsai, B.; Bredeweg, T. A.
2015-05-28
An overview of the current experimental program on measurements of neutron capture and neutron-induced fission at the Detector for Advanced Neutron Capture Experiments (DANCE) is presented. Three major projects are currently under way: 1) high-precision measurements of neutron capture cross sections on uranium isotopes, 2) research aimed at studies of short-lived actinide isomer production in neutron capture on 235U, and 3) measurements of correlated data of fission observables. New projects include developments of auxiliary detectors to improve the capability of DANCE. We are building a compact, segmented NEUtron detector Array at DANCE (NEUANCE), which will be installed in the central cavity of the DANCE array. It will thus provide experimental information on prompt fission neutrons in coincidence with the prompt fission gamma rays measured by the 160 BaF₂ crystals of DANCE. Additionally, unique correlated data will be obtained for neutron capture and neutron-induced fission using the DANCE-NEUANCE experimental setup in the future.
Precision measurements with LPCTrap at GANIL
NASA Astrophysics Data System (ADS)
Liénard, E.; Ban, G.; Couratin, C.; Delahaye, P.; Durand, D.; Fabian, X.; Fabre, B.; Fléchard, X.; Finlay, P.; Mauger, F.; Méry, A.; Naviliat-Cuncic, O.; Pons, B.; Porobic, T.; Quéméner, G.; Severijns, N.; Thomas, J. C.; Velten, Ph.
2015-11-01
The experimental achievements and the results obtained so far with the LPCTrap device installed at GANIL are presented. The apparatus is dedicated to the study of the weak interaction at low energy by means of precise measurements of the β-ν angular correlation parameter in nuclear β decays. So far, the data collected with three isotopes have made it possible to determine, for the first time, the charge-state distributions of the recoiling ions induced by the shakeoff process. The analysis is presently being refined to deduce the correlation parameters, with the potential of improving both the constraint deduced at low energy on exotic tensor currents (6He1+) and the precision on the V_ud element of the quark-mixing matrix (35Ar1+ and 19Ne1+) deduced from the mirror-transition dataset.
A thermodynamic and theoretical view for enzyme regulation.
Zhao, Qinyi
2015-01-01
Precise regulation is fundamental to the proper functioning of enzymes in a cell. Current opinions about this, such as allosteric regulation and dynamic contribution to enzyme regulation, are experimental models and substantially empirical. Here we proposed a theoretical and thermodynamic model of enzyme regulation. The main idea is that enzyme regulation is processed via the regulation of abundance of active conformation in the reaction buffer. The theoretical foundation, experimental evidence, and experimental criteria to test our model are discussed and reviewed. We conclude that basic principles of enzyme regulation are laws of protein thermodynamics and it can be analyzed using the concept of distribution curve of active conformations of enzymes.
Theory, simulation and experiments for precise deflection control of radiotherapy electron beams.
Figueroa, R; Leiva, J; Moncada, R; Rojas, L; Santibáñez, M; Valente, M; Velásquez, J; Young, H; Zelada, G; Yáñez, R; Guillen, Y
2018-03-08
Conventional radiotherapy is mainly applied by linear accelerators. Although linear accelerators provide dual (electron/photon) radiation beam modalities, both of them are intrinsically produced by a megavoltage electron current. Modern radiotherapy treatment techniques are based on suitable devices inserted or attached to conventional linear accelerators. Thus, precise control of the delivered beam becomes a main key issue. This work presents an integral description of the electron beam deflection control required for a novel radiotherapy technique based on convergent photon beam production. Theoretical and Monte Carlo approaches were initially used for designing and optimizing the device's components. Then, dedicated instrumentation was developed for experimental verification of the electron beam deflection due to the designed magnets. Both Monte Carlo simulations and experimental results support the reliability of the electrodynamics models used to predict megavoltage electron beam control. Copyright © 2018 Elsevier Ltd. All rights reserved.
Factors controlling precision and accuracy in isotope-ratio-monitoring mass spectrometry
NASA Technical Reports Server (NTRS)
Merritt, D. A.; Hayes, J. M.
1994-01-01
The performance of systems in which picomole quantities of sample are mixed with a carrier gas and passed through an isotope-ratio mass spectrometer system was examined experimentally and theoretically. Two different mass spectrometers were used, both having electron-impact ion sources and Faraday cup collector systems. One had an accelerating potential of 10 kV and accepted 0.2 mL of He/min, producing, under those conditions, a maximum efficiency of 1 CO2 molecular ion collected per 700 molecules introduced. Comparable figures for the second instrument were 3 kV, 0.5 mL of He/min, and 14000 molecules/ion. Signal pathways were adjusted so that response times were <200 ms. Sample-related ion currents appeared as peaks with widths of 3-30 s. Isotope ratios were determined by comparison to signals produced by standard gases. In spite of rapid variations in signals, observed levels of performance were within a factor of 2 of shot-noise limits. For the 10-kV instrument, sample requirements for standard deviations of 0.1 and 0.5% were 45 and 1.7 pmol, respectively. Comparable requirements for the 3-kV instrument were 900 and 36 pmol. Drifts in instrumental characteristics were adequately neutralized when standards were observed at 20-min intervals. For the 10-kV instrument, computed isotopic compositions were independent of sample size and signal strength over the ranges examined. Nonlinearities of <0.04%/V were observed for the 3-kV system. Procedures for observation and subtraction of background ion currents were examined experimentally and theoretically. For sample/background ratios varying from >10 to 0.3, precision is expected and observed to decrease approximately 2-fold and to depend only weakly on the precision with which background ion currents have been measured.
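The shot-noise limit referred to above follows from ion-counting statistics. The Python estimate below is a rough sketch that assumes CO2 from carbon of natural 13C abundance, that the rare-isotope beam dominates the counting error, that sample and standard contribute equally, and that the quoted precisions are in per mil, as is conventional for δ-values; the efficiency figure of 700 molecules per ion is taken from the text:

```python
import numpy as np

def shot_noise_sigma_delta(pmol_sample, molecules_per_ion, minor_abundance=0.011):
    """Shot-noise-limited precision (1 s.d., per mil) of an isotope-ratio
    measurement, assuming counting of the rare-isotope (13C) beam dominates
    and that sample and standard contribute equal counting variance.

    pmol_sample        amount of analyte introduced [pmol]
    molecules_per_ion  source/transmission efficiency (e.g. 700, as quoted
                       above for the 10 kV instrument)
    minor_abundance    fractional abundance of the rare isotope (assumed)
    """
    n_total = pmol_sample * 1e-12 * 6.022e23 / molecules_per_ion  # ions collected
    n_minor = n_total * minor_abundance
    n_major = n_total - n_minor
    rel_var_ratio = 1.0 / n_minor + 1.0 / n_major
    # sample vs. standard comparison doubles the variance of the ratio
    return 1e3 * np.sqrt(2.0 * rel_var_ratio)

print(shot_noise_sigma_delta(45, 700))   # ~0.07 per mil, within a factor of 2 of 0.1
```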
Neutrino Oscillations with the MINOS, MINOS+, T2K, and NOvA Experiments
Nakaya, Tsuyoshi; Plunkett, Robert K.
2016-01-18
Our paper discusses results and near-term prospects of the long-baseline neutrino experiments MINOS, MINOS+, T2K, and NOvA. The non-zero value of the third neutrino mixing angle θ13 allows experimental analysis in a manner which explicitly exhibits appearance and disappearance dependencies on additional parameters associated with the mass hierarchy, CP violation, and any non-maximal θ23. Our current and near-future experiments begin the era of precision accelerator long-baseline measurements and lay the framework within which future experimental results will be interpreted.
Demonstrations of Magnetic Phenomena: Measuring the Air Permeability Using Tablets
ERIC Educational Resources Information Center
Lara, V. O. M.; Amaral, D. F.; Faria, D.; Vieira, L. P.
2014-01-01
We use a tablet to experimentally determine the dependencies of the magnetic field (B) on the electrical current and the axial distance from a coil (z). Our data show good precision on the inverse cubic dependence of the magnetic field on the axial distance, B ∝ z^-3. We obtain the value of the air permeability µ_air with good…
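On the coil axis, far from a small coil, B(z) ≈ µ N I A / (2π z³), so a log-log straight-line fit to magnetometer readings returns both the exponent (expected -3) and, from the intercept, the permeability of air. A Python sketch with synthetic stand-in numbers (coil parameters and noise are assumptions, not the experiment's data):

```python
import numpy as np

# Far-field, on-axis model: B(z) ~ mu * N * I * A / (2 * pi * z**3)
N_turns, I, A = 100, 0.5, 1e-3        # turns, current [A], coil area [m^2] (assumed)
z = np.linspace(0.05, 0.20, 12)       # axial distances [m]
mu0 = 4e-7 * np.pi
B = mu0 * N_turns * I * A / (2 * np.pi * z**3)
B_meas = B * (1 + 0.01 * np.random.default_rng(1).standard_normal(z.size))

# Log-log fit: slope gives the exponent, intercept gives mu_air
slope, intercept = np.polyfit(np.log(z), np.log(B_meas), 1)
mu_air = np.exp(intercept) * 2 * np.pi / (N_turns * I * A)
print(f"fitted exponent = {slope:.2f} (expect -3), mu_air = {mu_air:.3e} H/m")
```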
The physics behind the larger scale organization of DNA in eukaryotes.
Emanuel, Marc; Radja, Nima Hamedani; Henriksson, Andreas; Schiessel, Helmut
2009-07-01
In this paper, we discuss in detail the organization of chromatin during a cell cycle at several levels. We show that current experimental data on large-scale chromatin organization have not yet reached the level of precision to allow for detailed modeling. We speculate in some detail about the possible physics underlying the larger scale chromatin organization.
Improving Ramsey spectroscopy in the extreme-ultraviolet region with a random-sampling approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eramo, R.; Bellini, M.; European Laboratory for Non-linear Spectroscopy
2011-04-15
Ramsey-like techniques, based on the coherent excitation of a sample by delayed and phase-correlated pulses, are promising tools for high-precision spectroscopic tests of QED in the extreme-ultraviolet (xuv) spectral region, but currently suffer experimental limitations related to long acquisition times and critical stability issues. Here we propose a random subsampling approach to Ramsey spectroscopy that, by allowing experimentalists to reach a given spectral resolution goal in a fraction of the usual acquisition time, leads to substantial improvements in high-resolution spectroscopy and may open the way to a widespread application of Ramsey-like techniques to precision measurements in the xuv spectral region.
Experimental considerations for testing antimatter antigravity using positronium 1S-2S spectroscopy
NASA Astrophysics Data System (ADS)
Crivelli, P.; Cooke, D. A.; Friedreich, S.
2014-05-01
In this contribution to the WAG 2013 workshop we report on the status of our measurement of the 1S-2S transition frequency of positronium. The aim of this experiment is to reach a precision of 0.5 ppb in order to cross check the QED calculations. After reviewing the current available sources of Ps, we consider laser cooling as a route to push the precision in the measurement down to 0.1 ppb. If such an uncertainty could be achieved, this would be sensitive to the gravitational redshift and therefore be able to assess the sign of gravity for antimatter.
Electrode Models for Electric Current Computed Tomography
CHENG, KUO-SHENG; ISAACSON, DAVID; NEWELL, J. C.; GISSER, DAVID G.
2016-01-01
This paper develops a mathematical model for the physical properties of electrodes suitable for use in electric current computed tomography (ECCT). The model includes the effects of discretization, shunt, and contact impedance. The complete model was validated by experiment. Bath resistivities of 284.0, 139.7, 62.3, 29.5 Ω · cm were studied. Values of “effective” contact impedance z used in the numerical approximations were 58.0, 35.0, 15.0, and 7.5 Ω · cm2, respectively. Agreement between the calculated and experimentally measured values was excellent throughout the range of bath conductivities studied. It is desirable in electrical impedance imaging systems to model the observed voltages to the same precision as they are measured in order to be able to make the highest resolution reconstructions of the internal conductivity that the measurement precision allows. The complete electrode model, which includes the effects of discretization of the current pattern, the shunt effect due to the highly conductive electrode material, and the effect of an “effective” contact impedance, allows calculation of the voltages due to any current pattern applied to a homogeneous resistivity field. PMID:2777280
Joint research effort on vibrations of twisted plates, phase 1: Final results
NASA Technical Reports Server (NTRS)
Kielb, R. E.; Leissa, A. W.; Macbain, J. C.; Carney, K. S.
1985-01-01
The complete theoretical and experimental results of the first phase of a joint government/industry/university research study on the vibration characteristics of twisted cantilever plates are given. The study is conducted to generate an experimental data base and to compare many different theoretical methods with each other and with the experimental results. Plates with aspect ratios, thickness ratios, and twist angles representative of current gas turbine engine blading are investigated. The theoretical results are generated by numerous finite element, shell, and beam analysis methods. The experimental results are obtained by precision matching a set of twisted plates and testing them at two laboratories. The second and final phase of the study will concern the effects of rotation.
Designing optimal stimuli to control neuronal spike timing
Packer, Adam M.; Yuste, Rafael; Paninski, Liam
2011-01-01
Recent advances in experimental stimulation methods have raised the following important computational question: how can we choose a stimulus that will drive a neuron to output a target spike train with optimal precision, given physiological constraints? Here we adopt an approach based on models that describe how a stimulating agent (such as an injected electrical current or a laser light interacting with caged neurotransmitters or photosensitive ion channels) affects the spiking activity of neurons. Based on these models, we solve the reverse problem of finding the best time-dependent modulation of the input, subject to hardware limitations as well as physiologically inspired safety measures, that causes the neuron to emit a spike train that with highest probability will be close to a target spike train. We adopt fast convex constrained optimization methods to solve this problem. Our methods can potentially be implemented in real time and may also be generalized to the case of many cells, suitable for neural prosthesis applications. With the use of biologically sensible parameters and constraints, our method finds stimulation patterns that generate very precise spike trains in simulated experiments. We also tested the intracellular current injection method on pyramidal cells in mouse cortical slices, quantifying the dependence of spiking reliability and timing precision on constraints imposed on the applied currents. PMID:21511704
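The constrained-optimization idea above can be illustrated with a heavily simplified, linearized stand-in: pick a bounded injected current i(t) so that a linear response model K·i best matches a target response. The exponential kernel, the square-pulse target, and the bound are illustrative assumptions, not the paper's fitted spiking model:

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.optimize import lsq_linear

dt = 1e-3
t = np.arange(0.0, 0.5, dt)
kernel = np.exp(-t / 0.02) * (dt / 0.02)           # causal, membrane-like filter
K = toeplitz(kernel, np.zeros_like(kernel))        # discrete convolution matrix
r_target = ((t > 0.1) & (t < 0.12)).astype(float)  # desired response burst

# Hardware/safety constraint: |i(t)| <= 1 (arbitrary units)
sol = lsq_linear(K, r_target, bounds=(-1.0, 1.0))
i_opt = sol.x                                      # optimal bounded stimulus
```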
CE-BLAST makes it possible to compute antigenic similarity for newly emerging pathogens.
Qiu, Tianyi; Yang, Yiyan; Qiu, Jingxuan; Huang, Yang; Xu, Tianlei; Xiao, Han; Wu, Dingfeng; Zhang, Qingchen; Zhou, Chen; Zhang, Xiaoyan; Tang, Kailin; Xu, Jianqing; Cao, Zhiwei
2018-05-02
Major challenges in vaccine development include rapidly selecting or designing immunogens for raising cross-protective immunity against different intra- or inter-subtypic pathogens, especially for the newly emerging varieties. Here we propose a computational method, Conformational Epitope (CE)-BLAST, for calculating the antigenic similarity among different pathogens with stable and high performance, which is independent of the prior binding-assay information, unlike the currently available models that heavily rely on the historical experimental data. Tool validation incorporates influenza-related experimental data sufficient for stability and reliability determination. Application to dengue-related data demonstrates high harmonization between the computed clusters and the experimental serological data, undetectable by classical grouping. CE-BLAST identifies the potential cross-reactive epitope between the recent zika pathogen and the dengue virus, precisely corroborated by experimental data. The high performance of the pathogens without the experimental binding data suggests the potential utility of CE-BLAST to rapidly design cross-protective vaccines or promptly determine the efficacy of the currently marketed vaccine against emerging pathogens, which are the critical factors for containing emerging disease outbreaks.
Dendritic trafficking faces physiologically critical speed-precision tradeoffs
Williams, Alex H.; O'Donnell, Cian; Sejnowski, Terrence J.; ...
2016-12-30
Nervous system function requires intracellular transport of channels, receptors, mRNAs, and other cargo throughout complex neuronal morphologies. Local signals such as synaptic input can regulate cargo trafficking, motivating the leading conceptual model of neuron-wide transport, sometimes called the 'sushi-belt model'. Current theories and experiments are based on this model, yet its predictions are not rigorously understood. We formalized the sushi-belt model mathematically, and show that it can achieve arbitrarily complex spatial distributions of cargo in reconstructed morphologies. However, the model also predicts an unavoidable, morphology-dependent tradeoff between the speed, precision, and metabolic efficiency of cargo transport. With experimental estimates of trafficking kinetics, the model predicts delays of many hours or days for modestly accurate and efficient cargo delivery throughout a dendritic tree. In conclusion, these findings challenge current understanding of the efficacy of nucleus-to-synapse trafficking and may explain the prevalence of local biosynthesis in neurons.
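A convenient way to see what such a formalization entails is as a linear compartmental (mass-action) system: mobile cargo hops between neighbouring compartments and detaches locally into a delivered pool. The sketch below, with assumed rate constants on an unbranched cable, illustrates the structure only and is not the authors' fitted model:

```python
import numpy as np

def sushi_belt_step(u, delivered, a, b, d, dt):
    """One Euler step of a minimal 'sushi-belt' trafficking model.

    u[i] is mobile cargo in compartment i, hopping forward at rate a and
    backward at rate b, and detaching into the local 'delivered' pool at
    rate d[i].  All rates here are illustrative assumptions.
    """
    du = -d * u
    du[:-1] -= a * u[:-1]; du[1:] += a * u[:-1]   # anterograde hops i -> i+1
    du[1:] -= b * u[1:]; du[:-1] += b * u[1:]     # retrograde hops i -> i-1
    return u + dt * du, delivered + dt * d * u

n = 200                              # compartments along the dendrite
u = np.zeros(n); u[0] = 1.0          # all cargo starts near the soma
delivered = np.zeros(n)
a, b = 1.0, 0.9                      # nearly balanced bidirectional transport
d = np.full(n, 1e-3)                 # slow, uniform detachment (demand)
for _ in range(50_000):
    u, delivered = sushi_belt_step(u, delivered, a, b, d, dt=0.01)
```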
Benschop, R; Draaisma, D
2000-01-01
A prominent feature of late nineteenth-century psychology was its intense preoccupation with precision. Precision was at once an ideal and an argument: the quest for precision helped psychology to establish its status as a mature science, sharing a characteristic concern with the natural sciences. We will analyse how psychologists set out to produce precision in 'mental chronometry', the measurement of the duration of psychological processes. In his Leipzig laboratory, Wundt inaugurated an elaborate research programme on mental chronometry. We will look at the problem of calibration of experimental apparatus and will describe the intricate material, literary, and social technologies involved in the manufacture of precision. First, we shall discuss some of the technical problems involved in the measurement of ever shorter time-spans. Next, the Cattell-Berger experiments will help us to argue against the received view that all the precision went into the hardware, and practically none into the social organization of experimentation. Experimenters made deliberate efforts to bring themselves and their subjects under a regime of control and calibration similar to that which reigned over the experimental machinery. In Leipzig psychology, the particular blend of material and social technology resulted in a specific object of study: the generalized mind. We will then show that the distribution of precision in experimental psychology outside Leipzig demanded a concerted effort of instruments, texts, and people. It will appear that the forceful attempts to produce precision and uniformity had some rather paradoxical consequences.
Teodoro, P E; Torres, F E; Santos, A D; Corrêa, A M; Nascimento, M; Barroso, L M A; Ceccon, G
2016-05-09
The aim of this study was to evaluate the suitability of statistics as experimental precision degree measures for trials with cowpea (Vigna unguiculata L. Walp.) genotypes. Cowpea genotype yields were evaluated in 29 trials conducted in Brazil between 2005 and 2012. The genotypes were evaluated with a randomized block design with four replications. Ten statistics that were estimated for each trial were compared using descriptive statistics, Pearson correlations, and path analysis. According to the class limits established, selective accuracy and F-test values for genotype, heritability, and the coefficient of determination adequately estimated the degree of experimental precision. Using these statistics, 86.21% of the trials had adequate experimental precision. Selective accuracy and the F-test values for genotype, heritability, and the coefficient of determination were directly related to each other, and were more suitable than the coefficient of variation and the least significant difference (by the Tukey test) to evaluate experimental precision in trials with cowpea genotypes.
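For reference, two of the statistics named above can be computed directly from the genotype F-test of a randomized-block ANOVA: selective accuracy SA = sqrt(1 - 1/F) and the heritability of genotype means h² = 1 - 1/F (for F ≥ 1). A small Python sketch; the class limits used to judge adequacy are not reproduced here:

```python
import numpy as np

def experimental_precision_stats(f_genotype):
    """Precision measures derived from the ANOVA F value for genotypes.

    Selective accuracy SA = sqrt(1 - 1/F) and broad-sense heritability of
    genotype means h2 = 1 - 1/F (for F >= 1); both are among the statistics
    recommended in the study above.
    """
    f = float(f_genotype)
    if f <= 1.0:
        return 0.0, 0.0
    h2_mean = 1.0 - 1.0 / f
    return np.sqrt(h2_mean), h2_mean

for f in (1.5, 3.0, 10.0):
    sa, h2 = experimental_precision_stats(f)
    print(f"F = {f:5.1f}  ->  SA = {sa:.2f},  h2(means) = {h2:.2f}")
```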
Probing leptophilic dark sectors with hadronic processes
NASA Astrophysics Data System (ADS)
D'Eramo, Francesco; Kavanagh, Bradley J.; Panci, Paolo
2017-08-01
We study vector portal dark matter models where the mediator couples only to leptons. In spite of the lack of tree-level couplings to colored states, radiative effects generate interactions with quark fields that could give rise to a signal in current and future experiments. We identify such experimental signatures: scattering of nuclei in dark matter direct detection; resonant production of lepton-antilepton pairs at the Large Hadron Collider; and hadronic final states in dark matter indirect searches. Furthermore, radiative effects also generate an irreducible mass mixing between the vector mediator and the Z boson, severely bounded by ElectroWeak Precision Tests. We use current experimental results to put bounds on this class of models, accounting for both radiatively induced and tree-level processes. Remarkably, the former often overwhelm the latter.
Current and Future Research at DANCE
NASA Astrophysics Data System (ADS)
Jandel, M.; Baramsai, B.; Bredeweg, T. A.; Couture, A.; Hayes, A.; Kawano, T.; Mosby, S.; Rusev, G.; Stetcu, I.; Taddeucci, T. N.; Talou, P.; Ullmann, J. L.; Walker, C. L.; Wilhelmy, J. B.
2015-05-01
An overview of the current experimental program on measurements of neutron capture and neutron-induced fission at the DANCE facility is presented.
Numerical and experimental analyses of lighting columns in terms of passive safety
NASA Astrophysics Data System (ADS)
Jedliński, Tomasz Ireneusz; Buśkiewicz, Jacek
2018-01-01
Modern lighting columns have a very beneficial influence on road safety. Currently, the columns are designed to keep the driver safe in the event of a car collision. The following work compares experimental results of vehicle impact on a lighting column with FEM simulations performed using the Ansys LS-DYNA program. Given the high cost of experiments and the time-consuming research process, computer software appears to be a very useful tool in the development of pole structures, which are to absorb the kinetic energy of the vehicle in a precisely prescribed way.
Precise measurement of the half-life of the Fermi β decay of 26Alm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scott, Rebecca J.; Thompson, Maxwell N.; Rassool, Roger P.
2011-08-15
State-of-the-art signal digitization and analysis techniques have been used to measure the half-life of the Fermi β decay of 26Alm. The half-life was determined to be 6347.8 ± 2.5 ms. This new datum contributes to the experimental testing of the conserved-vector-current hypothesis and the required unitarity of the Cabibbo-Kobayashi-Maskawa matrix: two essential components of the standard model. Detailed discussion of the experimental techniques and data analysis and a thorough investigation of the statistical and systematic uncertainties are presented.
Brodén, Cyrus; Olivecrona, Henrik; Maguire, Gerald Q; Noz, Marilyn E; Zeleznik, Michael P; Sköldenberg, Olof
2016-01-01
Background and Purpose. The gold standard for detection of implant wear and migration is currently radiostereometry (RSA). The purpose of this study is to compare a three-dimensional computed tomography technique (3D CT) to standard RSA as an alternative technique for measuring migration of acetabular cups in total hip arthroplasty. Materials and Methods. With tantalum beads, we marked one cemented and one uncemented cup and mounted these on a similarly marked pelvic model. A comparison was made between 3D CT and standard RSA for measuring migration. Twelve repeated stereoradiographs and CT scans with double examinations in each position and gradual migration of the implants were made. Precision and accuracy of the 3D CT were calculated. Results. The accuracy of the 3D CT ranged between 0.07 and 0.32 mm for translations and 0.21 and 0.82° for rotation. The precision ranged between 0.01 and 0.09 mm for translations and 0.06 and 0.29° for rotations, respectively. For standard RSA, the precision ranged between 0.04 and 0.09 mm for translations and 0.08 and 0.32° for rotations, respectively. There was no significant difference in precision between 3D CT and standard RSA. The effective radiation dose of the 3D CT method, comparable to RSA, was estimated to be 0.33 mSv. Interpretation. Low dose 3D CT is a comparable method to standard RSA in an experimental setting.
Measuring the effective pixel positions for the HARPS3 CCD
NASA Astrophysics Data System (ADS)
Hall, Richard D.; Thompson, Samantha; Queloz, Didier
2016-07-01
We present preliminary results from an experiment designed to measure the effective pixel positions of a CCD to sub-pixel precision. This technique will be used to characterise the 4k x 4k CCD destined for the HARPS-3 spectrograph. The principle of coherent beam interference is used to create intensity fringes along one axis of the CCD. By sweeping the physical parameters of the experiment, the geometry of the fringes can be altered which is used to probe the pixel structure. We also present the limitations of the current experimental set-up and suggest what will be implemented in the future to vastly improve the precision of the measurements.
A minimalistic and optimized conveyor belt for neutral atoms.
Roy, Ritayan; Condylis, Paul C; Prakash, Vindhiya; Sahagun, Daniel; Hessmo, Björn
2017-10-20
Here we report on the design and performance of an optimized micro-fabricated conveyor belt for precise and adiabatic transportation of cold atoms. A theoretical model is presented to determine optimal currents in the conductors used for the transportation. We experimentally demonstrate fast adiabatic transportation of cold rubidium (87Rb) atoms with minimal loss and heating using as few as three conveyor belt conductors. This novel design of a multilayered conveyor belt structure is fabricated in aluminium nitride (AlN) because of its outstanding thermal and electrical properties. This demonstration would pave the way for compact and portable quantum devices required for quantum information processing and sensing, where precise positioning of cold atoms is desirable.
Passi, Vikram; Gahoi, Amit; Senkovskiy, Boris V; Haberer, Danny; Fischer, Felix R; Grüneis, Alexander; Lemme, Max C
2018-03-28
We report on the experimental demonstration and electrical characterization of N = 7 armchair graphene nanoribbon (7-AGNR) field effect transistors. The back-gated transistors are fabricated from atomically precise and highly aligned 7-AGNRs, synthesized with a bottom-up approach. The large area transfer process holds the promise of scalable device fabrication with atomically precise nanoribbons. The channels of the FETs are approximately 30 times longer than the average nanoribbon length of 30 nm to 40 nm. The density of the GNRs is high, so that transport can be assumed to be well above the percolation threshold. The long channel transistors exhibit a maximum ION/IOFF current ratio of 87.5.
The Joint Physics Analysis Center: Recent results
NASA Astrophysics Data System (ADS)
Fernández-Ramírez, César
2016-10-01
We review some of the recent achievements of the Joint Physics Analysis Center, a theoretical collaboration with ties to experimental collaborations, that aims to provide amplitudes suitable for the analysis of current and forthcoming experimental data on hadron physics. Since its foundation in 2013, the group has focused on hadron spectroscopy in preparation for the forthcoming high-statistics and high-precision experimental data from the BELLEII, BESIII, CLAS12, COMPASS, GlueX, LHCb and (hopefully) PANDA collaborations. So far, we have developed amplitudes for πN scattering, KN scattering, pion and J/ψ photoproduction, two-kaon photoproduction and three-body decays of light mesons (η, ω, ϕ). The codes for the amplitudes are available to download from the group web page and can be straightforwardly incorporated into the analysis of experimental data.
A method of undifferenced ambiguity resolution for GPS+GLONASS precise point positioning
Yi, Wenting; Song, Weiwei; Lou, Yidong; Shi, Chuang; Yao, Yibin
2016-01-01
Integer ambiguity resolution is critical for achieving positions of high precision and for shortening the convergence time of precise point positioning (PPP). However, GLONASS adopts the signal processing technology of frequency division multiple access, which results in inter-frequency code biases (IFCBs) that are currently difficult to correct. This bias makes the methods proposed for GPS ambiguity fixing unsuitable for GLONASS. To realize undifferenced GLONASS ambiguity fixing, we propose an undifferenced ambiguity resolution method for GPS+GLONASS PPP, which considers the IFCBs estimation. The experimental result demonstrates that the success rate of GLONASS ambiguity fixing can reach 75% with the proposed method. Compared with the ambiguity float solutions, the positioning accuracies of the ambiguity-fixed solutions of GLONASS-only PPP are increased by 12.2%, 20.9%, and 10.3%, and those of GPS+GLONASS PPP by 13.0%, 35.2%, and 14.1% in the North, East and Up directions, respectively. PMID:27222361
An application framework for computer-aided patient positioning in radiation therapy.
Liebler, T; Hub, M; Sanner, C; Schlegel, W
2003-09-01
The importance of exact patient positioning in radiation therapy increases with the ongoing improvements in irradiation planning and treatment. Therefore, new ways to overcome precision limitations of current positioning methods in fractionated treatment have to be found. The Department of Medical Physics at the German Cancer Research Centre (DKFZ) follows different video-based approaches to increase repositioning precision. In this context, the modular software framework FIVE (Fast Integrated Video-based Environment) has been designed and implemented. It is both hardware- and platform-independent and supports merging position data by integrating various computer-aided patient positioning methods. A highly precise optical tracking system and several subtraction imaging techniques have been realized as modules to supply basic video-based repositioning techniques. This paper describes the common framework architecture, the main software modules and their interfaces. An object-oriented software engineering process has been applied using the UML, C++ and the Qt library. The significance of the current framework prototype for the application in patient positioning as well as the extension to further application areas will be discussed. Particularly in experimental research, where special system adjustments are often necessary, the open design of the software allows problem-oriented extensions and adaptations.
Dylla, Daniel P.; Megison, Susan D.
2015-01-01
Objective. We compared the precision of a search strategy designed specifically to retrieve randomized controlled trials (RCTs) and systematic reviews of RCTs with search strategies designed for broader purposes. Methods. We designed an experimental search strategy that automatically revised searches up to five times by using increasingly restrictive queries as long as at least 50 citations were retrieved. We compared the ability of the experimental and alternative strategies to retrieve studies relevant to 312 test questions. The primary outcome, search precision, was defined for each strategy as the proportion of relevant, high quality citations among the first 50 citations retrieved. Results. The experimental strategy had the highest median precision (5.5%; interquartile range [IQR]: 0%–12%) followed by the narrow strategy of the PubMed Clinical Queries (4.0%; IQR: 0%–10%). The experimental strategy found the most high quality citations (median 2; IQR: 0–6) and was the strategy most likely to find at least one high quality citation (73% of searches; 95% confidence interval 68%–78%). All comparisons were statistically significant. Conclusions. The experimental strategy performed the best in all outcomes although all strategies had low precision. PMID:25922798
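The narrowing loop and the precision metric described above can be sketched in a few lines. This is only an illustration of the stated logic; the `run_query` function, the list of increasingly restrictive queries, and the relevance set are hypothetical placeholders, not the actual PubMed filters used in the study.

```python
# Hypothetical sketch of the stepwise query-narrowing strategy and the
# precision-at-50 outcome described above; not the study's actual filters.

def narrowed_search(run_query, queries, min_hits=50):
    """Apply increasingly restrictive queries (up to five revisions) and keep
    the most restrictive one that still returns at least `min_hits` citations."""
    results = run_query(queries[0])
    for query in queries[1:6]:            # at most five revisions
        hits = run_query(query)
        if len(hits) < min_hits:          # too restrictive: keep the previous result
            break
        results = hits
    return results

def precision_at_50(citations, relevant_ids):
    """Proportion of relevant, high-quality citations among the first 50 retrieved."""
    top = citations[:50]
    return sum(c in relevant_ids for c in top) / len(top) if top else 0.0
```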
On the advantage of an external focus of attention: a benefit to learning or performance?
Lohse, Keith R; Sherwood, David E; Healy, Alice F
2014-02-01
Although there is general agreement in the sport science community that the focus of attention (FOA) has significant effects on performance, there is some debate about whether or not the FOA adopted during training affects learning. A large number of studies on the focus of attention have shown that subjects who train with an external FOA perform better on subsequent retention and transfer tests. However, the FOA in these studies was not experimentally controlled during testing. Therefore, the current study used a dart-throwing paradigm in which the FOA was experimentally manipulated at both acquisition and testing over very short and long training times. Performance at test, in terms of accuracy and precision, was improved by adopting an external focus at test regardless of the focus instructed during acquisition, in both Experiments 1 and 2. Although an effect of acquisition focus during testing in Experiment 2 provides some evidence that FOA affects learning, the current data demonstrate a much stronger effect for performance than learning, and stronger effects of attention on precision than accuracy. Theoretical implications of these results are discussed, but in general these data provide a more nuanced understanding of how attentional focus instructions influence motor learning and performance.
NASA Technical Reports Server (NTRS)
Amer, Tahani; Tripp, John; Tcheng, Ping; Burkett, Cecil; Sealey, Bradley
2004-01-01
This paper presents the calibration results and uncertainty analysis of a high-precision reference pressure measurement system currently used in wind tunnels at the NASA Langley Research Center (LaRC). Sensors, calibration standards, and measurement instruments are subject to errors due to aging, drift with time, environment effects, transportation, the mathematical model, the calibration experimental design, and other factors. Errors occur at every link in the chain of measurements and data reduction from the sensor to the final computed results. At each link of the chain, bias and precision uncertainties must be separately estimated for facility use, and are combined to produce overall calibration and prediction confidence intervals for the instrument, typically at a 95% confidence level. The uncertainty analysis and calibration experimental designs used herein, based on techniques developed at LaRC, employ replicated experimental designs for efficiency, separate estimation of bias and precision uncertainties, and detection of significant parameter drift with time. Final results are presented, including calibration confidence intervals and prediction intervals given as functions of the applied inputs rather than as a fixed percentage of the full-scale value. System uncertainties are propagated beginning with the initial reference pressure standard, to the calibrated instrument as a working standard in the facility. Among the several parameters that can affect the overall results are operating temperature, atmospheric pressure, humidity, and facility vibration. Effects of factors such as initial zeroing and temperature are investigated. The effects of the identified parameters on system performance and accuracy are discussed.
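As a minimal illustration of how separately estimated bias and precision uncertainties can be combined into a roughly 95% interval, the sketch below uses a plain root-sum-square combination with a coverage factor of 2. This is a generic convention, not necessarily the exact LaRC formulation, and the numbers are placeholders.

```python
import math

def combined_uncertainty(bias_u, precision_u, coverage=2.0):
    """Root-sum-square combination of bias and precision uncertainties with a
    coverage factor (k = 2 approximates a 95% confidence level). Generic
    illustration only; not the specific LaRC procedure."""
    return coverage * math.sqrt(bias_u ** 2 + precision_u ** 2)

# Placeholder values: 0.010 psi bias and 0.006 psi precision give about
# ±0.023 psi at roughly the 95% level.
print(combined_uncertainty(0.010, 0.006))
```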
Electron-neutrino charged-current quasi-elastic scattering in MINERvA
NASA Astrophysics Data System (ADS)
Wolcott, Jeremy
2014-03-01
The electron-neutrino charged-current quasi-elastic (CCQE) cross-section on nuclei is an important input parameter to appearance-type neutrino oscillation experiments. Current experiments typically work from the muon neutrino CCQE cross-section and apply corrections from theoretical arguments to obtain a prediction for the electron neutrino CCQE cross-section, but to date there has been no precise experimental verification of these estimates at an energy scale appropriate to such experiments. We present the current status of a direct measurement of the electron neutrino CCQE differential cross-section as a function of the squared four-momentum transfer to the nucleus, Q2, in MINERvA. This talk will discuss event selection, background constraints, and the flux prediction used in the calculation.
A Self-Organizing Spatial Clustering Approach to Support Large-Scale Network RTK Systems.
Shen, Lili; Guo, Jiming; Wang, Lei
2018-06-06
The network real-time kinematic (RTK) technique can provide centimeter-level real time positioning solutions and play a key role in geo-spatial infrastructure. With ever-increasing popularity, network RTK systems will face issues in the support of large numbers of concurrent users. In the past, high-precision positioning services were oriented towards professionals and only supported a few concurrent users. Currently, precise positioning provides a spatial foundation for artificial intelligence (AI), and countless smart devices (autonomous cars, unmanned aerial-vehicles (UAVs), robotic equipment, etc.) require precise positioning services. Therefore, the development of approaches to support large-scale network RTK systems is urgent. In this study, we proposed a self-organizing spatial clustering (SOSC) approach which automatically clusters online users to reduce the computational load on the network RTK system server side. The experimental results indicate that both the SOSC algorithm and the grid algorithm can reduce the computational load efficiently, while the SOSC algorithm gives a more elastic and adaptive clustering solution with different datasets. The SOSC algorithm determines the cluster number and the mean distance to cluster center (MDTCC) according to the data set, while the grid approaches are all predefined. The side-effects of clustering algorithms on the user side are analyzed with real global navigation satellite system (GNSS) data sets. The experimental results indicate that 10 km can be safely used as the cluster radius threshold for the SOSC algorithm without significantly reducing the positioning precision and reliability on the user side.
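The radius-threshold idea behind the clustering can be illustrated with a deliberately simple greedy scheme: each online user joins the first cluster whose centre lies within the threshold, otherwise a new cluster is opened. The published SOSC algorithm is self-organizing and considerably more elaborate; the sketch below, with hypothetical planar coordinates in kilometres, only shows the 10 km radius constraint in action.

```python
import math

def simple_radius_clustering(points, radius_km=10.0):
    """Greedy radius-limited clustering: a point joins the first cluster whose
    centre is within `radius_km`, otherwise it starts a new cluster.
    Illustration of the radius threshold only, not the SOSC algorithm itself."""
    clusters = []                                  # each: {"centre": point, "members": [...]}
    for p in points:
        for c in clusters:
            if math.dist(p, c["centre"]) <= radius_km:
                c["members"].append(p)
                break
        else:
            clusters.append({"centre": p, "members": [p]})
    return clusters

# Hypothetical user positions (km on a local plane): two well-separated groups.
users = [(0, 0), (3, 4), (50, 50), (52, 48)]
print(len(simple_radius_clustering(users)))        # -> 2
```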
Experimental Guidance for Isospin Symmetry Breaking Calculations via Single Neutron Pickup Reactions
NASA Astrophysics Data System (ADS)
Leach, K. G.; Garrett, P. E.; Bangay, J. C.; Bianco, L.; Demand, G. A.; Finlay, P.; Green, K. L.; Phillips, A. A.; Rand, E. T.; Sumithrarachchi, C. S.; Svensson, C. E.; Triambak, S.; Wong, J.; Ball, G.; Faestermann, T.; Krücken, R.; Hertenberger, R.; Wirth, H.-F.; Towner, I. S.
2013-03-01
Recent activity in superallowed isospin-symmetry-breaking correction calculations has prompted interest in experimental confirmation of these calculation techniques. The shell-model set of Towner and Hardy (2008) includes the opening of specific core orbitals that were previously frozen. This has resulted in significant shifts in some of the δC values, and an improved agreement of the individual corrected Ft values with the adopted world average of the 13 cases currently included in the high-precision evaluation of Vud. While the nucleus-to-nucleus variation of Ft is consistent with the conserved-vector-current (CVC) hypothesis of the Standard Model, these new calculations must be thoroughly tested, and guidance must be given for their improvement. Presented here are details of a 64Zn(d⃗,t)63Zn experiment, undertaken to provide such guidance.
Experimental Estimation of Entanglement at the Quantum Limit
NASA Astrophysics Data System (ADS)
Brida, Giorgio; Degiovanni, Ivo Pietro; Florio, Angela; Genovese, Marco; Giorda, Paolo; Meda, Alice; Paris, Matteo G. A.; Shurupov, Alexander
2010-03-01
Entanglement is the central resource of quantum information processing and the precise characterization of entangled states is a crucial issue for the development of quantum technologies. This leads to the necessity of a precise, experimentally feasible measure of entanglement. Nevertheless, such measurements are limited both by experimental uncertainties and by intrinsic quantum bounds. Here we present an experiment where the amount of entanglement of a family of two-qubit mixed photon states is estimated with the ultimate precision allowed by quantum mechanics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grange, Joseph M.
2013-01-01
This dissertation presents the first measurement of the muon antineutrino charged current quasi-elastic double-differential cross section. These data significantly extend the knowledge of neutrino and antineutrino interactions in the GeV range, a region that has recently come under scrutiny due to a number of conflicting experimental results. To maximize the precision of this measurement, three novel techniques were employed to measure the neutrino background component of the data set. Representing the first measurements of the neutrino contribution to an accelerator-based antineutrino beam in the absence of a magnetic field, the successful execution of these techniques carries implications for current and future neutrino experiments.
Precision spectroscopy of the 2S-4P transition in atomic hydrogen
NASA Astrophysics Data System (ADS)
Maisenbacher, Lothar; Beyer, Axel; Matveev, Arthur; Grinin, Alexey; Pohl, Randolf; Khabarova, Ksenia; Kolachevsky, Nikolai; Hänsch, Theodor W.; Udem, Thomas
2017-04-01
Precision measurements of atomic hydrogen have long been successfully used to extract fundamental constants and to test bound-state QED. However, both these applications are limited by measurements of hydrogen lines other than the very precisely known 1S-2S transition. Moreover, the proton r.m.s. charge radius rp extracted from electronic hydrogen measurements currently disagrees by 4σ with the much more precise value extracted from muonic hydrogen spectroscopy. We have measured the 2S-4P transition in atomic hydrogen using a cryogenic beam of hydrogen atoms optically excited to the initial 2S state. The first-order Doppler shift of the one-photon 2S-4P transition is suppressed by actively stabilized counter-propagating laser beams and time-of-flight resolved detection. Quantum interference between excitation paths can lead to significant line distortions in our system. We use an experimentally verified, simple line shape model to take these distortions into account. With this, we can extract a new value for rp and the Rydberg constant R∞ with accuracy comparable to the combined previous hydrogen world data.
Quantum light in coupled interferometers for quantum gravity tests.
Ruo Berchera, I; Degiovanni, I P; Olivares, S; Genovese, M
2013-05-24
In recent years quantum correlations have received a lot of attention as a key ingredient in advanced quantum metrology protocols. In this Letter we show that they provide even larger advantages when considering multiple-interferometer setups. In particular, we demonstrate that the use of quantum correlated light beams in coupled interferometers leads to substantial advantages with respect to classical light, up to a noise-free scenario for the ideal lossless case. On the one hand, our results prompt the possibility of testing quantum gravity in experimental configurations affordable in current quantum optics laboratories and strongly improve the precision in "larger size experiments" such as the Fermilab holometer; on the other hand, they pave the way for future applications to high precision measurements and quantum metrology.
Identification of Tool Wear when Machining Austenitic Steels and Titanium by Miniature Machining
NASA Astrophysics Data System (ADS)
Pilc, Jozef; Kameník, Roman; Varga, Daniel; Martinček, Juraj; Sadilek, Marek
2016-12-01
Application of miniature machining is currently increasing rapidly, mainly in the biomedical industry and in the machining of hard-to-machine materials. Machinability of materials with an increased level of toughness depends on factors that are important in the final state of surface integrity. Because of this, it is necessary to achieve high precision (of the order of microns) in miniature machining. If we want to guarantee high machining precision, it is necessary to analyse tool wear intensity in direct interaction with the given machined materials. During a long-term cutting process, different cutting wedge deformations occur, leading in most cases to rapid wear and destruction of the cutting wedge. This article deals with experimental monitoring of tool wear intensity during miniature machining.
Dendritic trafficking faces physiologically critical speed-precision tradeoffs
Williams, Alex H; O'Donnell, Cian; Sejnowski, Terrence J; O'Leary, Timothy
2016-01-01
Nervous system function requires intracellular transport of channels, receptors, mRNAs, and other cargo throughout complex neuronal morphologies. Local signals such as synaptic input can regulate cargo trafficking, motivating the leading conceptual model of neuron-wide transport, sometimes called the ‘sushi-belt model’ (Doyle and Kiebler, 2011). Current theories and experiments are based on this model, yet its predictions are not rigorously understood. We formalized the sushi belt model mathematically, and show that it can achieve arbitrarily complex spatial distributions of cargo in reconstructed morphologies. However, the model also predicts an unavoidable, morphology dependent tradeoff between speed, precision and metabolic efficiency of cargo transport. With experimental estimates of trafficking kinetics, the model predicts delays of many hours or days for modestly accurate and efficient cargo delivery throughout a dendritic tree. These findings challenge current understanding of the efficacy of nucleus-to-synapse trafficking and may explain the prevalence of local biosynthesis in neurons. DOI: http://dx.doi.org/10.7554/eLife.20556.001 PMID:28034367
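A toy one-dimensional version of the trafficking model helps make the speed-precision tradeoff concrete: cargo hops diffusively between compartments and is irreversibly delivered at a distal site, so slow detachment gives precise targeting but long delivery times. The rates below are illustrative placeholders, not the paper's fitted kinetics.

```python
import numpy as np

def sushi_belt_delivery(n_comp=100, hop_rate=1.0, detach_rate=0.01,
                        t_max=7200.0, dt=0.05):
    """Toy 1-D sushi-belt model: cargo hops between neighbouring compartments
    at `hop_rate` per second (both directions, reflecting ends) and is
    irreversibly delivered from the last compartment at `detach_rate`.
    Returns the fraction delivered after `t_max` seconds."""
    u = np.zeros(n_comp)
    u[0] = 1.0                                   # all cargo starts at the soma
    delivered = 0.0
    for _ in range(int(t_max / dt)):
        left = np.concatenate(([u[0]], u[:-1]))  # reflecting boundaries
        right = np.concatenate((u[1:], [u[-1]]))
        u = u + hop_rate * dt * (left + right - 2 * u)
        d = detach_rate * dt * u[-1]             # delivery at the distal site
        u[-1] -= d
        delivered += d
    return delivered

# Even after two simulated hours, only a small fraction reaches the distal site.
print(sushi_belt_delivery())
```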
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurtukian-Nieto, T.; Collaboration: NEX Group of CENBG
2011-11-30
The experimental study of superallowed nuclear β decays serves as a sensitive probe of the conservation of the weak vector current (CVC) and allows tight limits to be set on the presence of scalar or right-handed currents. Once CVC is verified, it is possible to determine the Vud element of the CKM quark-mixing matrix. Similarly, the study of nuclear mirror β decays allows one to arrive at the same final quantity, Vud. Whereas dedicated studies of 0+ → 0+ decays have been performed for several decades now, the potential of mirror transitions was only rediscovered recently. Therefore, it can be expected that important progress is possible with high-precision studies of different mirror β decays. In the present work, the half-life measurements performed by the CENBG group on the proton-rich nuclei 42Ti, 38-39Ca, 30-31S and 29P are summarised.
Mazzone, P; Arena, P; Cantelli, L; Spampinato, G; Sposato, S; Cozzolino, S; Demarinis, P; Muscato, G
2016-07-01
The use of robotics in neurosurgery and, particularly, in stereotactic neurosurgery is becoming increasingly widespread because of the great advantages that it offers. Robotic manipulators readily achieve great precision, reliability, and rapidity in the positioning of surgical instruments or devices in the brain. The aim of this work was to experimentally verify a fully automatic "no hands" surgical procedure. The integration of neuroimaging data for planning the surgery, followed by the application of new specific surgical tools, permitted the realization of a fully automated robotic implantation of leads in brain targets. An anthropomorphic commercial manipulator was utilized. In a preliminary phase, software to plan the surgery was developed, and the surgical tools were tested first during a simulation and then on a skull mock-up. In this way, several tools were developed and tested, and the basis for an innovative surgical procedure arose. The final experimentation was carried out on anesthetized "large white" pigs. The determination of stereotactic parameters for the correct planning to reach the intended target was performed with the same technique currently employed in human stereotactic neurosurgery, and the robotic system proved to be reliable and precise in reaching the target. The results of this work strengthen the possibility that a neurosurgeon may be substituted by a machine, and may represent the beginning of a new approach in current clinical practice. Moreover, this possibility may have a great impact not only on stereotactic functional procedures but also on the entire domain of neurosurgery.
Calculating the metabolizable energy of macronutrients: a critical review of Atwater's results.
Sánchez-Peña, M Judith; Márquez-Sandoval, Fabiola; Ramírez-Anguiano, Ana C; Velasco-Ramírez, Sandra F; Macedo-Ojeda, Gabriela; González-Ortiz, Luis J
2017-01-01
The current values for metabolizable energy of macronutrients were proposed in 1910. Since then, however, efforts to revise these values have been practically absent, creating a crucial need to carry out a critical analysis of the experimental methodology and results that form the basis of these values. Presented here is an exhaustive analysis of Atwater's work on this topic, showing evidence of considerable weaknesses that compromise the validity of his results. These weaknesses include the following: (1) the doubtful representativeness of Atwater's subjects, their activity patterns, and their diets; (2) the extremely short duration of the experiments; (3) the uncertainty about which fecal and urinary excretions contain the residues of each ingested food; (4) the uncertainty about whether or not the required nitrogen balance in individuals was reached during experiments; (5) the numerous experiments carried out without valid preliminary experiments; (6) the imprecision affecting Atwater's experimental measurements; and (7) the numerous assumptions and approximations, along with the lack of information, characterizing Atwater's studies. This review presents specific guidelines for establishing new experimental procedures to estimate more precise and/or more accurate values for the metabolizable energy of macronutrients. The importance of estimating these values in light of their possible dependence on certain nutritional parameters and/or physical activity patterns of individuals is emphasized. The use of more precise values would allow better management of the current overweight and obesity epidemic.
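For context, the macronutrient energy values in current use are the general Atwater factors of roughly 4, 9, and 4 kcal/g for protein, fat, and carbohydrate; the sketch below only illustrates the routine calculation that these factors feed, using made-up food composition numbers.

```python
# Widely used rounded general Atwater factors (kcal per gram); shown only to
# illustrate the calculation whose experimental basis the review questions.
ATWATER_KCAL_PER_G = {"protein": 4.0, "fat": 9.0, "carbohydrate": 4.0}

def metabolizable_energy(grams_by_macronutrient):
    """Metabolizable energy (kcal) of a food from its macronutrient grams."""
    return sum(ATWATER_KCAL_PER_G[name] * grams
               for name, grams in grams_by_macronutrient.items())

# Hypothetical food: 10 g protein, 5 g fat, 30 g carbohydrate -> 205 kcal.
print(metabolizable_energy({"protein": 10, "fat": 5, "carbohydrate": 30}))
```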
Closing in on the radiative weak chiral couplings
NASA Astrophysics Data System (ADS)
Cappiello, Luigi; Catà, Oscar; D'Ambrosio, Giancarlo
2018-03-01
We point out that, given the current experimental status of radiative kaon decays, a subclass of the O(p^4) counterterms of the weak chiral Lagrangian can be determined in closed form. This involves in a decisive way the decay K± → π±π0 l+l−, currently being measured at CERN by the NA48/2 and NA62 collaborations. We show that consistency with other radiative kaon decay measurements leads to a rather clean prediction for the O(p^4) weak couplings entering this decay mode. This results in a characteristic pattern for the interference Dalitz plot, which can be tested already with the limited statistics available at NA48/2. We also provide the first analysis of KS → π+π−γ*, which will be measured by LHCb and will help reduce (together with the related KL decay) the experimental uncertainty on the radiative weak chiral couplings. A precise experimental determination of the O(p^4) weak couplings is important in order to assess the validity of the existing theoretical models in a conclusive way. We briefly comment on the current theoretical situation and discuss the merits of the different theoretical approaches.
Superallowed nuclear beta decay: Precision measurements for basic physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hardy, J. C.
2012-11-20
For 60 years, superallowed 0+ → 0+ nuclear beta decay has been used to probe the weak interaction, currently verifying the conservation of the vector current (CVC) to high precision (±0.01%) and anchoring the most demanding available test of the unitarity of the Cabibbo-Kobayashi-Maskawa (CKM) matrix (±0.06%), a fundamental pillar of the electroweak standard model. Each superallowed transition is characterized by its ft-value, a result obtained from three measured quantities: the total decay energy of the transition, its branching ratio, and the half-life of the parent state. Today's data set is composed of some 150 independent measurements of 13 separate superallowed transitions covering a wide range of parent nuclei from 10C to 74Rb. Excellent consistency among the average results for all 13 transitions - a prediction of CVC - also confirms the validity of the small transition-dependent theoretical corrections that have been applied to account for isospin symmetry breaking. With CVC consistency established, the value of the vector coupling constant, GV, has been extracted from the data and used to determine the top left element of the CKM matrix, Vud. With this result the top-row unitarity test of the CKM matrix yields the value 0.99995(61), a result that sets a tight limit on possible new physics beyond the standard model. To have any impact on these fundamental weak-interaction tests, any measurement must be made with a precision of 0.1% or better - a substantial experimental challenge well beyond the requirements of most nuclear physics measurements. I overview the current state of the field and outline some of the requirements that need to be met by experimentalists if they aim to make measurements with this high level of precision.
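The way the three measured quantities combine into an ft value can be sketched schematically: the branching ratio converts the parent half-life into the partial half-life of the superallowed branch, and the statistical rate function f (computed from the measured decay energy, and taken here as a given input) multiplies it. The numbers below are placeholders, not data for any real transition, and small corrections such as the electron-capture fraction are only indicated.

```python
def partial_half_life(total_half_life_s, branching_ratio, electron_capture_fraction=0.0):
    """Partial half-life t of the superallowed branch: parent half-life divided
    by the branching ratio, with an optional electron-capture correction."""
    return total_half_life_s / branching_ratio * (1.0 + electron_capture_fraction)

def ft_value(f_statistical_rate, total_half_life_s, branching_ratio):
    """ft = f * t, with f the statistical rate function derived from the
    measured decay energy (supplied here as a given input)."""
    return f_statistical_rate * partial_half_life(total_half_life_s, branching_ratio)

# Placeholder inputs only, to show how the three measured quantities combine.
print(ft_value(f_statistical_rate=3000.0, total_half_life_s=1.0, branching_ratio=0.999))
```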
Trace element analysis by EPMA in geosciences: detection limit, precision and accuracy
NASA Astrophysics Data System (ADS)
Batanova, V. G.; Sobolev, A. V.; Magnin, V.
2018-01-01
Use of the electron probe microanalyser (EPMA) for trace element analysis has increased over the last decade, mainly because of improved stability of spectrometers and the electron column when operated at high probe current; development of new large-area crystal monochromators and ultra-high count rate spectrometers; full integration of energy-dispersive / wavelength-dispersive X-ray spectrometry (EDS/WDS) signals; and the development of powerful software packages. For phases that are stable under a dense electron beam, the detection limit and precision can be decreased to the ppm level by using high acceleration voltage and beam current combined with long counting time. Data on 10 elements (Na, Al, P, Ca, Ti, Cr, Mn, Co, Ni, Zn) in olivine obtained on a JEOL JXA-8230 microprobe with tungsten filament show that the detection limit decreases in inverse proportion to the square root of counting time and probe current. For all elements equal to or heavier than phosphorus (Z = 15), the detection limit decreases with increasing accelerating voltage. The analytical precision for minor and trace elements analysed in olivine at 25 kV accelerating voltage and 900 nA beam current is 4 - 18 ppm (2 standard deviations of repeated measurements of the olivine reference sample) and is similar to the detection limit of the corresponding elements. To analyse trace elements accurately requires careful estimation of background, and consideration of sample damage under the beam and secondary fluorescence from phase boundaries. The development and use of matrix reference samples with well-characterised trace elements of interest is important for monitoring and improving the accuracy. An evaluation of the accuracy of trace element analyses in olivine has been made by comparing EPMA data for new reference samples with data obtained by different in-situ and bulk analytical methods in six different laboratories worldwide. For all elements, the measured concentrations in the olivine reference sample were found to be identical (within internal precision) to the reference values, suggesting that the achieved precision and accuracy are similar. The spatial resolution of EPMA in a silicate matrix, even at very extreme conditions (accelerating voltage 25 kV), does not exceed 7 - 8 μm and thus is still better than laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) or secondary ion mass spectrometry (SIMS) of similar precision. These qualities make the electron microprobe an indispensable method with applications in experimental petrology, geochemistry and cosmochemistry.
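Reading the quoted counting-statistics behaviour as a detection limit that scales with 1/√(counting time × probe current), a reference measurement can be rescaled to other conditions as in the sketch below; the reference point is hypothetical, not one of the measured values.

```python
import math

def scaled_detection_limit(dl_ref_ppm, t_ref_s, i_ref_na, t_s, i_na):
    """Rescale a reference detection limit assuming DL is proportional to
    1/sqrt(counting time x probe current), the usual counting-statistics
    behaviour. The reference point passed in is hypothetical."""
    return dl_ref_ppm * math.sqrt((t_ref_s * i_ref_na) / (t_s * i_na))

# Quadrupling the counting time at the same beam current halves the limit.
print(scaled_detection_limit(20.0, 60.0, 900.0, 240.0, 900.0))   # -> 10.0
```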
Neutrino oscillations: The rise of the PMNS paradigm
NASA Astrophysics Data System (ADS)
Giganti, C.; Lavignac, S.; Zito, M.
2018-01-01
Since the discovery of neutrino oscillations, the experimental progress in the last two decades has been very fast, with precision measurements of the neutrino squared-mass differences and of the mixing angles, including the last unknown mixing angle θ13. Today a very large set of oscillation results obtained with a variety of experimental configurations and techniques can be interpreted in the framework of three active massive neutrinos, whose mass and flavour eigenstates are related by a 3 × 3 unitary mixing matrix, the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix, parametrized by three mixing angles θ12, θ23, θ13 and a CP-violating phase δCP. The additional parameters governing neutrino oscillations are the squared-mass differences Δmji² = mj² − mi², where mi is the mass of the ith neutrino mass eigenstate. This review covers the rise of the PMNS three-neutrino mixing paradigm and the current status of the experimental determination of its parameters. The next years will continue to see a rich program of experimental endeavour coming to fruition and addressing the three missing pieces of the puzzle, namely the determination of the octant and precise value of the mixing angle θ23, the unveiling of the neutrino mass ordering (whether m1 is lighter or heavier than m3), and the measurement of the CP-violating phase δCP.
NASA Astrophysics Data System (ADS)
Zhou, Tong; Zhao, Jian; He, Yong; Jiang, Bo; Su, Yan
2018-05-01
A novel self-adaptive background current compensation circuit for infrared focal plane arrays is proposed in this paper, which can compensate the background current generated under different conditions. A double-threshold detection strategy is designed to estimate and eliminate the background currents, which significantly reduces the hardware overhead and improves the uniformity among different pixels. In addition, the circuit is compatible with various categories of infrared thermo-sensitive materials. The testing results of a 4 × 4 experimental chip showed that the proposed circuit achieves high precision, wide applicability and a high degree of self-adaptivity. Tape-out of the 320 × 240 readout circuit, as well as the bonding, encapsulation and imaging verification of the uncooled infrared focal plane array, have also been completed.
Experimental Tests of Special Relativity
Roberts, Tom [Illinois Institute of Technology, Chicago, Illinois, United States
2017-12-09
Over the past century Special Relativity has become a cornerstone of modern physics, and its Lorentz invariance is a foundation of every current fundamental theory of physics. So it is crucial that it be thoroughly tested experimentally. The many tests of SR will be discussed, including several modern high-precision measurements. Several experiments that appear to be in conflict with SR will also be discussed, such as claims that the famous measurements of Michelson and Morley actually have a non-null result, and the similar but far more extensive measurements of Dayton Miller that 'determined the absolute motion of the earth'. But the error bars for these old experiments are huge, and are larger than their purported signals. In short, SR has been tested extremely well and stands unrefuted today, but current thoughts about quantum gravity suggest that it might not truly be a symmetry of nature.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilkes, Richard Jeffrey
The University of Washington (UW) HEP neutrino group performed experimental research on the physics of neutrinos, using the capabilities offered by the T2K Experiment and the Super-Kamiokande Neutrino Observatory. The UW group included senior investigator R. J. Wilkes, two PhD students, four MS degree students, and a research engineer, all of whom are members of the international scientific collaborations for T2K and Super-Kamiokande. During the period of support, within T2K we pursued new precision studies sensitive to new physics, going beyond the limits of current measurements of the fundamental neutrino oscillation parameters (mass differences and mixing angles). We began efforts to measure (or significantly determine the absence of) the CP-violating phase parameter δCP and determine the neutrino mass hierarchy. Using the Super-Kamiokande (SK) detector we pursued newly increased precision in measurement of neutrino oscillation parameters with atmospheric neutrinos, and extended the current reach in searches for proton decay, in addition to running the most sensitive supernova watch instrument [Scholberg 2012], performing other astrophysical neutrino studies, and analyzing beam-induced events from T2K. Overall, the research addressed central questions in the field of particle physics. It included the training of graduate students (both PhD and professional MS degree students), and postdoctoral researchers. Undergraduate students also participated as laboratory assistants.
Small animal radiotherapy research platforms
NASA Astrophysics Data System (ADS)
Verhaegen, Frank; Granton, Patrick; Tryggestad, Erik
2011-06-01
Advances in conformal radiation therapy and advancements in pre-clinical radiotherapy research have recently stimulated the development of precise micro-irradiators for small animals such as mice and rats. These devices are often kilovolt x-ray radiation sources combined with high-resolution CT imaging equipment for image guidance, as the latter allows precise and accurate beam positioning. This is similar to modern human radiotherapy practice. These devices are considered a major step forward compared to the current standard of animal experimentation in cancer radiobiology research. The availability of this novel equipment enables a wide variety of pre-clinical experiments on the synergy of radiation with other therapies, complex radiation schemes, sub-target boost studies, hypofractionated radiotherapy, contrast-enhanced radiotherapy and studies of relative biological effectiveness, to name just a few examples. In this review we discuss the required irradiation and imaging capabilities of small animal radiation research platforms. We describe the need for improved small animal radiotherapy research and highlight pioneering efforts, some of which led recently to commercially available prototypes. From this, it will be clear that much further development is still needed, on both the irradiation side and imaging side. We discuss at length the need for improved treatment planning tools for small animal platforms, and the current lack of a standard therein. Finally, we mention some recent experimental work using the early animal radiation research platforms, and the potential they offer for advancing radiobiology research.
Precise measurement of the angular correlation parameter aβν in the β decay of 35Ar with LPCTrap
NASA Astrophysics Data System (ADS)
Fabian, X.; Ban, G.; Boussaïd, R.; Breitenfeldt, M.; Couratin, C.; Delahaye, P.; Durand, D.; Finlay, P.; Fléchard, X.; Guillon, B.; Lemière, Y.; Leredde, A.; Liénard, E.; Méry, A.; Naviliat-Cuncic, O.; Pierre, E.; Porobic, T.; Quéméner, G.; Rodríguez, D.; Severijns, N.; Thomas, J. C.; Van Gorp, S.
2014-03-01
Precise measurements in the β decay of the 35Ar nucleus make it possible to search for deviations from the Standard Model (SM) in the weak sector. These measurements allow one either to check the CKM matrix unitarity or to constrain the existence of exotic currents rejected in the V-A theory of the SM. For this purpose, the β-ν angular correlation parameter, aβν, is inferred from a comparison between experimental and simulated recoil-ion time-of-flight distributions following the quasi-pure Fermi transition of 35Ar1+ ions confined in the transparent Paul trap of the LPCTrap device at GANIL. During the last experiment, 1.5 × 10⁶ good events have been collected, which corresponds to an expected precision of less than 0.5% on the aβν value. The required simulation is divided between the use of massive GPU parallelization and the GEANT4 toolkit for the source-cloud kinematics and the tracking of the decay products.
Resolving the neutron lifetime puzzle
NASA Astrophysics Data System (ADS)
Mumm, Pieter
2018-05-01
Free electrons and protons are stable, but outside atomic nuclei, free neutrons decay into a proton, electron, and antineutrino through the weak interaction, with a lifetime of ∼880 s (see the figure). The most precise measurements have stated uncertainties below 1 s (0.1%), but different techniques, although internally consistent, disagree by 4 standard deviations given the quoted uncertainties. Resolving this “neutron lifetime puzzle” has spawned much experimental effort as well as exotic theoretical mechanisms, thus far without a clear explanation. On page 627 of this issue, Pattie et al. (1) present the most precise measurement of the neutron lifetime to date. A new method of measuring trapped neutrons in situ allows a more detailed exploration of one of the more pernicious systematic effects in neutron traps, neutron phase-space evolution (the changing orbits of neutrons in the trap), than do previous methods. The precision achieved, combined with a very different set of systematic uncertainties, gives hope that experiments such as this one can help resolve the current situation with the neutron lifetime.
Wei, Fang; Lu, Bin; Wang, Jian; Xu, Dan; Pan, Zhengqing; Chen, Dijun; Cai, Haiwen; Qu, Ronghui
2015-02-23
A precise, broadband laser frequency sweeping technique is experimentally demonstrated. Using synchronous current compensation, a slave diode laser is dynamically injection-locked to a specific high-order modulation sideband of a narrow-linewidth master laser modulated by an electro-optic modulator (EOM), whose driving radio frequency (RF) signal can be agilely and precisely controlled by a frequency synthesizer; the high-order modulation sideband enables a multiplied sweep range and tuning rate. By using 5th-order sideband injection-locking, the original tuning range of 3 GHz and tuning rate of 0.5 THz/s are multiplied by a factor of 5, to 15 GHz and 2.5 THz/s respectively. The slave laser has a 3 dB linewidth of 2.5 kHz, the same as that of the master laser. The settling time for a 10 MHz frequency switch is 2.5 µs. By using higher-order modulation sidebands and optimized experimental parameters, an extended sweep range and rate can be expected.
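The multiplication of the sweep parameters by the sideband order can be checked with trivial arithmetic; the helper below just restates the numbers quoted above.

```python
def sideband_scaled(base_range_ghz, base_rate_thz_per_s, order):
    """Injection-locking to the Nth modulation sideband multiplies both the
    sweep range and the sweep rate of the master modulation by N."""
    return base_range_ghz * order, base_rate_thz_per_s * order

# 5th-order sideband: 3 GHz and 0.5 THz/s become 15 GHz and 2.5 THz/s.
print(sideband_scaled(3.0, 0.5, 5))
```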
Workshop on Pion-Kaon Interactions (PKI2018) Mini-Proceedings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amaryan, M; Pal, Bilas
This volume is a short summary of talks given at the PKI2018 Workshop organized to discuss the current status and future prospects of π-K interactions. Precise data on the πK interaction will have a strong impact on strange meson spectroscopy and on form factors that are important ingredients in Dalitz plot analyses of decays of heavy mesons, as well as on the precision measurement of the Vus matrix element and therefore on the test of unitarity in the first row of the CKM matrix. The workshop combined the efforts of the experimental, lattice QCD, and phenomenology communities. Experimental data relevant to the topic of the workshop were presented from a broad range of collaborations, including CLAS, GlueX, COMPASS, BaBar, BELLE, BESIII, VEPP-2000, and LHCb. One of the main goals of this workshop was to outline the need for a new high-intensity and high-precision secondary KL beam facility at JLab, produced with the 12 GeV electron beam of the CEBAF accelerator.
Kramers-Kronig relations in Laser Intensity Modulation Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tuncer, Enis
2006-01-01
In this short paper, the Kramers-Kronig relations for the Laser Intensity Modulation Method (LIMM) are presented to check the self-consistency of experimentally obtained complex current densities. The numerical procedure yields well-defined, precise estimates for the real and the imaginary parts of the LIMM current density calculated from its imaginary and real parts, respectively. The procedure also determines an accurate high-frequency real current value which appears to be an intrinsic material parameter, similar to the dielectric permittivity at optical frequencies. Note that the problem considered here couples two different material properties, thermal and electrical; consequently, the validity of the Kramers-Kronig relation indicates that the problem is invariant and linear.
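A crude numerical version of the Kramers-Kronig check can be written in a few lines: the real part of a causal response is reconstructed from its imaginary part, with the principal value handled simply by skipping the singular sample. This is a generic sketch of the kind of self-consistency check described above, not the paper's numerical procedure, and it is verified here on a simple single-relaxation test function rather than on LIMM data.

```python
import numpy as np

def kk_real_from_imag(omega, imag, real_inf=0.0):
    """Estimate Re j(w) from Im j(w) via
        Re j(w) = Re j(inf) + (2/pi) PV integral of w' Im j(w') / (w'^2 - w^2) dw'.
    The principal value is approximated by dropping the singular sample;
    a rough sketch only."""
    omega = np.asarray(omega, dtype=float)
    imag = np.asarray(imag, dtype=float)
    real = np.empty_like(omega)
    for i, w in enumerate(omega):
        keep = np.arange(omega.size) != i            # skip the singular point
        x = omega[keep]
        y = x * imag[keep] / (x ** 2 - w ** 2)
        integral = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))   # trapezoid rule
        real[i] = real_inf + (2.0 / np.pi) * integral
    return real

# Single-relaxation test function chi = 1 / (1 - i w tau): the reconstructed
# real part should roughly match chi.real away from the grid edges.
tau = 1e-3
w = np.logspace(1, 6, 2000)
chi = 1.0 / (1.0 - 1j * w * tau)
re_kk = kk_real_from_imag(w, chi.imag)
print(float(np.max(np.abs(re_kk - chi.real)[100:-100])))
```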
Requirements Doc for Refurb of JASPER Facility in B131HB
DOE Office of Scientific and Technical Information (OSTI.GOV)
Knittel, Kenn M.
The Joint Actinide Shock Physics Experimental Research (JASPER) Program target fabrication facility is currently located in building 131 (B131) of the Lawrence Livermore National Laboratory (LLNL). A portion of this current facility has been committed to another program as part of a larger effort to consolidate LLNL capabilities into newer facilities. This facility assembles precision targets for scientific studies at the Nevada National Security Site (NNSS). B131 is also going through a modernization project to upgrade the infrastructure and abate asbestos. These activities will interrupt the continuous target fabrication efforts for the JASPER Program. Several options are explored to meet the above conflicting requirements, with the final recommendation to prepare a new facility for JASPER target fabrication operations before modernization efforts begin in the current facility assigned to JASPER. This recommendation fits within all schedule constraints and minimizes the disruption to the JASPER Program. This option is not without risk, as it requires moving an aged, precision coordinate measuring machine, which is essential to the JASPER Program's success. The selected option balances the risk to the machine with continuity of operations.
Leptonic-decay-constant ratio f(K+)/f(π+) from lattice QCD with physical light quarks.
Bazavov, A; Bernard, C; DeTar, C; Foley, J; Freeman, W; Gottlieb, Steven; Heller, U M; Hetrick, J E; Kim, J; Laiho, J; Levkova, L; Lightman, M; Osborn, J; Qiu, S; Sugar, R L; Toussaint, D; Van de Water, R S; Zhou, R
2013-04-26
A calculation of the ratio of leptonic decay constants f(K+)/f(π+) makes possible a precise determination of the ratio of Cabibbo-Kobayashi-Maskawa (CKM) matrix elements |V(us)|/|V(ud)| in the standard model, and places a stringent constraint on the scale of new physics that would lead to deviations from unitarity in the first row of the CKM matrix. We compute f(K+)/f(π+) numerically in unquenched lattice QCD using gauge-field ensembles recently generated that include four flavors of dynamical quarks: up, down, strange, and charm. We analyze data at four lattice spacings a ≈ 0.06, 0.09, 0.12, and 0.15 fm with simulated pion masses down to the physical value 135 MeV. We obtain f(K+)/f(π+) = 1.1947(26)(37), where the errors are statistical and total systematic, respectively. This is our first physics result from our N(f) = 2+1+1 ensembles, and the first calculation of f(K+)/f(π+) from lattice-QCD simulations at the physical point. Our result is the most precise lattice-QCD determination of f(K+)/f(π+), with an error comparable to the current world average. When combined with experimental measurements of the leptonic branching fractions, it leads to a precise determination of |V(us)|/|V(ud)| = 0.2309(9)(4) where the errors are theoretical and experimental, respectively.
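Assuming the quoted error pairs are independent, they combine in quadrature; the sketch below simply reproduces that arithmetic for the two ratios quoted above.

```python
import math

def add_in_quadrature(*errors):
    """Combine independent uncertainty components in quadrature."""
    return math.sqrt(sum(e * e for e in errors))

# Lattice ratio 1.1947 with statistical 0.0026 and systematic 0.0037 errors,
# and CKM ratio 0.2309 with theoretical 0.0009 and experimental 0.0004 errors.
print(round(add_in_quadrature(0.0026, 0.0037), 4))   # about 0.0045
print(round(add_in_quadrature(0.0009, 0.0004), 4))   # about 0.001
```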
[Man-made vitreous fibers: current state of knowledge].
Chiappino, G
1999-01-01
Artificial vitreous fibres have been used as thermal insulation since the 1930s. Experimental studies on possible pathogenic, fibrogenic or carcinogenic effects did not produce any clear results until the 1970s, when Stanton demonstrated the carcinogenic effect of these and numerous other fibrous materials after direct inoculation into the pleural cavity. In subsequent years epidemiological and experimental studies multiplied: the epidemiological investigations did not show any evident pathogenic effects in very large cohorts of workers, and experimentally the carcinogenic effect was confirmed only by inoculation of high doses of fibres, while negative results were reported in inhalation experiments. In view of the considerably long time that has elapsed since these materials were first used, the low biopersistence of the fibres and the now consolidated results of a large amount of reliable research, it is today possible to affirm that artificial vitreous fibres are not a hazard for the workers who produce and use them. Since current production in Europe involves mostly large-diameter, non-respirable fibres or fibres with extremely low biopersistence, in accordance with precise European Union recommendations, we may look to the future without undue concern.
A Low Cost Sensors Approach for Accurate Vehicle Localization and Autonomous Driving Application.
Vivacqua, Rafael; Vassallo, Raquel; Martins, Felipe
2017-10-16
Autonomous driving on public roads requires precise localization within the range of a few centimeters. Even the best current precise localization systems based on the Global Navigation Satellite System (GNSS) cannot always reach this level of precision, especially in an urban environment, where the signal is disturbed by surrounding buildings and artifacts. Laser range finders and stereo vision have been successfully used for obstacle detection, mapping and localization to solve the autonomous driving problem. Unfortunately, Light Detection and Ranging (LIDAR) sensors are very expensive and stereo vision requires powerful dedicated hardware to process the camera information. In this context, this article presents a low-cost architecture of sensors and a data fusion algorithm capable of autonomous driving on narrow two-way roads. Our approach exploits a combination of a short-range visual lane marking detector and a dead reckoning system to build a long and precise perception of the lane markings to the rear of the vehicle. This information is used to localize the vehicle on a map that also contains the reference trajectory for autonomous driving. Experimental results show the successful application of the proposed system in a real autonomous driving situation.
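The dead-reckoning half of the fusion can be illustrated with a standard planar odometry update driven by wheel speed and yaw rate; the function below is a generic textbook step, not the paper's implementation, and the parameter names are illustrative.

```python
import math

def dead_reckoning_step(x, y, heading, speed, yaw_rate, dt):
    """Propagate a planar pose one time step from speed (m/s) and yaw rate
    (rad/s). Generic odometry update used only to illustrate the dead-reckoning
    part of the fusion described above."""
    heading += yaw_rate * dt
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    return x, y, heading

# Driving straight for one second at 10 m/s from the origin.
print(dead_reckoning_step(0.0, 0.0, 0.0, 10.0, 0.0, 1.0))   # -> (10.0, 0.0, 0.0)
```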
Molecular Nanotechnology and Designs of Future
NASA Technical Reports Server (NTRS)
Srivastava, Deepak; Chancellor, Marisa K. (Technical Monitor)
1997-01-01
Reviewing the status of current approaches and future projections, as already published in scientific journals and books, the talk will summarize the direction in which computational and experimental molecular nanotechnologies are progressing. Examples of nanotechnological approaches to the design and simulation of atomically precise materials in a variety of interdisciplinary areas will be presented. The concepts of hypothetical molecular machines and assemblers, as explained in Drexler's and Merkle's already published work, and Han et al.'s WWW-distributed molecular gears will be explained.
Towards a dispersive determination of the pion transition form factor
NASA Astrophysics Data System (ADS)
Leupold, Stefan; Hoferichter, Martin; Kubis, Bastian; Niecknig, Franz; Schneider, Sebastian P.
2018-01-01
We start with a brief motivation why the pion transition form factor is interesting and, in particular, how it is related to the high-precision standard-model calculation of the gyromagnetic ratio of the muon. Then we report on the current status of our ongoing project to calculate the pion transition form factor using dispersion theory. Finally we present and discuss a wish list of experimental data that would help to improve the input for our calculations and/or to cross-check our results.
Xie, Weizhen; Zhang, Weiwei
2017-11-01
The present study dissociated the number (i.e., quantity) and precision (i.e., quality) of visual short-term memory (STM) representations in change detection using receiver operating characteristic (ROC) analysis and experimental manipulations. Across three experiments, participants performed both recognition and recall tests of visual STM using the change-detection task and the continuous color-wheel recall task, respectively. Experiment 1 demonstrated that the estimates of the number and precision of visual STM representations based on the ROC model of change-detection performance were robustly correlated with the corresponding estimates based on the mixture model of continuous-recall performance. Experiments 2 and 3 showed that the experimental manipulation of mnemonic precision using white-noise masking and the experimental manipulation of the number of encoded STM representations using consolidation masking produced selective effects on the corresponding measures of mnemonic precision and the number of encoded STM representations, respectively, in both change-detection and continuous-recall tasks. Altogether, using the individual-differences (Experiment 1) and experimental-dissociation (Experiments 2 and 3) approaches, the present study demonstrated the some-or-none nature of visual STM representations across recall and recognition.
Cheng, Yuhua; Chen, Kai; Bai, Libing; Yang, Jing
2014-02-01
Precise control of the grid-connected current is a challenge in photovoltaic inverter research. Traditional Proportional-Integral (PI) control cannot eliminate the steady-state error when tracking the sinusoidal signal from the grid, which results in a very high total harmonic distortion in the grid-connected current. A novel PI controller has been developed in this paper, in which the sinusoidal wave is discretized into an N-step input signal, with N determined by the control frequency, to eliminate the steady-state error of the system. The effect of the periodic error caused by the dead zone of the power switch and the conduction voltage drop can be avoided; the current tracking accuracy and current harmonic content can also be improved. Based on the proposed PI controller, a 700 W photovoltaic grid-connected inverter is developed and validated. The improvement has been demonstrated through experimental results.
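The sketch below shows, in schematic form, a discrete PI loop tracking a sinusoidal current reference that has been discretized into N steps per grid period, as described above. The gains, the control frequency and the trivial first-order "plant" are illustrative assumptions, not the controller or hardware from the paper.

```python
import math

def track_sine_with_pi(kp=0.8, ki=0.05, grid_freq=50.0, f_ctrl=10_000.0,
                       amplitude=10.0, cycles=5):
    """Discrete PI loop tracking a sinusoidal current reference.

    The reference is discretized into N = f_ctrl / grid_freq steps per
    period, so the controller sees a staircase approximation of the sine.
    The 'plant' is a trivial first-order lag standing in for the inverter
    output filter; all numerical values are illustrative.
    """
    n_steps = int(f_ctrl / grid_freq)          # N-step discretization per period
    dt = 1.0 / f_ctrl
    integral, output, errors = 0.0, 0.0, []
    for k in range(cycles * n_steps):
        step = k % n_steps
        reference = amplitude * math.sin(2 * math.pi * step / n_steps)
        error = reference - output
        integral += error * dt
        command = kp * error + ki * integral   # PI control law
        output += (command - output) * 0.3     # toy first-order plant response
        errors.append(abs(error))
    return max(errors[-n_steps:])              # worst error over the last cycle

print("steady-state tracking error:", track_sine_with_pi())
```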
OPTIMIZING THE PRECISION OF TOXICITY THRESHOLD ESTIMATION USING A TWO-STAGE EXPERIMENTAL DESIGN
An important consideration for risk assessment is the existence of a threshold, i.e., the highest toxicant dose where the response is not distinguishable from background. We have developed methodology for finding an experimental design that optimizes the precision of threshold mo...
Testing general relativity on accelerators
Kalaydzhyan, Tigran
2015-09-07
Within the general theory of relativity, the curvature of spacetime is related to the energy and momentum of the present matter and radiation. One of the more specific predictions of general relativity is the deflection of light and particle trajectories in the gravitational field of massive objects. Bending angles for electromagnetic waves and light in particular were measured with a high precision. However, the effect of gravity on relativistic massive particles was never studied experimentally. Here we propose and analyze experiments devoted to that purpose. We demonstrate a high sensitivity of the laser Compton scattering at high energy accelerators to the effects of gravity. The main observable – maximal energy of the scattered photons – would experience a significant shift in the ambient gravitational field even for otherwise negligible violation of the equivalence principle. In conclusion, we confirm predictions of general relativity for ultrarelativistic electrons of energy of tens of GeV at a current level of resolution and expect our work to be a starting point of further high-precision studies on current and future accelerators, such as PETRA, European XFEL and ILC.
A novel eddy current damper: theory and experiment
NASA Astrophysics Data System (ADS)
Ebrahimi, Babak; Khamesee, Mir Behrad; Golnaraghi, Farid
2009-04-01
A novel eddy current damper is developed and its damping characteristics are studied analytically and experimentally. The proposed eddy current damper consists of a conductor as an outer tube and, as a mover, an array of axially magnetized ring-shaped permanent magnets separated by iron pole pieces. The relative movement of the magnets and the conductor induces motional eddy currents in the conductor. Since the eddy currents produce a repulsive force that is proportional to the velocity of the conductor, the moving magnet and the conductor behave as a viscous damper. The eddy currents cause the vibration energy to dissipate through Joule heating in the conductor. An accurate analytical model of the system is obtained by applying electromagnetic theory to estimate the damping properties of the proposed eddy current damper. A prototype eddy current damper is fabricated, and experiments are carried out to verify the accuracy of the theoretical model. The experimental test bed consists of a one-degree-of-freedom vibration isolation system and is used for the frequency and transient time response analysis of the system. The eddy current damper model has a 0.1 m/s^2 (4.8%) RMS error in the estimation of the mass acceleration. A damping coefficient as high as 53 N s/m is achievable with the fabricated prototype. This novel eddy current damper is an oil-free, inexpensive damper that is applicable in various vibration isolation systems such as precision machinery, micro-mechanical suspension systems and structural vibration isolation.
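As a minimal illustration of the viscous behaviour described above (damping force proportional to velocity), the sketch below integrates a one-degree-of-freedom mass-spring system with an eddy-current damping term. The damping coefficient uses the 53 N s/m value quoted for the prototype, but the mass, stiffness and initial conditions are arbitrary placeholders.

```python
import math

def free_vibration_decay(m=2.0, k=5000.0, c=53.0, x0=0.005, t_end=1.0, dt=1e-4):
    """1-DOF mass-spring system with an eddy-current damper modelled as a
    viscous force F = -c * v (force proportional to velocity).

    c = 53 N s/m is the damping coefficient reported for the prototype;
    mass, stiffness and initial displacement are illustrative only."""
    x, v, samples = x0, 0.0, []
    steps = int(round(t_end / dt))
    sample_every = max(1, int(round(0.01 / dt)))   # record every 10 ms
    for i in range(steps):
        a = (-k * x - c * v) / m        # spring force + eddy-current drag
        v += a * dt                     # semi-implicit Euler integration
        x += v * dt
        if i % sample_every == 0:
            samples.append((i * dt, x))
    zeta = c / (2.0 * math.sqrt(k * m)) # resulting damping ratio
    return zeta, samples

zeta, envelope = free_vibration_decay()
print(f"damping ratio ~ {zeta:.3f}; first samples: {envelope[:3]}")
```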
Constraints and implications on Higgs FCNC couplings from precision measurement of Bs→μ+μ- decay
NASA Astrophysics Data System (ADS)
Chiang, Cheng-Wei; He, Xiao-Gang; Ye, Fang; Yuan, Xing-Bo
2017-08-01
We study constraints and implications of the recent LHCb measurement of B(Bs → μ+μ−) for tree-level Higgs-mediated flavor-changing neutral current (FCNC) interactions. Combined with experimental data on the Bs mass difference Δms and the h → μτ and h → τ+τ− decay branching ratios from the LHC, we find that the Higgs FCNC couplings are severely constrained. The allowed regions for Bs → μτ, ττ and h → sb decays are obtained. Current data allow large CP violation in the h → τ+τ− decay. Consequences of the Cheng-Sher ansatz for the Higgs-Yukawa couplings are discussed in some detail.
A phenomenological study of photon production in low energy neutrino nucleon scattering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jenkins, James P; Goldman, Terry J
2009-01-01
Low energy photon production is an important background to many current and future precision neutrino experiments. We present a phenomenological study of t-channel radiative corrections to neutral current neutrino nucleus scattering. After introducing the relevant processes and phenomenological coupling constants, we explore the derived energy and angular distributions as well as total cross-section predictions, along with their estimated uncertainties. This is supplemented throughout with comments on possible experimental signatures and implications. We conclude with a general discussion of the analysis in the context of complementary methodologies. This is based on a talk presented at the DPF 2009 meeting in Detroit, MI.
Guided mass spectrum labelling in atom probe tomography.
Haley, D; Choi, P; Raabe, D
2015-12-01
Atom probe tomography (APT) is a valuable near-atomic scale imaging technique, which yields mass spectrographic data. Experimental correctness can often pivot on the identification of peaks within a dataset; this is a manual process in which subjectivity and errors can arise. The limitations of manual procedures complicate APT experiments for the operator and are furthermore a barrier to technique standardisation. In this work we explore the capabilities of computer-guided ranging to aid the identification and analysis of mass spectra. We propose a fully robust algorithm for enumeration of the possible identities of detected peak positions, which assists labelling. Furthermore, a simple ranking scheme is developed to allow for evaluation of the likelihood of each possible identity being the correct assignment from the enumerated set. We demonstrate a simple, yet complete work-chain that allows for the conversion of mass spectra to fully identified APT spectra, with the goal of minimising identification errors and the inter-operator variance within APT experiments. This work-chain is compared to current procedures via experimental trials with different APT operators, to determine the relative effectiveness and precision of the two approaches. It is found that there is little loss of precision (and occasionally gain) when participants are given computer assistance. We find that in either case, inter-operator precision for ranging varies between 0 and 2 "significant figures" (2σ confidence in the first n digits of the reported value) when reporting compositions. Intra-operator precision is only weakly tested and found to vary between 1 and 3 significant figures, depending upon species composition levels. Finally, it is suggested that inconsistencies in inter-operator peak labelling may be the largest source of scatter when reporting composition data in APT.
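To make the idea of enumerating and ranking candidate peak identities concrete, the sketch below lists molecular-ion candidates whose mass-to-charge ratio falls near a detected peak position and orders them by mass error and isotopic abundance. The isotope table, tolerance and ranking rule are illustrative assumptions only; they are not the algorithm or scoring scheme used in the paper.

```python
from itertools import combinations_with_replacement

# tiny illustrative subset of isotope masses (Da) and natural abundances
ISOTOPES = {
    "Al-27": (26.9815, 1.000),
    "Fe-56": (55.9349, 0.917),
    "Cr-52": (51.9405, 0.838),
    "O-16":  (15.9949, 0.998),
}

def enumerate_identities(peak_mz, charges=(1, 2, 3), max_atoms=2, tol=0.05):
    """List molecular-ion candidates whose mass-to-charge ratio falls
    within `tol` Da of the detected peak position."""
    candidates = []
    for n in range(1, max_atoms + 1):
        for combo in combinations_with_replacement(ISOTOPES, n):
            mass = sum(ISOTOPES[i][0] for i in combo)
            abundance = 1.0
            for i in combo:
                abundance *= ISOTOPES[i][1]
            for z in charges:
                error = abs(mass / z - peak_mz)
                if error <= tol:
                    candidates.append((combo, z, error, abundance))
    # rank: closest match first, ties broken by isotopic abundance
    return sorted(candidates, key=lambda c: (c[2], -c[3]))

for combo, z, err, ab in enumerate_identities(27.0):
    print(combo, f"charge {z}+", f"error {err:.3f} Da", f"abundance {ab:.3f}")
```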
Soloperto, Alessandro; Palazzolo, Gemma; Tsushima, Hanako; Chieregatti, Evelina; Vassalli, Massimo; Difato, Francesco
2016-01-01
Current optical approaches are progressing far beyond the scope of monitoring the structure and function of living matter, and they are becoming widely recognized as extremely precise, minimally-invasive, contact-free handling tools. Laser manipulation of living tissues, single cells, or even single-molecules is becoming a well-established methodology, thus founding the onset of new experimental paradigms and research fields. Indeed, a tightly focused pulsed laser source permits complex tasks such as developing engineered bioscaffolds, applying calibrated forces, transfecting, stimulating, or even ablating single cells with subcellular precision, and operating intracellular surgical protocols at the level of single organelles. In the present review, we report the state of the art of laser manipulation in neuroscience, to inspire future applications of light-assisted tools in nano-neurosurgery.
The New Kilogram Definition and its Implications for High-Precision Mass Tolerance Classes.
Abbott, Patrick J; Kubarych, Zeina J
2013-01-01
The SI unit of mass, the kilogram, is the only one of the seven base units of the SI that is still defined by an artifact. It will be redefined in terms of the Planck constant as soon as certain experimental conditions, based on recommendations of the Consultative Committee for Mass and Related Quantities (CCM), are met. To better reflect reality, the redefinition will likely be accompanied by an increase in the uncertainties that National Metrology Institutes (NMIs) pass on to customers via artifact dissemination, which could have an impact on the reference standards used by secondary calibration laboratories if certain weight tolerances are adopted for use. This paper will compare the legal metrology requirements for precision mass calibration laboratories after the kilogram is redefined with the current capabilities based on the international prototype kilogram (IPK) realization of the kilogram.
High-performance execution of psychophysical tasks with complex visual stimuli in MATLAB
Asaad, Wael F.; Santhanam, Navaneethan; McClellan, Steven
2013-01-01
Behavioral, psychological, and physiological experiments often require the ability to present sensory stimuli, monitor and record subjects' responses, interface with a wide range of devices, and precisely control the timing of events within a behavioral task. Here, we describe our recent progress developing an accessible and full-featured software system for controlling such studies using the MATLAB environment. Compared with earlier reports on this software, key new features have been implemented to allow the presentation of more complex visual stimuli, increase temporal precision, and enhance user interaction. These features greatly improve the performance of the system and broaden its applicability to a wider range of possible experiments. This report describes these new features and improvements, current limitations, and quantifies the performance of the system in a real-world experimental setting. PMID:23034363
An extended set of yeast-based functional assays accurately identifies human disease mutations
Sun, Song; Yang, Fan; Tan, Guihong; Costanzo, Michael; Oughtred, Rose; Hirschman, Jodi; Theesfeld, Chandra L.; Bansal, Pritpal; Sahni, Nidhi; Yi, Song; Yu, Analyn; Tyagi, Tanya; Tie, Cathy; Hill, David E.; Vidal, Marc; Andrews, Brenda J.; Boone, Charles; Dolinski, Kara; Roth, Frederick P.
2016-01-01
We can now routinely identify coding variants within individual human genomes. A pressing challenge is to determine which variants disrupt the function of disease-associated genes. Both experimental and computational methods exist to predict pathogenicity of human genetic variation. However, a systematic performance comparison between them has been lacking. Therefore, we developed and exploited a panel of 26 yeast-based functional complementation assays to measure the impact of 179 variants (101 disease- and 78 non-disease-associated variants) from 22 human disease genes. Using the resulting reference standard, we show that experimental functional assays in a 1-billion-year diverged model organism can identify pathogenic alleles with significantly higher precision and specificity than current computational methods. PMID:26975778
Active-passive hybrid piezoelectric actuators for high-precision hard disk drive servo systems
NASA Astrophysics Data System (ADS)
Chan, Kwong Wah; Liao, Wei-Hsin
2006-03-01
Positioning precision is crucial to today's increasingly high-speed, high-capacity, high-data-density, and miniaturized hard disk drives (HDDs). The demand for higher-bandwidth servo systems that can quickly and precisely position the read/write head on a high track density becomes more pressing. Recently, the idea of applying dual-stage actuators to track servo systems has been studied. Push-pull piezoelectric actuated devices have been developed as micro actuators for fine and fast positioning, while the voice coil motor performs large but coarse seeking motions. However, the current dual-stage actuator design uses piezoelectric patches only, without passive damping. In this paper, we propose a dual-stage servo system using enhanced active-passive hybrid piezoelectric actuators. The proposed actuators will improve on the existing dual-stage actuators for higher precision and shock resistance, due to the incorporation of passive damping in the design. We aim to develop this hybrid servo system not only to increase the speed of track seeking but also to improve the precision of track-following servos in HDDs. New piezoelectrically actuated suspensions with passive damping have been designed and fabricated. In order to evaluate positioning and track-following performance for the dual-stage track servo systems, experimental efforts are carried out to implement the synthesized active-passive suspension structure with enhanced piezoelectric actuators using a composite nonlinear feedback controller.
Multilayer Piezoelectric Stack Actuator Characterization
NASA Technical Reports Server (NTRS)
Sherrit, Stewart; Jones, Christopher M.; Aldrich, Jack B.; Blodget, Chad; Bao, Xioaqi; Badescu, Mircea; Bar-Cohen, Yoseph
2008-01-01
Future NASA missions are increasingly seeking to use actuators for precision positioning to accuracies of the order of fractions of a nanometer. For this purpose, multilayer piezoelectric stacks are being considered as actuators for driving these precision mechanisms. In this study, sets of commercial PZT stacks were tested in various AC and DC conditions at both nominal and extreme temperatures and voltages. AC signal testing included impedance, capacitance and dielectric loss factor of each actuator as a function of the small-signal driving sinusoidal frequency and the ambient temperature. DC signal testing included leakage current and displacement as a function of the applied DC voltage. The applied DC voltage was increased to over eight times the manufacturers' specifications to investigate the correlation between leakage current and breakdown voltage. Resonance characterization as a function of temperature was done over a temperature range of -180 °C to +200 °C, which generally exceeded the manufacturers' specifications. In order to study the lifetime performance of these stacks, five actuators from one manufacturer were driven by a 60 volt, 2 kHz sine wave for ten billion cycles. The tests were performed using a LabVIEW-controlled automated data acquisition system that monitored the waveform of the stack electrical current and voltage. The measurements included the displacement, impedance, capacitance and leakage current, and the analysis of the experimental results will be presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pritychenko, B.
The precision of experimental double-beta (ββ) decay half-lives and their uncertainties is reanalyzed. The method of Benford's distributions has been applied to nuclear reaction, structure and decay data sets. The first-digit distribution trend for ββ-decay T1/2(2ν) values is consistent with large nuclear reaction and structure data sets and provides a validation of the experimental half-lives. A complementary analysis of the decay uncertainties indicates deficiencies due to the small size of the statistical samples and the incomplete collection of experimental information. Further experimental and theoretical efforts would lead toward more precise values of ββ-decay half-lives and nuclear matrix elements.
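As a minimal illustration of the first-digit (Benford) test mentioned above, the sketch below compares the observed leading-digit frequencies of a list of half-life values with Benford's law, P(d) = log10(1 + 1/d). The sample values are placeholders of a plausible order of magnitude, not the evaluated data sets used in the analysis.

```python
import math
from collections import Counter

def first_digit(x):
    """Leading significant digit of a positive number."""
    while x >= 10:
        x /= 10
    while x < 1:
        x *= 10
    return int(x)

def benford_comparison(values):
    """Compare the observed first-digit frequencies of a data set with
    Benford's law, P(d) = log10(1 + 1/d), and return both distributions."""
    counts = Counter(first_digit(v) for v in values if v > 0)
    n = sum(counts.values())
    observed = {d: counts.get(d, 0) / n for d in range(1, 10)}
    expected = {d: math.log10(1 + 1 / d) for d in range(1, 10)}
    return observed, expected

# placeholder data standing in for a compilation of half-lives (in years)
sample = [2.2e21, 7.1e20, 1.9e21, 9.2e19, 1.1e24, 3.5e22, 1.8e21, 6.8e20]
obs, exp = benford_comparison(sample)
for d in range(1, 10):
    print(d, f"observed {obs[d]:.2f}", f"Benford {exp[d]:.2f}")
```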
Quantitative Determination of Isotope Ratios from Experimental Isotopic Distributions
Kaur, Parminder; O’Connor, Peter B.
2008-01-01
Isotope variability due to natural processes provides important information for studying a variety of complex natural phenomena, from the origins of a particular sample to the traces of biochemical reaction mechanisms. These measurements require high-precision determination of the isotope ratios of the element involved. Isotope Ratio Mass Spectrometers (IRMS) are widely employed tools for such high-precision analysis, but they have some limitations. This work aims at overcoming the limitations inherent to IRMS by estimating the elemental isotopic abundance from the experimental isotopic distribution. In particular, a computational method has been derived which allows the calculation of 13C/12C ratios from whole isotopic distributions, given certain caveats, and these calculations are applied to several cases to demonstrate their utility. The limitations of the method in terms of the required number of ions and the S/N ratio are discussed. For high-precision estimates of the isotope ratios, this method requires very precise measurement of the experimental isotopic distribution abundances, free from any artifacts introduced by noise, sample heterogeneity, or other experimental sources. PMID:17263354
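The toy sketch below illustrates the core idea of recovering a 13C/12C ratio from an isotopic distribution: forward-model the isotopologue abundances of an n-carbon molecule with a binomial distribution and invert the M+1/M peak ratio. It assumes carbon is the only polyisotopic element and ignores noise; this is a simplification for illustration, not the paper's full computational method.

```python
from math import comb

def carbon_isotopologue_distribution(n_carbons, p13):
    """Relative abundances of the M, M+1, M+2, ... peaks for a molecule with
    n_carbons carbon atoms, assuming carbon is the only element contributing
    a significant heavy isotope (a simplification of the real problem)."""
    return [comb(n_carbons, k) * p13**k * (1 - p13)**(n_carbons - k)
            for k in range(n_carbons + 1)]

def estimate_ratio_from_m1(n_carbons, m1_over_m0):
    """Invert the binomial model using only the M+1/M peak-height ratio:
    (M+1)/M = n * p / (1 - p)  =>  p = r / (n + r)."""
    r = m1_over_m0
    p13 = r / (n_carbons + r)
    return p13 / (1 - p13)          # the 13C/12C ratio

# forward-simulate a molecule with 50 carbons, then invert the M+1/M ratio
true_ratio = 0.0112                 # roughly the natural 13C/12C ratio
p = true_ratio / (1 + true_ratio)
dist = carbon_isotopologue_distribution(50, p)
recovered = estimate_ratio_from_m1(50, dist[1] / dist[0])
print(f"true {true_ratio:.5f}, recovered {recovered:.5f}")
```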
On the precision of experimentally determined protein folding rates and φ-values
De Los Rios, Miguel A.; Muralidhara, B.K.; Wildes, David; Sosnick, Tobin R.; Marqusee, Susan; Wittung-Stafshede, Pernilla; Plaxco, Kevin W.; Ruczinski, Ingo
2006-01-01
φ-Values, a relatively direct probe of transition-state structure, are an important benchmark in both experimental and theoretical studies of protein folding. Recently, however, significant controversy has emerged regarding the reliability with which φ-values can be determined experimentally: Because φ is a ratio of differences between experimental observables, it is extremely sensitive to errors in those observations when the differences are small. Here we address this issue directly by performing blind, replicate measurements in three laboratories. By monitoring within- and between-laboratory variability, we have determined the precision with which folding rates and φ-values are measured using generally accepted laboratory practices and under conditions typical of our laboratories. We find that, unless the change in free energy associated with the probing mutation is quite large, the precision of φ-values is relatively poor when determined using rates extrapolated to the absence of denaturant. In contrast, when we employ rates estimated at nonzero denaturant concentrations or assume that the slopes of the chevron arms (mf and mu) are invariant upon mutation, the precision of our estimates of φ is significantly improved. Nevertheless, the reproducibility we thus obtain still compares poorly with the confidence intervals typically reported in the literature. This discrepancy appears to arise due to differences in how precision is calculated, the dependence of precision on the number of data points employed in defining a chevron, and interlaboratory sources of variability that may have been largely ignored in the prior literature. PMID:16501226
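For readers unfamiliar with the quantity, a common operational definition is φ = ΔΔG‡ / ΔΔG_eq, with ΔΔG‡ = −RT ln(kf,mut/kf,wt) obtained from folding rates. The sketch below applies first-order error propagation to this ratio to show how the uncertainty blows up when the stability change of the probing mutation is small, which is the effect discussed above. The rates, error magnitudes and sign conventions are illustrative assumptions, not values from the study.

```python
import math

R = 1.987e-3   # gas constant, kcal mol^-1 K^-1
T = 298.0      # temperature, K

def phi_value(kf_wt, kf_mut, ddG_eq):
    """phi-value: change in folding activation free energy divided by the
    change in equilibrium stability upon mutation (one common convention)."""
    ddG_ts = -R * T * math.log(kf_mut / kf_wt)   # ΔΔG‡ from folding rates
    return ddG_ts / ddG_eq

def phi_uncertainty(kf_wt, kf_mut, ddG_eq, rel_err_k=0.10, err_ddG=0.15):
    """First-order propagation of independent rate and stability errors into
    phi (illustrative: 10% on each rate, 0.15 kcal/mol on the stability)."""
    phi = phi_value(kf_wt, kf_mut, ddG_eq)
    ddG_ts = phi * ddG_eq
    err_ddG_ts = R * T * math.sqrt(2) * rel_err_k        # from the two rates
    err_phi = abs(phi) * math.sqrt((err_ddG_ts / ddG_ts) ** 2
                                   + (err_ddG / ddG_eq) ** 2)
    return phi, err_phi

# the same mutation probed against a large and a small stability change
for ddg in (3.0, 0.7):
    phi, err = phi_uncertainty(kf_wt=100.0, kf_mut=20.0, ddG_eq=ddg)
    print(f"ddG_eq = {ddg:.1f} kcal/mol -> phi = {phi:.2f} +/- {err:.2f}")
```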
NASA Astrophysics Data System (ADS)
Yuanyuan, Zhang
The stochastic branching model of multi-particle production in high-energy collisions has a theoretical basis in perturbative QCD, and it also successfully describes the experimental data over a wide energy range. However, over the years, little attention has been paid to branching models for supersymmetric (SUSY) particles. In this thesis, a stochastic branching model has been built to describe the evolution of pure supersymmetric particle jets. This model is a modified two-phase stochastic branching process, or more precisely a two-phase Simple Birth Process plus Poisson Process. The general case in which the jets contain both ordinary particle jets and supersymmetric particle jets has also been investigated. We obtain the multiplicity distribution of the general case, which contains a hypergeometric function in its expression. We apply this new multiplicity distribution to the current experimental data on pp collisions at center-of-mass energies √s = 0.9, 2.36, and 7 TeV. The fitting shows that supersymmetric particles have not participated in the branching at current collision energies.
NASA Astrophysics Data System (ADS)
Rocha, João Vicente; Camerini, Cesar; Pereira, Gabriela
2016-02-01
The 2015 World Federation of NDE Centers (WFNDEC) eddy current benchmark problem involves the inspection of two EDM notches placed at the edge of a conducting plate with a pancake coil that runs parallel to the plate's edge line. The experimental data consist of the impedance variation measured with a precision LCR bridge as an XY scanner moves the coil. The authors are pleased to present the numerical results obtained with commercial FEM packages (OPERA 3-D). Values of the electrical resistance and inductive reactance variation between the base material and the region around the notch are plotted as a function of the coil displacement over the plate. The calculations were made for frequencies of 1 kHz and 10 kHz, and the agreement between experimental and numerical results is excellent for all inspection conditions. Explanations are given of how the impedance is calculated, as well as the pros and cons of the presented methods.
High-precision x-ray spectroscopy of highly charged ions with microcalorimeters
NASA Astrophysics Data System (ADS)
Kraft-Bermuth, S.; Andrianov, V.; Bleile, A.; Echler, A.; Egelhof, P.; Grabitz, P.; Ilieva, S.; Kilbourne, C.; Kiselev, O.; McCammon, D.; Meier, J.
2013-09-01
The precise determination of the energy of the Lyman α1 and α2 lines in hydrogen-like heavy ions provides a sensitive test of quantum electrodynamics in very strong Coulomb fields. To improve the experimental precision, the new detector concept of microcalorimeters is now exploited for such measurements. These detectors consist of compensated-doped silicon thermistors and Pb or Sn absorbers to obtain high quantum efficiency in the energy range of 40-70 keV, where the Doppler-shifted Lyman lines are located. For the first time, a microcalorimeter was applied in an experiment to precisely determine the transition energy of the Lyman lines of lead ions at the experimental storage ring at GSI. The energy of the Ly α1 line, E(Ly-α1, 207Pb81+) = (77937 ± 12(stat) ± 25(syst)) eV, agrees within error bars with theoretical predictions. To improve the experimental precision, a new detector array with more pixels and better energy resolution was assembled and successfully applied in an experiment to determine the Lyman-α lines of gold ions, 197Au78+.
Field Balancing of Magnetically Levitated Rotors without Trial Weights
Fang, Jiancheng; Wang, Yingguang; Han, Bangcheng; Zheng, Shiqiang
2013-01-01
Unbalance in a magnetically levitated rotor (MLR) can cause undesirable synchronous vibrations and lead to the saturation of the magnetic actuator. Dynamic balancing is an important way to solve these problems. However, the traditional balancing methods, which use rotor displacement to estimate the rotor's unbalance and require several trial runs, are neither precise nor efficient. This paper presents a new balancing method for an MLR without trial weights. In this method, the rotor is forced to rotate around its geometric axis. The coil currents of the magnetic bearing, rather than the rotor displacement, are employed to calculate the correction masses. This method provides two benefits when the MLR's rotation axis coincides with the geometric axis: one is that the unbalanced centrifugal force/torque equals the synchronous magnetic force/torque, and the other is that the magnetic force is proportional to the control current. These make it possible to calculate the correction masses precisely from coil-current measurements with only a single start-up. An unbalance compensation control (UCC) method, using a general band-pass filter (GPF) to make the MLR spin around its geometric axis, is also discussed. Experimental results show that the novel balancing method can remove more than 92.7% of the rotor unbalance and that a balancing accuracy of 0.024 g mm kg−1 is achieved.
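The sketch below illustrates, in highly simplified form, the kind of calculation implied above: extract the synchronous (once-per-revolution) component of a bearing coil current and convert it into a correction mass through the bearing's current-to-force constant. The force constant, correction radius and toy signal are hypothetical placeholders; a real system would use experimentally calibrated influence coefficients.

```python
import cmath
import math

def synchronous_component(current_samples, rotation_freq, sample_rate):
    """Complex amplitude of the once-per-revolution (synchronous) component
    of a coil-current signal, extracted by single-frequency correlation."""
    n = len(current_samples)
    acc = 0.0 + 0.0j
    for k, i_k in enumerate(current_samples):
        acc += i_k * cmath.exp(-2j * math.pi * rotation_freq * k / sample_rate)
    return 2.0 * acc / n            # amplitude and phase of the synchronous current

def correction_mass(sync_current, ki, radius, omega):
    """Convert the synchronous current into an equivalent unbalance and the
    correction mass to be placed (or removed) at the given radius.

    ki     -- current-to-force constant of the magnetic bearing [N/A]
    radius -- radius at which material is added/removed [m]
    omega  -- rotation speed [rad/s]
    All values illustrative; a real system is calibrated experimentally."""
    sync_force = ki * sync_current                 # synchronous magnetic force
    unbalance = sync_force / omega**2              # m*e, complex (carries the phase)
    return abs(unbalance) / radius, math.degrees(cmath.phase(unbalance))

# toy signal: 120 Hz rotation, synchronous current of 0.2 A at 30 degrees
fs, f_rot = 10_000.0, 120.0
samples = [0.2 * math.cos(2 * math.pi * f_rot * k / fs + math.radians(30))
           for k in range(2000)]
i_sync = synchronous_component(samples, f_rot, fs)
print(correction_mass(i_sync, ki=50.0, radius=0.04, omega=2 * math.pi * f_rot))
```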
Scanning tunneling microscopy of atomically precise graphene nanoribbons exfoliated onto H:Si(100)
NASA Astrophysics Data System (ADS)
Radocea, Adrian; Mehdi Pour, Mohammad; Vo, Timothy; Shekhirev, Mikhail; Sinitskii, Alexander; Lyding, Joseph
Atomically precise graphene nanoribbons (GNRs) are promising materials for next-generation transistors due to their well-controlled bandgaps and the high thermal conductivity of graphene. The solution synthesis of graphene nanoribbons offers a pathway towards scalable manufacturing. While scanning tunneling microscopy (STM) can access the size scales required for characterization, solvent residue increases experimental difficulty and precludes band-gap determination via scanning tunneling spectroscopy (STS). Our work addresses this challenge through a dry contact transfer method that cleanly transfers solution-synthesized GNRs onto H:Si(100) under UHV using a fiberglass applicator. The semiconducting silicon surface avoids problems with image-charge screening, enabling intrinsic bandgap measurements. We characterize the nanoribbons using STM and STS. For chevron GNRs, we find a 1.6 eV bandgap, in agreement with computational modeling, and map the electronic structure spatially with detailed spectral lines and current imaging tunneling spectroscopy. Mapping the electronic structure of graphene nanoribbons is an important step towards taking advantage of the ability to form atomically precise nanoribbons and finely tune their properties.
NASA Astrophysics Data System (ADS)
Paganini, Michela; de Oliveira, Luke; Nachman, Benjamin
2018-01-01
Physicists at the Large Hadron Collider (LHC) rely on detailed simulations of particle collisions to build expectations of what experimental data may look like under different theoretical modeling assumptions. Petabytes of simulated data are needed to develop analysis techniques, though they are expensive to generate using existing algorithms and computing resources. The modeling of detectors and the precise description of particle cascades as they interact with the material in the calorimeter are the most computationally demanding steps in the simulation pipeline. We therefore introduce a deep neural network-based generative model to enable high-fidelity, fast, electromagnetic calorimeter simulation. There are still challenges for achieving precision across the entire phase space, but our current solution can reproduce a variety of particle shower properties while achieving speedup factors of up to 100 000 × . This opens the door to a new era of fast simulation that could save significant computing time and disk space, while extending the reach of physics searches and precision measurements at the LHC and beyond.
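As a schematic of the generative approach described above, the sketch below defines a minimal conditional GAN pair in PyTorch: a generator mapping (noise, incident energy) to a flattened shower image, and a discriminator scoring (shower, energy) pairs. The layer widths, latent size and the flattened 3×96×12 shower shape are placeholder assumptions, not the architecture from the paper, and no training loop is shown.

```python
import torch
import torch.nn as nn

# Minimal conditional-GAN skeleton for calorimeter shower images.
# The flattened 3*96*12 shower shape, latent size and layer widths are
# placeholders, not the architecture described in the paper.
LATENT, COND, SHOWER = 64, 1, 3 * 96 * 12

generator = nn.Sequential(                 # (noise, incident energy) -> shower
    nn.Linear(LATENT + COND, 256), nn.ReLU(),
    nn.Linear(256, 512), nn.ReLU(),
    nn.Linear(512, SHOWER), nn.ReLU(),     # cell energies are non-negative
)

discriminator = nn.Sequential(             # (shower, incident energy) -> real/fake score
    nn.Linear(SHOWER + COND, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

def generate_showers(n, energies):
    """Sample n fake showers conditioned on the incident particle energy."""
    z = torch.randn(n, LATENT)
    return generator(torch.cat([z, energies], dim=1))

# one (untrained) forward pass: 4 showers for 10 GeV incident particles
energies = torch.full((4, 1), 10.0)
fake = generate_showers(4, energies)
scores = discriminator(torch.cat([fake, energies], dim=1))
print(fake.shape, scores.shape)
```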
High-precision and low-cost vibration generator for low-frequency calibration system
NASA Astrophysics Data System (ADS)
Li, Rui-Jun; Lei, Ying-Jun; Zhang, Lian-Sheng; Chang, Zhen-Xin; Fan, Kuang-Chao; Cheng, Zhen-Ying; Hu, Peng-Hao
2018-03-01
Low-frequency vibration is one of the harmful factors that affect the accuracy of micro-/nano-measuring machines, because its amplitude is very small and it is very difficult to avoid. In this paper, a low-cost and high-precision vibration generator was developed to calibrate a self-designed optical accelerometer for detecting low-frequency vibration. A piezoelectric actuator is used as the vibration exciter, a leaf spring made of beryllium copper is used as the elastic component, and a high-resolution, low-thermal-drift eddy current sensor is applied to investigate the vibration generator's performance. Experimental results demonstrate that the vibration generator can achieve a steady output displacement over a frequency range from 0.6 Hz to 50 Hz, an analytical displacement resolution of 3.1 nm, and an acceleration range from 3.72 mm/s^2 to 1935.41 mm/s^2 with a relative standard deviation of less than 1.79%. The effectiveness of the high-precision and low-cost vibration generator was verified by calibrating our optical accelerometer.
Flexible coordinate measurement system based on robot for industries
NASA Astrophysics Data System (ADS)
Guo, Yin; Yang, Xue-you; Liu, Chang-jie; Ye, Sheng-hua
2010-10-01
A robot-based flexible coordinate measurement system applicable to multiple vehicle models is designed to meet the needs of online measurement for current mainstream mixed body-in-white (BIW) production lines. Moderate precision, good flexibility and the absence of blind angles are the benefits of this measurement system. For this measurement system, a monocular structured-light vision sensor has been designed which can measure not only edges but also planes, apertures and other features. An effective method for fast on-site calibration of the whole system using a laser tracker has also been proposed, which unifies the various coordinate systems used in industrial fields. The experimental results show a satisfactory precision of ±0.30 mm for this measurement system, which is sufficient for the needs of online measurement of the body-in-white (BIW) in the auto production line. The system achieves real-time detection and monitoring of the whole process of the car body's manufacture, and provides complete data support for correcting manufacturing errors promptly and accurately and for improving manufacturing precision.
Precision Voltage Referencing Techniques in MOS Technology.
NASA Astrophysics Data System (ADS)
Song, Bang-Sup
With the increasing complexity of functions on a single MOS chip, precision analog circuits implemented in the same technology are in great demand so that they can be integrated together with digital circuits. The future development of MOS data acquisition systems will require precision on-chip MOS voltage references. This dissertation probes the two most promising configurations of on-chip voltage references in both NMOS and CMOS technologies. In NMOS, the effect of ion implantation on the temperature behavior of MOS devices is investigated to identify the fundamental factors limiting a threshold-voltage difference as an NMOS voltage source. For this kind of voltage reference, a temperature stability on the order of 20 ppm/°C is achievable with a shallow single-threshold implant and low-current, high-body-bias operation. In CMOS, a monolithic prototype bandgap reference is designed, fabricated and tested which embodies curvature compensation and exhibits a minimized sensitivity to process parameter variation. Experimental results imply that an average temperature stability on the order of 10 ppm/°C, with a production spread of less than 10 ppm/°C, is feasible over the commercial temperature range.
Micromachined probes for laboratory plasmas
NASA Astrophysics Data System (ADS)
Chiang, Franklin Changta
As we begin to find more applications for plasmas in our everyday lives, the ability to characterize and understand their inner workings becomes increasingly important. Much of our current understanding of plasma physics comes from investigations conducted in diffuse, outer space plasmas where experimenters have no control over the environment or experimental conditions and one measures interesting phenomena only by chance when the spacecraft or satellite passes through them. Ideally, experiments should be performed in a controlled environment, where plasma events can be deliberately and reliably created when wanted and probes placed precisely within the plasma. Unfortunately, often due to their size, probes used in outer space are unsuitable for use in high-density laboratory plasmas, and constructing probes that can be used in terrestrial plasmas is a considerable challenge. This dissertation presents the development, implementation, and experimental results of three micromachined probes capable of measuring voltage and electric field, ion energies, and changing magnetic fields (B-dot) in laboratory plasmas.
A precise few-nucleon size difference by isotope shift measurements of helium
NASA Astrophysics Data System (ADS)
Rezaeian, Nima Hassan
We perform high-precision measurements of an isotope shift between the two stable isotopes of helium. We use laser excitation of the 2³S₁ – 2³P₀ transition at 1083 nm in a metastable beam of 3He and 4He atoms. A newly developed tunable laser frequency selector, along with our previous electro-optic frequency modulation technique, provides extremely reliable, adaptable, and precise frequency and intensity control. The intensity control contributes negligibly to the overall experimental uncertainty by selecting (t_selection < 50) and stabilizing the intensity of the required sideband and eliminating (~10^-5) the unwanted frequencies generated during the modulation of the 1083 nm laser carrier frequency. The selection technique uses a MEMS-based fiber switch (t_switch ≈ 10 ms) and several temperature-stabilized narrow-band (~3 GHz) fiber gratings. A fiber-based optical circulator and an inline fiber amplifier provide the desired isolation and the net gain for the selected frequency. Rapid (~2 s) alternating measurements of the 2³S₁ – 2³P₀ interval for both species of helium are achieved with a custom fiber laser for simultaneous optical pumping. A servo-controlled retro-reflected laser beam eliminates residual Doppler effects during the isotope shift measurement. An improved detection design and software control make negligible the subtle potential biases in the data collection. With these advances, combined with new internal and external consistency checks, we are able to obtain results consistent with the best previous measurements, but with substantially improved precision. Our measurement of the 2³S₁ – 2³P₀ isotope shift between 3He and 4He is 31 097 535.2(5) kHz. The most recent theoretical calculation combined with this measurement yields a new determination of the nuclear size difference between 3He and 4He: Δrc = 0.2926(1)exp(8)th(52)exp fm, with a precision of less than a part in 10^4 coming from the experimental uncertainty (first parenthesis), and a part in 10^3 coming from theory. This value is consistent with the electron scattering measurement, but a factor of 10 more precise. It is inconsistent (4σ) with a recent isotope shift measurement on another helium transition (2¹S₀ – 2³S₁). Comparisons with ongoing muonic helium measurements may provide clues to the origin of what is currently called the proton puzzle: electronic and muonic measurements of the proton size do not agree. In the future, the experimental improvements described here can be used for higher-precision tests of atomic theory and quantum electrodynamics, as well as an important atomic physics source of the fine structure constant.
Progress on the Ohio State University Get Away Special G-0318: DEAP
NASA Technical Reports Server (NTRS)
Sarigul, Nesrin; Mortensen, A. J.
1987-01-01
The Get Away Special program became a major presence at the Ohio State University with the award of GAS-0318 by the American Institute of Aeronautics and Astronautics. Some twenty engineering researchers and students are currently working on the project. The GAS-0318 payload is an experimental manufacturing process known as the Directional Electrostatic Accretion Process (DEAP). This high-precision, portable microgravity manufacturing method will revolutionize the manufacture and repair of spacecraft and space structures. The cost effectiveness of this process will be invaluable to future space development and exploration.
Inductive detection of the free surface of liquid metals
NASA Astrophysics Data System (ADS)
Zürner, Till; Ratajczak, Matthias; Wondrak, Thomas; Eckert, Sven
2017-11-01
A novel measurement system to determine the surface position and topology of liquid metals is presented. It is based on the induction of eddy currents by a time-harmonic magnetic field and the subsequent measurement of the resulting secondary magnetic field using gradiometric induction coils. The system is validated experimentally for static and dynamic surfaces of the low-melting liquid metal alloy gallium-indium-tin in a narrow vessel. It is shown that a precision below 1 mm and a time resolution of at least 20 Hz can be achieved.
How to Monitor the Breathing of Laboratory Rodents: A Review of the Current Methods.
Grimaud, Julien; Murthy, Venkatesh N
2018-05-23
Accurately measuring respiration in laboratory rodents is essential for many fields of research, including olfactory neuroscience, social behavior, learning and memory, and respiratory physiology. However, choosing the right technique to monitor respiration can be tricky, given the many criteria to take into account: reliability, precision, and invasiveness, to name a few. This review aims to assist experimenters in choosing the technique that will best fit their needs, by surveying the available tools, discussing their strengths and weaknesses, and offering suggestions for future improvements.
Precise positioning method for multi-process connecting based on binocular vision
NASA Astrophysics Data System (ADS)
Liu, Wei; Ding, Lichao; Zhao, Kai; Li, Xiao; Wang, Ling; Jia, Zhenyuan
2016-01-01
With the rapid development of aviation and aerospace, the demand for metal-coated parts such as antenna reflectors, eddy-current sensors and signal transmitters is increasingly urgent. Such parts, with varied feature dimensions, complex three-dimensional structures, and high geometric accuracy, are generally fabricated by a combination of different manufacturing technologies. However, it is difficult to ensure the machining precision because of the connection error between the different processing methods. Therefore, a precise positioning method based on binocular micro stereo vision is proposed in this paper. Firstly, a novel and efficient camera calibration method for the stereoscopic microscope is presented to solve the problems of a narrow field of view, small depth of focus and numerous nonlinear distortions. Secondly, the extraction algorithms for regular and free-form curves are given, and the spatial position relationship between the micro vision system and the machining system is determined accurately. Thirdly, a precise positioning system based on micro stereo vision is set up and then embedded in a CNC machining experiment platform. Finally, a verification experiment of the positioning accuracy was conducted, and the experimental results indicate that the average errors of the proposed method in the X and Y directions are 2.250 μm and 1.777 μm, respectively.
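As background for the binocular reconstruction step, the sketch below shows the standard linear (DLT) triangulation of a 3D point from its projections in two calibrated cameras. The intrinsics and the 10 mm baseline are arbitrary illustrative numbers, not the calibrated parameters of the stereomicroscope described in the paper.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one 3D point from two calibrated views.

    P1, P2   : 3x4 camera projection matrices
    uv1, uv2 : pixel coordinates (u, v) of the same point in each image
    """
    u1, v1 = uv1
    u2, v2 = uv2
    A = np.vstack([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)      # least-squares solution of A X = 0
    X = vt[-1]
    return X[:3] / X[3]              # dehomogenize

# illustrative rig: identical intrinsics, second camera shifted 10 mm along x
K = np.array([[2000.0, 0, 640], [0, 2000.0, 480], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-10.0], [0.0], [0.0]])])

X_true = np.array([3.0, -2.0, 100.0])        # point 100 mm in front of the rig
h = np.append(X_true, 1.0)
uv1 = (P1 @ h)[:2] / (P1 @ h)[2]
uv2 = (P2 @ h)[:2] / (P2 @ h)[2]
print(triangulate(P1, P2, uv1, uv2))         # ~ [3, -2, 100]
```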
Fabrication of brittle materials -- current status
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scattergood, R.O.
The research initiatives in the area of precision fabrication will be continued in the upcoming year. Three students, T. Bifano (PhD), P. Blake (PhD) and E. Smith (MS), finished their research programs in the last year. Sections 13 and 14 will summarize the essential results from the work of the Materials Engineering students Blake and Smith. Further details will be presented in forthcoming publications that are now in preparation. The results from Bifano's thesis have been published in adequate detail and need not be summarized further. Three new students, S. Blackley (MS), H. Paul (PhD), and S. Smith (PhD), have joined the program and will continue the research efforts in precision fabrication. The programs for these students will be outlined in Sections 15 and 16. Because of the success of the earlier work in establishing new process models and experimental techniques for the study of diamond turning and diamond grinding, the new programs will, in part, build upon the earlier work. This is especially true for investigations concerned with brittle materials. The basic understanding of the material response of nominally brittle materials during machining or grinding operations remains a challenge. The precision fabrication of brittle materials will continue as an area of emphasis for the Precision Engineering Center.
NASA Astrophysics Data System (ADS)
Sang, Xiahan
Intermetallics offer unique property combinations often superior to those of more conventional solid-solution alloys of identical composition. An understanding of bonding in intermetallics would greatly accelerate the development of intermetallics for advanced and high-performance engineering applications. The tetragonal L10-ordered intermetallics TiAl, FePd and FePt are used as model systems: their electron densities are measured experimentally using the quantitative convergent beam electron diffraction (QCBED) method, and details of the 3d-4d (FePd) and 3d-5d (FePt) electron interactions are then compared to elucidate their role in the properties of the respective ferromagnetic L10-ordered intermetallics FePd and FePt. A new multi-beam off-zone-axis QCBED method has been developed to increase the sensitivity of CBED patterns to changes in the structure factors and the anisotropic Debye-Waller (DW) factors. Unprecedented accuracy and precision in structure and DW factor measurements have been achieved by acquiring CBED patterns using a beam-sample geometry that ensures strong dynamical interaction between the fast electrons and the periodic potential in the crystalline samples. This experimental method has been successfully applied to diamond cubic Si and the chemically ordered B2 cubic NiAl and tetragonal L10-ordered TiAl and FePd. The accurate and precise experimental DW and structure factors for L10 TiAl and FePd allow a direct evaluation of computer calculations using current state-of-the-art density functional theory (DFT) based electronic structure modeling. The experimental electron density difference map of L10 TiAl shows that the DFT calculations describe bonding to sufficient accuracy for the s- and p-electron interactions, e.g. in the Al layer. However, it indicates significant quantitative differences from the experimental measurements for the 3d-3d interactions of the Ti atoms, e.g. in the Ti layers. The DFT calculations for L10 FePd also show that the current DFT approximations insufficiently describe the interactions between Fe-Fe (3d-3d), Fe-Pd (3d-4d) and Pd-Pd (4d-4d) electrons, which indicates the necessity of evaluating the applicability of different DFT approximations, and also provides experimental data for the development of new DFT approximations that better describe transition-metal-based intermetallic systems.
Measuring The Neutron Lifetime to One Second Using in Beam Techniques
NASA Astrophysics Data System (ADS)
Mulholland, Jonathan; NIST In Beam Lifetime Collaboration
2013-10-01
The decay of the free neutron is the simplest nuclear beta decay and is the prototype for charged current semi-leptonic weak interactions. A precise value for the neutron lifetime is required for consistency tests of the Standard Model and is an essential parameter in the theory of Big Bang Nucleosynthesis. A new measurement of the neutron lifetime using the in-beam method is planned at the National Institute of Standards and Technology Center for Neutron Research. The systematic effects associated with the in-beam method are markedly different than those found in storage experiments utilizing ultracold neutrons. Experimental improvements, specifically recent advances in the determination of absolute neutron fluence, should permit an overall uncertainty of 1 second on the neutron lifetime. The technical improvements in the in-beam technique, and the path toward improving the precision of the new measurement will be discussed.
BDS/GPS Dual Systems Positioning Based on the Modified SR-UKF Algorithm
Kong, JaeHyok; Mao, Xuchu; Li, Shaoyuan
2016-01-01
The Global Navigation Satellite System can provide all-day, three-dimensional position and speed information. Currently, using a single navigation system alone cannot satisfy the requirements for reliability and integrity. In order to improve the reliability and stability of satellite navigation, a positioning method combining the BDS and GPS navigation systems is presented, and the measurement model and the state model are described. Furthermore, the modified square-root Unscented Kalman Filter (SR-UKF) algorithm is employed under BDS and GPS conditions, and analyses of single-system and multi-system positioning have been carried out. The experimental results are compared with traditional estimation results, which shows that the proposed method can perform highly precise positioning. Especially when the number of satellites is not adequate, the proposed method combines the BDS and GPS systems to achieve higher positioning precision. PMID:27153068
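For orientation, the sketch below shows the sigma-point (unscented transform) step that sits at the core of any UKF variant, propagating a receiver state through a nonlinear pseudorange measurement model. It uses a plain Cholesky factor and textbook weights with illustrative parameters; it is not the modified SR-UKF of the paper, which propagates the square-root factor directly and fuses BDS and GPS measurements.

```python
import numpy as np

def sigma_points(mean, cov, alpha=1.0, beta=2.0, kappa=0.0):
    """Generate 2n+1 sigma points and weights for the unscented transform.
    A square-root UKF would propagate the Cholesky factor directly; here we
    simply take S = chol((n+lambda) * cov) once, for clarity."""
    n = len(mean)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)
    pts = [mean] + [mean + S[:, i] for i in range(n)] \
                 + [mean - S[:, i] for i in range(n)]
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1 - alpha**2 + beta)
    return np.array(pts), wm, wc

def pseudorange(state, sat_pos):
    """Nonlinear measurement model: geometric range plus receiver clock bias."""
    return np.linalg.norm(state[:3] - sat_pos) + state[3]

# receiver state [x, y, z, clock_bias] with an illustrative prior (units: m)
mean = np.array([6.37e6, 1.0e5, 5.0e4, 30.0])
cov = np.diag([100.0, 100.0, 100.0, 25.0])
sat = np.array([1.5e7, 1.0e7, 2.0e7])

pts, wm, wc = sigma_points(mean, cov)
z_pts = np.array([pseudorange(p, sat) for p in pts])
z_mean = wm @ z_pts
z_var = wc @ (z_pts - z_mean) ** 2
print(f"predicted pseudorange {z_mean:.1f} m, std {np.sqrt(z_var):.2f} m")
```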
Multiplexed protein measurement: technologies and applications of protein and antibody arrays
Kingsmore, Stephen F.
2006-01-01
The ability to measure the abundance of many proteins precisely and simultaneously in experimental samples is an important, recent advance for static and dynamic, as well as descriptive and predictive, biological research. The value of multiplexed protein measurement is being established in applications such as comprehensive proteomic surveys, studies of protein networks and pathways, validation of genomic discoveries and clinical biomarker development. As standards do not yet exist that bridge all of these applications, the current recommended best practice for validation of results is to approach study design in an iterative process and to integrate data from several measurement technologies. This review describes current and emerging multiplexed protein measurement technologies and their applications, and discusses the remaining challenges in this field. PMID:16582876
Precision half-life measurement of 11C: The most precise mirror transition F t value
NASA Astrophysics Data System (ADS)
Valverde, A. A.; Brodeur, M.; Ahn, T.; Allen, J.; Bardayan, D. W.; Becchetti, F. D.; Blankstein, D.; Brown, G.; Burdette, D. P.; Frentz, B.; Gilardy, G.; Hall, M. R.; King, S.; Kolata, J. J.; Long, J.; Macon, K. T.; Nelson, A.; O'Malley, P. D.; Skulski, M.; Strauss, S. Y.; Vande Kolk, B.
2018-03-01
Background: The precise determination of the Ft value in T = 1/2 mixed mirror decays is an important avenue for testing the standard model of the electroweak interaction through the determination of Vud in nuclear β decays. 11C is an interesting case, as its low mass and small QEC value make it particularly sensitive to violations of the conserved vector current hypothesis. The present dominant source of uncertainty in the 11C Ft value is the half-life. Purpose: A high-precision measurement of the 11C half-life was performed, and a new world-average half-life was calculated. Method: 11C was created by transfer reactions and separated using the TwinSol facility at the Nuclear Science Laboratory at the University of Notre Dame. It was then implanted into a tantalum foil, and β counting was used to determine the half-life. Results: The new half-life, t1/2 = 1220.27(26) s, is consistent with the previous values but significantly more precise. A new world average was calculated, t1/2(world) = 1220.41(32) s, and a new estimate for the Gamow-Teller to Fermi mixing ratio ρ is presented along with standard-model correlation parameters. Conclusions: The new 11C world-average half-life allows the calculation of an Ft(mirror) value that is now the most precise value for all superallowed mixed mirror transitions. This gives a strong impetus for an experimental determination of ρ, to allow for the determination of Vud from this decay.
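The world average quoted above is the usual inverse-variance weighted mean, with the uncertainty inflated by a scale (Birge) factor when the individual measurements disagree. The sketch below implements that standard recipe; the input half-lives are placeholders, not the actual measurements entering the 11C average.

```python
import math

def world_average(values, errors):
    """Inverse-variance weighted mean with the uncertainty inflated by the
    Birge (scale) factor sqrt(chi2/ndf) when the inputs disagree."""
    weights = [1.0 / e**2 for e in errors]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    err = math.sqrt(1.0 / sum(weights))
    chi2 = sum(((v - mean) / e) ** 2 for v, e in zip(values, errors))
    ndf = len(values) - 1
    scale = max(1.0, math.sqrt(chi2 / ndf)) if ndf > 0 else 1.0
    return mean, err * scale, scale

# placeholder half-life measurements in seconds (not the actual 11C data set)
t_half = [1220.1, 1220.9, 1220.3]
sigma = [0.6, 0.5, 0.4]
print(world_average(t_half, sigma))
```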
Fundamental limits of scintillation detector timing precision
NASA Astrophysics Data System (ADS)
Derenzo, Stephen E.; Choong, Woon-Seng; Moses, William W.
2014-07-01
In this paper we review the primary factors that affect the timing precision of a scintillation detector. Monte Carlo calculations were performed to explore the dependence of the timing precision on the number of photoelectrons, the scintillator decay and rise times, the depth of interaction uncertainty, the time dispersion of the optical photons (modeled as an exponential decay), the photodetector rise time and transit time jitter, the leading-edge trigger level, and electronic noise. The Monte Carlo code was used to estimate the practical limits on the timing precision for an energy deposition of 511 keV in 3 mm × 3 mm × 30 mm Lu2SiO5:Ce and LaBr3:Ce crystals. The calculated timing precisions are consistent with the best experimental literature values. We then calculated the timing precision for 820 cases that sampled scintillator rise times from 0 to 1.0 ns, photon dispersion times from 0 to 0.2 ns, photodetector time jitters from 0 to 0.5 ns fwhm, and A from 10 to 10 000 photoelectrons per ns decay time. Since the timing precision R was found to depend on A^(-1/2) more than any other factor, we tabulated the parameter B, where R = B·A^(-1/2). An empirical analytical formula was found that fits the tabulated values of B with an rms deviation of 2.2% of the value of B. The theoretical lower bound of the timing precision was calculated for the example of 0.5 ns rise time, 0.1 ns photon dispersion, and 0.2 ns fwhm photodetector time jitter. The lower bound was at most 15% lower than leading-edge timing discrimination for A from 10 to 10 000 photoelectrons/ns. A timing precision of 8 ps fwhm should be possible for an energy deposition of 511 keV using currently available photodetectors if a theoretically possible scintillator were developed that could produce 10 000 photoelectrons/ns.
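The sketch below is a much-reduced version of the kind of Monte Carlo described above: photoelectron arrival times are drawn from a rise/decay scintillation profile, smeared by photodetector jitter, and a fixed-order photoelectron acts as a crude stand-in for a leading-edge trigger. The pulse-shape parameters, jitter and trigger order are illustrative assumptions, and the simulation omits most of the effects treated in the paper (depth of interaction, optical dispersion, electronic noise).

```python
import random
import statistics

def trigger_time(n_pe, decay=40.0, rise=0.5, jitter_fwhm=0.2, trigger_order=3):
    """One Monte Carlo event: photoelectron arrival times drawn from a
    rise/decay scintillation profile (sum of two exponential variates gives
    the usual bi-exponential pulse shape), smeared by photodetector jitter,
    and triggered on the `trigger_order`-th photoelectron as a simple
    stand-in for a leading-edge discriminator. Times in ns."""
    sigma = jitter_fwhm / 2.355
    times = [random.expovariate(1.0 / decay) + random.expovariate(1.0 / rise)
             + random.gauss(0.0, sigma)
             for _ in range(n_pe)]
    return sorted(times)[trigger_order - 1]

def timing_fwhm(n_pe, n_events=500):
    """FWHM (assuming a roughly Gaussian spread) of the trigger time."""
    t = [trigger_time(n_pe) for _ in range(n_events)]
    return 2.355 * statistics.pstdev(t)

for n_pe in (100, 1000, 4000):
    print(n_pe, "photoelectrons ->", round(timing_fwhm(n_pe) * 1000), "ps FWHM")
```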
A low-voltage sense amplifier with two-stage operational amplifier clamping for flash memory
NASA Astrophysics Data System (ADS)
Guo, Jiarong
2017-04-01
A low-voltage sense amplifier with a reference current generator utilizing a two-stage operational amplifier clamp structure for flash memory is presented in this paper, capable of operating with a minimum supply voltage of 1 V. A new reference current generation circuit, composed of a reference cell and a two-stage operational amplifier clamping the drain pole of the reference cell, is used to generate the reference current, which avoids the threshold limitation caused by the current-mirror transistor in the traditional sense amplifier. A novel reference voltage generation circuit using a dummy bit-line structure without pull-down current is also adopted, which not only widens the sense window, enhancing read precision, but also reduces power consumption. The sense amplifier was implemented in a flash memory fabricated in a 90 nm flash technology. Experimental results show the access time is 14.7 ns with a 1.2 V supply at the slow corner and 125 °C. Project supported by the National Natural Science Foundation of China (No. 61376028).
Multilayer Semiconductor Charged-Particle Spectrometers for Accelerator Experiments
NASA Astrophysics Data System (ADS)
Gurov, Yu. B.; Lapushkin, S. V.; Sandukovsky, V. G.; Chernyshev, B. A.
2018-03-01
The current state of studies in the field of development of multilayer semiconductor systems (semiconductor detector (SCD) telescopes), which allow the energy to be precisely measured within a large dynamic range (from a few to a few hundred MeV) and the particles to be identified in a wide mass range (from pions to multiply charged nuclear fragments), is presented. The techniques for manufacturing the SCD telescopes from silicon and high-purity germanium are described. The issues of measuring characteristics of the constructed detectors and their impact on the energy resolution of the SCD telescopes and on the quality of the experimental data are considered. Much attention is given to the use of the constructed semiconductor devices in experimental studies at accelerators of PNPI (Gatchina), LANL (Los Alamos) and CELSIUS (Uppsala).
Precision and manufacturing at the Lawrence Livermore National Laboratory
NASA Technical Reports Server (NTRS)
Saito, Theodore T.; Wasley, Richard J.; Stowers, Irving F.; Donaldson, Robert R.; Thompson, Daniel C.
1994-01-01
Precision Engineering is one of the Lawrence Livermore National Laboratory's core strengths. This paper discusses the past and current technology transfer efforts of LLNL's Precision Engineering program and the Livermore Center for Advanced Manufacturing and Productivity (LCAMP). More than a year ago, the Precision Machine Commercialization project embodied several successful methods of transferring high technology from the National Laboratories to industry. LCAMP has already demonstrated successful technology transfer and is involved in a broad spectrum of current programs. In addition, this paper discusses other technologies ripe for future transition, including the Large Optics Diamond Turning Machine.
Precision and manufacturing at the Lawrence Livermore National Laboratory
NASA Astrophysics Data System (ADS)
Saito, Theodore T.; Wasley, Richard J.; Stowers, Irving F.; Donaldson, Robert R.; Thompson, Daniel C.
1994-02-01
Precision Engineering is one of the Lawrence Livermore National Laboratory's core strengths. This paper discusses the past and current technology transfer efforts of LLNL's Precision Engineering program and the Livermore Center for Advanced Manufacturing and Productivity (LCAMP). More than a year ago, the Precision Machine Commercialization project embodied several successful methods of transferring high technology from the National Laboratories to industry. LCAMP has already demonstrated successful technology transfer and is involved in a broad spectrum of current programs. In addition, this paper discusses other technologies ripe for future transition, including the Large Optics Diamond Turning Machine.
Precision control of multiple quantum cascade lasers for calibration systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taubman, Matthew S., E-mail: Matthew.Taubman@pnnl.gov; Myers, Tanya L.; Pratt, Richard M.
We present a precision, 1-A, digitally interfaced current controller for quantum cascade lasers, with demonstrated temperature coefficients for continuous and 40-kHz full-depth square-wave modulated operation of 1–2 ppm/°C and 15 ppm/°C, respectively. High-precision digital-to-analog converters (DACs) together with an ultra-precision voltage reference produce highly stable, precision voltages, which are selected by a multiplexer (MUX) chip to set output currents via a linear current regulator. The controller is operated in conjunction with a power multiplexing unit, allowing one of three lasers to be driven by the controller, while ensuring protection of the controller and all lasers during operation, standby, and switching. Simple ASCII commands sent over a USB connection to a microprocessor located in the current controller operate both the controller (via the DACs and MUX chip) and the power multiplexer.
Barton, Zachary J; Rodríguez-López, Joaquín
2017-03-07
We report a method of precisely positioning a Hg-based ultramicroelectrode (UME) for scanning electrochemical microscopy (SECM) investigations of any substrate. Hg-based probes are capable of performing amalgamation reactions with metal cations, which avoid unwanted side reactions and positive feedback mechanisms that can prove problematic for traditional probe positioning methods. However, prolonged collection of ions eventually leads to saturation of the amalgam accompanied by irreversible loss of Hg. In order to obtain negative feedback positioning control without risking damage to the SECM probe, we implement cyclic voltammetry probe approach surfaces (CV-PASs), consisting of CVs performed between incremental motor movements. The amalgamation current, peak stripping current, and integrated stripping charge extracted from a shared CV-PAS give three distinct probe approach curves (CV-PACs), which can be used to determine the tip-substrate gap to within 1% of the probe radius. Using finite element simulations, we establish a new protocol for fitting any CV-PAC and demonstrate its validity with experimental results for sodium and potassium ions in propylene carbonate by obtaining over 3 orders of magnitude greater accuracy and more than 20-fold greater precision than existing methods. Considering the timescales of diffusion and amalgam saturation, we also present limiting conditions for obtaining and fitting CV-PAC data. The ion-specific signals isolated in CV-PACs allow precise and accurate positioning of Hg-based SECM probes over any sample and enable the deployment of CV-PAS SECM as an analytical tool for traditionally challenging conditions.
Continuous-waveform constant-current isolated physiological stimulator
NASA Astrophysics Data System (ADS)
Holcomb, Mark R.; Devine, Jack M.; Harder, Rene; Sidorov, Veniamin Y.
2012-04-01
We have developed an isolated continuous-waveform constant-current physiological stimulator that is powered and controlled by a universal serial bus (USB) interface. The stimulator is composed of a custom printed circuit board (PCB), a 16-MHz MSP430F2618 microcontroller with two integrated 12-bit digital-to-analog converters (DAC0, DAC1), a high-speed H-bridge, a voltage-controlled current source (VCCS), isolated USB communication and power circuitry, two isolated transistor-transistor logic (TTL) inputs, and a serial 16 × 2 character liquid crystal display. The stimulators are designed to produce current stimuli in the range of ±15 mA indefinitely using a 20 V source and to be used in ex vivo cardiac experiments, but they are suitable for use in a wide variety of research or student experiments that require precision control of continuous waveforms or synchronization with external events. The device was designed with customization in mind and has features that allow it to be integrated into current and future experimental setups. Dual TTL inputs allow the device to replace two or more traditional stimulators in common experimental configurations. The MSP430 software is written in C++ and compiled with IAR Embedded Workbench 5.20.2. A control program written in C++ runs on a Windows personal computer and has a graphical user interface that allows the user to control all aspects of the device.
Eddy Current Rail Inspection Using AC Bridge Techniques.
Liu, Ze; Koffman, Andrew D; Waltrip, Bryan C; Wang, Yicheng
2013-01-01
AC bridge techniques commonly used for precision impedance measurements have been adapted to develop an eddy current sensor for rail defect detection. By using two detection coils instead of just one as in a conventional sensor, we can balance out the large baseline signals corresponding to a normal rail. We have significantly enhanced the detection sensitivity of the eddy current method by detecting and demodulating the differential signal of the two coils induced by rail defects, using a digital lock-in amplifier algorithm. We have also explored compensating for the lift-off effect of the eddy current sensor due to vibrations by using the summing signal of the detection coils to measure the lift-off distance. The dominant component of the summing signal is a constant resulting from direct coupling from the excitation coil, which can be experimentally determined. The remainder of the summing signal, which decreases as the lift-off distance increases, is induced by the secondary eddy current. This dependence on the lift-off distance is used to calibrate the differential signal, allowing for a more accurate characterization of the defects. Simulated experiments on a sample rail have been performed using a computer controlled X-Y moving table with the X-axis mimicking the train's motion and the Y-axis mimicking the train's vibrational bumping. Experimental results demonstrate the effectiveness of the new detection method.
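The digital lock-in step mentioned above can be sketched as follows; the excitation frequency, sampling rate, and signal amplitudes are illustrative assumptions, not values from the study.

```python
import numpy as np

def lock_in_demodulate(signal: np.ndarray, f_ref: float, fs: float):
    """Mix the signal with quadrature references at f_ref and low-pass by
    averaging, returning the in-phase (X) and quadrature (Y) components."""
    t = np.arange(signal.size) / fs
    x = 2.0 * np.mean(signal * np.cos(2 * np.pi * f_ref * t))
    y = 2.0 * np.mean(signal * np.sin(2 * np.pi * f_ref * t))
    return x, y

# Made-up differential coil signal: a small defect-induced component at the
# excitation frequency buried in broadband noise.
fs, f_exc = 200_000.0, 10_000.0
t = np.arange(0.0, 0.05, 1.0 / fs)
rng = np.random.default_rng(0)
diff_signal = 2e-3 * np.cos(2 * np.pi * f_exc * t + 0.3) + 1e-3 * rng.standard_normal(t.size)

X, Y = lock_in_demodulate(diff_signal, f_exc, fs)
print(f"amplitude ~ {np.hypot(X, Y) * 1e3:.2f} mV, phase ~ {np.degrees(np.arctan2(Y, X)):.1f} deg")
```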
Oxidative induction time -- A review of DSC experimental effects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blaine, R.L.; Lundgren, C.J.; Harris, M.B.
1997-12-31
Over the past several years, a number of ASTM committees have explored a wide variety of experimental parameters affecting the oxidative induction time (OIT) test method in an attempt to improve its intra- and inter-laboratory precision. These studies have identified test temperature precision as a key parameter affecting OIT precision. Other parameters of importance are oxygen flow rate, specimen size, specimen pan type, oxygen pressure and catalyst effects. The work of Kuck, Bowmer, Riga, Tikuisis and Thomas are reviewed as well as the collective work of ASTM Committees E37, D2, D9 and D35.
Development of an ultrasensitive interferometry system as a key to precision metrology applications
NASA Astrophysics Data System (ADS)
Gohlke, Martin; Schuldt, Thilo; Weise, Dennis; Johann, Ulrich; Peters, Achim; Braxmaier, Claus
2009-06-01
We present a symmetric heterodyne interferometer as a prototype of a highly sensitive translation and tilt measurement system. This compact optical metrology system was developed over the past several years by EADS Astrium (Friedrichshafen) in cooperation with the Humboldt University (Berlin) and the University of Applied Sciences Konstanz (HTWG Konstanz). The noise performance was tested at frequencies between 10^-4 Hz and 3 Hz; the noise levels are below 1 nm/Hz^1/2 for translation and below 1 μrad/Hz^1/2 for tilt measurements. For frequencies above 10 mHz, noise levels below 5 pm/Hz^1/2 and 4 nrad/Hz^1/2, respectively, were demonstrated. Based on this highly sensitive metrology system, we also developed a dilatometer for the characterization of the CTE (coefficient of thermal expansion) of various materials, e.g., CFRP (carbon fiber reinforced plastic) or Zerodur. The currently achieved sensitivity of these measurements is better than 10^-7 K^-1. Future planned applications of the interferometer include ultra-high-precision surface profiling and characterization of actuator noise in low-noise opto-mechanics setups. We will give an overview of the current experimental setup and the latest measurement results.
NASA Astrophysics Data System (ADS)
Powolny, F.; Auffray, E.; Brunner, S. E.; Garutti, E.; Goettlich, M.; Hillemanns, H.; Jarron, P.; Lecoq, P.; Meyer, T.; Schultz-Coulon, H. C.; Shen, W.; Williams, M. C. S.
2011-06-01
Time of flight (TOF) measurements in positron emission tomography (PET) are very challenging in terms of timing performance, and should ideally achieve less than 100 ps FWHM precision. We present a time-based differential technique to read out silicon photomultipliers (SiPMs) which has less than 20 ps FWHM electronic jitter. The novel readout is a fast front end circuit (NINO) based on a first stage differential current mode amplifier with 20 Ω input resistance. Therefore the amplifier inputs are connected differentially to the SiPM's anode and cathode ports. The leading edge of the output signal provides the time information, while the trailing edge provides the energy information. Based on a Monte Carlo photon-generation model, HSPICE simulations were run with a 3 × 3 mm² SiPM model, read out with a differential current amplifier. The results of these simulations are presented here and compared with experimental data obtained with a 3 × 3 × 15 mm³ LSO crystal coupled to a SiPM. The measured time coincidence precision and the limitations in the overall timing accuracy are interpreted using Monte Carlo/SPICE simulation, Poisson statistics, and geometric effects of the crystal.
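A minimal Monte Carlo sketch of the photon-statistics picture underlying such timing studies is given below; the pulse-shape parameters, photoelectron count, and trigger threshold are placeholders rather than the values used in the paper's HSPICE/Monte Carlo model.

```python
import numpy as np

rng = np.random.default_rng(0)

def trigger_time(n_pe, tau_rise, tau_decay, jitter_fwhm, n_th=5):
    """Sample photoelectron times from a bi-exponential scintillation pulse
    (drawn as the sum of an exponential rise-time and an exponential
    decay-time variate), add Gaussian photodetector jitter, and return the
    arrival time of the n_th photoelectron as a crude leading-edge trigger."""
    t = rng.exponential(tau_rise, n_pe) + rng.exponential(tau_decay, n_pe)
    t += rng.normal(0.0, jitter_fwhm / 2.355, n_pe)  # fwhm -> sigma
    return np.sort(t)[n_th - 1]

# Placeholder parameters in ns: 0.5 ns rise, 40 ns decay, 0.2 ns fwhm jitter,
# 4000 photoelectrons per 511 keV event.
samples = np.array([trigger_time(4000, 0.5, 40.0, 0.2) for _ in range(2000)])
print(f"single-detector timing spread ~ {2.355 * samples.std() * 1e3:.0f} ps fwhm")
```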
Zhou, Li; Wang, Kui; Li, Qifu; Nice, Edouard C; Zhang, Haiyuan; Huang, Canhua
2016-01-01
Cancer is a common disease and a leading cause of death worldwide. Currently, early detection and novel therapeutic strategies are urgently needed for more effective management of cancer. Importantly, protein profiling using clinical proteomic strategies, with spectacular sensitivity and precision, offers excellent promise for the identification of potential biomarkers that would direct the development of targeted therapeutic anticancer drugs for precision medicine. In particular, clinical sample sources, including tumor tissues and body fluids (blood, feces, urine and saliva), have been widely investigated using modern high-throughput mass spectrometry-based proteomic approaches combined with bioinformatic analysis, to pursue the possibilities of precision medicine for targeted cancer therapy. Discussed in this review are the current advantages and limitations of clinical proteomics, the available strategies of clinical proteomics for the management of precision medicine, as well as the challenges and future perspectives of clinical proteomics-driven precision medicine for targeted cancer therapy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lincoln, Don
The theory of quantum electrodynamics (QED) is perhaps the most precisely tested physics theory ever conceived. It describes the interaction of charged particles by emitting photons. The most precise prediction of this very precise theory is the magnetic strength of the electron, what physicists call the magnetic moment. Prediction and measurement agree to 12 digits of precision. In this video, Fermilab’s Dr. Don Lincoln talks about this amazing measurement.
NASA Astrophysics Data System (ADS)
Miyazaki, Eiji; Shimazaki, Kazunori; Numata, Osamu; Waki, Miyuki; Yamanaka, Riyo; Kimoto, Yugo
2016-09-01
Outgassing rate measurement, or dynamic outgassing testing, is used to obtain the outgassing properties of materials, i.e., Total Mass Loss (TML) and Collected Volatile Condensed Mass (CVCM). These properties are used as input parameters for contamination analysis, e.g., predicting the mass deposited on a spacecraft surface by substances outgassed from contaminant sources on board, and the results of such calculations depend strongly on the input parameters. It is therefore important to obtain a sufficient experimental data set of outgassing rate measurements so that good outgassing parameters can be extracted for the calculation. As specified in the standard ASTM E 1559, TML is measured by a QCM sensor kept at cryogenic temperature, while CVCMs are measured at fixed temperatures. In the present work, the authors propose a new experimental procedure to obtain more precise VCM data from a single run of the current test duration with the present equipment: two of the four CQCMs in the equipment are cooled step by step during the test run. In this way the deposition rate, that is, the sticking coefficient, can be determined as a function of temperature. As a result, the sticking coefficient can be obtained directly between -50 and 50 degrees C in 5 degree C steps. The method therefore appears promising as an improved procedure for outgassing rate measurement. The present experiments also identified some issues with the new procedure, which will be addressed in future work.
Quantum interval-valued probability: Contextuality and the Born rule
NASA Astrophysics Data System (ADS)
Tai, Yu-Tsung; Hanson, Andrew J.; Ortiz, Gerardo; Sabry, Amr
2018-05-01
We present a mathematical framework based on quantum interval-valued probability measures to study the effect of experimental imperfections and finite precision measurements on defining aspects of quantum mechanics such as contextuality and the Born rule. While foundational results such as the Kochen-Specker and Gleason theorems are valid in the context of infinite precision, they fail to hold in general in a world with limited resources. Here we employ an interval-valued framework to establish bounds on the validity of those theorems in realistic experimental environments. In this way, not only can we quantify the idea of finite-precision measurement within our theory, but we can also suggest a possible resolution of the Meyer-Mermin debate on the impact of finite-precision measurement on the Kochen-Specker theorem.
Non-oxidized porous silicon-based power AC switch peripheries.
Menard, Samuel; Fèvre, Angélique; Valente, Damien; Billoué, Jérôme; Gautier, Gaël
2012-10-11
We present in this paper a novel application of porous silicon (PS) for low-power alternating current (AC) switches such as triode alternating current devices (TRIACs), frequently used to control small appliances (fridges, vacuum cleaners, washing machines, coffee makers, etc.). More precisely, it appears possible to exploit the electrical insulation properties of PS to ensure the OFF state of the device. Based on the technological aspects of the most commonly used AC switch peripheries, which are physically responsible for the TRIAC blocking performance (leakage current and breakdown voltage), we propose to isolate the upper and lower junctions through the addition of a PS layer anodically etched from existing AC switch diffusion profiles. We then comment on the voltage capability of practical samples fabricated with the proposed architecture. Using the characterization results of simple Al-PS-Si(P) structures, the experimental observations are interpreted, opening new outlooks in the field of AC switch peripheries.
Precision increase in electric drive speed loop of robotic complexes and process lines
NASA Astrophysics Data System (ADS)
Tulegenov, E.; Imanova, A. A.; Platonov, V. V.
2018-05-01
The article presents principles for the synthesis of control structures for high-precision electric drives of robotic complexes and manipulators. It has been shown theoretically and confirmed experimentally that improved speed-holding characteristics in the zone of significant overloads are achieved in series-excitation systems. They are achieved through the redistribution of control signals both in the zone where the armature current is set and in the excitation currents. At the same time, the electromagnetic torque characteristic becomes linear because the demagnetizing effect of the armature reaction is compensated by the setting of the excitation current. In cases where the speed-control range must be extended under significantly reduced load, structures with two-zone speed control are recommended. It is more convenient to regulate the weakening of the excitation flux as a function of the voltage across the armature windings.
Complementarity of Symmetry Tests at the Energy and Intensity Frontiers
NASA Astrophysics Data System (ADS)
Peng, Tao
We studied several symmetries and interactions beyond the Standard Model and their phenomenology in both high-energy colliders and low-energy experiments. Lepton number conservation is not a fundamental symmetry of the Standard Model (SM). The nature of the neutrino depends on whether or not lepton number is violated, and leptogenesis also requires lepton number violation (LNV). We therefore want to know whether lepton number is a good symmetry, and to compare the sensitivity of high-energy colliders with that of low-energy neutrinoless double-beta decay (0νββ) experiments. To do this, we included the QCD running effects, the background analysis, and the long-distance contributions to the nuclear matrix elements. Our results show that the reach of future tonne-scale 0νββ decay experiments generally exceeds the reach of the 14 TeV LHC for a class of simplified models. For a range of heavy-particle masses at the TeV scale, the high-luminosity 14 TeV LHC and tonne-scale 0νββ decay experiments may provide complementary probes. A 100 TeV collider with a luminosity of 30 ab^-1 exceeds the reach of the tonne-scale 0νββ experiments for most of the range of heavy-particle masses at the TeV scale. We considered a non-Abelian kinetic mixing between the Standard Model gauge bosons and a U(1)' dark photon, in the presence of an SU(2)L scalar triplet. The coupling constant epsilon between the dark photon and the SM gauge bosons is determined by the triplet vacuum expectation value (vev), the scale of the effective theory Lambda, and the effective-operator Wilson coefficient. The triplet vev is constrained to ≤ 4 GeV. Taking the Wilson coefficient to be O(1) and Lambda > 1 TeV, we obtain a small value of epsilon that is consistent with the experimental constraint. We outlined the possible LHC signatures and recast the current ATLAS dark photon experimental results into our non-Abelian mixing scenario. We analyzed the QCD corrections to dark matter (DM) interactions with SM quarks and gluons. Because we want to connect new physics at a high scale to the direct detection of DM at a low scale, we studied the QCD running for a list of dark matter effective operators. These corrections are important in precision DM physics, since currently little is known about the short-distance physics of DM. We find that the short-distance QCD corrections generate a finite matching correction when the electroweak gauge bosons are integrated out. High-precision measurements of electroweak precision observables can provide crucial input in the search for supersymmetry (SUSY) and play an important role in testing the universality of the SM charged-current interaction. We studied the SUSY corrections to the observables Delta_CKM and Delta_e/mu, with experimental constraints on the parameter space. These corrections are generally of order O(10^-4), and future experiments need to reach this precision to search for SUSY using these observables.
A highly versatile and easily configurable system for plant electrophysiology.
Gunsé, Benet; Poschenrieder, Charlotte; Rankl, Simone; Schröeder, Peter; Rodrigo-Moreno, Ana; Barceló, Juan
2016-01-01
In this study we present a highly versatile and easily configurable system for measuring plant electrophysiological parameters and ionic flow rates, connected to a computer-controlled, highly accurate positioning device. The modular software used allows easily customizable configurations for the measurement of electrophysiological parameters. Both the operational tests and the experiments already performed have been fully successful and yielded a low-noise, highly stable signal. Assembly, programming and configuration examples are discussed. The system is a powerful technique that not only gives precise measurements of plant electrophysiological status, but also allows easy development of ad hoc configurations that are not constrained to plant studies. •We developed a highly modular system for electrophysiology measurements that can be used either in organs or cells and performs either steady or dynamic intra- and extracellular measurements, taking advantage of the ease of visual object-oriented programming.•High-accuracy data acquisition in electrically noisy environments, allowing the system to run even in a laboratory close to equipment that produces electrical noise.•The system improves on currently used systems for monitoring and controlling high-precision measurements and micromanipulation, providing an open and customizable environment for multiple experimental needs.
Are the Origins of Precision Medicine Found in the Corpus Hippocraticum?
Konstantinidou, Meropi K; Karaglani, Makrina; Panagopoulou, Maria; Fiska, Aliki; Chatzaki, Ekaterini
2017-12-01
Precision medicine (PM) is currently placed at the center of global attention following decades of research towards the improvement of medical practice. The subject of this study was to examine whether this trend had emerged earlier, in fact whether the fundamentals of PM can be traced back to the ancient Greek era. For this reason, we studied the collection of all the Hippocratic texts, called the Corpus Hippocraticum, using original translations, and attempted an interpretation of the ancient authors in the context of the modern concept of PM. The most important points located in the ancient passages were: (1) medicine is not 'absolute', thus its directions cannot be generalized to everybody, (2) each human body/organism is different and responds differently to therapy; therefore, the same treatment cannot be suitable for everybody and (3) the physician should choose the appropriate treatment, depending on the patients' individual characteristics, such as different health status and life style (activities, diet, etc.). Although the ancient 'precision medicine' is different from its modern counterpart, which derives from well-established experimental conclusions, it becomes apparent that there is a common conception, aiming to achieve more effective healing by focusing on the individual.
Ghirlando, Rodolfo; Balbo, Andrea; Piszczek, Grzegorz; Brown, Patrick H.; Lewis, Marc S.; Brautigam, Chad A.; Schuck, Peter; Zhao, Huaying
2013-01-01
Sedimentation velocity (SV) is a method based on first-principles that provides a precise hydrodynamic characterization of macromolecules in solution. Due to recent improvements in data analysis, the accuracy of experimental SV data emerges as a limiting factor in its interpretation. Our goal was to unravel the sources of experimental error and develop improved calibration procedures. We implemented the use of a Thermochron iButton® temperature logger to directly measure the temperature of a spinning rotor, and detected deviations that can translate into an error of as much as 10% in the sedimentation coefficient. We further designed a precision mask with equidistant markers to correct for instrumental errors in the radial calibration, which were observed to span a range of 8.6%. The need for an independent time calibration emerged with use of the current data acquisition software (Zhao et al., doi 10.1016/j.ab.2013.02.011) and we now show that smaller but significant time errors of up to 2% also occur with earlier versions. After application of these calibration corrections, the sedimentation coefficients obtained from eleven instruments displayed a significantly reduced standard deviation of ∼ 0.7 %. This study demonstrates the need for external calibration procedures and regular control experiments with a sedimentation coefficient standard. PMID:23711724
Dissecting Reactor Antineutrino Flux Calculations
NASA Astrophysics Data System (ADS)
Sonzogni, A. A.; McCutchan, E. A.; Hayes, A. C.
2017-09-01
Current predictions for the antineutrino yield and spectra from a nuclear reactor rely on the experimental electron spectra from 235U, 239Pu, 241Pu and a numerical method to convert these aggregate electron spectra into their corresponding antineutrino ones. In the present work we investigate quantitatively some of the basic assumptions and approximations used in the conversion method, studying first the compatibility between two recent approaches for calculating electron and antineutrino spectra. We then explore different possibilities for the disagreement between the measured Daya Bay and the Huber-Mueller antineutrino spectra, including the 238U contribution as well as the effective charge and the allowed shape assumption used in the conversion method. We observe that including a shape correction of about +6% MeV^-1 in conversion calculations can better describe the Daya Bay spectrum. Because of a lack of experimental data, this correction cannot be ruled out, concluding that in order to confirm the existence of the reactor neutrino anomaly, or even quantify it, precisely measured electron spectra for about 50 relevant fission products are needed. With the advent of new rare ion facilities, the measurement of shape factors for these nuclides, for many of which precise beta intensity data from TAGS experiments already exist, would be highly desirable.
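Schematically, the shape correction discussed above enters the conversion as a linear-in-energy factor applied to the converted spectrum; the pivot energy in the sketch below is an assumption introduced only for illustration, not a value from the paper.

```python
import numpy as np

def apply_shape_correction(energies_mev, spectrum, slope_per_mev=0.06, pivot_mev=4.0):
    """Multiply a converted antineutrino spectrum by a linear-in-energy shape
    factor (1 + slope * (E - pivot)). The ~+6% per MeV slope is the figure
    quoted above; the pivot energy is a made-up choice so the correction is
    roughly normalisation-preserving over the fit range."""
    return spectrum * (1.0 + slope_per_mev * (energies_mev - pivot_mev))

e = np.linspace(2.0, 8.0, 7)          # antineutrino energies in MeV
flat = np.ones_like(e)                # placeholder spectrum
print(apply_shape_correction(e, flat))
```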
Ghirlando, Rodolfo; Balbo, Andrea; Piszczek, Grzegorz; Brown, Patrick H; Lewis, Marc S; Brautigam, Chad A; Schuck, Peter; Zhao, Huaying
2013-09-01
Sedimentation velocity (SV) is a method based on first principles that provides a precise hydrodynamic characterization of macromolecules in solution. Due to recent improvements in data analysis, the accuracy of experimental SV data emerges as a limiting factor in its interpretation. Our goal was to unravel the sources of experimental error and develop improved calibration procedures. We implemented the use of a Thermochron iButton temperature logger to directly measure the temperature of a spinning rotor and detected deviations that can translate into an error of as much as 10% in the sedimentation coefficient. We further designed a precision mask with equidistant markers to correct for instrumental errors in the radial calibration that were observed to span a range of 8.6%. The need for an independent time calibration emerged with use of the current data acquisition software (Zhao et al., Anal. Biochem., 437 (2013) 104-108), and we now show that smaller but significant time errors of up to 2% also occur with earlier versions. After application of these calibration corrections, the sedimentation coefficients obtained from 11 instruments displayed a significantly reduced standard deviation of approximately 0.7%. This study demonstrates the need for external calibration procedures and regular control experiments with a sedimentation coefficient standard. Published by Elsevier Inc.
Dissecting Reactor Antineutrino Flux Calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sonzogni, A. A.; McCutchan, E. A.; Hayes, A. C.
2017-09-15
Current predictions for the antineutrino yield and spectra from a nuclear reactor rely on the experimental electron spectra from 235U, 239Pu, 241Pu and a numerical method to convert these aggregate electron spectra into their corresponding antineutrino ones. In our present work we investigate quantitatively some of the basic assumptions and approximations used in the conversion method, studying first the compatibility between two recent approaches for calculating electron and antineutrino spectra. We then explore different possibilities for the disagreement between the measured Daya Bay and the Huber-Mueller antineutrino spectra, including the 238U contribution as well as the effective charge and the allowed shape assumption used in the conversion method. Here, we observe that including a shape correction of about +6% MeV^-1 in conversion calculations can better describe the Daya Bay spectrum. Because of a lack of experimental data, this correction cannot be ruled out, concluding that in order to confirm the existence of the reactor neutrino anomaly, or even quantify it, precisely measured electron spectra for about 50 relevant fission products are needed. With the advent of new rare ion facilities, the measurement of shape factors for these nuclides, for many of which precise beta intensity data from TAGS experiments already exist, would be highly desirable.
Physical and molecular bases of protein thermal stability and cold adaptation.
Pucci, Fabrizio; Rooman, Marianne
2017-02-01
The molecular bases of thermal and cold stability and adaptation, which allow proteins to remain folded and functional in the temperature ranges in which their host organisms live and grow, are still only partially elucidated. Indeed, both experimental and computational studies fail to yield a fully precise and global physical picture, essentially because all effects are context-dependent and thus quite intricate to unravel. We present a snapshot of the current state of knowledge of this highly complex and challenging issue, whose resolution would enable large-scale rational protein design. Copyright © 2016 Elsevier Ltd. All rights reserved.
Lincoln, Don
2018-01-16
The theory of quantum electrodynamics (QED) is perhaps the most precisely tested physics theory ever conceived. It describes the interaction of charged particles by emitting photons. The most precise prediction of this very precise theory is the magnetic strength of the electron, what physicists call the magnetic moment. Prediction and measurement agree to 12 digits of precision. In this video, Fermilab's Dr. Don Lincoln talks about this amazing measurement.
Physics of leptoquarks in precision experiments and at particle colliders
NASA Astrophysics Data System (ADS)
Doršner, I.; Fajfer, S.; Greljo, A.; Kamenik, J. F.; Košnik, N.
2016-06-01
We present a comprehensive review of physics effects generated by leptoquarks (LQs), i.e., hypothetical particles that can turn quarks into leptons and vice versa, of either scalar or vector nature. These considerations include a discussion of possible completions of the Standard Model that contain LQ fields. The main focus of the review is on those LQ scenarios that are not problematic with regard to proton stability. We accordingly concentrate on the phenomenology of light leptoquarks that is relevant for precision experiments and particle colliders. Important constraints on LQ interactions with matter are derived from precision low-energy observables such as electric dipole moments, (g - 2) of charged leptons, atomic parity violation, neutral meson mixing, Kaon, B, and D meson decays, etc. We provide a general analysis of indirect constraints on the strength of LQ interactions with the quarks and leptons to make statements that are as model independent as possible. We address complementary constraints that originate from electroweak precision measurements, top, and Higgs physics. The Higgs physics analysis we present covers not only the most recent but also expected results from the Large Hadron Collider (LHC). We finally discuss direct LQ searches. The current experimental situation is summarized, and the self-consistency of assumptions that go into existing accelerator-based searches is discussed. Progress in making next-to-leading order predictions for both pair and single LQ production at colliders is also outlined.
A novel post-arc current measuring equipment based on vacuum arc commutation and arc blow
NASA Astrophysics Data System (ADS)
Liao, Minfu; Ge, Guowei; Duan, Xiongying; Huang, Zhihui
2017-07-01
This paper proposes a novel post-arc current measuring equipment (NPACME) based on vacuum arc commutation and magnetic arc blow. The NPACME is composed of a vacuum circuit breaker (VCB), a shunt resistor, a protective gap, a high-precision current sensor and an externally applied transverse magnetic field (ETMF). A prototype of the NPACME was designed and is controlled over optical fiber communications. The vacuum arc commutation between the vacuum arc and the shunt resistor under the ETMF is investigated. A test platform was established for synthetic short-circuit tests, and the vacuum arc was observed with a high-speed CMOS camera. A mathematical description of the vacuum arc commutation is obtained. Based on the current commutation characteristic, the parameters of the NPACME are optimized and the post-arc current is measured. The post-arc current measurement is accurate, with small interference, and the post-arc charge is obtained. The experimental results verify that the NPACME is correct and accurate and can be used to measure post-arc characteristics in breaking tests.
Physical modeling in geomorphology: are boundary conditions necessary?
NASA Astrophysics Data System (ADS)
Cantelli, A.
2012-12-01
In physical experimental design for geomorphology, boundary conditions are key elements that determine the quality of the results and therefore how the study develops. For years engineers have modeled structures, such as dams and bridges, with high precision and excellent results. Until the last decade, a great part of the physical experimental work in geomorphology was carried out with an engineer-like approach, requiring an accurate scaling analysis to determine inflow parameters and initial geometrical conditions. During the last decade, however, the way we approach physical experiments has changed significantly. In particular, boundary conditions and initial conditions are considered unknown factors that need to be discovered during the experiment. This new philosophy leads to a more demanding data acquisition process but relaxes the obligation to know the appropriate input and initial conditions a priori, and provides the flexibility to discover them. Here I present some practical examples of this experimental approach in deepwater geomorphology, some questions about the scaling of turbidity currents, and a new large experimental facility built at the Universidade Federal do Rio Grande do Sul, Brazil.
Thermal-Error Regime in High-Accuracy Gigahertz Single-Electron Pumping
NASA Astrophysics Data System (ADS)
Zhao, R.; Rossi, A.; Giblin, S. P.; Fletcher, J. D.; Hudson, F. E.; Möttönen, M.; Kataoka, M.; Dzurak, A. S.
2017-10-01
Single-electron pumps based on semiconductor quantum dots are promising candidates for the emerging quantum standard of electrical current. They can transfer discrete charges with part-per-million (ppm) precision in nanosecond time scales. Here, we employ a metal-oxide-semiconductor silicon quantum dot to experimentally demonstrate high-accuracy gigahertz single-electron pumping in the regime where the number of electrons trapped in the dot is determined by the thermal distribution in the reservoir leads. In a measurement with traceability to primary voltage and resistance standards, the averaged pump current over the quantized plateau, driven by a 1-GHz sinusoidal wave in the absence of a magnetic field, is equal to the ideal value of e f within a measurement uncertainty as low as 0.27 ppm.
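For orientation, the quantized current of an ideal single-electron pump is simply I = n·e·f; the short calculation below evaluates this for the 1 GHz drive mentioned above and shows the absolute scale of a part-per-million uncertainty.

```python
E_CHARGE = 1.602176634e-19  # elementary charge in coulombs (exact SI value)

def pump_current(frequency_hz: float, electrons_per_cycle: int = 1) -> float:
    """Ideal quantized current I = n * e * f of a single-electron pump."""
    return electrons_per_cycle * E_CHARGE * frequency_hz

i = pump_current(1e9)                       # one electron per cycle at 1 GHz
print(f"I = {i * 1e12:.4f} pA")             # ~160.2 pA
print(f"0.27 ppm of I = {i * 0.27e-6 * 1e18:.1f} aA")
```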
Experimental Study on the Precise Orbit Determination of the BeiDou Navigation Satellite System
He, Lina; Ge, Maorong; Wang, Jiexian; Wickert, Jens; Schuh, Harald
2013-01-01
The regional service of the Chinese BeiDou satellite navigation system is now in operation with a constellation including five Geostationary Earth Orbit satellites (GEO), five Inclined Geosynchronous Orbit (IGSO) satellites and four Medium Earth Orbit (MEO) satellites. Besides the standard positioning service with positioning accuracy of about 10 m, both precise relative positioning and precise point positioning are already demonstrated. As is well known, precise orbit and clock determination is essential in enhancing precise positioning services. To improve the satellite orbits of the BeiDou regional system, we concentrate on the impact of the tracking geometry and the involvement of MEOs, and on the effect of integer ambiguity resolution as well. About seven weeks of data collected at the BeiDou Experimental Test Service (BETS) network is employed in this experimental study. Several tracking scenarios are defined, various processing schemata are designed and carried out; and then, the estimates are compared and analyzed in detail. The results show that GEO orbits, especially the along-track component, can be significantly improved by extending the tracking network in China along longitude direction, whereas IGSOs gain more improvement if the tracking network extends in latitude. The involvement of MEOs and ambiguity-fixing also make the orbits better. PMID:23529116
Experimental study on the precise orbit determination of the BeiDou navigation satellite system.
He, Lina; Ge, Maorong; Wang, Jiexian; Wickert, Jens; Schuh, Harald
2013-03-01
The regional service of the Chinese BeiDou satellite navigation system is now in operation with a constellation including five Geostationary Earth Orbit satellites (GEO), five Inclined Geosynchronous Orbit (IGSO) satellites and four Medium Earth Orbit (MEO) satellites. Besides the standard positioning service with positioning accuracy of about 10 m, both precise relative positioning and precise point positioning are already demonstrated. As is well known, precise orbit and clock determination is essential in enhancing precise positioning services. To improve the satellite orbits of the BeiDou regional system, we concentrate on the impact of the tracking geometry and the involvement of MEOs, and on the effect of integer ambiguity resolution as well. About seven weeks of data collected at the BeiDou Experimental Test Service (BETS) network is employed in this experimental study. Several tracking scenarios are defined, various processing schemata are designed and carried out; and then, the estimates are compared and analyzed in detail. The results show that GEO orbits, especially the along-track component, can be significantly improved by extending the tracking network in China along longitude direction, whereas IGSOs gain more improvement if the tracking network extends in latitude. The involvement of MEOs and ambiguity-fixing also make the orbits better.
An ϵ' improvement from right-handed currents
Cirigliano, Vincenzo; Dekens, Wouter Gerard; de Vries, Jordy; ...
2017-01-23
Recent lattice QCD calculations of direct CP violation in K_L → ππ decays indicate tension with the experimental results. Assuming this tension to be real, we investigate a possible beyond-the-Standard Model explanation via right-handed charged currents. By using chiral perturbation theory in combination with lattice QCD results, we accurately calculate the modification of ϵ'/ϵ induced by right-handed charged currents and extract values of the couplings that are necessary to explain the discrepancy, pointing to a scale around 10–100 TeV. We find that couplings of this size are not in conflict with constraints from other precision experiments, but next-generation hadronic electric dipole moment searches (such as neutron and 225Ra) can falsify this scenario. As a result, we work out in detail a direct link, based on chiral perturbation theory, between CP violation in the kaon sector and electric dipole moments induced by right-handed currents which can be used in future analyses of left-right symmetric models.
NASA Astrophysics Data System (ADS)
Yang, Henglong; Chang, Wen-Cheng; Lin, Yu-Hsuan; Chen, Ming-Hong
2017-08-01
The distinguishable and non-distinguishable 6-bit (64) grayscales of green and red organic light-emitting diodes (OLEDs) were experimentally investigated using a high-sensitivity photometric instrument. The feasibility of combining an external detection system with quality engineering to compensate the grayscale loss based on preset grayscale tables was also investigated by SPICE simulation. OLED degradation strongly affects image quality because the grayscales become inaccurate. Grayscales are considered distinguishable when their brightness differences and the corresponding current increments can be resolved by the instrument. Grayscales of OLEDs at 8-bit (256) or higher depth may become indistinguishable when the current or voltage increments are of the same order as the circuit noise level. Distinguishable grayscale tables for the individual red, green, blue, and white colors can be experimentally established as a preset reference for quality engineering (QE), in which the degradation loss is compensated by the corresponding grayscale numbers in the preset table. The degradation loss of each OLED color can be quantified by comparing voltage increments to those in the preset grayscale table if precise voltage increments are detectable during operation. The QE of AMOLED displays can be accomplished by applying updated grayscale tables. Our preliminary simulation results show that it is feasible to quantify degradation loss in terms of grayscale numbers by using external detector circuitry.
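A schematic sketch of the table-based compensation idea described above; the preset table of voltage increments and the one-step-per-level shift are invented for illustration and are not the paper's calibration data.

```python
import numpy as np

def compensated_level(requested_level: int, measured_dv_mv: float,
                      preset_dv_table_mv: np.ndarray) -> int:
    """Estimate how many grayscale steps the measured voltage increment has
    drifted from the preset table entry for the requested level, and shift
    the commanded level by that amount (clamped to the table range)."""
    effective_level = int(np.argmin(np.abs(preset_dv_table_mv - measured_dv_mv)))
    loss_in_levels = requested_level - effective_level
    return int(np.clip(requested_level + loss_in_levels, 0, preset_dv_table_mv.size - 1))

# Hypothetical 64-entry table of voltage increments (mV) versus 6-bit level.
table = np.linspace(5.0, 120.0, 64)
print(compensated_level(requested_level=32, measured_dv_mv=55.0, preset_dv_table_mv=table))
```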
NASA Astrophysics Data System (ADS)
Varner, Gary Sim
1999-11-01
Utilizing the world's largest sample of resonant ψ' decays, as measured by the Beijing Experimental Spectrometer (BES) during 1993-1995, a comprehensive study of the hadronic decay modes of the χc (3P1 charmonium) states has been undertaken. Compared with the data set of the Mark I detector, whose published measurements of many of these hadronic decays have been definitive for almost 20 years, roughly an order of magnitude higher statistics has been obtained. Taking advantage of these larger statistics, many new hadronic decay modes have been discovered, while others have been refined. An array of first observations, improvements, confirmations or limits is reported with respect to current world values. These higher-precision and newly discovered decay modes are an excellent testing ground for recent theoretical interest in the contribution of higher Fock states and the color octet mechanism in heavy quarkonium annihilation and subsequent light hadronization. Because these calculations are largely tractable only in two-body decays, these are the focus of this dissertation. A comparison of current theoretical calculations and experimental results is presented, indicating the success of these phenomenological advances. Measurements for which there is as yet no suitable theoretical prediction are also indicated.
Li, Yan-Liang; Fang, Zhi-Xiang; You, Jing
2013-02-20
A validated method for analyzing Cry proteins is a premise to study the fate and ecological effects of contaminants associated with genetically engineered Bacillus thuringiensis crops. The current study has optimized the extraction method to analyze Cry1Ac protein in soil using a response surface methodology with a three-level-three-factor Box-Behnken experimental design (BBD). The optimum extraction conditions were at 21 °C and 630 rpm for 2 h. Regression analysis showed a good fit of the experimental data to the second-order polynomial model with a coefficient of determination of 0.96. The method was sensitive and precise with a method detection limit of 0.8 ng/g dry weight and relative standard deviations at 7.3%. Finally, the established method was applied for analyzing Cry1Ac protein residues in field-collected soil samples. Trace amounts of Cry1Ac protein were detected in the soils where transgenic crops have been planted for 8 and 12 years.
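A response-surface fit of the kind used above amounts to least-squares fitting a second-order polynomial in the coded factors; the design points and recovery values below are invented purely to show the mechanics, not data from the study.

```python
import numpy as np

def quadratic_design_matrix(x: np.ndarray) -> np.ndarray:
    """Columns: intercept, linear, quadratic and two-factor interaction terms."""
    x1, x2, x3 = x.T
    return np.column_stack([np.ones(len(x)), x1, x2, x3,
                            x1**2, x2**2, x3**2,
                            x1 * x2, x1 * x3, x2 * x3])

# Coded (-1, 0, +1) settings of a three-factor Box-Behnken design
# (12 edge midpoints plus 3 centre replicates) and made-up responses.
X = np.array([[-1, -1, 0], [1, -1, 0], [-1, 1, 0], [1, 1, 0],
              [-1, 0, -1], [1, 0, -1], [-1, 0, 1], [1, 0, 1],
              [0, -1, -1], [0, 1, -1], [0, -1, 1], [0, 1, 1],
              [0, 0, 0], [0, 0, 0], [0, 0, 0]], dtype=float)
y = np.array([62, 70, 65, 74, 60, 69, 66, 75, 63, 68, 67, 73, 80, 79, 81], dtype=float)

beta, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)
y_hat = quadratic_design_matrix(X) @ beta
r2 = 1.0 - np.sum((y - y_hat)**2) / np.sum((y - y.mean())**2)
print(f"second-order model R^2 = {r2:.3f}")
```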
Ultra-High Precision Half-Life Measurement for the Superallowed β+ Emitter ^26Al^m
NASA Astrophysics Data System (ADS)
Finlay, P.; Demand, G.; Garrett, P. E.; Leach, K. G.; Phillips, A. A.; Sumithrarachchi, C. S.; Svensson, C. E.; Triambak, S.; Grinyer, G. F.; Leslie, J. R.; Andreoiu, C.; Cross, D.; Austin, R. A. E.; Ball, G. C.; Bandyopadhyay, D.; Djongolov, M.; Ettenauer, S.; Hackman, G.; Pearson, C. J.; Williams, S. J.
2009-10-01
The calculated nuclear structure dependent correction for ^26Al^m (δC-δNS = 0.305(27)% [1]) is smaller by nearly a factor of two than the other twelve precision superallowed cases, making it an ideal case to pursue a reduction in the experimental errors contributing to the Ft value. An ultra-high precision half-life measurement for the superallowed β+ emitter ^26Al^m has been made at the Isotope Separator and Accelerator (ISAC) facility at TRIUMF in Vancouver, Canada. A beam of ~10^5 ^26Al^m/s was delivered in October 2007 and its decay was observed using a 4π continuous gas flow proportional counter as part of an ongoing experimental program in superallowed Fermi β decay studies. With a statistical precision of ~0.008%, the present work represents the single most precise measurement of any superallowed half-life to date. [1] I.S. Towner and J.C. Hardy, Phys. Rev. C 79, 055502 (2009).
Ultra-High Precision Half-Life Measurement for the Superallowed β+ Emitter ^26Al^m
NASA Astrophysics Data System (ADS)
Finlay, P.; Demand, G.; Garrett, P. E.; Leach, K. G.; Phillips, A. A.; Sumithrarachchi, C. S.; Svensson, C. E.; Triambak, S.; Ball, G. C.; Bandyopadhyay, D.; Djongolov, M.; Ettenauer, S.; Hackman, G.; Pearson, C. J.; Williams, S. J.; Andreoiu, C.; Cross, D.; Austin, R. A. E.; Grinyer, G. F.; Leslie, J. R.
2008-10-01
The calculated nuclear structure dependent correction for ^26Al^m (δC-δNS = 0.305(27)% [1]) is smaller by nearly a factor of two than the other twelve precision superallowed cases, making it an ideal case to pursue a reduction in the experimental errors contributing to the Ft value. An ultra-high precision half-life measurement for the superallowed β+ emitter ^26Al^m has been made using a 4π continuous gas flow proportional counter as part of an ongoing experimental program in superallowed Fermi β decay studies at the Isotope Separator and Accelerator (ISAC) facility at TRIUMF in Vancouver, Canada, which delivered a beam of ~10^5 ^26Al^m/s in October 2007. With a statistical precision of ~0.008%, the present work represents the single most precise measurement of any superallowed half-life to date. [1] I.S. Towner and J.C. Hardy, Phys. Rev. C 77, 025501 (2008).
High precision locating control system based on VCM for Talbot lithography
NASA Astrophysics Data System (ADS)
Yao, Jingwei; Zhao, Lixin; Deng, Qian; Hu, Song
2016-10-01
Aiming at the high precision and efficiency requirements of Z-direction locating in Talbot lithography, a control system based on a voice coil motor (VCM) was designed. In this paper, we build a mathematical model of the VCM and analyze its motion characteristics. A double-closed-loop control strategy comprising a position loop and a current loop was implemented. The current loop is closed by the driver in order to achieve rapid following of the system current. The position loop is closed by a digital signal processor (DSP), with position feedback provided by high-precision linear scales. Feed-forward control and proportional-integral-derivative (PID) position feedback control were applied to compensate for dynamic lag and improve the response speed of the system. The high precision and efficiency of the system were verified by simulation and experiments. The results demonstrate that the performance of the Z-direction gantry is markedly improved: the system has high precision, quick response, strong real-time performance, and is easily extended to higher precision.
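A minimal sketch of the position-loop logic described above, assuming the current loop is already closed by the drive; the gains and sample time are placeholders, not the tuned values of the actual system.

```python
class PositionController:
    """Discrete PID position loop with a simple velocity feed-forward term,
    in the spirit of the double-loop scheme described above."""

    def __init__(self, kp: float, ki: float, kd: float, kff: float, dt: float):
        self.kp, self.ki, self.kd, self.kff, self.dt = kp, ki, kd, kff, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target_pos: float, target_vel: float, measured_pos: float) -> float:
        """Return the current-loop setpoint for one control period."""
        error = target_pos - measured_pos
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        # Feed-forward on the commanded velocity reduces dynamic lag.
        return (self.kp * error + self.ki * self.integral
                + self.kd * derivative + self.kff * target_vel)

ctrl = PositionController(kp=2.0, ki=0.5, kd=0.05, kff=1.0, dt=1e-3)
print(ctrl.update(target_pos=10.0, target_vel=5.0, measured_pos=9.8))
```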
Knudsen Cell Studies of Ti-Al Thermodynamics
NASA Technical Reports Server (NTRS)
Jacobson, Nathan S.; Copland, Evan H.; Mehrotra, Gopal M.; Auping, Judith; Gray, Hugh R. (Technical Monitor)
2002-01-01
In this paper we describe the Knudsen cell technique for measurement of thermodynamic activities in alloys. Numerous experimental details must be adhered to in order to obtain useful experimental data. These include introduction of an in-situ standard, precise temperature measurement, elimination of thermal gradients, and precise cell positioning. Our first design is discussed and some sample data on Ti-Al alloys is presented. The second modification and associated improvements are also discussed.
Effects of experimental design on calibration curve precision in routine analysis
Pimentel, Maria Fernanda; Neto, Benício de Barros; Saldanha, Teresa Cristina B.
1998-01-01
A computational program which compares the efficiencies of different experimental designs with those of maximum precision (D-optimized designs) is described. The program produces confidence interval plots for a calibration curve and provides information about the number of standard solutions, concentration levels and suitable concentration ranges to achieve an optimum calibration. Some examples of the application of this novel computational program are given, using both simulated and real data. PMID:18924816
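For a straight-line calibration with constant variance, the kind of comparison such a program makes can be illustrated directly from the prediction-variance formula; the sketch below contrasts an equally spaced design with a two-level (D-optimal-like) design at a few concentrations, using made-up numbers rather than the program's actual algorithm.

```python
import numpy as np
from scipy import stats

def ci_halfwidth(x_design, x0, sigma=1.0, alpha=0.05):
    """Half-width of the confidence interval of a fitted straight-line
    calibration at concentration x0, for a given placement of standards."""
    x = np.asarray(x_design, dtype=float)
    n = x.size
    sxx = np.sum((x - x.mean())**2)
    var = sigma**2 * (1.0 / n + (x0 - x.mean())**2 / sxx)
    t = stats.t.ppf(1.0 - alpha / 2.0, df=n - 2)
    return t * np.sqrt(var)

equally_spaced = np.linspace(0.0, 10.0, 6)            # conventional design
two_level = np.array([0, 0, 0, 10, 10, 10], float)    # D-optimal for a straight line
for x0 in (0.0, 5.0, 10.0):
    print(f"x0 = {x0:4.1f}: equal-spacing {ci_halfwidth(equally_spaced, x0):.3f}, "
          f"two-level {ci_halfwidth(two_level, x0):.3f}")
```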
Spectroscopic Factors From the Single Neutron Pickup Reaction ^64Zn(d,t)
NASA Astrophysics Data System (ADS)
Leach, Kyle; Garrett, P. E.; Demand, G. A.; Finlay, P.; Green, K. L.; Phillips, A. A.; Sumithrarachchi, C. S.; Svensson, C. E.; Triambak, S.; Ball, G. C.; Faestermann, T.; Krücken, R.; Wirth, H.-F.; Herten-Berger, R.
2008-10-01
A great deal of attention has recently been paid towards high precision superallowed β-decay Ft values. With the availability of extremely high precision (<0.1%) experimental data, the precision on Ft is now limited by the ~1% theoretical corrections [I.S. Towner and J.C. Hardy, Phys. Rev. C 77, 025501 (2008)]. This limitation is most evident in heavier superallowed nuclei (e.g. ^62Ga) where the isospin-symmetry-breaking correction calculations become more difficult due to the truncated model space. Experimental data are needed to help constrain input parameters for these calculations, and thus experimental spectroscopic factors for these nuclei are important. Preliminary results from the single-nucleon-transfer reaction ^64Zn(d,t)^63Zn will be presented, and the implications for calculations of isospin-symmetry breaking in the superallowed β+ decay of ^62Ga will be discussed.
NASA Astrophysics Data System (ADS)
Horton, T. W.; Holdaway, R. N.; Zerbini, A.; Andriolo, A.; Clapham, P. J.
2010-12-01
Determining how animals perform long-distance animal migration remains one of the most enduring and fundamental mysteries of behavioural ecology. It is widely accepted that navigation relative to a reference datum is a fundamental requirement of long-distance return migration between seasonal habitats, and significant experimental research has documented a variety of viable orientation and navigation cues. However, relatively few investigations have attempted to reconcile experimentally determined orientation and navigation capacities of animals with empirical remotely sensed animal track data, leaving most theories of navigation and orientation untested. Here we show, using basic hypothesis testing, that leatherback turtle (Dermochelys coriacea), great white shark (Carcharodon carcharias), arctic tern (Sterna paradisaea), and humpback whale (Megaptera novaeangliae) migration paths are non-randomly distributed in magnetic coordinate space, with local peaks in magnetic coordinate distributions equal to fractional multiples of the angular obliquity of Earth’s axis of rotation. Time series analysis of humpback whale migratory behaviours, including migration initiation, changes in course, and migratory stop-overs, further demonstrate coupling of magnetic and celestial orientation cues during long-distance migration. These unexpected and highly novel results indicate that diverse taxa integrate magnetic and celestial orientation cues during long-distance migration. These results are compatible with a 'map and compass' orientation and navigation system. Humpback whale migration track geometries further indicate a map and compass orientation system is used. Several humpback whale tracks include highly directional segments (Mercator latitude vs. longitude r2>0.99) exceeding 2000 km in length, despite exposure to variable strength (c. 0-1 km/hr) surface cross-currents. Humpback whales appear to be able to compensate for surface current drift. The remarkable directional precision of these humpback whale track segments is far better than the ±25°-40° precision of the avian magnetic compass. The positional and directional orientation data presented suggests signal transduction provides spatial information to migrating animals with better than 1° precision.
High-Precision Measurement of the Ne19 Half-Life and Implications for Right-Handed Weak Currents
NASA Astrophysics Data System (ADS)
Triambak, S.; Finlay, P.; Sumithrarachchi, C. S.; Hackman, G.; Ball, G. C.; Garrett, P. E.; Svensson, C. E.; Cross, D. S.; Garnsworthy, A. B.; Kshetri, R.; Orce, J. N.; Pearson, M. R.; Tardiff, E. R.; Al-Falou, H.; Austin, R. A. E.; Churchman, R.; Djongolov, M. K.; D'Entremont, R.; Kierans, C.; Milovanovic, L.; O'Hagan, S.; Reeve, S.; Sjue, S. K. L.; Williams, S. J.
2012-07-01
We report a precise determination of the Ne19 half-life to be T1/2=17.262±0.007 s. This result disagrees with the most recent precision measurements and is important for placing bounds on predicted right-handed interactions that are absent in the current standard model. We are able to identify and disentangle two competing systematic effects that influence the accuracy of such measurements. Our findings prompt a reassessment of results from previous high-precision lifetime measurements that used similar equipment and methods.
High-precision measurement of the 19Ne half-life and implications for right-handed weak currents.
Triambak, S; Finlay, P; Sumithrarachchi, C S; Hackman, G; Ball, G C; Garrett, P E; Svensson, C E; Cross, D S; Garnsworthy, A B; Kshetri, R; Orce, J N; Pearson, M R; Tardiff, E R; Al-Falou, H; Austin, R A E; Churchman, R; Djongolov, M K; D'Entremont, R; Kierans, C; Milovanovic, L; O'Hagan, S; Reeve, S; Sjue, S K L; Williams, S J
2012-07-27
We report a precise determination of the (19)Ne half-life to be T(1/2)=17.262±0.007 s. This result disagrees with the most recent precision measurements and is important for placing bounds on predicted right-handed interactions that are absent in the current standard model. We are able to identify and disentangle two competing systematic effects that influence the accuracy of such measurements. Our findings prompt a reassessment of results from previous high-precision lifetime measurements that used similar equipment and methods.
A Bayesian technique for improving the sensitivity of the atmospheric neutrino L/E analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blake, A. S. T.; Chapman, J. D.; Thomson, M. A.
This paper outlines a method for improving the precision of atmospheric neutrino oscillation measurements. One experimental signature for these oscillations is an observed deficit in the rate of νμ charged-current interactions with an oscillatory dependence on Lν/Eν, where Lν is the neutrino propagation distance and Eν is the neutrino energy. For contained-vertex atmospheric neutrino interactions, the Lν/Eν resolution varies significantly from event to event. The precision of the oscillation measurement can be improved by incorporating information on the Lν/Eν resolution into the oscillation analysis. In the analysis presented here, a Bayesian technique is used to estimate the Lν/Eν resolution of observed atmospheric neutrinos on an event-by-event basis. By separating the events into bins of Lν/Eν resolution in the oscillation analysis, a significant improvement in oscillation sensitivity can be achieved.
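As an illustration of the idea described above, the following minimal Python sketch bins toy events by an assumed event-by-event fractional Lν/Eν resolution and evaluates the two-flavor νμ survival probability in each group. The function names, bin edges, and oscillation parameters are illustrative assumptions, not values from the paper.

```python
import numpy as np

def survival_probability(L_over_E, sin2_2theta=0.95, dm2=2.4e-3):
    """Two-flavor nu_mu survival probability; L/E in km/GeV, dm2 in eV^2."""
    return 1.0 - sin2_2theta * np.sin(1.267 * dm2 * L_over_E) ** 2

def bin_by_resolution(events, edges=(0.5, 1.0, 2.0)):
    """Group events by their estimated fractional L/E resolution.

    `events` is a list of (L_over_E, sigma_rel) pairs, where sigma_rel stands in
    for the Bayesian event-by-event estimate of the fractional L/E resolution.
    """
    groups = [[] for _ in range(len(edges) + 1)]
    for l_over_e, sigma_rel in events:
        groups[int(np.searchsorted(edges, sigma_rel))].append(l_over_e)
    return groups

# Toy usage: the best-resolved bins retain the most oscillation information.
rng = np.random.default_rng(0)
events = list(zip(rng.uniform(10.0, 3000.0, 1000), rng.uniform(0.05, 1.5, 1000)))
for i, group in enumerate(bin_by_resolution(events)):
    if group:
        p = np.mean(survival_probability(np.array(group)))
        print(f"resolution bin {i}: {len(group)} events, mean survival probability {p:.3f}")
```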
Kehinde, Elijah O.
2013-01-01
The objective of this review article was to examine current and prospective developments in the scientific use of laboratory animals, and to find out whether or not there are still valid scientific benefits of and justification for animal experimentation. The PubMed and Web of Science databases were searched using the following key words: animal models, basic research, pharmaceutical research, toxicity testing, experimental surgery, surgical simulation, ethics, animal welfare, benign, malignant diseases. Important relevant reviews, original articles and references from 1970 to 2012 were reviewed for data on the use of experimental animals in the study of diseases. The use of laboratory animals in scientific research continues to generate intense public debate. Their use can be justified today in the following areas of research: basic scientific research, use of animals as models for human diseases, pharmaceutical research and development, toxicity testing and teaching of new surgical techniques. This is because there are inherent limitations in the use of alternatives such as in vitro studies, human clinical trials or computer simulation. However, there are problems of transferability of results obtained from animal research to humans. Efforts are on-going to find suitable alternatives to animal experimentation like cell and tissue culture and computer simulation. For the foreseeable future, it would appear that to enable scientists to have a more precise understanding of human disease, including its diagnosis, prognosis and therapeutic intervention, there will still be enough grounds to advocate animal experimentation. However, efforts must continue to minimize or eliminate the need for animal testing in scientific research as soon as possible. PMID:24217224
Kehinde, Elijah O
2013-01-01
The objective of this review article was to examine current and prospective developments in the scientific use of laboratory animals, and to find out whether or not there are still valid scientific benefits of and justification for animal experimentation. The PubMed and Web of Science databases were searched using the following key words: animal models, basic research, pharmaceutical research, toxicity testing, experimental surgery, surgical simulation, ethics, animal welfare, benign, malignant diseases. Important relevant reviews, original articles and references from 1970 to 2012 were reviewed for data on the use of experimental animals in the study of diseases. The use of laboratory animals in scientific research continues to generate intense public debate. Their use can be justified today in the following areas of research: basic scientific research, use of animals as models for human diseases, pharmaceutical research and development, toxicity testing and teaching of new surgical techniques. This is because there are inherent limitations in the use of alternatives such as in vitro studies, human clinical trials or computer simulation. However, there are problems of transferability of results obtained from animal research to humans. Efforts are on-going to find suitable alternatives to animal experimentation like cell and tissue culture and computer simulation. For the foreseeable future, it would appear that to enable scientists to have a more precise understanding of human disease, including its diagnosis, prognosis and therapeutic intervention, there will still be enough grounds to advocate animal experimentation. However, efforts must continue to minimize or eliminate the need for animal testing in scientific research as soon as possible. © 2013 S. Karger AG, Basel.
Precision Control of Multiple Quantum Cascade Lasers for Calibration Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taubman, Matthew S.; Myers, Tanya L.; Pratt, Richard M.
We present a precision, digitally interfaced current controller for quantum cascade lasers, with demonstrated DC and modulated temperature coefficients of 1-2 ppm/°C and 15 ppm/°C respectively. High-linearity digital-to-analog converters (DACs), together with an ultra-precision voltage reference, produce highly stable, precision voltages. These are in turn selected by a low charge-injection multiplexer (MUX) chip and used to set output currents via a linear current regulator. The controller is operated in conjunction with a power multiplexing unit, allowing one of three lasers to be driven by the controller while ensuring protection of the controller and all lasers during operation, standby and switching. Simple ASCII commands sent over a USB connection to a microprocessor located in the current controller operate both the controller (via the DACs and MUX chip) and the power multiplexer.
3He(α, γ)7Be cross section in a wide energy range
NASA Astrophysics Data System (ADS)
Szücs, Tamás; Gyürky, György; Halász, Zoltán; Kiss, Gábor Gy.; Fülöp, Zsolt
2018-01-01
The reaction rate of the 3He(α,γ)7Be reaction is important both in Big Bang Nucleosynthesis (BBN) and in solar hydrogen burning. There have been many experimental and theoretical efforts to determine this reaction rate with high precision. Some long-standing issues have been resolved by the more precise investigations, such as the different S(0) values predicted by activation and in-beam measurements. However, recent, more detailed astrophysical model predictions require the reaction rate with even higher precision to unravel new issues such as the solar composition. One way to increase the precision is to provide a comprehensive dataset in a wide energy range, extending the experimental cross section database of this reaction. This paper presents a new cross section measurement between Ecm = 2.5-4.4 MeV, an energy range which extends above the 7Be proton separation threshold.
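For context on how such cross section data are usually reported, the sketch below converts an assumed cross section to the astrophysical S-factor, S(E) = σ(E) E exp(2πη), with the Sommerfeld parameter η for the 3He + 4He system. The cross-section value and helper names are placeholders, not results from this measurement.

```python
import numpy as np

ALPHA = 1.0 / 137.035999   # fine-structure constant
AMU_MEV = 931.494          # atomic mass unit in MeV/c^2

def sommerfeld_eta(E_cm_MeV, Z1=2, Z2=2, A1=3.016, A2=4.003):
    """Sommerfeld parameter eta for 3He + 4He at center-of-mass energy E (MeV)."""
    mu = A1 * A2 / (A1 + A2) * AMU_MEV          # reduced mass in MeV/c^2
    return Z1 * Z2 * ALPHA * np.sqrt(mu / (2.0 * E_cm_MeV))

def s_factor(E_cm_MeV, sigma_barn):
    """Astrophysical S-factor in MeV*barn: S(E) = sigma(E) * E * exp(2*pi*eta)."""
    return sigma_barn * E_cm_MeV * np.exp(2.0 * np.pi * sommerfeld_eta(E_cm_MeV))

# Toy usage with an assumed cross section of 10 microbarn at E_cm = 3 MeV.
print(f"S(3 MeV) ~ {1e3 * s_factor(3.0, 10e-6):.3f} keV b")
```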
Constant-current control method of multi-function electromagnetic transmitter.
Xue, Kaichang; Zhou, Fengdao; Wang, Shuang; Lin, Jun
2015-02-01
Based on the requirements of controlled-source audio-frequency magnetotellurics, DC resistivity, and induced polarization, a constant-current control method is proposed. Using the current waveforms required in prospecting as a standard, the causes of current waveform distortion and its effects on prospecting are analyzed. A cascaded topology is adopted to realize a 40 kW constant-current transmitter, and its response speed and precision are analyzed. According to the power circuit of the transmitting system, the circuit structure of the pulse width modulation (PWM) constant-current controller is designed. After establishing the power circuit model of the transmitting system and the PWM constant-current controller model, analyzing the influence of ripple current, and designing an open-loop transfer function according to the amplitude-frequency characteristic curves, the parameters of the PWM constant-current controller are determined. The open-loop transfer function indicates that the loop gain is no less than 28 dB below 160 Hz, which ensures the response speed of the transmitting system; the phase margin is 45°, which ensures the stability of the transmitting system. Experimental results verify that the proposed constant-current control method keeps the control error below 4% and effectively suppresses the effect of load changes caused by the capacitance of the earth load.
Constant-current control method of multi-function electromagnetic transmitter
NASA Astrophysics Data System (ADS)
Xue, Kaichang; Zhou, Fengdao; Wang, Shuang; Lin, Jun
2015-02-01
Based on the requirements of controlled-source audio-frequency magnetotellurics, DC resistivity, and induced polarization, a constant-current control method is proposed. Using the current waveforms required in prospecting as a standard, the causes of current waveform distortion and its effects on prospecting are analyzed. A cascaded topology is adopted to realize a 40 kW constant-current transmitter, and its response speed and precision are analyzed. According to the power circuit of the transmitting system, the circuit structure of the pulse width modulation (PWM) constant-current controller is designed. After establishing the power circuit model of the transmitting system and the PWM constant-current controller model, analyzing the influence of ripple current, and designing an open-loop transfer function according to the amplitude-frequency characteristic curves, the parameters of the PWM constant-current controller are determined. The open-loop transfer function indicates that the loop gain is no less than 28 dB below 160 Hz, which ensures the response speed of the transmitting system; the phase margin is 45°, which ensures the stability of the transmitting system. Experimental results verify that the proposed constant-current control method keeps the control error below 4% and effectively suppresses the effect of load changes caused by the capacitance of the earth load.
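As a rough illustration of the quoted loop-gain and phase-margin figures, the sketch below evaluates an assumed single-pole, integrator-type open-loop transfer function. The gain and pole frequency are placeholders chosen so that the example reproduces roughly 28 dB at 160 Hz and a 45° phase margin; they are not the parameters of the transmitter described above.

```python
import numpy as np

def open_loop_response(f_hz, k=26600.0, f_pole=3000.0):
    """Illustrative open-loop transfer function L(s) = k / (s * (1 + s/w_p)).

    k and f_pole are placeholders chosen to roughly match the quoted figures;
    they are not the parameters of the transmitter described in the paper.
    """
    s = 1j * 2.0 * np.pi * f_hz
    wp = 2.0 * np.pi * f_pole
    return k / (s * (1.0 + s / wp))

# Loop gain at 160 Hz (the design requirement is >= 28 dB below 160 Hz).
gain_db_160 = 20.0 * np.log10(abs(open_loop_response(160.0)))
print(f"loop gain at 160 Hz: {gain_db_160:.1f} dB")

# Phase margin: locate the gain-crossover frequency |L| = 1 and read the phase there.
freqs = np.logspace(1, 5, 20000)
mags = np.abs(open_loop_response(freqs))
fc = freqs[np.argmin(np.abs(mags - 1.0))]
phase_margin = 180.0 + np.degrees(np.angle(open_loop_response(fc)))
print(f"gain crossover ~ {fc:.0f} Hz, phase margin ~ {phase_margin:.0f} deg")
```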
Search for a Scalar Component in the Weak Interaction
NASA Astrophysics Data System (ADS)
Zakoucky, Dalibor; Baczyk, Pavel; Ban, Gilles; Beck, Marcus; Breitenfeldt, Martin; Couratin, Claire; Fabian, Xavier; Finlay, Paul; Flechard, Xavier; Friedag, Peter; Glück, Ferenc; Herlert, Alexander; Knecht, Andreas; Kozlov, Valentin; Lienard, Etienne; Porobic, Tomica; Soti, Gergelj; Tandecki, Michael; Vangorp, Simon; Weinheimer, Christian; Wursten, Elise; Severijns, Nathal
Weak interactions are described by the Standard Model, which uses the basic assumption of a pure "V(ector)-A(xial vector)" character for the interaction. However, after more than half a century of model development and experimental testing of its fundamental ingredients, experimental limits for possible admixtures of scalar and/or tensor interactions are still as high as 7%. The WITCH project (Weak Interaction Trap for CHarged particles) at the isotope separator ISOLDE at CERN is trying to probe the structure of the weak interaction in specific low-energy β-decays in order to look for possible scalar or tensor components, or at least significantly improve the current experimental limits. This worldwide unique experimental setup, consisting of a combination of two Penning ion traps and a retardation spectrometer, makes it possible to catch, trap and cool the radioactive nuclei provided by the ISOLDE separator, form a cooled and scattering-free radioactive source of β-decaying nuclei, and let these nuclei decay at rest. The precise measurement of the energy spectrum of the recoiling nuclei, whose shape is very sensitive to the character of the weak interaction, enables a search for a possible admixture of a scalar/tensor component in the dominant vector/axial-vector mode. First online measurements with the isotope 35Ar were performed in 2011 and 2012. The current status of the experiment, the data analysis and results, as well as extensive simulations, will be presented and discussed.
Inferior olive mirrors joint dynamics to implement an inverse controller.
Alvarez-Icaza, Rodrigo; Boahen, Kwabena
2012-10-01
To produce smooth and coordinated motion, our nervous systems need to generate precisely timed muscle activation patterns that, due to axonal conduction delay, must be generated in a predictive and feedforward manner. Kawato proposed that the cerebellum accomplishes this by acting as an inverse controller that modulates descending motor commands to predictively drive the spinal cord such that the musculoskeletal dynamics are canceled out. This and other cerebellar theories do not, however, account for the rich biophysical properties expressed by the olivocerebellar complex's various cell types, making these theories difficult to verify experimentally. Here we propose that a multizonal microcomplex's (MZMC) inferior olivary neurons use their subthreshold oscillations to mirror a musculoskeletal joint's underdamped dynamics, thereby achieving inverse control. We used control theory to map a joint's inverse model onto an MZMC's biophysics, and we used biophysical modeling to confirm that inferior olivary neurons can express the dynamics required to mirror biomechanical joints. We then combined both techniques to predict how experimentally injecting current into the inferior olive would affect overall motor output performance. We found that this experimental manipulation unmasked a joint's natural dynamics, as observed by motor output ringing at the joint's natural frequency, with amplitude proportional to the amount of current. These results support the proposal that the cerebellum, in particular an MZMC, is an inverse controller; they also provide a biophysical implementation for this controller and allow one to make an experimentally testable prediction.
Roadmap on quantum optical systems
NASA Astrophysics Data System (ADS)
Dumke, Rainer; Lu, Zehuang; Close, John; Robins, Nick; Weis, Antoine; Mukherjee, Manas; Birkl, Gerhard; Hufnagel, Christoph; Amico, Luigi; Boshier, Malcolm G.; Dieckmann, Kai; Li, Wenhui; Killian, Thomas C.
2016-09-01
This roadmap bundles fast-developing topics in experimental optical quantum sciences, addressing current challenges as well as potential advances in future research. We have focused on three main areas: quantum-assisted high-precision measurements, quantum information/simulation, and quantum gases. Quantum-assisted high-precision measurements are discussed in the first three sections, which review optical clocks, atom interferometry, and optical magnetometry. These fields are already successfully utilized in various applied areas, and we discuss approaches to extend this impact even further. In the quantum information/simulation section, we start with the traditionally successful systems based on neutral atoms and ions; despite the marvelous demonstrations of systems suitable for quantum information, unsolved challenges remain and will be discussed. We also review, as an alternative approach, the utilization of hybrid quantum systems based on superconducting quantum devices and ultracold atoms. Novel developments in atomtronics promise unique access for exploring solid-state systems with ultracold gases and are investigated in depth. The sections discussing the continuously fast-developing quantum gases include a review of dipolar heteronuclear diatomic gases, Rydberg gases, and ultracold plasma. Overall, we have assembled a roadmap of selected areas undergoing rapid progress in quantum optics, highlighting current advances and future challenges. These exciting developments and vast advances will shape the field of quantum optics in the future.
Precise point positioning with the BeiDou navigation satellite system.
Li, Min; Qu, Lizhong; Zhao, Qile; Guo, Jing; Su, Xing; Li, Xiaotao
2014-01-08
By the end of 2012, China had launched 16 BeiDou-2 navigation satellites, including six GEOs, five IGSOs and five MEOs, providing initial navigation and precise positioning services in the Asia-Pacific region. In order to assess the navigation and positioning performance of the BeiDou-2 system, Wuhan University has built up a network of BeiDou Experimental Tracking Stations (BETS) around the world. The Position and Navigation Data Analyst (PANDA) software was modified to determine the orbits of BeiDou satellites and provide precise orbit and satellite clock bias products from the BeiDou satellite system for user applications. This article uses the BeiDou/GPS observations of the BeiDou Experimental Tracking Stations to realize BeiDou and BeiDou/GPS static and kinematic precise point positioning (PPP). The results indicate that the precision of BeiDou static and kinematic PPP reaches the centimeter level. The precision of BeiDou/GPS kinematic PPP solutions is improved significantly compared to that of BeiDou-only or GPS-only kinematic PPP solutions. The PPP convergence time also decreases with the use of the combined BeiDou/GPS systems.
Precise Point Positioning with the BeiDou Navigation Satellite System
Li, Min; Qu, Lizhong; Zhao, Qile; Guo, Jing; Su, Xing; Li, Xiaotao
2014-01-01
By the end of 2012, China had launched 16 BeiDou-2 navigation satellites, including six GEOs, five IGSOs and five MEOs, providing initial navigation and precise positioning services in the Asia-Pacific region. In order to assess the navigation and positioning performance of the BeiDou-2 system, Wuhan University has built up a network of BeiDou Experimental Tracking Stations (BETS) around the world. The Position and Navigation Data Analyst (PANDA) software was modified to determine the orbits of BeiDou satellites and provide precise orbit and satellite clock bias products from the BeiDou satellite system for user applications. This article uses the BeiDou/GPS observations of the BeiDou Experimental Tracking Stations to realize BeiDou and BeiDou/GPS static and kinematic precise point positioning (PPP). The results indicate that the precision of BeiDou static and kinematic PPP reaches the centimeter level. The precision of BeiDou/GPS kinematic PPP solutions is improved significantly compared to that of BeiDou-only or GPS-only kinematic PPP solutions. The PPP convergence time also decreases with the use of the combined BeiDou/GPS systems. PMID:24406856
A critical assessment of Mus musculus gene function prediction using integrated genomic evidence
Peña-Castillo, Lourdes; Tasan, Murat; Myers, Chad L; Lee, Hyunju; Joshi, Trupti; Zhang, Chao; Guan, Yuanfang; Leone, Michele; Pagnani, Andrea; Kim, Wan Kyu; Krumpelman, Chase; Tian, Weidong; Obozinski, Guillaume; Qi, Yanjun; Mostafavi, Sara; Lin, Guan Ning; Berriz, Gabriel F; Gibbons, Francis D; Lanckriet, Gert; Qiu, Jian; Grant, Charles; Barutcuoglu, Zafer; Hill, David P; Warde-Farley, David; Grouios, Chris; Ray, Debajyoti; Blake, Judith A; Deng, Minghua; Jordan, Michael I; Noble, William S; Morris, Quaid; Klein-Seetharaman, Judith; Bar-Joseph, Ziv; Chen, Ting; Sun, Fengzhu; Troyanskaya, Olga G; Marcotte, Edward M; Xu, Dong; Hughes, Timothy R; Roth, Frederick P
2008-01-01
Background: Several years after sequencing the human genome and the mouse genome, much remains to be discovered about the functions of most human and mouse genes. Computational prediction of gene function promises to help focus limited experimental resources on the most likely hypotheses. Several algorithms using diverse genomic data have been applied to this task in model organisms; however, the performance of such approaches in mammals has not yet been evaluated. Results: In this study, a standardized collection of mouse functional genomic data was assembled; nine bioinformatics teams used this data set to independently train classifiers and generate predictions of function, as defined by Gene Ontology (GO) terms, for 21,603 mouse genes; and the best performing submissions were combined in a single set of predictions. We identified strengths and weaknesses of current functional genomic data sets and compared the performance of function prediction algorithms. This analysis inferred functions for 76% of mouse genes, including 5,000 currently uncharacterized genes. At a recall rate of 20%, a unified set of predictions averaged 41% precision, with 26% of GO terms achieving a precision better than 90%. Conclusion: We performed a systematic evaluation of diverse, independently developed computational approaches for predicting gene function from heterogeneous data sources in mammals. The results show that currently available data for mammals allows predictions with both breadth and accuracy. Importantly, many highly novel predictions emerge for the 38% of mouse genes that remain uncharacterized. PMID:18613946
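For readers unfamiliar with the metric quoted above, the following small sketch shows how precision at a fixed recall level is computed from ranked predictions. The synthetic scores and labels are illustrative only and are unrelated to the evaluated data set.

```python
import numpy as np

def precision_at_recall(scores, labels, target_recall=0.20):
    """Precision of the top-ranked predictions at a given recall level."""
    order = np.argsort(scores)[::-1]
    labels = np.asarray(labels)[order]
    tp = np.cumsum(labels)                                # true positives among top-k
    recall = tp / labels.sum()
    precision = tp / np.arange(1, len(labels) + 1)
    idx = np.searchsorted(recall, target_recall)
    return precision[min(idx, len(precision) - 1)]

# Toy example: 5% of gene-term pairs are true, scores weakly separate them.
rng = np.random.default_rng(0)
labels = rng.random(5000) < 0.05
scores = labels * rng.normal(1.0, 1.0, 5000) + rng.normal(0.0, 1.0, 5000)
print(f"precision at 20% recall ~ {precision_at_recall(scores, labels):.2f}")
```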
Using polarized positrons to probe physics beyond the standard model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Furletova, Yulia; Mantry, Sonny
A high intensity polarized positron beam, as part of the JLAB 12 GeV program and the proposed electron-ion collider (EIC), can provide a unique opportunity for testing the Standard Model (SM) and probing for new physics. The combination of high luminosity with polarized electrons and positrons incident on protons and deuterons can isolate important effects and distinguish between possible new physics scenarios in a manner that will complement current experimental efforts. Here, a comparison of cross sections between polarized electron and positron beams will allow for an extraction of the poorly known weak neutral current coupling combination 2C3u - C3d and would complement the proposed plan for a precision extraction of the combination 2C2u - Cd at the EIC. Precision measurements of these neutral weak couplings would constrain new physics scenarios including Leptoquarks, R-parity violating supersymmetry, and electron and quark compositeness. The dependence of the charged current cross section on the longitudinal polarization of the positron beam will provide an independent probe to test the chiral structure of the electroweak interactions. A polarized positron can probe charged lepton flavor violation (CLFV) through a search for e+ → τ+ transitions in a manner that is independent and complementary to the proposed e- → τ- search at the EIC. A positron beam incident on an electron in a stationary nuclear target will also allow for a dark-photon (A') search via the annihilation process e+ + e- → A' + γ.
Using polarized positrons to probe physics beyond the standard model
Furletova, Yulia; Mantry, Sonny
2018-05-25
A high intensity polarized positron beam, as part of the JLAB 12 GeV program and the proposed electron-ion collider (EIC), can provide a unique opportunity for testing the Standard Model (SM) and probing for new physics. The combination of high luminosity with polarized electrons and positrons incident on protons and deuterons can isolate important effects and distinguish between possible new physics scenarios in a manner that will complement current experimental efforts. Here, a comparison of cross sections between polarized electron and positron beams will allow for an extraction of the poorly known weak neutral current coupling combination 2C3u - C3d and would complement the proposed plan for a precision extraction of the combination 2C2u - Cd at the EIC. Precision measurements of these neutral weak couplings would constrain new physics scenarios including Leptoquarks, R-parity violating supersymmetry, and electron and quark compositeness. The dependence of the charged current cross section on the longitudinal polarization of the positron beam will provide an independent probe to test the chiral structure of the electroweak interactions. A polarized positron can probe charged lepton flavor violation (CLFV) through a search for e+ → τ+ transitions in a manner that is independent and complementary to the proposed e- → τ- search at the EIC. A positron beam incident on an electron in a stationary nuclear target will also allow for a dark-photon (A') search via the annihilation process e+ + e- → A' + γ.
Using polarized positrons to probe physics beyond the standard model
NASA Astrophysics Data System (ADS)
Furletova, Yulia; Mantry, Sonny
2018-05-01
A high intensity polarized positron beam, as part of the JLAB 12 GeV program and the proposed electron-ion collider (EIC), can provide a unique opportunity for testing the Standard Model (SM) and probing for new physics. The combination of high luminosity with polarized electrons and positrons incident on protons and deuterons can isolate important effects and distinguish between possible new physics scenarios in a manner that will complement current experimental efforts. A comparison of cross sections between polarized electron and positron beams will allow for an extraction of the poorly known weak neutral current coupling combination 2C3u - C3d and would complement the proposed plan for a precision extraction of the combination 2C2u - Cd at the EIC. Precision measurements of these neutral weak couplings would constrain new physics scenarios including Leptoquarks, R-parity violating supersymmetry, and electron and quark compositeness. The dependence of the charged current cross section on the longitudinal polarization of the positron beam will provide an independent probe to test the chiral structure of the electroweak interactions. A polarized positron can probe charged lepton flavor violation (CLFV) through a search for e+ → τ+ transitions in a manner that is independent and complementary to the proposed e- → τ- search at the EIC. A positron beam incident on an electron in a stationary nuclear target will also allow for a dark-photon (A') search via the annihilation process e+ + e- → A' + γ.
NASA Astrophysics Data System (ADS)
Gong, X.; Wu, Q.
2017-12-01
Network virtual instrumentation (VI) is a new direction in automated testing. Based on LabVIEW, a software and hardware VI system for the emission spectroscopy of pulsed high-voltage direct current (DC) discharges is developed and applied to investigate the pulsed high-voltage DC discharge of nitrogen. It realizes various functions, including real-time collection of the nitrogen emission spectrum, monitoring of the instruments' operating state, and real-time analysis and processing of data. By using shared variables and DataSocket technology in LabVIEW, a network VI system based on the field VI is established. The system can acquire the emission spectrum of nitrogen at the test site, monitor the operating state of the field instruments, provide real-time interchange between the two sites, and analyze data remotely from a network terminal. Using the network VI system, the staff at the two sites acquired the same nitrogen emission spectrum and communicated in real time. Comparison with previous results shows that the experimental data obtained with the system are highly precise, implying that the system offers reliable network stability and safety and satisfies the requirements for studying the emission spectrum of pulsed high-voltage discharges in high-precision applications or at network terminals. The proposed system architecture is described and should provide useful guidance to remote engineering users, particularly for control- and automation-related tasks.
The tracking analysis in the Q-weak experiment
NASA Astrophysics Data System (ADS)
Pan, J.; Androic, D.; Armstrong, D. S.; Asaturyan, A.; Averett, T.; Balewski, J.; Beaufait, J.; Beminiwattha, R. S.; Benesch, J.; Benmokhtar, F.; Birchall, J.; Carlini, R. D.; Cates, G. D.; Cornejo, J. C.; Covrig, S.; Dalton, M. M.; Davis, C. A.; Deconinck, W.; Diefenbach, J.; Dowd, J. F.; Dunne, J. A.; Dutta, D.; Duvall, W. S.; Elaasar, M.; Falk, W. R.; Finn, J. M.; Forest, T.; Gaskell, D.; Gericke, M. T. W.; Grames, J.; Gray, V. M.; Grimm, K.; Guo, F.; Hoskins, J. R.; Johnston, K.; Jones, D.; Jones, M.; Jones, R.; Kargiantoulakis, M.; King, P. M.; Korkmaz, E.; Kowalski, S.; Leacock, J.; Leckey, J.; Lee, A. R.; Lee, J. H.; Lee, L.; MacEwan, S.; Mack, D.; Magee, J. A.; Mahurin, R.; Mammei, J.; Martin, J. W.; McHugh, M. J.; Meekins, D.; Mei, J.; Michaels, R.; Micherdzinska, A.; Mkrtchyan, A.; Mkrtchyan, H.; Morgan, N.; Myers, K. E.; Narayan, A.; Ndukum, L. Z.; Nelyubin, V.; Nuruzzaman; van Oers, W. T. H.; Opper, A. K.; Page, S. A.; Pan, J.; Paschke, K. D.; Phillips, S. K.; Pitt, M. L.; Poelker, M.; Rajotte, J. F.; Ramsay, W. D.; Roche, J.; Sawatzky, B.; Seva, T.; Shabestari, M. H.; Silwal, R.; Simicevic, N.; Smith, G. R.; Solvignon, P.; Spayde, D. T.; Subedi, A.; Subedi, R.; Suleiman, R.; Tadevosyan, V.; Tobias, W. A.; Tvaskis, V.; Waidyawansa, B.; Wang, P.; Wells, S. P.; Wood, S. A.; Yang, S.; Young, R. D.; Zhamkochyan, S.
2016-12-01
The Q-weak experiment at Jefferson Laboratory measured the parity-violating asymmetry (APV) in elastic electron-proton scattering at small momentum transfer squared (Q² = 0.025 (GeV/c)²), with the aim of extracting the proton's weak charge (Q_W^p) to an accuracy of 5%. As one of the major sources of uncertainty in Q_W^p, Q² needs to be determined to ~1% so as to reach the proposed experimental precision. For this purpose, two sets of high-resolution tracking chambers were employed in the experiment to measure tracks before and after the magnetic spectrometer. Data collected by the tracking system were then reconstructed with dedicated software into individual electron trajectories for the determination of the experimental kinematics. The Q-weak kinematics and the analysis scheme for the tracking data are briefly described here. The sources that contribute to the uncertainty of Q² are discussed, and the current analysis status is reported.
The International Linear Collider
NASA Astrophysics Data System (ADS)
List, Benno
2014-04-01
The International Linear Collider (ILC) is a proposed e+e- linear collider with a centre-of-mass energy of 200-500 GeV, based on superconducting RF cavities. The ILC would be an ideal machine for precision studies of a light Higgs boson and the top quark, and would have a discovery potential for new particles that is complementary to that of the LHC. The clean experimental conditions would allow the operation of detectors with extremely good performance; two such detectors, ILD and SiD, are currently being designed. Both make use of novel concepts for tracking and calorimetry. The Japanese high-energy physics community has recently recommended building the ILC in Japan.
Approximate Single-Diode Photovoltaic Model for Efficient I-V Characteristics Estimation
Ting, T. O.; Zhang, Nan; Guan, Sheng-Uei; Wong, Prudence W. H.
2013-01-01
Precise photovoltaic (PV) behavior models are normally described by nonlinear analytical equations. To solve such equations, it is necessary to use iterative procedures. Aiming to make the computation easier, this paper proposes an approximate single-diode PV model that enables high-speed predictions for the electrical characteristics of commercial PV modules. Based on the experimental data, statistical analysis is conducted to validate the approximate model. Simulation results show that the calculated current-voltage (I-V) characteristics fit the measured data with high accuracy. Furthermore, compared with the existing modeling methods, the proposed model reduces the simulation time by approximately 30% in this work. PMID:24298205
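The approximate model itself is not reproduced in the abstract. As background, the sketch below evaluates the standard implicit single-diode equation using its exact Lambert-W solution (via SciPy), which is the kind of computation the proposed approximation is designed to avoid; all module parameters are generic placeholders, not values from the paper.

```python
import numpy as np
from scipy.special import lambertw

def single_diode_current(V, Iph=8.0, I0=1e-9, n=1.3, Rs=0.3, Rsh=300.0, T=298.15, Ns=60):
    """Current of the standard single-diode model, solved with the Lambert W function.

    Implicit model: I = Iph - I0*(exp((V + I*Rs)/a) - 1) - (V + I*Rs)/Rsh,
    with modified ideality factor a = Ns*n*k*T/q.
    All parameter values are generic placeholders for a 60-cell module.
    """
    k, q = 1.380649e-23, 1.602176634e-19
    a = Ns * n * k * T / q
    arg = (Rs * I0 * Rsh / (a * (Rs + Rsh))
           * np.exp(Rsh * (Rs * (Iph + I0) + V) / (a * (Rs + Rsh))))
    return ((Iph + I0) * Rsh - V) / (Rs + Rsh) - (a / Rs) * lambertw(arg).real

# Sweep the toy module's I-V curve from short circuit towards open circuit.
V = np.linspace(0.0, 45.0, 10)
print(np.round(single_diode_current(V), 3))
```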
Transmission electron diffraction determination of the Ge(001)-(2 × 1) surface structure
NASA Astrophysics Data System (ADS)
Collazo-Davila, C.; Grozea, D.; Landree, E.; Marks, L. D.
1997-04-01
The lateral displacements in the Ge(001)-(2 × 1) surface reconstruction have been determined using transmission electron diffraction (TED). The best-fit model includes displacements extending six layers into the bulk. The atomic positions found agree with X-ray studies to within a few hundredths of an ångström. With the positions determined so precisely, it is suggested that the Ge(001)-(2 × 1) surface can now serve as a standard for comparison with theoretical surface structure calculations. The results from the currently available theoretical studies on the surface are compared with the experimentally determined structure.
Mechanism and experimental research on ultra-precision grinding of ferrite
NASA Astrophysics Data System (ADS)
Ban, Xinxing; Zhao, Huiying; Dong, Longchao; Zhu, Xueliang; Zhang, Chupeng; Gu, Yawen
2017-02-01
Ultra-precision grinding of ferrite is conducted to investigate the removal mechanism. Effect of the accuracy of machine tool key components on grinding surface quality is analyzed. The surface generation model of ferrite ultra-precision grinding machining is established. In order to reveal the surface formation mechanism of ferrite in the process of ultraprecision grinding, furthermore, the scientific and accurate of the calculation model are taken into account to verify the grinding surface roughness, which is proposed. Orthogonal experiment is designed using the high precision aerostatic turntable and aerostatic spindle for ferrite which is a typical hard brittle materials. Based on the experimental results, the influence factors and laws of ultra-precision grinding surface of ferrite are discussed through the analysis of the surface roughness. The results show that the quality of ferrite grinding surface is the optimal parameters, when the wheel speed of 20000r/mm, feed rate of 10mm/min, grinding depth of 0.005mm, and turntable rotary speed of 5r/min, the surface roughness Ra can up to 75nm.
DeSantis, Michael C; DeCenzo, Shawn H; Li, Je-Luen; Wang, Y M
2010-03-29
Standard deviation measurements of intensity profiles of stationary single fluorescent molecules are useful for studying axial localization, molecular orientation, and a fluorescence imaging system's spatial resolution. Here we report on the analysis of the precision of standard deviation measurements of intensity profiles of single fluorescent molecules imaged using an EMCCD camera. We have developed an analytical expression for the standard deviation measurement error of a single image as a function of the total number of detected photons, the background photon noise, and the camera pixel size. The theoretical results agree well with the experimental, simulation, and numerical integration results. Using this expression, we show that single-molecule standard deviation measurements offer nanometer precision for a large range of experimental parameters.
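The analytical expression itself is not given in the abstract. As an illustration of how photon number, background noise, and pixel size typically enter such precision estimates, the sketch below evaluates the well-known Thompson-Larson-Webb localization-precision formula; it is explicitly not the standard-deviation-error expression derived in the paper, and all parameter values are assumptions.

```python
import numpy as np

def localization_precision(N, s=150.0, a=100.0, b=2.0):
    """Thompson-Larson-Webb estimate of centroid localization precision (nm).

    N : total detected photons, s : PSF standard deviation (nm),
    a : pixel size (nm), b : background noise (photons/pixel, rms).
    Shown only to illustrate how photon number, background, and pixel size
    enter such precision estimates; not the expression derived in the paper.
    """
    return np.sqrt((s**2 + a**2 / 12.0) / N + 8.0 * np.pi * s**4 * b**2 / (a**2 * N**2))

for N in (100, 1000, 10000):
    print(f"N = {N:5d} photons -> precision ~ {localization_precision(N):.1f} nm")
```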
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marsman, A.; Horbatsch, M.; Hessels, E. A., E-mail: hessels@yorku.ca
2015-09-15
For many decades, improvements in both theory and experiment on the fine structure of the n = 2 triplet P levels of helium have allowed for an increasingly precise determination of the fine-structure constant. Recently, it has been observed that quantum-mechanical interference between neighboring resonances can cause significant shifts, even if such neighboring resonances are separated by thousands of natural widths. The shifts depend in detail on the experimental method used for the measurement, as well as the specific experimental parameters employed. Here, we review how these shifts apply to the most precise measurements of the helium 2³P fine-structure intervals.
Dynamic alignment, tolerances, and metrology fundamentals at the nano and micro scales
NASA Astrophysics Data System (ADS)
Silberman, Donn M.
2015-09-01
Although the terms "micropositioning" and "nanopositioning" refer to different classes of positioning systems, "nanopositioning" is often used mistakenly to describe micropositioning systems. Micropositioning systems are typically motor-driven stages with travel ranges of a few millimeters up to a few hundred millimeters. Because the guiding systems in such stages (usually bearings of some kind) generate frictional forces, their resolution and repeatability are typically limited to 0.1 μm. The working principle of the guiding system also adds errors that are typically in the micrometer range. Nanopositioning systems are typically based on frictionless drives and guiding systems such as piezo actuators and flexures. These systems can achieve resolutions and guiding accuracies down to the sub-nanometer level. Both of these classes of precision positioning and motion systems are used extensively in precision optical and photonic systems to achieve the desired performance specifications of instruments and experimental research projects. Currently, many precision positioning and motion systems have been designed and implemented to cross over from the micro to the nano range, with excellent results. This paper describes some of the fundamental performance parameters and tolerances typical of these systems, some of the metrology used to confirm specifications, and a few high-end applications of general interest.
Ye, Yusen; Gao, Lin; Zhang, Shihua
2017-01-01
Transcription factors play a key role in the transcriptional regulation of genes and the determination of cellular identity through combinatorial interactions. However, current studies of combinatorial regulation are limited by the lack of experimental data from the same cellular environment and by extensive data noise. Here, we adopt a Bayesian CANDECOMP/PARAFAC (CP) factorization approach (BCPF) to integrate multiple datasets in a network paradigm for determining precise TF interaction landscapes. In our first application, we apply BCPF to integrate three networks, built from diverse datasets of multiple cell lines from ENCODE, to predict a global and precise TF interaction network. This network yields 38 novel TF interactions with distinct biological functions. In our second application, we apply BCPF to seven cell type TF regulatory networks and predict seven cell lineage TF interaction networks, respectively. By further exploring their dynamics and modularity, we find that cell lineage-specific hub TFs participate in cell type- or lineage-specific regulation by interacting with non-specific TFs. Furthermore, we illustrate the biological function of hub TFs by taking those of the cancer and blood lineages as examples. Taken together, our integrative analysis provides a more precise and extensive description of human TF combinatorial interactions. PMID:29033978
Ye, Yusen; Gao, Lin; Zhang, Shihua
2017-01-01
Transcription factors play a key role in the transcriptional regulation of genes and the determination of cellular identity through combinatorial interactions. However, current studies of combinatorial regulation are limited by the lack of experimental data from the same cellular environment and by extensive data noise. Here, we adopt a Bayesian CANDECOMP/PARAFAC (CP) factorization approach (BCPF) to integrate multiple datasets in a network paradigm for determining precise TF interaction landscapes. In our first application, we apply BCPF to integrate three networks, built from diverse datasets of multiple cell lines from ENCODE, to predict a global and precise TF interaction network. This network yields 38 novel TF interactions with distinct biological functions. In our second application, we apply BCPF to seven cell type TF regulatory networks and predict seven cell lineage TF interaction networks, respectively. By further exploring their dynamics and modularity, we find that cell lineage-specific hub TFs participate in cell type- or lineage-specific regulation by interacting with non-specific TFs. Furthermore, we illustrate the biological function of hub TFs by taking those of the cancer and blood lineages as examples. Taken together, our integrative analysis provides a more precise and extensive description of human TF combinatorial interactions.
Shaping of nested potentials for electron cooling of highly-charged ions in a cooler Penning trap
NASA Astrophysics Data System (ADS)
Paul, Stefan; Kootte, Brian; Lascar, Daniel; Gwinner, Gerald; Dilling, Jens; Titan Collaboration
2016-09-01
TRIUMF's Ion Trap for Atomic and Nuclear science (TITAN) is dedicated to mass spectrometry and decay spectroscopy of short-lived radioactive nuclides in a series of ion traps including a precision Penning trap. In order to boost the achievable precision of mass measurements TITAN deploys an Electron Beam Ion Trap (EBIT) providing Highly-Charged Ions (HCI). However, the charge breeding process in the EBIT leads to an increase in the ion bunch's energy spread which is detrimental to the overall precision gain. To reduce this effect a new cylindrical Cooler PEnning Trap (CPET) is being commissioned to sympathetically cool the HCI via a simultaneously trapped electron plasma. Simultaneous trapping of ions and electrons requires a high level of control over the nested potential landscape and sophisticated switching schemes for the voltages on CPET's multiple ring electrodes. For this purpose, we are currently setting up a new experimental control system for multi-channel voltage switching. The control system employs a Raspberry Pi communicating with a digital-to-analog board via a serial peripheral interface. We report on the implementation of the voltage control system and its performance with respect to electron and ion manipulation in CPET. University of British Columbia, Vancouver, BC, Canada.
Precise Temperature Mapping of GaN-Based LEDs by Quantitative Infrared Micro-Thermography
Chang, Ki Soo; Yang, Sun Choel; Kim, Jae-Young; Kook, Myung Ho; Ryu, Seon Young; Choi, Hae Young; Kim, Geon Hee
2012-01-01
A method of measuring the precise temperature distribution of GaN-based light-emitting diodes (LEDs) by quantitative infrared micro-thermography is reported. To reduce the calibration error, the same measuring conditions were used for both calibration and thermal imaging; calibration was conducted on a highly emissive black-painted area on a dummy sapphire wafer loaded near the LED wafer on a thermoelectric cooler mount. We used infrared thermal radiation images of the black-painted area on the dummy wafer and of an unbiased LED wafer at two different temperatures to determine the factors that degrade the accuracy of the temperature measurement, i.e., the non-uniform response of the instrument, the superimposed offset radiation, the reflected radiation, and the emissivity map of the LED surface. By correcting these factors in the measured infrared thermal radiation images of biased LEDs, we determined a precise absolute temperature image. Consequently, we could observe where the local self-heating emerges and how it is distributed over the emitting area of the LEDs. The experimental results demonstrate that highly localized self-heating and a remarkable temperature gradient, which are detrimental to LED performance and reliability, arise near the p-contact edge of the LED surface at high injection levels owing to the current crowding effect. PMID:22666050
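A minimal sketch of the kind of per-pixel correction described above follows: the object radiance is recovered by subtracting the offset and reflected contributions and dividing by the emissivity, then mapped to temperature through a calibration curve taken on the black-painted reference. All numerical values and helper names are illustrative assumptions, not the paper's calibration data.

```python
import numpy as np

def object_radiance(measured, offset, reflected, emissivity):
    """Recover the radiance emitted by the surface itself.

    measured  : raw detector signal (per pixel)
    offset    : superimposed offset radiation (e.g. from the unbiased wafer image)
    reflected : radiation reflected by the surface, weighted by (1 - emissivity)
    """
    return (measured - offset - (1.0 - emissivity) * reflected) / emissivity

def radiance_to_temperature(radiance, cal_radiance, cal_temps):
    """Map radiance to temperature via a calibration curve from the black reference."""
    return np.interp(radiance, cal_radiance, cal_temps)

# Toy per-pixel example with made-up calibration points.
cal_radiance = np.array([1.00, 1.20, 1.45, 1.75])   # signal on the black-painted reference
cal_temps = np.array([25.0, 35.0, 45.0, 55.0])      # reference temperatures, deg C
L_obj = object_radiance(measured=1.30, offset=0.10, reflected=1.05, emissivity=0.65)
print(f"pixel temperature ~ {radiance_to_temperature(L_obj, cal_radiance, cal_temps):.1f} C")
```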
Keller, Martina; Gutjahr, Christoph; Möhring, Jens; Weis, Martin; Sökefeld, Markus; Gerhards, Roland
2014-02-01
Precision experimental design uses the natural heterogeneity of agricultural fields and combines sensor technology with linear mixed models to estimate the effect of weeds, soil properties and herbicide on yield. These estimates can be used to derive economic thresholds. Three field trials are presented using the precision experimental design in winter wheat. Weed densities were determined by manual sampling and bi-spectral cameras, yield and soil properties were mapped. Galium aparine, other broad-leaved weeds and Alopecurus myosuroides reduced yield by 17.5, 1.2 and 12.4 kg ha(-1) plant(-1) m(2) in one trial. The determined thresholds for site-specific weed control with independently applied herbicides were 4, 48 and 12 plants m(-2), respectively. Spring drought reduced yield effects of weeds considerably in one trial, since water became yield limiting. A negative herbicide effect on the crop was negligible, except in one trial, in which the herbicide mixture tended to reduce yield by 0.6 t ha(-1). Bi-spectral cameras for weed counting were of limited use and still need improvement. Nevertheless, large weed patches were correctly identified. The current paper presents a new approach to conducting field trials and deriving decision rules for weed control in farmers' fields. © 2013 Society of Chemical Industry.
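As an illustration of how such loss coefficients can be turned into economic thresholds, the sketch below divides an assumed treatment cost by the value of the avoided yield loss per unit weed density. The grain price, herbicide cost, and efficacy are placeholders, so the resulting numbers differ from the thresholds reported above.

```python
def economic_threshold(loss_kg_per_plant, grain_price_eur_per_kg=0.20,
                       treatment_cost_eur_per_ha=35.0, efficacy=0.9):
    """Weed density (plants/m^2) above which control pays off.

    loss_kg_per_plant : yield loss in kg/ha per (plant/m^2), from the mixed model.
    The grain price, herbicide cost, and efficacy are illustrative assumptions.
    """
    return treatment_cost_eur_per_ha / (loss_kg_per_plant * grain_price_eur_per_kg * efficacy)

# Loss coefficients reported in the abstract (kg/ha per plant/m^2).
for species, loss in [("Galium aparine", 17.5),
                      ("other broad-leaved weeds", 1.2),
                      ("Alopecurus myosuroides", 12.4)]:
    print(f"{species}: threshold ~ {economic_threshold(loss):.0f} plants/m^2")
```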
NASA Astrophysics Data System (ADS)
Rogov, A.; Pepyolyshev, Yu.; Carta, M.; d'Angelo, A.
Scintillation detectors (SD) are widely used in neutron and gamma spectrometry in a counting mode. The organic scintillators for the counting mode of detector operation are investigated rather well. Usually, they are applied for measurements of the amplitude and time distributions of pulses caused by single interaction events of neutrons or gammas with the scintillator material. However, in a large area of scientific research scintillation detectors can alternatively be used in a current mode, by recording the average current from the detector, for example in measurements of the neutron pulse shape at pulsed reactors or other pulsed neutron sources. To obtain a rather large volume of experimental data at pulsed neutron sources, it is necessary to use a current-mode detector for the registration of fast neutrons. Many parameters of the SD change with the transition from a counting mode to a current mode; for example, the detector efficiency is different in the two modes, and many effects connected with timing accuracy become substantial. Besides, for the registration of fast neutrons only, as required in many measurements in the mixed radiation field of pulsed neutron sources, the SD efficiency has to be determined with a gamma-radiation shield present. Until now there have been no calculations or experimental data on SD current-mode operation. The response functions of the detectors can be either measured in high-precision reference fields or calculated by computer simulation. We have used the MCNP code [1] and carried out some experiments to investigate the performance of the plastic scintillator in a current mode. There are numerous programs performing simulations similar to the MCNP code: for example, for neutrons [2-4] and for photons [5-8]. However, all of the known codes (SCINFUL, NRESP4, SANDYL, EGS4) have more stringent restrictions on the source, geometry and detector characteristics. In the MCNP code many of these restrictions are absent, and one needs only to write special additions for proton and electron recoil and for the conversion of the transferred energy to light output. These code modifications make it possible to take into account all processes in the organic scintillator that influence the light yield.
Proceedings of the Workshop on Improvements to Photometry
NASA Technical Reports Server (NTRS)
Borucki, W. J. (Editor); Young, A. T. (Editor)
1984-01-01
The purposes of the workshop were to determine what astronomical problems would benefit by increased photometric precision, determine the current level of precision, identify the processes limiting the precision, and recommend approaches to improving photometric precision. Twenty representatives of the university, industry, and government communities participated. Results and recommendations are discussed.
NASA Astrophysics Data System (ADS)
Villani, Clemente; Balsamo, Domenico; Brunelli, Davide; Benini, Luca
2015-05-01
Monitoring current and voltage waveforms is fundamental to assessing the power consumption of a system and to improving its energy efficiency. In this paper we present a smart meter for power consumption which does not need any electrical contact with the load or its conductors, and which can measure both current and voltage. Power metering becomes easier and safer, and it is also self-sustainable because an energy harvesting module based on inductive coupling powers the entire device from the output of the current sensor. A low-cost 32-bit wireless CPU architecture is used for data filtering and processing, while a wireless transceiver sends data via the IEEE 802.15.4 standard. We describe in detail the innovative contact-less voltage measurement system, which is based on capacitive coupling and on an algorithm that exploits two pre-processing channels. The system self-calibrates to perform precise measurements regardless of the cable type. Experimental results demonstrate accuracy in comparison with commercial high-cost instruments, showing negligible deviations.
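The core processing of such a meter reduces to combining the sampled voltage and current waveforms. The sketch below computes real power, RMS values, and power factor from synthetic 50 Hz waveforms; it is only a schematic illustration of that processing, not the device firmware.

```python
import numpy as np

def power_metrics(v, i):
    """Real power, RMS values, and power factor from sampled waveforms."""
    p = np.mean(v * i)                      # real (active) power, W
    v_rms, i_rms = np.sqrt(np.mean(v**2)), np.sqrt(np.mean(i**2))
    return p, v_rms, i_rms, p / (v_rms * i_rms)

# Synthetic 230 V / 50 Hz mains with a slightly lagging 2 A load, sampled at 10 kHz.
fs, f = 10_000, 50.0
t = np.arange(0, 0.2, 1.0 / fs)
v = 230.0 * np.sqrt(2) * np.sin(2 * np.pi * f * t)
i = 2.0 * np.sqrt(2) * np.sin(2 * np.pi * f * t - np.pi / 6)
p, v_rms, i_rms, pf = power_metrics(v, i)
print(f"P = {p:.1f} W, Vrms = {v_rms:.1f} V, Irms = {i_rms:.2f} A, PF = {pf:.2f}")
```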
Ronzitti, Emiliano; Conti, Rossella; Zampini, Valeria; Tanese, Dimitrii; Klapoetke, Nathan; Boyden, Edward S.; Papagiakoumou, Eirini
2017-01-01
Optogenetic neuronal network manipulation promises to unravel a long-standing mystery in neuroscience: how does microcircuit activity relate causally to behavioral and pathological states? The challenge to evoke spikes with high spatial and temporal complexity necessitates further joint development of light-delivery approaches and custom opsins. Two-photon (2P) light-targeting strategies demonstrated in-depth generation of action potentials in photosensitive neurons both in vitro and in vivo, but thus far lack the temporal precision necessary to induce precisely timed spiking events. Here, we show that efficient current integration enabled by 2P holographic amplified laser illumination of Chronos, a highly light-sensitive and fast opsin, can evoke spikes with submillisecond precision and repeated firing up to 100 Hz in brain slices from Swiss male mice. These results pave the way for optogenetic manipulation with the spatial and temporal sophistication necessary to mimic natural microcircuit activity. SIGNIFICANCE STATEMENT To reveal causal links between neuronal activity and behavior, it is necessary to develop experimental strategies to induce spatially and temporally sophisticated perturbation of network microcircuits. Two-photon computer generated holography (2P-CGH) recently demonstrated 3D optogenetic control of selected pools of neurons with single-cell accuracy in depth in the brain. Here, we show that exciting the fast opsin Chronos with amplified laser 2P-CGH enables cellular-resolution targeting with unprecedented temporal control, driving spiking up to 100 Hz with submillisecond onset precision using low laser power densities. This system achieves a unique combination of spatial flexibility and temporal precision needed to pattern optogenetically inputs that mimic natural neuronal network activity patterns. PMID:28972125
Mason, Ann M; Borgert, Christopher J; Bus, James S; Moiz Mumtaz, M; Simmons, Jane Ellen; Sipes, I Glenn
2007-09-01
Risk assessments are enhanced when policy and other decision-makers have access to experimental science designed to specifically inform key policy questions. Currently, our scientific understanding and science policy for environmental mixtures are based largely on extrapolating from and combining data in the observable range of single chemical toxicity to lower environmental concentrations and composition, i.e., using higher dose data to extrapolate and predict lower dose toxicity. There is a growing consensus that the default assumptions underlying those mixtures risk assessments that are conducted in the absence of actual mixtures data rest on an inadequate scientific database. Future scientific research should both build upon the current science and advance toxicology into largely uncharted territory. More precise approaches to better characterize toxicity of mixtures are needed. The Society of Toxicology (SOT) sponsored a series of panels, seminars, and workshops to help catalyze and improve the design and conduct of experimental toxicological research to better inform risk assessors and decision makers. This paper summarizes the activities of the SOT Mixtures Program and serves as the introductory paper to a series of articles in this issue, which hope to inspire innovative research and challenge the status quo.
Bardella, Paolo; Columbo, Lorenzo Luigi; Gioannini, Mariangela
2017-10-16
Optical frequency combs (OFCs) generated by semiconductor lasers are currently widely used in the extremely timely fields of high-capacity optical interconnects and high-precision spectroscopy. In the last decade, several experimental observations of spontaneous OFC generation have been reported in single-section Quantum Dot (QD) lasers. Here we provide a physical understanding of these self-organization phenomena by simulating the multi-mode dynamics of a single-section Fabry-Perot (FP) QD laser using a Time-Domain Traveling-Wave (TDTW) model that properly accounts for coherent radiation-matter interaction in the semiconductor active medium and includes the carrier grating generated by the optical standing-wave pattern in the laser cavity. We show that the latter is the fundamental physical effect at the origin of the multi-mode spectrum appearing just above threshold. A self-mode-locking regime associated with the emission of an OFC is achieved for higher bias currents and is ascribed to nonlinear phase-sensitive effects such as Four-Wave Mixing (FWM). Our results explain in detail the behaviour observed experimentally by different research groups and in different QD and Quantum Dash (QDash) devices.
An accurate coarse-grained model for chitosan polysaccharides in aqueous solution.
Tsereteli, Levan; Grafmüller, Andrea
2017-01-01
Computational models can provide detailed information about molecular conformations and interactions in solution, which is currently inaccessible by other means in many cases. Here we describe an efficient and precise coarse-grained model for long polysaccharides in aqueous solution at different physico-chemical conditions such as pH and ionic strength. The model is carefully constructed based on all-atom simulations of small saccharides and metadynamics sampling of the dihedral angles in the glycosidic links, which represent the most flexible degrees of freedom of the polysaccharides. The model is validated against experimental data for chitosan molecules in solution with various degrees of deacetylation, and is shown to closely reproduce the available experimental data. For long polymers, subtle differences in the free energy maps of the glycosidic links are found to significantly affect the measurable polymer properties. Therefore, for titratable monomers the free energy maps of the corresponding links are updated according to the current charge of the monomers. We then characterize the microscopic and mesoscopic structural properties of large chitosan polysaccharides in solution for a wide range of solvent pH and ionic strength, and investigate the effect of polymer length and of the degree and pattern of deacetylation on the polymer properties.
Hobbs, Brian P.; Carlin, Bradley P.; Mandrekar, Sumithra J.; Sargent, Daniel J.
2011-01-01
Bayesian clinical trial designs offer the possibility of a substantially reduced sample size, increased statistical power, and reductions in cost and ethical hazard. However, when prior and current information conflict, Bayesian methods can lead to higher than expected Type I error, as well as the possibility of a costlier and lengthier trial. This motivates an investigation of the feasibility of hierarchical Bayesian methods for incorporating historical data that are adaptively robust to prior information that reveals itself to be inconsistent with the accumulating experimental data. In this paper, we present several models that allow the commensurability of the information in the historical and current data to determine how much historical information is used. A primary tool is an elaboration of the traditional power prior approach based upon a measure of commensurability for Gaussian data. We compare the frequentist performance of several methods using simulations, and close with an example of a colon cancer trial that illustrates a linear models extension of our adaptive borrowing approach. Our proposed methods produce more precise estimates of the model parameters, in particular conferring statistical significance to the observed reduction in tumor size for the experimental regimen as compared to the control regimen. PMID:21361892
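To make the borrowing idea concrete, the sketch below works out the conjugate posterior for a Gaussian mean under a fixed-weight power prior, in which the historical likelihood is raised to a power a0 between 0 and 1. The known sampling variance, the vague initial prior, and the fixed a0 are simplifying assumptions for illustration; the commensurate-prior models described above let the data determine the degree of borrowing rather than fixing it.

```python
import numpy as np

def power_prior_posterior(y_hist, y_curr, sigma2, a0, mu0=0.0, tau2=1e6):
    """Posterior mean and variance of a Gaussian mean under a power prior:
    a0 = 0 ignores the historical data, a0 = 1 pools it fully.  Assumes a
    known sampling variance sigma2 and a vague N(mu0, tau2) initial prior."""
    prec = 1.0 / tau2 + a0 * len(y_hist) / sigma2 + len(y_curr) / sigma2
    mean = (mu0 / tau2 + a0 * np.sum(y_hist) / sigma2
            + np.sum(y_curr) / sigma2) / prec
    return mean, 1.0 / prec

# Historical data consistent with the current trial tighten the estimate.
rng = np.random.default_rng(0)
hist, curr = rng.normal(1.0, 1.0, 50), rng.normal(1.0, 1.0, 20)
print(power_prior_posterior(hist, curr, sigma2=1.0, a0=0.5))
```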
Personalized medicine and chronic obstructive pulmonary disease.
Wouters, E F M; Wouters, B B R A F; Augustin, I M L; Franssen, F M E
2017-05-01
The current review summarizes ongoing developments in personalized medicine and precision medicine in chronic obstructive pulmonary disease (COPD). Our current approach is far from personalized management algorithms, as current recommendations for COPD are largely based on a reductionist disease description, operationally defined by the results of spirometry. Besides precision medicine developments, a personalized medicine approach in COPD is described based on a holistic view of the patient, considering illness as the consequence of dynamic interactions within and between multiple interacting and self-adjusting systems. Pulmonary rehabilitation is described as a model of personalized medicine. Largely based on the current understanding of inflammatory processes in COPD, targeted interventions in COPD are reviewed. Augmentation therapy for α-1-antitrypsin deficiency is described as a model of precision medicine in COPD, based on a profound understanding of the related genetic endotype. Future developments of precision medicine in COPD require the identification of relevant endotypes combined with proper identification of the phenotypes involved in the complex and heterogeneous manifestations of COPD.
Novel linear piezoelectric motor for precision position stage
NASA Astrophysics Data System (ADS)
Chen, Chao; Shi, Yunlai; Zhang, Jun; Wang, Junshan
2016-03-01
Conventional servomotors and stepping motors face challenges in nanometer positioning stages due to their complex structure, motion-transformation mechanisms, and slow dynamic response, which motivates direct drive by a linear motor. A new butterfly-shaped linear piezoelectric motor for linear motion is presented. A two-degree-of-freedom precision position stage driven by the proposed linear ultrasonic motor possesses a simple and compact configuration, which gives the system a shorter driving chain. Firstly, the working principle of the linear ultrasonic motor is analyzed. The oscillation orbits of two driving feet on the stator are produced successively by using the anti-symmetric and symmetric vibration modes of the piezoelectric composite structure, and the slider pressed on the driving feet can be propelled twice in only one vibration cycle. Then, with the derivation of the dynamic equation of the piezoelectric actuator and a transient response model, the start-up and settling-state characteristics of the proposed linear actuator are investigated theoretically and experimentally, which is applicable to evaluating the step resolution of the precision platform driven by the actuator. Moreover, the structure of the two-degree-of-freedom position stage system is described and a special precision displacement measurement system is built. Finally, the characteristics of the two-degree-of-freedom position stage are studied. In the closed-loop condition, a positioning accuracy within ±0.5 μm is experimentally obtained for the stage propelled by the piezoelectric motor. A precision position stage based on the proposed butterfly-shaped linear piezoelectric motor is thus theoretically and experimentally investigated.
Łabaj, Paweł P; Leparc, Germán G; Linggi, Bryan E; Markillie, Lye Meng; Wiley, H Steven; Kreil, David P
2011-07-01
Measurement precision determines the power of any analysis to reliably identify significant signals, such as in screens for differential expression, independent of whether the experimental design incorporates replicates or not. With the compilation of large-scale RNA-Seq datasets with technical replicate samples, however, we can now, for the first time, perform a systematic analysis of the precision of expression level estimates from massively parallel sequencing technology. This then allows considerations for its improvement by computational or experimental means. We report on a comprehensive study of target identification and measurement precision, including their dependence on transcript expression levels, read depth and other parameters. In particular, an impressive recall of 84% of the estimated true transcript population could be achieved with 331 million 50 bp reads, with diminishing returns from longer read lengths and even smaller gains from increased sequencing depths. Most of the measurement power (75%) is spent on only 7% of the known transcriptome, however, making less strongly expressed transcripts harder to measure. Consequently, fewer than 30% of all transcripts could be quantified reliably with a relative error below 20%. Based on established tools, we then introduce a new approach for mapping and analysing sequencing reads that yields substantially improved performance in gene expression profiling, increasing the number of transcripts that can reliably be quantified to over 40%. Extrapolations to higher sequencing depths highlight the need for efficient complementary steps. In the discussion we outline possible experimental and computational strategies for further improvements in quantification precision. rnaseq10@boku.ac.at
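The two figures of merit discussed above, recall of the truly expressed transcripts and the fraction quantified within a given relative error, can be computed from paired true and estimated expression values as in the toy sketch below. The detection rule (any non-zero estimate counts as detected) and the function name are illustrative assumptions; the study's actual definitions are more elaborate.

```python
import numpy as np

def quantification_summary(true_expr, est_expr, rel_err_cutoff=0.2):
    """Recall of truly expressed transcripts and the fraction of them
    quantified within rel_err_cutoff, computed on matched arrays of
    true and estimated expression levels."""
    true_expr = np.asarray(true_expr, dtype=float)
    est_expr = np.asarray(est_expr, dtype=float)
    expressed = true_expr > 0
    recall = np.mean(est_expr[expressed] > 0)
    rel_err = np.abs(est_expr[expressed] - true_expr[expressed]) / true_expr[expressed]
    reliable = np.mean(rel_err < rel_err_cutoff)
    return recall, reliable
```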
Clarke, Patrick J F; Branson, Sonya; Chen, Nigel T M; Van Bockstaele, Bram; Salemink, Elske; MacLeod, Colin; Notebaert, Lies
2017-12-01
Attention bias modification (ABM) procedures have shown promise as a therapeutic intervention; however, current ABM procedures have proven inconsistent in their ability to reliably achieve the requisite change in attentional bias needed to produce emotional benefits. This highlights the need to better understand the precise task conditions that facilitate the intended change in attentional bias in order to realise the therapeutic potential of ABM procedures. Based on the observation that change in attentional bias occurs largely outside conscious awareness, the aim of the current study was to determine whether an ABM procedure delivered under conditions likely to preclude explicit awareness of the experimental contingency, via the addition of a working memory load, would contribute to greater change in attentional bias. Bias change was assessed among 122 participants in response to one of four ABM tasks defined by two experimental factors: ABM training delivered either with or without a working memory load, and a training direction of either attend-negative or avoid-negative. Findings revealed that the avoid-negative ABM procedure under working memory load resulted in significantly greater reductions in attentional bias compared to the equivalent no-load condition. The current findings will require replication with clinical samples to determine the utility of the current task for achieving emotional benefits. The present findings are consistent with the position that the addition of a working memory load may facilitate change in attentional bias in response to an ABM training procedure. Copyright © 2017 Elsevier Ltd. All rights reserved.
Verification technology of remote sensing camera satellite imaging simulation based on ray tracing
NASA Astrophysics Data System (ADS)
Gu, Qiongqiong; Chen, Xiaomei; Yang, Deyun
2017-08-01
Remote sensing satellite camera imaging simulation technology is broadly used to evaluate satellite imaging quality and to test data application systems, but the simulation precision is hard to examine. In this paper, we propose an experimental simulation verification method based on comparing the variation of test parameters. For the ray-tracing-based simulation model, the experiment verifies the model precision by changing the types of devices, which correspond to the parameters of the model. The experimental results show that the similarity between the image produced by the ray-tracing imaging model and the experimental image is 91.4%, indicating that the model simulates the remote sensing satellite imaging system well.
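The abstract reports a 91.4% similarity between simulated and experimental images without specifying the metric. The sketch below uses zero-mean normalized cross-correlation as one plausible, illustrative choice for such an image-similarity score; it is an assumption, not the authors' method.

```python
import numpy as np

def ncc_similarity(img_a, img_b):
    """Zero-mean normalized cross-correlation between two equally sized
    images, expressed as a percentage (100% for identical images)."""
    a = np.asarray(img_a, dtype=float) - np.mean(img_a)
    b = np.asarray(img_b, dtype=float) - np.mean(img_b)
    denom = np.sqrt(np.sum(a**2) * np.sum(b**2))
    return 100.0 * float(np.sum(a * b) / denom)
```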
Solar axion search technique with correlated signals from multiple detectors
Xu, Wenqin; Elliott, Steven R.
2017-01-25
The coherent Bragg scattering of photons converted from solar axions inside crystals would boost the signal for axion-photon coupling, enhancing experimental sensitivity for these hypothetical particles. Knowledge of the scattering angle of solar axions with respect to the crystal lattice is required to make theoretical predictions of signal strength. Hence, both the lattice axis angle within a crystal and the absolute angle between the crystal and the Sun must be known. In this paper, we examine how the experimental sensitivity changes with respect to various experimental parameters. We also demonstrate that, in a multiple-crystal setup, knowledge of the relative axis orientation between multiple crystals can improve the experimental sensitivity, or equivalently, relax the precision on the absolute solar angle measurement. However, if the absolute angles of all crystal axes are measured, we find that a precision of 2°–4° will suffice for an energy resolution of σE = 0.04E and a flat background. Lastly, we also show that, given a minimum number of detectors, a signal model averaged over angles can substitute for precise crystal angular measurements, with some loss of sensitivity.
NASA Astrophysics Data System (ADS)
Choi, S. G.; Kim, S. H.; Choi, W. K.; Moon, G. C.; Lee, E. S.
2017-06-01
Shape memory alloy (SMA) is an important material used in the medical and aerospace industries due to its characteristic shape memory effect, which involves the recovery of the deformed alloy to its original state through the application of temperature or stress. Consumers in modern society demand stability in parts, and electrochemical machining is one of the methods for meeting these stability requirements. Some applications require fine patterns on shape memory alloy parts, and electrochemical machining is well suited to machining such fine patterns. For precision electrochemical machining with differently shaped electrodes, the current density must be controlled precisely, and an appropriate electrode shape is required. Precise square holes can be obtained on the SMA if an insulation layer suppresses the unnecessary current between the electrode and the workpiece; controlling this unnecessary current to obtain the desired shape would be a great contribution to the medical and aerospace industries. It is possible to machine a desired shape into the shape memory alloy by finely controlling the unnecessary current. A square electrode without an insulation layer produces inexact square holes because of the unnecessary current, whereas an electrode insulated only on its sides produces precise square holes. The removal rate is also improved with the insulated electrode because the insulation layer concentrates the applied current in the machining zone.
EDDIX--a database of ionisation double differential cross sections.
MacGibbon, J H; Emerson, S; Liamsuwan, T; Nikjoo, H
2011-02-01
Monte Carlo track structure simulation is a method of choice in biophysical modelling and calculations. To precisely model 3D and 4D tracks, the cross section for ionisation by an incoming ion, double differential in the outgoing electron energy and angle, is required. However, the double differential cross section cannot be theoretically modelled over the full range of parameters. To address this issue, a database of all available experimental data has been constructed. Currently, the database of Experimental Double Differential Ionisation Cross sections (EDDIX) contains over 1200 digitised experimentally measured datasets from the 1960s to the present date, covering all available ion species (hydrogen to uranium) and all available target species. Double differential cross sections are also presented with the aid of an eight-parameter function fitted to the cross sections. The parameters include projectile species and charge, target nuclear charge and atomic mass, projectile atomic mass and energy, and electron energy and deflection angle. It is planned to freely distribute EDDIX and make it available to the radiation research community for use in the analytical and numerical modelling of track structure.
Calvo, Roque; D’Amato, Roberto; Gómez, Emilio; Domingo, Rosario
2016-01-01
Coordinate measuring machines (CMM) are main instruments of measurement in laboratories and in industrial quality control. A compensation error model has been formulated (Part I). It integrates error and uncertainty in the feature measurement model. Experimental implementation for the verification of this model is carried out based on direct testing on a moving-bridge CMM. The regression results by axis are quantified and compared to the CMM indication with respect to the assigned values of the measurand. Next, testing of selected measurements of length, flatness, dihedral angle, and roundness features is accomplished. The measurement of calibrated gauge blocks for length or angle, flatness verification of the CMM granite table, and roundness of a precision glass hemisphere are presented under a setup of repeatability conditions. The results are analysed and compared with alternative methods of estimation. The overall performance of the model is endorsed through experimental verification, as well as its practical use and its capability to contribute to the improvement of current standard CMM measuring capabilities. PMID:27754441
Experimentally probing topological order and its breakdown through modular matrices
NASA Astrophysics Data System (ADS)
Luo, Zhihuang; Li, Jun; Li, Zhaokai; Hung, Ling-Yan; Wan, Yidun; Peng, Xinhua; Du, Jiangfeng
2018-02-01
The modern concept of phases of matter has undergone tremendous developments since the first observation of topologically ordered states in fractional quantum Hall systems in the 1980s. In this paper, we explore the following question: in principle, how much detail of the physics of topological orders can be observed using state of the art technologies? We find that using surprisingly little data, namely the toric code Hamiltonian in the presence of generic disorders and detuning from its exactly solvable point, the modular matrices--characterizing anyonic statistics that are some of the most fundamental fingerprints of topological orders--can be reconstructed with very good accuracy solely by experimental means. This is an experimental realization of these fundamental signatures of a topological order, a test of their robustness against perturbations, and a proof of principle--that current technologies have attained the precision to identify phases of matter and, as such, probe an extended region of phase space around the soluble point before its breakdown. Given the special role of anyonic statistics in quantum computation, our work promises myriad applications both in probing and realistically harnessing these exotic phases of matter.
Precision Medicine in Gastrointestinal Pathology.
Wang, David H; Park, Jason Y
2016-05-01
Precision medicine is the promise of individualized therapy and management of patients based on their personal biology. There are now multiple global initiatives to perform whole-genome sequencing on millions of individuals. In the United States, an early program was the Million Veteran Program, and a more recent proposal in 2015 by the president of the United States is the Precision Medicine Initiative. To implement precision medicine in routine oncology care, genetic variants present in tumors need to be matched with effective clinical therapeutics. When we focus on the current state of precision medicine for gastrointestinal malignancies, it becomes apparent that there is a mixed history of success and failure. The objective of this review is to present the current state of precision medicine using gastrointestinal oncology as a model. We present currently available targeted therapeutics, promising new findings in clinical genomic oncology, remaining quality issues in genomic testing, and emerging oncology clinical trial designs, drawing on a review of the literature that includes clinical genomic studies on gastrointestinal malignancies, clinical oncology trials on therapeutics targeted to molecular alterations, and emerging clinical oncology study designs. Translating our ability to sequence thousands of genes into meaningful improvements in patient survival will be the challenge for the next decade.
Precision manipulation with a dextrous robot hand
NASA Astrophysics Data System (ADS)
Michelman, Paul
1994-01-01
In this thesis, we discuss a framework for describing and synthesizing precision manipulation tasks with a robot hand. Precision manipulations are those in which the motions of grasped objects are caused by finger motions alone (as distinct from arm or wrist motion). Experiments demonstrating the capabilities of the Utah-MIT hand are presented. This work begins by examining current research on biological motor control to raise a number of questions. For example, is the control centralized and organized by a central processor? Or is the control distributed throughout the nervous system? Motor control research on manipulation has focused on developing classifications of hand motions, concentrating solely on finger motions, while neglecting grasp stability and interaction forces that occur in manipulation. In addition, these taxonomies have not been explicitly functional. This thesis defines and analyzes a basic set of manipulation strategies that includes both position and force trajectories. The fundamental purposes of the manipulations are: (1) rectilinear and rotational motion of grasped objects of different geometries; and (2) the application of forces and moments against the environment by the grasped objects. First, task partitioning is described to allocate the fingers their roles in the task. Second, for each strategy, the mechanics and workspace of the tasks are analyzed geometrically to determine the gross finger trajectories required to achieve the tasks. Techniques illustrating the combination of simple manipulations into complex, multiple degree-of-freedom tasks are presented. There is a discussion of several tasks that use multiple elementary strategies. The tasks described are removing the top of a childproof medicine bottle, putting the top back on, rotating and regrasping a block and a cylinder within the grasp. Finally, experimental results are presented. The experimental setup at Columbia University's Center for Research in Intelligent Systems and experiments with a Utah-MIT hand is discussed. First, the overall system design is described. Two hybrid position/force controllers were designed and built. After a discussion of the entire system, experimental results are presented describing each of the basic manipulation and complex manipulation strategies.
Determination of the top quark mass circa 2013: methods, subtleties, perspectives
NASA Astrophysics Data System (ADS)
Juste, Aurelio; Mantry, Sonny; Mitov, Alexander; Penin, Alexander; Skands, Peter; Varnes, Erich; Vos, Marcel; Wimpenny, Stephen
2014-10-01
We present an up-to-date overview of the problem of top quark mass determination. We assess the need for precision in the top mass extraction in the LHC era together with the main theoretical and experimental issues arising in precision top mass determination. We collect and document existing results on top mass determination at hadron colliders and map the prospects for future precision top mass determination at e+e- colliders. We present a collection of estimates for the ultimate precision of various methods for top quark mass extraction at the LHC.
Gaytan, Francisco; Morales, Concepción; Leon, Silvia; Heras, Violeta; Barroso, Alexia; Avendaño, Maria S.; Vazquez, Maria J.; Castellano, Juan M.; Roa, Juan; Tena-Sempere, Manuel
2017-01-01
Puberty is a key developmental event whose primary regulatory mechanisms remain poorly understood. Precise dating of puberty is crucial for experimental (preclinical) studies on its complex neuroendocrine controlling networks. In female laboratory rodents, external signs of puberty, such as vaginal opening (VO) and epithelial cell cornification (i.e., first vaginal estrus, FE), are indirectly related to the maturational state of the ovary and first ovulation, which is the unequivocal marker of puberty. Whereas in rats VO and FE are almost simultaneous with the first ovulation, these events are not so closely associated in mice. Moreover, external signs of puberty can be uncoupled from first ovulation in both species under certain experimental conditions. We propose herein the Pubertal Ovarian Maturation Score (Pub-score) as a novel, reliable method to assess peripubertal ovarian maturation in rats and mice. This method is founded on histological evaluation of pre-pubertal ovarian maturation, based on antral follicle development, and the precise timing of first ovulation, by retrospective dating of maturational and regressive changes in corpora lutea. This approach allows exact timing of puberty within a time window of at least two weeks after VO in both species, thus facilitating the identification and precise dating of advanced or delayed puberty under various experimental conditions. PMID:28401948
Improving precision of forage yield trials: A case study
USDA-ARS?s Scientific Manuscript database
Field-based agronomic and genetic research relies heavily on the data generated from field evaluations. Therefore, it is imperative to optimize the precision of yield estimates in cultivar evaluation trials to make reliable selections. Experimental error in yield trials is sensitive to several facto...
Department of Defense Precise Time and Time Interval program improvement plan
NASA Technical Reports Server (NTRS)
Bowser, J. R.
1981-01-01
The United States Naval Observatory is responsible for ensuring uniformity in precise time and time interval operations including measurements, the establishment of overall DOD requirements for time and time interval, and the accomplishment of objectives requiring precise time and time interval with minimum cost. An overview of the objectives, the approach to the problem, the schedule, and a status report, including significant findings relative to organizational relationships, current directives, principal PTTI users, and future requirements as currently identified by the users are presented.
Pollock, George G.
1997-01-01
Two power supplies are combined to control a furnace. A main power supply heats the furnace in the traditional manner, while the power from the auxiliary supply is introduced as a current flow through charged particles existing due to ionized gas or thermionic emission. The main power supply provides the bulk heating power and the auxiliary supply provides a precise and fast power source such that the precision of the total power delivered to the furnace is improved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cornelis de Jager
The experimental and theoretical status of elastic electron scattering from the nucleon is reviewed. As a consequence of new experimental facilities, data of unprecedented precision have recently become available for the electromagnetic and the strange form factors of the nucleon.
A constraint on antigravity of antimatter from precision spectroscopy of simple atoms
NASA Astrophysics Data System (ADS)
Karshenboim, S. G.
2009-10-01
Consideration of antigravity for antiparticles is an attractive target for various experimental projects. There are a number of theoretical arguments against it, but it is not quite clear what kind of experimental data and theoretical assumptions are involved. In this paper we present straightforward arguments against the possibility of antigravity based on a few simple theoretical assumptions and some experimental data. The data are: astrophysical data on the rotation of the Solar System with respect to the center of our galaxy, and precision spectroscopy data on hydrogen and positronium. The theoretical assumptions, for the case of the absence of a gravitational field, are the equality of the electron and positron masses and the equality of the proton and positron charges. We also assume that QED is correct at the level of accuracy where it is clearly confirmed experimentally.
Dielectronic recombination experiments at the storage rings: From the present CSR to the future HIAF
NASA Astrophysics Data System (ADS)
Huang, Z. K.; Wen, W. Q.; Xu, X.; Wang, H. B.; Dou, L. J.; Chuai, X. Y.; Zhu, X. L.; Zhao, D. M.; Li, J.; Ma, X. M.; Mao, L. J.; Yang, J. C.; Yuan, Y. J.; Xu, W. Q.; Xie, L. Y.; Xu, T. H.; Yao, K.; Dong, C. Z.; Zhu, L. F.; Ma, X.
2017-10-01
Dielectronic recombination (DR) experiments with highly charged ions at storage rings have been developed into a precision spectroscopic tool to investigate the atomic structure as well as nuclear properties of stable and unstable nuclei. A DR experiment on lithium-like argon ions was successfully performed at the main Cooler Storage Ring (CSRm) of the Heavy Ion Research Facility in Lanzhou (HIRFL) accelerator complex. DR experiments on heavy highly charged ions and even radioactive ions are currently under preparation at the experimental Cooler Storage Ring (CSRe) at HIRFL. The current status of DR experiments at the CSRm and the preparation of the DR experiments at the CSRe are presented. In addition, an overview of DR experiments employing an electron cooler and a separated ultra-cold electron target at the upcoming High Intensity heavy ion Accelerator Facility (HIAF) will be given.
Non-solenoidal startup and low-β operations in Pegasus
NASA Astrophysics Data System (ADS)
Schlossberg, D. J.; Battaglia, D. J.; Bongard, M. W.; Fonck, R. J.; Redd, A. J.
2009-11-01
Non-solenoidal startup using point-source DC helicity injectors (plasma guns) has been achieved in the Pegasus Toroidal Experiment for plasmas with Ip in excess of 100 kA using Iinj < 4 kA. The maximum achieved Ip tentatively scales as √(ITF Iinj / w), where w is the radial thickness of the gun-driven edge. The Ip limits appear to conform to a simple stationary model involving helicity conservation and Taylor relaxation. However, observed MHD activity reveals the additional dynamics of the relaxation process, evidenced by intermittent bursts of n=1 activity correlated with rapid redistribution of the current channel. Recent upgrades to the gun system provide higher helicity injection rates, smaller w, a more constrained gun current path, and more precise diagnostics. Experimental goals include extending parametric scaling studies, determining the conditions where parallel conduction losses dominate the helicity dissipation, and building the physics understanding of helicity injection to confidently design gun systems for larger, future tokamaks.
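The tentative scaling quoted above can be illustrated numerically. In the sketch below, the proportionality constant is a placeholder, and the sample values of toroidal-field current, injected current, and edge-layer width are arbitrary round numbers, since the abstract reports only the functional form of the scaling.

```python
import numpy as np

def predicted_ip(I_tf, I_inj, w, c=1.0):
    """Tentative helicity-injection scaling Ip ~ c * sqrt(I_TF * I_inj / w).
    The constant c is a placeholder; only the functional form is taken from
    the abstract."""
    return c * np.sqrt(I_tf * I_inj / w)

# Halving the gun-driven edge width w raises the predicted limit by sqrt(2).
print(predicted_ip(I_tf=1e5, I_inj=4e3, w=0.05) / predicted_ip(1e5, 4e3, 0.10))
```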
Distributed condition monitoring techniques of optical fiber composite power cable in smart grid
NASA Astrophysics Data System (ADS)
Sun, Zhihui; Liu, Yuan; Wang, Chang; Liu, Tongyu
2011-11-01
Optical fiber composite power cables such as the optical phase conductor (OPPC) are significant for the development of the smart grid. This paper discusses distributed condition monitoring techniques for OPPC cables, which adopt an embedded single-mode fiber as the sensing medium. By applying optical time-domain reflectometry and laser Raman scattering, high-resolution spatial positioning and high-precision distributed temperature measurement are achieved, and the OPPC cable condition parameters, including temperature and its location, current-carrying capacity, and the location of fracture and loss, can be monitored online. An experimental OPPC distributed condition monitoring system is set up, and its main parts, including the pulsed fiber laser, weak Raman signal reception, high-speed acquisition and cumulative averaging, temperature demodulation, and current-carrying-capacity analysis, are introduced. Distributed condition monitoring of OPPC cables is significant for power transmission management and security.
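Raman-based distributed temperature sensing, as used here, recovers temperature along the fiber from the ratio of anti-Stokes to Stokes backscatter. The sketch below shows the textbook single-reference form of that retrieval; the Raman shift value, the reference-section calibration, and the function name are generic assumptions rather than details taken from the paper.

```python
import numpy as np

H = 6.626e-34    # Planck constant, J s
C = 2.998e8      # speed of light, m/s
KB = 1.381e-23   # Boltzmann constant, J/K

def raman_dts_temperature(ratio, ratio_ref, T_ref=293.15, raman_shift_m=4.4e4):
    """Temperature along the fibre from the anti-Stokes/Stokes intensity
    ratio, referenced to a fibre section held at the known temperature T_ref.
    Uses ratio/ratio_ref = exp(-E/k (1/T - 1/T_ref)) with E = h*c*dnu;
    a Raman shift of ~440 cm^-1 (4.4e4 m^-1) is an assumed typical value."""
    E = H * C * raman_shift_m
    inv_T = 1.0 / T_ref - (KB / E) * np.log(np.asarray(ratio) / ratio_ref)
    return 1.0 / inv_T

# A section whose ratio is 5% above the reference reads about 7 K warmer.
print(raman_dts_temperature(1.05, 1.0))
```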
Muon g-2 at Fermilab: Magnetic Field Preparations for a New Physics Search
NASA Astrophysics Data System (ADS)
Kiburg, Brendan; Muon g-2 Collaboration
2016-03-01
The Muon g - 2 experiment at Fermilab will measure the muon's anomalous magnetic moment, aμ, to 140 parts-per-billion. Modern calculations for aμ differ from the current experimental value by 3.6 σ. Our effort will test this discrepancy by collecting 20 times more muons and implementing several upgrades to the well-established storage ring technique. The experiment utilizes a superconducting electromagnet with a 7-meter radius and a uniform 1.45-Tesla magnetic field to store ~10^4 muons at a time. The times, energies, and locations of the subsequent decay positrons are determined and combined with magnetic field measurements to extract aμ. This talk will provide a brief snapshot of the current discrepancy. The role and requirements of the precision magnetic field will be described. Recent progress to establish the required magnetic field uniformity will be highlighted.
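The positron times and energies mentioned above are conventionally summarized by the "wiggle plot", whose oscillation frequency is the anomalous precession frequency that, together with the field measurement, yields aμ. The sketch below fits the standard five-parameter wiggle model to synthetic data; all numerical values are generic placeholders (a dilated lifetime near 64 µs, a precession frequency near 0.23 MHz) and the fitting details are illustrative, not the experiment's analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def wiggle(t, N0, tau, A, omega_a, phi):
    # Standard five-parameter "wiggle" model: exponential muon decay
    # modulated at the anomalous precession frequency omega_a.
    return N0 * np.exp(-t / tau) * (1.0 + A * np.cos(omega_a * t + phi))

# Synthetic example with placeholder values (not experiment numbers).
t = np.linspace(0.0, 300e-6, 6000)
rng = np.random.default_rng(1)
counts = rng.poisson(wiggle(t, 1e4, 64.4e-6, 0.35, 2 * np.pi * 0.23e6, 0.5))

# Seed the frequency from a zero-padded periodogram (ignoring the slow
# exponential part of the spectrum), then refine all parameters by least squares.
spec = np.abs(np.fft.rfft(counts - counts.mean(), n=8 * t.size))
freqs = np.fft.rfftfreq(8 * t.size, d=t[1] - t[0])
f_guess = freqs[np.where(freqs > 5e4, spec, 0.0).argmax()]
p0 = (counts[0], 60e-6, 0.3, 2 * np.pi * f_guess, 0.0)
popt, _ = curve_fit(wiggle, t, counts, p0=p0)
print("fitted omega_a / 2pi [Hz]:", popt[3] / (2 * np.pi))
```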
An overview of quantitative approaches in Gestalt perception.
Jäkel, Frank; Singh, Manish; Wichmann, Felix A; Herzog, Michael H
2016-09-01
Gestalt psychology is often criticized as lacking quantitative measurements and precise mathematical models. While this is true of the early Gestalt school, today there are many quantitative approaches in Gestalt perception and the special issue of Vision Research "Quantitative Approaches in Gestalt Perception" showcases the current state-of-the-art. In this article we give an overview of these current approaches. For example, ideal observer models are one of the standard quantitative tools in vision research and there is a clear trend to try and apply this tool to Gestalt perception and thereby integrate Gestalt perception into mainstream vision research. More generally, Bayesian models, long popular in other areas of vision research, are increasingly being employed to model perceptual grouping as well. Thus, although experimental and theoretical approaches to Gestalt perception remain quite diverse, we are hopeful that these quantitative trends will pave the way for a unified theory. Copyright © 2016 Elsevier Ltd. All rights reserved.
Field design factors affecting the precision of ryegrass forage yield estimation
USDA-ARS?s Scientific Manuscript database
Field-based agronomic and genetic research relies heavily on the data generated from field evaluations. Therefore, it is imperative to optimize the precision and accuracy of yield estimates in cultivar evaluation trials to make reliable selections. Experimental error in yield trials is sensitive to ...
Precision Mass Property Measurements Using a Five-Wire Torsion Pendulum
NASA Technical Reports Server (NTRS)
Swank, Aaron J.
2012-01-01
A method for measuring the moment of inertia of an object using a five-wire torsion pendulum design is described here. Typical moment of inertia measurement devices are capable of 1 part in 10^3 accuracy, and current state-of-the-art techniques have capabilities of about one part in 10^4. The five-wire apparatus design shows the prospect of improving on the current state of the art. Current measurements using a laboratory prototype indicate a moment of inertia measurement precision better than a part in 10^4. In addition, the apparatus is shown to be capable of measuring the mass center offset from the geometric center. Typical mass center measurement devices exhibit a measurement precision up to approximately 1 micrometer. Although the five-wire pendulum was not originally designed for mass center measurements, preliminary results indicate an apparatus with a similar design may have the potential of achieving state-of-the-art precision.
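For context, the basic torsion-pendulum relation that links an oscillation period to a moment of inertia is sketched below. Treating the effective torsional stiffness as a known constant is a simplification for illustration; in the five-wire apparatus the stiffness follows from the wire geometry and suspended load, and the numbers used here are arbitrary.

```python
import math

def moment_of_inertia(period, kappa):
    """Basic torsion-pendulum relation I = kappa * T^2 / (4 * pi^2), where
    kappa is the effective torsional stiffness and T the oscillation period.
    Treating kappa as a known, fixed input is an illustrative simplification,
    not the apparatus' actual calibration."""
    return kappa * period**2 / (4.0 * math.pi**2)

# Toy numbers: a 12 s period with an effective stiffness of 2.5e-3 N m/rad
# corresponds to a moment of inertia of roughly 9.1e-3 kg m^2.
print(moment_of_inertia(12.0, 2.5e-3))
```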
Precision Measurement of Distribution of Film Thickness on Pendulum for Experiment of G
NASA Astrophysics Data System (ADS)
Liu, Lin-Xia; Guan, Sheng-Guo; Liu, Qi; Zhang, Ya-Ting; Shao, Cheng-Gang; Luo, Jun
2009-09-01
The distribution of the thickness of the film coated on the pendulum used for measuring the Newtonian gravitational constant G is determined with a weighing method by means of a precision mass comparator. The experimental result shows that the gold film on the pendulum will contribute a correction of -24.3 ppm to our G measurement with an uncertainty of 4.3 ppm, which is significant for improving the G value with high precision.
Wireless inertial measurement of head kinematics in freely-moving rats
Pasquet, Matthieu O.; Tihy, Matthieu; Gourgeon, Aurélie; Pompili, Marco N.; Godsil, Bill P.; Léna, Clément; Dugué, Guillaume P.
2016-01-01
While miniature inertial sensors offer a promising means for precisely detecting, quantifying and classifying animal behaviors, versatile inertial sensing devices adapted for small, freely-moving laboratory animals are still lacking. We developed a standalone and cost-effective platform for performing high-rate wireless inertial measurements of head movements in rats. Our system is designed to enable real-time bidirectional communication between the headborne inertial sensing device and third party systems, which can be used for precise data timestamping and low-latency motion-triggered applications. We illustrate the usefulness of our system in diverse experimental situations. We show that our system can be used for precisely quantifying motor responses evoked by external stimuli, for characterizing head kinematics during normal behavior and for monitoring head posture under normal and pathological conditions obtained using unilateral vestibular lesions. We also introduce and validate a novel method for automatically quantifying behavioral freezing during Pavlovian fear conditioning experiments, which offers superior performance in terms of precision, temporal resolution and efficiency. Thus, this system precisely acquires movement information in freely-moving animals, and can enable objective and quantitative behavioral scoring methods in a wide variety of experimental situations. PMID:27767085
Granmo, Ole-Christoffer; Oommen, B John; Myrer, Svein Arild; Olsen, Morten Goodwin
2007-02-01
This paper considers the nonlinear fractional knapsack problem and demonstrates how its solution can be effectively applied to two resource allocation problems dealing with the World Wide Web. The novel solution involves a "team" of deterministic learning automata (LA). The first real-life problem relates to resource allocation in web monitoring so as to "optimize" information discovery when the polling capacity is constrained. The disadvantages of the currently reported solutions are explained in this paper. The second problem concerns allocating limited sampling resources in a "real-time" manner with the purpose of estimating multiple binomial proportions. This is the scenario encountered when the user has to evaluate multiple web sites by accessing a limited number of web pages, and the proportions of interest are the fraction of each web site that is successfully validated by an HTML validator. Using the general LA paradigm to tackle both of the real-life problems, the proposed scheme improves a current solution in an online manner through a series of informed guesses that move toward the optimal solution. At the heart of the scheme, a team of deterministic LA performs a controlled random walk on a discretized solution space. Comprehensive experimental results demonstrate that the discretization resolution determines the precision of the scheme, and that for a given precision, the current solution (to both problems) is consistently improved until a nearly optimal solution is found--even for switching environments. Thus, the scheme, while being novel to the entire field of LA, also efficiently handles a class of resource allocation problems previously not addressed in the literature.
Lessons from non-canonical splicing
Ule, Jernej
2016-01-01
Recent improvements in experimental and computational techniques used to study the transcriptome have enabled an unprecedented view of RNA processing, revealing many previously unknown non-canonical splicing events. This includes cryptic events located far from the currently annotated exons, and unconventional splicing mechanisms that have important roles in regulating gene expression. These non-canonical splicing events are a major source of newly emerging transcripts during evolution, especially when they involve sequences derived from transposable elements. They are therefore under precise regulation and quality control, which minimises their potential to disrupt gene expression. While non-canonical splicing can lead to aberrant transcripts that cause many diseases, we also explain how it can be exploited for new therapeutic strategies. PMID:27240813
Spatial Phase Coding for Incoherent Optical Processors
NASA Technical Reports Server (NTRS)
Tigin, D. V.; Lavrentev, A. A.; Gary, C. K.
1994-01-01
In this paper we introduce spatial phase coding of incoherent optical signals for representing signed numbers in optical processors and present an experimental demonstration of this coding technique. If a diffraction grating, such as an acousto-optic cell, modulates a stream of light, the image of the grating can be recovered from the diffracted beam. The position of the grating image, or more precisely its phase, can be used to denote the sign of the number represented by the diffracted light. The intensity of the light represents the magnitude of the number. This technique is more economical than current methods in terms of the number of information channels required to represent a number and the amount of post processing required.
Larson, K M; Levine, J
1999-01-01
We have conducted several time-transfer experiments using the phase of the GPS carrier rather than the code, as is done in current GPS-based time-transfer systems. Atomic clocks were connected to geodetic GPS receivers; we then used the GPS carrier-phase observations to estimate relative clock behavior at 6-minute intervals. GPS carrier-phase time transfer is more than an order of magnitude more precise than GPS common-view time transfer and agrees, within the experimental uncertainty, with two-way satellite time-transfer measurements for a 2400 km baseline. GPS carrier-phase time transfer has a stability of 100 ps, which translates into a fractional frequency uncertainty of about 2 × 10^-15 for an averaging time of 1 day.
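The link between the quoted 100 ps stability and the inferred fractional frequency uncertainty is simple arithmetic: the time-transfer uncertainty accumulated over a one-day averaging interval divided by that interval. The sketch below works through it; the exact prefactor depends on the stability measure used, which is why the result is quoted only as "about" 2 × 10^-15.

```python
# Back-of-the-envelope conversion from time-transfer stability to fractional
# frequency uncertainty over one day of averaging.
sigma_x = 100e-12     # time-transfer stability, seconds
tau = 86400.0         # averaging time, seconds (1 day)
print(f"delta f / f ~ {sigma_x / tau:.1e}")   # ~1.2e-15, i.e. order 1e-15
```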
NASA Astrophysics Data System (ADS)
Sapilewski, Glen Alan
The Satellite Test of the Equivalence Principle (STEP) is a modern version of Galileo's experiment of dropping two objects from the leaning tower of Pisa. The Equivalence Principle states that all objects fall with the same acceleration, independent of their composition. The primary scientific objective of STEP is to measure a possible violation of the Equivalence Principle one million times better than the best ground based tests. This extraordinary sensitivity is made possible by using cryogenic differential accelerometers in the space environment. Critical to the STEP experiment is a sound fundamental understanding of the behavior of the superconducting magnetic linear bearings used in the accelerometers. We have developed a theoretical bearing model and a precision measuring system with which to validate the model. The accelerometers contain two concentric hollow cylindrical test masses, of different materials, each levitated and constrained to axial motion by a superconducting magnetic bearing. Ensuring that the bearings satisfy the stringent mission specifications requires developing new testing apparatus and methods. The bearing is tested using an actively-controlled table which tips it relative to gravity. This balances the magnetic forces from the bearing against a component of gravity. The magnetic force profile of the bearing can be mapped by measuring the tilt necessary to position the test mass at various locations. An operational bearing has been built and is being used to verify the theoretical levitation models. The experimental results obtained from the bearing test apparatus were inconsistent with the previous models used for STEP bearings. This led to the development of a new bearing model that includes the influence of surface current variations in the bearing wires and the effect of the superconducting transformer. The new model, which has been experimentally verified, significantly improves the prediction of levitation current, accurately estimates the relationship between tilting and translational modes, and predicts the dependence of radial mode frequencies on the bearing current. In addition, we developed a new model for the forces produced by trapped magnetic fluxons, a potential source of imperfections in the bearing. This model estimates the forces between magnetic fluxons trapped in separate superconducting objects.
NASA Astrophysics Data System (ADS)
Liu, Ling
The primary goal of this research is the analysis, development, and experimental demonstration of an adaptive phase-locked fiber array system for free-space optical communications and laser beam projection applications. To our knowledge, the developed adaptive phase-locked system composed of three fiber collimators (subapertures) with tip-tilt wavefront phase control at each subaperture represents the first reported fiber array system that implements both phase-locking control and adaptive wavefront tip-tilt control capabilities. This research has also resulted in the following innovations: (a) the first experimental demonstration of a phase-locked fiber array with tip-tilt wavefront aberration compensation at each fiber collimator; (b) development and demonstration of the fastest currently reported stochastic parallel gradient descent (SPGD) system capable of operation at 180,000 iterations per second; (c) the first experimental demonstration of a laser communication link based on a phase-locked fiber array; (d) the first successful experimental demonstration of turbulence- and jitter-induced phase distortion compensation in a phase-locked fiber array optical system; (e) the first demonstration of laser beam projection onto an extended target with a randomly rough surface using a conformal adaptive fiber array system. Fiber array optical systems, the subject of this study, can overcome some of the drawbacks of conventional monolithic large-aperture transmitter/receiver optical systems that are usually heavy, bulky, and expensive. The primary experimental challenges in the development of the adaptive phase-locked fiber array included precise (<5 microrad) alignment of the fiber collimators and development of fast (100 kHz-class) phase-locking and wavefront tip-tilt control systems. The precise alignment of the fiber collimator array is achieved through a specially developed initial coarse alignment tool based on high-precision piezoelectric picomotors and a dynamic fine alignment mechanism implemented with specially designed and manufactured piezoelectric fiber positioners. Phase-locking of the fiber collimators is performed by controlling the phases of the output beams (beamlets) using integrated polarization-maintaining (PM) fiber-coupled LiNbO3 phase shifters. The developed phase-locking controllers are based on either the SPGD algorithm or the multi-dithering technique. Subaperture wavefront phase tip-tilt control is realized using piezoelectric fiber positioners that are controlled using a computer-based SPGD controller. Both coherent (phase-locked) and incoherent beam combining in the fiber array system are analyzed theoretically and experimentally. Two special fiber-based beam-combining testbeds have been built to demonstrate the technical feasibility of phase-locking compensation prior to free-space operation. In addition, the reciprocity of counter-propagating beams in a phase-locked fiber array system has been investigated. Coherent beam combining in a phase-locking system with wavefront phase tip-tilt compensation at each subaperture is successfully demonstrated when laboratory-simulated turbulence and wavefront jitters are present in the propagation path of the beamlets. In addition, coherent beam combining with a non-cooperative extended target in the control loop is successfully demonstrated.
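The SPGD controller mentioned above follows a simple update rule: dither all control channels simultaneously with small random perturbations, measure the change in a scalar performance metric, and step the controls in proportion to that change. The sketch below shows one such iteration on a toy metric; the gain, dither amplitude, and surrogate metric are illustrative stand-ins for the real photodetector-based feedback.

```python
import numpy as np

def spgd_step(u, measure_metric, gain=1.0, dither=0.1, rng=np.random.default_rng(0)):
    """One stochastic parallel gradient descent (SPGD) iteration: apply
    equal-and-opposite random dithers to all control channels, measure the
    performance metric twice, and step the controls along the estimated
    gradient.  measure_metric stands in for the real photodetector readout;
    the gain and dither amplitude are illustrative values only."""
    delta = dither * rng.choice([-1.0, 1.0], size=u.shape)   # Bernoulli dither
    j_plus = measure_metric(u + delta)
    j_minus = measure_metric(u - delta)
    return u + gain * (j_plus - j_minus) * delta              # ascend the metric

# Toy usage: drive three phase controls toward the values that maximize a
# smooth surrogate metric (a real system would maximize combined-beam power).
target = np.array([0.3, -0.1, 0.7])
metric = lambda u: -np.sum((u - target) ** 2)
u = np.zeros(3)
for _ in range(500):
    u = spgd_step(u, metric)
print(np.round(u, 2))   # converges toward [0.3, -0.1, 0.7]
```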
Sarkar, Sumona; Lund, Steven P; Vyzasatya, Ravi; Vanguri, Padmavathy; Elliott, John T; Plant, Anne L; Lin-Gibson, Sheng
2017-12-01
Cell counting measurements are critical in the research, development and manufacturing of cell-based products, yet determining cell quantity with accuracy and precision remains a challenge. Validating and evaluating a cell counting measurement process can be difficult because of the lack of appropriate reference material. Here we describe an experimental design and statistical analysis approach to evaluate the quality of a cell counting measurement process in the absence of appropriate reference materials or reference methods. The experimental design is based on a dilution series study with replicate samples and observations as well as measurement process controls. The statistical analysis evaluates the precision and proportionality of the cell counting measurement process and can be used to compare the quality of two or more counting methods. As an illustration of this approach, cell counting measurement processes (automated and manual methods) were compared for a human mesenchymal stromal cell (hMSC) preparation. For the hMSC preparation investigated, results indicated that the automated method performed better than the manual counting methods in terms of precision and proportionality. By conducting well controlled dilution series experimental designs coupled with appropriate statistical analysis, quantitative indicators of repeatability and proportionality can be calculated to provide an assessment of cell counting measurement quality. This approach does not rely on the use of a reference material or comparison to "gold standard" methods known to have limited assurance of accuracy and precision. The approach presented here may help the selection, optimization, and/or validation of a cell counting measurement process. Published by Elsevier Inc.
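A dilution-series design of the kind described can be summarized with two simple statistics: the replicate scatter at each dilution (precision) and how well the mean counts scale linearly through zero with the dilution fraction (proportionality). The sketch below computes minimal versions of both; the specific statistical measures used in the paper are more elaborate, and the function and variable names are illustrative.

```python
import numpy as np

def dilution_series_quality(fractions, counts):
    """Replicate coefficient of variation per dilution and a zero-intercept
    linear fit of mean counts versus dilution fraction, as crude indicators
    of precision and proportionality.  counts is (n_fractions, n_replicates)."""
    fractions = np.asarray(fractions, dtype=float)
    counts = np.asarray(counts, dtype=float)
    means = counts.mean(axis=1)
    cv = counts.std(axis=1, ddof=1) / means                     # precision per dilution
    slope = np.sum(fractions * means) / np.sum(fractions ** 2)  # zero-intercept fit
    resid = means - slope * fractions
    r2 = 1.0 - np.sum(resid ** 2) / np.sum((means - means.mean()) ** 2)
    return cv, slope, r2
```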
Hradski, Jasna; Chorváthová, Mária Drusková; Bodor, Róbert; Sabo, Martin; Matejčík, Štefan; Masár, Marián
2016-12-01
Although microchip electrophoresis (MCE) is intended to provide reliable quantitative data, so far only limited attention has been paid to these important aspects. This study gives a general overview of the key aspects to be followed to reach highly precise determinations using isotachophoresis (ITP) on a microchip with conductivity detection. From the application point of view, a procedure for the determination of acetate, a main component in the pharmaceutical preparation buserelin acetate, was developed. Our results document that run-to-run fluctuations in the sample injection volume limit the reproducibility of quantitation based on external calibration. The use of a suitable internal standard (succinate in this study) improved the precision of the acetate determination six- to eight-fold. The robustness of the procedure was studied in terms of the impact of fluctuations in various experimental parameters (driving current, concentration of the leading ions, pH of the leading electrolyte and buffer impurities) on the precision of the ITP determination. The use of computer simulation programs provided means to assess the ITP experiments using well-defined theoretical models. A long-term validity of the calibration curves on two microchips and two MCE instruments was verified. This favors ITP over other microchip electrophoresis techniques when chip-to-chip or equipment-to-equipment transfer of the analytical method is required. The recovery values in the range of 98-101% indicate very accurate determination of acetate in buserelin acetate, which is used in the treatment of hormone-dependent tumors. This study showed that microchip ITP is suitable for the reliable determination of main components in pharmaceutical preparations.
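Internal-standard calibration of the kind used here works by regressing the analyte/internal-standard response ratio against the corresponding concentration ratio, so that injection-volume fluctuations cancel. The sketch below shows the generic calculation; the unweighted straight-line fit and the function names are illustrative assumptions, not the published procedure.

```python
import numpy as np

def fit_is_calibration(conc_ratio, resp_ratio):
    """Straight-line fit of analyte/internal-standard response ratio versus
    concentration ratio; returns (slope, intercept)."""
    return np.polyfit(np.asarray(conc_ratio, float), np.asarray(resp_ratio, float), 1)

def quantify(sample_resp_ratio, slope, intercept, is_conc):
    """Convert a sample's measured response ratio back to an analyte
    concentration, given the known internal-standard concentration added."""
    return (sample_resp_ratio - intercept) / slope * is_conc

# Example with made-up calibration points (response ratio ~ concentration ratio).
slope, intercept = fit_is_calibration([0.5, 1.0, 2.0, 4.0], [0.52, 1.01, 2.05, 3.98])
print(quantify(1.6, slope, intercept, is_conc=2.0))   # ~3.2 concentration units
```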
Modelling dishes and exploring culinary 'precisions': the two issues of molecular gastronomy.
This, Hervé
2005-04-01
The scientific strategy of molecular gastronomy includes modelling 'culinary definitions' and experimental explorations of 'culinary precisions'. A formalism that describes complex dispersed systems leads to a physical classification of classical sauces, as well as to the invention of an infinite number of new dishes.
Laser Vacuum Furnace for Zone Refining
NASA Technical Reports Server (NTRS)
Griner, D. B.; Zurburg, F. W.; Penn, W. M.
1986-01-01
Laser beam scanned to produce moving melt zone. Experimental laser vacuum furnace scans crystalline wafer with high-power CO2-laser beam to generate precise melt zone with precise control of temperature gradients around zone. Intended for zone refining of silicon or other semiconductors in low gravity, apparatus used in normal gravity.
Askari, Sina; Zhang, Mo; Won, Deborah S
2010-01-01
Current methods for assessing the efficacy of treatments for Parkinson's disease (PD) rely on physician rated scores. These methods pose three major shortcomings: 1) the subjectivity of the assessments, 2) the lack of precision on the rating scale (6 discrete levels), and 3) the inability to assess symptoms except under very specific conditions and/or for very specific tasks. To address these shortcomings, a portable system was developed to continuously monitor Parkinsonian symptoms with quantitative measures based on electrical signals from muscle activity (EMG). Here, we present the system design and the implementation of methods for system validation. This system was designed to provide continuous measures of tremor, rigidity, and bradykinesia which are related to the neurophysiological source without the need for multiple bulky experimental apparatuses, thus allowing more precise, quantitative indicators of the symptoms which can be measured during practical daily living tasks. This measurement system has the potential to improve the diagnosis of PD as well as the evaluation of PD treatments, which is an important step in the path to improving PD treatments.
Kim, Dong-Woo; Cho, Myeong-Woo; Seo, Tae-Il; Shin, Young-Jae
2008-01-01
Recently, the magnetorheological (MR) polishing process has been examined as a new ultra-precision polishing technology for micro parts in MEMS applications. In the MR polishing process, the magnetic force plays a dominant role. This method uses MR fluids containing micro abrasives as the polishing media. The objective of the present research is to shed light on the material removal mechanism under various slurry conditions and to investigate the surface characteristics, including shape analysis and surface roughness measurement, of spots obtained from the MR polishing process using alumina abrasives. A series of basic experiments was first performed to determine the optimum polishing conditions for BK7 glass using the prepared slurries by changing process parameters such as wheel rotating speed and electric current. Using the obtained results, groove polishing was then performed and the results are investigated. An outstanding surface roughness of Ra = 3.8 nm was obtained on the BK7 glass specimen. The present results highlight the possibility of applying this polishing method to ultra-precision micro parts production, especially in MEMS applications. PMID:27879705
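The quoted roughness figure is the arithmetic-mean roughness Ra, which is simply the mean absolute deviation of the measured surface profile from its mean line. A minimal sketch of that calculation, assuming a sampled height profile as input, is shown below.

```python
import numpy as np

def surface_ra(profile_heights):
    """Arithmetic-mean roughness Ra of a sampled surface profile: the mean
    absolute deviation of the heights from their mean line."""
    z = np.asarray(profile_heights, dtype=float)
    return np.mean(np.abs(z - z.mean()))

# Example: a 4 nm peak-to-valley sinusoidal profile has Ra of about 1.27 nm.
z = 2e-9 * np.sin(np.linspace(0.0, 20.0 * np.pi, 2000))
print(surface_ra(z))
```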
Quantum technologies with hybrid systems
Kurizki, Gershon; Bertet, Patrice; Kubo, Yuimaru; Mølmer, Klaus; Petrosyan, David; Rabl, Peter; Schmiedmayer, Jörg
2015-01-01
An extensively pursued current direction of research in physics aims at the development of practical technologies that exploit the effects of quantum mechanics. As part of this ongoing effort, devices for quantum information processing, secure communication, and high-precision sensing are being implemented with diverse systems, ranging from photons, atoms, and spins to mesoscopic superconducting and nanomechanical structures. Their physical properties make some of these systems better suited than others for specific tasks; thus, photons are well suited for transmitting quantum information, weakly interacting spins can serve as long-lived quantum memories, and superconducting elements can rapidly process information encoded in their quantum states. A central goal of the envisaged quantum technologies is to develop devices that can simultaneously perform several of these tasks, namely, reliably store, process, and transmit quantum information. Hybrid quantum systems composed of different physical components with complementary functionalities may provide precisely such multitasking capabilities. This article reviews some of the driving theoretical ideas and first experimental realizations of hybrid quantum systems and the opportunities and challenges they present and offers a glance at the near- and long-term perspectives of this fascinating and rapidly expanding field. PMID:25737558
NASA Technical Reports Server (NTRS)
Gray, Perry; Guven, Ibrahim
2016-01-01
A new facility for making small particle impacts is being developed at NASA. Current sand/particle impact facilities are essentially erosion tests and do not precisely measure and document the size and velocity of each impacting particle. In addition, evidence of individual impacts is often obscured by subsequent impacts. This facility will allow the number, size, and velocity of each particle to be measured and adjusted. It will also be possible to determine which particle produced damage at a given location on the target. The particle size and velocity will be measured by high-speed imaging techniques. Information on the extent of damage and debris from impacts will also be recorded. It will be possible to track these secondary particles, measuring size and velocity. It is anticipated that this additional degree of detail will provide input for erosion models and also help determine the impact physics of the erosion process. Particle impacts will be recorded at 90 degrees to the particle flight path and also from the top, looking through the target window material.
Current status and future trends of precision agricultural aviation technologies
USDA-ARS?s Scientific Manuscript database
Modern technologies and information tools can be used to maximize agricultural aviation productivity allowing for precision application of agrochemical products. This paper reviews and summarizes the state-of-the-art in precision agricultural aviation technology highlighting remote sensing, aerial s...
USDA-ARS?s Scientific Manuscript database
A number of recent soil biota studies have deviated from the standard experimental approach of generating a distinct data value for each experimental unit (e.g. Yang et al., 2013; Gundale et al., 2014). Instead, these studies have mixed together soils from multiple experimental units (i.e. sites wi...
Performance Evaluation of Real-Time Precise Point Positioning Method
NASA Astrophysics Data System (ADS)
Alcay, Salih; Turgut, Muzeyyen
2017-12-01
Post-Processed Precise Point Positioning (PPP) is a well-known zero-difference positioning method which provides accurate and precise results. Following its experimental tests, the IGS Real Time Service (RTS) now officially provides real-time orbit and clock products to the GNSS community, which allow real-time (RT) PPP applications. Different software packages can be used for RT-PPP. In this study, in order to evaluate the performance of RT-PPP, 3 IGS stations are used. Results, obtained by using the BKG Ntrip Client (BNC) software v2.12, are examined in terms of both accuracy and precision.
Diode laser spectroscopy: precise spectral line shape measurements
NASA Astrophysics Data System (ADS)
Nadezhdinskii, A. I.
1996-07-01
When one speaks about modern trends in tunable diode laser spectroscopy (TDLS), one should mention that precise line shape measurements have become one of the most promising applications of diode lasers in high resolution molecular spectroscopy. Accuracy limitations of TDL spectrometers are considered in this paper, demonstrating the ability to measure a spectral line profile with precision better than 1%. A four-parameter Voigt profile is used to fit the experimental spectrum, and the possibility of line shift measurements with an accuracy of 2 × 10^-5 cm^-1 is shown. Test experiments demonstrate the error in line intensity ratios to be less than 0.3% for the proposed approach. Differences between "soft" and "hard" models of line shape have been observed experimentally for the first time. Some observed resonance effects are considered with respect to collision adiabaticity.
Liu, Jen-Pei; Lu, Li-Tien; Liao, C T
2009-09-01
Intermediate precision is one of the most important characteristics for evaluation of precision in assay validation. The current methods for evaluation of within-device precision recommended by the Clinical Laboratory Standard Institute (CLSI) guideline EP5-A2 are based on the point estimator. On the other hand, in addition to point estimators, confidence intervals can provide a range for the within-device precision with a probability statement. Therefore, we suggest a confidence interval approach for assessment of the within-device precision. Furthermore, under the two-stage nested random-effects model recommended by the approved CLSI guideline EP5-A2, in addition to the current Satterthwaite's approximation and the modified large sample (MLS) methods, we apply the technique of generalized pivotal quantities (GPQ) to derive the confidence interval for the within-device precision. The data from the approved CLSI guideline EP5-A2 illustrate the applications of the confidence interval approach and comparison of results between the three methods. Results of a simulation study on the coverage probability and expected length of the three methods are reported. The proposed method of the GPQ-based confidence intervals is also extended to consider the between-laboratories variation for precision assessment.
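As a rough illustration of the quantities involved, the sketch below computes the classical point estimates of the within-run and between-run variance components from a balanced nested precision experiment and combines them into a within-device precision estimate. The balanced layout, the data values, and the function name are assumptions made only for illustration; the confidence-interval methods discussed above (Satterthwaite, MLS, GPQ) are not implemented here.

```python
import numpy as np

def within_device_precision(runs):
    """runs: 2-D array (k runs x n replicates) from a nested precision study.
    Returns (within-run SD, between-run SD, total within-device SD) point estimates."""
    runs = np.asarray(runs, dtype=float)
    k, n = runs.shape
    run_means = runs.mean(axis=1)
    grand_mean = runs.mean()
    ms_within = ((runs - run_means[:, None]) ** 2).sum() / (k * (n - 1))
    ms_between = n * ((run_means - grand_mean) ** 2).sum() / (k - 1)
    s2_r = ms_within                               # within-run (repeatability) variance
    s2_b = max((ms_between - ms_within) / n, 0.0)  # between-run variance component
    return np.sqrt(s2_r), np.sqrt(s2_b), np.sqrt(s2_r + s2_b)

# Illustrative data: 5 runs of 3 replicates each (made-up values)
data = [[10.1, 10.3, 10.2], [10.6, 10.4, 10.5], [10.0, 10.2, 10.1],
        [10.4, 10.5, 10.3], [10.2, 10.1, 10.3]]
sr, sb, st = within_device_precision(data)
print(f"within-run SD {sr:.3f}, between-run SD {sb:.3f}, within-device SD {st:.3f}")
```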
Tong, Qiaoling; Chen, Chen; Zhang, Qiao; Zou, Xuecheng
2015-01-01
To realize accurate current control for a boost converter, a precise measurement of the inductor current is required to achieve high-resolution current regulation. Current sensors are widely used to measure the inductor current. However, the current sensors and their processing circuits add significant extra hardware cost, delay and noise to the system. They can also harm the system reliability. Therefore, current sensorless control techniques can bring cost-effective and reliable solutions for various boost converter applications. According to the derived accurate model, which contains a number of parasitics, the boost converter is a nonlinear system. An Extended Kalman Filter (EKF) is proposed for inductor current estimation and output voltage filtering. With this approach, the system can have the same advantages as sensored current control mode. To implement the EKF, the load value is necessary. However, the load may vary from time to time, which can lead to errors in the estimated current and filtered output voltage. To solve this issue, a load variation effect elimination (LVEE) module is added. In addition, a predictive average current controller is used to regulate the current. Compared with a conventional voltage-controlled system, the transient response is greatly improved since it only takes two switching cycles for the current to reach its reference. Finally, experimental results are presented to verify the stable operation and output tracking capability for large-signal transients of the proposed algorithm. PMID:25928061
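To make the estimation idea concrete, here is a minimal sketch of a discrete-time EKF applied to an idealized averaged boost-converter model, estimating the inductor current from the measured output voltage alone. The two-state model without parasitics, all parameter values, and the noise tuning are assumptions chosen for illustration; they are not the detailed model or tuning used in the paper.

```python
import numpy as np

# Idealized averaged boost model: states x = [iL, vC], input d = duty cycle.
L, C, R, Vin, Ts = 100e-6, 470e-6, 10.0, 12.0, 20e-6   # illustrative values

def f(x, d):
    iL, vC = x
    diL = (Vin - (1 - d) * vC) / L
    dvC = ((1 - d) * iL - vC / R) / C
    return x + Ts * np.array([diL, dvC])                # forward-Euler discretization

def jacobian(d):
    return np.eye(2) + Ts * np.array([[0.0, -(1 - d) / L],
                                      [(1 - d) / C, -1.0 / (R * C)]])

H = np.array([[0.0, 1.0]])          # only the output voltage vC is measured
Q = np.diag([1e-2, 1e-3])           # process noise (tuning assumption)
Rn = np.array([[1e-3]])             # measurement noise (tuning assumption)

x = np.array([0.0, 0.0])            # EKF state estimate
P = np.eye(2)
d = 0.5                             # constant duty cycle for the sketch
x_true = np.array([0.0, 0.0])
rng = np.random.default_rng(0)

for _ in range(2000):
    x_true = f(x_true, d)                               # "plant"
    y = x_true[1] + rng.normal(0, 0.03)                 # noisy vC measurement
    # EKF predict
    x = f(x, d)
    F = jacobian(d)
    P = F @ P @ F.T + Q
    # EKF update
    S = H @ P @ H.T + Rn
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (np.array([y]) - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P

print(f"estimated iL = {x[0]:.2f} A, true iL = {x_true[0]:.2f} A")
```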
Boeker, Martin; Vach, Werner; Motschall, Edith
2013-10-26
Recent research indicates a high recall in Google Scholar searches for systematic reviews. These reports raised high expectations of Google Scholar as a unified and easy-to-use search interface. However, studies on the coverage of Google Scholar rarely used the search interface in a realistic approach but instead merely checked for the existence of gold standard references. In addition, the severe limitations of the Google Scholar search interface must be taken into consideration when comparing it with professional literature retrieval tools. The objectives of this work are to measure the relative recall and precision of searches with Google Scholar under conditions which are derived from structured search procedures conventional in scientific literature retrieval; and to provide an overview of current advantages and disadvantages of the Google Scholar search interface in scientific literature retrieval. General and MEDLINE-specific search strategies were retrieved from 14 Cochrane systematic reviews. Cochrane systematic review search strategies were translated into Google Scholar search expressions as faithfully as possible, taking the original search semantics into account. The references of the included studies from the Cochrane reviews were checked for their inclusion in the result sets of the Google Scholar searches. Relative recall and precision were calculated. We investigated Cochrane reviews with between 11 and 70 included references each, for a total of 396 references. The Google Scholar searches resulted in sets of between 4,320 and 67,800 hits, with a total of 291,190 hits. The relative recall of the Google Scholar searches had a minimum of 76.2% and a maximum of 100% (7 searches). The precision of the Google Scholar searches had a minimum of 0.05% and a maximum of 0.92%. The overall relative recall for all searches was 92.9%; the overall precision was 0.13%. The reported relative recall must be interpreted with care. It is a quality indicator of Google Scholar confined to an experimental setting which is unavailable in systematic retrieval due to the severe limitations of the Google Scholar search interface. Currently, Google Scholar does not provide necessary elements for systematic scientific literature retrieval such as tools for incremental query optimization, export of a large number of references, a visual search builder or a history function. Google Scholar is not ready as a professional searching tool for tasks where structured retrieval methodology is necessary.
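For clarity on how such figures are computed, the following is a minimal sketch of relative recall and precision calculated from a set of gold-standard references and a retrieved result set. The identifiers and hit counts are made-up placeholders, not data from the study.

```python
# Hedged sketch: relative recall and precision of one literature search
# against the gold-standard references included in a review.

def relative_recall_precision(gold_refs, retrieved_ids):
    """gold_refs: references included in the review; retrieved_ids: search hits."""
    found = gold_refs & retrieved_ids
    recall = len(found) / len(gold_refs) if gold_refs else 0.0
    precision = len(found) / len(retrieved_ids) if retrieved_ids else 0.0
    return recall, precision

# Illustrative placeholder identifiers
gold = {"ref01", "ref02", "ref03", "ref04"}
hits = {"ref01", "ref02", "ref03"} | {f"other{i}" for i in range(4320)}
r, p = relative_recall_precision(gold, hits)
print(f"relative recall = {r:.1%}, precision = {p:.3%}")
```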
Pollock, G.G.
1997-01-28
Two power supplies are combined to control a furnace. A main power supply heats the furnace in the traditional manner, while the power from the auxiliary supply is introduced as a current flow through charged particles existing due to ionized gas or thermionic emission. The main power supply provides the bulk heating power and the auxiliary supply provides a precise and fast power source such that the precision of the total power delivered to the furnace is improved. 5 figs.
Precision medicine: In need of guidance and surveillance.
Lin, Jian-Zhen; Long, Jun-Yu; Wang, An-Qiang; Zheng, Ying; Zhao, Hai-Tao
2017-07-28
Precision medicine, currently a hotspot in mainstream medicine, has been strongly promoted in recent years. With rapid technological development, such as next-generation sequencing, and fierce competition in molecular targeted drug exploitation, precision medicine represents an advance in science and technology; it also fulfills needs in public health care. The clinical translation and application of precision medicine - especially in the prevention and treatment of tumors - is far from satisfactory; however, the aims of precision medicine deserve approval. Thus, this medical approach is currently in its infancy; it has promising prospects, but it needs to overcome a number of problems and deficiencies. It is expected that in addition to conventional symptoms and signs, precision medicine will define disease in terms of the underlying molecular characteristics and other environmental susceptibility factors. Those expectations should be realized by constructing a novel data network, integrating clinical data from individual patients and personal genomic background with existing research on the molecular makeup of diseases. In addition, multi-omics analysis and multi-discipline collaboration will become crucial elements in precision medicine. Precision medicine deserves strong support, and its development demands directed momentum. We propose three kinds of impetus (research, application and collaboration impetus) for such directed momentum toward promoting precision medicine and accelerating its clinical translation and application.
Joanne Wang, C; Li, Xiong; Lin, Benjamin; Shim, Sangwoo; Ming, Guo-Li; Levchenko, Andre
2008-02-01
Neuronal growth cones contain sophisticated molecular machinery precisely regulating their migration in response to complex combinatorial gradients of diverse external cues. The details of this regulation are still largely unknown, in part due to limitations of the currently available experimental techniques. Microfluidic devices have been shown to be capable of generating complex, stable and precisely controlled chemical gradients, but their use in studying growth cone migration has been limited in part due to the effects of shear stress. Here we describe a microfluidics-based turning-assay chip designed to overcome this issue. In addition to generating precise gradients of soluble guidance cues, the chip can also generate complex composite gradients of diffusible and surface-bound guidance cues that mimic the conditions the growth cones realistically encounter in vivo. Applying this assay to Xenopus embryonic spinal neurons, we demonstrate that the presence of a surface-bound laminin gradient can finely tune the polarity of growth cone responses (repulsion or attraction) to gradients of brain-derived neurotrophic factor (BDNF), with the guidance outcome dependent on the mean BDNF concentration. The flexibility inherent in this assay holds significant potential for refinement of our understanding of nervous system development and regeneration, and can be extended to elucidate other cellular processes involving chemotaxis of shear-sensitive cells.
NASA Astrophysics Data System (ADS)
Huang, Shih-Chiang; Lee, Gwo-Bin; Chien, Fan-Ching; Chen, Shean-Jen; Chen, Wen-Janq; Yang, Ming-Chang
2006-07-01
This paper presents a novel microfluidic system with integrated molecular imprinting polymer (MIP) films designed for surface plasmon resonance (SPR) biosensing of multiple nanoscale biomolecules. The innovative microfluidic chip uses pneumatic microvalves and micropumps to transport a precise amount of the biosample through multiple microchannels to sensing regions containing the locally spin-coated MIP films. The signals of SPR biosensing are basically proportional to the number of molecules adsorbed on the MIP films. Hence, a precise control of flow rates inside microchannels is important to determine the adsorption amount of the molecules in the SPR/MIP chips. The integration of micropumps and microvalves can automate the sample introduction process and precisely control the amount of the sample injection to the microfluidic system. The proposed biochip enables the label-free biosensing of biomolecules in an automatic format, and provides a highly sensitive, highly specific and high-throughput detection performance. Three samples, i.e. progesterone, cholesterol and testosterone, are successfully detected using the developed system. The experimental results show that the proposed SPR/MIP microfluidic chip provides a comparable sensitivity to that of large-scale SPR techniques, but with reduced sample consumption and an automatic format. As such, the developed biochip has significant potential for a wide variety of nanoscale biosensing applications. The preliminary results of the current paper were presented at Transducers 2005, Seoul, Korea, 5-9 June 2005.
NASA Astrophysics Data System (ADS)
Vasilyan, Suren; Rivero, Michel; Schleichert, Jan; Halbedel, Bernd; Fröhlich, Thomas
2016-04-01
In this paper, we present an application for realizing high-precision horizontally directed force measurements on the order of several tens of nN in combination with high dead loads of about 10 N. The set-up is developed on the basis of two identical state-of-the-art electromagnetic force compensation (EMFC) high precision balances. The measurement resolution of horizontally directed single-axis quasi-dynamic forces is 20 nN over the working range of ±100 μN. The set-up operates in two different measurement modes: in the open-loop mode the mechanical deflection of the proportional lever is an indication of the acting force, whereas in the closed-loop mode it is the electric current applied to the coil inside the EMFC balance that compensates the deflection of the lever back to the offset zero position. The estimated loading frequency (cutoff frequency) of the set-up in the open-loop mode is about 0.18 Hz; in the closed-loop mode it is 0.7 Hz. One practical application for which the set-up is suitable is the flow rate measurement of weakly electrically conducting electrolytes by applying the contactless technique of Lorentz force velocimetry. Based on a previously developed set-up which uses a single EMFC balance, experimental, theoretical and numerical analyses of the thermo-mechanical properties of the supporting structure are presented.
Melo, Armindo; Ferreira, Isabel M P L V O; Mansilha, Catarina
2015-06-01
This work deals with the optimization of a rapid, cost-effective, and eco-friendly gas chromatography with mass spectrometry method for the simultaneous determination of four endocrine disruptor compounds in water matrices: estrone, 17β-estradiol, 17α-ethinylestradiol, and bisphenol A, which are currently considered to be of main concern in the field of water policy and could become candidates for future regulations. The method involves simultaneous derivatization and extraction of compounds by dispersive liquid-liquid microextraction followed by gas chromatography with mass spectrometry analysis. Derivatization and extraction parameters were optimized with the aid of an experimental design approach. An excellent linear response was achieved for all analytes (r² ≥ 0.999). Limits of detection and quantification are 0.003-0.005 and 0.0094-0.0164 μg/L, respectively. Intraday precision ranged between 1.1 and 12.6%, whereas interday precision ranged between 0.5 and 14.7%. For accuracy, bias values varied between -15.0 and 13.7%. Recoveries at three concentration levels ranged from 86.4 to 118.2%. The proposed method can be applied to the routine analysis of groundwater, river, sea, tap, and mineral water samples with excellent sensitivity, precision, and accuracy. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Progress in Integrative Biomaterial Systems to Approach Three-Dimensional Cell Mechanotransduction
Zhang, Ying; Liao, Kin; Li, Chuan; Lai, Alvin C.K.; Foo, Ji-Jinn
2017-01-01
Mechanotransduction between cells and the extracellular matrix regulates major cellular functions in physiological and pathological situations. The effect of mechanical cues on biochemical signaling triggered by cell–matrix and cell–cell interactions on model biomimetic surfaces has been extensively investigated by a combination of fabrication, biophysical, and biological methods. To simulate the in vivo physiological microenvironment in vitro, three dimensional (3D) microstructures with tailored bio-functionality have been fabricated on substrates of various materials. However, less attention has been paid to the design of 3D biomaterial systems with geometric variances, such as the possession of precise micro-features and/or bio-sensing elements for probing the mechanical responses of cells to the external microenvironment. Such precisely engineered 3D model experimental platforms pave the way for studying the mechanotransduction of multicellular aggregates under controlled geometric and mechanical parameters. Concurrently with the progress in 3D biomaterial fabrication, cell traction force microscopy (CTFM) developed in the field of cell biophysics has emerged as a highly sensitive technique for probing the mechanical stresses exerted by cells onto the opposing deformable surface. In the current work, we first review the recent advances in the fabrication of 3D micropatterned biomaterials which enable the seamless integration with experimental cell mechanics in a controlled 3D microenvironment. Then, we discuss the role of collective cell–cell interactions in the mechanotransduction of engineered tissue equivalents determined by such integrative biomaterial systems under simulated physiological conditions. PMID:28952551
NASA Astrophysics Data System (ADS)
Wurm, Michael
2017-04-01
More than forty years after the first detection of neutrinos from the Sun, the spectroscopy of solar neutrinos has proven to be an on-going success story. The long-standing puzzle about the observed solar neutrino deficit has been resolved by the discovery of neutrino flavor oscillations. Today's experiments have been able to solidify the standard MSW-LMA oscillation scenario by performing precise measurements over the whole energy range of the solar neutrino spectrum. This article reviews the enabling experimental technologies: on the one hand, multi-kiloton-scale water Cherenkov detectors performing measurements in the high-energy regime of the spectrum; on the other hand, ultrapure liquid-scintillator detectors that allow for a low-threshold analysis. The current experimental results on the fluxes, spectra and time variation of the different components of the solar neutrino spectrum will be presented, setting them in the context of both neutrino oscillation physics and the hydrogen fusion processes embedded in the Standard Solar Model. Finally, the physics potential of state-of-the-art detectors and a next generation of experiments based on novel techniques will be assessed in the context of the most interesting open questions in solar neutrino physics: a precise measurement of the vacuum-matter transition curve of electron-neutrino oscillation probability that offers a definitive test of the basic MSW-LMA scenario or the appearance of new physics; and a first detection of neutrinos from the CNO cycle that will provide new information on solar metallicity and stellar physics.
NASA Astrophysics Data System (ADS)
Zhao, Jianhua; Zhou, Songlin; Lu, Xianghui; Gao, Dianrong
2015-09-01
The double flapper-nozzle servo valve is widely used to launch and guide equipment. Due to the large instantaneous flow rate of the servo valve under certain operating conditions, the valve temperature can reach 120°C and the valve core and valve sleeve deform in a short amount of time, so the control precision of the servo valve decreases significantly and the valve core exhibits clamping stagnation. In order to solve the problem of degraded control accuracy and clamping stagnation of the servo valve under large temperature differences, a numerical simulation of the heat-fluid-solid coupling is performed using the finite element method. The simulation results show that the zero-position leakage of the servo valve is mainly affected by oil temperature and by changes in the fit clearance. The clamping stagnation is caused by warpage-deformation and fit-clearance reduction of the valve core and valve sleeve. The distributions of temperature and thermal deformation of the shell, valve core and valve sleeve, and the pressure, velocity and temperature fields of the flow channel, are also analyzed. The zero-position leakage and the electromagnet current during full-stroke motion of the valve core are tested using the electro-hydraulic servo-valve characteristic test bed of an aerospace science and technology corporation. The experimental results show that the behavior of the measured current at different oil temperatures is roughly identical to the simulated current. The current curve of the electromagnet is smooth when the oil temperature is below 80°C, but the current amplitude increases significantly and the curve becomes noisy when the oil temperature is above 80°C. The current becomes smooth again after the warped valve core and valve sleeve are reground, indicating that the clamping stagnation is caused by warpage-deformation and fit-clearance reduction of the valve core and valve sleeve. This paper simulates and tests the heat-fluid-solid coupling of the double flapper-nozzle servo valve, and the results provide a reference for the design of double flapper-nozzle force feedback servo valves.
Biomarkers of exposure to new and emerging tobacco delivery products.
Schick, Suzaynn F; Blount, Benjamin C; Jacob, Peyton; Saliba, Najat A; Bernert, John T; El Hellani, Ahmad; Jatlow, Peter; Pappas, R Steven; Wang, Lanqing; Foulds, Jonathan; Ghosh, Arunava; Hecht, Stephen S; Gomez, John C; Martin, Jessica R; Mesaros, Clementina; Srivastava, Sanjay; St Helen, Gideon; Tarran, Robert; Lorkiewicz, Pawel K; Blair, Ian A; Kimmel, Heather L; Doerschuk, Claire M; Benowitz, Neal L; Bhatnagar, Aruni
2017-09-01
Accurate and reliable measurements of exposure to tobacco products are essential for identifying and confirming patterns of tobacco product use and for assessing their potential biological effects in both human populations and experimental systems. Due to the introduction of new tobacco-derived products and the development of novel ways to modify and use conventional tobacco products, precise and specific assessments of exposure to tobacco are now more important than ever. Biomarkers that were developed and validated to measure exposure to cigarettes are being evaluated to assess their use for measuring exposure to these new products. Here, we review current methods for measuring exposure to new and emerging tobacco products, such as electronic cigarettes, little cigars, water pipes, and cigarillos. Rigorously validated biomarkers specific to these new products have not yet been identified. Here, we discuss the strengths and limitations of current approaches, including whether they provide reliable exposure estimates for new and emerging products. We provide specific guidance for choosing practical and economical biomarkers for different study designs and experimental conditions. Our goal is to help both new and experienced investigators measure exposure to tobacco products accurately and avoid common experimental errors. With the identification of the capacity gaps in biomarker research on new and emerging tobacco products, we hope to provide researchers, policymakers, and funding agencies with a clear action plan for conducting and promoting research on the patterns of use and health effects of these products.
Timing and efficacy of Ca2+ channel activation in hippocampal mossy fiber boutons.
Bischofberger, Josef; Geiger, Jörg R P; Jonas, Peter
2002-12-15
The presynaptic Ca2+ signal is a key determinant of transmitter release at chemical synapses. In cortical synaptic terminals, however, little is known about the kinetic properties of the presynaptic Ca2+ channels. To investigate the timing and magnitude of the presynaptic Ca2+ inflow, we performed whole-cell patch-clamp recordings from mossy fiber boutons (MFBs) in rat hippocampus. MFBs showed large high-voltage-activated Ca(2+) currents, with a maximal amplitude of approximately 100 pA at a membrane potential of 0 mV. Both activation and deactivation were fast, with time constants in the submillisecond range at a temperature of approximately 23 degrees C. An MFB action potential (AP) applied as a voltage-clamp command evoked a transient Ca2+ current with an average amplitude of approximately 170 pA and a half-duration of 580 microsec. A prepulse to +40 mV had only minimal effects on the AP-evoked Ca2+ current, indicating that presynaptic APs open the voltage-gated Ca2+ channels very effectively. On the basis of the experimental data, we developed a kinetic model with four closed states and one open state, linked by voltage-dependent rate constants. Simulations of the Ca2+ current could reproduce the experimental data, including the large amplitude and rapid time course of the current evoked by MFB APs. Furthermore, the simulations indicate that the shape of the presynaptic AP and the gating kinetics of the Ca2+ channels are tuned to produce a maximal Ca2+ influx during a minimal period of time. The precise timing and high efficacy of Ca2+ channel activation at this cortical glutamatergic synapse may be important for synchronous transmitter release and temporal information processing.
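As a purely illustrative companion to the kinetic scheme described above, the sketch below integrates a sequential four-closed-state/one-open-state Markov model under an action-potential-like voltage command and reports the resulting open probability. The rate expressions, rate constants, AP waveform and reversal potential are all assumptions chosen for illustration; they are not the parameters fitted in the paper.

```python
import numpy as np

# Sequential gating scheme C1 <-> C2 <-> C3 <-> C4 <-> O with voltage-dependent
# forward (a) and backward (b) rates.

def rates(V, a0=4.0, b0=1.0, ka=25.0, kb=25.0):
    a = a0 * np.exp(V / ka)    # opening-direction rate (1/ms), illustrative
    b = b0 * np.exp(-V / kb)   # closing-direction rate (1/ms), illustrative
    return a, b

def step(p, V, dt):
    """Forward-Euler update of the state-occupancy vector p (sums to 1)."""
    a, b = rates(V)
    Q = np.zeros((5, 5))
    for i in range(4):
        Q[i, i + 1] = a        # state i -> state i+1
        Q[i + 1, i] = b        # state i+1 -> state i
    Q -= np.diag(Q.sum(axis=1))
    return p + dt * (p @ Q)

dt = 0.005                                               # ms
t = np.arange(0.0, 3.0, dt)
V = -80.0 + 120.0 * np.exp(-((t - 0.5) / 0.2) ** 2)      # crude AP-like waveform (mV)
p = np.array([1.0, 0.0, 0.0, 0.0, 0.0])                  # all channels start closed
popen = []
for v in V:
    p = step(p, v, dt)
    popen.append(p[-1])
popen = np.array(popen)
ICa = -popen * (V - 60.0)   # relative Ca2+ current, assuming Erev ~ +60 mV
print(f"peak open probability ~ {popen.max():.2f}")
```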
Aspects of the development of ultrabroadband precision directional couplers
NASA Astrophysics Data System (ADS)
Kats, B. M.; Larionov, A. I.; Meshchanov, V. P.
1991-03-01
The synthesis of ultrabroadband coaxial directional couplers (DCs) with improved characteristics is examined. Precision DCs with operating ranges of 0.6-12.5 and 1.5-18.0 GHz have been developed and experimentally tested. The devices are realized on the basis of coupled coaxial lines of a new type.
NASA Technical Reports Server (NTRS)
Anderson, E. H.; Moore, D. M.; Fanson, J. L.; Ealey, M. A.
1990-01-01
The design and development of a zero stiction active member containing piezoelectric and electrostrictive actuator motors is presented. The active member is intended for use in submicron control of structures. Experimental results are shown which illustrate actuator and device characteristics relevant to precision control applications.
High Precision Pressure Measurement with a Funnel
ERIC Educational Resources Information Center
Lopez-Arias, T.; Gratton, L. M.; Oss, S.
2008-01-01
A simple experimental device for high precision differential pressure measurements is presented. Its working mechanism recalls that of a hydraulic press, where pressure is supplied by insufflating air under a funnel. As an application, we measure air pressure inside a soap bubble. The soap bubble is inflated and connected to a funnel which is…
NASA Technical Reports Server (NTRS)
Wayner, P. C., Jr.; Plawsky, J. L.; Wong, Harris
2004-01-01
The major accomplishments of the experimental portion of the research were documented in Ling Zheng's doctoral dissertation. Using Pentane, he obtained a considerable amount of data on the stability and heat transfer characteristics of an evaporating meniscus. The important point is that experimental equipment to obtain data on the stability and heat transfer characteristics of an evaporating meniscus was built and successfully operated. The data and subsequent analyses were accepted by the Journal of Heat Transfer for publication in 2004 [PU4]. The work was continued by a new graduate student using HFE-7000 [PU3] and then Pentane at lower heat fluxes. The Pentane results are being analyzed for publication. The experimental techniques are currently being used in our other NASA Grant. The oscillation of the contact line observed in the experiments involves evaporation (retraction part) and spreading. Since both processes occur with finite contact angles, it is important to derive a precise equation of the intermolecular forces (disjoining pressure) valid for non-zero contact angles. This theoretical derivation was accepted for publication by the Journal of Fluid Mechanics [PU5]. The evaporation process near the contact line is complicated, and an idealized micro heat pipe has been proposed to help in elucidating the detailed evaporation process [manuscripts in preparation].
Experimental quantum simulations of many-body physics with trapped ions.
Schneider, Ch; Porras, Diego; Schaetz, Tobias
2012-02-01
Direct experimental access to some of the most intriguing quantum phenomena is not granted due to the lack of precise control of the relevant parameters in their naturally intricate environment. Their simulation on conventional computers is impossible, since quantum behaviour arising with superposition states or entanglement is not efficiently translatable into the classical language. However, one could gain deeper insight into complex quantum dynamics by experimentally simulating the quantum behaviour of interest in another quantum system, where the relevant parameters and interactions can be controlled and robust effects detected sufficiently well. Systems of trapped ions provide unique control of both the internal (electronic) and external (motional) degrees of freedom. The mutual Coulomb interaction between the ions allows for large interaction strengths at comparatively large mutual ion distances, enabling individual control and readout. Systems of trapped ions therefore represent a prominent platform in several physical disciplines, for example, quantum information processing or metrology. Here, we will give an overview of different trapping techniques of ions as well as implementations for coherent manipulation of their quantum states and discuss the related theoretical basics. We then report on the experimental and theoretical progress in simulating quantum many-body physics with trapped ions and present current approaches for scaling up to more ions and higher-dimensional systems.
ERIC Educational Resources Information Center
National Alliance of Business, Inc., Washington, DC.
CertainTeed's Precision Strike training program was designed to close the gaps between the current status of its workplace and where that work force needed to be to compete successfully in global markets. Precision Strike included Skills and Knowledge in Lifelong Learning (SKILL) customized, computerized lessons in basic skills, one-on-one…
Yale High Energy Physics Research: Precision Studies of Reactor Antineutrinos
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heeger, Karsten M.
2014-09-13
This report presents experimental research at the intensity frontier of particle physics with particular focus on the study of reactor antineutrinos and the precision measurement of neutrino oscillations. The experimental neutrino physics group of Professor Heeger and Senior Scientist Band at Yale University has had leading responsibilities in the construction and operation of the Daya Bay Reactor Antineutrino Experiment and made critical contributions to the discovery of non-zero θ13. Heeger and Band led the Daya Bay detector management team and are now overseeing the operations of the antineutrino detectors. Postdoctoral researchers and students in this group have made leading contributions to the Daya Bay analysis including the prediction of the reactor antineutrino flux and spectrum, the analysis of the oscillation signal, and the precision determination of the target mass yielding unprecedented precision in the relative detector uncertainty. Heeger's group is now leading an R&D effort towards a short-baseline oscillation experiment, called PROSPECT, at a US research reactor and the development of antineutrino detectors with advanced background discrimination.
Fast and precise thermoregulation system in physiological brain slice experiment
NASA Astrophysics Data System (ADS)
Sheu, Y. H.; Young, M. S.
1995-12-01
We have developed a fast and precise thermoregulation system incorporated within a physiological experiment on a brain slice. The thermoregulation system is used to control the temperature of a recording chamber in which the brain slice is placed. It consists of a single-chip microcomputer, a set command module, a display module, and a fuzzy logic controller (FLC) module. A fuzzy control algorithm was developed and a fuzzy logic controller then designed for achieving fast, smooth thermostatic performance and providing precise temperature control with accuracy to 0.1 °C, from room temperature through 42 °C (the experimental temperature range). The fuzzy logic controller is implemented by microcomputer software and related peripheral hardware circuits. Six operating modes of thermoregulation are offered with the system, and this can be further extended according to experimental needs. The test results of this study demonstrate that the fuzzy control method is easily implemented by a microcomputer and also verify that this method provides a simple way to achieve fast and precise high-performance control of a nonlinear thermoregulation system in a physiological brain slice experiment.
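To illustrate the flavor of such a controller, here is a minimal single-input fuzzy sketch mapping temperature error to heater power with triangular membership functions and weighted-average defuzzification. The membership breakpoints, rule set, and output levels are invented for illustration and are much simpler than the controller described in the paper, which may differ in its inputs and rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function with vertices a <= b <= c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_heater_power(error):
    """error = setpoint - measured temperature (deg C).
    Three rules: negative error -> low power, near zero -> medium, positive -> high."""
    mu_neg = tri(error, -5.0, -2.5, 0.0)
    mu_zero = tri(error, -1.0, 0.0, 1.0)
    mu_pos = tri(error, 0.0, 2.5, 5.0)
    powers = (0.0, 40.0, 100.0)          # rule consequents: percent heater power
    weights = (mu_neg, mu_zero, mu_pos)
    s = sum(weights)
    return sum(w * p for w, p in zip(weights, powers)) / s if s else 40.0

for e in (-3.0, -0.5, 0.0, 0.8, 3.0):
    print(f"error {e:+.1f} C -> heater power {fuzzy_heater_power(e):.0f}%")
```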
Experimental Demonstration of Higher Precision Weak-Value-Based Metrology Using Power Recycling
NASA Astrophysics Data System (ADS)
Wang, Yi-Tao; Tang, Jian-Shun; Hu, Gang; Wang, Jian; Yu, Shang; Zhou, Zong-Quan; Cheng, Ze-Di; Xu, Jin-Shi; Fang, Sen-Zhi; Wu, Qing-Lin; Li, Chuan-Feng; Guo, Guang-Can
2016-12-01
The weak-value-based metrology is very promising and has attracted a lot of attention in recent years because of its remarkable ability in signal amplification. However, it is suggested that the upper limit of the precision of this metrology cannot exceed that of classical metrology because of the low sample size caused by the probe loss during postselection. Nevertheless, a recent proposal shows that this probe loss can be reduced by the power-recycling technique, and thus enhance the precision of weak-value-based metrology. Here we experimentally realize the power-recycled interferometric weak-value-based beam-deflection measurement and obtain the amplitude of the detected signal and white noise by discrete Fourier transform. Our results show that the detected signal can be strengthened by power recycling, and the power-recycled weak-value-based signal-to-noise ratio can surpass the upper limit of the classical scheme, corresponding to the shot-noise limit. This work sheds light on higher precision metrology and explores the real advantage of the weak-value-based metrology over classical metrology.
Uncertainty Analysis of Instrument Calibration and Application
NASA Technical Reports Server (NTRS)
Tripp, John S.; Tcheng, Ping
1999-01-01
Experimental aerodynamic researchers require estimated precision and bias uncertainties of measured physical quantities, typically at 95 percent confidence levels. Uncertainties of final computed aerodynamic parameters are obtained by propagation of individual measurement uncertainties through the defining functional expressions. In this paper, rigorous mathematical techniques are extended to determine precision and bias uncertainties of any instrument-sensor system. Through this analysis, instrument uncertainties determined through calibration are now expressed as functions of the corresponding measurement for linear and nonlinear univariate and multivariate processes. Treatment of correlated measurement precision error is developed. During laboratory calibration, calibration standard uncertainties are assumed to be an order of magnitude less than those of the instrument being calibrated. Often calibration standards do not satisfy this assumption. This paper applies rigorous statistical methods for inclusion of calibration standard uncertainty and covariance due to the order of their application. The effects of mathematical modeling error on calibration bias uncertainty are quantified. The effects of experimental design on uncertainty are analyzed. The importance of replication is emphasized, and techniques for estimation of both bias and precision uncertainties using replication are developed. Statistical tests for stationarity of calibration parameters over time are obtained.
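As a simple numerical companion to the idea of propagating individual measurement uncertainties through a defining functional expression, the sketch below uses first-order (root-sum-square) propagation with numerical partial derivatives for independent error sources. The example function, its input values, and their uncertainties are illustrative assumptions; the correlated-error and calibration-standard treatments developed in the paper are not reproduced here.

```python
import math

def propagate(f, values, uncertainties, rel_step=1e-6):
    """Propagate independent uncertainties through f by a first-order Taylor
    expansion, using finite-difference estimates of the partial derivatives."""
    f0 = f(*values)
    var = 0.0
    for i, (v, u) in enumerate(zip(values, uncertainties)):
        h = rel_step * (abs(v) or 1.0)
        pert = list(values)
        pert[i] = v + h
        dfdx = (f(*pert) - f0) / h
        var += (dfdx * u) ** 2
    return f0, math.sqrt(var)

# Illustrative example: dynamic pressure q = 0.5 * rho * V**2
q, u_q = propagate(lambda rho, V: 0.5 * rho * V**2,
                   values=(1.20, 50.0), uncertainties=(0.01, 0.5))
print(f"q = {q:.1f} +/- {u_q:.1f} Pa (same confidence level as the inputs)")
```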
Thought-shape fusion in anorexia and bulimia nervosa: a comparative experimental study.
Kostopoulou, Myrsini; Varsou, Eleftheria; Stalikas, Anastassios
2013-09-01
'Thought-shape fusion' (TSF) is a cognitive distortion specific to patients with eating disorders and occurs when the thought about eating a forbidden food increases a person's estimate of her weight/shape, elicits a perception of moral wrongdoing and makes her feel fat. This study aimed to experimentally induce, study and compare TSF between patients with bulimia nervosa (BN) and patients with anorexia nervosa (AN). Thirty-one patients diagnosed with a current eating disorder, of whom 20 met DSM-IV-TR criteria for BN and 11 for AN, participated in a mixed-model experimental design with the aim of eliciting TSF and investigating the effects of corrective behaviors (checking and mental neutralizing). Verbal analogue scales constituted the main outcome measures. TSF was experimentally induced and expressed in a similar way in both clinical groups, apart from 'feeling fat' which was higher in BN patients. TSF induction triggered heightened levels of anxiety, guilt and urges to engage in corrective behaviors in both groups. Body dissatisfaction only increased in the BN patients. Mental neutralizing and to a lesser extent checking reduced most effects of the experimental procedure, but this effect was larger for BN patients. The nature of TSF seems to have similarities between BN and AN patients; however, the precise connection between TSF and different types of eating disorders remains to be explored in future clinical trials.
High precision triangular waveform generator
Mueller, Theodore R.
1983-01-01
An ultra-linear ramp generator having separately programmable ascending and descending ramp rates and voltages is provided. Two constant current sources provide the ramp through an integrator. Switching of the current at current source inputs rather than at the integrator input eliminates switching transients and contributes to the waveform precision. The triangular waveforms produced by the waveform generator are characterized by accurate reproduction and low drift over periods of several hours. The ascending and descending slopes are independently selectable.
High-precision triangular-waveform generator
Mueller, T.R.
1981-11-14
An ultra-linear ramp generator having separately programmable ascending and descending ramp rates and voltages is provided. Two constant current sources provide the ramp through an integrator. Switching of the current at current source inputs rather than at the integrator input eliminates switching transients and contributes to the waveform precision. The triangular waveforms produced by the waveform generator are characterized by accurate reproduction and low drift over periods of several hours. The ascending and descending slopes are independently selectable.
Griendling, Kathy K.; Touyz, Rhian M.; Zweier, Jay L.; Dikalov, Sergey; Chilian, William; Chen, Yeong-Renn; Harrison, David G.; Bhatnagar, Aruni
2017-01-01
Reactive oxygen species and reactive nitrogen species are biological molecules that play important roles in cardiovascular physiology and contribute to disease initiation, progression, and severity. Because of their ephemeral nature and rapid reactivity, these species are difficult to measure directly with high accuracy and precision. In this statement, we review current methods for measuring these species and the secondary products they generate and suggest approaches for measuring redox status, oxidative stress, and the production of individual reactive oxygen and nitrogen species. We discuss the strengths and limitations of different methods and the relative specificity and suitability of these methods for measuring the concentrations of reactive oxygen and reactive nitrogen species in cells, tissues, and biological fluids. We provide specific guidelines, through expert opinion, for choosing reliable and reproducible assays for different experimental and clinical situations. These guidelines are intended to help investigators and clinical researchers avoid experimental error and ensure high-quality measurements of these important biological species. PMID:27418630
Maximum Likelihood Analysis in the PEN Experiment
NASA Astrophysics Data System (ADS)
Lehman, Martin
2013-10-01
The experimental determination of the π+ → e+ ν (γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world-average experimental precision of 3.3 × 10^-3 to 5 × 10^-4 using a stopped-beam approach. During runs in 2008-10, PEN has acquired over 2 × 10^7 πe2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π+ → e+ ν, π+ → μ+ ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc.). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
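The following is a toy sketch of the unbinned maximum-likelihood idea described above: each event's observable is assigned a probability under each process PDF, and the process fractions are chosen to maximize the total likelihood. For brevity it uses only two processes with one-dimensional Gaussian placeholder PDFs and a simple grid scan; the actual PEN analysis uses Monte Carlo verified PDFs of several observables for five processes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder "total energy" PDFs for two processes (arbitrary units).
def pdf_sig(E):  # stands in for the pi+ -> e+ nu peak
    return np.exp(-0.5 * ((E - 70.0) / 3.0) ** 2) / (3.0 * np.sqrt(2 * np.pi))

def pdf_bkg(E):  # stands in for the dominant pi+ -> mu+ nu chain
    return np.exp(-0.5 * ((E - 50.0) / 8.0) ** 2) / (8.0 * np.sqrt(2 * np.pi))

# Toy data set with a known small signal fraction (0.01)
E = np.concatenate([rng.normal(70, 3, 200), rng.normal(50, 8, 19800)])

def nll(f_sig):
    """Negative log-likelihood of the two-component mixture."""
    L = f_sig * pdf_sig(E) + (1 - f_sig) * pdf_bkg(E)
    return -np.sum(np.log(L))

grid = np.linspace(0.001, 0.05, 500)
best = grid[np.argmin([nll(f) for f in grid])]
print(f"maximum-likelihood signal fraction ~ {best:.4f} (true value 0.01)")
```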
The tracking analysis in the Q-weak experiment
Pan, J.; Androic, D.; Armstrong, D. S.; ...
2016-11-21
Here, the Q-weak experiment at Jefferson Laboratory measured the parity-violating asymmetry (A_PV) in elastic electron-proton scattering at small momentum transfer squared (Q² = 0.025 (GeV/c)²), with the aim of extracting the proton's weak charge (Q_W^p) to an accuracy of 5%. As one of the major sources of uncertainty in Q_W^p, Q² needs to be determined to ~1% so as to reach the proposed experimental precision. For this purpose, two sets of high resolution tracking chambers were employed in the experiment, to measure tracks before and after the magnetic spectrometer. Data collected by the tracking system were then reconstructed with dedicated software into individual electron trajectories for experimental kinematics determination. The Q-weak kinematics and the analysis scheme for tracking data are briefly described here. The sources that contribute to the uncertainty of Q² are discussed, and the current analysis status is reported.
A method to identify and analyze biological programs through automated reasoning
Yordanov, Boyan; Dunn, Sara-Jane; Kugler, Hillel; Smith, Austin; Martello, Graziano; Emmott, Stephen
2016-01-01
Predictive biology is elusive because rigorous, data-constrained, mechanistic models of complex biological systems are difficult to derive and validate. Current approaches tend to construct and examine static interaction network models, which are descriptively rich, but often lack explanatory and predictive power, or dynamic models that can be simulated to reproduce known behavior. However, in such approaches implicit assumptions are introduced as typically only one mechanism is considered, and exhaustively investigating all scenarios is impractical using simulation. To address these limitations, we present a methodology based on automated formal reasoning, which permits the synthesis and analysis of the complete set of logical models consistent with experimental observations. We test hypotheses against all candidate models, and remove the need for simulation by characterizing and simultaneously analyzing all mechanistic explanations of observed behavior. Our methodology transforms knowledge of complex biological processes from sets of possible interactions and experimental observations to precise, predictive biological programs governing cell function. PMID:27668090
Use of promethazine to hasten adaptation to provocative motion
NASA Technical Reports Server (NTRS)
Lackner, J. R.; Graybiel, A.
1994-01-01
In an earlier study, the authors found that severely motion sick individuals could be greatly relieved of their symptoms by intramuscular injections of promethazine (50 mg) or scopolamine (0.5 mg). Comparable 50-mg injections of promethazine also have been found effective in alleviating symptoms of space motion sickness. The concern has arisen, however, that such drugs may delay or retard the acquisition of adaptation to stressful environments. In the current study, we controlled arousal using a mental arithmetic task and precisely equated the exposure history (number of head movements during rotation) of a placebo control group and an experimental group who had received promethazine. No differences in total adaptation or in rates of adaptation were present between the two groups. Another experimental group also received promethazine and was allowed to make as many head movements as they could, before reaching nausea, up to 800. This group showed a greater level of adaptation than the placebo group. These results suggest a strategy, described here, for dealing with space motion sickness.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, Nagendra
2011-12-09
Despite the widely discussed role of whistler waves in mediating magnetic reconnection (MR), the direct connection between such waves and the MR has not been demonstrated by comparing the characteristic temporal and spatial features of the waves and the MR process. Using the whistler wave dispersion relation, we theoretically predict the experimentally measured rise time (τ_rise) of a few microseconds for the fast rising MR rate in the Versatile Toroidal Facility at MIT. The rise time is closely given by the inverse of the frequency bandwidth of the whistler waves generated in the evolving current sheet. The wave frequencies lie much above the ion cyclotron frequency, but they are limited to less than 0.1% of the electron cyclotron frequency in the argon plasma. The maximum normalized MR rate R = 0.35 measured experimentally is precisely predicted by the angular dispersion of the whistler waves.
Nano-Al Based Energetics: Rapid Heating Studies and a New Preparation Technique
NASA Astrophysics Data System (ADS)
Sullivan, Kyle; Kuntz, Josh; Gash, Alex; Zachariah, Michael
2011-06-01
Nano-Al based thermites have become an attractive alternative to traditional energetic formulations due to their increased energy density and high reactivity. Understanding the intrinsic reaction mechanism has been a difficult task, largely due to the lack of experimental techniques capable of rapidly and uniformly heating a sample (~10^4-10^8 K/s). The current work presents several studies on nano-Al based thermites using rapid heating techniques. A new mechanism, termed a Reactive Sintering Mechanism, is proposed for nano-Al based thermites. In addition, new experimental techniques for nanocomposite thermite deposition onto thin Pt electrodes will be discussed. This combined technique will offer more precise control of the deposition, and will serve to further our understanding of the intrinsic reaction mechanism of rapidly heated energetic systems. An improved mechanistic understanding will lead to the development of optimized formulations and architectures. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Application of photon Doppler velocimetry to direct impact Hopkinson pressure bars
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lea, Lewis J., E-mail: ll379@cam.ac.uk; Jardine, Andrew P.
2016-02-15
Direct impact Hopkinson pressure bar systems offer many potential advantages over split Hopkinson pressure bars, including access to higher strain rates, higher strains for equivalent striker velocity and system length, lower dispersion, and faster achievement of force equilibrium. Currently, these advantages are gained at the expense of all information about the striker impacted specimen face, preventing the experimental determination of force equilibrium, and requiring approximations to be made on the sample deformation history. In this paper, we discuss an experimental method and complementary data analysis for using photon Doppler velocimetry to measure surface velocities of the striker and output bars in a direct impact bar experiment, allowing similar data to be recorded as in a split bar system. We discuss extracting velocity and force measurements, and the precision of measurements. Results obtained using the technique are compared to equivalent split bar tests, showing improved stress measurements for the lowest and highest strains in fully dense metals, and improvement for all strains in slow and non-equilibrating materials.
A Guide for the Design of Pre-clinical Studies on Sex Differences in Metabolism.
Mauvais-Jarvis, Franck; Arnold, Arthur P; Reue, Karen
2017-06-06
In animal models, the physiological systems involved in metabolic homeostasis exhibit a sex difference. Investigators often use male rodents because they show metabolic disease better than females. Thus, females are not used precisely because of an acknowledged sex difference that represents an opportunity to understand novel factors reducing metabolic disease more in one sex than the other. The National Institutes of Health (NIH) mandate to consider sex as a biological variable in preclinical research places new demands on investigators and peer reviewers who often lack expertise in model systems and experimental paradigms used in the study of sex differences. This Perspective discusses experimental design and interpretation in studies addressing the mechanisms of sex differences in metabolic homeostasis and disease, using animal models and cells. We also highlight current limitations in research tools and attitudes that threaten to delay progress in studies of sex differences in basic animal research. Copyright © 2017 Elsevier Inc. All rights reserved.
Broadband Lidar Technique for Precision CO2 Measurement
NASA Technical Reports Server (NTRS)
Heaps, William S.
2008-01-01
Presented are preliminary experimental results and sensitivity measurements, along with a discussion of our new CO2 lidar system under development. The system employs an erbium-doped fiber amplifier (EDFA) and a superluminescent light emitting diode (SLED) as the source, and our previously developed Fabry-Perot interferometer subsystem as the detector. Global measurement of the carbon dioxide column with the aim of discovering and quantifying unknown sources and sinks has been a high priority for the last decade. The goal of the Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) mission is to significantly enhance the understanding of the role of CO2 in the global carbon cycle. The National Academy of Sciences recommended in its decadal survey that NASA put in orbit a CO2 lidar to satisfy this long-standing need. Existing passive sensors suffer from two shortcomings. Their measurement precision can be compromised by the path length uncertainties arising from scattering within the atmosphere. Also, passive sensors using sunlight cannot observe the column at night. Both of these difficulties can be ameliorated by lidar techniques. Lidar systems present their own set of problems, however. Temperature changes in the atmosphere alter the cross section for individual CO2 absorption features, while the different atmospheric pressures encountered passing through the atmosphere broaden the absorption lines. Currently proposed lidars require multiple lasers operating at multiple wavelengths simultaneously in order to untangle these effects. The current goal is to develop an ultra-precise, inexpensive new lidar system for precise column measurements of CO2 changes in the lower atmosphere that uses a Fabry-Perot interferometer based system as the detector portion of the instrument and replaces the narrow band laser commonly used in lidars with the newly available high power SLED as the source. This approach reduces the number of individual lasers used in the system from three or more to one - considerably reducing the risk of failure. It also tremendously reduces the requirement for wavelength stability in the source, putting this responsibility instead on the Fabry-Perot subsystem.
Query-by-example surgical activity detection.
Gao, Yixin; Vedula, S Swaroop; Lee, Gyusung I; Lee, Mija R; Khudanpur, Sanjeev; Hager, Gregory D
2016-06-01
Easy acquisition of surgical data opens many opportunities to automate skill evaluation and teaching. Current technology to search tool motion data for surgical activity segments of interest is limited by the need for manual pre-processing, which can be prohibitive at scale. We developed a content-based information retrieval method, query-by-example (QBE), to automatically detect activity segments within surgical data recordings of long duration that match a query. The example segment of interest (query) and the surgical data recording (target trial) are time series of kinematics. Our approach includes an unsupervised feature learning module using a stacked denoising autoencoder (SDAE), two scoring modules based on asymmetric subsequence dynamic time warping (AS-DTW) and template matching, respectively, and a detection module. A distance matrix of the query against the trial is computed using the SDAE features, followed by AS-DTW combined with template scoring, to generate a ranked list of candidate subsequences (substrings). To evaluate the quality of the ranked list against the ground-truth, thresholding conventional DTW distances and bipartite matching are applied. We computed the recall, precision, F1-score, and a Jaccard index-based score on three experimental setups. We evaluated our QBE method using a suture throw maneuver as the query, on two tool motion datasets (JIGSAWS and MISTIC-SL) captured in a training laboratory. We observed a recall of 93, 90 and 87 % and a precision of 93, 91, and 88 % with same surgeon same trial (SSST), same surgeon different trial (SSDT) and different surgeon (DS) experiment setups on JIGSAWS, and a recall of 87, 81 and 75 % and a precision of 72, 61, and 53 % with SSST, SSDT and DS experiment setups on MISTIC-SL, respectively. We developed a novel, content-based information retrieval method to automatically detect multiple instances of an activity within long surgical recordings. Our method demonstrated adequate recall across different complexity datasets and experimental conditions.
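As a rough illustration of the matching step described above, the sketch below implements a plain subsequence DTW scan of a query against a longer trial. It is a minimal sketch in Python, not the authors' code: the SDAE feature learning and template-scoring modules are omitted, raw Euclidean frame distances stand in for learned features, and all data in the usage example are synthetic.

```python
import numpy as np

def subsequence_dtw_end(query, target):
    """Return (cost, end_index) of the best-matching subsequence of target."""
    n, m = len(query), len(target)
    # Frame-to-frame Euclidean distances between query and target samples.
    dist = np.linalg.norm(query[:, None, :] - target[None, :, :], axis=2)
    acc = np.full((n, m), np.inf)
    # First query frame may align with any target frame (subsequence matching).
    acc[0, :] = dist[0, :]
    for i in range(1, n):
        for j in range(m):
            prev = acc[i - 1, j]
            if j > 0:
                prev = min(prev, acc[i - 1, j - 1], acc[i, j - 1])
            acc[i, j] = dist[i, j] + prev
    end = int(np.argmin(acc[-1, :]))
    return acc[-1, end], end

# Toy usage: a noisy copy of the query is hidden inside a longer trial.
rng = np.random.default_rng(0)
query = rng.normal(size=(20, 3))
trial = np.vstack([rng.normal(size=(50, 3)),
                   query + 0.05 * rng.normal(size=(20, 3)),
                   rng.normal(size=(40, 3))])
cost, end = subsequence_dtw_end(query, trial)
print(f"best match ends at frame {end} (expected near 69), cost {cost:.2f}")
```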
Next-generation Lunar Laser Retroreflectors for Precision Tests of General Relativity
NASA Astrophysics Data System (ADS)
Ciocci, Emanuele; dell'Agnello, Simone; Delle Monache, Giovanni; Martini, Manuele; Contessa, Stefania; Porcelli, Luca; Tibuzzi, Mattia; Salvatori, Lorenzo; Patrizi, Giordano; Maiello, Mauro; Intaglietta, Nicola; Mondaini, Chiara; Currie, Douglas; Chandler, John; Bianco, Giuseppe; Murphy, Tom
2016-04-01
Since 1969, Lunar Laser Ranging (LLR) to the Apollo Cube Corner Retroreflectors (CCRs) has supplied almost all significant tests of General Relativity (GR). When first installed in the 1970s, the Apollo CCR geometry contributed only a negligible fraction of the ranging error budget. Today, because of lunar librations, this contribution dominates the error budget, limiting the precision of the experimental tests of gravitational theories. MoonLIGHT-2 (Moon Laser Instrumentation for General relativity High-accuracy Tests) is a new-generation LLR payload developed by the SCF_Lab (http://www.lnf.infn.it/esperimenti/etrusco/) at INFN-LNF in collaboration with the University of Maryland. With its unique design of a single large CCR unaffected by librations, MoonLIGHT-2 can improve the precision of the measurement of the lunar geodetic precession and of other General Relativity (GR) tests by up to a factor of 100 with respect to the Apollo CCRs. MoonLIGHT-2 is approved to be launched with the Moon Express mission MEX-1 and will be deployed on the lunar surface in 2018. MoonLIGHT-2 is also proposed for the Roscosmos mission Luna-27. To validate and optimize MoonLIGHT-2 for MEX-1, the SCF_Lab is carrying out a unique experimental test called the SCF-Test: the concurrent measurement of the optical Far Field Diffraction Pattern (FFDP) and the temperature distribution of the CCR under thermal conditions produced with a close-match solar simulator and a simulated space environment. We perform tests of GR with current LLR data as well as simulations of the expected improvement in GR tests provided by MoonLIGHT-2, using the Planetary Ephemeris Program in collaboration with CfA. Our ultimate goal is to improve GR tests by a factor of up to 100 and to provide constraints on new gravitational theories such as non-minimally coupled gravity and spacetime torsion.
ERIC Educational Resources Information Center
Reid, Robert L.; And Others
This guide outlines the competency-based, two-year precision optics curriculum that the American Precision Optics Manufacturers Association has proposed to fill the void that it suggests will soon exist as many of the master opticians currently employed retire. The model, which closely resembles the old European apprenticeship model, calls for 300…
High-precision branching ratio measurement for the superallowed β+ emitter Ga62
NASA Astrophysics Data System (ADS)
Finlay, P.; Ball, G. C.; Leslie, J. R.; Svensson, C. E.; Towner, I. S.; Austin, R. A. E.; Bandyopadhyay, D.; Chaffey, A.; Chakrawarthy, R. S.; Garrett, P. E.; Grinyer, G. F.; Hackman, G.; Hyland, B.; Kanungo, R.; Leach, K. G.; Mattoon, C. M.; Morton, A. C.; Pearson, C. J.; Phillips, A. A.; Ressler, J. J.; Sarazin, F.; Savajols, H.; Schumaker, M. A.; Wong, J.
2008-08-01
A high-precision branching ratio measurement for the superallowed β+ decay of Ga62 was performed at the Isotope Separator and Accelerator (ISAC) radioactive ion beam facility. The 8π spectrometer, an array of 20 high-purity germanium detectors, was employed to detect the γ rays emitted following Gamow-Teller and nonanalog Fermi β+ decays of Ga62, and the SCEPTAR plastic scintillator array was used to detect the emitted β particles. Thirty γ rays were identified following Ga62 decay, establishing the superallowed branching ratio to be 99.858(8)%. Combined with the world-average half-life and a recent high-precision Q-value measurement for Ga62, this branching ratio yields an ft value of 3074.3±1.1 s, making the ft value for Ga62 among the most precisely determined of all superallowed decays. Comparison between the superallowed ft value determined in this work and the world-average corrected Ft value allows the large nuclear-structure-dependent correction for Ga62 decay to be experimentally determined from the CVC hypothesis to better than 7% of its own value, the most precise experimental determination for any superallowed emitter. These results provide a benchmark for the refinement of the theoretical description of isospin-symmetry breaking in A⩾62 superallowed decays.
Quantum Hall effect with small numbers of vortices in Bose-Einstein condensates
NASA Astrophysics Data System (ADS)
Byrnes, Tim; Dowling, Jonathan P.
2015-08-01
When vortices are displaced in Bose-Einstein condensates (BECs), the Magnus force gives the system a momentum transverse to the direction of the displacement. We show that BECs in long channels with vortices exhibit a quantization of the current response with respect to the spatial vortex distribution. The quantization originates from the well-known topological property of the phase around a vortex; it is an integer multiple of 2π. In a way similar to that of the integer quantum Hall effect, the current along the channel is related to this topological phase and can be extracted from two experimentally measurable quantities: the total momentum of the BEC and the spatial distribution. The quantization is in units of m/2h, where m is the mass of the atoms and h is Planck's constant. We derive an exact vortex momentum-displacement relation for BECs in long channels under general circumstances. Our results present the possibility that the configuration described here can be used as a novel way of measuring the mass of the atoms in the BEC using a topological invariant of the system. If an accurate determination of the plateaus is experimentally possible, this offers the possibility of a topological quantum mass standard and a precise determination of the fine structure constant.
SHORT COMMUNICATION: Time measurement device with four femtosecond stability
NASA Astrophysics Data System (ADS)
Panek, Petr; Prochazka, Ivan; Kodet, Jan
2010-10-01
We present the experimental results of extremely precise timing in the sense of time-of-arrival measurements in a local time scale. The timing device designed and constructed in our laboratory is based on a new concept using a surface acoustic wave filter as a time interpolator. Construction of the device is briefly described. The experiments described were focused on evaluating the timing precision and stability. Low-jitter test pulses with a repetition frequency of 763 Hz were generated synchronously to the local time base and their times of arrival were measured. The resulting precision of a single measurement was typically 900 fs RMS, and a timing stability TDEV of 4 fs was achieved for time intervals in the range from 300 s to 2 h. To our knowledge this is the best value reported to date for the stability of a timing device. The experimental results are discussed and possible improvements are proposed.
Spherical subjective refraction with a novel 3D virtual reality based system.
Pujol, Jaume; Ondategui-Parra, Juan Carlos; Badiella, Llorenç; Otero, Carles; Vilaseca, Meritxell; Aldaba, Mikel
To conduct a clinical validation of a virtual reality-based experimental system that is able to assess the spherical subjective refraction, simplifying the methodology of ocular refraction. For the agreement assessment, spherical refraction measurements were obtained from 104 eyes of 52 subjects using three different methods: subjectively with the experimental prototype (Subj.E) and the classical subjective refraction (Subj.C); and objectively with the WAM-5500 autorefractor (WAM). To evaluate the precision (intra- and inter-observer variability) of each refractive tool independently, 26 eyes were measured on four occasions. With regard to agreement, the mean difference (±SD) for the spherical equivalent (M) between the new experimental subjective method (Subj.E) and the classical subjective refraction (Subj.C) was -0.034D (±0.454D). The corresponding 95% Limits of Agreement (LoA) were (-0.856D, 0.924D). In relation to precision, the intra-observer mean difference for the M component was 0.034±0.195D for the Subj.C, 0.015±0.177D for the WAM and 0.072±0.197D for the Subj.E. Inter-observer variability showed worse precision values, although still clinically valid (below 0.25D) for all instruments. The spherical equivalent obtained with the new experimental system was precise and in good agreement with the classical subjective routine. The algorithm implemented in this new system and its optical configuration have been shown to be a first valid step for spherical error correction in a semiautomated way. Copyright © 2016 Spanish General Council of Optometry. Published by Elsevier España, S.L.U. All rights reserved.
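The agreement figures quoted above (mean difference, SD of differences, and 95% limits of agreement) follow the standard Bland-Altman construction. The sketch below is a minimal illustration of that computation, assuming paired spherical-equivalent values; the numbers in the example are invented and are not the study data.

```python
import numpy as np

def limits_of_agreement(method_a, method_b):
    """Mean difference, SD of differences, and 95% Bland-Altman limits of agreement."""
    d = np.asarray(method_a) - np.asarray(method_b)
    mean_d, sd_d = d.mean(), d.std(ddof=1)
    return mean_d, sd_d, (mean_d - 1.96 * sd_d, mean_d + 1.96 * sd_d)

# Illustrative spherical-equivalent values in dioptres (not the study data).
subj_e = np.array([-1.25, 0.50, -2.00, 0.00, -0.75])
subj_c = np.array([-1.00, 0.25, -2.25, 0.25, -0.50])
mean_d, sd_d, loa = limits_of_agreement(subj_e, subj_c)
print(f"mean diff {mean_d:+.3f} D, SD {sd_d:.3f} D, 95% LoA {loa[0]:+.3f} to {loa[1]:+.3f} D")
```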
Modelling rogue waves through exact dynamical lump soliton controlled by ocean currents.
Kundu, Anjan; Mukherjee, Abhik; Naskar, Tapan
2014-04-08
Rogue waves are extraordinarily high and steep isolated waves, which appear suddenly in a calm sea and disappear equally fast. However, though rogue waves are localized surface waves, their theoretical models and experimental observations are available mostly in one dimension, with the majority of them admitting only limited and fixed amplitude and modular inclination of the wave. We propose a two-dimensional, exactly solvable nonlinear Schrödinger (NLS) equation derivable from the basic hydrodynamic equations and endowed with integrable structures. The proposed two-dimensional equation exhibits modulation instability and frequency correction induced by the nonlinear effect, with a directional preference, all of which can be determined through precise analytic results. The two-dimensional NLS equation also allows an exact lump soliton which can model a full-grown surface rogue wave with adjustable height and modular inclination. The lump soliton under the influence of an ocean current appears and disappears preceded by a hole state, with its dynamics controlled by the current term. These desirable properties make our exact model promising for describing ocean rogue waves.
Measurement of whole tire profile
NASA Astrophysics Data System (ADS)
Yang, Yongyue; Jiao, Wenguang
2010-08-01
In this paper, a precision measuring device is developed for obtaining the characteristic curve of a tire profile and its geometric parameters. It consists of a laser displacement measurement unit, a closed-loop precision two-dimensional coordinate table, a step motor control system and a fast data acquisition and analysis system. Based on laser trigonometry, a data map of the tire profile and the coordinate values of all points can be obtained through the corresponding data transformation. This device has a compact structure, convenient control, a simple hardware circuit design and a high measurement precision. Experimental results indicate that the measurement precision can meet the customer accuracy requirement of ±0.02 mm.
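The data transformation mentioned above, from stage coordinates plus laser-displacement readings to profile points, can be illustrated as follows. This is a hedged sketch under the assumption that the sensor measures a stand-off distance along a fixed line of sight; the function name, beam-angle parameter, and all numbers are illustrative, not taken from the paper.

```python
import numpy as np

def profile_points(stage_xy, standoff, beam_angle_rad):
    """Profile point = table position + measured stand-off along the beam direction.

    stage_xy: (N, 2) positions of the sensor on the coordinate table [mm]
    standoff: (N,) laser displacement readings [mm]
    """
    direction = np.array([np.cos(beam_angle_rad), np.sin(beam_angle_rad)])
    return np.asarray(stage_xy) + np.outer(np.asarray(standoff), direction)

# Toy usage: sensor pointing straight down (-y); all numbers invented.
stage = np.column_stack([np.linspace(0.0, 100.0, 5), np.full(5, 200.0)])
readings = np.array([150.2, 148.7, 147.9, 148.8, 150.1])
print(profile_points(stage, readings, beam_angle_rad=-np.pi / 2))
```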
High-precision multiband spectroscopy of ultracold fermions in a nonseparable optical lattice
NASA Astrophysics Data System (ADS)
Fläschner, Nick; Tarnowski, Matthias; Rem, Benno S.; Vogel, Dominik; Sengstock, Klaus; Weitenberg, Christof
2018-05-01
Spectroscopic tools are fundamental for the understanding of complex quantum systems. Here, we demonstrate high-precision multiband spectroscopy in a graphenelike lattice using ultracold fermionic atoms. From the measured band structure, we characterize the underlying lattice potential with a relative error of 1.2 ×10-3 . Such a precise characterization of complex lattice potentials is an important step towards precision measurements of quantum many-body systems. Furthermore, we explain the excitation strengths into different bands with a model and experimentally study their dependency on the symmetry of the perturbation operator. This insight suggests the excitation strengths as a suitable observable for interaction effects on the eigenstates.
Measuring the Spin Correlation of Nuclear Muon Capture in HELIUM-3.
NASA Astrophysics Data System (ADS)
McCracken, Dorothy Jill
1996-06-01
We have completed the first measurement of the spin correlation of nuclear muon capture in ³He: μ⁻ + ³He → ν_μ + ³H. From this spin correlation, we can extract the induced pseudoscalar form factor, F_p, of the weak charged nuclear current. This form factor is not well known experimentally. If nuclear muon capture were a purely leptonic weak interaction, the current would have no pseudoscalar coupling, and therefore F_p arises from QCD contributions. Since ³He is a fairly well understood system, a precise measurement of F_p could provide a direct test of the theories which describe QCD at low energies. This experiment was performed at TRIUMF in Vancouver, BC, using a muon beam. We stopped unpolarized muons in a laser-polarized target filled with ³He and Rb vapor. The muons were captured into atomic orbitals, forming muonic ³He which was then polarized via collisions with the optically pumped Rb vapor. When polarized muons undergo nuclear capture in ³He, the total capture rate is proportional to (1 + A_v P_v cos θ), where θ is the angle between the muon polarization and the triton recoil direction, P_v is the muon vector polarization and A_v is the vector analyzing power. The partially conserved axial current hypothesis (PCAC) predicts that A_v = 0.524 ± 0.006. Our measurement of A_v is in agreement with this prediction: A_v = 0.604 ± 0.093 (stat.) +0.112/−0.142 (syst.). This thesis will describe the design, construction, and operation of the device which simultaneously served as a polarized target and a gridded ion chamber. The ion chamber apparatus enabled us to identify recoil tritons as well as determine their direction of motion. The directional information was obtained by fitting the shapes of the pulses generated by the tritons. In addition, this thesis will describe in detail the analysis of these pulses which resulted in a measurement of the raw forward/backward asymmetry of the triton recoil direction. This asymmetry was measured to a precision of 11.5%. With the techniques employed in this experiment, a clear path exists to obtaining a precise measurement of the induced pseudoscalar coupling of the charged weak nuclear current. Plans for a future run, in which we will improve upon these techniques, are underway.
2017-01-01
Computational screening is a method to prioritize small-molecule compounds based on the structural and biochemical attributes built from ligand and target information. Previously, we have developed a scalable virtual screening workflow to identify novel multitarget kinase/bromodomain inhibitors. In the current study, we identified several novel N-[3-(2-oxo-pyrrolidinyl)phenyl]-benzenesulfonamide derivatives that scored highly in our ensemble docking protocol. We quantified the binding affinity of these compounds for BRD4(BD1) biochemically and generated cocrystal structures, which were deposited in the Protein Data Bank. As the docking poses obtained in the virtual screening pipeline did not align with the experimental cocrystal structures, we evaluated the predictions of their precise binding modes by performing molecular dynamics (MD) simulations. The MD simulations closely reproduced the experimentally observed protein–ligand cocrystal binding conformations and interactions for all compounds. These results suggest a computational workflow to generate experimental-quality protein–ligand binding models, overcoming limitations of docking results due to receptor flexibility and incomplete sampling, as a useful starting point for the structure-based lead optimization of novel BRD4(BD1) inhibitors. PMID:28884163
NASA Astrophysics Data System (ADS)
Wang, Bao-Zong; Lu, Yue-Hui; Sun, Wei; Chen, Shuai; Deng, Youjin; Liu, Xiong-Jun
2018-01-01
We propose a hierarchy of minimal optical Raman lattice schemes to pave the way for the experimental realization of high-dimensional spin-orbit (SO) couplings for ultracold atoms, including two-dimensional (2D) Dirac type, 2D Rashba type, and three-dimensional (3D) Weyl type. The proposed Dirac-type SO coupling exhibits precisely controllable high symmetry, for which a large topological phase region is predicted. The generation of the 2D Rashba and 3D Weyl types requires two sources of laser beams with frequencies differing by a factor of 2. Surprisingly, we find that 133Cs atoms provide an ideal candidate for the realization. A common and essential feature is the high controllability and the absence of any fine-tuning in the realization, and the resulting SO-coupled ultracold atoms have a long lifetime. In particular, a long-lived topological Bose gas with 2D Dirac SO coupling has been demonstrated in a follow-up experiment. These schemes substantially improve on current experimental accessibility and controllability, and open a realistic way to explore novel high-dimensional SO physics, particularly quantum many-body physics and quantum far-from-equilibrium dynamics with novel topology for ultracold atoms.
Precision investigations of nuclei and nucleons with the (e, e'γ) reaction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Papanicolas, C.N.; Ammons, E.A.; Cardman, L.S.
1988-11-20
Recent theoretical and experimental investigations of the (e, e'γ) reaction show that it provides a probe of unparalleled precision and selectivity. Experiments aimed towards the isolation of multipole form factors in mixed transitions, the study of continuum excitations in nuclei, and the measurement of the response of the proton are underway at several laboratories.
Precision Experiments with Ultraslow Muons
NASA Astrophysics Data System (ADS)
Mills, Allen P.
A source of ~10⁵ ultraslow muons (USM) per second (~0.2 eV energy spread and 40 mm source diameter) reported by Miyake et al., and the demonstration of 100 K thermal muonium in vacuum by Antognini et al., suggest possibilities for substantial improvements in the experimental precision of the muonium 1S-2S interval and the muon g-2 measurements.
Toward precision medicine in primary biliary cholangitis.
Carbone, Marco; Ronca, Vincenzo; Bruno, Savino; Invernizzi, Pietro; Mells, George F
2016-08-01
Primary biliary cholangitis is a chronic, cholestatic liver disease characterized by a heterogeneous presentation, symptomatology, disease progression and response to therapy. In contrast, clinical management and treatment of PBC is homogeneous with a 'one size fits all' approach. The evolving research landscape, with the emergence of the -omics field and the availability of large patient cohorts are creating a unique opportunity of translational epidemiology. Furthermore, several novel disease and symptom-modifying agents for PBC are currently in development. The time is therefore ripe for precision medicine in PBC. In this manuscript we describe the concept of precision medicine; review current approaches to risk-stratification in PBC, and speculate how precision medicine in PBC might develop in the near future. Copyright © 2016 Editrice Gastroenterologica Italiana S.r.l. Published by Elsevier Ltd. All rights reserved.
Ion current as a precise measure of the loading rate of a magneto-optical trap
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, W.; Bailey, K.; Lu, Z. -T.
2014-01-01
We have demonstrated that the ion current resulting from collisions between metastable krypton atoms in a magneto-optical trap can be used to precisely measure the trap loading rate. We measured both the ion current of the abundant isotope Kr-83 (isotopic abundance = 11%) and the single-atom counting rate of the rare isotope Kr-85 (isotopic abundance ~1 × 10⁻¹¹), and found the two quantities to be proportional at a precision level of 0.9%. This work results in a significant improvement in using the magneto-optical trap as an analytical tool for noble-gas isotope ratio measurements, and will benefit both atomic physics studies and applications in the earth sciences. (C) 2014 Optical Society of America
Femtosecond laser cataract surgery: technology and clinical practice.
Roberts, Timothy V; Lawless, Michael; Chan, Colin Ck; Jacobs, Mark; Ng, David; Bali, Shveta J; Hodge, Chris; Sutton, Gerard
2013-03-01
The recent introduction of femtosecond lasers to cataract surgery has generated much interest among ophthalmologists around the world. Laser cataract surgery integrates high-resolution anterior segment imaging systems with a femtosecond laser, allowing key steps of the procedure, including the primary and side-port corneal incisions, the anterior capsulotomy and fragmentation of the lens nucleus, to be performed with computer-guided laser precision. There is emerging evidence of reduced phacoemulsification time, better wound architecture and a more stable refractive result with femtosecond cataract surgery, as well as reports documenting an initial learning curve. This article will review the current state of technology and discuss our clinical experience. © 2012 The Authors. Clinical and Experimental Ophthalmology © 2012 Royal Australian and New Zealand College of Ophthalmologists.
Six-quark decays of the Higgs boson in supersymmetry with R-parity violation.
Carpenter, Linda M; Kaplan, David E; Rhee, Eun-Jung
2007-11-23
Both electroweak precision measurements and simple supersymmetric extensions of the standard model prefer a mass of the Higgs boson less than the experimental lower limit (on a standard-model-like Higgs boson) of 114 GeV. We show that supersymmetric models with R parity violation and baryon-number violation have a significant range of parameter space in which the Higgs boson dominantly decays to six jets. These decays are much more weakly constrained by current CERN LEP analyses and would allow for a Higgs boson mass near that of the Z. In general, lighter scalar quark and other superpartner masses are allowed. The Higgs boson would potentially be discovered at hadron colliders via the appearance of new displaced vertices.
NASA Astrophysics Data System (ADS)
Chen, Jian; Li, Peng; Song, Gangbing; Ren, Zhang
2017-01-01
The design of a super-capacitor-powered, shape-memory-alloy (SMA) actuated accumulator for blowout preventers (BOP) presented in this paper features several advantages over conventional hydraulic accumulators, including instant large-current drive, quick system response and elimination of the need for pressure conduits. However, the mechanical design introduced two challenges for control system design: the nonlinear nature of SMA actuators and the varying voltage provided by a super capacitor. A cerebellar model articulation controller (CMAC) feedforward plus PID controller was developed to compensate for these adverse effects. Experiments were conducted on a scaled-down model, and experimental results show that precision control can be achieved with the proposed configurations and algorithms.
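A minimal sketch of the feedforward-plus-PID structure described above is given below. It is illustrative only: the table-based feedforward is a crude stand-in for a real CMAC (which uses overlapping tilings and hashing), the class name and gains are invented, and no SMA or super-capacitor model is included.

```python
class FeedforwardPID:
    """PID feedback plus a crude table-based feedforward (CMAC-like stand-in)."""

    def __init__(self, kp, ki, kd, n_bins=32, lr=0.1, dt=0.01):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.dt, self.lr = dt, lr
        self.integral = 0.0
        self.prev_err = 0.0
        self.table = [0.0] * n_bins   # feedforward weights indexed by setpoint

    def _bin(self, setpoint):
        # Setpoint assumed normalized to [0, 1).
        i = int(setpoint * len(self.table))
        return min(max(i, 0), len(self.table) - 1)

    def update(self, setpoint, measurement):
        err = setpoint - measurement
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        feedback = self.kp * err + self.ki * self.integral + self.kd * deriv
        feedforward = self.table[self._bin(setpoint)]
        # Slowly shift effort from feedback to feedforward at this setpoint.
        self.table[self._bin(setpoint)] += self.lr * feedback
        return feedforward + feedback

ctrl = FeedforwardPID(kp=2.0, ki=0.5, kd=0.05)
print(ctrl.update(setpoint=0.6, measurement=0.55))
```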
Hancock, R
2018-04-01
The view of the cell nucleus as a crowded system of colloidal particles and of chromosomes as giant self-avoiding polymers is stimulating rapid advances in our understanding of its structure and activities, thanks to concepts and experimental methods from colloid, polymer, soft matter, and nano sciences and to increased computational power for simulating macromolecules and polymers. This review summarizes current understanding of some characteristics of the molecular environment in the nucleus, of how intranuclear compartments are formed, and of how the genome is highly but precisely compacted, and underlines the crucial, subtle, and sometimes unintuitive effects on structures and reactions of the entropic forces caused by the high concentration of macromolecules in the nucleus.
Allowing for crystalline structure effects in Geant4
Bagli, Enrico; Asai, Makoto; Dotti, Andrea; ...
2017-03-24
In recent years, the Geant4 toolkit for the Monte Carlo simulation of the interaction of radiation with matter has seen large growth in its diverse user community. A fundamental aspect of a successful physics experiment is the availability of a reliable and precise simulation code. Geant4 currently does not allow for the simulation of particle interactions with anything other than amorphous matter. To overcome this limitation, the GECO (GEant4 Crystal Objects) project developed a general framework for managing solid-state structures in the Geant4 kernel and validated it against experimental data. As a result, accounting for detailed geometrical structures allows, for example, the simulation of diffraction from crystal planes or the channeling of charged particles.
NASA Astrophysics Data System (ADS)
Wang, Shilong; Yin, Changchun; Lin, Jun; Yang, Yu; Hu, Xueyan
2016-03-01
Cooperative work of multiple magnetic transmitting sources is a new trend in the development of transient electromagnetic systems. The key is the synchronized shutdown of bipolar current waves in the inductive loads. In the past, it was difficult to use the constant clamping voltage technique to realize the synchronized shutdown of currents with different peak values. Based on the clamping voltage technique, we introduce a new control method with constant shutdown time. We use the rising time to control the shutdown time and use a low-voltage power source to control the peak current. From the viewpoint of the circuit energy loss, by taking the high-voltage capacitor bypass resistance and the capacitor of the passive snubber circuit into account, we establish the relationship between the rising time and the shutdown time. Since the switch is not ideal, we propose a new method to determine the shutdown time from the low voltage, the high voltage and the peak current. Experimental results show that adjustment of the current rising time can precisely control the value of the clamp voltage. When the rising time is fixed, the shutdown time is unchanged. The error in the shutdown time deduced from the energy consumption is less than 6%. The current-shutdown control method proposed in this paper can be used in the cooperative operation of borehole and ground transmitting systems.
Fang, Cheng; Butler, David Lee
2013-05-01
In this paper, an innovative method for CMM (Coordinate Measuring Machine) self-calibration is proposed. In contrast to conventional CMM calibration, which relies heavily on a high-precision reference standard such as a laser interferometer, the proposed calibration method is based on a low-cost artefact fabricated from commercially available precision ball bearings. By optimizing the mathematical model and rearranging the data sampling positions, the experimental process and data analysis can be simplified. In mathematical terms, the number of samples can be minimized by eliminating the redundant equations among those configured from the experimental data array. The section lengths of the artefact are measured at the arranged positions, from which an equation set can be configured to determine the measurement errors at the corresponding positions. With the proposed method, the equation set is short of one equation, which can be supplemented either by measuring the total length of the artefact with a higher-precision CMM or by calibrating the single-point error at the extreme position with a laser interferometer. In this paper, the latter is selected. With spline interpolation, the error compensation curve can be determined. To verify the proposed method, a simple calibration system was set up on a commercial CMM. Experimental results showed that, with the error compensation curve, the measurement uncertainty can be reduced to 50% of its original value.
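The final step, building an error-compensation curve by spline interpolation, might look like the following sketch. The positions and error values are invented placeholders; only the idea of fitting a spline through point-wise error estimates and subtracting it from subsequent readings is taken from the text.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Point-wise error estimates at a set of axis positions (all values invented).
positions = np.array([0.0, 100.0, 200.0, 300.0, 400.0])       # mm
errors = np.array([0.0, 1.2e-3, 2.0e-3, 1.5e-3, 0.4e-3])      # mm, from the equation set

compensation = CubicSpline(positions, errors)   # error-compensation curve

def corrected(reading_mm):
    """Subtract the interpolated systematic error from a raw CMM reading."""
    return reading_mm - compensation(reading_mm)

print(corrected(250.0))
```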
Lee, Jae-Won; Lim, Se-Ho; Kim, Moon-Key; Kang, Sang-Hoon
2015-12-01
We examined the precision of a computer-aided design/computer-aided manufacturing-engineered, manufactured, facebow-based surgical guide template (facebow wafer) by comparing it with a bite splint-type orthognathic computer-aided design/computer-aided manufacturing-engineered surgical guide template (bite wafer). We used 24 rapid prototyping (RP) models of the craniofacial skeleton with maxillary deformities. Twelve RP models each were used for the facebow wafer group and the bite wafer group (experimental group). Experimental maxillary orthognathic surgery was performed on the RP models of both groups. Errors were evaluated through comparisons with surgical simulations. We measured the minimum distances from 3 planes of reference to determine the vertical, lateral, and anteroposterior errors at specific measurement points. The measured errors were compared between experimental groups using a t test. There were significant intergroup differences in the lateral error when we compared the absolute values of the 3-D linear distance, as well as vertical, lateral, and anteroposterior errors between experimental groups. The bite wafer method exhibited little lateral error overall and little error in the anterior tooth region. The facebow wafer method exhibited very little vertical error in the posterior molar region. The clinical precision of the facebow wafer method did not significantly exceed that of the bite wafer method. Copyright © 2015 Elsevier Inc. All rights reserved.
Nuclear astrophysics in the laboratory and in the universe
DOE Office of Scientific and Technical Information (OSTI.GOV)
Champagne, A. E., E-mail: artc@physics.unc.edu; Iliadis, C.; Longland, R.
Nuclear processes drive stellar evolution, and so nuclear physics, stellar models and observations together allow us to describe the inner workings of stars and their life stories. Information on nuclear reaction rates and nuclear properties is a critical ingredient in addressing most questions in astrophysics, yet the nuclear database is often incomplete or lacking the needed precision. Direct measurements of astrophysically interesting reactions are necessary, and the experimental focus is on improving both sensitivity and precision. In the following, we review recent results and approaches taken at the Laboratory for Experimental Nuclear Astrophysics (LENA, http://research.physics.unc.edu/project/nuclearastro/Welcome.html).
Astrophysical S-factor for destructive reactions of lithium-7 in big bang nucleosynthesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Komatsubara, Tetsuro; Kwon, YoungKwan; Moon, JunYoung
One of the most prominent successes of the Big Bang model is the precise reproduction of the mass abundance ratio for ⁴He. In spite of this success, the abundances of lithium isotopes are still inconsistent between observations and calculations, which is known as the lithium abundance problem. Since the calculations were based on experimental reaction data together with theoretical estimations, more precise experimental measurements may improve our knowledge of Big Bang nucleosynthesis. To address one of the destruction processes of lithium-7, we have performed measurements of the cross sections of the ⁷Li(³He,p)⁹Be reaction.
NASA Astrophysics Data System (ADS)
Majumder, Tiku
2017-04-01
In recent decades, substantial experimental effort has centered on heavy (high-Z) atomic and molecular systems for atomic-physics-based tests of standard model physics, through (for example) measurements of atomic parity nonconservation and searches for permanent electric dipole moments. In all of this work, a crucial role is played by atomic theorists, whose accurate wave function calculations are essential in connecting experimental observables to tests of relevant fundamental physics parameters. At Williams College, with essential contributions from dozens of undergraduate students, we have pursued a series of precise atomic structure measurements in heavy metal atoms such as thallium, indium, and lead. These include measurements of hyperfine structure, transition amplitudes, and atomic polarizability. This work, involving diode lasers, heated vapor cells, and an atomic beam apparatus, has both tested the accuracy and helped guide the refinement of new atomic theory calculations. I will discuss a number of our recent experimental results, emphasizing the role played by students and the opportunities that have been afforded for research-training in this undergraduate environment. Work supported by Research Corporation, the NIST Precision Measurement Grants program, and the National Science Foundation.
Effect of Correlated Precision Errors on Uncertainty of a Subsonic Venturi Calibration
NASA Technical Reports Server (NTRS)
Hudson, S. T.; Bordelon, W. J., Jr.; Coleman, H. W.
1996-01-01
An uncertainty analysis performed in conjunction with the calibration of a subsonic venturi for use in a turbine test facility produced some unanticipated results that may have a significant impact in a variety of test situations. Precision uncertainty estimates using the preferred propagation techniques in the applicable American National Standards Institute/American Society of Mechanical Engineers standards were an order of magnitude larger than precision uncertainty estimates calculated directly from a sample of results (discharge coefficient) obtained at the same experimental set point. The differences were attributable to the effect of correlated precision errors, which previously have been considered negligible. An analysis explaining this phenomenon is presented. The article is not meant to document the venturi calibration, but rather to give a real example of results where correlated precision terms are important. The significance of the correlated precision terms could apply to many test situations.
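The effect described above can be reproduced with a small numerical example. The sketch below propagates precision limits into a ratio-type result, first assuming independent errors and then assuming fully correlated errors; the numbers are illustrative and are not the venturi data, but they show how correlated precision terms can nearly cancel in the result, which is why sample-based estimates can be far smaller than independent-error propagation suggests.

```python
import numpy as np

x, y = 10.0, 5.0                   # nominal measured values (illustrative)
px, py = 0.10, 0.05                # precision limits of x and y (1% of each)
dr_dx, dr_dy = 1.0 / y, -x / y**2  # sensitivity coefficients of the result r = x / y

# Standard propagation assuming independent precision errors.
var_uncorr = (dr_dx * px) ** 2 + (dr_dy * py) ** 2
# Adding the covariance term for fully correlated precision errors (rho = +1).
var_corr = var_uncorr + 2.0 * dr_dx * dr_dy * px * py

print(f"precision limit, errors independent:      {np.sqrt(var_uncorr):.4f}")
print(f"precision limit, errors fully correlated: {np.sqrt(max(var_corr, 0.0)):.4f}")
```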
NASA Technical Reports Server (NTRS)
Barton, Richard J.; Ni, David; Ngo, Phong
2010-01-01
Several prototype ultra-wideband (UWB) impulse-radio (IR) tracking systems are currently under development at NASA Johnson Space Center (JSC). These systems are being studied for use in tracking of Lunar/Mars rovers and astronauts during early exploration missions when satellite navigation systems (such as GPS) are not available. To date, the systems that have been designed and tested are intended only for two-dimensional location and tracking, but these designs can all be extended to three-dimensional tracking with only minor modifications and increases in complexity. In this presentation, we will briefly review the design and performance of two of the current 2-D systems: one designed specifically for short-range, extremely high-precision tracking (approximately 1-2 cm resolution) and the other designed specifically for much longer range tracking with less stringent precision requirements (1-2 m resolution). We will then discuss a new multi-purpose system design based on a simple UWB-IR architecture that can be deployed easily on a planetary surface to support arbitrary three-dimensional localization and tracking applications. We will discuss utilization of this system as an infrastructure to provide both short-range and long-range tracking and analyze the localization performance of the system in several different configurations. We will give theoretical performance bounds for some canonical system configurations and compare these performance bounds with both numerical simulations of the system as well as actual experimental system performance evaluations.
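As background for the localization analysis mentioned above, the sketch below shows the basic operation underlying range-based tracking: estimating a 2-D position from measured ranges to known anchors with a Gauss-Newton least-squares fit. It is a generic illustration, not the JSC system design, and the anchor layout and noise level are invented.

```python
import numpy as np

def locate(anchors, ranges, guess, iters=20):
    """Gauss-Newton least-squares fit of a 2-D position from anchor ranges."""
    p = np.array(guess, dtype=float)
    for _ in range(iters):
        diffs = p - anchors                    # (N, 2) vectors anchor -> estimate
        dists = np.linalg.norm(diffs, axis=1)  # predicted ranges
        jac = diffs / dists[:, None]           # d(range)/d(position)
        residual = ranges - dists
        p += np.linalg.lstsq(jac, residual, rcond=None)[0]
    return p

rng = np.random.default_rng(4)
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])  # m
true_pos = np.array([3.0, 7.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1) + 0.02 * rng.standard_normal(4)
print(locate(anchors, ranges, guess=[5.0, 5.0]))   # close to (3, 7)
```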
Ozgenel, Mehmet Cihat; Bal, Gungor; Uygun, Durmus
2017-03-01
This study presents a precise speed control method for Brushless Direct Current (BLDC) motors using an electronic tachogenerator (ETg) instead of an electro-mechanical tachogenerator. The most commonly used three-phase BLDC motors have three position sensors that provide rotor position data for commutation among the stator windings. These position sensors are usually Hall-effect sensors delivering binary-high and binary-low data as long as the motor rotates. The binary signals from the three Hall-effect sensors can be combined into an analogue rotor speed signal for closed-loop applications. The position sensor signals are 120 electrical degrees apart. Using electronic circuitry, the combination of position sensor signals is converted to an analogue signal that provides the input to a PI speed controller. To implement this, a frequency-to-voltage converter has been used in this study. The analogue speed signal is then evaluated as rotor speed data in comparison with the reference speed. Thus, an ETg system has been successfully implemented in place of an electro-mechanical tachogenerator for BLDC motor speed control. The proposed ETg has been tested under various speed conditions on an experimental setup. The tests performed and the results obtained show that the proposed low-cost speed feedback sub-system can be used effectively in BLDC motor drive systems. The validated method and designed sub-system are intended to lead to a new motor controller chip with speed feedback capability.
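The relationship the electronic tachogenerator exploits, from the combined Hall-sensor edge rate to rotor speed, can be sketched as follows. This assumes the usual three-sensor arrangement giving six edges per electrical revolution; the pole-pair count and edge frequency in the example are illustrative.

```python
def rotor_speed_rpm(edge_frequency_hz, pole_pairs):
    """Rotor speed from the combined Hall-sensor edge rate.

    Three Hall sensors give 6 edges per electrical revolution, so the
    electrical frequency is edge_frequency_hz / 6 and the mechanical
    frequency is that divided by the number of pole pairs.
    """
    electrical_rev_per_s = edge_frequency_hz / 6.0
    mechanical_rev_per_s = electrical_rev_per_s / pole_pairs
    return mechanical_rev_per_s * 60.0

print(rotor_speed_rpm(edge_frequency_hz=400.0, pole_pairs=4))  # 1000.0 rpm
```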
[Progress in precision medicine: a scientific perspective].
Wang, B; Li, L M
2017-01-10
Precision medicine is a new strategy for disease prevention and treatment that takes into account differences in genetics, environment and lifestyles among individuals and makes precise disease classification and diagnosis, so as to provide patients with personalized, targeted prevention and treatment. Large-scale population cohort studies are fundamental to precision medicine research and can produce the best evidence for precision medicine practice. Current criticisms of precision medicine mainly focus on the very small proportion of patients who benefit, the neglect of social determinants of health, and the possible waste of limited medical resources. In spite of this, precision medicine remains a most promising research area and may become a health care practice model in the future.
Mohan, Shalini V; Chang, Anne Lynn S
2014-06-01
Precision medicine and precision therapeutics is currently in its infancy with tremendous potential to improve patient care by better identifying individuals at risk for skin cancer and predict tumor responses to treatment. This review focuses on the Hedgehog signaling pathway, its critical role in the pathogenesis of basal cell carcinoma, and the emergence of targeted treatments for advanced basal cell carcinoma. Opportunities to utilize precision medicine are outlined, such as molecular profiling to predict basal cell carcinoma response to targeted therapy and to inform therapeutic decisions.
Cloutier, L; Pomar, C; Létourneau Montminy, M P; Bernier, J F; Pomar, J
2015-04-01
The implementation of precision feeding in growing-finishing facilities requires accurate estimates of the animals' nutrient requirements. The objectives of the current study were to validate a method for estimating the real-time individual standardized ileal digestible (SID) lysine (Lys) requirements of growing-finishing pigs and the ability of this method to estimate the Lys requirements of pigs with different feed intake and growth patterns. Seventy-five pigs from a terminal cross and 72 pigs from a maternal cross were used in two 28-day experimental phases beginning at 25.8 (±2.5) and 73.3 (±5.2) kg BW, respectively. Treatments were randomly assigned to pigs within each experimental phase according to a 2×4 factorial design in which the two genetic lines and four dietary SID Lys levels (70%, 85%, 100% and 115% of the requirements estimated by the factorial method developed for precision feeding) were the main factors. Individual pigs' Lys requirements were estimated daily using a factorial approach based on their feed intake, BW and weight gain patterns. From 25 to 50 kg BW, this method slightly underestimated the pigs' SID Lys requirements, given that maximum protein deposition and weight gain were achieved at 115% of SID Lys requirements. However, the best gain-to-feed ratio (G : F) was obtained at a level of 85% or more of the estimated Lys requirement. From 70 to 100 kg, the method adequately estimated the pigs' individual requirements, given that maximum performance was achieved at 100% of Lys requirements. Terminal line pigs ate more (P=0.04) during the first experimental phase and tended to eat more (P=0.10) during the second phase than the maternal line pigs, but both genetic lines had similar ADG and protein deposition rates during the two phases. The factorial method used in this study to estimate individual daily SID Lys requirements was able to accommodate the small genetic differences in feed intake, and it was concluded that this method can be used in precision feeding systems without adjustments. However, the method's ability to accommodate large genetic differences in feed intake and protein deposition patterns needs to be studied further.
NASA Astrophysics Data System (ADS)
Krogh-Madsen, Trine; Kold Taylor, Louise; Skriver, Anne D.; Schaffer, Peter; Guevara, Michael R.
2017-09-01
The transmembrane potential is recorded from small isopotential clusters of 2-4 embryonic chick ventricular cells spontaneously generating action potentials. We analyze the cycle-to-cycle fluctuations in the time between successive action potentials (the interbeat interval or IBI). We also convert an existing model of electrical activity in the cluster, which is formulated as a Hodgkin-Huxley-like deterministic system of nonlinear ordinary differential equations describing five individual ionic currents, into a stochastic model consisting of a population of ˜20 000 independently and randomly gating ionic channels, with the randomness being set by a real physical stochastic process (radio static). This stochastic model, implemented using the Clay-DeFelice algorithm, reproduces the fluctuations seen experimentally: e.g., the coefficient of variation (standard deviation/mean) of IBI is 4.3% in the model vs. the 3.9% average value of the 17 clusters studied. The model also replicates all but one of several other quantitative measures of the experimental results, including the power spectrum and correlation integral of the voltage, as well as the histogram, Poincaré plot, serial correlation coefficients, power spectrum, detrended fluctuation analysis, approximate entropy, and sample entropy of IBI. The channel noise from one particular ionic current (IKs), which has channel kinetics that are relatively slow compared to that of the other currents, makes the major contribution to the fluctuations in IBI. Reproduction of the experimental coefficient of variation of IBI by adding a Gaussian white noise-current into the deterministic model necessitates using an unrealistically high noise-current amplitude. Indeed, a major implication of the modelling results is that, given the wide range of time-scales over which the various species of channels open and close, only a cell-specific stochastic model that is formulated taking into consideration the widely different ranges in the frequency content of the channel-noise produced by the opening and closing of several different types of channels will be able to reproduce precisely the various effects due to membrane noise seen in a particular electrophysiological preparation.
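The channel-population idea described above can be illustrated with a minimal stochastic simulation. The sketch below is not the Clay-DeFelice algorithm or the five-current model: it steps a population of independent two-state channels with fixed opening and closing rates (all values illustrative) and reports the coefficient of variation of the open fraction, just to show how finite channel numbers generate fluctuations.

```python
import numpy as np

rng = np.random.default_rng(1)
n_channels = 20_000
alpha, beta = 5.0, 15.0     # opening / closing rates (1/s), illustrative values
dt = 1e-4                   # time step (s)
steps = 5_000               # 0.5 s of simulated time

# Start each channel near its steady-state open probability alpha/(alpha+beta).
open_state = rng.random(n_channels) < alpha / (alpha + beta)
open_fraction = np.empty(steps)
for t in range(steps):
    r = rng.random(n_channels)
    opening = (~open_state) & (r < alpha * dt)   # closed channels that open
    closing = open_state & (r < beta * dt)       # open channels that close
    open_state = (open_state | opening) & ~closing
    open_fraction[t] = open_state.mean()

cv = open_fraction.std() / open_fraction.mean()
print(f"mean open fraction {open_fraction.mean():.3f}, coefficient of variation {cv:.4f}")
```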
How precise are reported protein coordinate data?
Konagurthu, Arun S; Allison, Lloyd; Abramson, David; Stuckey, Peter J; Lesk, Arthur M
2014-03-01
Atomic coordinates in the Worldwide Protein Data Bank (wwPDB) are generally reported to greater precision than the experimental structure determinations have actually achieved. By using information theory and data compression to study the compressibility of protein atomic coordinates, it is possible to quantify the amount of randomness in the coordinate data and thereby to determine the realistic precision of the reported coordinates. On average, the value of each C(α) coordinate in a set of selected protein structures solved at a variety of resolutions is good to about 0.1 Å.
Precision phase estimation based on weak-value amplification
NASA Astrophysics Data System (ADS)
Qiu, Xiaodong; Xie, Linguo; Liu, Xiong; Luo, Lan; Li, Zhaoxue; Zhang, Zhiyou; Du, Jinglei
2017-02-01
In this letter, we propose a precision method for phase estimation based on the weak-value amplification (WVA) technique using a monochromatic light source. The anomalous WVA significantly suppresses the technical noise with respect to the intensity difference signal induced by the phase delay when the post-selection procedure comes into play. The phase-measurement precision of this method is proportional to the weak value of a polarization operator in the experimental range. Our results compete well with wide-spectrum-light weak-measurement phase schemes and outperform the standard homodyne phase detection technique.
Lee, Yoojin; Callaghan, Martina F; Nagy, Zoltan
2017-01-01
In magnetic resonance imaging, precise measurement of the longitudinal relaxation time (T1) is crucial for acquiring useful information that is applicable to numerous clinical and neuroscience applications. In this work, we investigated the precision of the T1 relaxation time as measured using the variable flip angle method, with emphasis on the noise propagated from radiofrequency transmit field (B1+) measurements. The analytical solution for the T1 precision was derived by standard error propagation methods incorporating the noise from the three input sources: two spoiled gradient echo (SPGR) images and a B1+ map. Repeated in vivo experiments were performed to estimate the total variance in the T1 maps, and we compared these experimentally obtained values with the theoretical predictions to validate the established theoretical framework. Both the analytical and experimental results showed that variance in the B1+ map propagated noise levels into the T1 maps comparable to either of the two SPGR images. Improving the precision of the B1+ measurements significantly reduced the variance in the estimated T1 map. The variance estimated from the repeatedly measured in vivo T1 maps agreed well with the theoretically calculated variance in the T1 estimates, thus validating the analytical framework for realistic in vivo experiments. We concluded that for T1 mapping experiments, the error propagated from the B1+ map must be considered. Optimizing the SPGR signals while neglecting to improve the precision of the B1+ map may result in grossly overestimating the precision of the estimated T1 values.
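A rough way to see how the noise sources enter the variable-flip-angle estimate is a Monte Carlo propagation, sketched below. It assumes the standard two-flip-angle SPGR signal model and a linearized (DESPOT1-style) T1 estimate; the signals are generated at the true flip angles while the fit uses angles scaled by a noisy B1 estimate. All parameter and noise values are illustrative, not those of the study.

```python
import numpy as np

def spgr(m0, t1, tr, alpha):
    """Spoiled gradient echo signal for flip angle alpha (rad)."""
    e1 = np.exp(-tr / t1)
    return m0 * np.sin(alpha) * (1 - e1) / (1 - e1 * np.cos(alpha))

def t1_two_point(s1, s2, a1, a2, tr):
    """Linearized two-point VFA estimate: y = E1*x + M0(1-E1), y = S/sin, x = S/tan."""
    y1, y2 = s1 / np.sin(a1), s2 / np.sin(a2)
    x1, x2 = s1 / np.tan(a1), s2 / np.tan(a2)
    e1 = (y2 - y1) / (x2 - x1)
    return -tr / np.log(e1)

rng = np.random.default_rng(2)
tr, t1_true, m0 = 0.02, 1.0, 1.0               # s, s, arbitrary units
a1, a2 = np.deg2rad(4.0), np.deg2rad(20.0)     # nominal flip angles
s1, s2 = spgr(m0, t1_true, tr, a1), spgr(m0, t1_true, tr, a2)

sigma_s, sigma_b1 = 5e-4, 0.01                 # assumed noise levels (illustrative)
trials = 10_000
b1 = 1.0 + sigma_b1 * rng.standard_normal(trials)   # noisy B1 estimate (true B1 = 1)
s1n = s1 + sigma_s * rng.standard_normal(trials)
s2n = s2 + sigma_s * rng.standard_normal(trials)
t1_est = np.array([t1_two_point(s1n[i], s2n[i], b1[i] * a1, b1[i] * a2, tr)
                   for i in range(trials)])
t1_est = t1_est[np.isfinite(t1_est) & (t1_est > 0)]   # drop non-physical draws
print(f"T1 estimate: mean {t1_est.mean():.3f} s, SD {t1_est.std():.3f} s")
```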
Elliott, Mark A; du Bois, Naomi
2017-01-01
From the point of view of the cognitive dynamicist the organization of brain circuitry into assemblies defined by their synchrony at particular (and precise) oscillation frequencies is important for the correct correlation of all independent cortical responses to the different aspects of a given complex thought or object. From the point of view of anyone operating complex mechanical systems, i.e., those comprising independent components that are required to interact precisely in time, it follows that the precise timing of such a system is essential - not only essential but measurable, and scalable. It must also be reliable over observations to bring about consistent behavior, whatever that behavior is. The catastrophic consequence of an absence of such precision, for instance that required to govern the interference engine in many automobiles, is indicative of how important timing is for the function of dynamical systems at all levels of operation. The dynamics and temporal considerations combined indicate that it is necessary to consider the operating characteristic of any dynamical, cognitive brain system in terms, superficially at least, of oscillation frequencies. These may, themselves, be forensic of an underlying time-related taxonomy. Currently there are only two sets of relevant and necessarily systematic observations in this field: one of these reports the precise dynamical structure of the perceptual systems engaged in dynamical binding across form and time; the second, derived both empirically from perceptual performance data, as well as obtained from theoretical models, demonstrates a timing taxonomy related to a fundamental operator referred to as the time quantum. In this contribution both sets of theory and observations are reviewed and compared for their predictive consistency. Conclusions about direct comparability are discussed for both theories of cognitive dynamics and time quantum models. Finally, a brief review of some experimental data measuring sensitivity to visual information presented to the visual blind field (blindsight), as well as from studies of temporal processing in autism and schizophrenia, indicates that an understanding of a precise and metrical dynamic structure may be very important for an operational understanding of perception as well as more general cognitive function in psychopathology.
Chen, Guoli; Yang, Zhaohai; Eshleman, James R; Netto, George J; Lin, Ming-Tseh
2016-01-01
Precision medicine, a concept that has recently emerged and has been widely discussed, emphasizes tailoring medical care to individuals largely based on information acquired from molecular diagnostic testing. As a vital aspect of precision cancer medicine, targeted therapy has been proven to be efficacious and less toxic for cancer treatment. Colorectal cancer (CRC) is one of the most common cancers and among the leading causes for cancer related deaths in the United States and worldwide. By far, CRC has been one of the most successful examples in the field of precision cancer medicine, applying molecular tests to guide targeted therapy. In this review, we summarize the current guidelines for anti-EGFR therapy, revisit the roles of pathologists in an era of precision cancer medicine, demonstrate the transition from traditional "one test-one drug" assays to multiplex assays, especially by using next-generation sequencing platforms in the clinical diagnostic laboratories, and discuss the future perspectives of tumor heterogeneity associated with anti-EGFR resistance and immune checkpoint blockage therapy in CRC.
Prospects for Precision Neutrino Cross Section Measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, Deborah A.
2016-01-28
The need for precision cross section measurements is more urgent now than ever before, given the central role neutrino oscillation measurements play in the field of particle physics. The definition of precision is something worth considering, however. In order to build the best model for an oscillation experiment, cross section measurements should span a broad range of energies, neutrino interaction channels, and target nuclei. Precision might better be defined not in the final uncertainty associated with any one measurement but rather with the breadth of measurements that are available to constrain models. Current experience shows that models are better constrained by 10 measurements across different processes and energies with 10% uncertainties than by one measurement of one process on one nucleus with a 1% uncertainty. This article describes the current status of and future prospects for the field of precision cross section measurements considering the metric of how many processes, energies, and nuclei have been studied.
Precision measurements of linear scattering density using muon tomography
NASA Astrophysics Data System (ADS)
Åström, E.; Bonomi, G.; Calliari, I.; Calvini, P.; Checchia, P.; Donzella, A.; Faraci, E.; Forsberg, F.; Gonella, F.; Hu, X.; Klinger, J.; Sundqvist Ökvist, L.; Pagano, D.; Rigoni, A.; Ramous, E.; Urbani, M.; Vanini, S.; Zenoni, A.; Zumerle, G.
2016-07-01
We demonstrate that muon tomography can be used to precisely measure the properties of various materials. The materials which have been considered have been extracted from an experimental blast furnace, including carbon (coke) and iron oxides, for which measurements of the linear scattering density relative to the mass density have been performed with an absolute precision of 10%. We report the procedures that are used in order to obtain such precision, and a discussion is presented to address the expected performance of the technique when applied to heavier materials. The results we obtain do not depend on the specific type of material considered and therefore they can be extended to any application.
Decomposing the permeability spectra of nanocrystalline finemet core
NASA Astrophysics Data System (ADS)
Varga, Lajos K.; Kovac, Jozef
2018-04-01
In this paper we present a theoretical and experimental investigation of the magnetization contributions to the permeability spectra of a normally annealed Finemet core with a round-type hysteresis curve. Real and imaginary parts of the permeability were determined as a function of exciting magnetic field (HAC) over the frequency range 40 Hz to 110 MHz using an Agilent 4294A Precision Impedance Analyzer. The amplitude of the exciting field was below and around the coercive field of the sample. The spectra were decomposed, using the Levenberg-Marquardt algorithm running under Origin 9 software, into four contributions: i) eddy current; ii) Debye relaxation of magnetization rotation; iii) Debye relaxation of damped domain wall motion; and iv) resonant-type DW motion. For small exciting amplitudes the first two components dominate. The last two contributions, connected to the DW, appear only for relatively large HAC, around the coercive force. All the contributions will be discussed in detail, accentuating the role of the eddy-current contribution, which is not negligible even for the smallest applied exciting field.
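As an illustration of this kind of decomposition, the sketch below fits a synthetic complex permeability spectrum to a sum of two Debye relaxations and one resonant domain-wall term with a Levenberg-Marquardt solver; the model form, the parameter values, and the omission of an explicit eddy-current term are simplifying assumptions, not the authors' exact procedure.

    import numpy as np
    from scipy.optimize import least_squares

    def model(params, omega):
        chi1, tau1, chi2, tau2, chi3, w0, beta = params
        debye1 = chi1 / (1 + 1j * omega * tau1)                           # magnetization rotation
        debye2 = chi2 / (1 + 1j * omega * tau2)                           # damped domain-wall motion
        resonant = chi3 * w0**2 / (w0**2 - omega**2 + 1j * beta * omega)  # resonant DW motion
        return 1 + debye1 + debye2 + resonant                             # complex relative permeability

    def residuals(params, omega, mu_meas):
        mu = model(params, omega)
        return np.concatenate([mu.real - mu_meas.real, mu.imag - mu_meas.imag])

    # synthetic "measured" spectrum standing in for the impedance-analyzer data
    omega = 2 * np.pi * np.logspace(2, 8, 200)
    true = [2e4, 1e-6, 5e3, 1e-5, 1e3, 2 * np.pi * 5e6, 1e7]
    mu_meas = model(true, omega) * (1 + 0.01 * np.random.randn(omega.size))

    guess = [1e4, 5e-7, 1e3, 5e-6, 5e2, 2 * np.pi * 1e6, 5e6]
    fit = least_squares(residuals, guess, args=(omega, mu_meas), method='lm')
    print(fit.x)   # fitted amplitudes, relaxation times and resonance parameters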
NASA Astrophysics Data System (ADS)
Larsson, Anders; Gustavsson, Johan S.
The only active transverse mode in a truly single-mode VCSEL is the fundamental mode with a near Gaussian field distribution. A single-mode VCSEL produces a light beam of higher spectral purity, higher degree of coherence and lower divergence than a multimode VCSEL and the beam can be more precisely shaped and focused to a smaller spot. Such beam properties are required in many applications. In this chapter, after discussing applications of single-mode VCSELs, we introduce the basics of fields and modes in VCSELs and review designs implemented for single-mode emission from VCSELs in different materials and at different wavelengths. This includes VCSELs that are inherently single-mode as well as inherently multimode VCSELs where higher-order modes are suppressed by mode selective gain or loss. In each case we present the current state-of-the-art and discuss pros and cons. At the end, a specific example with experimental results is provided and, as a summary, the most promising designs based on current technologies are identified.
Giving cosmic redshift drift a whirl
NASA Astrophysics Data System (ADS)
Kim, Alex G.; Linder, Eric V.; Edelstein, Jerry; Erskine, David
2015-03-01
Redshift drift provides a direct kinematic measurement of cosmic acceleration but it occurs with a characteristic time scale of a Hubble time. Thus redshift observations with a challenging precision of 10^-9 require a 10-year time span to obtain a signal-to-noise of 1. We discuss theoretical and experimental approaches to address this challenge, potentially requiring less observer time and having greater immunity to common systematics. On the theoretical side we explore allowing the universe, rather than the observer, to provide long time spans; speculative methods include radial baryon acoustic oscillations, cosmic pulsars, and strongly lensed quasars. On the experimental side, we explore beating down the redshift precision using differential interferometric techniques, including externally dispersed interferometers and spatial heterodyne spectroscopy. Low-redshift emission line galaxies are identified as having high cosmology leverage and systematics control, with an 8 h exposure on a 10-m telescope (1000 h of exposure on a 40-m telescope) potentially capable of measuring the redshift of a galaxy to a precision of 10^-8 (few ×10^-10). Low-redshift redshift drift also has very strong complementarity with cosmic microwave background measurements, with the combination achieving a dark energy figure of merit of nearly 300 (1400) for 5% (1%) precision on drift.
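For orientation, the expected size of the signal can be checked with the standard drift relation dz/dt = (1+z)H0 - H(z); the cosmological parameters below (H0 = 70 km/s/Mpc, Omega_m = 0.3, flat LambdaCDM) are illustrative assumptions rather than values taken from the article.

    import numpy as np

    H0 = 70e3 / 3.086e22        # Hubble constant in 1/s
    Om, OL = 0.3, 0.7

    def drift_per_decade(z, years=10.0):
        """Observed redshift change over `years`, from dz/dt = (1+z)*H0 - H(z)."""
        E = np.sqrt(Om * (1 + z) ** 3 + OL)   # H(z)/H0
        dzdt = H0 * ((1 + z) - E)             # per second of observer time
        return dzdt * years * 3.156e7

    for z in (0.3, 1.0, 4.0):
        print(z, drift_per_decade(z))  # magnitudes of order 1e-10 to 1e-9 per decade (sign changes with z)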
Recent Advances and Future Prospects in Fundamental Symmetries
NASA Astrophysics Data System (ADS)
Plaster, Brad
2017-09-01
A broad program of initiatives in fundamental symmetries seeks answers to several of the most pressing open questions in nuclear physics, ranging from the scale of the neutrino mass, to the particle-antiparticle nature of the neutrino, to the origin of the matter-antimatter asymmetry, to the limits of Standard Model interactions. Although the experimental program is quite broad, with efforts ranging from precision measurements of neutrino properties; to searches for electric dipole moments; to precision measurements of magnetic dipole moments; and to precision measurements of couplings, particle properties, and decays; all of these seemingly disparate initiatives are unified by several common threads. These include the use and exploitation of symmetry principles, novel cross-disciplinary experimental work at the forefront of the precision frontier, and the need for accompanying breakthroughs in development of the theory necessary for an interpretation of the anticipated results from these experiments. This talk will highlight recent accomplishments and advances in fundamental symmetries and point to the extraordinary level of ongoing activity aimed at realizing the development and interpretation of next-generation experiments. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics, under Award Number DE-SC-0014622.
Alternative, Green Processes for the Precision Cleaning of Aerospace Hardware
NASA Technical Reports Server (NTRS)
Maloney, Phillip R.; Grandelli, Heather Eilenfield; Devor, Robert; Hintze, Paul E.; Loftin, Kathleen B.; Tomlin, Douglas J.
2014-01-01
Precision cleaning is necessary to ensure the proper functioning of aerospace hardware, particularly those systems that come in contact with liquid oxygen or hypergolic fuels. Components that have not been cleaned to the appropriate levels may experience problems ranging from impaired performance to catastrophic failure. Traditionally, this has been achieved using various halogenated solvents. However, as information on the toxicological and/or environmental impacts of each came to light, they were subsequently regulated out of use. The solvent currently used in Kennedy Space Center (KSC) precision cleaning operations is Vertrel MCA. Environmental sampling at KSC indicates that continued use of this or similar solvents may lead to high remediation costs that must be borne by the Program for years to come. In response to this problem, the Green Solvents Project seeks to develop state-of-the-art, green technologies designed to meet KSC's precision cleaning needs. Initially, 23 solvents were identified as potential replacements for the current Vertrel MCA-based process. Highly halogenated solvents were deliberately omitted since historical precedents indicate that as the long-term consequences of these solvents become known, they will eventually be regulated out of practical use, often with significant financial burdens for the user. Three solvent-less cleaning processes (plasma, supercritical carbon dioxide, and carbon dioxide snow) were also chosen since they produce essentially no waste stream. Next, experimental and analytical procedures were developed to compare the relative effectiveness of these solvents and technologies to the current KSC standard of Vertrel MCA. Individually numbered Swagelok fittings were used to represent the hardware in the cleaning process. First, the fittings were cleaned using Vertrel MCA in order to determine their true cleaned mass. Next, the fittings were dipped into stock solutions of five commonly encountered contaminants and were weighed again, showing typical contaminant deposition levels of approximately 0.00300 g per part. They were then cleaned by the solvent or process being tested and then weighed a third time, which allowed for the calculation of the cleaning efficiency of the test solvent or process. Based on preliminary experiments, five solvents (ethanol, isopropanol, acetone, ethyl acetate, and tert-butyl acetate) were down-selected for further testing. When coupled with ultrasonic agitation, these solvents removed hydrocarbon contaminants as well as Vertrel MCA and showed improved removal of perfluorinated greases. Supercritical carbon dioxide did an excellent job dissolving each of the five contaminants but did a poor job of removing Teflon particles found in the perfluorinated greases. Plasma cleaning efficiency was found to be dependent on which supply gas was used, exposure time, and gas pressure. Under optimized conditions it was found that breathing air, energized to the plasma phase, was able to remove nearly 100% of the contamination. These findings indicate that alternative cleaning methods are indeed able to achieve precision levels of cleanliness. Currently, our team is working with a commercial cleaning company to get independent verification of our results. We are also evaluating the technical and financial aspects of scaling these processes to a size capable of supporting the future cleaning needs of KSC.
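The gravimetric bookkeeping behind the quoted cleaning efficiencies can be sketched as below; the function name and the three example masses are hypothetical, chosen only to mirror the ~0.00300 g deposition level mentioned above.

    def cleaning_efficiency(m_clean, m_contaminated, m_after):
        """Fraction of the deposited contaminant removed by the test process."""
        deposited = m_contaminated - m_clean   # ~0.003 g per part in the study
        removed = m_contaminated - m_after
        return removed / deposited

    # hypothetical weighings in grams: cleaned, contaminated, after test cleaning
    print(cleaning_efficiency(m_clean=10.00000, m_contaminated=10.00300, m_after=10.00015))  # ~0.95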
NASA Astrophysics Data System (ADS)
Cook, Eryn C.
Casimir and Casimir-Polder effects are forces between electrically neutral bodies and particles in vacuum, arising entirely from quantum fluctuations. The modification to the vacuum electromagnetic-field modes imposed by the presence of any particle or surface can result in these mechanical forces, which are often the dominant interaction at small separations. These effects play an increasingly critical role in the operation of micro- and nano-mechanical systems as well as miniaturized atomic traps for precision sensors and quantum-information devices. Despite their fundamental importance, calculations present theoretical and numeric challenges, and precise atom-surface potential measurements are lacking in many geometric and distance regimes. The spectroscopic measurement of Casimir-Polder-induced energy level shifts in optical-lattice trapped atoms offers a new experimental method to probe atom-surface interactions. Strontium, the current front-runner among optical frequency metrology systems, has demonstrated characteristics ideal for such precision measurements. An alkaline earth atom possessing ultra-narrow intercombination transitions, strontium can be loaded into an optical lattice at the "magic" wavelength where the probe transition is unperturbed by the trap light. Translation of the lattice will permit controlled transport of tightly-confined atomic samples to well-calibrated atom-surface separations, while optical transition shifts serve as a direct probe of the Casimir-Polder potential. We have constructed a strontium magneto-optical trap (MOT) for future Casimir-Polder experiments. This thesis will describe the strontium apparatus, initial trap performance, and some details of the proposed measurement procedure.
Andresen, G B; Ashkezari, M D; Baquero-Ruiz, M; Bertsche, W; Bowe, P D; Butler, E; Cesar, C L; Chapman, S; Charlton, M; Deller, A; Eriksson, S; Fajans, J; Friesen, T; Fujiwara, M C; Gill, D R; Gutierrez, A; Hangst, J S; Hardy, W N; Hayden, M E; Humphries, A J; Hydomako, R; Jenkins, M J; Jonsell, S; Jørgensen, L V; Kurchaninov, L; Madsen, N; Menary, S; Nolan, P; Olchanski, K; Olin, A; Povilus, A; Pusa, P; Robicheaux, F; Sarid, E; el Nasr, S Seif; Silveira, D M; So, C; Storey, J W; Thompson, R I; van der Werf, D P; Wurtele, J S; Yamazaki, Y
2010-12-02
Antimatter was first predicted in 1931, by Dirac. Work with high-energy antiparticles is now commonplace, and anti-electrons are used regularly in the medical technique of positron emission tomography scanning. Antihydrogen, the bound state of an antiproton and a positron, has been produced at low energies at CERN (the European Organization for Nuclear Research) since 2002. Antihydrogen is of interest for use in a precision test of nature's fundamental symmetries. The charge conjugation/parity/time reversal (CPT) theorem, a crucial part of the foundation of the standard model of elementary particles and interactions, demands that hydrogen and antihydrogen have the same spectrum. Given the current experimental precision of measurements on the hydrogen atom (about two parts in 10^14 for the frequency of the 1s-to-2s transition), subjecting antihydrogen to rigorous spectroscopic examination would constitute a compelling, model-independent test of CPT. Antihydrogen could also be used to study the gravitational behaviour of antimatter. However, so far experiments have produced antihydrogen that is not confined, precluding detailed study of its structure. Here we demonstrate trapping of antihydrogen atoms. From the interaction of about 10^7 antiprotons and 7 × 10^8 positrons, we observed 38 annihilation events consistent with the controlled release of trapped antihydrogen from our magnetic trap; the measured background is 1.4 ± 1.4 events. This result opens the door to precision measurements on anti-atoms, which can soon be subjected to the same techniques as developed for hydrogen.
Lux, Robert L.; Sower, Christopher Todd; Allen, Nancy; Etheridge, Susan P.; Tristani-Firouzi, Martin; Saarel, Elizabeth V.
2014-01-01
Background Precise measurement of the QT interval is often hampered by difficulty determining the end of the low amplitude T wave. Root mean square electrocardiography (RMS ECG) provides a novel alternative measure of ventricular repolarization. Experimental data have shown that the interval between the RMS ECG QRS and T wave peaks (RTPK) closely reflects the mean ventricular action potential duration while the RMS T wave width (TW) tracks the dispersion of repolarization timing. Here, we tested the precision of RMS ECG to assess ventricular repolarization in humans in the setting of drug-induced and congenital Long QT Syndrome (LQTS). Methods RMS ECG signals were derived from high-resolution 24 hour Holter monitor recordings from 68 subjects after receiving placebo and moxifloxacin and from standard 12 lead ECGs obtained in 97 subjects with LQTS and 97 age- and sex-matched controls. RTPK, QTRMS and RMS TW intervals were automatically measured using custom software and compared to traditional QT measures using lead II. Results All measures of repolarization were prolonged during moxifloxacin administration and in LQTS subjects, but the variance of RMS intervals was significantly smaller than traditional lead II measurements. TW was prolonged during moxifloxacin and in subjects with LQT-2, but not LQT-1 or LQT-3. Conclusion These data validate the application of RMS ECG for the detection of drug-induced and congenital LQTS. RMS ECG measurements are more precise than the current standard of care lead II measurements. PMID:24454918
Rusu, Cristian; Morisi, Rita; Boschetto, Davide; Dharmakumar, Rohan; Tsaftaris, Sotirios A.
2014-01-01
This paper aims to identify approaches that generate appropriate synthetic data (computer generated) for Cardiac Phase-resolved Blood-Oxygen-Level-Dependent (CP–BOLD) MRI. CP–BOLD MRI is a new contrast agent- and stress-free approach for examining changes in myocardial oxygenation in response to coronary artery disease. However, since signal intensity changes are subtle, rapid visualization is not possible with the naked eye. Quantifying and visualizing the extent of disease relies on myocardial segmentation and registration to isolate the myocardium and establish temporal correspondences and ischemia detection algorithms to identify temporal differences in BOLD signal intensity patterns. If transmurality of the defect is of interest pixel-level analysis is necessary and thus a higher precision in registration is required. Such precision is currently not available affecting the design and performance of the ischemia detection algorithms. In this work, to enable algorithmic developments of ischemia detection irrespective to registration accuracy, we propose an approach that generates synthetic pixel-level myocardial time series. We do this by (a) modeling the temporal changes in BOLD signal intensity based on sparse multi-component dictionary learning, whereby segmentally derived myocardial time series are extracted from canine experimental data to learn the model; and (b) demonstrating the resemblance between real and synthetic time series for validation purposes. We envision that the proposed approach has the capacity to accelerate development of tools for ischemia detection while markedly reducing experimental costs so that cardiac BOLD MRI can be rapidly translated into the clinical arena for the noninvasive assessment of ischemic heart disease. PMID:24691119
Wang, Li; Wang, Xiaoyi; Jin, Xuebo; Xu, Jiping; Zhang, Huiyan; Yu, Jiabin; Sun, Qian; Gao, Chong; Wang, Lingbin
2017-03-01
Current methods describe the formation process of algae inaccurately and predict water blooms with low precision. In this paper, the chemical mechanism of algae growth is analyzed, and a correlation analysis of chlorophyll-a and algal density is conducted by chemical measurement. Taking into account the influence of multiple factors on algae growth and water blooms, a comprehensive prediction method combining multivariate time series analysis with intelligent models is put forward. First, the main factors that affect the reproduction of algae through photosynthesis are analyzed. A compensation prediction method for multivariate time series analysis, based on a neural network and a Support Vector Machine, is combined with Kernel Principal Component Analysis to reduce the dimensionality of the factors influencing blooms. A Genetic Algorithm is then applied to improve the generalization ability of the BP network and the Least Squares Support Vector Machine. Experimental results show that this method better compensates the multivariate time series prediction model and is an effective way to improve the accuracy with which algae growth is described and the precision with which water blooms are predicted.
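A rough sketch of such a pipeline is given below, using scikit-learn stand-ins: KernelPCA for the dimension-reduction step and an RBF support-vector regressor in place of the GA-tuned BP network and least-squares SVM; the synthetic data and settings are illustrative only.

    import numpy as np
    from sklearn.decomposition import KernelPCA
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 8))                                       # water-quality factors (temperature, TN, TP, ...)
    y = 0.5 * X[:, 0] + np.sin(X[:, 1]) + 0.1 * rng.normal(size=200)    # stand-in for chlorophyll-a / bloom indicator

    model = make_pipeline(StandardScaler(),
                          KernelPCA(n_components=3, kernel='rbf'),      # dimension reduction of influence factors
                          SVR(kernel='rbf', C=10.0))                    # nonlinear regressor
    model.fit(X[:150], y[:150])
    print(model.score(X[150:], y[150:]))                                # predictive R^2 on held-out samples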
Higgs mass from D-terms: a litmus test
NASA Astrophysics Data System (ADS)
Cheung, Clifford; Roberts, Hannes L.
2013-12-01
We explore supersymmetric theories in which the Higgs mass is boosted by the non-decoupling D-terms of an extended U(1)X gauge symmetry, defined here to be a general linear combination of hypercharge, baryon number, and lepton number. Crucially, the gauge coupling, gX, is bounded from below to accommodate the Higgs mass, while the quarks and leptons are required by gauge invariance to carry non-zero charge under U(1)X. This induces an irreducible rate, σBR, for pp → X → ℓℓ relevant to existing and future resonance searches, and gives rise to higher-dimension operators that are stringently constrained by precision electroweak measurements. Combined, these bounds define a maximally allowed region in the space of observables, (σBR, mX), outside of which is excluded by naturalness and experimental limits. If natural supersymmetry utilizes non-decoupling D-terms, then the associated X boson can only be observed within this window, providing a model-independent 'litmus test' for this broad class of scenarios at the LHC. Comparing limits, we find that current LHC results only exclude regions in parameter space which were already disfavored by precision electroweak data.
A method to accelerate creation of plasma etch recipes using physics and Bayesian statistics
NASA Astrophysics Data System (ADS)
Chopra, Meghali J.; Verma, Rahul; Lane, Austin; Willson, C. G.; Bonnecaze, Roger T.
2017-03-01
Next generation semiconductor technologies like high density memory storage require precise 2D and 3D nanopatterns. Plasma etching processes are essential to achieving the nanoscale precision required for these structures. Current plasma process development methods rely primarily on iterative trial and error or factorial design of experiment (DOE) to define the plasma process space. Here we evaluate the efficacy of the software tool Recipe Optimization for Deposition and Etching (RODEo) against standard industry methods at determining the process parameters of a high density O2 plasma system with three case studies. In the first case study, we demonstrate that RODEo is able to predict etch rates more accurately than a regression model based on a full factorial design while using 40% fewer experiments. In the second case study, we demonstrate that RODEo performs significantly better than a full factorial DOE at identifying optimal process conditions to maximize anisotropy. In the third case study we experimentally show how RODEo maximizes etch rates while using half the experiments of a full factorial DOE method. With enhanced process predictions and more accurate maps of the process space, RODEo reduces the number of experiments required to develop and optimize plasma processes.
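The contrast between a factorial scan and a sequential, model-guided search can be sketched with a simple Gaussian-process upper-confidence-bound loop; the response surface, settings, and acquisition rule below are invented for illustration and do not reproduce the RODEo algorithm.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def etch_rate(power, pressure):                  # hypothetical response surface
        return np.exp(-((power - 0.7) ** 2 + (pressure - 0.3) ** 2) / 0.1)

    grid = np.array([[p, q] for p in np.linspace(0, 1, 4) for q in np.linspace(0, 1, 4)])  # 16-run factorial

    rng = np.random.default_rng(1)
    X = rng.uniform(size=(4, 2))                     # 4 initial experiments
    y = np.array([etch_rate(*x) for x in X])
    candidates = rng.uniform(size=(500, 2))
    for _ in range(6):                               # 6 adaptive experiments
        gp = GaussianProcessRegressor(kernel=RBF(0.3)).fit(X, y)
        mu, sd = gp.predict(candidates, return_std=True)
        x_next = candidates[np.argmax(mu + sd)]      # simple upper-confidence-bound pick
        X = np.vstack([X, x_next])
        y = np.append(y, etch_rate(*x_next))

    print(len(grid), y.max())   # factorial grid size vs best etch rate found with 10 adaptive runs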
Peptide Identification by Database Search of Mixture Tandem Mass Spectra*
Wang, Jian; Bourne, Philip E.; Bandeira, Nuno
2011-01-01
In high-throughput proteomics, the development of computational methods and the design of novel experimental strategies often rely on each other. In certain areas, mass spectrometry methods for data acquisition are ahead of computational methods to interpret the resulting tandem mass spectra. In particular, although there are numerous situations in which a mixture tandem mass spectrum can contain fragment ions from two or more peptides, nearly all database search tools still make the assumption that each tandem mass spectrum comes from one peptide. Common examples include mixture spectra from co-eluting peptides in complex samples, spectra generated from data-independent acquisition methods, and spectra from peptides with complex post-translational modifications. We propose a new database search tool (MixDB) that is able to identify mixture tandem mass spectra from more than one peptide. We show that peptides can be reliably identified with up to 95% accuracy from mixture spectra while considering only 0.01% of all possible peptide pairs (four orders of magnitude speedup). Comparison with current database search methods indicates that our approach has better or comparable sensitivity and precision at identifying single-peptide spectra while simultaneously being able to identify 38% more peptides from mixture spectra at significantly higher precision. PMID:21862760
Men and mice: Relating their ages.
Dutta, Sulagna; Sengupta, Pallav
2016-05-01
Since the late 18th century, the murine model has been widely used in biomedical research (about 59% of total animals used) as it is compact, cost-effective, and easily available, conserving almost 99% of human genes and physiologically resembling humans. Despite the similarities, mice have a diminutive lifespan compared to humans. In this study, we found that one human year is equivalent to nine mouse days, although this does not hold when the entire lifespans of mice and humans are compared at once, without considering each life phase separately. Therefore, the precise correlation of age at every point in their lifespan must be determined. Determining the age relation between mice and humans is necessary for setting up experimental murine models more analogous in age to humans. Thus, more accuracy can be obtained in the research outcome for humans of a specific age group, although current outcomes are based on mice of an approximate age. To fill this gap between approximation and accuracy, this review article is the first to establish a precise relation between mouse age and human age, following our previous article, which explained the age relation between laboratory rats and humans in detail. Copyright © 2015 Elsevier Inc. All rights reserved.
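Taken at face value, the overall ratio quoted above gives a one-line conversion; the sketch below is purely illustrative, and the article itself argues that phase-by-phase factors are more accurate than a single constant.

    # Toy conversion based on the quoted overall ratio (one human year ~ nine mouse days).
    def mouse_days_from_human_years(years, days_per_human_year=9.0):
        return years * days_per_human_year

    print(mouse_days_from_human_years(40))   # ~360 mouse days, i.e. a roughly one-year-old mouse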
NASA Astrophysics Data System (ADS)
Ding, Xiang; Li, Fei; Zhang, Jiyan; Liu, Wenli
2016-10-01
Raman spectrometers are usually calibrated periodically to ensure the accuracy of their Raman shift measurements. A combination of a monocrystalline silicon chip and a low-pressure discharge lamp is proposed as a candidate reference standard for Raman shift. A high-precision calibration technique is developed to accurately determine the standard value of the silicon's Raman shift around 520 cm^-1. The technique is described and illustrated by measuring a silicon chip against three atomic spectral lines of a neon lamp. A commercial Raman spectrometer is employed and its Raman shift error characteristics are investigated. Error sources are evaluated based on theoretical analysis and experiments, including the sample factor, the instrumental factor, the laser factor, and random factors. Experimental results show that the expanded uncertainty of the silicon's Raman shift around 520 cm^-1 can reach 0.3 cm^-1 (k=2), which is more accurate than most currently used reference materials. The results are validated by comparison measurements between three Raman spectrometers. It is shown that the technique can markedly enhance the accuracy of Raman shift measurements, making it possible to use the silicon chip and the lamp to calibrate Raman spectrometers.
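The wavenumber arithmetic underlying such a calibration is simple; the sketch below assumes a 532 nm excitation line (not stated in the abstract) purely for illustration.

    def raman_shift_cm1(lambda_laser_nm, lambda_scattered_nm):
        """Raman shift in cm^-1 from absolute wavelengths in nm."""
        return 1e7 / lambda_laser_nm - 1e7 / lambda_scattered_nm

    # Silicon's ~520 cm^-1 band excited at 532 nm appears near 547 nm:
    print(raman_shift_cm1(532.0, 547.1))   # ~519 cm^-1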
Precision medicine needs pioneering clinical bioinformaticians.
Gómez-López, Gonzalo; Dopazo, Joaquín; Cigudosa, Juan C; Valencia, Alfonso; Al-Shahrour, Fátima
2017-10-25
Success in precision medicine depends on accessing high-quality genetic and molecular data from large, well-annotated patient cohorts that couple biological samples to comprehensive clinical data, which in conjunction can lead to effective therapies. From such a scenario emerges the need for a new professional profile, an expert bioinformatician with training in clinical areas who can make sense of multi-omics data to improve therapeutic interventions in patients, and the design of optimized basket trials. In this review, we first describe the main policies and international initiatives that focus on precision medicine. Secondly, we review the currently ongoing clinical trials in precision medicine, introducing the concept of 'precision bioinformatics', and we describe current pioneering bioinformatics efforts aimed at implementing tools and computational infrastructures for precision medicine in health institutions around the world. Thirdly, we discuss the challenges related to the clinical training of bioinformaticians, and the urgent need for computational specialists capable of assimilating medical terminologies and protocols to address real clinical questions. We also propose some skills required to carry out common tasks in clinical bioinformatics and some tips for emergent groups. Finally, we explore the future perspectives and the challenges faced by precision medicine bioinformatics. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Measuring currents in submarine canyons: technological and scientific progress in the past 30 years
Xu, J. P.
2011-01-01
The development and application of acoustic and optical technologies and of accurate positioning systems in the past 30 years have opened new frontiers in the submarine canyon research communities. This paper reviews several key advancements in both technology and science in the field of currents in submarine canyons since the 1979 publication of Currents in Submarine Canyons and Other Sea Valleys by Francis Shepard and colleagues. Precise placements of high-resolution, high-frequency instruments have not only allowed researchers to collect new data that are essential for advancing and generalizing theories governing the canyon currents, but have also revealed new natural phenomena that challenge the understandings of the theorists and experimenters in their predictions of submarine canyon flow fields. Baroclinic motions at tidal frequencies, found to be intensified both up canyon and toward the canyon floor, dominate the flow field and control the sediment transport processes in submarine canyons. Turbidity currents are found to frequently occur in active submarine canyons such as Monterey Canyon. These turbidity currents have maximum speeds of nearly 200 cm/s, much smaller than the speeds of turbidity currents in geological time, but still very destructive. In addition to traditional Eulerian measurements, Lagrangian flow data are essential in quantifying water and sediment transport in submarine canyons. A concerted experiment with multiple monitoring stations along the canyon axis and on nearby shelves is required to characterize the storm-trigger mechanism for turbidity currents.
2013-01-01
Background Recent research indicates a high recall in Google Scholar searches for systematic reviews. These reports raised high expectations of Google Scholar as a unified and easy-to-use search interface. However, studies on the coverage of Google Scholar rarely used the search interface in a realistic way but instead merely checked for the existence of gold standard references. In addition, the severe limitations of the Google Search interface must be taken into consideration when comparing with professional literature retrieval tools. The objectives of this work are to measure the relative recall and precision of searches with Google Scholar under conditions which are derived from structured search procedures conventional in scientific literature retrieval; and to provide an overview of current advantages and disadvantages of the Google Scholar search interface in scientific literature retrieval. Methods General and MEDLINE-specific search strategies were retrieved from 14 Cochrane systematic reviews. Cochrane systematic review search strategies were translated into Google Scholar search expressions as faithfully as possible while preserving the original search semantics. The references of the included studies from the Cochrane reviews were checked for their inclusion in the result sets of the Google Scholar searches. Relative recall and precision were calculated. Results We investigated Cochrane reviews with between 11 and 70 included references, 396 in total. The Google Scholar searches returned result sets of between 4,320 and 67,800 hits, 291,190 in total. The relative recall of the Google Scholar searches had a minimum of 76.2% and a maximum of 100% (7 searches). The precision of the Google Scholar searches had a minimum of 0.05% and a maximum of 0.92%. The overall relative recall for all searches was 92.9%; the overall precision was 0.13%. Conclusion The reported relative recall must be interpreted with care. It is a quality indicator of Google Scholar confined to an experimental setting which is unavailable in systematic retrieval due to the severe limitations of the Google Scholar search interface. Currently, Google Scholar does not provide necessary elements for systematic scientific literature retrieval such as tools for incremental query optimization, export of a large number of references, a visual search builder or a history function. Google Scholar is not ready as a professional searching tool for tasks where structured retrieval methodology is necessary. PMID:24160679
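For reference, the two metrics reported above reduce to simple ratios; the counts in the sketch below are invented and merely echo the order of magnitude of the reported figures.

    def relative_recall(found_gold, all_gold):
        """Fraction of a review's included references returned by the search."""
        return len(found_gold) / len(all_gold)

    def precision(found_gold, n_hits):
        """Fraction of all returned hits that are included references."""
        return len(found_gold) / n_hits

    gold = set(range(30))              # references included in a Cochrane review (hypothetical)
    retrieved_gold = set(range(28))    # those also present in the Google Scholar result set
    print(relative_recall(retrieved_gold, gold))   # 0.93
    print(precision(retrieved_gold, 21000))        # ~0.0013, i.e. about 0.13%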
Why should we care about the top quark Yukawa coupling?
Shaposhnikov, Mikhail; Bezrukov, Fedor
2015-04-15
In the cosmological context, for the Standard Model to be valid up to the scale of inflation, the top quark Yukawa coupling y_t should not exceed the critical value y_t^crit, which coincides to good precision (about 0.2‰) with the requirement of stability of the electroweak vacuum. Exact measurements of y_t may therefore give insight into the possible existence and the energy scale of new physics above 100 GeV, which is extremely sensitive to y_t. In this study, we overview the most recent theoretical computations of y_t^crit and the experimental measurements of y_t. Within the theoretical and experimental uncertainties in y_t, the required scale of new physics varies from 10^7 GeV to the Planck scale, urging a precise determination of the top quark Yukawa coupling.
Mass Measurements beyond the Major r-Process Waiting Point 80Zn
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baruah, S.; Herlert, A.; Schweikhard, L.
2008-12-31
High-precision mass measurements on neutron-rich zinc isotopes 71m,72-81Zn have been performed with the Penning trap mass spectrometer ISOLTRAP. For the first time, the mass of 81Zn has been experimentally determined. This makes 80Zn the first of the few major waiting points along the path of the astrophysical rapid neutron-capture process where neutron-separation energy and neutron-capture Q-value are determined experimentally. The astrophysical conditions required for this waiting point and its associated abundance signatures to occur in r-process models can now be mapped precisely. The measurements also confirm the robustness of the N=50 shell closure for Z=30.
Johnson, M L; Halvorson, H R; Ackers, G K
1976-11-30
Resolution of the linkage functions between oxygenation and subunit association-dissociation equilibria in human hemoglobin into the constituent microscopic terms has been explored by numerical simulation and least-squares analysis. The correlation properties between parameters have been studied using several choices of parameter sets in order to optimize resolution. It is found that, with currently available levels of experimental precision and ranges of variables, neither linkage function can provide sufficient resolution of all the desired energy terms. The most difficult quantities to resolve always include the dimer-tetramer association constant for unliganded hemoglobin and the oxygen binding constants to alphabeta dimers. A feasible experimental strategy for overcoming these difficulties lies in independent determination of the dimer-tetramer association constants for unliganded and fully oxygenated hemoglobin. These constants, in combination with the median ligand concentration, provide an estimate of the energy for total oxygenation of tetramers which is essentially independent of the other constituent energies. It is shown that if these separately determinable parameters are fixed, the remaining terms may be estimated to good accuracy using data which represent either linkage function. In general it is desirable to combine information from both types of experimental quantities. A previous paper (Mills, F.C., Johnson, M.L., and Ackers, G.K. (1976), Biochemistry, 15, the preceding paper in this issue) describes the experimental implementation of this strategy.
Morgan, Gilberto; Aftimos, Philippe; Awada, Ahmad
2016-09-01
Precision oncology has been a strategy of prevention, screening, and treatment. Although much has been invested, have the results so far fallen short of the promise? The advancement of technology and research has opened new doors, yet a variety of pitfalls are present. This review presents the successes, failures, and opportunities of precision oncology in the current landscape. The use of targeted gene sequencing and the overwhelming results of superresponders have generated much excitement and support for precision oncology from the medical community. Despite notable successes, many challenges still stand in the way of precision oncology: intratumoral heterogeneity, the need for serial biopsies, availability of treatments, target prioritization, ethical issues with germline incidental findings, medical education, clinical trial design, and costs. Precision oncology shows much potential through the use of next-generation sequencing and molecular advances, but does this potential warrant the investment? There are many obstacles in the way of this technology that should make us question whether the investment (both monetary and man-hours) will live up to the promise. The review aims not to criticize this technology, but to give a realistic view of where we are, especially regarding cancer treatment and prevention.
Scott, Jessica A; Hoffmeister, Robert J
2018-04-01
Academic English is an essential literacy skill area for success in post-secondary education and in many work environments. Despite its importance, academic English is understudied with deaf and hard of hearing (DHH) students. Nascent research in this area suggests that academic English, alongside American Sign Language (ASL) fluency, may play an important role in the reading proficiency of DHH students in middle and high school. The current study expands this research to investigate academic English by examining student proficiency with a sub-skill of academic writing called superordinate precision, the taxonomical categorization of a term. Currently there is no research that examines DHH students' proficiency with superordinate precision. Middle and high school DHH students enrolled in bilingual schools for the deaf were assessed on their ASL proficiency, academic English proficiency, reading comprehension, and use of superordinate precision in definitions writing. Findings indicate that student use of superordinate precision in definitions writing was correlated with ASL proficiency, reading comprehension, and academic English proficiency. It is possible that degree of mastery of superordinate precision may indicate a higher overall level of proficiency with academic English. This may have important implications for assessment of and instruction in academic English literacy.
Defining Action Levels for In Vivo Dosimetry in Intraoperative Electron Radiotherapy.
López-Tarjuelo, Juan; Morillo-Macías, Virginia; Bouché-Babiloni, Ana; Ferrer-Albiach, Carlos; Santos-Serra, Agustín
2016-06-01
In vivo dosimetry is recommended in intraoperative electron radiotherapy (IOERT). To perform real-time treatment monitoring, action levels (ALs) have to be calculated. Empirical approaches based on observation of samples have been reported previously; however, our aim is to present a predictive model for calculating ALs and to verify their validity with our experimental data. We considered the range of absorbed doses delivered to our detector by means of the percentage depth dose for the electron beams used. Then, we calculated the absorbed dose histograms and convoluted them with detector responses to obtain probability density functions, in order to find ALs as certain probability levels. Our in vivo dosimeters were reinforced TN-502RDM-H mobile metal-oxide-semiconductor field-effect transistors (MOSFETs). Our experimental data came from 30 measurements carried out in patients undergoing IOERT for rectal, breast, sarcoma, and pancreas cancers, among others. The prescribed dose to the tumor bed was 90%, and the maximum absorbed dose was 100%. The theoretical mean absorbed dose was 90.3% and the measured mean was 93.9%. Associated confidence intervals at P = .05 were 89.2%-91.4% and 91.6%-96.4%, respectively. With regard to individual comparisons between the model and the experiment, 37% of MOSFET measurements lay outside particular ranges defined by the derived ALs. Calculated confidence intervals at P = .05 ranged from 8.6% to 14.7%. The model can describe global results successfully but cannot match all the experimental data reported. In terms of accuracy, this suggests a possible underestimation of the effects of tumor bed bleeding or detector alignment. In terms of precision, it will be necessary to reduce positioning uncertainties for a wide set of location and treatment postures, and more precise detectors will be required. Planning and imaging tools currently under development will play a fundamental role. © The Author(s) 2015.
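The construction of action levels described above can be sketched by Monte Carlo: sample an assumed absorbed-dose distribution, add a Gaussian detector response, and read off percentiles; the uniform dose range and the 2% response spread below are illustrative assumptions, not the published model.

    import numpy as np

    rng = np.random.default_rng(0)
    dose = rng.uniform(88.0, 100.0, size=100_000)      # % of maximum dose seen by the detector (assumed)
    detector = rng.normal(0.0, 2.0, size=dose.size)    # MOSFET response spread in %, assumed sigma
    reading = dose + detector                          # convolution performed by Monte Carlo sampling

    lower, upper = np.percentile(reading, [2.5, 97.5])  # action levels at P = .05
    print(lower, upper)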
Experimental setup for precise measurement of losses in high-temperature superconducting transformer
NASA Astrophysics Data System (ADS)
Janu, Z.; Wild, J.; Repa, P.; Jelinek, Z.; Zizek, F.; Peksa, L.; Soukup, F.; Tichy, R.
2006-10-01
A simple cryogenic system for testing a superconducting power transformer was constructed. Thermal shielding is provided by an additional liquid nitrogen bath instead of super-insulation. The system, together with the use of a precise liquid nitrogen level meter, permitted calorimetric measurements of losses of the 8 kVA HTS transformer with a resolution of the order of 0.1 W.
della Croce, U; Cappozzo, A; Kerrigan, D C
1999-03-01
Human movement analysis using stereophotogrammetry is based on the reconstruction of the instantaneous laboratory position of selected bony anatomical landmarks (AL). For this purpose, knowledge of an AL's position in relevant bone-embedded frames is required. Because ALs are not points but relatively large and curved areas, their identification by palpation or other means is subject to both intra- and inter-examiner variability. In addition, the local position of ALs, as reconstructed using an ad hoc experimental procedure (AL calibration), is affected by photogrammetric errors. The intra- and inter-examiner precision with which local positions of pelvis and lower limb palpable bony ALs can be identified and reconstructed was experimentally assessed. Six examiners and two subjects participated in the study. Intra- and inter-examiner precision (RMS distance from the mean position) was in the range 6-21 mm and 13-25 mm, respectively. Propagation of the imprecision of ALs to the orientation of bone-embedded anatomical frames and to hip, knee and ankle joint angles was assessed. Results showed that this imprecision may cause distortion in joint angle versus time curves to the extent that information relative to angular movements in the range of 10 degrees or lower may be concealed. Bone geometry parameters estimated using the same data showed that the relevant precision does not allow for reliable bone geometry description. These findings, together with those relative to skin movement artefacts reported elsewhere, raise the human movement analyst's awareness of the possible limitations involved in 3D movement analysis using stereophotogrammetry and call for improvements of the relevant experimental protocols.
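The precision metric used above (RMS distance from the mean reconstructed position) can be written compactly; the sketch below assumes landmark positions stored as an (n_trials, 3) array and uses made-up coordinates.

    import numpy as np

    def rms_from_mean(positions):
        """RMS distance (same units as input) of repeated AL reconstructions from their mean."""
        p = np.asarray(positions, dtype=float)
        d = np.linalg.norm(p - p.mean(axis=0), axis=1)
        return np.sqrt(np.mean(d ** 2))

    print(rms_from_mean([[0, 0, 0], [10, 5, 0], [-6, 2, 8]]))   # ~7.9 (mm, for mm inputs)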
High-precision half-life determination for 21Na using a 4 π gas-proportional counter
NASA Astrophysics Data System (ADS)
Finlay, P.; Laffoley, A. T.; Ball, G. C.; Bender, P. C.; Dunlop, M. R.; Dunlop, R.; Hackman, G.; Leslie, J. R.; MacLean, A. D.; Miller, D.; Moukaddam, M.; Olaizola, B.; Severijns, N.; Smith, J. K.; Southall, D.; Svensson, C. E.
2017-08-01
A high-precision half-life measurement for the superallowed β+ transition between the isospin T = 1/2 mirror nuclei 21Na and 21Ne has been performed at the TRIUMF-ISAC radioactive ion beam facility yielding T1/2 = 22.4506(33) s, a result that is a factor of 4 more precise than the previous world-average half-life for 21Na and represents the single most precisely determined half-life for a transition between mirror nuclei to date. The contribution to the uncertainty in the 21Na Ft mirror value due to the half-life is now reduced to the level of the nuclear-structure-dependent theoretical corrections, leaving the branching ratio as the dominant experimental uncertainty.
Self-Stabilizing Measurement of Phase
NASA Astrophysics Data System (ADS)
Vinjanampathy, Sai
2014-05-01
Measuring phase accurately constitutes one of the most important tasks in precision measurement science. Such measurements can be deployed to determine everything from fundamental constants to the detuning and tunneling rates of atoms. Quantum mechanics enhances the ultimate bounds on the precision of such measurements, exploiting coherence and entanglement to reduce the phase uncertainty. In this work, we will describe a method to stabilize a decohering two-level atom and use the stabilizing measurements to learn the unknown phase acquired by the atom. Such measurements will employ a Bayesian learner to do active feedback control on the atom. We will discuss some ultimate bounds employed in precision metrology and an experimental proposal for the implementation of this scheme. Financial support from Ministry of Education, Singapore.
Alania, M; De Backer, A; Lobato, I; Krause, F F; Van Dyck, D; Rosenauer, A; Van Aert, S
2017-10-01
In this paper, we investigate how precisely atoms of a small nanocluster can ultimately be located in three dimensions (3D) from a tilt series of images acquired using annular dark field (ADF) scanning transmission electron microscopy (STEM). To this end, we derive an expression for the statistical precision with which the 3D atomic position coordinates can be estimated in a quantitative analysis. Evaluating this statistical precision as a function of the microscope settings also allows us to derive the optimal experimental design. In this manner, the optimal angular tilt range, required electron dose, optimal detector angles, and number of projection images can be determined. Copyright © 2016 Elsevier B.V. All rights reserved.
Optical nano artifact metrics using silicon random nanostructures
NASA Astrophysics Data System (ADS)
Matsumoto, Tsutomu; Yoshida, Naoki; Nishio, Shumpei; Hoga, Morihisa; Ohyagi, Yasuyuki; Tate, Naoya; Naruse, Makoto
2016-08-01
Nano-artifact metrics exploit unique physical attributes of nanostructured matter for authentication and clone resistance, which is vitally important in the age of the Internet-of-Things, where securing identities is critical. However, expensive and bulky experimental apparatus, such as scanning electron microscopes, has been required in previous studies. Herein, we demonstrate an optical approach to characterise the nanoscale-precision signatures of silicon random structures towards realising low-cost and high-value information security technology. Unique and versatile silicon nanostructures are generated via resist collapse phenomena, which contain dimensions well below the diffraction limit of light. We exploit the nanoscale precision of confocal laser microscopy in the height dimension; our experimental results demonstrate that the vertical precision of measurement is essential in satisfying the performances required for artifact metrics. Furthermore, by using state-of-the-art nanostructuring technology, we experimentally fabricate clones from the genuine devices. We demonstrate that the statistical properties of the genuine and clone devices are successfully exploited, showing that the liveness-detection-type approach, which is widely deployed in biometrics, is valid in artificially-constructed solid-state nanostructures. These findings pave the way for reasonable and yet sufficiently secure novel principles for information security based on silicon random nanostructures and optical technologies.
Power supply system for negative ion source at IPR
NASA Astrophysics Data System (ADS)
Gahlaut, Agrajit; Sonara, Jashwant; Parmar, K. G.; Soni, Jignesh; Bandyopadhyay, M.; Singh, Mahendrajit; Bansal, Gourab; Pandya, Kaushal; Chakraborty, Arun
2010-02-01
The first step in the Indian program on negative ion beams is the setting up of the Negative ion Experimental Assembly - RF based, where 100 kW of RF power shall be coupled to a plasma source producing plasma of density ~5 × 10^12 cm^-3, from which ~10 A of negative ion beam shall be produced and accelerated to 35 kV through an electrostatic ion accelerator. The experimental system is modelled on the RF based negative ion source, BATMAN, presently operating at IPP, Garching, Germany. The mechanical system for the Negative Ion Source Assembly is close to the IPP source; the remaining systems are designed and procured principally from indigenous sources, keeping the IPP configuration as a baseline. High voltage (HV) and low voltage (LV) power supplies are two key constituents of the experimental setup. The HV power supplies for extraction and acceleration are rated for high voltage (~15 to 35 kV) and high current (~15 to 35 A). Other attributes are a fast rate of voltage rise (< 5 ms), good regulation (< ±1%), low ripple (< ±2%), isolation (~50 kV), low energy content (< 10 J) and fast cut-off (< 100 μs). The low voltage (LV) supplies, required for biasing and for providing heating power to the Cesium oven and the plasma grids, have attributes of low ripple, high stability, fast and precise regulation, programmability and remote operation. These power supplies are also equipped with over-voltage, over-current and current limit (CC mode) protections. Fault diagnostics, to distinguish abnormal rises in current (breakdown faults) from over-currents, are enabled using a fast-response breakdown and over-current protection scheme. To restrict the fault energy deposited on the ion source, specially designed snubbers are implemented in each (extraction and acceleration) high voltage path to absorb the surge energy. Moreover, the monitoring status and control signals from these power supplies are required to be electrically (~50 kV) isolated from the system. The paper shall present the design basis, topology selection, manufacturing, testing, commissioning, integration and control strategy of these power supplies. A complete power interconnection scheme, which includes all protective devices and measuring devices, low and high voltage power supplies, monitoring and control signals etc., shall also be discussed. The paper also discusses the protocols involved in grounding and shielding, particularly in operating the system in an RF environment.
Feasibility of Air Levitated Surface Stage for Lithography Tool
NASA Astrophysics Data System (ADS)
Tanaka, Keiichi
The application of light-weight drive technology to the lithography stage has become the current state of the art because it minimizes power loss. The purpose of this article is to show that the so-called "surface stage", which is composed of a Lorentz-force 3-DOF (degree-of-freedom) planar motor (x, y and theta-z), an air levitation (bearing) system and a motor cooling system, is the most balanced concept for next-generation lithography, through verification of each component by manufacturing simple parts and a test stand. This paper presents the design method and procedure, and the experimental results, of the air-levitated surface stage; the work was conducted several years ago, but the author is convinced that the results remain applicable to various developments in precision machine tools.
Measurement of electrostatically formed antennas using photogrammetry and theodolites
NASA Technical Reports Server (NTRS)
Goslee, J. W.; Hinson, W. F.; Kennefick, J. F.; Mihora, D. J.
1984-01-01
An antenna concept is presently being evaluated which has extremely low mass and high surface precision for potential deployment from the Space Shuttle. This antenna concept derives its reflector surface quality from the application of electrostatic forces to tension and form a thin membrane into the desired concave reflector surface. The Shuttle-deployed antenna would have a diameter of 100 meters and an RMS surface smoothness of 10 to 1 mm for operation at 1 to 10 GHz. NASA Langley Research Center (LaRC) has built, and is currently testing, a subscale (1/20 scale) membrane reflector model of such an antenna. Several surface measurement systems were evaluated as part of the experimental surface measuring efforts. The surface measurement systems are addressed as well as some of the preliminary measurement results.
Griffiths, A; terHaar, G; Rivens, I; Giussani, D; Lees, C
2012-12-01
Although ultrasound is an essential investigative modality in obstetrics and gynecology, the potential for therapeutic high-intensity focused ultrasound (HIFU) (also referred to as focused ultrasound surgery, FUS) to offer an alternative to invasive surgery is less well known. The ability of HIFU to create discrete regions of tissue necrosis only in precisely targeted positions by careful placement of the focus, without the need for any surgical intervention, has made HIFU of interest to those seeking noninvasive alternatives to conventional abdominal surgery. This article reviews the current experimental and clinical experience with HIFU in obstetrics and gynecology, and outlines potential future applications in fetal medicine and the challenges faced in their development. © Georg Thieme Verlag KG Stuttgart · New York.
The Higgs properties in the MSSM after the LHC Run-2
NASA Astrophysics Data System (ADS)
Zhao, Jun
2018-04-01
We scrutinize the parameter space of the SM-like Higgs boson in the minimal supersymmetric standard model (MSSM) under current experimental constraints. The constraints are from (i) the precision electroweak data and various flavor observables; (ii) 22 separate direct ATLAS searches in Run-1; (iii) the latest LHC Run-2 Higgs data and the tri-lepton search for electroweakinos. We perform a scan over the parameter space and find that the Run-2 data can further exclude a part of the parameter space. Regarding the properties of the SM-like Higgs boson, its gauge couplings further approach the SM values, with a deviation below 0.1%, while its Yukawa couplings hbb̄ and hτ+τ- can still differ sizably from the SM predictions, by several tens of percent.
Multi-disciplinary methods to define RNA-protein interactions and regulatory networks.
Ascano, Manuel; Gerstberger, Stefanie; Tuschl, Thomas
2013-02-01
The advent of high-throughput technologies including deep-sequencing and protein mass spectrometry is facilitating the acquisition of large and precise data sets toward the definition of post-transcriptional regulatory networks. While early studies that investigated specific RNA-protein interactions in isolation laid the foundation for our understanding of the existence of molecular machines to assemble and process RNAs, there is a more recent appreciation of the importance of individual RNA-protein interactions that contribute to post-transcriptional gene regulation. The multitude of RNA-binding proteins (RBPs) and their many RNA targets has only been captured experimentally in recent times. In this review, we will examine current multidisciplinary approaches toward elucidating RNA-protein networks and their regulation. Copyright © 2013 Elsevier Ltd. All rights reserved.
Rheological behavior of magnetic powder mixtures for magnetic PIM
NASA Astrophysics Data System (ADS)
Kim, Sung Hun; Kim, See Jo; Park, Seong Jin; Mun, Jun Ho; Kang, Tae Gon; Park, Jang Min
2012-06-01
Powder injection molding (PIM) is a promising manufacturing technology for the net-shape production of small, complex, and precise metal or ceramic components. In order to manufacture high-quality magnets using PIM, the magneto-rheological (MR) properties of the PIM feedstock, i.e. the magnetic powder-binder mixture, should be investigated experimentally and theoretically. The current research aims at a comprehensive understanding of the rheological characteristics of the PIM feedstock. The feedstock used in the experiments consists of strontium ferrite powder and paraffin wax. Steady and oscillatory shear tests have been carried out using a plate-and-plate rheometer under an externally applied uniform magnetic field. Rheological properties of the PIM feedstock have been measured and characterized for various conditions by changing the temperature, the powder fraction and the magnetic flux density.
Magnesium Counteracts Vascular Calcification: Passive Interference or Active Modulation?
Ter Braake, Anique D; Shanahan, Catherine M; de Baaij, Jeroen H F
2017-08-01
Over the last decade, an increasing number of studies report a close relationship between serum magnesium concentration and cardiovascular disease risk in the general population. In end-stage renal disease, an association was found between serum magnesium and survival. Hypomagnesemia was identified as a strong predictor for cardiovascular disease in these patients. A substantial body of in vitro and in vivo studies has identified a protective role for magnesium in vascular calcification. However, the precise mechanisms and its contribution to cardiovascular protection remain unclear. There are currently 2 leading hypotheses: first, magnesium may bind phosphate and delay calcium phosphate crystal growth in the circulation, thereby passively interfering with calcium phosphate deposition in the vessel wall. Second, magnesium may regulate vascular smooth muscle cell transdifferentiation toward an osteogenic phenotype by active cellular modulation of factors associated with calcification. Here, the data supporting these major hypotheses are reviewed. The literature supports both a passive inorganic phosphate-buffering role reducing hydroxyapatite formation and an active cell-mediated role, directly targeting vascular smooth muscle transdifferentiation. However, current evidence relies on basic experimental designs that are often insufficient to delineate the underlying mechanisms. The field requires more advanced experimental design, including determination of intracellular magnesium concentrations and the identification of the molecular players that regulate magnesium concentrations in vascular smooth muscle cells. © 2017 American Heart Association, Inc.
Spin-charge coupled dynamics driven by a time-dependent magnetization
NASA Astrophysics Data System (ADS)
Tölle, Sebastian; Eckern, Ulrich; Gorini, Cosimo
2017-03-01
The spin-charge coupled dynamics in a thin, magnetized metallic system are investigated. The effective driving force acting on the charge carriers is generated by a dynamical magnetic texture, which can be induced, e.g., by a magnetic material in contact with a normal-metal system. We consider a general inversion-asymmetric substrate/normal-metal/magnet structure, which, by specifying the precise nature of each layer, can mimic various experimentally employed setups. Inversion symmetry breaking gives rise to an effective Rashba spin-orbit interaction. We derive general spin-charge kinetic equations which show that such spin-orbit interaction, together with anisotropic Elliott-Yafet spin relaxation, yields significant corrections to the magnetization-induced dynamics. In particular, we present a consistent treatment of the spin density and spin current contributions to the equations of motion, inter alia, identifying a term in the effective force which appears due to a spin current polarized parallel to the magnetization. This "inverse-spin-filter" contribution depends markedly on the parameter which describes the anisotropy in spin relaxation. To further highlight the physical meaning of the different contributions, the spin-pumping configuration of typical experimental setups is analyzed in detail. In the two-dimensional limit the buildup of dc voltage is dominated by the spin-galvanic (inverse Edelstein) effect. A measuring scheme that could isolate this contribution is discussed.
[Experimental studies of micromotor headpieces].
Kanaev, V F; Repin, V A
1982-01-01
Experimental studies of handpieces for micromotors have been performed to determine their operating parameters more precisely. A special test stand was used to measure the following data: head temperature, power losses in the handpieces at no load, and the operating power required for machining with spherical burrs. The experimental results made it possible to specify the range of handpiece rotational speeds more exactly and to select optimum loads for reliability testing.
NASA Astrophysics Data System (ADS)
Vitshas, A. A.; Zelentsov, A. G.; Lopota, V. A.; Menakhin, V. P.; Panchenko, V. P.; Soroka, A. M.
2014-02-01
The results of experimental and theoretical investigations aimed at determining the characteristics and features of precision slot cutting with a large number of calibers in sheets of low-carbon steel, using the radiation of a single-mode fiber laser with pulse power up to 1 kW, are presented. The experimental installation, the conditions of the investigations, and the variable parameters are described. Precision cutting of low-carbon steel up to 10 mm thick, with the number of calibers ranging from 30 to 70 at a slot width of ≈60 μm, is performed for the first time. Such cutting occurs only in the pulsed-periodic mode using single-mode radiation with a pulse duration of 2-3 ms, a pulse ratio of 2-4, and oxygen, whose influence differs in principle both among the various cut regions over the sheet thickness and from cutting with a CO₂ laser. The following quantities are measured: the cutting velocity (100-50 mm/min) of sheet steel up to 10 mm thick with deep channeling, the roughness parameters, the hardness of the cut surface, which only slightly (by ≈20%) exceeds that of untreated steel, the phase structure of the steel, and the scales of its variation inside the metal. The efficiency (≈3%) of precision cutting and the efficiency of transport of the radiation (25%) in large-caliber slot orifices in the "waveguide" mode are determined from the experimental data. The useful specific energy contribution of the laser radiation is w_l = N_l/(hbv) ≈ 2 × 10¹² J/m² for all studied sheet thicknesses, accurate to within 20%. A qualitative model of laser-oxygen precision cutting with deep channeling, which explains the cyclic and intermittent character of the cutting and the necessity of using oxygen as the cutting gas, is proposed.
Buchner, Lena; Güntert, Peter
2015-02-03
Nuclear magnetic resonance (NMR) structures are represented by bundles of conformers calculated from different randomized initial structures using identical experimental input data. The spread among these conformers indicates the precision of the atomic coordinates. However, there is as yet no reliable measure of structural accuracy, i.e., how close NMR conformers are to the "true" structure. Instead, the precision of structure bundles is widely (mis)interpreted as a measure of structural quality. Attempts to increase precision often produce tight bundles whose high precision greatly overstates their much lower accuracy. To overcome this problem, we introduce a protocol for NMR structure determination with the software package CYANA, which produces, like the traditional method, bundles of conformers in agreement with a common set of conformational restraints but with a realistic precision that is, throughout a variety of proteins and NMR data sets, a much better estimate of structural accuracy than the precision of conventional structure bundles. Copyright © 2015 Elsevier Ltd. All rights reserved.
Study on high-precision measurement of long radius of curvature
NASA Astrophysics Data System (ADS)
Wu, Dongcheng; Peng, Shijun; Gao, Songtao
2016-09-01
It is hard to achieve high-precision measurement of the radius of curvature (ROC) because many factors affect the measurement accuracy, and for the measurement of a long radius of curvature some factors are more important than others. This paper therefore first investigates which factors are related to the long measurement distance and analyses the uncertainty of the measurement accuracy. It then studies the influence of the support condition and of the adjustment error at the cat's eye and confocal positions. Finally, a convex surface with a 1055 micrometer radius of curvature is measured in a high-precision laboratory. Experimental results show that a proper steady support (three-point support) can guarantee high-precision measurement of the radius of curvature, and that calibrating the gain at the cat's eye and confocal positions helps to locate these positions precisely and thus increases the measurement accuracy. With this procedure, high-precision measurement of a long ROC is realized.
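A minimal sketch of the underlying geometry, in Python, may help; the symbols and the gauge-gain correction are illustrative assumptions, not taken from the paper. In an interferometric radius-of-curvature test, the ROC is essentially the stage travel between the cat's-eye and confocal (null) positions, which is why precisely locating those two positions dominates the accuracy:

    # Hypothetical stage positions in mm; gauge_gain models a calibrated scale factor.
    def radius_of_curvature(z_cats_eye_mm, z_confocal_mm, gauge_gain=1.0):
        """Return the ROC in mm from the two measured stage positions."""
        return abs(z_confocal_mm - z_cats_eye_mm) * gauge_gain

    print(radius_of_curvature(12.304, 1067.289, gauge_gain=0.999987))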
Griendling, Kathy K; Touyz, Rhian M; Zweier, Jay L; Dikalov, Sergey; Chilian, William; Chen, Yeong-Renn; Harrison, David G; Bhatnagar, Aruni
2016-08-19
Reactive oxygen species and reactive nitrogen species are biological molecules that play important roles in cardiovascular physiology and contribute to disease initiation, progression, and severity. Because of their ephemeral nature and rapid reactivity, these species are difficult to measure directly with high accuracy and precision. In this statement, we review current methods for measuring these species and the secondary products they generate and suggest approaches for measuring redox status, oxidative stress, and the production of individual reactive oxygen and nitrogen species. We discuss the strengths and limitations of different methods and the relative specificity and suitability of these methods for measuring the concentrations of reactive oxygen and reactive nitrogen species in cells, tissues, and biological fluids. We provide specific guidelines, through expert opinion, for choosing reliable and reproducible assays for different experimental and clinical situations. These guidelines are intended to help investigators and clinical researchers avoid experimental error and ensure high-quality measurements of these important biological species. © 2016 American Heart Association, Inc.
NASA Astrophysics Data System (ADS)
Liu, Chun-Xiao; Sau, Jay D.; Das Sarma, S.
2018-06-01
Trivial Andreev bound states arising from chemical-potential variations could lead to zero-bias tunneling conductance peaks at finite magnetic field in class-D nanowires, precisely mimicking the predicted zero-bias conductance peaks arising from the topological Majorana bound states. This finding raises a serious question on the efficacy of using zero-bias tunneling conductance peaks, by themselves, as evidence supporting the existence of topological Majorana bound states in nanowires. In the current work, we provide specific experimental protocols for tunneling spectroscopy measurements to distinguish between Andreev and Majorana bound states without invoking more demanding nonlocal measurements which have not yet been successfully performed in nanowire systems. In particular, we discuss three distinct experimental schemes involving the response of the zero-bias peak to local perturbations of the tunnel barrier, the overlap of bound states from the wire ends, and, most compellingly, introducing a sharp localized potential in the wire itself to perturb the zero-bias tunneling peaks. We provide extensive numerical simulations clarifying and supporting our theoretical predictions.
Some reflections on the understanding of the oxygen reduction reaction at Pt(111).
Gómez-Marín, Ana M; Rizo, Ruben; Feliu, Juan M
2013-12-27
The oxygen reduction reaction (ORR) is a pivotal process in electrochemistry. Unfortunately, after decades of intensive research, a fundamental knowledge about its reaction mechanism is still lacking. In this paper, a global and critical view on the most important experimental and theoretical results regarding the ORR on Pt(111) and its vicinal surfaces, in both acidic and alkaline media, is taken. Phenomena such as the ORR surface structure sensitivity and the lack of a reduction current at high potentials are discussed in the light of the surface oxidation and disordering processes and the possible relevance of the hydrogen peroxide reduction and oxidation reactions in the ORR mechanism. The necessity of building precise and realistic reaction models, deduced from reliable experimental results that must be carefully obtained under strict working conditions, is shown. Progress in understanding this important reaction at the molecular level, and the choice of the right approach to the design of electrocatalysts for fuel-cell cathodes, is therefore only possible through a cooperative approach between theory and experiment.
Molecular Modeling of a Probe in 2D IR Spectroscopy
NASA Astrophysics Data System (ADS)
Cooper, Anthony; Larini, Luca
Proteins must adopt a precise three-dimensional structure during the folding process in order to perform their designated functions. Although much has been learned about folding, there are still many details of structural dynamics that are difficult to characterize with existing experimental techniques. To overcome these challenges, novel infrared and fluorescent spectroscopic techniques have recently been employed to probe molecular structure at the atomistic scale. These techniques rely on the spectroscopic properties of a nitrile group attached to a phenylalanine. In this study, we model this probe and compute its properties in different solvents. This is done by performing Molecular Dynamics simulations of PheCN solvated in water, urea and TMAO. We measure the decay rate of the vibrational stretching of the CN group in order to characterize the effects of different solvents on the local structure of the molecule. These data can be used to identify non-trivial conformational changes of the protein in the folding process. Preliminary results show agreement with current experimental 2D IR spectroscopy data.
NASA Astrophysics Data System (ADS)
Ehrhart, Matthias; Lienhart, Werner
2017-09-01
The importance of automated prism tracking grows with the rising automation of total station measurements in machine control, monitoring and one-person operation. In this article we summarize and explain the different techniques that are used to coarsely search for a prism, to precisely aim at a prism, and to identify whether the correct prism is being tracked. Along with the state-of-the-art review, we discuss and experimentally evaluate possible improvements based on the image data of an additional wide-angle camera, which is available in many total stations today. In cases in which the total station's fine aiming module loses the prism, the tracked object may still be visible to the wide-angle camera because of its larger field of view. The theodolite angles towards the target can then be derived from its image coordinates, which facilitates a fast reacquisition of the prism. In experimental measurements we demonstrate that our image-based approach to the coarse target search is 4 to 10 times faster than conventional approaches.
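As a rough illustration of how the theodolite angles can be derived from the target's image coordinates, the sketch below assumes a calibrated pinhole camera aligned with the telescope axis; the focal length, pixel pitch and pixel coordinates are invented for the example and are not values from the article:

    import math

    def pixel_to_angles(u_px, v_px, cx_px, cy_px, focal_length_mm, pixel_pitch_mm):
        """Return (horizontal, vertical) angle offsets in radians for pixel (u_px, v_px)."""
        dx = (u_px - cx_px) * pixel_pitch_mm   # offset on the image plane, mm
        dy = (v_px - cy_px) * pixel_pitch_mm
        return math.atan2(dx, focal_length_mm), math.atan2(dy, focal_length_mm)

    # Target seen 250 px right of the principal point with an assumed 21 mm lens.
    print(pixel_to_angles(1210, 540, 960, 540, focal_length_mm=21.0, pixel_pitch_mm=0.00345))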
NASA Astrophysics Data System (ADS)
López-Sánchez, M.; Mansilla-Plaza, L.; Sánchez-de-laOrden, M.
2017-10-01
Prior to field-scale research, soil samples are analysed on a laboratory scale for electrical resistivity calibrations. Currently, there are a variety of field instruments for estimating the water content of soils using different physical phenomena. These instruments can be used to develop moisture-resistivity relationships on the same soil samples, which ensures that measurements are performed on the same material and under the same conditions (e.g., humidity and temperature). A geometric factor, which depends on the electrode locations, is applied in order to calculate the apparent electrical resistivity of the laboratory test cells. This geometric factor can be determined in three different ways: by an analytical approximation, by laboratory trials (experimental approximation), or by the analysis of a numerical model. The analytical approximation is not appropriate for complex cells or arrays, and both the experimental and the numerical approximation can lead to inaccurate results. We therefore propose a novel approach that obtains a compromise solution between the two techniques, providing a more precise determination of the geometric factor.
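For context, a minimal sketch of how a geometric factor converts a four-electrode reading into apparent resistivity; the Wenner expression and the numbers are illustrative assumptions, since complex laboratory cells are exactly the case where such an analytical factor no longer applies:

    import math

    def wenner_geometric_factor(spacing_m):
        # Analytical factor for an ideal Wenner array on a half-space: k = 2*pi*a.
        return 2.0 * math.pi * spacing_m

    def apparent_resistivity(delta_v_volts, current_amps, k):
        # rho_a = k * (delta V / I)
        return k * delta_v_volts / current_amps

    k = wenner_geometric_factor(0.05)             # assumed 5 cm electrode spacing
    print(apparent_resistivity(0.012, 0.001, k))  # apparent resistivity in ohm metres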
Development of a revolute-joint robot for the precision positioning of an x-ray detector
NASA Astrophysics Data System (ADS)
Preissner, Curt A.; Royston, Thomas J.; Shu, Deming
2003-10-01
This paper profiles the initial phase in the development of a six degree-of-freedom robot, with 1 μm dynamic positioning uncertainty, for the manipulation of x-ray detectors or test specimens at the Advanced Photon Source (APS). While revolute-joint robot manipulators exhibit a smaller footprint along with increased positioning flexibility compared to Cartesian manipulators, commercially available revolute-joint manipulators do not meet our size, positioning, or environmental specifications. Currently, a robot with 20 μm dynamic positioning uncertainty is functioning at the APS for cryogenic crystallography sample pick-and-place operation. Theoretical, computational and experimental procedures are being used to (1) identify and (2) simulate the dynamics of the present robot system using a multibody approach, including the mechanics and control architecture, and eventually to (3) design an improved version with a 1 μm dynamic positioning uncertainty. We expect that the preceding experimental and theoretical techniques will be useful design and analysis tools as multi-degree-of-freedom manipulators become more prevalent on synchrotron beamlines.
Wang, Hanghang; Muehlbauer, Michael J.; O’Neal, Sara K.; Newgard, Christopher B.; Hauser, Elizabeth R.; Shah, Svati H.
2017-01-01
The field of metabolomics as applied to human disease and health is rapidly expanding. In recent efforts of metabolomics research, greater emphasis has been placed on quality control and method validation. In this study, we report an experience with quality control and a practical application of method validation. Specifically, we sought to identify and modify steps in gas chromatography-mass spectrometry (GC-MS)-based, non-targeted metabolomic profiling of human plasma that could influence metabolite identification and quantification. Our experimental design included two studies: (1) a limiting-dilution study, which investigated the effects of dilution on analyte identification and quantification; and (2) a concentration-specific study, which compared the optimal plasma extract volume established in the first study with the volume used in the current institutional protocol. We confirmed that contaminants, concentration, repeatability and intermediate precision are major factors influencing metabolite identification and quantification. In addition, we established methods for improved metabolite identification and quantification, which were summarized to provide recommendations for experimental design of GC-MS-based non-targeted profiling of human plasma. PMID:28841195
In situ thermomechanical testing methods for micro/nano-scale materials.
Kang, Wonmo; Merrill, Marriner; Wheeler, Jeffrey M
2017-02-23
The advance of micro/nanotechnology in energy-harvesting, micropower, electronic devices, and transducers for automobile and aerospace applications has led to the need for accurate thermomechanical characterization of micro/nano-scale materials to ensure their reliability and performance. This persistent need has driven various efforts to develop innovative experimental techniques that overcome the critical challenges associated with precise mechanical and thermal control of micro/nano-scale specimens during material characterization. Here we review recent progress in the development of thermomechanical testing methods from miniaturized versions of conventional macroscopic test systems to the current state of the art of in situ uniaxial testing capabilities in electron microscopes utilizing either indentation-based microcompression or integrated microsystems. We discuss the major advantages/disadvantages of these methods with respect to specimen size, range of temperature control, ease of experimentation and resolution of the measurements. We also identify key challenges in each method. Finally, we summarize some of the important discoveries that have been made using in situ thermomechanical testing and the exciting research opportunities still to come in micro/nano-scale materials.
The use of MR B1+ imaging for validation of FDTD electromagnetic simulations of human anatomies.
Van den Berg, Cornelis A T; Bartels, Lambertus W; van den Bergen, Bob; Kroeze, Hugo; de Leeuw, Astrid A C; Van de Kamer, Jeroen B; Lagendijk, Jan J W
2006-10-07
In this study, MR B1+ imaging is employed to experimentally verify the validity of FDTD simulations of electromagnetic field patterns in human anatomies. Measurements and FDTD simulations of the B1+ field induced by a 3 T MR body coil in a human corpse were performed. It was found that MR B1+ imaging is a sensitive method to measure the radiofrequency (RF) magnetic field inside a human anatomy with a precision of approximately 3.5%. A good correlation was found between the B1+ measurements and the FDTD simulations. The measured B1+ pattern for a human pelvis consisted of a global, diagonal modulation pattern plus local B1+ heterogeneities. It is believed that these local B1+ field variations are the result of peaks in the induced electric currents, which could not be resolved by the FDTD simulations on a 5 mm³ simulation grid. The findings from this study demonstrate that B1+ imaging is a valuable experimental technique to gain more knowledge about the dielectric interaction of RF fields with the human anatomy.
Influence of local topography on precision irrigation management
USDA-ARS?s Scientific Manuscript database
Precision irrigation management is currently accomplished using spatial information about soil properties through soil series maps or electrical conductivity (EC) measurements. Crop yield, however, is consistently influenced by local topography, both in rain-fed and irrigated environments. Utilizing ...
Chen, Guoli; Yang, Zhaohai; Eshleman, James R.; Netto, George J.
2016-01-01
Precision medicine, a concept that has recently emerged and has been widely discussed, emphasizes tailoring medical care to individuals largely based on information acquired from molecular diagnostic testing. As a vital aspect of precision cancer medicine, targeted therapy has been proven to be efficacious and less toxic for cancer treatment. Colorectal cancer (CRC) is one of the most common cancers and among the leading causes for cancer related deaths in the United States and worldwide. By far, CRC has been one of the most successful examples in the field of precision cancer medicine, applying molecular tests to guide targeted therapy. In this review, we summarize the current guidelines for anti-EGFR therapy, revisit the roles of pathologists in an era of precision cancer medicine, demonstrate the transition from traditional “one test-one drug” assays to multiplex assays, especially by using next-generation sequencing platforms in the clinical diagnostic laboratories, and discuss the future perspectives of tumor heterogeneity associated with anti-EGFR resistance and immune checkpoint blockage therapy in CRC. PMID:27699178
Analysis of de-noising methods to improve the precision of the ILSF BPM electronic readout system
NASA Astrophysics Data System (ADS)
Shafiee, M.; Feghhi, S. A. H.; Rahighi, J.
2016-12-01
In order to achieve optimum operation and a precise control system at particle accelerators, the beam position must be measured with sub-μm precision. We developed a BPM electronic readout system at the Iranian Light Source Facility, and it has been experimentally tested at the ALBA accelerator facility. The results show a precision of 0.54 μm in beam position measurements. To improve the precision of this beam position monitoring system to the sub-μm level, we have studied different de-noising methods such as principal component analysis, wavelet transforms, FIR filtering, and direct averaging. An evaluation of the noise reduction was carried out to assess the ability of these methods. The results show that noise reduction based on the Daubechies wavelet transform performs better than the other algorithms, and that the method is suitable for signal noise reduction in beam position monitoring systems.
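As an illustration of the kind of wavelet de-noising compared in the paper, here is a minimal sketch using the PyWavelets package with a Daubechies wavelet; the wavelet choice, decomposition level and thresholding rule are common defaults assumed for the example, not the exact settings of the study:

    import numpy as np
    import pywt  # PyWavelets

    def wavelet_denoise(signal, wavelet="db4", level=4):
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate from the finest scale
        thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))   # universal threshold
        coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)[: len(signal)]

    noisy = np.sin(np.linspace(0, 20, 2048)) + 0.05 * np.random.randn(2048)
    clean = wavelet_denoise(noisy)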
Sensor-less pseudo-sinusoidal drive for a permanent-magnet brushless ac motor
NASA Astrophysics Data System (ADS)
Liu, Li-Hsiang; Chern, Tzuen-Lih; Pan, Ping-Lung; Huang, Tsung-Mou; Tsay, Der-Min; Kuang, Jao-Hwa
2012-04-01
Precise rotor-position information is required for a permanent-magnet brushless ac motor (BLACM) drive. In the conventional sinusoidal drive method, either an encoder or a resolver is usually employed. For position sensor-less vector control schemes, the rotor flux estimate and torque components are obtained by complicated coordinate transformations. These computationally intensive methods are susceptible to current distortions and parameter variations. To reduce this complexity, this work presents a sensor-less pseudo-sinusoidal drive scheme with speed control for a three-phase BLACM. Based on the sinusoidal drive scheme, a floating period of each phase current is inserted for back electromotive force detection. The zero-crossing point is determined directly by the proposed scheme, and the rotor magnetic position and rotor speed can be estimated simultaneously. Several experiments for various active angle periods are undertaken. Furthermore, a current feedback control is included to minimize and compensate for the torque fluctuation. The experimental results show that the proposed method has a competitive performance compared with conventional drive methods for BLACMs. The proposed scheme is straightforward, bringing the benefits of sensor-less drive and negating the need for coordinate transformations in the operating process.
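The core of such a scheme is the detection of the back-EMF zero crossing during the floating interval of a phase; the toy sketch below shows the idea on an array of sampled terminal voltages and is only an assumed illustration, not the authors' implementation:

    def back_emf_zero_crossing(samples, neutral_level=0.0):
        """Return the index of the first zero crossing of the floating-phase voltage, or None."""
        for i in range(1, len(samples)):
            prev, curr = samples[i - 1] - neutral_level, samples[i] - neutral_level
            if prev == 0.0 or prev * curr < 0.0:
                return i
        return None

    print(back_emf_zero_crossing([-1.2, -0.7, -0.2, 0.3, 0.9]))  # -> 3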
Design and model for the giant magnetostrictive actuator used on an electronic controlled injector
NASA Astrophysics Data System (ADS)
Xue, Guangming; Zhang, Peilin; He, Zhongbo; Li, Ben; Rong, Ce
2017-05-01
A giant magnetostrictive actuator (GMA) is a promising candidate for driving an electronically controlled injector, as giant magnetostrictive material (GMM) offers excellent performance such as large output, fast response and high operating stability. To meet the driving requirement of the injector, the GMA should produce maximal shortening displacement when energized. An unbiased GMA with a 'T'-shaped output rod is designed to reach this target. Furthermore, an open-hold-fall type driving voltage is applied to the actuator coil to accelerate the response of the coil current. The actuator displacement is modeled by establishing sub-models of the coil current, the magnetic field within the GMM rod, the magnetization and the magnetostrictive strain in sequence. Two modifications are made to improve the model's accuracy. First, because the model fails to compute the transient-state response precisely, a dead-zone and a delay link are embedded into the coil-current sub-model. Second, as the magnetization and magnetostrictive-strain sub-models only influence the shape of the transient-state response, a linear magnetostrictive strain-magnetic field sub-model is introduced. Experimental results show that the modified model with the linear magnetostrictive strain expression can predict the actuator displacement quite effectively.
Shi, Ni; Clinton, Steven K.; Liu, Zhihua; Wang, Yongquan; Riedl, Kenneth M.; Schwartz, Steven J.; Zhang, Xiaoli; Pan, Zui; Chen, Tong
2015-01-01
Human and experimental colon carcinogenesis are enhanced by a pro-inflammatory microenvironment. Pharmacologically driven chemopreventive agents and dietary variables are hypothesized to have future roles in the prevention of colon cancer by targeting these processes. The current study was designed to determine the ability of dietary lyophilized strawberries to inhibit inflammation-promoted colon carcinogenesis in a preclinical animal model. Mice were given a single i.p. injection of azoxymethane (10 mg kg⁻¹ body weight). One week after injection, mice were administered 2% (w/v) dextran sodium sulfate in drinking water for seven days and then an experimental diet containing chemically characterized lyophilized strawberries for the duration of the bioassay. Mice fed control diet, or experimental diet containing 2.5%, 5.0% or 10.0% strawberries, displayed tumor incidence of 100%, 64%, 75% and 44%, respectively (p < 0.05). The mechanistic studies demonstrate that strawberries reduced expression of proinflammatory mediators, suppressed nitrosative stress and decreased phosphorylation of phosphatidylinositol 3-kinase, Akt, extracellular signal-regulated kinase and nuclear factor kappa B. In conclusion, strawberries target proinflammatory mediators and oncogenic signaling, accounting for their preventive efficacy against colon carcinogenesis in mice. This work supports the future development of fully characterized and precisely controlled functional foods for testing in human clinical trials for this disease. PMID:25763529
Prediction of enhancer-promoter interactions via natural language processing.
Zeng, Wanwen; Wu, Mengmeng; Jiang, Rui
2018-05-09
Precise identification of three-dimensional genome organization, especially enhancer-promoter interactions (EPIs), is important to deciphering gene regulation, cell differentiation and disease mechanisms. Currently, it is a challenging task to distinguish true interactions from other nearby non-interacting ones, since the power of traditional experimental methods is limited by low resolution or low throughput. We propose a novel computational framework, EP2vec, to assay three-dimensional genomic interactions. We first extract sequence embedding features, defined as fixed-length vector representations learned from variable-length sequences using an unsupervised deep learning method from natural language processing. Then, we train a classifier to predict EPIs using the learned representations in a supervised way. Experimental results demonstrate that EP2vec obtains F1 scores ranging from 0.841 to 0.933 on different datasets, which outperforms existing methods. We prove the robustness of sequence embedding features by carrying out a sensitivity analysis. In addition, we identify motifs that represent cell line-specific information through analysis of the learned sequence embedding features using an attention mechanism. Last, we show that even better performance, with F1 scores of 0.889 to 0.940, can be achieved by combining sequence embedding features and experimental features. EP2vec sheds light on feature extraction for DNA sequences of arbitrary lengths and provides a powerful approach for EPI identification.
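To make the 'sequence embedding plus supervised classifier' idea concrete, the following sketch uses gensim's Doc2Vec on k-mer 'words' and a simple scikit-learn classifier; the k-mer size, model sizes, toy sequences and choice of classifier are all assumptions for illustration rather than the EP2vec configuration:

    from gensim.models.doc2vec import Doc2Vec, TaggedDocument
    from sklearn.linear_model import LogisticRegression

    def kmers(seq, k=6):
        return [seq[i:i + k] for i in range(len(seq) - k + 1)]

    # Toy enhancer/promoter pairs with interaction labels (assumed data).
    pairs = [("ACGTACGTACGTTGCA", "TTGCAACGTTGCAACG", 1),
             ("GGGGCCCCGGGGCCCC", "ATATATATATATATAT", 0)]

    docs = [TaggedDocument(kmers(e + p), [i]) for i, (e, p, _) in enumerate(pairs)]
    embedder = Doc2Vec(docs, vector_size=32, window=5, min_count=1, epochs=50)

    X = [embedder.infer_vector(kmers(e + p)) for e, p, _ in pairs]
    y = [label for _, _, label in pairs]
    clf = LogisticRegression().fit(X, y)
    print(clf.predict(X))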
The functional basis of adaptive evolution in chemostats.
Gresham, David; Hong, Jungeui
2015-01-01
Two of the central problems in biology are determining the molecular basis of adaptive evolution and understanding how cells regulate their growth. The chemostat is a device for culturing cells that provides great utility in tackling both of these problems: it enables precise control of the selective pressure under which organisms evolve and it facilitates experimental control of cell growth rate. The aim of this review is to synthesize results from studies of the functional basis of adaptive evolution in long-term chemostat selections using Escherichia coli and Saccharomyces cerevisiae. We describe the principle of the chemostat, provide a summary of studies of experimental evolution in chemostats, and use these studies to assess our current understanding of selection in the chemostat. Functional studies of adaptive evolution in chemostats provide a unique means of interrogating the genetic networks that control cell growth, which complements functional genomic approaches and quantitative trait loci (QTL) mapping in natural populations. An integrated approach to the study of adaptive evolution that accounts for both molecular function and evolutionary processes is critical to advancing our understanding of evolution. By renewing efforts to integrate these two research programs, experimental evolution in chemostats is ideally suited to extending the functional synthesis to the study of genetic networks. © FEMS 2015. All rights reserved. For permissions, please e-mail: journals.permission@oup.com.
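For readers less familiar with the device, the standard Monod chemostat model below illustrates why the dilution rate fixes the steady-state growth rate (mu = D), which is what makes the chemostat useful for controlling growth during long-term selections; the parameter values are arbitrary assumptions:

    import numpy as np
    from scipy.integrate import odeint

    def chemostat(state, t, D, s_in, mu_max, Ks, yield_coeff):
        x, s = state                                # biomass and limiting nutrient
        mu = mu_max * s / (Ks + s)                  # Monod growth rate
        dxdt = (mu - D) * x
        dsdt = D * (s_in - s) - mu * x / yield_coeff
        return [dxdt, dsdt]

    t = np.linspace(0, 200, 2000)                   # hours
    traj = odeint(chemostat, [0.01, 5.0], t, args=(0.2, 5.0, 0.5, 0.1, 0.5))
    print(traj[-1])                                 # near steady state, mu is approximately D = 0.2 per hour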
Fundamental Physics with Antihydrogen
NASA Astrophysics Data System (ADS)
Hangst, J. S.
Antihydrogen—the antimatter equivalent of the hydrogen atom—is of fundamental interest as a test bed for universal symmetries—such as CPT and the Weak Equivalence Principle for gravitation. Invariance under CPT requires that hydrogen and antihydrogen have the same spectrum. Antimatter is of course intriguing because of the observed baryon asymmetry in the universe—currently unexplained by the Standard Model. At the CERN Antiproton Decelerator (AD) [
Experimental progress in positronium laser physics
NASA Astrophysics Data System (ADS)
Cassidy, David B.
2018-03-01
The field of experimental positronium physics has advanced significantly in the last few decades, with new areas of research driven by the development of techniques for trapping and manipulating positrons using Surko-type buffer gas traps. Large numbers of positrons (typically ≥10⁶) accumulated in such a device may be ejected all at once, so as to generate an intense pulse. Standard bunching techniques can produce pulses with ns (mm) temporal (spatial) beam profiles. These pulses can be converted into a dilute Ps gas in vacuum with densities on the order of 10⁷ cm⁻³ which can be probed by standard ns pulsed laser systems. This allows for the efficient production of excited Ps states, including long-lived Rydberg states, which in turn facilitates numerous experimental programs, such as precision optical and microwave spectroscopy of Ps, the application of Stark deceleration methods to guide, decelerate and focus Rydberg Ps beams, and studies of the interactions of such beams with other atomic and molecular species. These methods are also applicable to antihydrogen production and spectroscopic studies of energy levels and resonances in positronium ions and molecules. A summary of recent progress in this area will be given, with the objective of providing an overview of the field as it currently exists, and a brief discussion of some future directions.
Taub, Chloe J; Sturgeon, John A; Johnson, Kevin A; Mackey, Sean C; Darnall, Beth D
2017-01-01
Pain catastrophizing, a pattern of negative cognitive-emotional responses to actual or anticipated pain, maintains chronic pain and undermines response to treatments. Currently, precisely how pain catastrophizing influences pain processing is not well understood. In experimental settings, pain catastrophizing has been associated with amplified pain processing. This study sought to clarify pain processing mechanisms via experimental induction of pain catastrophizing. Forty women with chronic low back pain were assigned in blocks to an experimental condition, either a psychologist-led 10-minute pain catastrophizing induction or a control (10-minute rest period). All participants underwent a baseline round of several quantitative sensory testing (QST) tasks, followed by the pain catastrophizing induction or the rest period, and then a second round of the same QST tasks. The catastrophizing induction appeared to increase state pain catastrophizing levels. Changes in QST pain were detected for two of the QST tasks administered, weighted pin pain and mechanical allodynia. Although there is a need to replicate our preliminary results with a larger sample, study findings suggest a potential relationship between induced pain catastrophizing and central sensitization of pain. Clarification of the mechanisms through which catastrophizing affects pain modulatory systems may yield useful clinical insights into the treatment of chronic pain.
Experimental uncertainty survey and assessment. [Space Shuttle Main Engine testing
NASA Technical Reports Server (NTRS)
Coleman, Hugh W.
1992-01-01
An uncertainty analysis and assessment of the specific impulse determination during Space Shuttle Main Engine testing is reported. It is concluded that in planning and designing tests and in interpreting the results of tests, the bias and precision components of experimental uncertainty should be considered separately. Recommendations for future research efforts are presented.
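A minimal sketch of what 'considering the bias and precision components separately' typically amounts to, assuming the conventional root-sum-square combination with a coverage factor of about 2 for 95% confidence; the specific-impulse numbers are invented for illustration:

    import math

    def overall_uncertainty(bias_limit, precision_index, t_factor=2.0):
        # U = sqrt(B**2 + (t * S)**2), keeping B and S as separate inputs.
        return math.sqrt(bias_limit**2 + (t_factor * precision_index)**2)

    print(overall_uncertainty(0.8, 0.5))   # e.g. 0.8 s bias limit, 0.5 s precision index -> about 1.28 s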
Gain and loss of moisture in large forest fuels
Arthur P. Brackebusch
1975-01-01
Equations for predicting moisture in large fuels were developed from data gathered at Priest River Experimental Forest and Boise Basin Experimental Forest. The most important variables were beginning moisture content of the fuel, duration of precipitation, amount of precipitation, and the sum of the mean temperature of an observation period. Sensitivity and precision...
Method of high precision interval measurement in pulse laser ranging system
NASA Astrophysics Data System (ADS)
Wang, Zhen; Lv, Xin-yuan; Mao, Jin-jin; Liu, Wei; Yang, Dong
2013-09-01
Laser ranging is widely used because it offers high measuring precision, fast measuring speed, no need for cooperative targets, and strong resistance to electromagnetic interference; the time-interval measurement is the key parameter affecting the performance of the whole system. The precision of a pulsed laser ranging system is determined by the precision of the time-interval measurement. The principle and structure of the laser ranging system are introduced, and a method of high-precision time-interval measurement for a pulsed laser ranging system is established in this paper. Based on an analysis of the factors that affect the range-measurement precision, a pulse rising-edge discriminator is adopted to produce the timing marks for start-stop time discrimination, and a TDC-GP2 high-precision interval measurement system based on a TMS320F2812 DSP is designed to improve the measurement precision. Experimental results indicate that the time-interval measurement method in this paper achieves higher range accuracy. Compared with a traditional time-interval measurement system, the method simplifies the system design and reduces the influence of bad weather conditions; furthermore, it satisfies the requirements of low cost and miniaturization.
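The link between timing precision and range precision is the usual pulsed time-of-flight relation R = c * dt / 2; the short sketch below shows it together with the range error contributed by a given timing error (the 65 ps figure is just an assumed example resolution):

    C = 299_792_458.0  # speed of light, m/s

    def range_from_interval(dt_seconds):
        return C * dt_seconds / 2.0

    print(range_from_interval(6.67e-6))        # about 1000 m for a 6.67 us round trip
    print(range_from_interval(65e-12) * 1e3)   # about 9.7 mm of range per 65 ps of timing error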
Present situation and trend of precision guidance technology and its intelligence
NASA Astrophysics Data System (ADS)
Shang, Zhengguo; Liu, Tiandong
2017-11-01
This paper first introduces the basic concepts of precision guidance technology and artificial intelligence technology. It then gives a brief introduction to intelligent precision guidance technology and, with reference to foreign deep-learning-based intelligent weapon programs such as the LRASM missile, TRACE and BLADE projects, gives an overview of the current state of precision guidance technology abroad. Finally, the future development trend of intelligent precision guidance technology is summarized, mainly covering multiple targets, intelligent classification, weak-target detection and recognition, intelligent anti-jamming in complex environments, and multi-source, multi-missile cooperative engagement, among other aspects.
[Current situation and thoughts on precision medicine about the treatment of tumor in China].
Guo, J C; Yuan, D
2016-07-01
With the United States starting its "precision medicine plan", the concept has spread worldwide and opened a new direction for the development of medicine. Our country has also started such a plan, trying to seize the opportunity. At present, tumors threaten human health with high incidence and mortality, and in China both the incidence and the mortality of tumors have been on the rise, so tumors have become one of the most important fields for precision medicine. The current situation of precision medicine in tumor treatment in China is discussed in this paper, with the hope of revealing the Chinese characteristics of precision medicine and maximizing personal and social health benefits.
Dual current readout for precision plating
NASA Technical Reports Server (NTRS)
Iceland, W. F.
1970-01-01
Bistable amplifier prevents damage in the low range circuitry of a dual scale ammeter. It senses the current and switches automatically to the high range circuitry as the current rises above a preset level.
ERIC Educational Resources Information Center
Spybrook, Jessaca; Puente, Anne Cullen; Lininger, Monica
2013-01-01
This article examines changes in the research design, sample size, and precision between the planning phase and implementation phase of group randomized trials (GRTs) funded by the Institute of Education Sciences. Thirty-eight GRTs funded between 2002 and 2006 were examined. Three studies revealed changes in the experimental design. Ten studies…
Whale, Alexandra S; Huggett, Jim F; Cowen, Simon; Speirs, Valerie; Shaw, Jacqui; Ellison, Stephen; Foy, Carole A; Scott, Daniel J
2012-06-01
One of the benefits of Digital PCR (dPCR) is the potential for unparalleled precision enabling smaller fold change measurements. An example of an assessment that could benefit from such improved precision is the measurement of tumour-associated copy number variation (CNV) in the cell free DNA (cfDNA) fraction of patient blood plasma. To investigate the potential precision of dPCR and compare it with the established technique of quantitative PCR (qPCR), we used breast cancer cell lines to investigate HER2 gene amplification and modelled a range of different CNVs. We showed that, with equal experimental replication, dPCR could measure a smaller CNV than qPCR. As dPCR precision is directly dependent upon both the number of replicate measurements and the template concentration, we also developed a method to assist the design of dPCR experiments for measuring CNV. Using an existing model (based on Poisson and binomial distributions) to derive an expression for the variance inherent in dPCR, we produced a power calculation to define the experimental size required to reliably detect a given fold change at a given template concentration. This work will facilitate any future translation of dPCR to key diagnostic applications, such as cancer diagnostics and analysis of cfDNA.
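A minimal sketch of the Poisson bookkeeping behind such dPCR planning, with assumed partition counts; the standard error below is the usual delta-method approximation and the 'HER2-like' label is only illustrative:

    import math

    def dpcr_lambda(positive, total):
        """Mean target copies per partition, and its approximate standard error."""
        p = positive / total
        lam = -math.log(1.0 - p)
        se = math.sqrt(p / (total * (1.0 - p)))
        return lam, se

    lam_ref, se_ref = dpcr_lambda(6000, 20000)      # reference assay (assumed counts)
    lam_tgt, se_tgt = dpcr_lambda(8500, 20000)      # HER2-like target assay (assumed counts)
    print(lam_tgt / lam_ref, se_ref / lam_ref, se_tgt / lam_tgt)  # ratio and relative errors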
NASA Astrophysics Data System (ADS)
Jiang, Shanchao; Wang, Jing; Sui, Qingmei
2018-03-01
In order to achieve rotation angle measurement, a novel type of miniaturized fiber Bragg grating (FBG) rotation angle sensor with high measurement precision and temperature self-compensation is proposed and studied in this paper. The FBG rotation angle sensor mainly contains two core sensing elements (FBG1 and FBG2), a triangular cantilever beam, and a rotation angle transfer element. In theory, the proposed sensor achieves temperature self-compensation through the complementary responses of the two sensing elements (FBG1 and FBG2), and it has an unbounded angle measurement range with a 2π rad period, owing to the function of the rotation angle transfer element. After introducing its working principle, the theoretical calculation model of the FBG rotation angle sensor is established, and a calibration experiment on a prototype is carried out to obtain its measurement performance. Analysis of the experimental data shows that the measurement precision of the FBG rotation angle sensor prototype is 0.2° with excellent linearity, and that the temperature sensitivities of FBG1 and FBG2 are 10 pm/° and 10.1 pm/°, respectively. These experimental results confirm that the FBG rotation angle sensor can achieve large-range angle measurement with high precision and temperature self-compensation.
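The temperature self-compensation rests on the two gratings responding oppositely to rotation but identically to temperature, so the differential wavelength shift isolates the angle; the sensitivity and shift values in this sketch are assumed for illustration:

    def rotation_angle(d_lambda1_pm, d_lambda2_pm, angle_sensitivity_pm_per_deg=20.0):
        """Rotation angle in degrees from the two FBG wavelength shifts (in pm)."""
        return (d_lambda1_pm - d_lambda2_pm) / (2.0 * angle_sensitivity_pm_per_deg)

    # +150 pm on FBG1 and -130 pm on FBG2 from rotation, plus a common +10 pm thermal drift.
    print(rotation_angle(160.0, -120.0))   # -> 7.0 degrees; the thermal drift cancels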
Gotti, Riccardo; Gatti, Davide; Masłowski, Piotr; Lamperti, Marco; Belmonte, Michele; Laporta, Paolo; Marangoni, Marco
2017-10-07
We propose a novel approach to cavity ring-down spectroscopy (CRDS) in which spectra acquired with a frequency-agile rapid-scanning (FARS) scheme, i.e., with a laser sideband stepped across the modes of a high-finesse cavity, are interleaved with one another by a sub-millisecond readjustment of the cavity length. This brings acquisition times below 20 s for few-GHz-wide spectra composed of a very high number of spectral points, typically 3200. Thanks to a signal-to-noise ratio easily in excess of 10 000, each FARS-CRDS spectrum is shown to be sufficient to determine the line-centre frequency of a Doppler-broadened line with a precision of 2 parts in 10¹¹, thus very close to that of sub-Doppler regimes, on a few-seconds time scale. The referencing of the probe laser to a frequency comb provides absolute accuracy and long-term reproducibility to the spectrometer and makes it a powerful tool for precision spectroscopy and line-shape analysis. The experimental approach is discussed in detail together with experimental precision and accuracy tests on the (30 012) ← (00 001) P12e line of CO₂ at ∼1.57 μm.
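The quantity extracted at each frequency step is the standard ring-down absorption coefficient, alpha = (1/c)(1/tau - 1/tau_empty), which is independent of the laser intensity; the ring-down times in this sketch are assumed values, not measurements from the paper:

    C = 299_792_458.0  # speed of light, m/s

    def absorption_coefficient(tau_s, tau_empty_s):
        return (1.0 / C) * (1.0 / tau_s - 1.0 / tau_empty_s)   # in 1/m

    # Assumed ring-down times: 40 us for the empty cavity, 38 us on an absorption line.
    print(absorption_coefficient(38e-6, 40e-6))                # about 4.4e-6 per metre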
Precision medicine for nurses: 101.
Lemoine, Colleen
2014-05-01
To introduce the key concepts and terms associated with precision medicine and support understanding of future developments in the field by providing an overview and history of precision medicine, related ethical considerations, and nursing implications. Current nursing, medical and basic science literature. Rapid progress in understanding the oncogenic drivers associated with cancer is leading to a shift toward precision medicine, where treatment is based on targeting specific genetic and epigenetic alterations associated with a particular cancer. Nurses will need to embrace the paradigm shift to precision medicine, expend the effort necessary to learn the essential terminology, concepts and principles, and work collaboratively with physician colleagues to best position our patients to maximize the potential that precision medicine can offer. Copyright © 2014 Elsevier Inc. All rights reserved.
Review on the progress of ultra-precision machining technologies
NASA Astrophysics Data System (ADS)
Yuan, Julong; Lyu, Binghai; Hang, Wei; Deng, Qianfa
2017-06-01
Ultra-precision machining technologies are the essential methods for obtaining the highest form accuracy and surface quality. As more research findings are published, such technologies now involve complicated systems engineering and have been widely used in the production of components for various aerospace, national defense, optics, mechanics, electronics, and other high-tech applications. The conception, applications and history of ultra-precision machining are introduced in this article, and the developments of ultra-precision machining technologies, especially ultra-precision grinding, ultra-precision cutting and polishing, are also reviewed. The current state and problems of this field in China are analyzed. Finally, the development trends of this field and the coping strategies employed in China to keep up with these trends are discussed.
Forming Mandrels for X-Ray Mirror Substrates
NASA Technical Reports Server (NTRS)
Blake, Peter N.; Saha, Timo; Zhang, Will; O'Dell, Stephen; Kester, Thomas; Jones, William
2011-01-01
Precision forming mandrels are one element in X-ray mirror development at NASA. Current mandrel fabrication process is capable of meeting the allocated precision requirements for a 5 arcsec telescope. A manufacturing plan is outlined for a large IXO-scale program.
Sarrouti, Mourad; Ouatik El Alaoui, Said
2017-04-01
Passage retrieval, the identification of top-ranked passages that may contain the answer to a given biomedical question, is a crucial component of any biomedical question answering (QA) system. Passage retrieval in open-domain QA is a longstanding challenge widely studied over the last decades. However, it still requires further efforts in biomedical QA. In this paper, we present a new biomedical passage retrieval method based on Stanford CoreNLP sentence/passage length, a probabilistic information retrieval (IR) model and UMLS concepts. In the proposed method, we first use our document retrieval system, based on the PubMed search engine and UMLS similarity, to retrieve documents relevant to a given biomedical question. We then take the abstracts from the retrieved documents and use the Stanford CoreNLP sentence splitter to produce a set of sentences, i.e., candidate passages. Using stemmed words and UMLS concepts as features for the BM25 model, we finally compute the similarity scores between the biomedical question and each of the candidate passages and keep the N top-ranked ones. Experimental evaluations performed on large standard datasets, provided by the BioASQ challenge, show that the proposed method achieves good performance compared with the current state-of-the-art methods, which it significantly outperforms by an average of 6.84% in terms of mean average precision (MAP). We have proposed an efficient passage retrieval method that can be used to retrieve relevant passages in biomedical QA systems with high mean average precision. Copyright © 2017 Elsevier Inc. All rights reserved.
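For reference, a compact implementation of the Okapi BM25 score that the method applies to stemmed words and UMLS concepts; the parameter values and toy passages are assumptions for illustration only:

    import math
    from collections import Counter

    def bm25_scores(query_terms, passages, k1=1.2, b=0.75):
        N = len(passages)
        avgdl = sum(len(p) for p in passages) / N
        df = Counter(term for p in passages for term in set(p))
        scores = []
        for p in passages:
            tf = Counter(p)
            s = 0.0
            for q in query_terms:
                if q not in tf:
                    continue
                idf = math.log(1.0 + (N - df[q] + 0.5) / (df[q] + 0.5))
                s += idf * tf[q] * (k1 + 1) / (tf[q] + k1 * (1 - b + b * len(p) / avgdl))
            scores.append(s)
        return scores

    passages = [["aspirin", "reduces", "fever"], ["bm25", "ranks", "candidate", "passages"]]
    print(bm25_scores(["aspirin", "fever"], passages))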
Alternative Solvents and Technologies for Precision Cleaning of Aerospace Components
NASA Technical Reports Server (NTRS)
Grandelli, Heather; Maloney, Phillip; DeVor, Robert; Hintze, Paul
2014-01-01
Precision cleaning solvents for aerospace components and oxygen fuel systems, including currently used Vertrel-MCA, have a negative environmental legacy, high global warming potential, and have polluted cleaning sites. Thus, alternative solvents and technologies are being investigated with the aim of achieving precision contamination levels of less than 1 mg/sq ft. The technologies being evaluated are ultrasonic bath cleaning, plasma cleaning and supercritical carbon dioxide cleaning.
NASA Astrophysics Data System (ADS)
Hishikawa, Yoshihiro; Doi, Takuya; Higa, Michiya; Ohshima, Hironori; Takenouchi, Takakazu; Yamagoe, Kengo
2017-08-01
Precise outdoor measurement of the current-voltage (I-V) curves of photovoltaic (PV) modules is desired for many applications such as low-cost onsite performance measurement, monitoring, and diagnosis. Conventional outdoor measurement technologies suffer from low precision when the solar irradiance is unstable, limiting the opportunity for precise measurement to clear sunny days. The purpose of this study is to investigate an outdoor measurement procedure that can improve both the measurement opportunity and the precision. Fast I-V curve measurements within 0.2 s and synchronous measurement of the irradiance using a PV-module irradiance sensor very effectively improved the precision. A small standard deviation (σ) of the module's maximum output power (P max) in the range of 0.7-0.9% is demonstrated on the basis of a 6-month experiment that mainly includes partly sunny and cloudy days, during which the solar irradiance is unstable. The σ was further improved to 0.3-0.5% by correcting the curves for the small variation of the irradiance. This indicates that the procedure of this study enables much more reproducible I-V curve measurements than a conventional procedure under various climatic conditions. Factors that affect the measurement results are discussed with a view to further improving the precision.
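A minimal sketch of the simplest form of irradiance correction, scaling the measured current by the ratio of a reference irradiance to the synchronously measured irradiance; it ignores voltage and temperature corrections, and the numbers are assumed:

    G_REF = 1000.0   # W/m^2, assumed reference irradiance

    def correct_current(i_measured_a, g_measured_w_m2, g_ref_w_m2=G_REF):
        """Scale the measured current to the reference irradiance level."""
        return i_measured_a * g_ref_w_m2 / g_measured_w_m2

    # 7.82 A measured while the module irradiance sensor read 965 W/m^2 (assumed values).
    print(correct_current(7.82, 965.0))   # about 8.10 A referred to 1000 W/m^2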
Toward the use of precision medicine for the treatment of head and neck squamous cell carcinoma.
Gong, Wang; Xiao, Yandi; Wei, Zihao; Yuan, Yao; Qiu, Min; Sun, Chongkui; Zeng, Xin; Liang, Xinhua; Feng, Mingye; Chen, Qianming
2017-01-10
Precision medicine is a new strategy that aims at preventing and treating human diseases by focusing on individual variations in people's genes, environment and lifestyle. Precision medicine has been used for cancer diagnosis and treatment and shows evident clinical efficacy. Rapid developments in molecular biology, genetics and sequencing technologies, as well as computational technology, have enabled the establishment of "big data", such as the Human Genome Project, which provides a basis for precision medicine. Head and neck squamous cell carcinoma (HNSCC) is an aggressive cancer with a high incidence rate and low survival rate. Current therapies are often aggressive and carry considerable side effects. Much research now indicates that precision medicine can be used for HNSCC and may achieve improved results. From this perspective, we present an overview of the current status, potential strategies, and challenges of precision medicine in HNSCC. We focus on targeted therapy based on the cell-surface signaling receptors epidermal growth factor receptor (EGFR), vascular endothelial growth factor (VEGF) and human epidermal growth factor receptor-2 (HER2), and on the PI3K/AKT/mTOR, JAK/STAT3 and RAS/RAF/MEK/ERK cellular signaling pathways. Gene therapy for the treatment of HNSCC is also discussed.
Cagliani, Alberto; Østerberg, Frederik W; Hansen, Ole; Shiv, Lior; Nielsen, Peter F; Petersen, Dirch H
2017-09-01
We present a breakthrough in micro-four-point probe (M4PP) metrology to substantially improve precision of transmission line (transfer length) type measurements by application of advanced electrode position correction. In particular, we demonstrate this methodology for the M4PP current-in-plane tunneling (CIPT) technique. The CIPT method has been a crucial tool in the development of magnetic tunnel junction (MTJ) stacks suitable for magnetic random-access memories for more than a decade. On two MTJ stacks, the measurement precision of resistance-area product and tunneling magnetoresistance was improved by up to a factor of 3.5 and the measurement reproducibility by up to a factor of 17, thanks to our improved position correction technique.
Compact Short-Pulsed Electron Linac Based Neutron Sources for Precise Nuclear Material Analysis
NASA Astrophysics Data System (ADS)
Uesaka, M.; Tagi, K.; Matsuyama, D.; Fujiwara, T.; Dobashi, K.; Yamamoto, M.; Harada, H.
2015-10-01
An X-band (11.424 GHz) electron linac is under development as a neutron source for nuclear data studies for melted-fuel debris analysis and nuclear security at Fukushima. We originally developed the linac as a Compton-scattering X-ray source. Quantitative material analysis and forensics for nuclear security will start several years from now, after the safe settlement of the accident is established. For that purpose, we should now accumulate more precise nuclear data for U, Pu, etc., especially for epithermal (0.1-10 eV) neutrons. Therefore, we have decided to modify and install the linac in the core space of the experimental nuclear reactor "Yayoi", which is now undergoing decommissioning. Owing to the compactness of the X-band linac, the electron gun, accelerating tube and other components can be installed in the small space of the core. First, we plan to perform time-of-flight (TOF) transmission measurements to study the total cross sections of nuclei for 0.1-10 eV neutrons. For a TOF line of less than 10 m, the pulse length of the generated neutrons should then be shorter than 100 ns. The electron energy, pulse length, power, and neutron yield are ~30 MeV, 100 ns - 1 μs, ~0.4 kW, and ~10^11 n/s (~10^3 n/cm^2/s at the samples), respectively. Optimization of the design of the neutron target (Ta, W, 238U), the TOF line and a high-sensitivity, fast-response neutron detector (Ce:LiCAF) is underway. We are upgrading the electron gun and buncher to realize higher current and beam power with a reasonable beam size in order to avoid damage to the neutron target. Although the neutron flux is limited for an X-band electron-linac-based source, we take advantage of its short pulses and its availability for nuclear data measurement with a short TOF system. First, we will assemble a tentative configuration in the current experimental room for Compton scattering in 2014. Then, after the decommissioning has finished, we will move the system to the "Yayoi" room and perform operation and measurement.
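The pulse-length requirement quoted above follows from the non-relativistic time-of-flight relation t = L/v with E = mv²/2. Below is a small sketch of that arithmetic using the 10 m flight path and 0.1-10 eV energies from the abstract; the script itself is only an illustration, not part of the described facility.

```python
import numpy as np

M_N = 1.674927e-27   # neutron mass [kg]
EV = 1.602177e-19    # 1 eV in joules

def tof(energy_ev, path_m=10.0):
    """Non-relativistic neutron time of flight over a given path."""
    v = np.sqrt(2.0 * energy_ev * EV / M_N)   # neutron speed [m/s]
    return path_m / v                          # flight time [s]

for e in (0.1, 1.0, 10.0):
    print(f"E = {e:5.1f} eV  ->  t = {tof(e) * 1e6:8.1f} us over 10 m")

# Even for the fastest (10 eV) neutrons, a 100 ns pulse is a tiny fraction
# of the flight time, which is why the abstract targets pulses below 100 ns.
print(f"100 ns / t(10 eV) = {100e-9 / tof(10.0) * 100:.2f} %")
```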
NASA Astrophysics Data System (ADS)
Li, Qing; Lin, Haibo; Xiu, Yu-Feng; Wang, Ruixue; Yi, Chuijie
A test platform for wheat precision seeding based on image processing techniques was designed to support development of a wheat precision seed metering device with high efficiency and precision. Using image processing techniques, the platform gathers images of the seeds (wheat) on the conveyor belt as they fall from the seed metering device. These data are then processed and analyzed to calculate the qualified rate, reseeding rate, leakage sowing rate, etc. This paper introduces the overall structure and design parameters of the platform and the hardware and software of the image acquisition system, as well as the method of seed identification and seed-spacing measurement based on image thresholding and locating each seed's center. Analysis of the experimental results shows that the measurement error is less than ±1 mm.
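A minimal sketch of the threshold-and-centroid processing such a platform performs on each belt image, written with OpenCV. The threshold value, minimum blob area, file name and mm-per-pixel scale are illustrative assumptions rather than parameters of the actual system, and bright seeds on a dark belt are assumed.

```python
import cv2

def seed_centers(gray_image, thresh=128, min_area=20):
    """Threshold the image, find seed blobs, and return their centroids (x, y)."""
    _, binary = cv2.threshold(gray_image, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centers = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] >= min_area:                       # reject small specks
            centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centers

def seed_spacings(centers, mm_per_pixel):
    """Spacing between consecutive seeds along the belt direction (sorted by x)."""
    xs = sorted(x for x, _ in centers)
    return [(b - a) * mm_per_pixel for a, b in zip(xs, xs[1:])]

# Usage with a hypothetical frame from the belt camera:
# frame = cv2.imread("belt_frame.png", cv2.IMREAD_GRAYSCALE)
# spacings = seed_spacings(seed_centers(frame), mm_per_pixel=0.25)
```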
NASA Astrophysics Data System (ADS)
Wang, Y.; Hu, X.; Yang, X.; Xie, G.
2018-04-01
The image quality of a surveying camera affects the stereoscopic positioning accuracy of a remote sensing satellite. The key factors closely related to image quality are the Modulation Transfer Function (MTF), Signal-to-Noise Ratio (SNR) and Quantization Bits (QB). Using "Mapping Satellite-1" imagery as the test case, we study the effect of image quality on positioning precision without ground control and quantify the relationship between the two. Finally, the validity of the experimental results is verified by simulating on-orbit degradation of the three factors and counting the number of matching points, the mismatch rate, and the matching residuals of the degraded data. The reasons for the variation in positioning precision are analyzed.
Pretorius, Etheresia
2017-01-01
The latest statistics from the 2016 heart disease and stroke statistics update show that cardiovascular disease is the leading global cause of death, currently accounting for more than 17.3 million deaths per year. Type II diabetes is also on the rise, with out-of-control numbers. To address these pandemics, we need to treat patients using an individualized patient care approach while simultaneously gathering data to support the precision medicine initiative. Last year the NIH announced the precision medicine initiative to generate novel knowledge regarding diseases, with a near-term focus on cancers, followed by a longer-term aim applicable to a whole range of health applications and diseases. The focus of this paper is to suggest a combined effort between the latest precision medicine initiative, researchers and clinicians, whereby novel techniques could immediately make a difference in patient care and, in the long term, add to the knowledge used in precision medicine. We discuss the intricate relationship between individualized patient care and precision medicine and the current thoughts regarding which data are actually suitable for precision medicine data gathering. The uses of viscoelastic techniques in precision medicine are discussed, and we explore how these techniques might give novel perspectives on the success of treatment regimes for cardiovascular patients. Thrombo-embolic stroke, rheumatoid arthritis and type II diabetes are used as examples of diseases where precision medicine and a patient-orientated approach could be implemented. In conclusion, it is suggested that only if all role players work together, embracing a new way of thinking about treating and managing cardiovascular disease and diabetes, will we be able to adequately address these out-of-control conditions. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
NASA Astrophysics Data System (ADS)
Xia, Yi
Fractures and the associated bone fragility induced by osteoporosis and osteopenia are a widespread health threat in today's society. Early detection of fracture risk associated with bone quantity and quality is important for both the prevention and treatment of osteoporosis and its complications. Quantitative ultrasound (QUS) is an engineering technology for monitoring the bone quantity and quality of humans on Earth and of astronauts subjected to long-duration microgravity. Factors currently limiting the acceptance of QUS technology include precision, accuracy, reliance on a single index, and standardization. The objective of this study was to improve the accuracy and precision of an image-based QUS technique for non-invasive evaluation of trabecular bone quantity and quality by developing new techniques and understanding ultrasound/tissue interaction. Several new techniques were developed in this dissertation study, including automatic identification of an irregular region of interest (iROI) in bone, surface topology mapping (STM), and mean scattering spacing (MSS) estimation for evaluating trabecular bone structure. In vitro results showed that (1) the inter- and intra-observer errors in QUS measurement were reduced two- to five-fold by iROI compared with previous results; (2) the accuracy of a QUS parameter, the ultrasound velocity (UV) through bone, was improved by 16% by STM; and (3) the average trabecular spacing can be estimated by the MSS technique (r² = 0.72, p < 0.01). The measurement errors of BUA and UV introduced in vivo by soft tissue and cortical shells can be quantified by the developed foot model and a simplified cortical-trabecular-cortical sandwich model, which were verified by the experimental results. The mechanisms of the errors induced by the cortical and soft tissues were revealed by the models. With the newly developed techniques and understanding of sound-tissue interaction, an in vivo clinical trial and a bed rest study were performed to evaluate the performance of QUS in clinical applications. It was demonstrated that QUS has performance similar to the current gold-standard method, DXA, for in vivo bone density measurement, while additional information on bone quality is obtained by QUS for predicting fracture risk. The developed QUS imaging technique can be used to assess bone quantity and quality with improved accuracy and precision.
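Mean scattering spacing is often estimated from the periodicity of the backscattered RF spectrum, for example via a cepstral peak. The sketch below shows that generic approach and is not necessarily the estimator developed in the dissertation; the sound speed, sampling rate and search window are assumptions.

```python
import numpy as np

def mean_scatterer_spacing(rf_signal, fs, c=1540.0, d_min=0.3e-3, d_max=2.0e-3):
    """Estimate mean scatterer spacing d = c * tau / 2 from the cepstrum peak.

    rf_signal : backscattered RF A-line (1-D array)
    fs        : sampling frequency [Hz]
    c         : assumed speed of sound in tissue [m/s]
    d_min/max : physically plausible spacing search window [m]
    """
    spectrum = np.abs(np.fft.rfft(rf_signal)) ** 2
    cepstrum = np.abs(np.fft.irfft(np.log(spectrum + 1e-12)))
    quefrency = np.arange(cepstrum.size) / fs            # seconds
    d = c * quefrency / 2.0                               # candidate spacings
    mask = (d >= d_min) & (d <= d_max)
    return d[mask][np.argmax(cepstrum[mask])]

# Synthetic check: an echo train with 0.8 mm spacing should be recovered
fs, c = 50e6, 1540.0
t = np.arange(4096) / fs
rf = np.zeros_like(t)
for k in range(1, 8):
    rf += np.exp(-((t - k * 2 * 0.8e-3 / c) * fs) ** 2 / 50.0)
print(f"estimated spacing: {mean_scatterer_spacing(rf, fs) * 1e3:.2f} mm")
```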
[Navigated drilling for femoral head necrosis. Experimental and clinical results].
Beckmann, J; Tingart, M; Perlick, L; Lüring, C; Grifka, J; Anders, S
2007-05-01
In the early stages of osteonecrosis of the femoral head, core decompression by exact drilling into the ischemic areas can reduce pain and achieve reperfusion. Using computer-aided surgery, the precision of the drilling can be improved while simultaneously lowering the radiation exposure time for both staff and patients. We describe the experimental and clinical results of drilling under the guidance of the fluoroscopically based VectorVision navigation system (BrainLAB, Munich, Germany). A total of 70 sawbones were prepared mimicking an osteonecrosis of the femoral head. In two experimental models, bone only and obesity, as well as in a clinical setting involving ten patients with osteonecrosis of the femoral head, the precision and the duration of radiation exposure were compared between the VectorVision system and conventional drilling. No target was missed. For both models, there was a statistically significant difference in the precision, the number of drilling corrections and the radiation exposure time. The average distance to the desired midpoint of the lesion in both models was 0.48 mm for navigated drilling and 1.06 mm for conventional drilling, the average number of drilling corrections was 0.175 versus 2.1, and the radiation exposure time was less than 1 s versus 3.6 s, respectively. In the clinical setting, the reduction in radiation exposure (below 1 s for navigation compared with 56 s for the conventional technique) as well as in drilling corrections (0.2 compared with 3.4) was also significant. Computer-guided drilling using the fluoroscopically based VectorVision navigation system shows clearly improved precision with an enormous simultaneous reduction in radiation exposure. It is therefore recommended for clinical routine.
Bragdon, Charles R; Malchau, Henrik; Yuan, Xunhua; Perinchief, Rebecca; Kärrholm, Johan; Börlin, Niclas; Estok, Daniel M; Harris, William H
2002-07-01
The purpose of this study was to develop and test a phantom model based on actual total hip replacement (THR) components to simulate the true penetration of the femoral head resulting from polyethylene wear. This model was used to study both the accuracy and the precision of radiostereometric analysis (RSA) in measuring wear. We also used this model to evaluate the optimum tantalum bead configuration for this particular cup design when used in a clinical setting. A physical model of a total hip replacement (a phantom) was constructed which could simulate progressive, three-dimensional (3-D) penetration of the femoral head into the polyethylene component of a THR. Using a coordinate measuring machine (CMM), the positioning of the femoral head in the phantom was shown to be accurate to within 7 μm. The accuracy and precision of an RSA analysis system were determined from five repeat examinations of the phantom using various experimental set-ups. The accuracy of the radiostereometric analysis in the optimal experimental set-up studied was 33 μm for the medial direction, 22 μm for the superior direction, 86 μm for the posterior direction and 55 μm for the resultant 3-D vector length. The corresponding precision at the 95% confidence interval, from repositioning the phantom five times, measured 8.4 μm for the medial direction, 5.5 μm for the superior direction, 16.0 μm for the posterior direction, and 13.5 μm for the resultant 3-D vector length. This in vitro model is proposed as a useful tool for developing a standard for the evaluation of radiostereometric and other radiographic methods used to measure in vivo wear.
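One common way to reduce such repeat phantom examinations is to report accuracy as the mean deviation from the CMM reference and precision as the 95% confidence half-interval of the repeated measurements. The sketch below uses that convention with made-up numbers; it does not reproduce the study's data or its exact statistical definitions.

```python
import numpy as np
from scipy import stats

def accuracy_and_precision(measured_um, true_um):
    """Accuracy = mean absolute deviation from the reference value;
    precision = t-based 95% half-interval of the repeated measurements."""
    measured = np.asarray(measured_um, dtype=float)
    errors = measured - true_um
    accuracy = np.mean(np.abs(errors))
    t_crit = stats.t.ppf(0.975, df=measured.size - 1)
    precision = t_crit * np.std(measured, ddof=1)
    return accuracy, precision

# Five hypothetical repeat examinations of a simulated 100 um head penetration
repeats_um = [92.0, 104.0, 98.0, 101.0, 95.0]
acc, prec = accuracy_and_precision(repeats_um, true_um=100.0)
print(f"accuracy: {acc:.1f} um, precision (95% CI): {prec:.1f} um")
```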
Instrumentation enabling study of plant physiological response to elevated night temperature
Mohammed, Abdul R; Tarpley, Lee
2009-01-01
Background: Global climate warming can affect the functioning of crops and plants in the natural environment. In order to study the effects of global warming, a method for applying a controlled heating treatment to plant canopies in the open field or in the greenhouse is needed that can accept either a square-wave application of elevated temperature or a complex prescribed diurnal or seasonal temperature regime. The current options are limited in their accuracy, precision, reliability, mobility, cost or scalability. Results: The described system uses overhead infrared heaters that are relatively inexpensive and are accurate and precise in rapidly controlling the temperature. Remote computer-based data acquisition and control via the internet provide the ability to use complex temperature regimes and real-time monitoring. Because it is easily moved, the heating system can be randomly allocated within the experimental setup in the open field or in the greenhouse. The apparatus has been successfully applied to study the response of rice to high night temperatures. Air temperatures were maintained within the set points ± 0.5°C. The combination of air-situated thermocouples, autotuned proportional-integral-derivative temperature controllers and phase-angle-fired silicon-controlled-rectifier power controllers provides very fast proportional heating action (i.e., a 9 ms time base), which avoids prolonged or intense heating of the plant material. Conclusion: The described infrared heating system meets the utilitarian requirements of a heating system for plant physiology studies in that the elevated temperature can be accurately, precisely, and reliably controlled with minimal perturbation of other environmental factors. PMID:19519906
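A minimal sketch of the proportional-integral-derivative action such a temperature controller applies on each time base. The gains, set point and the toy first-order "canopy" response are illustrative assumptions, not the autotuned values of the described system.

```python
class PID:
    """Textbook PID controller producing a 0-1 duty command for the heater."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * derivative
        return min(max(out, 0.0), 1.0)     # clamp to a valid heater duty range

# Toy first-order "canopy air" responding to heater duty, 9 ms control step
pid = PID(kp=0.8, ki=0.05, kd=0.01, dt=0.009)
temp, ambient = 22.0, 22.0
for step in range(5000):
    duty = pid.update(setpoint=27.0, measured=temp)
    temp += (8.0 * duty - (temp - ambient)) * 0.002   # crude heating/cooling model
print(f"air temperature after {5000 * 0.009:.0f} simulated s: {temp:.2f} C (set point 27 C)")
```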
NASA Astrophysics Data System (ADS)
Zhou, Wei; Hou, Yun; Gao, Yan Qing; Zhang, Leibo; Huang, Zhi Ming
2011-08-01
As a typical thermally sensitive material, Mn1.56Co0.96Ni0.48O4 (MCN) has found wide application in uncooled bolometers. In this paper, we report that a large increase in the electrical conductivity of MCN is obtained at moderate electric-field strengths (E ~ 10^3 V/cm) applied at room temperature (about 300 K). A great enhancement in responsivity is observed when operating with a proper electric bias field, which corresponds to a threshold voltage V_Th. MCN bulk materials are prepared by the sintering method. A micro MCN detector is fabricated by scribing the bulk material into pieces of 200×100×10 μm. The detector is affixed to an Al2O3 substrate with electrically insulating epoxy glue, and the substrate is mounted onto a Cu heat sink. The surrounding temperature is controlled precisely by a temperature controller with a precision of 1 mK. Voltage-current characteristics at 270-330 K are carefully examined. Different sweep speeds of the bias voltage are applied in different orders so as to find a proper scanning rate at which the electrical measurement proceeds in a state of quasi-thermal equilibrium. From the quasi-thermal equilibrium and the time-dependent nominal D.C. power, the temperature increase during the measurement is estimated. The conduction mechanism can be well explained by small-polaron theory. Empirical equations are used to describe the thermal dynamic process in the pulsed mode, and the process is also simulated via simple numerical calculations. The experimental results and simulation work should be of reference value for future studies of uncooled microbolometers made from transition metal oxides.
Modified pressure loss model for T-junctions of engine exhaust manifold
NASA Astrophysics Data System (ADS)
Wang, Wenhui; Lu, Xiaolu; Cui, Yi; Deng, Kangyao
2014-11-01
The T-junction model of engine exhaust manifolds significantly influences the simulation precision of the pressure wave and mass flow rate in the intake and exhaust manifolds of diesel engines. Current studies have focused on constant-pressure models, constant-static-pressure models and pressure loss models. However, low model precision is a common disadvantage when simulating engine exhaust manifolds, particularly for turbocharged systems. To study the performance of junction flow, a cold wind-tunnel experiment with high velocities at the junction of a diesel exhaust manifold was performed, and the variation of the pressure loss in the T-junction under different flow conditions was obtained. Although the trend of the total pressure loss coefficient calculated with the original pressure loss model matches that of the experimental results, large differences exist between the calculated and experimental values, and the deviation becomes larger as the flow velocity increases. By improving the Vazsonyi formula to account for the flow velocity and introducing a distribution function, a modified pressure loss model is established that is suitable for a higher velocity range. The new model is then adopted to solve one-dimensional, unsteady flow in a D6114 turbocharged diesel engine. The calculated values are compared with the measured data, and the result shows that the simulation accuracy of the pressure wave before the turbine is improved by 4.3% with the modified pressure loss model because gas compressibility is considered when the flow velocities are high. The research results provide valuable information for further junction-flow research, particularly the correction of the boundary condition in one-dimensional simulation models.
Hefnawy, Mohamed M; Sultan, Maha A; Al-Johar, Haya I; Kassem, Mohamed G; Aboul-Enein, Hassan Y
2012-01-01
Multiple-response simultaneous optimization employing Derringer's desirability function was used for the development of a capillary electrophoresis method for the simultaneous determination of rosiglitazone (RSG) and glimepiride (GLM) in plasma and formulations. Twenty experiments, taking the two resolutions, the analysis time, and the capillary current as the responses and three important factors--buffer molarity, voltage and column temperature--were used to build mathematical models. The experimental responses were fitted to a second-order polynomial, and the six responses were simultaneously optimized to predict the optimum conditions for the effective separation of the studied compounds. The separation was carried out by capillary zone electrophoresis (CZE) with a silica capillary column and a diode array detector at 210 nm. The optimum assay conditions were 52 mmol l⁻¹ phosphate buffer, pH 7, and a voltage of 22 kV at 29 °C. The method showed good agreement between the experimental data and the predicted values throughout the studied parameter space. The assay limit of detection was 0.02 µg ml⁻¹, and the effective working range at a relative standard deviation (RSD) of ≤ 5% was 0.05-16 µg ml⁻¹ (r = 0.999) for both drugs. Analytical recoveries of the studied drugs from spiked plasma were 97.2-101.9 ± 0.31-3.0%. The precision of the assay was satisfactory; RSDs were 1.07 and 1.14% for intra- and inter-assay precision, respectively. The proposed method is of great value for the routine analysis of RSG and GLM in therapeutic monitoring and pharmacokinetic studies. Copyright © 2011 John Wiley & Sons, Ltd.
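Derringer's approach transforms each response onto a 0-1 desirability scale and maximizes the geometric mean of the individual desirabilities. The sketch below shows that transformation for "larger-is-better" (resolution) and "smaller-is-better" (analysis time) responses; the bounds, weights and predicted values are illustrative, not those of the study.

```python
import numpy as np

def d_larger_is_better(y, low, high, weight=1.0):
    """Desirability for a response that should be maximized (e.g. resolution)."""
    d = (y - low) / (high - low)
    return np.clip(d, 0.0, 1.0) ** weight

def d_smaller_is_better(y, low, high, weight=1.0):
    """Desirability for a response that should be minimized (e.g. analysis time)."""
    d = (high - y) / (high - low)
    return np.clip(d, 0.0, 1.0) ** weight

def overall_desirability(ds):
    """Geometric mean of the individual desirabilities."""
    ds = np.asarray(ds, dtype=float)
    return float(np.prod(ds) ** (1.0 / ds.size))

# Hypothetical predicted responses at one candidate factor setting
resolution_1, resolution_2, run_time_min = 2.1, 1.8, 6.5
D = overall_desirability([
    d_larger_is_better(resolution_1, low=1.5, high=3.0),
    d_larger_is_better(resolution_2, low=1.5, high=3.0),
    d_smaller_is_better(run_time_min, low=4.0, high=10.0),
])
print(f"overall desirability D = {D:.2f}")
```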
Wang, Zhengzhou; Hu, Bingliang; Yin, Qinye
2017-01-01
The schlieren method of measuring far-field focal spots offers many advantages at the Shenguang III laser facility, such as low cost and automatic laser-path collimation. However, current methods of far-field focal-spot measurement often suffer from low precision and efficiency because the final focal spot is merged manually, reducing the accuracy of reconstruction. In this paper, we introduce an improved schlieren method to construct a high-dynamic-range image of far-field focal spots and improve the reconstruction accuracy and efficiency. First, a detection method based on weak-beam sampling and magnification imaging was designed; images of the main and side lobes of the focused laser irradiance in the far field were obtained using two scientific CCD cameras. Second, using a self-correlation template-matching algorithm, a circle the same size as the schlieren ball was cut from the main-lobe image, and the relative position of the main-lobe image was shifted within a 100×100 pixel region. The position with the largest correlation coefficient between the side-lobe image and the circle-cut main-lobe image was identified as the best matching point. Finally, the least-squares method was used to fit the center of the side-lobe schlieren ball, with an error of less than 1 pixel. The experimental results show that this method enables accurate, high-dynamic-range measurement of a far-field focal spot and automatic image reconstruction. Because the best matching point is obtained through image processing rather than traditional reconstruction based on manual splicing, the method improves the efficiency of focal-spot reconstruction and offers better experimental precision. PMID:28207758
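A minimal sketch of the correlation-based matching step, using OpenCV's normalized cross-correlation over a 100×100 pixel search region as in the abstract. The circular-mask detail is omitted and the image names are placeholders, so this is a simplified stand-in rather than the authors' exact algorithm.

```python
import cv2
import numpy as np

def best_match_offset(main_lobe_img, side_lobe_img, search=100):
    """Slide the side-lobe image over a (search x search) region of the
    main-lobe image and return the offset with the largest normalized
    cross-correlation coefficient. Assumes the main-lobe image is at least
    `search` pixels larger than the side-lobe image in both dimensions."""
    h, w = side_lobe_img.shape
    region = main_lobe_img[:h + search, :w + search]
    scores = cv2.matchTemplate(region.astype(np.float32),
                               side_lobe_img.astype(np.float32),
                               cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(scores)
    return max_loc, max_val       # (dx, dy) offset, correlation coefficient

# Usage with placeholder images:
# main = cv2.imread("main_lobe.png", cv2.IMREAD_GRAYSCALE)
# side = cv2.imread("side_lobe.png", cv2.IMREAD_GRAYSCALE)
# offset, score = best_match_offset(main, side)
```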
Two-way sequential time synchronization: Preliminary results from the SIRIO-1 experiment
NASA Technical Reports Server (NTRS)
Detoma, E.; Leschiutta, S.
1981-01-01
A two-way time synchronization experiment performed in the spring of 1979 and 1980 via the Italian SIRIO-1 experimental telecommunications satellite is described. The experiment was designed and implemented to precisely monitor the satellite motion and to evaluate the possibility of performing a high precision, two-way time synchronization using a single communication channel, time-shared between the participating sites. Results show that the precision of the time synchronization is between 1 and 5 ns, while the evaluation and correction of the satellite motion effect was performed with an accuracy of a few nanoseconds or better over a time interval from 1 up to 20 seconds.
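In two-way time transfer each station timestamps its own transmission and the reception of the other station's signal; if the two path delays are equal, the propagation term cancels and the clock offset follows from the two measured intervals. A sketch of that standard relation with illustrative timestamps (not data from the SIRIO-1 experiment):

```python
def two_way_clock_offset(t1, t2, t3, t4):
    """Clock offset of station B relative to station A.

    t1: A transmits (read on A's clock)    t2: B receives (read on B's clock)
    t3: B transmits (read on B's clock)    t4: A receives (read on A's clock)
    Assumes the A->B and B->A path delays are equal, so the propagation term
    cancels: offset = ((t2 - t1) - (t4 - t3)) / 2.
    """
    return ((t2 - t1) - (t4 - t3)) / 2.0

# Illustrative numbers (seconds): ~0.25 s one-way satellite delay,
# with B's clock running 40 ns ahead of A's.
t1 = 0.0
t2 = 0.25 + 40e-9      # reception at B, read on B's (offset) clock
t3 = 1.0 + 40e-9       # transmission from B, read on B's clock
t4 = 1.25              # reception at A, read on A's clock
print(f"estimated offset of B vs A: {two_way_clock_offset(t1, t2, t3, t4) * 1e9:.1f} ns")
```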
Microfluidic proportional flow controller
Prentice-Mott, Harrison; Toner, Mehmet; Irimia, Daniel
2011-01-01
Precise flow control in microfluidic chips is important for many biochemical assays and experiments at the microscale. While several technologies for controlling fluid flow have been implemented either on- or off-chip, these can provide either high-speed or high-precision control, but seldom both at the same time. Here we describe a new on-chip, pneumatically actuated flow controller that allows fast and precise control of the flow rate through a microfluidic channel. Experimental results show that the new proportional flow controllers exhibited a response time of approximately 250 ms, while our numerical simulations suggest that faster actuation, down to approximately 50 ms, could be achieved with alternative actuation schemes. PMID:21874096
The 12C(α ,γ )16O reaction and its implications for stellar helium burning
NASA Astrophysics Data System (ADS)
deBoer, R. J.; Görres, J.; Wiescher, M.; Azuma, R. E.; Best, A.; Brune, C. R.; Fields, C. E.; Jones, S.; Pignatari, M.; Sayre, D.; Smith, K.; Timmes, F. X.; Uberseder, E.
2017-07-01
The creation of carbon and oxygen in our Universe is one of the forefront questions in nuclear astrophysics. The determination of the abundance of these elements is key to our understanding of both the formation of life on Earth and to the life cycles of stars. While nearly all models of different nucleosynthesis environments are affected by the production of carbon and oxygen, a key ingredient, the precise determination of the reaction rate of 12C(α ,γ )16O, has long remained elusive. This is owed to the reaction's inaccessibility, both experimentally and theoretically. Nuclear theory has struggled to calculate this reaction rate because the cross section is produced through different underlying nuclear mechanisms. Isospin selection rules suppress the E 1 component of the ground state cross section, creating a unique situation where the E 1 and E 2 contributions are of nearly equal amplitudes. Experimentally there have also been great challenges. Measurements have been pushed to the limits of state-of-the-art techniques, often developed for just these measurements. The data have been plagued by uncharacterized uncertainties, often the result of the novel measurement techniques that have made the different results challenging to reconcile. However, the situation has markedly improved in recent years, and the desired level of uncertainty ≈10 % may be in sight. In this review the current understanding of this critical reaction is summarized. The emphasis is placed primarily on the experimental work and interpretation of the reaction data, but discussions of the theory and astrophysics are also pursued. The main goal is to summarize and clarify the current understanding of the reaction and then point the way forward to an improved determination of the reaction rate.
El-Amrawy, Fatema
2015-01-01
Objectives The new wave of wireless technologies, fitness trackers, and body sensor devices can have great impact on healthcare systems and the quality of life. However, there have not been enough studies to prove the accuracy and precision of these trackers. The objective of this study was to evaluate the accuracy, precision, and overall performance of seventeen wearable devices currently available compared with direct observation of step counts and heart rate monitoring. Methods Each participant in this study used three accelerometers at a time, running the three corresponding applications of each tracker on an Android or iOS device simultaneously. Each participant was instructed to walk 200, 500, and 1,000 steps. Each set was repeated 40 times. Data was recorded after each trial, and the mean step count, standard deviation, accuracy, and precision were estimated for each tracker. Heart rate was measured by all trackers (if applicable), which support heart rate monitoring, and compared to a positive control, the Onyx Vantage 9590 professional clinical pulse oximeter. Results The accuracy of the tested products ranged between 79.8% and 99.1%, while the coefficient of variation (precision) ranged between 4% and 17.5%. MisFit Shine showed the highest accuracy and precision (along with Qualcomm Toq), while Samsung Gear 2 showed the lowest accuracy, and Jawbone UP showed the lowest precision. However, Xiaomi Mi band showed the best package compared to its price. Conclusions The accuracy and precision of the selected fitness trackers are reasonable and can indicate the average level of activity and thus average energy expenditure. PMID:26618039
El-Amrawy, Fatema; Nounou, Mohamed Ismail
2015-10-01
The new wave of wireless technologies, fitness trackers, and body sensor devices can have great impact on healthcare systems and the quality of life. However, there have not been enough studies to prove the accuracy and precision of these trackers. The objective of this study was to evaluate the accuracy, precision, and overall performance of seventeen wearable devices currently available compared with direct observation of step counts and heart rate monitoring. Each participant in this study used three accelerometers at a time, running the three corresponding applications of each tracker on an Android or iOS device simultaneously. Each participant was instructed to walk 200, 500, and 1,000 steps. Each set was repeated 40 times. Data was recorded after each trial, and the mean step count, standard deviation, accuracy, and precision were estimated for each tracker. Heart rate was measured by all trackers (if applicable), which support heart rate monitoring, and compared to a positive control, the Onyx Vantage 9590 professional clinical pulse oximeter. The accuracy of the tested products ranged between 79.8% and 99.1%, while the coefficient of variation (precision) ranged between 4% and 17.5%. MisFit Shine showed the highest accuracy and precision (along with Qualcomm Toq), while Samsung Gear 2 showed the lowest accuracy, and Jawbone UP showed the lowest precision. However, Xiaomi Mi band showed the best package compared to its price. The accuracy and precision of the selected fitness trackers are reasonable and can indicate the average level of activity and thus average energy expenditure.
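The accuracy and coefficient-of-variation figures above follow from simple statistics over the repeated trials. The sketch below shows one such computation; the step counts are invented, and the exact accuracy definition used in the study may differ.

```python
import numpy as np

def accuracy_and_cv(counts, true_steps):
    """Accuracy (%) of the mean count relative to the observed step count and
    coefficient of variation (%) across repeated trials for one tracker."""
    counts = np.asarray(counts, dtype=float)
    mean = counts.mean()
    accuracy = 100.0 * (1.0 - abs(mean - true_steps) / true_steps)
    cv = 100.0 * counts.std(ddof=1) / mean
    return accuracy, cv

# Hypothetical 500-step trials for one device (the study used 40 repeats; 8 shown)
trials = [492, 509, 498, 515, 487, 503, 496, 511]
acc, cv = accuracy_and_cv(trials, true_steps=500)
print(f"accuracy: {acc:.1f} %, CV (precision): {cv:.1f} %")
```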
Li, Tingting; Wang, Wei; Zhao, Haijian; He, Falin; Zhong, Kun; Yuan, Shuai; Wang, Zhiguo
2017-09-07
This study aimed to investigate the status of internal quality control (IQC) for cardiac biomarkers from 2011 to 2016 in order to obtain an overall picture of the precision level of measurements in China and to set appropriate precision specifications. Internal quality control data for cardiac biomarkers, including creatine kinase MB (CK-MB) (μg/L), CK-MB (U/L), myoglobin (Mb), cardiac troponin I (cTnI), cardiac troponin T (cTnT), and homocysteine (HCY), were collected by a web-based external quality assessment (EQA) system. The percentages of laboratories meeting five precision quality specifications for the current coefficients of variation (CVs) were calculated. Appropriate precision specifications were then chosen for these six analytes. Finally, the CVs and IQC practice were further analyzed with different grouping methods. The current CVs remained nearly constant over the 6 years. cTnT had the highest pass rates every year against all five specifications, whereas HCY had the lowest pass rates. Overall, most analytes had satisfactory performance (pass rates >80%), except for HCY, if one-third TEa or the minimum specification was employed. When the optimal specification was applied, the performance of most analytes was frustrating (pass rates <60%), except for cTnT. The appropriate precision specifications for CK-MB (μg/L), CK-MB (U/L), Mb, cTnI, cTnT and HCY were set as current CVs of less than 9.20%, 9.90%, 7.50%, 10.54%, 7.63%, and 6.67%, respectively. The data on IQC practices indicated wide variation and substantial progress. The precision performance of cTnT was already satisfactory, while that of the other five analytes, especially HCY, was still frustrating; thus, ongoing investigation and continuous improvement of IQC are still needed. © 2017 Wiley Periodicals, Inc.
Critical Care and Personalized or Precision Medicine: Who needs whom?
Sugeir, Shihab; Naylor, Stephen
2018-02-01
The current paradigm of modern healthcare is a reactive response to patient symptoms, subsequent diagnosis and corresponding treatment of the specific disease(s). This approach is predicated on methodologies first espoused by the Cnidean School of Medicine approximately 2,500 years ago. More recently, escalating healthcare costs and relatively poor disease treatment outcomes have fomented a rethink of how we carry out medical practice. This has led to the emergence of "P-Medicine" in the form of Personalized and Precision Medicine. The terms are used interchangeably, but in fact there are significant differences in the way they are implemented. The former relies on an "N-of-1" model, whereas the latter uses a "1-in-N" model. Personalized Medicine is still in a fledgling and evolutionary phase, and there has been much debate over its current status and future prospects. A confounding factor has been the sudden development of Precision Medicine, which has currently captured the imagination of policymakers responsible for modern healthcare systems. There is some confusion over the terms Personalized versus Precision Medicine. Here we attempt to define the key differences and working definitions of each P-Medicine approach, as well as a taxonomic relationship tree. Finally, we discuss the impact of Personalized and Precision Medicine on the practice of Critical Care Medicine (CCM). Practitioners of CCM have been participating in Personalized Medicine unknowingly, as they take the protocols of sepsis, mechanical ventilation, and daily awakening trials and apply them to each individual patient. However, the immediate next step for CCM should be the active development of Precision Medicine. This developmental process should break down the silos of modern medicine and create a multidisciplinary approach between clinicians and basic/translational scientists. Copyright © 2017 Elsevier Inc. All rights reserved.
Sources of uncertainty in estimating stream solute export from headwater catchments at three sites
Ruth D. Yanai; Naoko Tokuchi; John L. Campbell; Mark B. Green; Eiji Matsuzaki; Stephanie N. Laseter; Cindi L. Brown; Amey S. Bailey; Pilar Lyons; Carrie R. Levine; Donald C. Buso; Gene E. Likens; Jennifer D. Knoepp; Keitaro Fukushima
2015-01-01
Uncertainty in the estimation of hydrologic export of solutes has never been fully evaluated at the scale of a small-watershed ecosystem. We used data from the Gomadansan Experimental Forest, Japan, Hubbard Brook Experimental Forest, USA, and Coweeta Hydrologic Laboratory, USA, to evaluate many sources of uncertainty, including the precision and accuracy of...
Experimental Evaluation of the Drag Coefficient of Water Rockets by a Simple Free-Fall Test
ERIC Educational Resources Information Center
Barrio-Perotti, R.; Blanco-Marigorta, E.; Arguelles-Diaz, K.; Fernandez-Oro, J.
2009-01-01
The flight trajectory of a water rocket can be reasonably calculated if the magnitude of the drag coefficient is known. The experimental determination of this coefficient with enough precision is usually quite difficult, but in this paper we propose a simple free-fall experiment for undergraduate students to reasonably estimate the drag…
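One simple reduction of such a free-fall test is to measure the terminal velocity and invert the quadratic drag law, Cd = 2mg/(ρAv_t²). The sketch below does exactly that with illustrative numbers; it is only one possible analysis, not necessarily the procedure proposed in the article.

```python
import math

def drag_coefficient(mass_kg, diameter_m, v_terminal_ms, rho_air=1.225, g=9.81):
    """Invert m*g = 0.5 * rho * Cd * A * v_t**2 for Cd at terminal velocity."""
    area = math.pi * (diameter_m / 2.0) ** 2   # frontal area of the rocket body
    return 2.0 * mass_kg * g / (rho_air * area * v_terminal_ms ** 2)

# Illustrative 2-litre water-rocket shell dropped nose-down
cd = drag_coefficient(mass_kg=0.12, diameter_m=0.09, v_terminal_ms=27.0)
print(f"estimated Cd = {cd:.2f}")
```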
An application of eddy current damping effect on single point diamond turning of titanium alloys
NASA Astrophysics Data System (ADS)
Yip, W. S.; To, S.
2017-11-01
Titanium alloys such as Ti6Al4V (TC4) are widely used in many industries. They have superior material properties, including an excellent strength-to-weight ratio and corrosion resistance. However, they are regarded as difficult-to-cut materials; serious tool wear, a high level of cutting vibration and low surface integrity are always involved in machining processes, especially in ultra-precision machining (UPM). In this paper, a novel hybrid machining technology using the eddy current damping effect is introduced for the first time in UPM to suppress machining vibration and improve the machining performance of titanium alloys. A magnetic field was superimposed on the samples during single point diamond turning (SPDT) by placing the samples between two permanent magnets. When the titanium alloys were rotated within the magnetic field during SPDT, an eddy current was generated inside the titanium alloys by the stationary magnetic field. The eddy current generated its own magnetic field opposing the external magnetic field, producing a repulsive force that compensates for the machining vibration induced by the turning process. The experimental results showed a remarkable improvement in cutting-force variation, a significant reduction in adhesive tool wear and extremely long chip formation in comparison with normal SPDT of titanium alloys, suggesting enhanced machinability of titanium alloys under the eddy current damping effect. This is the first application of the eddy current damping effect in the area of UPM, and it delivers outstanding machining performance.
NASA Astrophysics Data System (ADS)
Chianese, Marco; Di Bari, Pasquale
2018-05-01
We confront recent experimental results on neutrino mixing parameters with the requirements from strong thermal SO(10)-inspired leptogenesis, where the asymmetry is produced from next-to-lightest right-handed neutrinos N2 independently of the initial conditions. There is a nice agreement with the latest global analyses supporting sin δ < 0 and normal ordering at ~95% C.L. On the other hand, the more stringent experimental lower bound on the atmospheric mixing angle starts to corner strong thermal SO(10)-inspired leptogenesis. Prompted and encouraged by this rapid experimental advance, we obtain a precise determination of the allowed region in the plane of δ versus θ23. We confirm that for the benchmark case α2 ≡ m_D2/m_charm = 5, where m_D2 is the intermediate neutrino Dirac mass setting the N2 mass, and an initial pre-existing asymmetry N_(B-L)^(p,i) = 10^-3, the bulk of solutions lies in the first octant. Though most of the solutions are found outside the 95% C.L. experimental region, there is still a large allowed fraction that does not require a too finely tuned choice of the Majorana phases, so that the allowed range of the neutrinoless double beta decay effective neutrino mass is still m_ee ≈ [10, 30] meV. We also show how the constraints depend on N_(B-L)^(p,i) and α2. In particular, we show that the current best fit, (θ23, δ) ≈ (47°, -130°), can be reproduced for N_(B-L)^(p,i) = 10^-3 and α2 = 6. Such large values of α2 have been recently obtained in a few realistic fits within SO(10)-inspired models. Finally, we also find that current neutrino data rule out N_(B-L)^(p,i) ≳ 0.1 for α2 ≲ 4.7.
Precision engineering: an evolutionary perspective.
Evans, Chris J
2012-08-28
Precision engineering is a relatively new name for a technology with roots going back over a thousand years; those roots span astronomy, metrology, fundamental standards, manufacturing and money-making (literally). Throughout that history, precision engineers have created links across disparate disciplines to generate innovative responses to society's needs and wants. This review combines historical and technological perspectives to illuminate precision engineering's current character and directions. It first provides a working definition of precision engineering and then reviews the subject's roots. Examples are given showing the contributions of the technology to society, while simultaneously showing the creative tension between the technological convergence that spurs new directions and the vertical disintegration that optimizes manufacturing economics.
Stach, Thomas; Anselmi, Chiara
2015-12-23
Understanding the evolution of divergent developmental trajectories requires detailed comparisons of embryologies at appropriate levels. Cell lineages, the accurate visualization of cleavage patterns, tissue fate restrictions, and morphogenetic movements that occur during the development of individual embryos are currently available for few disparate animal taxa, encumbering evolutionarily meaningful comparisons. Tunicates, considered to be close relatives of vertebrates, are marine invertebrates whose fossil record dates back to 525 million years ago. Life-history strategies across this subphylum are radically different, and include biphasic ascidians with free swimming larvae and a sessile adult stage, and the holoplanktonic larvaceans. Despite considerable progress, notably on the molecular level, the exact extent of evolutionary conservation and innovation during embryology remain obscure. Here, using the innovative technique of bifocal 4D-microscopy, we demonstrate exactly which characteristics in the cell lineages of the ascidian Phallusia mammillata and the larvacean Oikopleura dioica were conserved and which were altered during evolution. Our accurate cell lineage trees in combination with detailed three-dimensional representations clearly identify conserved correspondence in relative cell position, cell identity, and fate restriction in several lines from all prospective larval tissues. At the same time, we precisely pinpoint differences observable at all levels of development. These differences comprise fate restrictions, tissue types, complex morphogenetic movement patterns, numerous cases of heterochronous acceleration in the larvacean embryo, and differences in bilateral symmetry. Our results demonstrate in extraordinary detail the multitude of developmental levels amenable to evolutionary innovation, including subtle changes in the timing of fate restrictions as well as dramatic alterations in complex morphogenetic movements. We anticipate that the precise spatial and temporal cell lineage data will moreover serve as a high-precision guide to devise experimental investigations of other levels, such as molecular interactions between cells or changes in gene expression underlying the documented structural evolutionary changes. Finally, the quantitative amount of digital high-precision morphological data will enable and necessitate software-based similarity assessments as the basis of homology hypotheses.
Precision medicine for psychopharmacology: a general introduction.
Shin, Cheolmin; Han, Changsu; Pae, Chi-Un; Patkar, Ashwin A
2016-07-01
Precision medicine is an emerging medical model that can provide accurate diagnoses and tailored therapeutic strategies for patients based on data pertaining to genes, microbiomes, environment, family history and lifestyle. Here, we provide basic information about precision medicine and newly introduced concepts, such as the precision medicine ecosystem and big data processing, and omics technologies including pharmacogenomics, pharmacometabolomics, pharmacoproteomics, pharmacoepigenomics, connectomics and exposomics. The authors review the current state of omics in psychiatry and the future direction of psychopharmacology as it moves towards precision medicine. Expert commentary: Advances in precision medicine have been facilitated by achievements in multiple fields, including large-scale biological databases, powerful methods for characterizing patients (such as genomics, proteomics, metabolomics, diverse cellular assays, and even social networks and mobile health technologies), and computer-based tools for analyzing large amounts of data.
Spectroscopic Factors from the Single Neutron Pickup ^64Zn(d,t)
NASA Astrophysics Data System (ADS)
Leach, Kyle; Garrett, P. E.; Demand, G. A.; Finlay, P.; Green, K. L.; Phillips, A. A.; Rand, E. T.; Sumithrarachchi, C. S.; Svensson, C. E.; Triambak, S.; Wong, J.; Towner, I. S.; Ball, G. C.; Faestermann, T.; Krücken, R.; Hertenberger, R.; Wirth, H.-F.
2010-11-01
A great deal of attention has recently been paid to high-precision superallowed β-decay Ft values. With the availability of extremely high-precision (<0.1%) experimental data, the precision of the individual Ft values is now dominated by the ~1% theoretical corrections. This limitation is most evident in heavier superallowed nuclei (e.g. ^62Ga), where the isospin-symmetry-breaking (ISB) correction calculations become more difficult due to the truncated model space. Experimental spectroscopic factors for these nuclei are important for identifying the relevant orbitals that should be included in the model space of the calculations. Motivated by this need, the single-nucleon transfer reaction ^64Zn(d,t)^63Zn was conducted at the Maier-Leibnitz-Laboratory (MLL) of TUM/LMU in Munich, Germany, using a 22 MeV polarized deuteron beam from the tandem Van de Graaff accelerator and the TUM/LMU Q3D magnetic spectrograph, with angular distributions from 10° to 60°. Results from this experiment will be presented, and implications for calculations of ISB corrections in the superallowed β decay of ^62Ga will be discussed.
Ion traps for precision experiments at rare-isotope-beam facilities
NASA Astrophysics Data System (ADS)
Kwiatkowski, Anna
2016-09-01
Ion traps first entered experimental nuclear physics when the ISOLTRAP team demonstrated Penning trap mass spectrometry of radionuclides. From then on, the demand for ion traps has grown at radioactive-ion-beam (RIB) facilities since beams can be tailored for the desired experiment. Ion traps have been deployed for beam preparation, from bunching (thereby allowing time coincidences) to beam purification. Isomerically pure beams needed for nuclear-structure investigations can be prepared for trap-assisted or in-trap decay spectroscopy. The latter permits studies of highly charged ions for stellar evolution, which would be impossible with traditional experimental nuclear-physics methods. Moreover, the textbook-like conditions and advanced ion manipulation - even of a single ion - permit high-precision experiments. Consequently, the most accurate and precise mass measurements are now performed in Penning traps. After a brief introduction to ion trapping, I will focus on examples which showcase the versatility and utility of the technique at RIB facilities. I will demonstrate how this atomic-physics technique has been integrated into nuclear science, accelerator physics, and chemistry. DOE.
Feedforward hysteresis compensation in trajectory control of piezoelectrically-driven nanostagers
NASA Astrophysics Data System (ADS)
Bashash, Saeid; Jalili, Nader
2006-03-01
Complex structural nonlinearities of piezoelectric materials drastically degrade their performance in a variety of micro- and nano-positioning applications. From the precision positioning and control perspective, the multi-path, time-history-dependent hysteresis phenomenon is the nonlinearity of greatest concern in piezoelectric actuators. To uncover the underlying physics of this phenomenon and to develop an efficient compensation strategy, the intrinsic properties of hysteresis with the effects of non-local memories are discussed. Through a set of experiments on a piezoelectrically-driven nanostager with a high-resolution capacitive position sensor, it is shown that precise prediction of the hysteresis path requires certain memory units to store the previous hysteresis trajectory data. Based on the experimental observations, a constitutive memory-based mathematical modeling framework is developed and trained for the precise prediction of the hysteresis path for arbitrarily assigned input profiles. Using the inverse hysteresis model, a feedforward control strategy is then developed and implemented on the nanostager to compensate for the system's ever-present nonlinearity. Experimental results demonstrate that the controller remarkably eliminates the nonlinear effect if sufficient memory units are chosen for the inverse model.
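Memory-based hysteresis models of the kind referred to above are often built as weighted superpositions of play operators (a Prandtl-Ishlinskii construction). The sketch below shows that generic construction, not the authors' specific framework; the thresholds, weights and voltage history are illustrative.

```python
import numpy as np

class PrandtlIshlinskii:
    """Superposition of play (backlash) operators; each operator keeps its own
    state, and those states are the memory that encodes the input history."""
    def __init__(self, thresholds, weights):
        self.r = np.asarray(thresholds, dtype=float)
        self.w = np.asarray(weights, dtype=float)
        self.state = np.zeros_like(self.r)

    def step(self, u):
        # play operator: the output follows the input only once the input
        # leaves the dead band of half-width r around the stored state
        self.state = np.minimum(u + self.r, np.maximum(u - self.r, self.state))
        return float(self.w @ self.state)

# Hypothetical stager displacement [um] versus drive voltage [V]
params = dict(thresholds=[0.0, 2.0, 4.0, 6.0], weights=[0.8, 0.15, 0.1, 0.05])
model = PrandtlIshlinskii(**params)
history = np.concatenate([np.linspace(0, 10, 50),   # ramp up
                          np.linspace(10, 3, 35),   # partial unloading
                          np.linspace(3, 6, 15)])   # reload to 6 V
y_hysteretic = [model.step(v) for v in history][-1]

fresh = PrandtlIshlinskii(**params)
y_monotone = [fresh.step(v) for v in np.linspace(0, 6, 60)][-1]
print(f"displacement at 6 V: {y_hysteretic:.2f} um after the 0->10->3->6 V history, "
      f"{y_monotone:.2f} um after a monotone ramp (path dependence)")
```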
Screening for Learning and Memory Mutations: A New Approach.
Gallistel, C R; King, A P; Daniel, A M; Freestone, D; Papachristos, E B; Balci, F; Kheifets, A; Zhang, J; Su, X; Schiff, G; Kourtev, H
2010-01-30
We describe a fully automated, live-in 24/7 test environment, with experimental protocols that measure the accuracy and precision with which mice match the ratio of their expected visit durations to the ratio of the incomes obtained from two hoppers, the progress of instrumental and classical conditioning (trials-to-acquisition), the accuracy and precision of interval timing, the effect of relative probability on the choice of a timed departure target, and the accuracy and precision of memory for the times of day at which food is available. The system is compact; it obviates handling of the mice during testing; it requires negligible amounts of experimenter/technician time; and it delivers clear and extensive results from three protocols within a total of 7-9 days after the mice are placed in the test environment. Only a single 24-hour period is required for completion of the first protocol (the matching protocol), which is a strong test of temporal and spatial estimation and memory mechanisms. Thus, the system permits the extensive screening of many mice in a short period of time and in limited space. The software is publicly available.
Bianchi, F; Careri, M; Maffini, M; Mangia, A; Mucchino, C
2003-01-01
A sensitive method for the simultaneous determination of 7Li, 27Al and 56Fe by cold plasma ICP-MS was developed and validated. Experimental design was used to investigate the effects of torch position, torch power, lens 2 voltage, and coolant flow. Regression models and desirability functions were applied to find the experimental conditions providing the highest global sensitivity in a multi-elemental analysis. Validation was performed in terms of limits of detection (LOD), limits of quantitation (LOQ), linearity and precision. LODs were 1.4 and 159 ng L-1 for 7Li and 56Fe, respectively; the highest LOD found was that for 27Al (425 ng L-1). Linear ranges of 5 orders of magnitude for Li and 3 orders for Fe were statistically verified for each analyte. Precision was evaluated by testing two concentration levels, and good results in terms of both intra-day repeatability and intermediate precision were obtained. RSD values lower than 4.8% at the lowest concentration level were calculated for intra-day repeatability. Commercially available soft drinks and alcoholic beverages contained in different packaging materials (TetraPak, polyethylene terephthalate (PET), commercial cans and glass) were analysed, and all the analytes were detected and quantitated. Copyright 2002 John Wiley & Sons, Ltd.
Kovács, Béla; Kántor, Lajos Kristóf; Croitoru, Mircea Dumitru; Kelemen, Éva Katalin; Obreja, Mona; Nagy, Előd Ernő; Székely-Szentmiklósi, Blanka; Gyéresi, Árpád
2018-06-01
A reverse-phase HPLC (RP-HPLC) method was developed for strontium ranelate using a full factorial, screening experimental design. The analytical procedure was validated according to international guidelines for linearity, selectivity, sensitivity, accuracy and precision. A separate experimental design was used to demonstrate the robustness of the method. Strontium ranelate was eluted at 4.4 minutes and showed no interference with the excipients used in the formulation, at 321 nm. The method is linear in the range of 20-320 μg mL-1 (R2 = 0.99998). Recovery, tested in the range of 40-120 μg mL-1, was found to be 96.1-102.1 %. Intra-day and intermediate precision RSDs ranged from 1.0-1.4 and 1.2-1.4 %, resp. The limit of detection and limit of quantitation were 0.06 and 0.20 μg mL-1, resp. The proposed technique is fast, cost-effective, reliable and reproducible, and is proposed for the routine analysis of strontium ranelate.
NASA Technical Reports Server (NTRS)
Paillat, O.; Wasserburg, G. J.
1993-01-01
Experimental studies of isotope self-diffusion in silicate melts often have quite large uncertainties when comparing one study to another. We designed an experiment to improve the precision of the results by simultaneously studying several elements (Mg, Ca, Sr, Ba) in the same experiment, thereby greatly reducing the relative experimental uncertainties. Results show that the uncertainties in the diffusion coefficients can be reduced to 10 percent, allowing a more reliable comparison of the differences between the self-diffusion coefficients of the elements. This type of experiment permits us to study several elements precisely and simultaneously with no restriction on any element. We also designed an experiment to investigate the possible effects of multicomponent diffusion during Mg self-diffusion experiments by comparing cases where the concentrations of the elements and the isotopic compositions are different. The results suggest that there are differences between the effective means of transport. This approach should allow us to investigate the importance of multicomponent diffusion in silicate melts.
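In a diffusion-couple geometry of the kind typically used for such measurements, the concentration (or isotope-ratio) profile across the interface follows an error-function solution, and D is obtained by least-squares fitting. The sketch below shows that generic reduction on synthetic data; none of the numbers are taken from the study.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def couple_profile(x, d_coeff, t, c_left, c_right):
    """1-D diffusion-couple solution: concentration across the interface at x = 0."""
    return 0.5 * (c_left + c_right) - 0.5 * (c_left - c_right) * erf(x / (2.0 * np.sqrt(d_coeff * t)))

# Synthetic profile: D = 3e-12 m^2/s, 2 h anneal, plus measurement noise
t_anneal = 2 * 3600.0
x = np.linspace(-2e-3, 2e-3, 41)                        # position [m]
c = couple_profile(x, 3e-12, t_anneal, 1.0, 0.0)
c += np.random.default_rng(0).normal(0.0, 0.01, x.size)

# Fit only D; the anneal time and end-member compositions are taken as known
fit = lambda xx, d: couple_profile(xx, d, t_anneal, 1.0, 0.0)
(d_fit,), cov = curve_fit(fit, x, c, p0=[1e-12])
print(f"fitted D = {d_fit:.2e} m^2/s  (+/- {np.sqrt(cov[0, 0]):.1e})")
```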
Progress towards Low Energy Neutrino Spectroscopy (LENS)
NASA Astrophysics Data System (ADS)
Blackmon, Jeff
2011-10-01
The Low-Energy Neutrino Spectroscopy (LENS) experiment will precisely measure the energy spectrum of low-energy solar neutrinos via charged-current neutrino reactions on indium. LENS will test solar physics through the fundamental equality of the neutrino fluxes and the precisely known solar luminosity in photons, will probe the metallicity of the solar core through the CNO neutrino fluxes, and will test for the existence of mass-varying neutrinos. The LENS detector concept applies indium-loaded scintillator in an optically-segmented lattice geometry to achieve precise time and spatial resolution and unprecedented sensitivity for low-energy neutrino events. The LENS collaboration is currently developing a prototype, miniLENS, in the Kimballton Underground Research Facility (KURF). The miniLENS program aims to demonstrate the performance and selectivity of the technology and to benchmark Monte Carlo simulations that will guide scaling to the full LENS instrument. We will present the motivation and concept for LENS and will provide an overview of the R&D efforts currently centered around miniLENS at KURF.
Mansor, Syahir; Pfaehler, Elisabeth; Heijtel, Dennis; Lodge, Martin A; Boellaard, Ronald; Yaqub, Maqsood
2017-12-01
In longitudinal oncological and brain PET/CT studies, it is important to understand the repeatability of quantitative PET metrics in order to assess change in tracer uptake. The present studies were performed in order to assess precision as a function of PET/CT system, reconstruction protocol, analysis method, scan duration (or image noise), and repositioning in the field of view. Multiple (repeated) scans were performed using a NEMA image quality (IQ) phantom and a 3D Hoffman brain phantom filled with 18F solutions on two systems. Studies were performed with and without randomly (< 2 cm) repositioning the phantom, and all scans (12 replicates for the IQ phantom and 10 replicates for the Hoffman brain phantom) were performed at equal count statistics. For the NEMA IQ phantom, we studied the recovery coefficients (RC) of the maximum (SUVmax), peak (SUVpeak), and mean (SUVmean) uptake in each sphere as a function of experimental conditions (noise level, reconstruction settings, and phantom repositioning). For the 3D Hoffman phantom, the mean activity concentration was determined within several volumes of interest, and activity recovery and its precision were studied as a function of experimental conditions. The impact of phantom repositioning on RC precision was mainly seen on the Philips Ingenuity PET/CT, especially in the case of smaller spheres (< 17 mm diameter, P < 0.05). This effect was much smaller for the Siemens Biograph system. When exploring SUVmax, SUVpeak, or SUVmean of the spheres in the NEMA IQ phantom, it was observed that precision depended on phantom repositioning, reconstruction algorithm, and scan duration, with SUVmax being most and SUVpeak least sensitive to phantom repositioning. For the brain phantom, regional averaged SUVs were only minimally affected by phantom repositioning (< 2 cm). The precision of quantitative PET metrics depends on the combination of reconstruction protocol, data analysis methods and scan duration (scan statistics). Moreover, precision was also affected by phantom repositioning, but its impact depended on the data analysis method in combination with the reconstructed voxel size (tissue fraction effect). This study suggests that for oncological PET studies the use of SUVpeak may be preferred over SUVmax because SUVpeak is less sensitive to patient repositioning/tumor sampling. © 2017 The Authors. Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
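For readers unfamiliar with the three uptake metrics compared above, the sketch below gives simplified working definitions. SUVpeak is approximated by the highest mean over a small cubic neighbourhood rather than the usual ~1 cm^3 sphere, and the image, volume of interest and noise level are synthetic; none of this reproduces the study's actual analysis pipeline.

```python
import numpy as np

def suv_metrics(img, mask, peak_radius_vox=1):
    """Simplified SUV metrics inside a volume of interest (VOI).

    img  : 3-D array of SUV values
    mask : boolean array of the same shape defining the VOI
    SUVpeak is approximated as the highest mean over a (2r+1)^3 cubic
    neighbourhood centred on each VOI voxel, a stand-in for the ~1 cm^3 sphere.
    """
    suv_max = img[mask].max()
    suv_mean = img[mask].mean()

    suv_peak = -np.inf
    r = peak_radius_vox
    for i, j, k in np.argwhere(mask):
        block = img[max(i-r, 0):i+r+1, max(j-r, 0):j+r+1, max(k-r, 0):k+r+1]
        suv_peak = max(suv_peak, block.mean())
    return suv_max, suv_peak, suv_mean

# Example on a synthetic "hot sphere" in a noisy background.
rng = np.random.default_rng(1)
img = rng.normal(1.0, 0.05, (32, 32, 32))
zz, yy, xx = np.ogrid[:32, :32, :32]
sphere = (zz - 16)**2 + (yy - 16)**2 + (xx - 16)**2 <= 5**2
img[sphere] += 4.0
print(suv_metrics(img, sphere))
```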
Role of Imaging in the Era of Precision Medicine.
Giardino, Angela; Gupta, Supriya; Olson, Emmi; Sepulveda, Karla; Lenchik, Leon; Ivanidze, Jana; Rakow-Penner, Rebecca; Patel, Midhir J; Subramaniam, Rathan M; Ganeshan, Dhakshinamoorthy
2017-05-01
Precision medicine is an emerging approach for treating medical disorders, which takes into account individual variability in genetic and environmental factors. Preventive or therapeutic interventions can then be directed to those who will benefit most from targeted interventions, thereby maximizing benefits and minimizing costs and complications. Precision medicine is gaining increasing recognition by clinicians, healthcare systems, pharmaceutical companies, patients, and the government. Imaging plays a critical role in precision medicine, including screening, early diagnosis, guiding treatment, evaluating response to therapy, and assessing likelihood of disease recurrence. The Association of University Radiologists Radiology Research Alliance Precision Imaging Task Force convened to explore the current and future role of imaging in the era of precision medicine and summarized its findings in this article. We review the increasingly important role of imaging in various oncological and non-oncological disorders. We also highlight the challenges for radiology in the era of precision medicine. Published by Elsevier Inc.
Why precision medicine is not the best route to a healthier world.
Rey-López, Juan Pablo; Sá, Thiago Herick de; Rezende, Leandro Fórnias Machado de
2018-02-05
Precision medicine has been announced as a new health revolution. The term precision implies more accuracy in healthcare and prevention of diseases, which could yield substantial cost savings. However, scientific debate about precision medicine is needed to avoid wasting economic resources and hype. In this commentary, we express the reasons why precision medicine cannot be a health revolution for population health. Advocates of precision medicine neglect the limitations of individual-centred, high-risk strategies (reduced population health impact) and the current crisis of evidence-based medicine. Overrated "precision medicine" promises may be serving vested interests, by dictating priorities in the research agenda and justifying the exorbitant healthcare expenditure in our finance-based medicine. If societies aspire to address strong risk factors for non-communicable diseases (such as air pollution, smoking, poor diets, or physical inactivity), they need less medicine and more investment in population prevention strategies.
Eddy-Current Reference Standard
NASA Technical Reports Server (NTRS)
Ambrose, H. H., Jr.
1985-01-01
Magnetic properties of metallic reference standards duplicated and stabilized for eddy-current coil measurements over long times. Concept uses precisely machined notched samples of known annealed materials as reference standards.
McClintock, Carlee S; Hettich, Robert L.
2012-01-01
Oxidative protein surface mapping has become a powerful approach for measuring the solvent accessibility of folded protein structures. A variety of techniques exist for generating the key reagent – hydroxyl radicals – for these measurements; however, these approaches range significantly in their complexity and expense of operation. This research expands upon earlier work to enhance the controllability of boron-doped diamond (BDD) electrochemistry as an easily accessible tool for producing hydroxyl radicals in order to oxidize a range of intact proteins. Efforts to modulate oxidation level while minimizing the adsorption of protein to the electrode involved the use of relatively high flow rates to reduce protein residence time inside the electrochemical flow chamber. Additionally, a different cell activation approach using variable voltage to supply a controlled current allowed us to precisely tune the extent of oxidation in a protein-dependent manner. In order to gain perspective on the level of protein adsorption onto the electrode surface, studies were conducted to monitor protein concentration during electrolysis and gauge changes in the electrode surface between cell activation events. This report demonstrates the successful use of BDD electrochemistry for greater precision in generating a target number of oxidation events upon intact proteins. PMID:23210708
Huang, Liya; Wu, Zhong; Wang, Kan
2018-06-07
The high-precision speed control of gimbal servo systems is the key to generating high-precision torque for control moment gyroscopes (CMGs) in spacecraft. However, the control performance of gimbal servo systems may be degraded significantly by disturbances, especially a dynamic imbalance disturbance with the same frequency as the high-speed rotor. For assembled CMGs, it is very difficult to measure the rotor imbalance directly with a dynamic balancing machine. In this paper, a gimbal disturbance observer is proposed to estimate the dynamic imbalance of the rotor assembled in the CMG. First, a third-order dynamical system is established to describe the disturbance dynamics of the gimbal servo system, in which the rotor dynamic imbalance torque along the gimbal axis and the other disturbances are modeled as periodic and bounded, respectively. Then, the gimbal disturbance observer is designed for the third-order dynamical system by using the total disturbance as a virtual measurement. Since the virtual measurement is derived from the inverse dynamics of the gimbal servo system, the information on the rotor dynamic imbalance can be obtained indirectly using only the measurements of gimbal speed and three-phase currents. Semi-physical experiments on a CMG simulator demonstrate the effectiveness of the observer.
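A much-reduced stand-in for this idea is sketched below. It assumes that the total disturbance has already been reconstructed from inverse dynamics as d = Kt·i − J·dω/dt, models it as a rotor-synchronous sinusoid plus a slow bias, and uses a recursive least-squares estimator in place of the paper's third-order observer; all numerical values are invented for illustration.

```python
import numpy as np

# Assumed (illustrative) parameters: inertia, torque constant, rotor rate.
J, Kt, Omega = 0.05, 0.2, 2*np.pi*100
dt, T = 1e-4, 0.2
t = np.arange(0.0, T, dt)

# Simulated "truth": imbalance torque plus bias; the measured total disturbance
# would in practice come from the inverse dynamics d = Kt*i - J*domega/dt.
d_true = 0.03*np.sin(Omega*t + 0.7) + 0.01
rng = np.random.default_rng(0)
d_meas = d_true + rng.normal(0, 0.002, t.size)

# Recursive least squares on d ~ a*sin(Omega*t) + b*cos(Omega*t) + c
theta = np.zeros(3)
P = np.eye(3) * 1e3
for k, tk in enumerate(t):
    phi = np.array([np.sin(Omega*tk), np.cos(Omega*tk), 1.0])
    K = P @ phi / (1.0 + phi @ P @ phi)
    theta += K * (d_meas[k] - phi @ theta)
    P -= np.outer(K, phi @ P)

amp = np.hypot(theta[0], theta[1])
print(f"estimated imbalance amplitude {amp:.4f} N*m, bias {theta[2]:.4f} N*m")
```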
Animal and in silico models for the study of sarcomeric cardiomyopathies
Duncker, Dirk J.; Bakkers, Jeroen; Brundel, Bianca J.; Robbins, Jeff; Tardiff, Jil C.; Carrier, Lucie
2015-01-01
Over the past decade, our understanding of cardiomyopathies has improved dramatically, due to improvements in screening and detection of gene defects in the human genome as well as a variety of novel animal models (mouse, zebrafish, and Drosophila) and in silico computational models. These novel experimental tools have created a platform that is highly complementary to the naturally occurring cardiomyopathies in cats and dogs that had been available for some time. A fully integrative approach, which incorporates all these modalities, is likely required for significant steps forward in understanding the molecular underpinnings and pathogenesis of cardiomyopathies. Finally, novel technologies, including CRISPR/Cas9, which have already been proven to work in zebrafish, are currently being employed to engineer sarcomeric cardiomyopathy in larger animals, including pigs and non-human primates. In the mouse, the increased speed with which these techniques can be employed to engineer precise ‘knock-in’ models that previously took years to make via multiple rounds of homologous recombination-based gene targeting promises multiple and precise models of human cardiac disease for future study. Such novel genetically engineered animal models recapitulating human sarcomeric protein defects will help bridge the gap in translating therapeutic targets from small animal and in silico models to the human patient with sarcomeric cardiomyopathy. PMID:25600962
Sonic Estimation of Elasticity via Resonance: A New Method of Assessing Hemostasis
Corey, F. Scott; Walker, William F.
2015-01-01
Uncontrolled bleeding threatens patients undergoing major surgery and those in care for traumatic injury. This paper describes a novel method of diagnosing coagulation dysfunction by repeatedly measuring the shear modulus of a blood sample as it clots in vitro. Each measurement applies a high-energy ultrasound pulse to induce a shear wave within a rigid-walled chamber, and then uses low-energy ultrasound pulses to measure displacements associated with the resonance of that shear wave. Measured displacements are correlated with predictions from Finite Difference Time Domain (FDTD) models, with the best fit corresponding to the modulus estimate. In our current implementation each measurement requires 62.4 ms. Experimental data were analyzed using a fixed-viscosity algorithm and a free-viscosity algorithm. In experiments utilizing human blood induced to clot by exposure to kaolin, the free-viscosity algorithm quantified the shear modulus of formed clots with a worst-case precision of 2.5%. Precision was improved to 1.8% by utilizing the fixed-viscosity algorithm. Repeated measurements showed a smooth evolution from liquid blood to a firm clot with a shear modulus between 1.4 kPa and 3.3 kPa. These results show the promise of this technique for rapid, point-of-care assessment of coagulation. PMID:26399992
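The model-matching step (comparing a measured resonance trace against a library of model predictions and keeping the best-correlated modulus) can be sketched as follows. The "library" here is a toy damped-sinusoid generator with an invented dispersion relation rather than an FDTD solver, so only the search logic is representative.

```python
import numpy as np

def estimate_modulus(measured, model_library):
    """Pick the shear modulus whose predicted resonance waveform best matches
    the measured displacement trace (normalized correlation at zero lag)."""
    best_mu, best_score = None, -np.inf
    m = (measured - measured.mean()) / measured.std()
    for mu, predicted in model_library.items():
        p = (predicted - predicted.mean()) / predicted.std()
        score = np.mean(m * p)
        if score > best_score:
            best_mu, best_score = mu, score
    return best_mu, best_score

# Toy library of "predicted" waveforms indexed by shear modulus (kPa),
# sampled over one 62.4 ms measurement window.
t = np.linspace(0, 0.0624, 500)
def toy_waveform(mu_kpa):
    f = 40.0 * np.sqrt(mu_kpa)          # stand-in dispersion relation, not the paper's model
    return np.exp(-20*t) * np.sin(2*np.pi*f*t)

library = {mu: toy_waveform(mu) for mu in np.arange(1.0, 4.01, 0.05)}
measured = toy_waveform(2.4) + np.random.default_rng(2).normal(0, 0.05, t.size)
print(estimate_modulus(measured, library))
```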
Drill wear monitoring in cortical bone drilling.
Staroveski, Tomislav; Brezak, Danko; Udiljak, Toma
2015-06-01
Medical drills are subject to intensive wear due to mechanical factors which occur during the bone drilling process, and potential thermal and chemical factors related to the sterilisation process. Intensive wear increases friction between the drill and the surrounding bone tissue, resulting in higher drilling temperatures and cutting forces. Therefore, the goal of this experimental research was to develop a drill wear classification model based on a multi-sensor approach and an artificial neural network algorithm. The required tool wear features were extracted from the following three types of signals: cutting forces, servomotor drive currents and acoustic emission. Their capacity to classify the drill precisely into one of three predefined wear levels was established using a pattern-recognition radial basis function neural network. Experiments were performed on a custom-made test bed system using fresh bovine bones and standard medical drills. Results showed a high classification success rate, together with model robustness and insensitivity to variations in bone mechanical properties. Features extracted from the acoustic emission and servomotor drive signals achieved the highest precision in drill wear level classification (92.8%), thus indicating their potential in the design of a new type of medical drilling machine with process monitoring capabilities. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
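A minimal radial-basis-function network classifier of the kind referred to above is sketched below. The features, class labels and Gaussian width are synthetic stand-ins for the force, current and acoustic-emission features used in the study; only the network structure (Gaussian hidden layer plus linear output layer) reflects the stated approach.

```python
import numpy as np

class RBFNClassifier:
    """Minimal RBF network: Gaussian hidden layer centred on the training samples,
    linear output layer fitted by least squares against one-hot targets."""
    def __init__(self, gamma=1.0):
        self.gamma = gamma

    def _phi(self, X):
        d2 = ((X[:, None, :] - self.centers[None, :, :])**2).sum(-1)
        return np.exp(-self.gamma * d2)

    def fit(self, X, y):
        self.centers = X
        self.classes_, y_idx = np.unique(y, return_inverse=True)
        T = np.eye(len(self.classes_))[y_idx]          # one-hot targets
        self.W, *_ = np.linalg.lstsq(self._phi(X), T, rcond=None)
        return self

    def predict(self, X):
        return self.classes_[np.argmax(self._phi(X) @ self.W, axis=1)]

# Hypothetical 4-dimensional feature vectors for three wear levels.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(m, 0.3, (40, 4)) for m in (0.0, 1.0, 2.0)])
y = np.repeat(["sharp", "worn", "dull"], 40)
clf = RBFNClassifier(gamma=2.0).fit(X, y)
print("training accuracy:", (clf.predict(X) == y).mean())
```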
A Signal Detection Theory Approach to Evaluating Oculometer Data Quality
NASA Technical Reports Server (NTRS)
Latorella, Kara; Lynn, William, III; Barry, John S.; Kelly, Lon; Shih, Ming-Yun
2013-01-01
Currently, data quality is described in terms of spatial and temporal accuracy and precision [Holmqvist et al. in press]. While this approach provides precise errors in pixels or visual angle, experiments are often more concerned with whether subjects' points of gaze can be said to be reliable with respect to experimentally relevant areas of interest (AOIs). This paper proposes a method to characterize oculometer data quality using Signal Detection Theory (SDT) [Marcum 1947]. SDT classification results in four cases: Hit (correct report of a signal), Miss (failure to report a signal), False Alarm (a signal falsely reported), and Correct Reject (absence of a signal correctly reported). A technique is proposed in which subjects are directed to look at points inside and outside of an AOI, and the resulting points of gaze (POG) are classified as Hits (points known to be internal to an AOI are classified as such), Misses (AOI points are not indicated as such), False Alarms (points external to AOIs are indicated as in the AOI), or Correct Rejects (points external to the AOI are indicated as such). SDT metrics describe performance in terms of discriminability, sensitivity, and specificity. This paper presentation will provide the procedure for conducting this assessment and an example of data collected for AOIs in a simulated flight deck environment.
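Once points of gaze have been classified into the four SDT cases, the standard metrics follow directly; a sketch with hypothetical counts is given below (the 0.5-count correction and the specific numbers are illustrative choices, not taken from the paper).

```python
import numpy as np
from scipy.stats import norm

def sdt_metrics(hits, misses, false_alarms, correct_rejects):
    """Signal-detection metrics for AOI classification of points of gaze.
    A small 0.5-count correction avoids infinite z-scores at rates of 0 or 1."""
    h = (hits + 0.5) / (hits + misses + 1.0)                            # hit rate
    f = (false_alarms + 0.5) / (false_alarms + correct_rejects + 1.0)   # false-alarm rate
    d_prime = norm.ppf(h) - norm.ppf(f)
    criterion = -0.5 * (norm.ppf(h) + norm.ppf(f))
    sensitivity = hits / (hits + misses)
    specificity = correct_rejects / (correct_rejects + false_alarms)
    return dict(d_prime=d_prime, criterion=criterion,
                sensitivity=sensitivity, specificity=specificity)

# Hypothetical counts for one subject and one area of interest.
print(sdt_metrics(hits=180, misses=20, false_alarms=15, correct_rejects=185))
```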
Satellite Test of the Equivalence Principle as a Probe of Modified Newtonian Dynamics.
Pereira, Jonas P; Overduin, James M; Poyneer, Alexander J
2016-08-12
The proposed satellite test of the equivalence principle (STEP) will detect possible violations of the weak equivalence principle by measuring relative accelerations between test masses of different composition with a precision of one part in 10^18. A serendipitous by-product of the experimental design is that the absolute or common-mode acceleration of the test masses is also measured to high precision as they oscillate along a common axis under the influence of restoring forces produced by the position sensor currents, which in drag-free mode lead to Newtonian accelerations as small as 10^-14 g. This is deep inside the low-acceleration regime where modified Newtonian dynamics (MOND) diverges strongly from the Newtonian limit of general relativity. We show that MOND theories (including those based on the widely used "n family" of interpolating functions as well as the covariant tensor-vector-scalar formulation) predict an easily detectable increase in the frequency of oscillations of the STEP test masses if the strong equivalence principle holds. If it does not hold, MOND predicts a cumulative increase in oscillation amplitude which is also detectable. STEP thus provides a new and potentially decisive test of Newton's law of inertia, as well as the equivalence principle in both its strong and weak forms.
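The predicted frequency increase can be illustrated numerically. The sketch below uses the "simple" interpolating function mu(x) = x/(1+x) (the n = 1 member of the n family), a nominal oscillator frequency and an amplitude chosen so that the Newtonian restoring acceleration sits near the 10^-14 to 10^-13 g level quoted above, and straightforward time integration; it illustrates the effect and is not the paper's calculation.

```python
import numpy as np
from scipy.integrate import solve_ivp

a0 = 1.2e-10                 # MOND acceleration scale (m/s^2)
w_N = 2*np.pi*1e-3           # assumed Newtonian oscillation frequency (rad/s)
A = 2.5e-9                   # amplitude chosen so peak a_N ~ 1e-13 m/s^2 (deep-MOND regime)

def a_mond(a_newt):
    """Effective acceleration solving mu(a/a0)*a = a_N with mu(x) = x/(1+x)."""
    return 0.5 * (a_newt + np.sqrt(a_newt**2 + 4.0*a_newt*a0))

def rhs(t, y, mond):
    x, v = y
    a_newt = w_N**2 * abs(x)
    a = a_mond(a_newt) if mond else a_newt
    return [v, -np.sign(x)*a]

def period(mond):
    T_guess = 2*np.pi/w_N
    sol = solve_ivp(rhs, [0, 3*T_guess], [A, 0.0], args=(mond,), max_step=T_guess/2000)
    x = sol.y[0]
    zc = np.where(np.diff(np.sign(x)) != 0)[0]     # zero crossings -> half-periods
    return 2*np.mean(np.diff(sol.t[zc]))

print("frequency ratio MOND/Newtonian:", period(False)/period(True))
```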
Quantum Control of Spins in Diamond for Nanoscale Magnetic Sensing and Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dutt, Gurudev
Our research activities during the grant period focused on the challenges of highly accurate and precise magnetometry and magnetic imaging using quantum spins inside diamond. Our work has resulted in 6 papers published in peer-reviewed journals, with two more currently under consideration by referees. We showed that through the use of novel phase estimation algorithms inspired by quantum information science we can carry out accurate and high dynamic range DC magnetometry as well as lock-in detection of oscillating (AC) magnetic fields. We investigated the geometric phase as a route to higher precision quantum information and magnetic sensing applications, and probed the experimental limits to the fidelity of such geometric phase gates. We also demonstrated that there is a spin-dependent signal in the charge state flipping of the NV defect center in diamond, which could potentially be useful for higher fidelity spin readout at room temperature. Some of these projects have now led to further investigation in our lab on multi-photon spectroscopy (manuscript in preparation), and plasmonic guiding of light in metal nanowires (manuscript available on arXiv). In addition, several invited talks were given by the PI, and conference presentations were given by the graduate students and postdocs.
Design and algorithm research of high precision airborne infrared touch screen
NASA Astrophysics Data System (ADS)
Zhang, Xiao-Bing; Wang, Shuang-Jie; Fu, Yan; Chen, Zhao-Quan
2016-10-01
Infrared touch screens suffer from low precision, touch jitter, and a sharp decrease in touch precision when emitting or receiving tubes fail. A high-precision positioning algorithm based on an extended axis is proposed to solve these problems. First, the unimpeded state of the beam between emitting and receiving tubes is recorded as 0, while the impeded state is recorded as 1. Then an oblique-scan method is used, in which the light from one emitting tube is received by five receiving tubes. The impeded information of all emitting and receiving tubes is collected as a matrix. Finally, the position of the touch object is calculated as an arithmetic average. The extended-axis positioning algorithm maintains high precision even when individual infrared tubes fail, with only a slight loss of accuracy. Experimental results show that over 90% of the display area the touch error is less than 0.25D, where D is the distance between adjacent emitting tubes. We conclude that the extended-axis algorithm offers high precision, little degradation when an individual infrared tube fails, and ease of use.
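The abstract gives only an outline of the positioning step; the sketch below is one plausible reading of it, in which every interrupted emitter-receiver beam is treated as a line segment, the intersections of all pairs of blocked beams are computed, and the touch point is taken as their arithmetic average. The geometry, tube spacing and blocking radius are invented for the example.

```python
import itertools
import numpy as np

def touch_position(blocked, emitter_x, receiver_x, height):
    """Estimate the touch point as the average of the intersections of all pairs of
    blocked beams. Emitters sit on the y=0 edge, receivers on y=height;
    blocked[i, j] == 1 means the beam from emitter i to receiver j is interrupted."""
    beams = [(emitter_x[i], receiver_x[j]) for i, j in np.argwhere(blocked == 1)]
    points = []
    for (e1, r1), (e2, r2) in itertools.combinations(beams, 2):
        denom = (r1 - e1) - (r2 - e2)
        if abs(denom) < 1e-9:               # parallel beams give no usable intersection
            continue
        y = height * (e2 - e1) / denom
        x = e1 + (r1 - e1) * y / height
        if 0 <= y <= height:
            points.append((x, y))
    return np.mean(points, axis=0) if points else None

# Toy example: 16 emitters/receivers spaced D = 10 mm apart, a 4 mm finger at (52, 30).
D, height = 10.0, 80.0
emitter_x = np.arange(16) * D
receiver_x = np.arange(16) * D
touch = np.array([52.0, 30.0])

blocked = np.zeros((16, 16), dtype=int)
for i, xe in enumerate(emitter_x):
    for j, xr in enumerate(receiver_x):
        if abs(j - i) > 2:                  # oblique scan: only nearby receivers are lit
            continue
        p0, p1 = np.array([xe, 0.0]), np.array([xr, height])
        # perpendicular distance from the touch point to the beam line
        d = abs((p1[0]-p0[0])*(touch[1]-p0[1]) - (p1[1]-p0[1])*(touch[0]-p0[0]))
        d /= np.linalg.norm(p1 - p0)
        blocked[i, j] = d < 4.0

print(touch_position(blocked, emitter_x, receiver_x, height))
```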
EVAcon: a protein contact prediction evaluation service
Graña, Osvaldo; Eyrich, Volker A.; Pazos, Florencio; Rost, Burkhard; Valencia, Alfonso
2005-01-01
Here we introduce EVAcon, an automated web service that evaluates the performance of contact prediction servers. Currently, EVAcon is monitoring nine servers, four of which are specialized in contact prediction and five of which are general structure prediction servers. Results are compared for all newly determined experimental structures deposited into the PDB (∼5–50 per week). EVAcon allows for a precise comparison of the results based on a system of common protein subsets and the commonly accepted evaluation criteria that are also used in the corresponding category of the CASP assessment. EVAcon is a new service added to the functionality of the EVA system for the continuous evaluation of protein structure prediction servers. The new service is accessible from any of the three EVA mirrors: PDG (CNB-CSIC, Madrid), CUBIC (Columbia University, NYC), and Sali Lab (UCSF, San Francisco). PMID:15980486
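A common criterion in CASP-style contact evaluation is the precision of the top-ranked L/5 predictions, counting a residue pair as a true contact when its C-beta distance falls below roughly 8 Å and restricting to long-range pairs; the exact thresholds EVAcon applies may differ. The sketch below uses random coordinates and scores purely to show the bookkeeping.

```python
import numpy as np

def topL_precision(pred_scores, cb_coords, L_frac=0.2, min_sep=24, cutoff=8.0):
    """Precision of the top L*L_frac predicted contacts. A pair is a true contact
    if its C-beta distance is below `cutoff` (Angstrom); only pairs separated by
    at least `min_sep` residues are evaluated."""
    L = len(cb_coords)
    dist = np.linalg.norm(cb_coords[:, None, :] - cb_coords[None, :, :], axis=-1)
    pairs = [(pred_scores[i, j], i, j)
             for i in range(L) for j in range(i + min_sep, L)]
    pairs.sort(reverse=True)
    top = pairs[:max(1, int(L * L_frac))]
    correct = sum(dist[i, j] < cutoff for _, i, j in top)
    return correct / len(top)

# Toy example with random coordinates and random scores (illustration only).
rng = np.random.default_rng(4)
L = 120
coords = np.cumsum(rng.normal(0, 2.0, (L, 3)), axis=0)   # fake C-beta trace
scores = rng.random((L, L))
print(f"top-L/5 precision: {topL_precision(scores, coords):.2f}")
```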
NASA Astrophysics Data System (ADS)
KLOE; KLOE-2 Collaborations; Babusci, D.; Badoni, D.; Balwierz-Pytko, I.; Bencivenni, G.; Bini, C.; Bloise, C.; Bossi, F.; Branchini, P.; Budano, A.; Caldeira Balkeståhl, L.; Capon, G.; Ceradini, F.; Ciambrone, P.; Curciarello, F.; Czerwiński, E.; Dané, E.; De Leo, V.; De Lucia, E.; De Robertis, G.; De Santis, A.; De Simone, P.; Di Domenico, A.; Di Donato, C.; Domenici, D.; Erriquez, O.; Fanizzi, G.; Felici, G.; Fiore, S.; Franzini, P.; Gauzzi, P.; Giardina, G.; Giovannella, S.; Gonnella, F.; Graziani, E.; Happacher, F.; Heijkenskjöld, L.; Höistad, B.; Iafolla, L.; Iarocci, E.; Jacewicz, M.; Johansson, T.; Kluge, W.; Kupsc, A.; Lee-Franzini, J.; Loddo, F.; Lukin, P.; Mandaglio, G.; Martemianov, M.; Martini, M.; Mascolo, M.; Messi, R.; Miscetti, S.; Morello, G.; Moricciani, D.; Moskal, P.; Müller, S.; Nguyen, F.; Passeri, A.; Patera, V.; Prado Longhi, I.; Ranieri, A.; Redmer, C. F.; Santangelo, P.; Sarra, I.; Schioppa, M.; Sciascia, B.; Silarski, M.; Taccini, C.; Tortora, L.; Venanzoni, G.; Versaci, R.; Wiślicki, W.; Wolke, M.; Zdebik, J.
2013-03-01
We have measured the ratio σ(e+e-→π+π-γ)/σ(e+e-→μ+μ-γ) with the KLOE detector at DAΦNE for a total integrated luminosity of ∼240 pb-1. From this ratio we obtain the cross section σ(e+e-→π+π-). From the cross section we determine the pion form factor |Fπ|2 and the two-pion contribution to the muon anomaly aμ for 0.592 < Mππ < 0.975 GeV.
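The relations underlying the ratio method are standard and are summarized below (they are not quoted from the paper): the luminosity and much of the radiative correction cancel in the ratio, the bare two-pion cross section fixes the pion form factor, and a dispersion integral with the QED kernel K(s) gives the two-pion contribution to the muon anomaly.

```latex
\begin{align}
\sigma(e^+e^-\to\pi^+\pi^-) &=
  \frac{\sigma(e^+e^-\to\pi^+\pi^-\gamma)}{\sigma(e^+e^-\to\mu^+\mu^-\gamma)}\,
  \sigma(e^+e^-\to\mu^+\mu^-),\\
\sigma(e^+e^-\to\pi^+\pi^-)(s) &= \frac{\pi\alpha^2\beta_\pi^3}{3s}\,\lvert F_\pi(s)\rvert^2,\\
a_\mu^{\pi\pi} &= \frac{1}{4\pi^3}\int_{s_{\min}}^{s_{\max}}
  \sigma^{\mathrm{bare}}_{\pi\pi(\gamma)}(s)\,K(s)\,\mathrm{d}s .
\end{align}
```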
Precision Control of the Electron Longitudinal Bunch Shape Using an Emittance-Exchange Beam Line
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ha, Gwanghui; Cho, Moo -Hyun; Namkung, W.
2017-03-09
Here, we report on the experimental generation of relativistic electron bunches with a tunable longitudinal bunch shape. A longitudinal bunch-shaping (LBS) beam line, consisting of a transverse mask followed by a transverse-to-longitudinal emittance exchange (EEX) beam line, is used to tailor the longitudinal bunch shape (or current profile) of the electron bunch. The mask shapes the bunch's horizontal profile, and the EEX beam line converts it to a corresponding longitudinal profile. The Argonne wakefield accelerator rf photoinjector delivers electron bunches into a LBS beam line to generate a variety of longitudinal bunch shapes. The quality of the longitudinal bunch shape is limited by various perturbations in the exchange process. We develop a simple method, based on the incident slope of the bunch, to significantly suppress the perturbations.
System and Method for Obtaining Simultaneous Levitation and Rotation of a Ferromagnetic Object
NASA Astrophysics Data System (ADS)
Banerjee, Subrata; Sarkar, Mrinal Kanti; Ghosh, Arnab
2017-02-01
In this work, a practical demonstration of simultaneous levitation and rotation of a ferromagnetic cylindrical object is presented. A hollow steel cylinder is suspended stably under an I-core electromagnet using the dc attraction-type levitation principle, and the levitated object is then rotated at around 1000 rpm using the eddy-current principle of the energy meter. Since the object rotates while levitated, the device is frictionless, energy-efficient and robust. This technology may be applied to frictionless energy meters, wind turbines, machine tools, precision instruments and many other devices where energy-efficient, stable rotation is required. A cascade lead-compensation control scheme is applied to stabilize the inherently unstable levitation system. The proposed device was successfully tested in the laboratory and experimental results are presented.
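A lead compensator of the kind mentioned above adds phase lead to stabilize the open-loop-unstable attraction-type levitation plant. The sketch below checks closed-loop pole locations for an assumed linearized plant G(s) = K0/(s^2 - a) and an assumed compensator C(s) = Kc (s + z)/(s + p); all numerical values are invented and are not the paper's design.

```python
import numpy as np

# Assumed linearized maglev plant and lead compensator (illustrative values only).
K0, a = 2000.0, 1500.0           # plant gain and unstable-pole parameter
Kc, z, p = 10.0, 30.0, 300.0     # lead compensator gain, zero, pole (z < p)

num_ol = np.polymul([Kc, Kc*z], [K0])            # C(s)G(s) numerator
den_ol = np.polymul([1.0, p], [1.0, 0.0, -a])    # C(s)G(s) denominator
char_poly = np.polyadd(den_ol, num_ol)           # 1 + C(s)G(s) = 0  ->  den + num = 0
poles = np.roots(char_poly)

print("closed-loop poles:", np.round(poles, 2))
print("stable:", bool(np.all(poles.real < 0)))
```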
NASA Astrophysics Data System (ADS)
Patkar, Rajul S.; Ashwin, Mamta; Rao, V. Ramgopal
2017-12-01
Monitoring of soil nutrients is very important in precision agriculture. In this paper, we have demonstrated a micro-electro-mechanical-system-based lab-on-a-chip system for the detection of soil macronutrients available in ionic form: K+, NO3-, and H2PO4-. The sensors are highly sensitive piezoresistive silicon microcantilevers coated with a polymer matrix containing methyltridodecylammonium nitrate ionophore/nitrate ionophore VI for nitrate sensing, 18-crown-6 ether for potassium sensing, and tributyltin chloride for phosphate detection. A complete lab-on-a-chip system has been demonstrated, integrating a highly sensitive current-excited Wheatstone-bridge-based portable electronic readout with arrays of microcantilever devices mounted on a printed circuit board and a liquid flow cell for on-site soil testing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jumpertz, L., E-mail: louise.jumpertz@telecom-paristech.fr; MirSense, 8 avenue de la Vauve, F-91120 Palaiseau; Michel, F.
2016-01-15
Precise knowledge of the linewidth enhancement factor of a semiconductor laser under actual operating conditions is of prime importance since this parameter dictates various phenomena such as linewidth broadening or the enhancement of optical nonlinearities. The above-threshold linewidth enhancement factor of a mid-infrared quantum cascade laser structure operated at 10 °C is determined experimentally using two different methods based on optical feedback. Both Fabry-Perot and distributed feedback quantum cascade lasers based on the same active area design are studied, the former by following the wavelength shift as a function of the feedback strength and the latter by self-mixing interferometry. The results are consistent and unveil a clear pump current dependence of the linewidth enhancement factor, with values ranging from 0.8 to about 3.
Novel Soft-Pion Theorem for Long-Range Nuclear Parity Violation.
Feng, Xu; Guo, Feng-Kun; Seng, Chien-Yeah
2018-05-04
The parity-odd effect in the standard model weak neutral current reveals itself in the long-range parity-violating nuclear potential generated by pion exchange in the ΔI=1 channel with the parity-odd pion-nucleon coupling constant h_{π}^{1}. Despite decades of experimental and theoretical efforts, the size of this coupling constant is still not well understood. In this Letter, we derive a soft-pion theorem relating h_{π}^{1} to the neutron-proton mass splitting induced by an artificial parity-even counterpart of the ΔI=1 weak Lagrangian, and demonstrate that the theorem still holds exactly at next-to-leading order in chiral perturbation theory. A considerable simplification is expected in the study of h_{π}^{1} using either lattice QCD or other QCD models, following its reduction from a parity-odd proton-neutron-pion matrix element to a simpler spectroscopic quantity. The theorem paves the way to much more precise calculations of h_{π}^{1}, and thus a quantitative test of the strangeness-conserving neutral current interaction of the standard model is foreseen.