High precision triangular waveform generator
Mueller, Theodore R.
1983-01-01
An ultra-linear ramp generator having separately programmable ascending and descending ramp rates and voltages is provided. Two constant current sources provide the ramp through an integrator. Switching of the current at current source inputs rather than at the integrator input eliminates switching transients and contributes to the waveform precision. The triangular waveforms produced by the waveform generator are characterized by accurate reproduction and low drift over periods of several hours. The ascending and descending slopes are independently selectable.
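The principle in the abstract (a constant current switched into an integrator, so each ramp has slope I/C) is easy to model numerically. Below is a behavioral sketch with assumed component values, not the patented circuit; in particular, the switching here is idealized, whereas the invention's point is to switch at the current-source inputs to avoid transients.

```python
C = 1e-6          # integrator capacitance, farads (assumed)
I_UP = 2e-3       # ascending-ramp source current, amps (assumed)
I_DOWN = 1e-3     # descending-ramp source current, amps (assumed)
V_LO, V_HI = 0.0, 5.0   # programmable turnaround voltages, volts (assumed)
DT = 1e-6         # simulation time step, seconds

def triangle_wave(n_steps):
    """Integrate the switched constant current and return voltage samples."""
    v, ascending, out = V_LO, True, []
    for _ in range(n_steps):
        i = I_UP if ascending else -I_DOWN
        v += (i / C) * DT          # integrator: dV = (I/C) dt
        if v >= V_HI:              # idealized comparator flips the ramp
            v, ascending = V_HI, False
        elif v <= V_LO:
            v, ascending = V_LO, True
        out.append(v)
    return out

samples = triangle_wave(10000)
# Slopes are set independently: I_UP/C = 2000 V/s up, I_DOWN/C = 1000 V/s down,
# so a full period takes 5/2000 + 5/1000 = 7.5 ms.
```

Changing I_UP or I_DOWN alone changes one slope without affecting the other, which is the independently selectable-slope property the abstract describes.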
High-precision triangular-waveform generator
Mueller, T.R.
1981-11-14
An ultra-linear ramp generator having separately programmable ascending and descending ramp rates and voltages is provided. Two constant current sources provide the ramp through an integrator. Switching of the current at current source inputs rather than at the integrator input eliminates switching transients and contributes to the waveform precision. The triangular waveforms produced by the waveform generator are characterized by accurate reproduction and low drift over periods of several hours. The ascending and descending slopes are independently selectable.
PRECISION INTEGRATOR FOR MINUTE ELECTRIC CURRENTS
Hemmendinger, A.; Helmer, R.J.
1961-10-24
An integrator is described for measuring the value of integrated minute electrical currents. The device consists of a source capacitor connected in series with the source of such electrical currents, a second capacitor of accurately known capacitance and a source of accurately known and constant potential, means responsive to the potentials developed across the source capacitor for reversibly connecting the second capacitor in series with the source of known potential and with the source capacitor and at a rate proportional to the potential across the source capacitor to maintain the magnitude of the potential across the source capacitor at approximately zero. (AEC)
Pollock, George G.
1997-01-01
Two power supplies are combined to control a furnace. A main power supply heats the furnace in the traditional manner, while the power from the auxiliary supply is introduced as a current flow through charged particles existing due to ionized gas or thermionic emission. The main power supply provides the bulk heating power and the auxiliary supply provides a precise and fast power source such that the precision of the total power delivered to the furnace is improved.
Zhou, Li; Wang, Kui; Li, Qifu; Nice, Edouard C; Zhang, Haiyuan; Huang, Canhua
2016-01-01
Cancer is a common disease that is a leading cause of death worldwide. Currently, early detection and novel therapeutic strategies are urgently needed for more effective management of cancer. Importantly, protein profiling using clinical proteomic strategies, with spectacular sensitivity and precision, offers excellent promise for the identification of potential biomarkers that would direct the development of targeted therapeutic anticancer drugs for precision medicine. In particular, clinical sample sources, including tumor tissues and body fluids (blood, feces, urine and saliva), have been widely investigated using modern high-throughput mass spectrometry-based proteomic approaches combined with bioinformatic analysis, to pursue the possibilities of precision medicine for targeted cancer therapy. Discussed in this review are the current advantages and limitations of clinical proteomics, the available strategies of clinical proteomics for the management of precision medicine, as well as the challenges and future perspectives of clinical proteomics-driven precision medicine for targeted cancer therapy.
A Comprehensive Radial Velocity Error Budget for Next Generation Doppler Spectrometers
NASA Technical Reports Server (NTRS)
Halverson, Samuel; Terrien, Ryan; Mahadevan, Suvrath; Roy, Arpita; Bender, Chad; Stefansson, Guðmundur Kari; Monson, Andrew; Levi, Eric; Hearty, Fred; Blake, Cullen;
2016-01-01
We describe a detailed radial velocity error budget for the NASA-NSF Extreme Precision Doppler Spectrometer instrument concept NEID (NN-explore Exoplanet Investigations with Doppler spectroscopy). Such an instrument performance budget is a necessity for both identifying the variety of noise sources currently limiting Doppler measurements, and estimating the achievable performance of next generation exoplanet hunting Doppler spectrometers. For these instruments, no single source of instrumental error is expected to set the overall measurement floor. Rather, the overall instrumental measurement precision is set by the contribution of many individual error sources. We use a combination of numerical simulations, educated estimates based on published materials, extrapolations of physical models, results from laboratory measurements of spectroscopic subsystems, and informed upper limits for a variety of error sources to identify likely sources of systematic error and construct our global instrument performance error budget. While natively focused on the performance of the NEID instrument, this modular performance budget is immediately adaptable to a number of current and future instruments. Such an approach is an important step in charting a path towards improving Doppler measurement precisions to the levels necessary for discovering Earth-like planets.
Pollock, G.G.
1997-01-28
Two power supplies are combined to control a furnace. A main power supply heats the furnace in the traditional manner, while the power from the auxiliary supply is introduced as a current flow through charged particles existing due to ionized gas or thermionic emission. The main power supply provides the bulk heating power and the auxiliary supply provides a precise and fast power source such that the precision of the total power delivered to the furnace is improved. 5 figs.
Current Source Based on H-Bridge Inverter with Output LCL Filter
NASA Astrophysics Data System (ADS)
Blahnik, Vojtech; Talla, Jakub; Peroutka, Zdenek
2015-09-01
The paper deals with the control of a current source with an LCL output filter. The controlled current source is realized as a single-phase inverter, and the output LCL filter provides low ripple of the output current. However, systems incorporating LCL filters require more complex control strategies, and there are several interesting approaches to the control of this type of converter. This paper presents an inverter control algorithm which combines model-based control with direct current control based on resonant controllers and single-phase vector control. The primary goal is to reduce the current ripple and distortion below the required limits and to provide fast and precise control of the output current. The proposed control technique is verified by measurements on a laboratory model.
Present situation and trend of precision guidance technology and its intelligence
NASA Astrophysics Data System (ADS)
Shang, Zhengguo; Liu, Tiandong
2017-11-01
This paper first introduces the basic concepts of precision guidance technology and artificial intelligence technology, then gives a brief introduction to intelligent precision guidance technology. With reference to foreign deep-learning-based intelligent weapon development efforts, namely the LRASM missile, TRACE, and BLADE projects, it gives an overview of the current state of precision guidance technology abroad. Finally, the future development trends of intelligent precision guidance technology are summarized, concentrated mainly on multi-target engagement, intelligent classification, weak-target detection and recognition, intelligent jamming in complex environments, and multi-source, multi-missile cooperative engagement.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Albert, F.; Hartemann, F. V.; Anderson, S. G.
Tunable, high-precision gamma-ray sources are under development to enable nuclear photonics, an emerging field of research. This paper focuses on the technological and theoretical challenges related to precision Compton scattering gamma-ray sources. In this scheme, incident laser photons are scattered and Doppler upshifted by a high brightness electron beam to generate tunable and highly collimated gamma-ray pulses. The electron and laser beam parameters can be optimized to achieve the spectral brightness and narrow bandwidth required by nuclear photonics applications. A description of the design of the next generation precision gamma-ray source currently under construction at Lawrence Livermore National Laboratory is presented, along with the underlying motivations. Within this context, high-gradient X-band technology, used in conjunction with fiber-based photocathode drive laser and diode pumped solid-state interaction laser technologies, will be shown to offer optimal performance for high gamma-ray spectral flux, narrow bandwidth applications.
Estimating Uncertainty in Annual Forest Inventory Estimates
Ronald E. McRoberts; Veronica C. Lessard
1999-01-01
The precision of annual forest inventory estimates may be negatively affected by uncertainty from a variety of sources including: (1) sampling error; (2) procedures for updating plots not measured in the current year; and (3) measurement errors. The impact of these sources of uncertainty on final inventory estimates is investigated using Monte Carlo simulation...
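The Monte Carlo approach referred to above can be illustrated with a toy model in which the three named uncertainty sources (sampling variability, update-procedure error for plots not measured in the current year, and measurement error) are propagated into an annual mean-volume estimate. All distributions, magnitudes, and the 80/20 update split are invented for illustration and are not values from the study.

```python
import random

random.seed(42)

N_PLOTS = 500     # plots contributing to the annual estimate (assumed)
N_SIM = 2000      # Monte Carlo replications

def simulate_mean_volume():
    """One replication: per-plot volumes perturbed by source-specific errors."""
    total = 0.0
    for _ in range(N_PLOTS):
        true_vol = random.gauss(120.0, 30.0)          # (1) sampling variability
        if random.random() < 0.8:                     # 80% of plots are model-updated
            obs = true_vol + random.gauss(0.0, 8.0)   # (2) update-procedure error
        else:
            obs = true_vol + random.gauss(0.0, 2.0)   # (3) field-measurement error
        total += obs
    return total / N_PLOTS

estimates = [simulate_mean_volume() for _ in range(N_SIM)]
mean = sum(estimates) / N_SIM
std_err = (sum((e - mean) ** 2 for e in estimates) / (N_SIM - 1)) ** 0.5
print(f"mean estimate: {mean:.1f}, Monte Carlo std error: {std_err:.2f}")
```

The spread of `estimates` quantifies how the combined error sources limit the precision of the final inventory figure.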
A high-precision voltage source for EIT
Saulnier, Gary J; Liu, Ning; Ross, Alexander S
2006-01-01
Electrical impedance tomography (EIT) utilizes electrodes placed on the surface of a body to determine the complex conductivity distribution within the body. EIT can be performed by applying currents through the electrodes and measuring the electrode voltages or by applying electrode voltages and measuring the currents. Techniques have also been developed for applying the desired currents using voltage sources. This paper describes a voltage source for use in applied-voltage EIT that includes the capability of measuring both the applied voltage and applied current. A calibration circuit and calibration algorithm are described which enable all voltage sources in an EIT system to be calibrated to a common standard. The calibration minimizes the impact of stray shunt impedance, passive component variability and active component non-ideality. Simulation data obtained using PSpice are used to demonstrate the effectiveness of the circuits and calibration algorithm. PMID:16636413
Superallowed Fermi β-Decay Studies with SCEPTAR and the 8π Gamma-Ray Spectrometer
NASA Astrophysics Data System (ADS)
Koopmans, K. A.
2005-04-01
The 8π Gamma-Ray Spectrometer, operating at TRIUMF in Vancouver, Canada, is a high-precision instrument for detecting the decay radiations from exotic nuclei. In 2003, a new beta-scintillating array called SCEPTAR was installed within the 8π Spectrometer. With these two systems, precise measurements of half-lives and branching ratios can be made, specifically on certain nuclei which exhibit Superallowed Fermi 0+ → 0+ β-decay. These data can be used to determine the value of δC, an isospin symmetry-breaking (Coulomb) correction factor, to good precision. As this correction factor is currently one of the leading sources of error in the unitarity test of the CKM matrix, a precise determination of its value could help to eliminate any possible "trivial" explanation of the seeming departure of current experimental data from Standard Model predictions.
NASA Technical Reports Server (NTRS)
Lewandowski, Wlodzimierz W.; Petit, Gerard; Thomas, Claudine; Weiss, Marc A.
1990-01-01
Over intercontinental distances, the accuracy of Global Positioning System (GPS) time transfers ranges from 10 to 20 ns. The principal error sources are the broadcast ionospheric model, the broadcast ephemerides, and the local antenna coordinates. For the first time, these three major error sources for GPS time transfer can be reduced simultaneously for a particular time link. Ionospheric measurement systems of the National Institute of Standards and Technology (NIST) type are now operating on a regular basis at NIST in Boulder and at the Paris Observatory in Paris. Broadcast ephemerides are currently recorded for time-transfer tracks between these sites, as is necessary for applying precise ephemerides. Finally, corrected local GPS antenna coordinates have now been introduced into the GPS receivers at both sites. Shown here is the improvement in precision for this long-distance time comparison resulting from the reduction of these three error sources.
Ganry, L; Quilichini, J; Bandini, C M; Leyder, P; Hersant, B; Meningaud, J P
2017-08-01
Very few surgical teams currently use totally independent and free solutions to perform three-dimensional (3D) surgical modelling for osseous free flaps in reconstructive surgery. This study assessed the precision and technical reproducibility of a 3D surgical modelling protocol using free open-source software in mandibular reconstruction with fibula free flaps and surgical guides. Precision was assessed through comparisons of the 3D surgical guide to the sterilized 3D-printed guide, determining accuracy to the millimetre level. Reproducibility was assessed in three surgical cases by volumetric comparison to the millimetre level. For the 3D surgical modelling, a difference of less than 0.1 mm was observed. Almost no deformations (<0.2 mm) were observed post-autoclave sterilization of the 3D-printed surgical guides. In the three surgical cases, the average precision of fibula free flap modelling was between 0.1 mm and 0.4 mm, and the average precision of the complete reconstructed mandible was less than 1 mm. The open-source software protocol demonstrated high accuracy without complications. However, the precision of the surgical case depends on the surgeon's 3D surgical modelling. Therefore, surgeons need training on the use of this protocol before applying it to surgical cases; this constitutes a limitation. Further studies should address the transfer of expertise. Copyright © 2017 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
Spectrally and Radiometrically Stable, Wideband, Onboard Calibration Source
NASA Technical Reports Server (NTRS)
Coles, James B.; Richardson, Brandon S.; Eastwood, Michael L.; Sarture, Charles M.; Quetin, Gregory R.; Porter, Michael D.; Green, Robert O.; Nolte, Scott H.; Hernandez, Marco A.; Knoll, Linley A.
2013-01-01
The Onboard Calibration (OBC) source incorporates a medical/scientific-grade halogen source with a precisely designed fiber coupling system, and a fiber-based intensity-monitoring feedback loop that results in radiometric and spectral stabilities to within less than 0.3 percent over a 15-hour period. The airborne imaging spectrometer systems developed at the Jet Propulsion Laboratory incorporate OBC sources to provide auxiliary in-use system calibration data. The use of the OBC source will provide a significant increase in the quantitative accuracy, reliability, and resulting utility of the spectral data collected from current and future imaging spectrometer instruments.
Onset of space charge effects in liquid argon ionization chambers
NASA Astrophysics Data System (ADS)
Toggerson, B.; Newcomer, A.; Rutherfoord, J.; Walker, R. B.
2009-09-01
Using a thin-gap liquid argon ionization chamber and Strontium-90 beta sources we have measured ionization currents over a wide range of gap potentials. These precision "HV plateau curves" advance the understanding of liquid argon sampling calorimeter signals, particularly at high ionization rates. The order of magnitude differences in the activities of the beta sources allow us to estimate where the ionization chamber is driven into the space-charge dominated regime.
An overview of LIGO and Virgo -- status and plans
NASA Astrophysics Data System (ADS)
Miller, John
2014-06-01
Interferometric gravitational-wave detectors, the most sensitive position meters ever operated, aim to detect the motion of massive bodies throughout the universe by pushing precision measurement to the standard quantum limit and beyond. A global network of these detectors is currently under construction, promising unprecedented sensitivity and the ability to determine the sky position of any detected signals. I will describe the current status and expected performance of this network with a focus on limiting noise sources and the techniques currently being developed to combat them.
Information-Driven Active Audio-Visual Source Localization
Schult, Niclas; Reineking, Thomas; Kluss, Thorsten; Zetzsche, Christoph
2015-01-01
We present a system for sensorimotor audio-visual source localization on a mobile robot. We utilize a particle filter for the combination of audio-visual information and for the temporal integration of consecutive measurements. Although the system only measures the current direction of the source, the position of the source can be estimated because the robot is able to move and can therefore obtain measurements from different directions. These actions by the robot successively reduce uncertainty about the source’s position. An information gain mechanism is used for selecting the most informative actions in order to minimize the number of actions required to achieve accurate and precise position estimates in azimuth and distance. We show that this mechanism is an efficient solution to the action selection problem for source localization, and that it is able to produce precise position estimates despite simplified unisensory preprocessing. Because of the robot’s mobility, this approach is suitable for use in complex and cluttered environments. We present qualitative and quantitative results of the system’s performance and discuss possible areas of application. PMID:26327619
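A stripped-down sketch of the estimation core described above: a particle filter fuses bearing-only measurements taken from several robot positions, collapsing the range ambiguity that any single direction measurement leaves. For brevity it replaces the information-gain action selection with a fixed set of waypoints; the arena size, source position, and noise level are assumed for the demo.

```python
import math, random

random.seed(1)

SOURCE = (4.0, 3.0)     # true source position, metres (assumed)
BEARING_STD = 0.05      # bearing measurement noise, radians (assumed)
N = 5000                # number of particles

def bearing(frm, to):
    return math.atan2(to[1] - frm[1], to[0] - frm[0])

# Particles initialized uniformly over a 10 m x 10 m arena.
particles = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(N)]

# Fixed waypoints stand in for the paper's information-gain action selection.
for robot in [(0.0, 0.0), (8.0, 0.0), (0.0, 8.0)]:
    z = bearing(robot, SOURCE) + random.gauss(0.0, BEARING_STD)  # noisy bearing
    weights = []
    for p in particles:
        diff = bearing(robot, p) - z
        diff = math.atan2(math.sin(diff), math.cos(diff))  # wrap to (-pi, pi]
        weights.append(math.exp(-0.5 * (diff / BEARING_STD) ** 2))
    particles = random.choices(particles, weights=weights, k=N)  # resample
    # Small jitter preserves particle diversity after resampling.
    particles = [(x + random.gauss(0, 0.1), y + random.gauss(0, 0.1))
                 for x, y in particles]

est_x = sum(x for x, _ in particles) / N
est_y = sum(y for _, y in particles) / N
```

After a single measurement the particle cloud is a wedge along the measured bearing; each additional viewpoint intersects it, leaving an estimate in both azimuth and distance.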
NASA Technical Reports Server (NTRS)
Strekalov, Dmitry V.; Yu, Nam; Thompson, Robert J.
2012-01-01
The most accurate astronomical data are available from space-based observations that are not impeded by the Earth's atmosphere. Such measurements may require spectral samples taken as long as decades apart, with 1 cm/s velocity precision integrated over a broad wavelength range. This raises the requirements for instruments used in astrophysics research missions: their stringent wavelength resolution and accuracy must be maintained over years and possibly decades. Therefore, a stable and broadband optical calibration technique compatible with spaceflights becomes essential. The space-based spectroscopic instruments need to be calibrated in situ, which imposes specific requirements on the calibration sources, mainly concerned with their mass, power consumption, and reliability. A high-precision, high-resolution reference wavelength comb source for astronomical and astrophysics spectroscopic observations has been developed that is deployable in space. The optical comb will be used for wavelength calibrations of spectrographs and will enable Doppler measurements to better than 10 cm/s precision, one hundred times better than the current state of the art.
NASA Astrophysics Data System (ADS)
Xu, Wei; Li, Jing-Yi; Huang, Sen-Lin; Wu, W. Z.; Hao, H.; Wang, P.; Wu, Y. K.
2014-10-01
The Duke storage ring is a dedicated driver for the storage ring based oscillator free-electron lasers (FELs), and the High Intensity Gamma-ray Source (HIGS). It is operated with a beam current ranging from about 1 mA to 100 mA per bunch for various operations and accelerator physics studies. High performance operations of the FEL and γ-ray source require a stable electron beam orbit, which has been realized by the global orbit feedback system. As a critical part of the orbit feedback system, the electron beam position monitors (BPMs) are required to be able to precisely measure the electron beam orbit in a wide range of the single-bunch current. However, the high peak voltage of the BPM pickups associated with high single-bunch current degrades the performance of the BPM electronics, and can potentially damage the BPM electronics. A signal conditioning method using low pass filters is developed to reduce the peak voltage to protect the BPM electronics, and to make the BPMs capable of working with a wide range of single-bunch current. Simulations and electron beam based tests are performed. The results show that the Duke storage ring BPM system is capable of providing precise orbit measurements to ensure highly stable FEL and HIGS operations.
USDA-ARS?s Scientific Manuscript database
In the beef industry, product contamination by Salmonella enterica is a serious public health concern, which may result in human infection and cause significant financial loss due to product recalls. Currently, the precise mechanism and pathogen source responsible for Salmonella contamination in com...
Core Journal Lists: Classic Tool, New Relevance
ERIC Educational Resources Information Center
Paynter, Robin A.; Jackson, Rose M.; Mullen, Laura Bowering
2010-01-01
Reviews the historical context of core journal lists, current uses in collection assessment, and existing methodologies for creating lists. Outlines two next generation core list projects developing new methodologies and integrating novel information/data sources to improve precision: a national-level core psychology list and the other a local…
NASA Astrophysics Data System (ADS)
Rakotondravohitra, Laza
2013-04-01
Current and future neutrino oscillation experiments depend on precise knowledge of neutrino-nucleus cross sections. MINERvA is a neutrino scattering experiment at Fermilab, designed to make precision measurements of low-energy neutrino and antineutrino cross sections on a variety of different materials (plastic scintillator, C, Fe, Pb, He and H2O). In order to make these measurements, it is crucial that the detector be carefully calibrated. This talk will describe how MINERvA uses muons from upstream neutrino interactions as a calibration source to convert electronics output to absolute energy deposition.
Design of current source for multi-frequency simultaneous electrical impedance tomography
NASA Astrophysics Data System (ADS)
Han, Bing; Xu, Yanbin; Dong, Feng
2017-09-01
Multi-frequency electrical impedance tomography has been evolving from the frequency-sweep approach to the multi-frequency simultaneous measurement technique, which reduces measuring time and will be increasingly attractive for time-varying biological applications. The accuracy and stability of the current source are the key factors determining the quality of the image reconstruction. This article presents a field programmable gate array-based current source for a multi-frequency simultaneous electrical impedance tomography system. A novel current source circuit was realized by combining the classic current mirror based on the feedback amplifier AD844 with a differential topology. The optimal phase offsets of the harmonic sinusoids were obtained through crest factor analysis. The output characteristics of this current source were evaluated by simulation and actual measurement. The results include the following: (1) the output impedance was compared with that of the Howland pump circuit in simulation, showing comparable performance at low frequencies; however, the proposed current source makes lower demands on resistor tolerance and performs even better at high frequencies. (2) The output impedance in actual measurement is above 1.3 MΩ below 200 kHz and reaches 250 kΩ up to 1 MHz. (3) An experiment based on a biological RC model has been implemented; the mean errors for the demodulated impedance amplitude and phase are 0.192% and 0.139°, respectively. Therefore, the proposed current source is wideband, biocompatible, and high precision, which demonstrates great potential to work as a sub-system in the multi-frequency electrical impedance tomography system.
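The crest-factor analysis mentioned above can be reproduced in miniature: for a sum of equal-amplitude harmonics, the phase of each tone changes the peak of the composite waveform, and a lower crest factor (peak/RMS) leaves more usable amplitude per tone for a fixed peak current. The harmonic set and the Newman phase schedule below are illustrative choices, not the phases used in the article.

```python
import math

def crest_factor(phases, n_samples=10000):
    """Crest factor (peak/RMS) of a unit-amplitude multi-tone signal with
    harmonic frequencies 1..len(phases), sampled over one full period."""
    n = len(phases)
    samples = []
    for k in range(n_samples):
        t = k / n_samples
        samples.append(sum(math.cos(2 * math.pi * (i + 1) * t + phases[i])
                           for i in range(n)))
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / n_samples)
    return peak / rms

n = 8
cf_zero = crest_factor([0.0] * n)                       # all tones peak together
cf_newman = crest_factor([-math.pi * i * i / n for i in range(n)])  # Newman phases

# Zero phases give the worst case: peak = n and RMS = sqrt(n/2), so the crest
# factor is sqrt(2n) = 4 for n = 8; the Newman schedule spreads the tone peaks
# in time and lowers it substantially.
```

In a simultaneous-excitation current source, the optimization goal is to minimize this crest factor so that each frequency component carries as much signal energy as the hardware's peak-current limit allows.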
NASA Astrophysics Data System (ADS)
Kato, Y.; Takenaka, T.; Yano, K.; Kiriyama, R.; Kurisu, Y.; Nozaki, D.; Muramatsu, M.; Kitagawa, A.; Uchida, T.; Yoshida, Y.; Sato, F.; Iida, T.
2012-11-01
Multiply charged ions of prospective interest are produced from pure solid material in an electron cyclotron resonance ion source (ECRIS). Recently, a pure iron source has also been required for producing iron ions caged in fullerenes, used to control cells in vivo in bio-nano science and technology. We directly heat an iron rod by induction heating (IH), because this avoids contact with insulating materials that act as impurity gas sources. Molybdenum wire was chosen for the IH coils because it needs no water cooling. To improve power efficiency and temperature control, we propose a new circuit that dispenses with the serial and parallel dummy coils (SPD) previously used for matching and safety. The circuit consists of inductively coupled coils, one thin and flat and one helical, which insulate the IH power source from the evaporator. This coupled-coil circuit, an insulated induction heating coil transformer (IHCT), can be moved mechanically, and the secondary current can be adjusted precisely and continuously. Heating efficiency with the IHCT is much higher than in our previous experiments with the SPD, because leakage flux is decreased and matching is improved simultaneously. We are able to adjust the temperature of the vapor source around its melting point, and the vapor pressure can then be controlled precisely with the IHCT. We can hold the temperature within ±10 K around 1500 °C by this method, and have experimentally verified control of the iron vapor flux at extremely low pressures. We are now entering the next stage: developing an IHCT-based induction heating vapor source for materials with melting points above 2000 K and applying it in our ECRIS.
Chen, Yixi; Guzauskas, Gregory F; Gu, Chengming; Wang, Bruce C M; Furnback, Wesley E; Xie, Guotong; Dong, Peng; Garrison, Louis P
2016-11-02
The "big data" era represents an exciting opportunity to utilize powerful new sources of information to reduce clinical and health economic uncertainty on an individual patient level. In turn, health economic outcomes research (HEOR) practices will need to evolve to accommodate individual patient-level HEOR analyses. We propose the concept of "precision HEOR", which utilizes a combination of costs and outcomes derived from big data to inform healthcare decision-making that is tailored to highly specific patient clusters or individuals. To explore this concept, we discuss the current and future roles of HEOR in health sector decision-making, big data and predictive analytics, and several key HEOR contexts in which big data and predictive analytics might transform traditional HEOR into precision HEOR. The guidance document addresses issues related to the transition from traditional to precision HEOR practices, the evaluation of patient similarity analysis and its appropriateness for precision HEOR analysis, and future challenges to precision HEOR adoption. Precision HEOR should make precision medicine more realizable by aiding and adapting healthcare resource allocation. The combined hopes for precision medicine and precision HEOR are that individual patients receive the best possible medical care while overall healthcare costs remain manageable or become more cost-efficient.
Chen, Yixi; Guzauskas, Gregory F.; Gu, Chengming; Wang, Bruce C. M.; Furnback, Wesley E.; Xie, Guotong; Dong, Peng; Garrison, Louis P.
2016-01-01
The “big data” era represents an exciting opportunity to utilize powerful new sources of information to reduce clinical and health economic uncertainty on an individual patient level. In turn, health economic outcomes research (HEOR) practices will need to evolve to accommodate individual patient–level HEOR analyses. We propose the concept of “precision HEOR”, which utilizes a combination of costs and outcomes derived from big data to inform healthcare decision-making that is tailored to highly specific patient clusters or individuals. To explore this concept, we discuss the current and future roles of HEOR in health sector decision-making, big data and predictive analytics, and several key HEOR contexts in which big data and predictive analytics might transform traditional HEOR into precision HEOR. The guidance document addresses issues related to the transition from traditional to precision HEOR practices, the evaluation of patient similarity analysis and its appropriateness for precision HEOR analysis, and future challenges to precision HEOR adoption. Precision HEOR should make precision medicine more realizable by aiding and adapting healthcare resource allocation. The combined hopes for precision medicine and precision HEOR are that individual patients receive the best possible medical care while overall healthcare costs remain manageable or become more cost-efficient. PMID:27827859
NASA Astrophysics Data System (ADS)
Rennick, Chris; Bausi, Francesco; Arnold, Tim
2017-04-01
On the global scale, methane (CH4) concentrations have more than doubled over the last 150 years, and their contribution to the enhanced greenhouse effect is almost half of that due to the increase in carbon dioxide (CO2) over the same period. Microbial activity, fossil fuels, biomass burning, and landfill are the dominant methane sources, with differing annual variabilities; however, in the UK, for example, mixing-ratio measurements from a tall-tower network and regional-scale inversion modelling have thus far been unable to disaggregate emissions from specific source categories with any significant certainty. Measurement of methane isotopologue ratios will provide the additional information needed for more robust sector attribution, which will be important for directing policy action. Here we explore the potential for isotope-ratio measurements to improve the interpretation of atmospheric mixing ratios beyond calculation of total UK emissions, and describe current analytical work at the National Physical Laboratory that will realise deployment of such measurements. We simulate isotopic variations at the four UK greenhouse-gas tall-tower network sites to understand where deployment of the first isotope analyser would be best situated. We calculate the levels of precision needed in both δ13C and δD in order to detect particular scenarios of emissions. Spectroscopic measurement in the infrared by quantum cascade laser (QCL) absorption is a well-established technique for quantifying the mixing ratios of trace species in atmospheric samples and, as was demonstrated in 2016, high-precision measurements are possible when it is coupled to a suitable preconcentrator. The current preconcentration system under development at NPL is designed to make the highest-precision measurements yet of the standard isotope ratios, via a new large-volume cryogenic trap design and controlled thermal desorption into a QCL spectrometer.
Finally, we explore the potential for the measurement of clumped isotopes at high frequency and precision. The doubly substituted 13CH3D isotopologue is a tracer for methane formed at geological temperatures, and will provide additional information for identification of these sources.
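For readers unfamiliar with the δ notation used above: it expresses an isotope ratio relative to a standard, in parts per thousand (permil). The sketch below uses one commonly quoted value for the VPDB 13C/12C reference ratio; the sample ratio is an assumed, roughly atmospheric-methane-like value chosen for illustration.

```python
# delta = (R_sample / R_standard - 1) * 1000, in permil (per mille)
VPDB_R13C = 0.011180  # 13C/12C of the VPDB standard (one commonly quoted value)

def delta_permil(r_sample, r_standard):
    """Delta notation: relative deviation of an isotope ratio, in permil."""
    return (r_sample / r_standard - 1.0) * 1000.0

# Assumed sample ratio, in the range typical of atmospheric methane.
r_sample = 0.0106577
print(round(delta_permil(r_sample, VPDB_R13C), 1))  # → -46.7
```

Sector attribution works because the major source categories (microbial, fossil, pyrogenic) occupy distinct regions of δ13C and δD space, so high-precision δ measurements constrain the source mix.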
Broadband Lidar Technique for Precision CO2 Measurement
NASA Technical Reports Server (NTRS)
Heaps, William S.
2008-01-01
Presented are preliminary experimental results and sensitivity measurements, along with a discussion of our new CO2 lidar system under development. The system employs an erbium-doped fiber amplifier (EDFA) and a superluminescent light-emitting diode (SLED) as the source, with our previously developed Fabry-Perot interferometer subsystem as the detector. Global measurement of the carbon dioxide column, with the aim of discovering and quantifying unknown sources and sinks, has been a high priority for the last decade. The goal of the Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) mission is to significantly enhance the understanding of the role of CO2 in the global carbon cycle. The National Academy of Sciences recommended in its decadal survey that NASA put in orbit a CO2 lidar to satisfy this long-standing need. Existing passive sensors suffer from two shortcomings: their measurement precision can be compromised by path-length uncertainties arising from scattering within the atmosphere, and passive sensors using sunlight cannot observe the column at night. Both of these difficulties can be ameliorated by lidar techniques. Lidar systems present their own set of problems, however. Temperature changes in the atmosphere alter the cross section of individual CO2 absorption features, while the different atmospheric pressures encountered passing through the atmosphere broaden the absorption lines. Currently proposed lidars require multiple lasers operating at multiple wavelengths simultaneously in order to untangle these effects. The current goal is to develop an ultra-precise, inexpensive new lidar system for precise column measurements of CO2 changes in the lower atmosphere that uses a Fabry-Perot interferometer-based system as the detector portion of the instrument and replaces the narrow-band laser commonly used in lidars with the newly available high-power SLED as the source.
This approach reduces the number of individual lasers used in the system from three or more to one, considerably reducing the risk of failure. It also tremendously reduces the requirement for wavelength stability in the source, putting this responsibility instead on the Fabry-Perot subsystem.
Note: Precise radial distribution of charged particles in a magnetic guiding field
DOE Office of Scientific and Technical Information (OSTI.GOV)
Backe, H., E-mail: backe@kph.uni-mainz.de
2015-07-15
Current high-precision beta decay experiments of polarized neutrons, employing magnetic guiding fields in combination with position-sensitive and energy-dispersive detectors, resulted in a detailed study of the mono-energetic point spread function (PSF) for a homogeneous magnetic field. A PSF describes the radial probability distribution of mono-energetic electrons at the detector plane emitted from a point-like source. With regard to accuracy considerations, unwanted singularities occur as a function of the radial detector coordinate, which have recently been investigated by subdividing the radial coordinate into small bins or employing analytical approximations. In this note, a series expansion of the PSF is presented which can be evaluated numerically with arbitrary precision.
ERIC Educational Resources Information Center
Edge, Brittani; Velandia, Margarita; Lambert, Dayton M.; Roberts, Roland K.; Larson, James A.; English, Burton C.; Boyer, Christopher; Rejesus, Roderick; Mishra, Ashok
2017-01-01
Using information from precision farmer surveys conducted in the southern United States in 2005 and 2013, we evaluated changes in the use of precision farming information sources among cotton producers. Although Extension remains an important source for producers interested in precision farming information, the percentage of cotton producers using…
Li, Qing-Bo; Xu, Yu-Po; Zhang, Chao-Hang; Zhang, Guang-Jun; Wu, Jin-Guang
2009-10-01
A portable nondestructive measuring instrument for plant chlorophyll was developed, which can perform real-time, quick, and nondestructive measurement of chlorophyll. The instrument is mainly composed of four parts: a leaf clamp, a light-source driving circuit, a photoelectric detection and signal conditioning circuit, and a micro-control system. A new light-source driving scheme was proposed, which not only provides a constant current but also allows the current to be controlled digitally. The driving current can be adjusted for different light sources and measurement situations, resolving the mismatch between the output intensity of the light source and the input range of the photoelectric detector. In addition, an integrated leaf clamp was designed, which simplified the optical structure, enhanced the stability of the apparatus, decreased the loss of incident light, and improved the signal-to-noise ratio and precision. The photoelectric detection and signal conditioning circuit converts the optical signal to an electrical signal and conditions it to meet the requirements of AD conversion; the photodetector is a Hamamatsu S1133-14, with high detection precision. The micro-control system mainly handles control, data processing, and data storage. Its core component, the TI MSP430F149 microprocessor, offers high processing speed, low power consumption, and high stability, and its built-in 12-bit AD converter simplifies the data-acquisition circuit, making it well suited to a portable instrument. In the calibration experiment of the instrument, the standard value was measured by a SPAD-502 chlorophyll meter, multiple linear calibration models were built, and the instrument performance was evaluated. 
The correlation coefficient between chlorophyll prediction value and standard value is 0.97, and the root mean square error of prediction is about 1.3 SPAD. In the evaluation experiment of the instrument repeatability, the root mean square error is 0.1 SPAD. Results of the calibration experiment show that the instrument has high measuring precision and high stability.
Proceedings of the 8th Precise Time and Time Interval (PTTI) Applications and Planning Meeting
NASA Technical Reports Server (NTRS)
1977-01-01
The Proceedings contain the papers presented at the Eighth Annual Precise Time and Time Interval (PTTI) Applications and Planning Meeting. The edited record of the discussions following the papers and the panel discussions are also included. This meeting provided a forum for the exchange of information on precise time and frequency technology among members of the scientific community and persons with program applications. The 282 registered attendees came from various U.S. Government agencies, private industry, and universities, and a number of foreign countries were represented. In this meeting, papers were presented that emphasized: (1) definitions and international regulations of precise time sources and users, (2) the scientific foundations of hydrogen maser standards, current developments in this field, and application experience, and (3) how to measure the stability performance properties of precise standards. As in previous meetings, updated and new papers were presented on system applications with past, present, and future requirements identified.
The Magsat precision vector magnetometer
NASA Technical Reports Server (NTRS)
Acuna, M. H.
1980-01-01
This paper examines the Magsat precision vector magnetometer which is designed to measure projections of the ambient field in three orthogonal directions. The system contains a highly stable and linear triaxial fluxgate magnetometer with a dynamic range of + or - 2000 nT (1 nT = 10 to the -9 weber per sq m). The magnetometer electronics, analog-to-digital converter, and digitally controlled current sources are implemented with redundant designs to avoid a loss of data in case of failures. Measurements are carried out with an accuracy of + or - 1 part in 64,000 in magnitude and 5 arcsec in orientation (1 arcsec = 0.00028 deg).
Störmer, M; Gabrisch, H; Horstmann, C; Heidorn, U; Hertlein, F; Wiesmann, J; Siewert, F; Rack, A
2016-05-01
X-ray mirrors are needed for beam shaping and monochromatization at advanced research light sources, for instance, free-electron lasers and synchrotron sources. Such mirrors consist of a substrate and a coating. The shape accuracy of the substrate and the layer precision of the coating are the crucial parameters that determine the beam properties required for various applications. In principle, the selection of the layer materials determines the mirror reflectivity. A single-layer mirror offers high reflectivity in the range of total external reflection, whereas the reflectivity is reduced considerably above the critical angle. A periodic multilayer can enhance the reflectivity at higher angles due to Bragg reflection. Here, the selection of a suitable combination of layer materials is essential to achieve a high flux at distinct photon energies, which is often required for applications such as microtomography, diffraction, or protein crystallography. This contribution presents the current development of a Ru/C multilayer mirror prepared by magnetron sputtering with a sputtering facility that was designed in-house at the Helmholtz-Zentrum Geesthacht. The deposition conditions were optimized in order to achieve ultra-high precision and high flux in future mirrors. Input for the improved deposition parameters came from investigations by transmission electron microscopy. The X-ray optical properties were investigated by means of X-ray reflectometry using Cu- and Mo-radiation. The change of the multilayer d-spacing over the mirror dimensions and the variation of the Bragg angles were determined. The results demonstrate the ability to precisely control the variation in thickness over the whole mirror length of 500 mm, thus achieving picometer-precision in the meter-range.
A High Precision $3.50 Open Source 3D Printed Rain Gauge Calibrator
NASA Astrophysics Data System (ADS)
Lopez Alcala, J. M.; Udell, C.; Selker, J. S.
2017-12-01
Currently available rain gauge calibrators tend to be designed for specific rain gauges, are expensive, employ low-precision water reservoirs, and do not offer the flexibility needed to test the ever more popular small-aperture rain gauges. The objective of this project was to develop and validate a freely downloadable, open-source, 3D printed rain gauge calibrator that can be adjusted for a wide range of gauges. The proposed calibrator provides for applying low, medium, and high intensity flow, and allows the user to modify the design to conform to unique system specifications based on parametric design, which may be modified and printed using CAD software. To overcome the fact that different 3D printers yield different print qualities, we devised a simple post-printing step that controlled critical dimensions to assure robust performance. Specifically, the three orifices of the calibrator are drilled to reach the three target flow rates. Laboratory tests showed that flow rates were consistent between prints, and between trials of each part, while the total applied water was precisely controlled by the use of a volumetric flask as the reservoir.
DISENTANGLING CONFUSED STARS AT THE GALACTIC CENTER WITH LONG-BASELINE INFRARED INTERFEROMETRY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stone, Jordan M.; Eisner, J. A.; Monnier, J. D.
2012-08-01
We present simulations of Keck Interferometer ASTRA and VLTI GRAVITY observations of mock star fields in orbit within ~50 mas of Sgr A*. Dual-field phase referencing techniques, as implemented on ASTRA and planned for GRAVITY, will provide the sensitivity to observe Sgr A* with long-baseline infrared interferometers. Our results show an improvement in the confusion noise limit over current astrometric surveys, opening a window to study stellar sources in the region. Since the Keck Interferometer has only a single baseline, the improvement in the confusion limit depends on source position angles. The GRAVITY instrument will yield a more compact and symmetric point-spread function, providing an improvement in confusion noise which will not depend as strongly on position angle. Our Keck results show the ability to characterize the star field as containing zero, few, or many bright stellar sources. We are also able to detect and track a source down to m_K ~ 18 through the least confused regions of our field of view at a precision of ~200 μas along the baseline direction. This level of precision improves with source brightness. Our GRAVITY results show the potential to detect and track multiple sources in the field. GRAVITY will perform ~10 μas astrometry on an m_K = 16.3 source and ~200 μas astrometry on an m_K = 18.8 source in 6 hr of monitoring a crowded field. Monitoring the orbits of several stars will provide the ability to distinguish between multiple post-Newtonian orbital effects, including those due to an extended mass distribution around Sgr A* and to low-order general relativistic effects. ASTRA and GRAVITY both have the potential to detect and monitor sources very close to Sgr A*. Early characterizations of the field by ASTRA, including the possibility of a precise source detection, could provide valuable information for future GRAVITY implementation and observation.
Overview of Mono-Energetic Gamma-Ray Sources and Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hartemann, Fred; /LLNL, Livermore; Albert, Felicie
2012-06-25
Recent progress in accelerator physics and laser technology has enabled the development of a new class of tunable gamma-ray light sources based on Compton scattering between a high-brightness, relativistic electron beam and a high-intensity laser pulse produced via chirped-pulse amplification (CPA). A precision, tunable Mono-Energetic Gamma-ray (MEGa-ray) source driven by a compact, high-gradient X-band linac is currently under development and construction at LLNL. High-brightness, relativistic electron bunches produced by an X-band linac designed in collaboration with SLAC National Accelerator Laboratory will interact with a Joule-class, 10 ps, diode-pumped CPA laser pulse to generate tunable γ-rays in the 0.5-2.5 MeV photon energy range via Compton scattering. This MEGa-ray source will be used to excite nuclear resonance fluorescence in various isotopes. Applications include homeland security, stockpile science and surveillance, nuclear fuel assay, and waste imaging and assay. The source design, key parameters, and current status are presented, along with important applications, including nuclear resonance fluorescence.
Preparation of alpha sources using magnetohydrodynamic electrodeposition for radionuclide metrology.
Panta, Yogendra M; Farmer, Dennis E; Johnson, Paula; Cheney, Marcos A; Qian, Shizhi
2010-02-01
Expanded use of nuclear fuel as an energy resource and terrorist threats to public safety clearly require the development of new state-of-the-art technologies and improvement of safety measures to minimize the exposure of people to radiation and the accidental release of radiation into the environment. The precision in radionuclide metrology is currently limited by the source quality rather than the detector performance. Electrodeposition is a commonly used technique to prepare massless radioactive sources. Unfortunately, the radioactive sources prepared by the conventional electrodeposition method produce poor resolution in alpha spectrometric measurements. Preparing radioactive sources with better resolution and higher yield in the alpha spectrometric range by integrating magnetohydrodynamic convection with the conventional electrodeposition technique was proposed and tested by preparing mixed alpha sources containing uranium isotopes ((238)U, (234)U), plutonium ((239)Pu), and americium ((241)Am) for alpha spectrometric determination. The effects of various parameters such as magnetic flux density, deposition current and time, and pH of the sample solution on the formed massless radioactive sources were also experimentally investigated. Copyright 2009 Elsevier Inc. All rights reserved.
High-precision half-life measurements of the T = 1/2 mirror β decays 17F and 33Cl
NASA Astrophysics Data System (ADS)
Grinyer, J.; Grinyer, G. F.; Babo, M.; Bouzomita, H.; Chauveau, P.; Delahaye, P.; Dubois, M.; Frigot, R.; Jardin, P.; Leboucher, C.; Maunoury, L.; Seiffert, C.; Thomas, J. C.; Traykov, E.
2015-10-01
Background: Measurements of the ft values for T = 1/2 mirror β+ decays offer a method to test the conserved vector current hypothesis and to determine Vud, the up-down matrix element of the Cabibbo-Kobayashi-Maskawa matrix. In most mirror decays used for these tests, uncertainties in the ft values are dominated by the uncertainties in the half-lives. Purpose: Two precision half-life measurements were performed for the T = 1/2 β+ emitters 17F and 33Cl, in order to eliminate the half-life as the leading source of uncertainty in their ft values. Method: Half-lives of 17F and 33Cl were determined using β counting of implanted radioactive ion beam samples on a moving tape transport system at the Système de Production d'Ions Radioactifs Accélérés en Ligne low-energy identification station at the Grand Accélérateur National d'Ions Lourds. Results: The 17F half-life result, 64.347(35) s, precise to ±0.05%, is a factor of 5 more precise than the previous world average. The half-life of 33Cl was determined to be 2.5038(22) s. The current precision of ±0.09% is nearly 2 times more precise than the previous world average. Conclusions: The precision achieved during the present measurements implies that the half-life no longer dominates the uncertainty of the ft values for both T = 1/2 mirror decays 17F and 33Cl.
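As a quick arithmetic check, the fractional precisions quoted in this abstract follow directly from the quoted half-lives and uncertainties; the inverse-variance weighted mean shown below is the standard recipe for combining such measurements into a world average (the second measurement in the usage lines is an invented placeholder, not a real datum):

```python
import math

def fractional_precision(value, uncertainty):
    """Relative 1-sigma precision of a measurement, in percent."""
    return 100.0 * uncertainty / value

def weighted_mean(values, sigmas):
    """Inverse-variance weighted average, the usual way independent
    half-life measurements are combined into a world average."""
    weights = [1.0 / s**2 for s in sigmas]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    sigma = math.sqrt(1.0 / sum(weights))
    return mean, sigma

# Quoted results from the abstract:
print(round(fractional_precision(64.347, 0.035), 3))   # ~0.054 %, quoted as ±0.05 % for 17F
print(round(fractional_precision(2.5038, 0.0022), 3))  # ~0.088 %, quoted as ±0.09 % for 33Cl

# Combining with a hypothetical older, less precise measurement:
mean, sigma = weighted_mean([64.347, 64.5], [0.035, 0.18])
```

Note how the combined uncertainty is always smaller than the best single input, which is why one precise measurement can dominate a world average.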
A Hot-Wire Method Based Thermal Conductivity Measurement Apparatus for Teaching Purposes
ERIC Educational Resources Information Center
Alvarado, S.; Marin, E.; Juarez, A. G.; Calderon, A.; Ivanov, R.
2012-01-01
The implementation of an automated system based on the hot-wire technique is described for the measurement of the thermal conductivity of liquids using equipment easily available in modern physics laboratories at high schools and universities (basically a precision current source and a voltage meter, a data acquisition card, a personal computer…
Modeling and analysis of sub-surface leakage current in nano-MOSFET under cutoff regime
NASA Astrophysics Data System (ADS)
Swami, Yashu; Rai, Sanjeev
2017-02-01
The high leakage current in nano-meter regimes is becoming a significant portion of power dissipation in nano-MOSFET circuits as threshold voltage, channel length, and gate oxide thickness are scaled down to the nano-meter range. Precise leakage-current evaluation, and meticulous modeling of it at the nano-meter technology scale, is increasingly critical in designing low-power nano-MOSFET circuits. We present a compact model for the sub-threshold-regime leakage current in bulk-driven nano-MOSFETs. The proposed analytical model is implemented in the latest updated PTM bulk nano-MOSFET model and is found to be in good agreement with technology-CAD simulation data. This paper also reviews various intrinsic transistor leakage mechanisms for nano-MOSFETs, exclusively in weak inversion, such as drain-induced barrier lowering (DIBL), gate-induced drain leakage (GIDL), and gate oxide tunneling (GOT) leakage. The root cause of the sub-surface leakage current is the nano-scale short channel length, which causes source-drain coupling even in the sub-threshold domain, allowing carriers to surmount the barrier between the source and drain. The enhanced model accounts for the following parameter dependences: drain-to-source bias (VDS), gate-to-source bias (VGS), channel length (LG), source/drain junction depth (Xj), bulk doping concentration (NBULK), and operating temperature (Top).
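The DIBL mechanism this abstract describes can be illustrated with the standard textbook sub-threshold current expression. This is a generic sketch, not the paper's enhanced compact model, and every parameter value below (threshold voltage, DIBL coefficient, slope factor, prefactor) is a hypothetical placeholder:

```python
import math

def subthreshold_current(vgs, vds, vth0=0.3, eta=0.08, n=1.4,
                         i0=1e-7, temp=300.0):
    """Textbook sub-threshold leakage sketch (not the paper's model).

    DIBL lowers the effective threshold by eta*VDS, so leakage grows
    with drain bias even at VGS = 0; current rises exponentially with
    gate overdrive, with slope factor n.

    vth0 : zero-bias threshold voltage [V]  (hypothetical value)
    eta  : DIBL coefficient [V/V]           (hypothetical value)
    i0   : process-dependent prefactor [A]  (hypothetical value)
    """
    vt = 1.380649e-23 * temp / 1.602176634e-19  # thermal voltage kT/q
    vth = vth0 - eta * vds                      # DIBL-reduced threshold
    return i0 * math.exp((vgs - vth) / (n * vt)) * (1.0 - math.exp(-vds / vt))

# Off-state leakage (VGS = 0) increases with drain bias -- the DIBL effect:
low  = subthreshold_current(0.0, 0.1)
high = subthreshold_current(0.0, 1.0)
```

The exponential dependence on (VGS - Vth) is why small threshold-voltage reductions at short channel lengths translate into large leakage increases.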
Near-Continuous Isotopic Characterization of Soil N2O Fluxes from Maize Production
NASA Astrophysics Data System (ADS)
Anex, R. P.; Francis Clar, J.
2015-12-01
Isotopomer ratios of N2O, and especially intramolecular 15N site preference (SP), have been proposed as indicators of the sources of N2O and for providing insight into the contributions of different microbial processes. Current knowledge, however, is mainly based on pure-culture studies and laboratory flask studies using mass spectrometric analysis. Recent development of laser spectroscopic methods has made possible high-precision, in situ measurements. We present results from a maize production field in Columbia County, Wisconsin, USA. Data were collected from the fertilized maize phase of a maize-soybean rotation. N2O mole fractions and isotopic composition were determined using an automatic gas flux measurement system comprising a set of custom-designed automatic chambers, circulating gas paths, and an OA-ICOS N2O Isotope Analyzer (Los Gatos Research, Inc., Model 914-0027). The instrument system allows for up to 15 user-programmable soil gas chambers. The wide dynamic range and parts-per-billion precision of the OA-ICOS laser absorption instrument allow for extremely rapid estimation of N2O fluxes. Current operational settings provide measurements of N2O and its isotopes every 20 seconds with a precision of 0.1 ± 0.050 ppb. Comparison of measurements from four chambers (two between-row and two in-row) shows very different aggregate N2O flux, but SP values suggest similar sources from nitrifier denitrification and incomplete bacterial denitrification. SP values are being measured throughout the current growing season. To date, the majority of values are consistent with an origin from bacterial denitrification and coincide with periods of high water-filled pore space.
Nearby Dwarf Stars: Duplicity, Binarity, and Masses
NASA Astrophysics Data System (ADS)
Mason, Brian D.; Hartkopf, William I.; Henry, Todd J.; Jao, Wei-Chun; Subasavage, John; Riedel, Adric; Winters, Jennifer
2010-02-01
Double stars have proven to be both a blessing and a curse for astronomers since their discovery over two centuries ago. They remain the only reliable source of masses, the most fundamental parameter defining stars. On the other hand, their sobriquet ``vermin of the sky'' is well-earned, due to the complications they present to both observers and theoreticians. These range from non-linear proper motions to stray light in detectors, to confusion in pointing of instruments due to non-symmetric point spread functions, to angular momentum conservation in multiple stars which results in binaries closer than allowed by evolution of two single stars. This proposal is primarily focused on targets where precise astrophysical information is sorely lacking: white dwarfs, red dwarfs, and subdwarfs. The proposed work will refine current statistics regarding duplicity (chance alignments of nearby point sources) and binarity (actual physical relationships), and improve the precisions and accuracies of stellar masses. Several targets support Riedel's and Winters' theses.
Implementation of Design Changes Towards a More Reliable, Hands-off Magnetron Ion Source
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sosa, A.; Bollinger, D. S.; Karns, P. R.
As the main H- ion source for the accelerator complex, magnetron ion sources have been used at Fermilab since the 1970s. At the offline test stand, new R&D is carried out to develop and upgrade the present magnetron-type sources of H- ions of up to 80 mA and 35 keV beam energy in the context of the Proton Improvement Plan. The aim of this plan is to provide high-power proton beams for the experiments at FNAL. In order to reduce the amount of tuning and monitoring of these ion sources, a new electronic system consisting of a current-regulated arc discharge modulator allows the ion source to run at a constant arc current for improved beam output and operation. A solenoid-type gas valve feeds H2 gas into the source precisely and independently of ambient temperature. This summary covers several studies and design changes that have been tested and will eventually be implemented on the operational magnetron sources at Fermilab. Innovative results for this type of ion source include cathode geometries, solenoid gas valves, a current-controlled arc pulser, a cesium boiler redesign, gas mixtures of hydrogen and nitrogen, and duty factor reduction, with the aim of improving source lifetime and stability and reducing the amount of tuning needed. In this summary, I will highlight the advances made in ion sources at Fermilab and outline the directions of the continuing R&D effort.
NASA Astrophysics Data System (ADS)
Altsybeyev, V. V.
2016-12-01
The implementation of numerical methods for studying the dynamics of particle flows produced by pulsed sources is discussed. A particle tracking method with so-called gun iteration is used for simulations of beam dynamics. For the space-charge-limited emission problem, we suggest a Gauss law emission model for precise current-density calculation in the case of a curvilinear emitter. The results of numerical simulations of particle-flow formation for a cylindrical bipolar diode and for a diode with an elliptical emitter are presented.
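For reference, the classical planar-diode limit that Gauss-law emission models generalize is the Child-Langmuir law. The sketch below implements only that textbook planar case for electrons, not the curvilinear-emitter model of the paper:

```python
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity [F/m]
Q_E  = 1.602176634e-19    # elementary charge [C]
M_E  = 9.1093837015e-31   # electron mass [kg]

def child_langmuir_j(voltage, gap):
    """Space-charge-limited current density [A/m^2] for an ideal planar
    vacuum diode (Child-Langmuir law):

        J = (4/9) * eps0 * sqrt(2q/m) * V^(3/2) / d^2

    The paper's Gauss-law model extends this to curvilinear emitters;
    the planar form here is only the textbook reference point.
    """
    return (4.0 / 9.0) * EPS0 * math.sqrt(2.0 * Q_E / M_E) \
        * voltage**1.5 / gap**2

# 1 kV across a 1 cm planar gap:
j = child_langmuir_j(1000.0, 0.01)   # roughly 7e2 A/m^2
```

The characteristic V^(3/2) scaling (quadrupling the voltage multiplies the limited current by 8) is the signature of space-charge-limited emission that any emission model, planar or curvilinear, must reproduce in the appropriate limit.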
Experimental considerations for testing antimatter antigravity using positronium 1S-2S spectroscopy
NASA Astrophysics Data System (ADS)
Crivelli, P.; Cooke, D. A.; Friedreich, S.
2014-05-01
In this contribution to the WAG 2013 workshop we report on the status of our measurement of the 1S-2S transition frequency of positronium. The aim of this experiment is to reach a precision of 0.5 ppb in order to cross check the QED calculations. After reviewing the current available sources of Ps, we consider laser cooling as a route to push the precision in the measurement down to 0.1 ppb. If such an uncertainty could be achieved, this would be sensitive to the gravitational redshift and therefore be able to assess the sign of gravity for antimatter.
LOFAR Lightning Imaging: Mapping Lightning With Nanosecond Precision
NASA Astrophysics Data System (ADS)
Hare, B. M.; Scholten, O.; Bonardi, A.; Buitink, S.; Corstanje, A.; Ebert, U.; Falcke, H.; Hörandel, J. R.; Leijnse, H.; Mitra, P.; Mulrey, K.; Nelles, A.; Rachen, J. P.; Rossetto, L.; Rutjes, C.; Schellart, P.; Thoudam, S.; Trinh, T. N. G.; ter Veen, S.; Winchen, T.
2018-03-01
Lightning mapping technology has proven instrumental in understanding lightning. In this work we present a pipeline that can use lightning observed by the LOw-Frequency ARray (LOFAR) radio telescope to construct a 3-D map of the flash. We show that LOFAR has unparalleled precision, on the order of meters, even for lightning flashes that are over 20 km outside the area enclosed by LOFAR antennas (~3,200 km²), and can potentially locate over 10,000 sources per lightning flash. We also show that LOFAR is the first lightning mapping system that is sensitive to the spatial structure of the electrical current during individual lightning leader steps.
Comparative Study of button BPM Trapped Mode Heating
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cameron,P.; Singh, O.
2009-05-04
The combination of short bunches and high currents found in modern light sources and colliders can result in the deposition of tens of watts of power in BPM buttons. The resulting thermal distortion is potentially problematic for maintaining high-precision beam position stability, and in the extreme case can result in mechanical damage. We present a simple algorithm that uses the input parameters of beam current, bunch length, button diameter, beampipe aperture, and fill pattern to calculate a relative figure-of-merit for button heating. Data for many of the world's light sources and colliders are compiled in a table. Using the algorithm, the table is sorted in order of the relative magnitude of button heating.
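The abstract names the algorithm's inputs but not its functional form, so the sketch below is only a plausible placeholder showing how such a relative figure of merit might combine them (heating rising with per-bunch charge squared and with the button-to-aperture ratio, falling as bunches lengthen); the actual algorithm in the paper may differ in both form and exponents:

```python
def button_heating_fom(i_beam, sigma_z, d_button, d_pipe, n_bunches):
    """Hypothetical relative figure of merit for BPM button heating.

    i_beam    : average beam current (relative units)
    sigma_z   : bunch length (relative units)
    d_button  : button diameter
    d_pipe    : beampipe aperture
    n_bunches : number of bunches in the fill pattern

    Wakefield heating per bunch scales with bunch charge squared, and
    total power with the number of bunches; the remaining factors are
    illustrative scalings only, not the paper's formula.
    """
    q_bunch = i_beam / n_bunches                 # per-bunch charge
    return n_bunches * q_bunch**2 * (d_button / d_pipe) / sigma_z

# Shorter bunches heat the button more, at fixed total current:
fom_short = button_heating_fom(0.5, 0.003, 0.007, 0.025, 1000)
fom_long  = button_heating_fom(0.5, 0.012, 0.007, 0.025, 1000)
```

Because only a relative ranking is needed to sort the table, any consistent set of units works; the figure of merit is dimensionless up to a common factor.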
Rankin, Richard; Kotter, Dale
1994-01-01
An optical voltage reference for providing an alternative to a battery source. The optical reference apparatus provides a temperature stable, high precision, isolated voltage reference through the use of optical isolation techniques to eliminate current and impedance coupling errors. Pulse rate frequency modulation is employed to eliminate errors in the optical transmission link while phase-lock feedback is employed to stabilize the frequency to voltage transfer function.
Quantifying errors without random sampling.
Phillips, Carl V; LaPole, Luwanna M
2003-06-12
All quantifications of mortality, morbidity, and other health measures involve numerous sources of error. The routine quantification of random sampling error makes it easy to forget that other sources of error can and should be quantified. When a quantification does not involve sampling, error is almost never quantified and results are often reported in ways that dramatically overstate their precision. We argue that the precision implicit in typical reporting is problematic and sketch methods for quantifying the various sources of error, building up from simple examples that can be solved analytically to more complex cases. There are straightforward ways to partially quantify the uncertainty surrounding a parameter that is not characterized by random sampling, such as limiting reported significant figures. We present simple methods for doing such quantifications, and for incorporating them into calculations. More complicated methods become necessary when multiple sources of uncertainty must be combined. We demonstrate that Monte Carlo simulation, using available software, can estimate the uncertainty resulting from complicated calculations with many sources of uncertainty. We apply the method to the current estimate of the annual incidence of foodborne illness in the United States. Quantifying uncertainty from systematic errors is practical. Reporting this uncertainty would more honestly represent study results, help show the probability that estimated values fall within some critical range, and facilitate better targeting of further research.
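The Monte Carlo approach the abstract describes can be sketched in a few lines: draw each uncertain input from its own distribution, combine them through the calculation, and read off percentiles of the result. The three error sources and their distributions below are invented for illustration and do not reproduce the paper's foodborne-illness inputs:

```python
import random

def mc_uncertainty(n_draws=100_000, seed=42):
    """Monte Carlo propagation of several non-sampling error sources,
    in the spirit of the paper's approach. Hypothetical model:
    incidence = reported cases * under-reporting factor * misdiagnosis
    correction, each with its own (invented) uncertainty distribution.
    Returns the median estimate and a 95% uncertainty interval.
    """
    rng = random.Random(seed)
    results = []
    for _ in range(n_draws):
        cases = rng.normalvariate(10_000, 500)   # counting uncertainty
        underreport = rng.uniform(1.5, 3.0)      # expert-judgment range
        misdiag = rng.lognormvariate(0.0, 0.1)   # multiplicative error
        results.append(cases * underreport * misdiag)
    results.sort()
    median = results[n_draws // 2]
    lo = results[int(0.025 * n_draws)]
    hi = results[int(0.975 * n_draws)]
    return median, (lo, hi)

median, (lo, hi) = mc_uncertainty(20_000)
```

Note that the width of the resulting interval is dominated by the widest input distribution (here the under-reporting factor), which is exactly the diagnostic the paper argues typical reporting hides.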
A Broad Band Lidar for Precise Atmospheric CO2 Column Absorption Measurement from Space
NASA Technical Reports Server (NTRS)
Georgieva, E. M.; Heaps, W. S.; Huang, W.
2010-01-01
Accurate global measurement of carbon dioxide column with the aim of discovering and quantifying unknown sources and sinks has been a high priority for the last decade. In order to uncover the "missing sink" that is responsible for the large discrepancies in the budget, the critical precision for a measurement from space needs to be on the order of 1 ppm. To better understand the CO2 budget and to evaluate its impact on global warming, the National Research Council (NRC) in its recent decadal survey report (NACP) to NASA recommended a laser-based total CO2 mapping mission in the near future. That is the goal of the Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) mission: to significantly enhance the understanding of the role of CO2 in the global carbon cycle. Our current goal is to develop an ultra-precise, inexpensive new lidar system for column measurements of CO2 changes in the lower atmosphere that uses a Fabry-Perot interferometer based system as the detector portion of the instrument and replaces the narrow-band laser commonly used in lidars with a high-power broadband source. This approach reduces the number of individual lasers used in the system and considerably reduces the risk of failure. It also tremendously reduces the requirement for wavelength stability in the source, putting this responsibility instead on the Fabry-Perot subsystem.
Development of a novel low frequency GPR system for ultra-deep detection in Mine
NASA Astrophysics Data System (ADS)
Xu, Xianlei; Peng, Suping; Yang, Feng
2016-04-01
Mine disaster sources are the main cause of underground coal mine accidents in China. This paper describes the development of a novel explosion-proof ground penetrating radar (GPR) for the detection of mine disaster sources, aiming to solve the current problems of small detection range and low precision in mine advanced detection in China. A high-performance unipolar pulse transmitting unit is developed using avalanche transistors and an effective pulse excitation source network. A new pluggable combined low-frequency antenna covering three frequencies (12.5 MHz, 25 MHz and 50 MHz) is designed and developed. A plate-type structure is adopted to enhance the directivity of the antenna, and antenna impedance matching is implemented at the feed point based on the extension interface design, enhancing the antenna bandwidth and reducing standing wave interference. Moreover, a high-precision stepper delay circuit is designed by transforming the number of operational amplifier steps and using differential compensation between the metal-oxide-semiconductor field-effect transistors, improving the accuracy of the signal acquisition system. In order to adapt to the mine environment, an explosion-proof design is implemented for the GPR system, including the host, transmitter, receiver, battery box, antenna, and other components. Mine detection experiments were carried out, and the results show that the novel GPR system can effectively detect the location and depth of geological disaster sources with depths greater than 30 m and diameters greater than 3 m; the maximum detection depth can reach 80 m, breaking the current detection depth limit of 30 m and providing effective technical support for ultra-deep mine disaster detection and for safety problems in coal mine production.
DStat: A Versatile, Open-Source Potentiostat for Electroanalysis and Integration.
Dryden, Michael D M; Wheeler, Aaron R
2015-01-01
Most electroanalytical techniques require the precise control of the potentials in an electrochemical cell using a potentiostat. Commercial potentiostats function as "black boxes," giving limited information about their circuitry and behaviour which can make development of new measurement techniques and integration with other instruments challenging. Recently, a number of lab-built potentiostats have emerged with various design goals including low manufacturing cost and field-portability, but notably lacking is an accessible potentiostat designed for general lab use, focusing on measurement quality combined with ease of use and versatility. To fill this gap, we introduce DStat (http://microfluidics.utoronto.ca/dstat), an open-source, general-purpose potentiostat for use alone or integrated with other instruments. DStat offers picoampere current measurement capabilities, a compact USB-powered design, and user-friendly cross-platform software. DStat is easy and inexpensive to build, may be modified freely, and achieves good performance at low current levels not accessible to other lab-built instruments. In head-to-head tests, DStat's voltammetric measurements are much more sensitive than those of "CheapStat" (a popular open-source potentiostat described previously), and are comparable to those of a compact commercial "black box" potentiostat. Likewise, in head-to-head tests, DStat's potentiometric precision is similar to that of a commercial pH meter. Most importantly, the versatility of DStat was demonstrated through integration with the open-source DropBot digital microfluidics platform. In sum, we propose that DStat is a valuable contribution to the "open source" movement in analytical science, which is allowing users to adapt their tools to their experiments rather than alter their experiments to be compatible with their tools.
First β-ν correlation measurement from the recoil-energy spectrum of Penning trapped Ar35 ions
NASA Astrophysics Data System (ADS)
Van Gorp, S.; Breitenfeldt, M.; Tandecki, M.; Beck, M.; Finlay, P.; Friedag, P.; Glück, F.; Herlert, A.; Kozlov, V.; Porobic, T.; Soti, G.; Traykov, E.; Wauters, F.; Weinheimer, Ch.; Zákoucký, D.; Severijns, N.
2014-08-01
We demonstrate a novel method to search for physics beyond the standard model by determining the β-ν angular correlation from the recoil-ion energy distribution after β decay of ions stored in a Penning trap. This recoil-ion energy distribution is measured with a retardation spectrometer. The unique combination of the spectrometer with a Penning trap provides a number of advantages, e.g., a high recoil-ion count rate and low sensitivity to the initial position and velocity distribution of the ions and completely different sources of systematic errors compared to other state-of-the-art experiments. Results of a first measurement with the isotope Ar35 are presented. Although currently at limited precision, we show that a statistical precision of about 0.5% is achievable with this unique method, thereby opening up the possibility of contributing to state-of-the-art searches for exotic currents in weak interactions.
Radiological and Radionuclide Imaging of Degenerative Disease of the Facet Joints
Shur, Natalie; Corrigan, Alexis; Agrawal, Kanhaiyalal; Desai, Amidevi; Gnanasegaran, Gopinath
2015-01-01
The facet joint has been increasingly implicated as a potential source of lower back pain. Diagnosis can be challenging as there is not a direct correlation between facet joint disease and clinical or radiological features. The purpose of this article is to review the diagnosis, treatment, and current imaging modality options in the context of degenerative facet joint disease. We describe each modality in turn with a pictorial review using current evidence. Newer hybrid imaging techniques such as single photon emission computed tomography/computed tomography (SPECT/CT) provide additional information relative to the historic gold standard magnetic resonance imaging. The diagnostic benefits of SPECT/CT include precise localization and characterization of spinal lesions and improved diagnosis for lower back pain. It may have a role in selecting patients for local therapeutic injections, as well as guiding their location with increased precision. PMID:26170560
NASA Astrophysics Data System (ADS)
Yacovitch, Tara; Shorter, Joanne; Nelson, David; Herndon, Scott; Agnese, Mike; McManus, Barry; Zahniser, Mark
2017-04-01
In order to understand how and why methane (CH4) concentrations change over time, it is necessary to understand their sources and sinks. Stable isotope measurements of the 13CH4:12CH4 and CH3D:12CH4 ratios constrain the inventory of these sinks and sources. Current measurements often depend on Isotope Ratio Mass Spectrometry (IRMS), which requires extensive sample preparation including cryogenic separation of methane from air and subsequent conversion to either CO2 or H2. Here, we detail improvements to a direct-absorption laser spectrometer that enable fast and precise measurements of methane isotope ratios (δ13C and δ2H) of ambient air samples, without such sample preparation. The measurement system consists of a laser-based direct absorption spectrometer configured with a sample manifold for measurement of discrete samples (as opposed to flow-through measurements). Samples are trapped in the instrument using a rapid sample switching technique that compares each flask sample against a monitor tank sample. This approach reduces instrument drift and results in excellent precision. Precisions of 0.054‰ for δ13C and 1.4‰ for δ2H have been achieved (Allan-Werle deviations). These results are obtained in 20 minutes using 4 replicate comparisons to a monitor tank.
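The quoted Allan(-Werle) deviations characterize how precision improves with averaging. A minimal non-overlapping implementation, assuming equally spaced measurements (the data here are synthetic white noise, not the instrument's):

```python
import numpy as np

def allan_deviation(y, m):
    """Non-overlapping Allan deviation for averaging factor m.

    y : 1-D array of equally spaced measurements (e.g. delta-13C values)
    m : number of consecutive samples averaged per bin
    """
    n_bins = len(y) // m
    bins = np.mean(np.reshape(y[:n_bins * m], (n_bins, m)), axis=1)
    # Allan variance: half the mean squared difference of adjacent bin means.
    avar = 0.5 * np.mean(np.diff(bins) ** 2)
    return np.sqrt(avar)

# For white noise the deviation averages down roughly as 1/sqrt(m);
# drift would make it flatten or rise again at large m.
rng = np.random.default_rng(0)
y = rng.normal(0.0, 0.1, size=10_000)
print(allan_deviation(y, 1), allan_deviation(y, 100))
```

Plotting this deviation against the averaging time yields the usual Allan plot, whose minimum identifies the optimal averaging interval before drift dominates.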
Luu, Phan; Essaki Arumugam, Easwara Moorthy; Anderson, Erik; Gunn, Amanda; Rech, Dennis; Turovets, Sergei; Tucker, Don M.
2016-01-01
In pain management as well as other clinical applications of neuromodulation, it is important to consider the timing parameters influencing activity-dependent plasticity, including pulsed versus sustained currents, as well as the spatial action of electrical currents as they polarize the complex convolutions of the cortical mantle. These factors are of course related; studying temporal factors is not possible when the spatial resolution of current delivery to the cortex is so uncertain to make it unclear whether excitability is increased or decreased with anodal vs. cathodal current flow. In the present study we attempted to improve the targeting of specific cortical locations by applying current through flexible source-sink configurations of 256 electrodes in a geodesic array. We constructed a precision electric head model for 12 healthy individuals. Extraction of the individual’s cortical surface allowed computation of the component of the induced current that is normal to the target cortical surface. In an effort to replicate the long-term depression (LTD) induced with pulsed protocols in invasive animal research and transcranial magnetic stimulation studies, we applied 100 ms pulses at 1.9 s intervals either in cortical-surface-anodal or cortical-surface-cathodal directions, with a placebo (sham) control. The results showed significant LTD of the motor evoked potential as a result of the cortical-surface-cathodal pulses in contrast to the placebo control, with a smaller but similar LTD effect for anodal pulses. The cathodal LTD after-effect was sustained over 90 min following current injection. These results support the feasibility of pulsed protocols with low total charge in non-invasive neuromodulation when the precision of targeting is improved with a dense electrode array and accurate head modeling. PMID:27531976
Churnside, Allison B; Sullan, Ruby May A; Nguyen, Duc M; Case, Sara O; Bull, Matthew S; King, Gavin M; Perkins, Thomas T
2012-07-11
Force drift is a significant, yet unresolved, problem in atomic force microscopy (AFM). We show that the primary source of force drift for a popular class of cantilevers is their gold coating, even though they are coated on both sides to minimize drift. Drift of the zero-force position of the cantilever was reduced from 900 nm for gold-coated cantilevers to 70 nm (N = 10; rms) for uncoated cantilevers over the first 2 h after wetting the tip; a majority of these uncoated cantilevers (60%) showed significantly less drift (12 nm, rms). Removing the gold also led to ∼10-fold reduction in reflected light, yet short-term (0.1-10 s) force precision improved. Moreover, improved force precision did not require extended settling; most of the cantilevers tested (9 out of 15) achieved sub-pN force precision (0.54 ± 0.02 pN) over a broad bandwidth (0.01-10 Hz) just 30 min after loading. Finally, this precision was maintained while stretching DNA. Hence, removing gold enables both routine and timely access to sub-pN force precision in liquid over extended periods (100 s). We expect that many current and future applications of AFM can immediately benefit from these improvements in force stability and precision.
Rankin, R.; Kotter, D.
1994-04-26
An optical voltage reference for providing an alternative to a battery source is described. The optical reference apparatus provides a temperature stable, high precision, isolated voltage reference through the use of optical isolation techniques to eliminate current and impedance coupling errors. Pulse rate frequency modulation is employed to eliminate errors in the optical transmission link while phase-lock feedback is employed to stabilize the frequency to voltage transfer function. 2 figures.
NASA Astrophysics Data System (ADS)
Sánchez, Daniel; Kraus, F. Bernhard; Hernández, Manuel De Jesús; Vandame, Rémy
2007-07-01
Recruitment precision, i.e. the proportion of recruits that reach an advertised food source, is a crucial adaptation of social bees to their environment. Studies with honeybees showed that recruitment precision is not a fixed feature, but it may be enhanced by factors like experience and distance. However, little is known regarding the recruitment precision of stingless bees. Hence, in this study, we examined the effects of experience and spatial distance on the precision of the food communication system of the stingless bee Scaptotrigona mexicana. We conducted the experiments by training bees to a three-dimensional artificial patch at several distances from the colony. We recorded the choices of individual recruited foragers, either being newcomers (foragers without experience with the advertised food source) or experienced (foragers that had previously visited the feeder). We found that the average precision of newcomers (95.6 ± 2.61%) was significantly higher than that of experienced bees (80.2 ± 1.12%). While this might seem counter-intuitive on first sight, this “loss” of precision can be explained by the tendency of experienced recruits to explore nearby areas to find new rewarding food sources after they had initially learned the exact location of the food source. Increasing the distance from the colony had no significant effect on the precision of the foraging bees. Thus, our data show that experience, but not the distance of the food source, affected the patch precision of S. mexicana foragers.
NASA Astrophysics Data System (ADS)
Wang, Shilong; Yin, Changchun; Lin, Jun; Yang, Yu; Hu, Xueyan
2016-03-01
Cooperative work of multiple magnetic transmitting sources is a new trend in the development of transient electromagnetic systems. The key is the concurrent shutdown of bipolar current waves in the inductive loads. In the past, it was difficult to use the constant clamping voltage technique to realize the synchronized shutdown of currents with different peak values. Based on the clamping voltage technique, we introduce a new controlling method with constant shutdown time: the rising time is used to control the shutdown time, and a low-voltage power source is used to control the peak current. From the viewpoint of circuit energy loss, by taking the high-voltage capacitor bypass resistance and the capacitor of the passive snubber circuit into account, we establish the relationship between the rising time and the shutdown time. Since the switch is not ideal, we propose a new method to determine the shutdown time from the low voltage, the high voltage and the peak current. Experimental results show that adjustment of the current rising time can precisely control the value of the clamp voltage. When the rising time is fixed, the shutdown time is unchanged. The error for the shutdown time deduced from the energy consumption is less than 6%. The new current-shutdown controlling method proposed in this paper can be used in the cooperative work of borehole and ground transmitting systems.
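The trade-off the abstract describes rests on basic inductor physics. Ignoring the snubber and bypass losses it accounts for, a simplified sketch (all component values hypothetical) shows the exponential rise under a low-voltage source and the idealized linear turn-off against a constant clamp voltage:

```python
import math

L = 0.5          # transmitter loop inductance, henries (hypothetical)
R = 1.0          # loop resistance, ohms (hypothetical)
I_peak = 20.0    # programmed peak current, amperes (hypothetical)
V_low = 48.0     # low-voltage supply driving the rise, volts
V_clamp = 400.0  # clamp voltage applied during shutdown, volts

# Exponential rise under the low-voltage source: i(t) = (V/R)(1 - exp(-tR/L)).
# Time to reach I_peak (requires V_low > I_peak * R):
t_rise = -(L / R) * math.log(1.0 - I_peak * R / V_low)

# Idealized linear turn-off against a constant clamp voltage:
# di/dt = -V_clamp / L  ->  t_off = L * I_peak / V_clamp.
t_off = L * I_peak / V_clamp

print(f"t_rise ~ {t_rise*1e3:.1f} ms, t_off ~ {t_off*1e3:.1f} ms")
```

In this idealized model t_off depends only on L, I_peak and V_clamp, which is why fixing the clamp voltage via the rising time yields a constant shutdown time; the paper's actual relationship additionally includes the energy-loss terms noted above.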
Larson, Kristine M.; Poland, Michael; Miklius, Asta
2010-01-01
The global positioning system (GPS) is one of the most common techniques, and the current state of the art, used to monitor volcano deformation. In addition to slow (several centimeters per year) displacement rates, GPS can be used to study eruptions and intrusions that result in much larger (tens of centimeters over hours-days) displacements. It is challenging to resolve precise positions using GPS at subdaily time intervals because of error sources such as multipath and atmospheric refraction. In this paper, the impact of errors due to multipath and atmospheric refraction at subdaily periods is examined using data from the GPS network on Kīlauea Volcano, Hawai'i. Methods for filtering position estimates to enhance precision are both simulated and tested on data collected during the June 2007 intrusion and eruption. Comparisons with tiltmeter records show that GPS instruments can precisely recover the timing of the activity.
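Multipath error at a static GPS site repeats with the satellite geometry, i.e. roughly once per sidereal day, so one common filtering approach subtracts the previous day's aligned residuals. The sketch below uses synthetic data and a generic sidereal filter; it illustrates the idea, not the paper's exact method.

```python
import numpy as np

rng = np.random.default_rng(1)
samples_per_day = 2868  # approx. 30-s samples per sidereal day (assumption)

# Synthetic two-day position series: repeating multipath + white noise,
# plus a 5 cm step on day 2 standing in for an intrusion-related offset.
multipath = 0.01 * np.sin(np.linspace(0, 40 * np.pi, samples_per_day))
noise = rng.normal(0, 0.002, size=2 * samples_per_day)
signal = np.concatenate([np.zeros(samples_per_day),
                         0.05 * np.ones(samples_per_day)])
series = np.tile(multipath, 2) + noise + signal

# Sidereal filter: subtract the series lagged by one sidereal day.
# The repeating multipath cancels; the geophysical offset survives.
day2_filtered = series[samples_per_day:] - series[:samples_per_day]
print(np.mean(day2_filtered))
```

The cost of the differencing is a sqrt(2) increase in white noise, which is why such filters pay off only when multipath dominates the error budget.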
Problems, challenges and promises: perspectives on precision medicine.
Duffy, David J
2016-05-01
The 'precision medicine (systems medicine)' concept promises to achieve a shift to future healthcare systems with a more proactive and predictive approach to medicine, where the emphasis is on disease prevention rather than the treatment of symptoms. The individualization of treatment for each patient will be at the centre of this approach, with all of a patient's medical data being computationally integrated and accessible. Precision medicine is being rapidly embraced by biomedical researchers, pioneering clinicians and scientific funding programmes in both the European Union (EU) and USA. Precision medicine is a key component of both Horizon 2020 (the EU Framework Programme for Research and Innovation) and the White House's Precision Medicine Initiative. Precision medicine promises to revolutionize patient care and treatment decisions. However, the participants in precision medicine are faced with a considerable central challenge. Greater volumes of data from a wider variety of sources are being generated and analysed than ever before; yet, this heterogeneous information must be integrated and incorporated into personalized predictive models, the output of which must be intelligible to non-computationally trained clinicians. Drawing primarily from the field of 'oncology', this article will introduce key concepts and challenges of precision medicine and some of the approaches currently being implemented to overcome these challenges. Finally, this article also covers the criticisms of precision medicine overpromising on its potential to transform patient care. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
García-Magariño, Iván; Lacuesta, Raquel; Lloret, Jaime
2018-03-27
Smart communication protocols are becoming a key mechanism for improving communication performance in networks such as wireless sensor networks. However, the literature lacks mechanisms for simulating smart communication protocols in precision agriculture for decreasing production costs. In this context, the current work presents an agent-based simulator of smart communication protocols for efficiently managing pesticides. The simulator considers the needs of electric power, crop health, percentage of alive bugs and pesticide consumption. The current approach is illustrated with three different communication protocols respectively called (a) broadcast, (b) neighbor and (c) low-cost neighbor. The low-cost neighbor protocol obtained a statistically-significant reduction in the need of electric power over the neighbor protocol, with a very large difference according to the common interpretations about the Cohen's d effect size. The presented simulator is called ABS-SmartComAgri and is freely distributed as open-source from a public research data repository. It ensures the reproducibility of experiments and allows other researchers to extend the current approach.
2018-01-01
Smart communication protocols are becoming a key mechanism for improving communication performance in networks such as wireless sensor networks. However, the literature lacks mechanisms for simulating smart communication protocols in precision agriculture for decreasing production costs. In this context, the current work presents an agent-based simulator of smart communication protocols for efficiently managing pesticides. The simulator considers the needs of electric power, crop health, percentage of alive bugs and pesticide consumption. The current approach is illustrated with three different communication protocols respectively called (a) broadcast, (b) neighbor and (c) low-cost neighbor. The low-cost neighbor protocol obtained a statistically-significant reduction in the need of electric power over the neighbor protocol, with a very large difference according to the common interpretations about the Cohen’s d effect size. The presented simulator is called ABS-SmartComAgri and is freely distributed as open-source from a public research data repository. It ensures the reproducibility of experiments and allows other researchers to extend the current approach. PMID:29584703
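The Cohen's d effect size cited above is straightforward to compute with a pooled sample standard deviation. The power-need samples below are invented for illustration, not taken from the ABS-SmartComAgri experiments:

```python
import statistics

def cohens_d(a, b):
    """Cohen's d with pooled sample standard deviation."""
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)  # sample variances
    pooled_sd = (((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / pooled_sd

# Hypothetical electric-power needs (arbitrary units) under two protocols:
neighbor = [10.2, 9.8, 10.5, 10.1, 9.9]
low_cost = [8.1, 7.9, 8.4, 8.0, 8.2]
print(cohens_d(neighbor, low_cost))
```

Under the common interpretation the abstract invokes, |d| above roughly 0.8 is "large" and above 1.2 "very large", which is the scale on which the low-cost neighbor protocol's reduction was judged.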
Evaluation of DCS III Transmission Alternatives, Phase 1B.
1980-09-30
Most commonly used measures are straight and precision tubing, dielectric lining, and helix construction. These measures make the millimeter waveguide...channel transistorized and microprocessor-controlled L5E. The broadband signal, either analog or digital, can be transmitted over a coaxial cable...kilowatts. One kind of mm source is travelling wave tubes (TWT), which are currently under development in the frequency range from 20 to 50 GHz with
Measuring Sizes & Shapes of Galaxies
NASA Astrophysics Data System (ADS)
Kusmic, Samir; Holwerda, Benne Willem
2018-01-01
Software is used to compute galaxy morphometrics, cutting down the time needed to categorize galaxies. However, new surveys arriving in the next decade are expected to count upwards of a thousand times more galaxies than current surveys, which would greatly lengthen data processing. In this research, we looked into how to reduce the time it takes to obtain morphometric parameters for classifying galaxies, and how precise the results are compared with other findings. The software of choice is Source Extractor, known for its short run-time and recently updated to compute morphometric parameters. The test is done by running CANDELS data, five fields in the J and H filters, through Source Extractor and then cross-correlating the new catalog with one created with GALFIT, obtained from van der Wel et al. 2014, and then with spectroscopic redshift data. With Source Extractor, we examine how many galaxies are counted, how precise the computation is, how to classify the morphometry, and how the results compare with other findings. The run-time was approximately 10 hours when cross-correlated with GALFIT and approximately 8 hours with the spectroscopic redshift; these were expected times, as Source Extractor is already faster than GALFIT by a large factor. Source Extractor's recovery was also large: 79.24% of GALFIT's count. However, the precision is highly variable. We created two thresholds to see which would better combat this; we ended up picking an unbiased isophotal area threshold as the better choice. Still, with such a threshold, the spread was relatively wide. Comparing the parameters with redshift nevertheless showed qualitative, though not necessarily numerical, agreement with other findings. From these results, we see Source Extractor as a good first-look tool, to be followed up by other software.
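Catalog cross-correlation of the kind described is typically a nearest-neighbor match within an angular tolerance. A minimal flat-sky sketch (the coordinates and tolerance are invented; real pipelines use proper spherical matching, e.g. via astropy):

```python
import numpy as np

def crossmatch(ra1, dec1, ra2, dec2, tol_arcsec=1.0):
    """Match each source in catalog 1 to the nearest source in catalog 2.

    Small-angle, flat-sky approximation (RA scaled by cos(dec)).
    Returns (index into catalog 2, boolean matched mask).
    """
    cosd = np.cos(np.radians(dec1))[:, None]
    dra = (ra1[:, None] - ra2[None, :]) * cosd
    ddec = dec1[:, None] - dec2[None, :]
    sep = np.sqrt(dra**2 + ddec**2) * 3600.0  # degrees -> arcseconds
    idx = np.argmin(sep, axis=1)
    matched = sep[np.arange(len(ra1)), idx] < tol_arcsec
    return idx, matched

# Toy catalogs: source 2 is offset by 0.5", source 3 has no counterpart.
ra1 = np.array([150.000, 150.010, 150.020])
dec1 = np.array([2.000, 2.010, 2.020])
ra2 = np.array([150.000,
                150.010 + 0.5 / 3600.0 / np.cos(np.radians(2.010))])
dec2 = np.array([2.000, 2.010])
idx, matched = crossmatch(ra1, dec1, ra2, dec2)
print(idx, matched)
```

The recovery fraction quoted in the abstract (79.24%) is then simply the matched fraction, `matched.mean()`, evaluated over the full GALFIT catalog.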
Li, T. S.; DePoy, D. L.; Marshall, J. L.; ...
2016-06-01
Here, we report that meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is both stable in time and uniform over the sky to 1% precision or better. Past and current surveys have achieved photometric precision of 1%–2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors (SCEs) using photometry from the Dark Energy Survey (DES) as an example. We first define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the SCEs caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane can be up to 2% in some bandpasses. We then compare the calculated SCEs with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput from auxiliary calibration systems. In conclusion, the residual after correction is less than 0.3%. Moreover, we calculate such SCEs for Type Ia supernovae and elliptical galaxies and find that the chromatic errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.
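The chromatic effect can be illustrated with simple synthetic photometry: the same throughput perturbation shifts the magnitudes of a blue and a red spectrum by different amounts. The spectra and bandpasses below are toy functions, not DES data:

```python
import numpy as np

wave = np.linspace(700.0, 900.0, 2001)  # wavelength grid in nm (toy band)
dlam = wave[1] - wave[0]

def synth_mag(flux, throughput):
    """Broadband magnitude from photon-counting synthetic photometry."""
    num = np.sum(flux * throughput * wave) * dlam
    den = np.sum(throughput * wave) * dlam
    return -2.5 * np.log10(num / den)

# Toy source spectra: power laws, one blue and one red.
blue = (wave / 800.0) ** -2.0
red = (wave / 800.0) ** 2.0

# "Natural" top-hat throughput vs a red-tilted one (standing in for extra
# airmass or precipitable water vapor).
top_hat = np.where((wave > 720.0) & (wave < 880.0), 1.0, 0.0)
tilted = top_hat * (wave / 800.0)

sce_blue = synth_mag(blue, tilted) - synth_mag(blue, top_hat)
sce_red = synth_mag(red, tilted) - synth_mag(red, top_hat)
print(sce_blue, sce_red)  # unequal: the zeropoint shift depends on source color
```

Because the two shifts differ, no single per-exposure zeropoint can remove the error for all source colors, which is the core point of the paper.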
An Assessment of Imaging Informatics for Precision Medicine in Cancer.
Chennubhotla, C; Clarke, L P; Fedorov, A; Foran, D; Harris, G; Helton, E; Nordstrom, R; Prior, F; Rubin, D; Saltz, J H; Shalley, E; Sharma, A
2017-08-01
Objectives: Precision medicine requires the measurement, quantification, and cataloging of medical characteristics to identify the most effective medical intervention. However, the amount of available data exceeds our current capacity to extract meaningful information. We examine the informatics needs to achieve precision medicine from the perspective of quantitative imaging and oncology. Methods: The National Cancer Institute (NCI) organized several workshops on the topic of medical imaging and precision medicine. The observations and recommendations are summarized herein. Results: Recommendations include: use of standards in data collection and clinical correlates to promote interoperability; data sharing and validation of imaging tools; clinician's feedback in all phases of research and development; use of open-source architecture to encourage reproducibility and reusability; use of challenges which simulate real-world situations to incentivize innovation; partnership with industry to facilitate commercialization; and education in academic communities regarding the challenges involved with translation of technology from the research domain to clinical utility and the benefits of doing so. Conclusions: This article provides a survey of the role and priorities for imaging informatics to help advance quantitative imaging in the era of precision medicine. While these recommendations were drawn from oncology, they are relevant and applicable to other clinical domains where imaging aids precision medicine. Georg Thieme Verlag KG Stuttgart.
DStat: A Versatile, Open-Source Potentiostat for Electroanalysis and Integration
Dryden, Michael D. M.; Wheeler, Aaron R.
2015-01-01
Most electroanalytical techniques require the precise control of the potentials in an electrochemical cell using a potentiostat. Commercial potentiostats function as “black boxes,” giving limited information about their circuitry and behaviour which can make development of new measurement techniques and integration with other instruments challenging. Recently, a number of lab-built potentiostats have emerged with various design goals including low manufacturing cost and field-portability, but notably lacking is an accessible potentiostat designed for general lab use, focusing on measurement quality combined with ease of use and versatility. To fill this gap, we introduce DStat (http://microfluidics.utoronto.ca/dstat), an open-source, general-purpose potentiostat for use alone or integrated with other instruments. DStat offers picoampere current measurement capabilities, a compact USB-powered design, and user-friendly cross-platform software. DStat is easy and inexpensive to build, may be modified freely, and achieves good performance at low current levels not accessible to other lab-built instruments. In head-to-head tests, DStat’s voltammetric measurements are much more sensitive than those of “CheapStat” (a popular open-source potentiostat described previously), and are comparable to those of a compact commercial “black box” potentiostat. Likewise, in head-to-head tests, DStat’s potentiometric precision is similar to that of a commercial pH meter. Most importantly, the versatility of DStat was demonstrated through integration with the open-source DropBot digital microfluidics platform. In sum, we propose that DStat is a valuable contribution to the “open source” movement in analytical science, which is allowing users to adapt their tools to their experiments rather than alter their experiments to be compatible with their tools. PMID:26510100
A NOAA/SWPC Perspective on Space Weather Forecasts That Fail
NASA Astrophysics Data System (ADS)
Biesecker, D. A.
2014-12-01
The Space Weather Prediction Center (SWPC) at NOAA is the official US source for space weather watches, warnings, and alerts. These alerts are provided to a breadth of customers covering a range of industries, including electric utilities, airlines, emergency managers, and users of precision GPS, to name a few. This talk will review the current tools used by SWPC to forecast geomagnetic storms, solar flares, and solar energetic particle events and present SWPC's performance in each of these areas. We will include a discussion of the current limitations and examples of events that proved difficult to forecast.
Analysis and application of two-current-source circuit as a signal conditioner for resistive sensors
NASA Astrophysics Data System (ADS)
Idzkowski, Adam; Gołębiowski, Jerzy; Walendziuk, Wojciech
2017-05-01
The article presents an analysis of the metrological properties of a circuit supplied by two current sources. It includes precise and simplified equations for the two circuit output voltages as functions of the relative resistance increments of the sensors. Moreover, graphs showing the nonlinearity coefficients of both output voltages for two resistance increments varying widely are presented, together with graphs of the transfer resistances as functions of the relative increments of sensor resistance. The article also contains a description of a bridge-based circuit realization with the use of a computer and a data acquisition (DAQ) card. Laboratory measurements of the difference and sum of the relative resistance increments of two resistance decade boxes were carried out indirectly with the created measurement system, and the measurement errors were calculated and included in the article.
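In such two-current-source conditioners, the simplified equations typically make one output proportional to the difference and the other to the sum of the relative resistance increments, so both increments can be recovered from the two voltages. The sketch below uses that simplified linear model with invented component values; the paper's precise equations additionally contain the nonlinear terms its nonlinearity-coefficient graphs quantify.

```python
# Simplified linear model of a two-current-source conditioner for two
# resistive sensors R(1 + delta1) and R(1 + delta2). Values are hypothetical.
J = 1.0e-3   # source current, amperes
R = 1000.0   # nominal sensor resistance, ohms

def outputs(delta1, delta2):
    """Idealized output voltages: difference channel and sum channel."""
    u_diff = J * R * (delta1 - delta2) / 2.0
    u_sum = J * R * (delta1 + delta2) / 2.0
    return u_diff, u_sum

def increments(u_diff, u_sum):
    """Invert the linear model to recover both relative increments."""
    delta1 = (u_sum + u_diff) / (J * R)
    delta2 = (u_sum - u_diff) / (J * R)
    return delta1, delta2

u_diff, u_sum = outputs(0.02, -0.01)
print(increments(u_diff, u_sum))  # recovers (0.02, -0.01)
```

This sum/difference separation is what lets the laboratory setup in the article measure the difference and sum of two decade-box increments indirectly from the two output voltages.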
Evaluation of Electroencephalography Source Localization Algorithms with Multiple Cortical Sources.
Bradley, Allison; Yao, Jun; Dewald, Jules; Richter, Claus-Peter
2016-01-01
Source localization algorithms often show multiple active cortical areas as the source of electroencephalography (EEG). Yet, there is little data quantifying the accuracy of these results. In this paper, the performance of current source density source localization algorithms for the detection of multiple cortical sources of EEG data has been characterized. EEG data were generated by simulating multiple cortical sources (2-4) with the same strength or two sources with relative strength ratios of 1:1 to 4:1, and adding noise. These data were used to reconstruct the cortical sources using current source density (CSD) algorithms: sLORETA, MNLS, and LORETA using a p-norm with p equal to 1, 1.5 and 2. Precision (percentage of the reconstructed activity corresponding to simulated activity) and Recall (percentage of the simulated sources reconstructed) of each of the CSD algorithms were calculated. While sLORETA has the best performance when only one source is present, when two or more sources are present LORETA with p equal to 1.5 performs better. When the relative strength of one of the sources is decreased, all algorithms have more difficulty reconstructing that source. However, LORETA 1.5 continues to outperform other algorithms. If only the strongest source is of interest sLORETA is recommended, while LORETA with p equal to 1.5 is recommended if two or more of the cortical sources are of interest. These results provide guidance for choosing a CSD algorithm to locate multiple cortical sources of EEG and for interpreting the results of these algorithms.
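The Precision and Recall metrics defined in this abstract can be sketched in a few lines. This is an illustrative example only, not code from the paper; `simulated` and `reconstructed` are hypothetical sets of active cortical patch indices.

```python
# Illustrative sketch (not from the paper): Precision and Recall of a source
# reconstruction, using the definitions given in the abstract.
def precision_recall(simulated, reconstructed):
    simulated, reconstructed = set(simulated), set(reconstructed)
    true_positives = simulated & reconstructed
    # Precision: fraction of reconstructed activity corresponding to simulated activity
    precision = len(true_positives) / len(reconstructed) if reconstructed else 0.0
    # Recall: fraction of simulated sources that were reconstructed
    recall = len(true_positives) / len(simulated) if simulated else 0.0
    return precision, recall

# Hypothetical example: 3 simulated sources, 4 reconstructed patches, 2 overlap.
p, r = precision_recall(simulated={1, 2, 3}, reconstructed={2, 3, 4, 5})
# p = 0.5 (2 of 4 reconstructed patches are real), r = 2/3 (2 of 3 sources found)
```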
Lattice Calculations and the Muon Anomalous Magnetic Moment
NASA Astrophysics Data System (ADS)
Marinković, Marina Krstić
2017-07-01
The anomalous magnetic moment of the muon, a_μ = (g_μ - 2)/2, is one of the most precisely measured quantities in particle physics, and it provides a stringent test of the Standard Model. The planned improvements in experimental precision at Fermilab and at J-PARC call for a further reduction of the theoretical uncertainty of a_μ. The hope is that the efforts on both sides will help resolve the current discrepancy between the experimental measurement of a_μ and its theoretical prediction, and potentially yield insight into new physics. The dominant sources of uncertainty in the theoretical prediction of a_μ are the errors of the hadronic contributions. I will discuss recent progress on the determination of hadronic contributions to a_μ from lattice calculations.
Combining Induced Pluripotent Stem Cells and Genome Editing Technologies for Clinical Applications.
Chang, Chia-Yu; Ting, Hsiao-Chien; Su, Hong-Lin; Jeng, Jing-Ren
2018-01-01
In this review, we introduce current developments in induced pluripotent stem cells (iPSCs), site-specific nuclease (SSN)-mediated genome editing tools, and the combined application of these two novel technologies in biomedical research and therapeutic trials. The sustainable pluripotent property of iPSCs in vitro not only provides unlimited cell sources for basic research but also benefits precision medicines for human diseases. In addition, rapidly evolving SSN tools efficiently tailor genetic manipulations for exploring gene functions and can be utilized to correct genetic defects of congenital diseases in the near future. Combining iPSC and SSN technologies will create new reliable human disease models with isogenic backgrounds in vitro and provide new solutions for cell replacement and precise therapies.
NASA Astrophysics Data System (ADS)
Borisov, V. M.; Vinokhodov, A. Yu; Ivanov, A. S.; Kiryukhin, Yu B.; Mishchenko, V. A.; Prokof'ev, A. V.; Khristoforov, O. B.
2009-10-01
The development of high-power discharge sources emitting in the 13.5±0.135-nm spectral band is of current interest because such sources are promising for industrial EUV (extreme ultraviolet) lithography for manufacturing integrated circuits at technology nodes of 22 nm and below. The parameters of EUV sources based on a laser-induced discharge in tin vapours between rotating disc electrodes are investigated. The properties of discharge initiation by laser radiation at different wavelengths are established, and the laser pulse parameters providing the maximum energy characteristics of the EUV source are determined. The EUV source developed in this study emits an average power of 276 W in the 13.5±0.135-nm spectral band (converted to the solid angle 2π sr) in the stationary regime at a pulse repetition rate of 3000 Hz.
Alegana, Victor A; Wright, Jim; Bosco, Claudio; Okiro, Emelda A; Atkinson, Peter M; Snow, Robert W; Tatem, Andrew J; Noor, Abdisalan M
2017-11-21
One pillar of monitoring progress towards the Sustainable Development Goals is the investment in high quality data to strengthen the scientific basis for decision-making. At present, nationally-representative surveys are the main source of data for establishing a scientific evidence base, monitoring, and evaluation of health metrics. However, the optimal precision of various population-level health and development indicators remains unquantified in nationally-representative household surveys. Here, a retrospective analysis of the precision of prevalence estimates from these surveys was conducted. Using malaria indicators, data were assembled in nine sub-Saharan African countries with at least two nationally-representative surveys. A Bayesian statistical model was used to estimate between- and within-cluster variability for fever and malaria prevalence, and insecticide-treated bed net (ITN) use in children under the age of 5 years. The intra-class correlation coefficient was estimated along with the optimal sample size for each indicator with associated uncertainty. Results suggest that the estimated sample sizes for the current nationally-representative surveys increase with declining malaria prevalence. Comparison between the actual sample size and the modelled estimate showed a requirement to increase the sample size for parasite prevalence by up to 77.7% (95% Bayesian credible intervals 74.7-79.4) for the 2015 Kenya MIS (estimated sample size of children 0-4 years 7218 [7099-7288]), and 54.1% [50.1-56.5] for the 2014-2015 Rwanda DHS (12,220 [11,950-12,410]). This study highlights the importance of defining indicator-relevant sample sizes to achieve the required precision in the current national surveys. While expanding the current surveys would require additional investment, the study highlights the need for improved approaches to cost-effective sampling.
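The link between intra-cluster correlation and required sample size that this abstract quantifies with a Bayesian model can be illustrated with the classical design-effect formula for cluster surveys. This is a textbook sketch, not the paper's model; the prevalence, margin, ICC, and cluster size below are hypothetical.

```python
# Classical cluster-survey sample size (illustration only, not the paper's
# Bayesian model): the design effect 1 + (m - 1) * icc inflates the
# simple-random-sample size for clusters of size m.
import math

def required_sample_size(p, margin, icc, cluster_size, z=1.96):
    n_srs = (z ** 2) * p * (1 - p) / margin ** 2   # simple random sample size
    deff = 1 + (cluster_size - 1) * icc            # design effect
    return math.ceil(n_srs * deff)

# Hypothetical inputs: 10% prevalence, +/-2% absolute margin, ICC 0.05,
# 20 children sampled per cluster.
n = required_sample_size(p=0.10, margin=0.02, icc=0.05, cluster_size=20)
# n = 1686 (864.36 for simple random sampling, inflated by deff = 1.95)
```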
Adaptive electron beam shaping using a photoemission gun and spatial light modulator
NASA Astrophysics Data System (ADS)
Maxson, Jared; Lee, Hyeri; Bartnik, Adam C.; Kiefer, Jacob; Bazarov, Ivan
2015-02-01
The need for precisely defined beam shapes in photoelectron sources has been well established. In this paper, we use a spatial light modulator and simple shaping algorithm to create arbitrary, detailed transverse laser shapes with high fidelity. We transmit this shaped laser to the photocathode of a high voltage dc gun. Using beam currents where space charge is negligible, and using an imaging solenoid and fluorescent viewscreen, we show that the resultant beam shape preserves these detailed features with similar fidelity. Next, instead of transmitting a shaped laser profile, we use an active feedback on the unshaped electron beam image to create equally accurate and detailed shapes. We demonstrate that this electron beam feedback has the added advantage of correcting for electron optical aberrations, yielding shapes without skew. The method may serve to provide precisely defined electron beams for low current target experiments, space-charge dominated beam commissioning, as well as for online adaptive correction of photocathode quantum efficiency degradation.
NASA Astrophysics Data System (ADS)
Korenev, Sergey; Sikolenko, Vadim
2004-09-01
The advantage of neutron-scattering studies over standard X-ray techniques is the high penetration of neutrons, which allows the study of volume effects. High-resolution neutron-scattering instrumentation allows the parameters of the lattice structure to be measured with high precision. We suggest the use of neutron scattering from pulsed neutron sources for the analysis of materials irradiated with pulsed high-current electron and ion beams. The results of preliminary tests of this method on Ni foils, studied by neutron diffraction at the IBR-2 (Pulsed Fast Reactor at the Joint Institute for Nuclear Research), are presented.
On the use of multi-dimensional scaling and electromagnetic tracking in high dose rate brachytherapy
NASA Astrophysics Data System (ADS)
Götz, Th I.; Ermer, M.; Salas-González, D.; Kellermeier, M.; Strnad, V.; Bert, Ch; Hensel, B.; Tomé, A. M.; Lang, E. W.
2017-10-01
High dose rate brachytherapy requires frequent verification of the precise dwell positions of the radiation source. The current investigation proposes a multi-dimensional scaling transformation of both data sets to estimate dwell positions without any external reference. Furthermore, the related distributions of dwell positions are characterized by uni- or bi-modal heavy-tailed distributions. The latter are well represented by α-stable distributions. The newly proposed data analysis provides dwell position deviations with high accuracy and, furthermore, offers a convenient visualization of the actual shapes of the catheters which guide the radiation source during treatment.
Precision blackbody sources for radiometric standards.
Sapritsky, V I; Khlevnoy, B B; Khromchenko, V B; Lisiansky, B E; Mekhontsev, S N; Melenevsky, U A; Morozova, S P; Prokhorov, A V; Samoilov, L N; Shapoval, V I; Sudarev, K A; Zelener, M F
1997-08-01
The precision blackbody sources developed at the All-Russian Institute for Optical and Physical Measurements (Moscow, Russia) and their characteristics are analyzed. The precision high-temperature graphite blackbody BB22p, large-area high-temperature pyrolytic graphite blackbody BB3200pg, middle-temperature graphite blackbody BB2000, low-temperature blackbody BB300, and gallium fixed-point blackbody BB29gl and their characteristics are described.
Advanced Compton scattering light source R&D at LLNL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Albert, F; Anderson, S G; Anderson, G
2010-02-16
We report the design and current status of a monoenergetic laser-based Compton scattering 0.5-2.5 MeV γ-ray source. Previous nuclear resonance fluorescence results and future linac and laser developments for the source are presented. At MeV photon energies relevant for nuclear processes, Compton scattering light sources are attractive because of their relative compactness and improved brightness above 100 keV, compared to typical 4th generation synchrotrons. Recent progress in accelerator physics and laser technology has enabled the development of a new class of tunable Mono-Energetic Gamma-Ray (MEGa-Ray) light sources based on Compton scattering between a high-brightness, relativistic electron beam and a high-intensity laser pulse produced via chirped-pulse amplification (CPA). A new precision, tunable gamma-ray source driven by a compact, high-gradient X-band linac is currently under development and construction at LLNL. High-brightness, relativistic electron bunches produced by an X-band linac designed in collaboration with SLAC will interact with a Joule-class, 10 ps, diode-pumped CPA laser pulse to generate tunable γ-rays in the 0.5-2.5 MeV photon energy range via Compton scattering. Based on the success of the previous Thomson-Radiated Extreme X-rays (T-REX) Compton scattering source at LLNL, the source will be used to excite nuclear resonance fluorescence lines in various isotopes; applications include homeland security, stockpile science and surveillance, nuclear fuel assay, and waste imaging and assay. After a brief presentation of successful nuclear resonance fluorescence (NRF) experiments done with T-REX, the new source design, key parameters, and current status are presented.
DC current distribution mapping system of the solar panels using a HTS-SQUID gradiometer
NASA Astrophysics Data System (ADS)
Miyazaki, Shingo; Kasuya, Syohei; Mawardi Saari, Mohd; Sakai, Kenji; Kiwa, Toshihiko; Tsukamoto, Akira; Adachi, Seiji; Tanabe, Keiichi; Tsukada, Keiji
2014-05-01
Solar panels are expected to play a major role as a source of sustainable energy. In order to evaluate solar panels, non-destructive tests, such as defect inspections and response property evaluations, are necessary. We developed a DC current distribution mapping system of the solar panels using a High Critical Temperature Superconductor Superconducting Quantum Interference Device (HTS-SQUID) gradiometer with ramp edge type Josephson junctions. Two independent components of the magnetic fields perpendicular to the panel surface (∂Bz/∂x, ∂Bz/∂y) were detected. The direct current of the solar panel is visualized by calculating the composition of the two signal components, the phase angle, and mapping the DC current vector. The developed system can evaluate the uniformity of DC current distributions precisely and may be applicable for defect detection of solar panels.
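The vector composition described in this abstract, combining the two measured gradient components into a magnitude and phase angle, can be sketched as follows. This is an illustrative example only, not code from the described system, and the field-gradient values are hypothetical.

```python
# Hedged sketch of the current-vector mapping idea from the abstract: combine
# the two gradiometer signal components (dBz/dx, dBz/dy) at a point into a
# magnitude and a phase angle, which together indicate the local DC current
# direction and strength.
import math

def gradient_vector(dbz_dx, dbz_dy):
    magnitude = math.hypot(dbz_dx, dbz_dy)      # vector magnitude
    phase = math.atan2(dbz_dy, dbz_dx)          # phase angle in radians
    return magnitude, phase

# Hypothetical gradient components at one scan point:
mag, phase = gradient_vector(3.0, 4.0)
# mag = 5.0, phase = atan2(4, 3) ~= 0.927 rad
```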
NASA Astrophysics Data System (ADS)
Anglada-Escudé, G.; Torra, J.
2006-04-01
Context: Very precise planned space astrometric missions and recent improvements in imaging capabilities require a detailed review of the assumptions of classical astrometric modelling. Aims: We show that Light-Travel Time must be taken into account in modelling the kinematics of astronomical objects in nonlinear motion, even at stellar distances. Methods: A closed expression to include Light-Travel Time in the current astrometric models with nonlinear motion is provided. Using a perturbative approach, the expression of the Light-Travel Time signature is derived. We propose a practical form of the astrometric modelling to be applied in astrometric data reduction of sources at stellar distances (d > 1 pc). Results: We show that the Light-Travel Time signature is relevant at μas (or even mas) accuracy, depending on the time span of the astrometric measurements. We explain how information on the radial motion of a source can be obtained. Some estimates are provided for known nearby binary systems. Conclusions: Given the obtained results, it is clear that this effect must be taken into account in interpreting precise astrometric measurements. The effect is particularly relevant in measurements performed by the planned astrometric space missions (GAIA, SIM, JASMINE, TPF/DARWIN). An objective criterion is provided to quickly evaluate whether the Light-Travel Time modelling is required for a given source or system.
Precise measurement of the angular correlation parameter aβν in the β decay of 35Ar with LPCTrap
NASA Astrophysics Data System (ADS)
Fabian, X.; Ban, G.; Boussaïd, R.; Breitenfeldt, M.; Couratin, C.; Delahaye, P.; Durand, D.; Finlay, P.; Fléchard, X.; Guillon, B.; Lemière, Y.; Leredde, A.; Liénard, E.; Méry, A.; Naviliat-Cuncic, O.; Pierre, E.; Porobic, T.; Quéméner, G.; Rodríguez, D.; Severijns, N.; Thomas, J. C.; Van Gorp, S.
2014-03-01
Precise measurements of the β decay of the 35Ar nucleus enable searches for deviations from the Standard Model (SM) in the weak sector. These measurements make it possible either to check the unitarity of the CKM matrix or to constrain the existence of exotic currents excluded by the V-A theory of the SM. For this purpose, the β-ν angular correlation parameter, aβν, is inferred from a comparison between experimental and simulated recoil-ion time-of-flight distributions following the quasi-pure Fermi transition of 35Ar1+ ions confined in the transparent Paul trap of the LPCTrap device at GANIL. During the last experiment, 1.5×10⁶ good events were collected, corresponding to an expected precision of better than 0.5% on the aβν value. The required simulation is divided between massive GPU parallelization and the GEANT4 toolkit for the source-cloud kinematics and the tracking of the decay products.
Wei, Fang; Lu, Bin; Wang, Jian; Xu, Dan; Pan, Zhengqing; Chen, Dijun; Cai, Haiwen; Qu, Ronghui
2015-02-23
A precise, broadband laser frequency sweeping technique is experimentally demonstrated. Using synchronous current compensation, a slave diode laser is dynamically injection-locked to a specific high-order modulation sideband of a narrow-linewidth master laser modulated by an electro-optic modulator (EOM), whose driving radio-frequency (RF) signal can be agilely and precisely controlled by a frequency synthesizer; the high-order modulation sideband enables a multiplied sweep range and tuning rate. Using 5th-order sideband injection-locking, the original tuning range of 3 GHz and tuning rate of 0.5 THz/s are multiplied by a factor of 5, to 15 GHz and 2.5 THz/s respectively. The slave laser has a 3 dB linewidth of 2.5 kHz, the same as that of the master laser. The settling-time response of a 10 MHz frequency switch is 2.5 µs. By using a higher-order modulation sideband and optimized experimental parameters, an extended sweep range and rate can be expected.
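The multiplication effect reported in this abstract is simple arithmetic: locking to the Nth modulation sideband scales both the sweep range and the tuning rate of the RF drive by N. A sketch of the numbers quoted above:

```python
# Arithmetic from the abstract: Nth-order sideband injection-locking multiplies
# the RF-drive sweep range and tuning rate by N.
order = 5                  # 5th-order modulation sideband
rf_range_ghz = 3.0         # original tuning range of the RF drive
rf_rate_thz_per_s = 0.5    # original tuning rate

swept_range_ghz = order * rf_range_ghz            # 15 GHz, as reported
swept_rate_thz_per_s = order * rf_rate_thz_per_s  # 2.5 THz/s, as reported
```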
Anchoring historical sequences using a new source of astro-chronological tie-points
NASA Astrophysics Data System (ADS)
Dee, Michael W.; Pope, Benjamin J. S.
2016-08-01
The discovery of past spikes in atmospheric radiocarbon activity, caused by major solar energetic particle events, has opened up new possibilities for high-precision chronometry. The two spikes, or Miyake Events, have now been widely identified in tree-rings that grew in the years 775 and 994 CE. Furthermore, all other plant material that grew in these years would also have incorporated the anomalously high concentrations of radiocarbon. Crucially, some plant-based artefacts, such as papyrus documents, timber beams and linen garments, can also be allocated to specific positions within long, currently unfixed, historical sequences. Thus, Miyake Events represent a new source of tie-points that could provide the means for anchoring early chronologies to the absolute timescale. Here, we explore this possibility, outlining the most expeditious approaches, the current challenges and obstacles, and how they might best be overcome.
Numerical Relativity for Space-Based Gravitational Wave Astronomy
NASA Technical Reports Server (NTRS)
Baker, John G.
2011-01-01
In the next decade, gravitational wave instruments in space may provide high-precision measurements of gravitational-wave signals from strong sources, such as black holes. Currently variations on the original Laser Interferometer Space Antenna mission concepts are under study in the hope of reducing costs. Even the observations of a reduced instrument may place strong demands on numerical relativity capabilities. Possible advances in the coming years may fuel a new generation of codes ready to confront these challenges.
A fully automated temperature-dependent resistance measurement setup using van der Pauw method
NASA Astrophysics Data System (ADS)
Pandey, Shivendra Kumar; Manivannan, Anbarasu
2018-03-01
The van der Pauw (VDP) method is widely used to identify the resistance of planar homogeneous samples with four contacts placed on their periphery. We have developed a fully automated thin film resistance measurement setup using the VDP method with the capability of precisely measuring a wide range of thin film resistances, from a few mΩ up to 10 GΩ, under controlled temperatures from room temperature up to 600 °C. The setup utilizes a robust, custom-designed switching network board (SNB) for measuring current-voltage characteristics automatically at four different source-measure configurations based on the VDP method. Moreover, the SNB is connected with low-noise shielded coaxial cables that reduce the effect of leakage current as well as the capacitance in the circuit, thereby enhancing the accuracy of measurement. In order to enable precise and accurate resistance measurement of the sample, a wide range of sourcing currents/voltages is pre-determined, with the capability of auto-tuning over ~12 orders of magnitude of variation in resistance. Furthermore, the setup has been calibrated with standard samples and also employed to investigate temperature-dependent resistance (a few Ω to 10 GΩ) for various chalcogenide-based phase change thin films (Ge2Sb2Te5, Ag5In5Sb60Te30, and In3SbTe2). This setup would be highly helpful for measuring the temperature-dependent resistance of a wide range of materials (metals, semiconductors, and insulators), revealing structural changes with temperature through changes in resistance, which are useful for numerous applications.
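The sheet resistance extracted by such a setup follows the standard van der Pauw relation exp(-πR_A/R_s) + exp(-πR_B/R_s) = 1, where R_A and R_B are the two four-terminal resistances. A minimal sketch solving it numerically, using the textbook formula rather than code from the described instrument:

```python
# Standard van der Pauw relation (textbook formula, not code from the setup):
# solve exp(-pi*ra/rs) + exp(-pi*rb/rs) = 1 for the sheet resistance rs.
import math

def sheet_resistance(ra, rb):
    # f is negative below the root and positive above it, so bisect.
    f = lambda rs: math.exp(-math.pi * ra / rs) + math.exp(-math.pi * rb / rs) - 1.0
    lo, hi = 1e-9 * max(ra, rb), 1e9 * max(ra, rb)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# For equal measured resistances the closed form is Rs = pi * R / ln 2:
rs = sheet_resistance(100.0, 100.0)
# rs ~= pi * 100 / ln(2) ~= 453.24 ohms per square
```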
Impact of Atmospheric Chromatic Effects on Weak Lensing Measurements
NASA Astrophysics Data System (ADS)
Meyers, Joshua E.; Burchat, Patricia R.
2015-07-01
Current and future imaging surveys will measure cosmic shear with statistical precision that demands a deeper understanding of potential systematic biases in galaxy shape measurements than has been achieved to date. We use analytic and computational techniques to study the impact on shape measurements of two atmospheric chromatic effects for ground-based surveys such as the Dark Energy Survey and the Large Synoptic Survey Telescope (LSST): (1) atmospheric differential chromatic refraction and (2) wavelength dependence of seeing. We investigate the effects of using the point-spread function (PSF) measured with stars to determine the shapes of galaxies that have different spectral energy distributions than the stars. We find that both chromatic effects lead to significant biases in galaxy shape measurements for current and future surveys, if not corrected. Using simulated galaxy images, we find a form of chromatic “model bias” that arises when fitting a galaxy image with a model that has been convolved with a stellar, instead of galactic, PSF. We show that both forms of atmospheric chromatic biases can be predicted (and corrected) with minimal model bias by applying an ordered set of perturbative PSF-level corrections based on machine-learning techniques applied to six-band photometry. Catalog-level corrections do not address the model bias. We conclude that achieving the ultimate precision for weak lensing from current and future ground-based imaging surveys requires a detailed understanding of the wavelength dependence of the PSF from the atmosphere, and from other sources such as optics and sensors. The source code for this analysis is available at https://github.com/DarkEnergyScienceCollaboration/chroma.
Design of an holographic off-axis calibration light source for ARGOS at the LBT
NASA Astrophysics Data System (ADS)
Schwab, Christian; Gassler, Wolfgang; Peter, Diethard; Blumchen, Thomas; Aigner, Simon; Quirrenbach, Andreas
We report on the design of an artificial light source for ARGOS, the multiple Rayleigh laser guide star (LGS) facility at the Large Binocular Telescope (LBT). Our light source mimics the expected night-time illumination of the adaptive secondary mirror (ASM) by the laser beacons very accurately and provides a way to check the achieved performance, allowing thorough testing of the system during day time. The optical design makes use of computer generated holograms (CGH) and strong aspheres to achieve a very small residual wavefront error. Additional structures on the CGH facilitate quick and precise alignment of the optics in the prime focus. We demonstrate that the scheme can be applied to the current European Extremely Large Telescope (E-ELT) design in a similar way.
Hydrodynamic and material properties experiments using pulsed power techniques
NASA Astrophysics Data System (ADS)
Reinovsky, R. E.; Trainor, R. J.
2000-04-01
Within the last five years, a new approach to the exploration of dynamic material properties and advanced hydrodynamics at extreme conditions has joined the traditional techniques of high velocity guns and explosives. This new application uses electromagnetic energy to accelerate solid-density material to produce shocks in a cylindrical target. The principal tool for producing high energy density environments is the high-precision, magnetically imploded, near-solid-density cylindrical liner. The most attractive pulsed power system for driving such experiments is an ultrahigh-current, low-impedance, microsecond-timescale source that is economical both to build and to operate. Two families of pulsed power systems can be applied to drive such experiments. The 25-MJ Atlas capacitor bank system currently under construction at Los Alamos is the first system of its scale specifically designed to drive high-precision solid liners. Delivering 30 MA, Atlas will provide liner velocities of 12-15 km/s and kinetic energies of 1-2 MJ/cm with extensive diagnostics and excellent reproducibility. Explosive flux-compressor technology provides access to currents exceeding 100 MA, producing liner velocities above 25 km/s and kinetic energies of 5-20 MJ/cm in single-shot operations.
Interactions Between Genetics, Lifestyle, and Environmental Factors for Healthcare.
Lin, Yuxin; Chen, Jiajia; Shen, Bairong
2017-01-01
The occurrence and progression of diseases are strongly associated with a combination of genetic, lifestyle, and environmental factors. Understanding the interplay between genetic and nongenetic components provides deep insights into disease pathogenesis and promotes personalized healthcare strategies. Recently, the paradigm of systems medicine, which integrates biomedical data and knowledge at multidimensional levels, has come to be considered an optimal way to approach disease management and clinical decision-making in the era of precision medicine. In this chapter, epigenetic-mediated genetics-lifestyle-environment interactions within specific diseases and different ethnic groups are systematically discussed, and data sources, computational models, and translational platforms for systems medicine research are sequentially presented. Moreover, feasible suggestions on precision healthcare and healthy longevity are proposed based on a comprehensive review of current studies.
Broek, Taylor A B; Walker, Brett D; Andreasen, Dyke H; McCarthy, Matthew D
2013-11-15
Compound-specific isotope analysis of individual amino acids (CSI-AA) is a powerful new tool for tracing nitrogen (N) source and transformation in biogeochemical cycles. Specifically, the δ(15)N value of phenylalanine (δ(15)N(Phe)) represents an increasingly used proxy for source δ(15)N signatures, with particular promise for paleoceanographic applications. However, current derivatization/gas chromatography methods require expensive and relatively uncommon instrumentation, and have relatively low precision, making many potential applications impractical. A new offline approach has been developed for high-precision δ(15)N measurements of amino acids (δ(15)N(AA)), optimized for δ(15)N(Phe) values. Amino acids (AAs) are first purified via high-pressure liquid chromatography (HPLC), using a mixed-phase column and automated fraction collection. The δ(15)N values are determined via offline elemental analyzer-isotope ratio mass spectrometry (EA-IRMS). The combined HPLC/EA-IRMS method separated most protein AAs with sufficient resolution to obtain accurate δ(15)N values, despite significant intra-peak isotopic fractionation. For δ(15)N(Phe) values, the precision was ±0.16‰ for standards, 4× better than gas chromatography/combustion/isotope ratio mass spectrometry (GC/C/IRMS; ±0.64‰). We also compared a δ(15)N(Phe) paleo-record from a deep-sea bamboo coral from Monterey Bay, CA, USA, using our method versus GC/C/IRMS. The two methods produced equivalent δ(15)N(Phe) values within error; however, the δ(15)N(Phe) values from HPLC/EA-IRMS had approximately twice the precision of those from GC/C/IRMS (average stdev of 0.27‰ ± 0.14‰ vs 0.60‰ ± 0.20‰, respectively). These results demonstrate that offline HPLC represents a viable alternative to traditional GC/C/IRMS for δ(15)N(AA) measurement. HPLC/EA-IRMS is more precise and widely available, and therefore useful in applications requiring increased precision for data interpretation (e.g. δ(15)N paleoproxies).
Copyright © 2013 John Wiley & Sons, Ltd.
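The "average stdev" figure of merit used in the coral-record comparison above is simply the mean of per-sample replicate standard deviations. A minimal sketch in Python; all replicate δ(15)N(Phe) values below are invented for illustration, not data from the paper:

```python
import statistics

def average_stdev(replicate_sets):
    """Mean of per-sample standard deviations, as in the method comparison."""
    return statistics.mean(statistics.stdev(r) for r in replicate_sets)

# Hypothetical triplicate δ15N(Phe) values (in per mil) for three coral samples:
hplc_ea_irms = [[10.1, 10.4, 10.2], [9.8, 9.9, 10.1], [11.0, 10.7, 10.9]]
gc_c_irms = [[10.5, 9.6, 10.3], [9.2, 10.4, 9.9], [11.4, 10.2, 10.8]]

# The tighter replicates give the smaller average stdev (the paper's 0.27 vs 0.60 per mil).
```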
Open-source do-it-yourself multi-color fluorescence smartphone microscopy
Sung, Yulung; Campa, Fernando; Shih, Wei-Chuan
2017-01-01
Fluorescence microscopy is an important technique for cellular and microbiological investigations. Translating this technique onto a smartphone can enable particularly powerful applications such as on-site analysis, on-demand monitoring, and point-of-care diagnostics. Current fluorescence smartphone microscope setups require precise illumination and imaging alignment, which together limit their broad adoption. We report a multi-color fluorescence smartphone microscope with a single contact lens-like add-on lens and slide-launched total-internal-reflection guided illumination for three common tasks in investigative fluorescence microscopy: autofluorescence, fluorescent stains, and immunofluorescence. The open-source, simple and cost-effective design has the potential for do-it-yourself fluorescence smartphone microscopy. PMID:29188104
Chemical O‐Glycosylations: An Overview
2016-01-01
Abstract The development of glycobiology relies on the sources of particular oligosaccharides in their purest forms. As the isolation of the oligosaccharide structures from natural sources is not a reliable option for providing samples with homogeneity, chemical means become pertinent. The growing demand for diverse oligosaccharide structures has prompted the advancement of chemical strategies to stitch sugar molecules with precise stereo‐ and regioselectivity through the formation of glycosidic bonds. This Review will focus on the key developments towards chemical O‐glycosylations in the current century. Synthesis of novel glycosyl donors and acceptors and their unique activation for successful glycosylation are discussed. This Review concludes with a summary of recent developments and comments on future prospects. PMID:27777833
Isotopic Analysis and Evolved Gases
NASA Technical Reports Server (NTRS)
Swindle, Timothy D.; Boynton, William V.; Chutjian, Ara; Hoffman, John H.; Jordan, Jim L.; Kargel, Jeffrey S.; McEntire, Richard W.; Nyquist, Larry
1996-01-01
Precise measurements of the chemical, elemental, and isotopic composition of planetary surface material and gases, and observed variations in these compositions, can contribute significantly to our knowledge of the source(s), ages, and evolution of solar system materials. The analyses discussed in this paper are mostly made by mass spectrometers or some other type of mass analyzer, and address three broad areas of interest: (1) atmospheric composition - isotopic, elemental, and molecular, (2) gases evolved from solids, and (3) solids. Current isotopic data on nine elements, mostly from in situ analysis, but also from meteorites and telescopic observations, are summarized. Potential instruments for isotopic analysis of the surfaces of the Moon, Mars, Venus, Mercury, and Pluto, as well as of asteroids, comets, and icy satellites, are discussed.
The Accuracy of Webcams in 2D Motion Analysis: Sources of Error and Their Control
ERIC Educational Resources Information Center
Page, A.; Moreno, R.; Candelas, P.; Belmar, F.
2008-01-01
In this paper, we show the potential of webcams as precision measuring instruments in a physics laboratory. Various sources of error appearing in 2D coordinate measurements using low-cost commercial webcams are discussed, quantifying their impact on accuracy and precision, and simple procedures to control these sources of error are presented.…
Ion source for high-precision mass spectrometry
Todd, Peter J.; McKown, Henry S.; Smith, David H.
1984-01-01
The invention is directed to a method for increasing the precision of positive-ion relative abundance measurements conducted in a sector mass spectrometer having an ion source for directing a beam of positive ions onto a collimating slit. The method comprises incorporating in the source an electrostatic lens assembly for providing a positive-ion beam of circular cross section for collimation by the slit.
Field-Induced and Thermal Electron Currents from Earthed Spherical Emitters
NASA Astrophysics Data System (ADS)
Holgate, J. T.; Coppins, M.
2017-04-01
The theories of electron emission from planar surfaces are well understood, but they are not suitable for describing emission from spherical surfaces; their incorrect application to highly curved, nanometer-scale surfaces can overestimate the emitted current by several orders of magnitude. This inaccuracy is of particular concern for describing modern nanoscale electron sources, which continue to be modeled using the planar equations. In this paper, the field-induced and thermal currents are treated in a unified way to produce Fowler-Nordheim-type and Richardson-Schottky-type equations for the emitted current density from earthed nanoscale spherical surfaces. The limits of applicability of these derived expressions are considered along with the energy spectra of the emitted electrons. Within the relevant limits of validity, these equations are shown to reproduce the results of precise numerical calculations of the emitted current densities. The methods used here are adaptable to other one-dimensional emission problems.
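For context on the planar formulas the authors caution against, the elementary planar Fowler-Nordheim current density can be sketched as follows. This is the standard textbook form without the image-charge correction; the field strength and work function below are illustrative choices, not values from the paper:

```python
import math

# Standard elementary Fowler-Nordheim constants
A = 1.541434e-6   # A·eV·V^-2
B = 6.830890e9    # V·m^-1·eV^-3/2

def fn_planar(F, phi):
    """Planar FN current density (A/m^2) for surface field F (V/m) and
    work function phi (eV). The paper's point is that applying this planar
    expression to nanometer-scale spheres can overestimate the current."""
    return (A * F ** 2 / phi) * math.exp(-B * phi ** 1.5 / F)

# Example: F = 5 GV/m, phi = 4.5 eV (illustrative values)
J = fn_planar(5e9, 4.5)
```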
2010-08-01
available). It is assumed that after this method is formally published, various standard vendors will offer other sources than the current single standard... single isomer. Alkyl PAHs used to determine the SPME-GC/MS relative response factors include alkyl naphthalenes (1-methyl-, 2-methyl-, 1,2-...). Flag all compound results in the sample which were estimated above the upper calibration level with an "E" qualifier. 15. Precision and Bias 15.1 Single
DNA/RNA transverse current sequencing: intrinsic structural noise from neighboring bases
Alvarez, Jose R.; Skachkov, Dmitry; Massey, Steven E.; Kalitsov, Alan; Velev, Julian P.
2015-01-01
Nanopore DNA sequencing via transverse current has emerged as a promising candidate for third-generation sequencing technology. It produces long read lengths which could alleviate problems with assembly errors inherent in current technologies. However, the high error rates of nanopore sequencing have to be addressed. A very important source of the error is the intrinsic noise in the current arising from carrier dispersion along the chain of the molecule, i.e., from the influence of neighboring bases. In this work we perform calculations of the transverse current within an effective multi-orbital tight-binding model derived from first-principles calculations of the DNA/RNA molecules, to study the effect of this structural noise on the error rates in DNA/RNA sequencing via transverse current in nanopores. We demonstrate that a statistical technique, utilizing not only the currents through the nucleotides but also the correlations in the currents, can in principle reduce the error rate below any desired precision. PMID:26150827
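The statistical idea, using the currents through neighboring bases jointly rather than through one base at a time, can be caricatured as nearest-template classification in a joint current space. This is only an illustrative stand-in for the paper's statistical technique over tight-binding transport calculations, and every template current below is invented:

```python
import math

# Hypothetical per-base mean transverse currents (nA) at a site and its neighbor:
templates = {"A": (1.0, 0.9), "C": (2.0, 1.8), "G": (3.1, 2.9), "T": (4.2, 4.0)}

def call_base(reading):
    """Classify a (site current, neighbor-correlated current) pair by
    nearest template; using both coordinates exploits the correlation."""
    return min(templates, key=lambda b: math.dist(reading, templates[b]))

print(call_base((3.0, 2.8)))  # → G
```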
NASA Astrophysics Data System (ADS)
Serebrov, A. P.
2018-03-01
The use of ultracold neutrons opens unique possibilities for studying fundamental interactions in particle physics. Searches for the neutron electric dipole moment are aimed at testing models of CP violation. A precise measurement of the neutron lifetime is of paramount importance for cosmology and astrophysics. Considerable advances in these realms can be made with the aid of a new ultracold-neutron (UCN) supersource presently under construction at Petersburg Nuclear Physics Institute. With this source, it would be possible to obtain a UCN density approximately 100 times as high as that at the best current UCN source at the high-flux reactor of the Institut Laue-Langevin (ILL, Grenoble, France). To date, the design and basic elements of the source have been prepared, tests of a full-scale source model have been performed, and the research program has been developed. It is planned to improve the accuracy in measuring the neutron electric dipole moment by one order of magnitude, to a level of 10^-27 to 10^-28 e·cm. This is of crucial importance for particle physics. The accuracy in measuring the neutron lifetime can also be improved by one order of magnitude. Finally, experiments that would seek neutron-antineutron oscillations by employing ultracold neutrons will become possible upon reaching a UCN density of 10^3 to 10^4 cm^-3. The current status of the source and the proposed research program are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mahadevan, Suvrath; Halverson, Samuel; Ramsey, Lawrence
2014-05-01
Modal noise in optical fibers imposes limits on the signal-to-noise ratio (S/N) and velocity precision achievable with the next generation of astronomical spectrographs. This is an increasingly pressing problem for precision radial velocity spectrographs in the near-infrared (NIR) and optical that require both high stability of the observed line profiles and high S/N. Many of these spectrographs plan to use highly coherent emission-line calibration sources like laser frequency combs and Fabry-Perot etalons to achieve precision sufficient to detect terrestrial-mass planets. These high-precision calibration sources often use single-mode fibers or highly coherent sources. Coupling light from single-mode fibers to multi-mode fibers leads to only a very low number of modes being excited, thereby exacerbating the modal noise measured by the spectrograph. We present a commercial off-the-shelf solution that significantly mitigates modal noise at all optical and NIR wavelengths, and which can be applied to spectrograph calibration systems. Our solution uses an integrating sphere in conjunction with a diffuser that is moved rapidly using electrostrictive polymers, and is generally superior to most tested forms of mechanical fiber agitation. We demonstrate a high level of modal noise reduction with a narrow bandwidth 1550 nm laser. Our relatively inexpensive solution immediately enables spectrographs to take advantage of the innate precision of bright state-of-the-art calibration sources by removing a major source of systematic noise.
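A rough sense of why exciting only a few modes matters: a step-index multimode fiber supports on the order of V²/2 guided modes, where V is the standard normalized frequency. A quick estimate using the usual V-number formula; the fiber parameters below are illustrative, not taken from the paper:

```python
import math

def mode_count(core_radius_m, numerical_aperture, wavelength_m):
    """Approximate number of guided modes in a step-index multimode
    fiber: V = 2*pi*a*NA/lambda, M ~ V^2 / 2."""
    V = 2 * math.pi * core_radius_m * numerical_aperture / wavelength_m
    return V * V / 2

# Illustrative 50 um core (25 um radius), NA = 0.22, at the 1550 nm test wavelength:
M = mode_count(25e-6, 0.22, 1550e-9)  # a few hundred modes
```

When single-mode calibration light couples into such a fiber, only a small subset of these modes carries power, so the speckle pattern averages poorly; scrambling (here, the integrating sphere plus moving diffuser) redistributes power across modes.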
Ion source for high-precision mass spectrometry
Todd, P.J.; McKown, H.S.; Smith, D.H.
1982-04-26
The invention is directed to a method for increasing the precision of positive-ion relative abundance measurements conducted in a sector mass spectrometer having an ion source for directing a beam of positive ions onto a collimating slit. The method comprises incorporating in the source an electrostatic lens assembly for providing a positive-ion beam of circular cross section for collimation by the slit. 2 figures, 3 tables.
A Review of Issues Related to Data Acquisition and Analysis in EEG/MEG Studies.
Puce, Aina; Hämäläinen, Matti S
2017-05-31
Electroencephalography (EEG) and magnetoencephalography (MEG) are non-invasive electrophysiological methods, which record electric potentials and magnetic fields due to electric currents in synchronously-active neurons. With MEG being more sensitive to neural activity from tangential currents and EEG being able to detect both radial and tangential sources, the two methods are complementary. Over the years, neurophysiological studies have changed considerably: high-density recordings are becoming de rigueur; there is interest in both spontaneous and evoked activity; and sophisticated artifact detection and removal methods are available. Improved head models for source estimation have also increased the precision of the current estimates, particularly for EEG and combined EEG/MEG. Because of their complementarity, more investigators are beginning to perform simultaneous EEG/MEG studies to gain more complete information about neural activity. Given the increase in methodological complexity in EEG/MEG, it is important to gather data that are of high quality and that are as artifact free as possible. Here, we discuss some issues in data acquisition and analysis of EEG and MEG data. Practical considerations for different types of EEG and MEG studies are also discussed.
Resolving discrete pulsar spin-down states with current and future instrumentation
NASA Astrophysics Data System (ADS)
Shaw, B.; Stappers, B. W.; Weltevrede, P.
2018-04-01
An understanding of pulsar timing noise offers the potential to improve the timing precision of a large number of pulsars as well as facilitating our understanding of pulsar magnetospheres. For some sources, timing noise is attributable to a pulsar switching between two different spin-down rates (ν̇). Such transitions may be common but difficult to resolve using current techniques. In this work, we use simulations of ν̇-variable pulsars to investigate the likelihood of resolving individual ν̇ transitions. We inject step changes in the value of ν̇ with a wide range of amplitudes and switching time-scales. We then attempt to redetect these transitions using standard pulsar timing techniques. The pulse arrival-time precision and the observing cadence are varied. Limits on ν̇ detectability based on the effects such transitions have on the timing residuals are derived. With the typical cadences and timing precision of current timing programmes, we find that we are insensitive to a large region of Δν̇ parameter space that encompasses small, short time-scale switches. We find, where the rotation and emission states are correlated, that using changes to the pulse shape to estimate ν̇ transition epochs can improve detectability in certain scenarios. The effects of cadence on Δν̇ detectability are discussed, and we make comparisons with a known population of intermittent and mode-switching pulsars. We conclude that for short time-scale, small switches, cadence should not be compromised when new generations of ultra-sensitive radio telescopes are online.
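The scale of the injected effect follows from the standard timing relation: an unmodeled step Δν̇ occurring a time t in the past accumulates a phase offset of ½Δν̇t², i.e. a timing residual of ½Δν̇t²/ν. A minimal sketch, with illustrative numbers rather than values from the paper:

```python
def residual_seconds(delta_nudot, t, nu):
    """Timing residual (s) accumulated a time t (s) after an unmodeled
    step of delta_nudot (Hz/s) in the spin-down rate of a pulsar
    spinning at frequency nu (Hz): phase = 0.5*dnudot*t^2, residual = phase/nu."""
    return 0.5 * delta_nudot * t ** 2 / nu

# Illustrative: |Δν̇| = 1e-15 Hz/s on a 1 Hz pulsar, 100 days after the switch:
r = residual_seconds(1e-15, 100 * 86400.0, 1.0)  # ~0.04 s, easily measurable
```

The quadratic growth with t is why detectability depends so strongly on cadence: a sparse campaign can miss a short-lived state entirely before the residual rises above the arrival-time noise.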
Developing Performance Estimates for High Precision Astrometry with TMT
NASA Astrophysics Data System (ADS)
Schoeck, Matthias; Do, Tuan; Ellerbroek, Brent; Herriot, Glen; Meyer, Leo; Suzuki, Ryuji; Wang, Lianqi; Yelda, Sylvana
2013-12-01
Adaptive optics on Extremely Large Telescopes will open up many new science cases or expand existing science into regimes unattainable with the current generation of telescopes. One example of this is high-precision astrometry, which has requirements in the range from 10 to 50 micro-arc-seconds for some instruments and science cases. Achieving these requirements imposes stringent constraints on the design of the entire observatory, but also on the calibration procedures, observing sequences and the data analysis techniques. This paper summarizes our efforts to develop a top down astrometry error budget for TMT. It is predominantly developed for the first-light AO system, NFIRAOS, and the IRIS instrument, but many terms are applicable to other configurations as well. Astrometry error sources are divided into 5 categories: Reference source and catalog errors, atmospheric refraction correction errors, other residual atmospheric effects, opto-mechanical errors and focal plane measurement errors. Results are developed in parametric form whenever possible. However, almost every error term in the error budget depends on the details of the astrometry observations, such as whether absolute or differential astrometry is the goal, whether one observes a sparse or crowded field, what the time scales of interest are, etc. Thus, it is not possible to develop a single error budget that applies to all science cases and separate budgets are developed and detailed for key astrometric observations. Our error budget is consistent with the requirements for differential astrometry of tens of micro-arc-seconds for certain science cases. While no show stoppers have been found, the work has resulted in several modifications to the NFIRAOS optical surface specifications and reference source design that will help improve the achievable astrometry precision even further.
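Error budgets of this kind conventionally combine statistically independent terms in quadrature. A minimal sketch: the five category names mirror the categories listed above, but every numerical value is invented for illustration, not from the TMT budget:

```python
import math

# Hypothetical per-term astrometric errors (micro-arcseconds) for one scenario:
terms = {
    "reference_source_catalog": 20.0,
    "refraction_correction": 15.0,
    "residual_atmosphere": 25.0,
    "opto_mechanical": 10.0,
    "focal_plane_measurement": 12.0,
}

def total_error(budget):
    """Root-sum-square combination of independent error terms."""
    return math.sqrt(sum(v * v for v in budget.values()))

total = total_error(terms)  # dominated by the largest terms
```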
Metrological-grade tunable coherent source in the mid-infrared for molecular precision spectroscopy
NASA Astrophysics Data System (ADS)
Insero, G.; Clivati, C.; D'Ambrosio, D.; Cancio Pastor, P.; Verde, M.; Schunemann, P. G.; Zondy, J.-J.; Inguscio, M.; Calonico, D.; Levi, F.; De Natale, P.; Santambrogio, G.; Borri, S.
2018-02-01
We report on a metrological-grade mid-IR source with a 10^-14 short-term instability for high-precision spectroscopy. Our source is based on the combination of a quantum cascade laser and coherent radiation obtained by difference-frequency generation in an orientation-patterned gallium phosphide (OP-GaP) crystal. The pump and signal lasers are locked to an optical frequency comb referenced to the primary frequency standard via an optical fiber link. We demonstrate the robustness of the apparatus by measuring a vibrational transition around 6 μm in a metastable state of CO molecules with 11 digits of precision.
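In difference-frequency generation, energy conservation fixes the idler frequency, f_idler = f_pump − f_signal, or in wavelengths 1/λ_idler = 1/λ_pump − 1/λ_signal. A quick check that a near-IR pump/signal pair lands in the 6 μm region; the specific pump and signal wavelengths below are illustrative, not the paper's:

```python
def dfg_idler_wavelength(lambda_pump_nm, lambda_signal_nm):
    """Idler wavelength (nm) from energy conservation in DFG:
    1/lambda_idler = 1/lambda_pump - 1/lambda_signal."""
    return 1.0 / (1.0 / lambda_pump_nm - 1.0 / lambda_signal_nm)

# Illustrative pair chosen to land near 6 um:
lam = dfg_idler_wavelength(1064.0, 1301.0)  # ~5.8e3 nm, i.e. ~5.8 um
```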
NASA Astrophysics Data System (ADS)
Davis, K. J.; Keller, K.; Ogle, S. M.; Smith, S.
2014-12-01
Changes in the sources and sinks of greenhouse gases (GHGs) are key drivers of anthropogenic climate change. It is hence not surprising that current and emerging U.S. governmental science priorities and programs focused on climate change (e.g. a U.S. Carbon Cycle Science Plan; the U.S. Carbon Cycle Science Program, the U.S. Global Change Research Program, Executive Order 13653 'Preparing the U.S. for the Impacts of Climate Change') all call for an improved understanding of these sources and sinks. Measurements of the total atmospheric burden of these gases are well established, but measurements of their sources and sinks are difficult to make over spatial and temporal scales that are relevant for scientific and decision-making needs. Quantifying the uncertainty in these measurements is particularly challenging. This talk reviews the intersection of the state of knowledge of GHG sources and sinks, focusing in particular on CO2 and CH4, and science and decision-making needs for this information. Different science and decision-making needs require differing levels of uncertainty. A number of high-priority needs (early detection of changes in the Earth system, projections of future climate, support of markets or regulations) often require a high degree of accuracy and/or precision. We will critically evaluate current U.S. planning documents to infer current perceived needs for GHG source/sink quantification, attempting to translate these needs into quantitative uncertainty metrics. We will compare these perceived needs with the current state of the art of GHG source/sink quantification, including the apparent pattern of systematic differences between so-called "top-down" and "bottom-up" flux estimates. This comparison will enable us to identify where needs can be readily satisfied, and where gaps in technology exist. Finally, we will examine what steps could be taken to close existing gaps.
Soloperto, Alessandro; Palazzolo, Gemma; Tsushima, Hanako; Chieregatti, Evelina; Vassalli, Massimo; Difato, Francesco
2016-01-01
Current optical approaches are progressing far beyond the scope of monitoring the structure and function of living matter, and they are becoming widely recognized as extremely precise, minimally-invasive, contact-free handling tools. Laser manipulation of living tissues, single cells, or even single-molecules is becoming a well-established methodology, thus founding the onset of new experimental paradigms and research fields. Indeed, a tightly focused pulsed laser source permits complex tasks such as developing engineered bioscaffolds, applying calibrated forces, transfecting, stimulating, or even ablating single cells with subcellular precision, and operating intracellular surgical protocols at the level of single organelles. In the present review, we report the state of the art of laser manipulation in neuroscience, to inspire future applications of light-assisted tools in nano-neurosurgery.
Soloperto, Alessandro; Palazzolo, Gemma; Tsushima, Hanako; Chieregatti, Evelina; Vassalli, Massimo; Difato, Francesco
2016-01-01
Current optical approaches are progressing far beyond the scope of monitoring the structure and function of living matter, and they are becoming widely recognized as extremely precise, minimally-invasive, contact-free handling tools. Laser manipulation of living tissues, single cells, or even single-molecules is becoming a well-established methodology, thus founding the onset of new experimental paradigms and research fields. Indeed, a tightly focused pulsed laser source permits complex tasks such as developing engineered bioscaffolds, applying calibrated forces, transfecting, stimulating, or even ablating single cells with subcellular precision, and operating intracellular surgical protocols at the level of single organelles. In the present review, we report the state of the art of laser manipulation in neuroscience, to inspire future applications of light-assisted tools in nano-neurosurgery. PMID:27013962
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, T. S.
Meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is stable in time and uniform over the sky to 1% precision or better. Past surveys have achieved photometric precision of 1-2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors using photometry from the Dark Energy Survey (DES) as an example. We define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes, when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the systematic chromatic errors caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane, can be up to 2% in some bandpasses. We compare the calculated systematic chromatic errors with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput. The residual after correction is less than 0.3%. We also find that the errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.
NASA Astrophysics Data System (ADS)
Milood Almelian, Mohamad; Mohd, Izzeldin I.; Asghaiyer Omran, Mohamed; Ullah Sheikh, Usman
2018-04-01
Power quality-related issues such as current and voltage distortions can adversely affect home and industrial appliances. Although several conventional techniques, such as the use of passive and active filters, have been developed to raise power quality standards, these methods face challenges and are inadequate for the increasing number of applications. The Unified Power Quality Conditioner (UPQC) is a modern strategy for correcting imperfections in the supply voltage and load current. A UPQC is a combination of series and shunt active power filters connected back-to-back with a common DC link capacitor. Control of the DC link capacitor voltage is important in achieving the desired UPQC performance. In this paper, a UPQC with a fuzzy logic controller (FLC) was used to precisely eliminate voltage and current harmonics. Simulation studies using MATLAB/Simulink and the SimPowerSystems toolbox, with an R-L load connected through an uncontrolled bridge rectifier, were used to assess performance. The UPQC with FLC was simulated for a system with distorted load current and for a system with distorted source voltage and load current. A comparison of the %THD in the load current and source voltage before and after applying the UPQC is presented for the two cases.
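The %THD figure of merit used in that comparison is the ratio of the RMS harmonic content to the fundamental. A minimal sketch with invented harmonic amplitudes (not values from the paper):

```python
import math

def thd(harmonic_rms):
    """Total harmonic distortion from RMS amplitudes ordered as
    [fundamental, 2nd harmonic, 3rd harmonic, ...]."""
    fundamental, *harmonics = harmonic_rms
    return math.sqrt(sum(h * h for h in harmonics)) / fundamental

# Hypothetical load-current harmonics before and after UPQC compensation:
before = thd([10.0, 2.0, 1.5, 1.0])   # ~0.269, i.e. ~26.9 %
after = thd([10.0, 0.3, 0.2, 0.1])    # much smaller after filtering
```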
NASA Astrophysics Data System (ADS)
Rogov, A.; Pepyolyshev, Yu.; Carta, M.; d'Angelo, A.
The scintillation detector (SD) is widely used in neutron and gamma spectrometry in a count mode. Organic scintillators operating in the count mode are investigated rather well. Usually they are applied for measurement of the amplitude and time distributions of pulses caused by single interaction events of neutrons or gammas with the scintillator material. But in a large area of scientific research, scintillation detectors can alternatively be used in a current mode, by recording the average current from the detector; for example, in measurements of the neutron pulse shape at pulsed reactors or other pulsed neutron sources. To obtain a rather large volume of experimental data at pulsed neutron sources, it is necessary to use a current-mode detector for the registration of fast neutrons. Many parameters of the SD change in the transition from the counting mode to the current mode; for example, the detector efficiency is different in the two modes, and many effects connected with timing accuracy become substantial. Besides, for the registration of solely fast neutrons in the mixed radiation field of a pulsed neutron source, as is required in many measurements, the SD efficiency has to be determined with a gamma-radiation shield present. There are no calculations or experimental data on SD current-mode operation up to now. The response functions of the detectors can be either measured in high-precision reference fields or calculated by a computer simulation. We have used the MCNP code [1] and carried out some experiments to investigate the performance of plastic scintillators in the current mode. There are numerous programs that perform simulations similar to the MCNP code: for example, for neutrons [2-4] and for photons [5-8]. However, all of the known codes (SCINFUL, NRESP4, SANDYL, EGS49) have more stringent restrictions on the source, geometry and detector characteristics.
In the MCNP code most of these restrictions are absent, and one need only write special additions to convert proton and electron recoil energies into light output. These code modifications allow taking into account all processes in the organic scintillator that influence the light yield.
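One common way to implement the "recoil energy to light output" step is Birks' law, dL/dx = S·(dE/dx)/(1 + kB·dE/dx); whether the authors used exactly this form is not stated, so treat the sketch below as an assumption. The kB value is typical of plastic scintillator, and S is normalized to 1 (arbitrary light units):

```python
def birks_light_yield(dedx_dx_steps, kB=0.0126):
    """Integrate Birks' law over a particle track given as
    (dE/dx in MeV/cm, step length in cm) pairs; S = 1, kB in cm/MeV.
    High-dE/dx particles (proton recoils) are quenched relative to electrons."""
    return sum(dedx / (1.0 + kB * dedx) * dx for dedx, dx in dedx_dx_steps)

# Illustrative 1 MeV depositions: a proton recoil (dense track) vs an electron:
L_proton = birks_light_yield([(100.0, 0.01)])   # quenched
L_electron = birks_light_yield([(2.0, 0.5)])    # nearly unquenched
```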
Simulation of scattered fields: Some guidelines for the equivalent source method
NASA Astrophysics Data System (ADS)
Gounot, Yves J. R.; Musafir, Ricardo E.
2011-07-01
Three different approaches of the equivalent source method for simulating scattered fields are compared: two of them deal with monopole sets, the other with multipole expansions. In the first monopole approach, the sources have fixed positions given by specific rules, while in the second one (ESGA), the optimal positions are determined via a genetic algorithm. The 'pros and cons' of each of these approaches are discussed with the aim of providing practical guidelines for the user. It is shown that while both monopole techniques furnish quite good pressure field reconstructions with simple source arrangements, ESGA requires a number of monopoles significantly smaller and, with equal number of sources, yields a better precision. As for the multipole technique, the main advantage is that in principle any precision can be reached, provided the source order is sufficiently high. On the other hand, the results point out that the lack of rules for determining the proper multipole order necessary for a desired precision may constitute a handicap for the user.
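The computation shared by both monopole approaches is a least-squares fit of equivalent-source amplitudes to target pressures at field points. A minimal single-monopole sketch, assuming the free-field Green's function convention e^{ikr}/(4πr); with several monopoles the scalar division below becomes a normal-equation or pseudoinverse solve:

```python
import cmath
import math

def monopole_field(source_pos, field_pos, k):
    """Free-field monopole Green's function e^{ikr}/(4*pi*r)."""
    r = math.dist(source_pos, field_pos)
    return cmath.exp(1j * k * r) / (4 * math.pi * r)

def best_amplitude(source_pos, field_points, target_pressures, k):
    """Least-squares complex amplitude of one equivalent monopole that
    best reproduces the target pressures at the field points."""
    g = [monopole_field(source_pos, fp, k) for fp in field_points]
    num = sum(gi.conjugate() * pi for gi, pi in zip(g, target_pressures))
    den = sum(abs(gi) ** 2 for gi in g)
    return num / den
```

With a genetic algorithm on top (the ESGA variant), the source positions themselves become optimization variables and this fit is evaluated inside the fitness function.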
On Detecting New Worlds: The Art of Doppler Spectroscopy with Iodine Cells
NASA Astrophysics Data System (ADS)
Wang, Sharon Xuesong
2016-08-01
The first discovery of an extra-solar planet (exoplanet) around a main-sequence star, 51 Peg b, discovered using Doppler spectroscopy, opened up the field of exoplanets. For more than a decade, the dominant way of finding exoplanets was using precise Doppler spectroscopy to measure the radial velocity (RV) changes of stars. Today, precise Doppler spectroscopy is still crucial for the discovery and characterization of exoplanets, and it has a great chance of finding the first rocky exoplanet in the Habitable Zone of its host star. However, such an endeavor requires an exquisite precision of 10-50 cm/s, while the current state of the art is 1 m/s. This thesis set out to improve the RV precision of two precise Doppler spectrometers on two 10-meter class telescopes: HET/HRS and Keck/HIRES. Both of these spectrometers use iodine cells as their wavelength calibration sources, and their spectral data are analyzed via forward modeling to estimate stellar RVs. Neither HET/HRS nor Keck/HIRES delivers an RV precision at the photon-limited level, meaning that there are additional RV systematic errors caused by instrumental changes or errors in the data analysis. HET/HRS has an RV precision of 3-5 m/s, while Keck/HIRES has about 1-2 m/s. I have found that the leading cause behind HET/HRS's "under-performance" in comparison to Keck/HIRES is temperature changes of the iodine gas cell (and thus an inaccurate iodine reference spectrum). Another reason is the insufficient modeling of the HET/HRS instrumental profile. While Keck/HIRES does not suffer from these problems, it also has several RV systematic error sources of considerable size. The work in this thesis has revealed that the errors in Keck/HIRES's stellar reference spectrum add about 1 m/s to the error budget and are the major drivers behind the spurious RV signal at the period of a sidereal year and its harmonics.
Telluric contamination and errors caused by the spectral fitting algorithm also contribute at the level of 20-50 cm/s. The strategies proposed and tested in this thesis will improve the RV precision of HET/HRS and Keck/HIRES, including their decade's worth of archival data. This thesis also documents my work on characterizing exoplanet orbits using RV data and the discovery of HD 37605c. It concludes with a summary of major findings and an outline of future plans to use future precise Doppler spectrometers to move toward the goal of 10 cm/s and detecting Earth 2.0.
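The jump from 1 m/s to 10 cm/s can be framed in photon-noise terms: for fixed spectral content, the photon-limited RV uncertainty scales inversely with the spectrum's S/N. A back-of-envelope sketch; the reference S/N value is an illustrative assumption, not a number from the thesis:

```python
def rv_precision_scaled(sigma_ref_ms, snr_ref, snr_new):
    """Photon-limited RV uncertainty scales as 1/(S/N) for fixed
    spectral information content: sigma_new = sigma_ref * snr_ref / snr_new."""
    return sigma_ref_ms * snr_ref / snr_new

# If 1 m/s photon-limited precision is reached at S/N = 200 (illustrative),
# then ~10 cm/s needs roughly S/N = 2000, i.e. ~100x the photons:
needed = rv_precision_scaled(1.0, 200, 2000)  # 0.1 m/s
```

This scaling is exactly why systematic floors of 20-50 cm/s (telluric lines, fitting-algorithm errors) dominate long before the photon budget does.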
Precision Voltage Referencing Techniques in MOS Technology.
NASA Astrophysics Data System (ADS)
Song, Bang-Sup
With the increasing complexity of functions on a single MOS chip, precision analog circuits implemented in the same technology are in great demand so that they can be integrated together with digital circuits. The future development of MOS data acquisition systems will require precision on-chip MOS voltage references. This dissertation probes the two most promising configurations of on-chip voltage references in both NMOS and CMOS technologies. In NMOS, the effect of ion implantation on the temperature behavior of MOS devices is investigated to identify the fundamental factors limiting a threshold-voltage difference as an NMOS voltage source. For this kind of voltage reference, temperature stability on the order of 20 ppm/°C is achievable with a shallow single-threshold implant and low-current, high body-bias operation. In CMOS, a monolithic prototype bandgap reference is designed, fabricated and tested which embodies curvature compensation and exhibits minimized sensitivity to process-parameter variation. Experimental results imply that an average temperature stability on the order of 10 ppm/°C, with a production spread of less than 10 ppm/°C, is feasible over the commercial temperature range.
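The first-order bandgap principle behind such references: a base-emitter voltage with a negative temperature coefficient (roughly −2 mV/K) is summed with a scaled thermal voltage VT = kT/q, which has a positive coefficient, and the gain K is chosen so the two drifts cancel, yielding an output near 1.25 V. A minimal linearized sketch; the component values are textbook-typical assumptions, not taken from the dissertation (curvature compensation, the dissertation's refinement, corrects the higher-order terms this linear model ignores):

```python
def bandgap_vref(T, Vbe0=0.65, dVbe_dT=-2e-3, K=None):
    """First-order bandgap reference: Vref = Vbe(T) + K*VT(T), VT = kT/q.
    With K chosen as -dVbe_dT/(k/q) (~23 here), the linear drifts cancel."""
    k_over_q = 8.617e-5            # Boltzmann constant over electron charge, V/K
    if K is None:
        K = -dVbe_dT / k_over_q    # cancels dVbe/dT against K*d(VT)/dT
    T0 = 300.0                     # reference temperature for the linearized Vbe
    Vbe = Vbe0 + dVbe_dT * (T - T0)
    return Vbe + K * k_over_q * T

# In this linear model the output is flat at ~1.25 V across temperature.
```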
Current Status of the Beam Position Monitoring System at TLS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuo, C. H.; Hu, K. H.; Chen, Jenny
2006-11-20
The beam position monitoring system is an important part of a synchrotron light source that supports its routine operation and studies of beam physics. The Taiwan Light Source is equipped with 59 BPMs. Highly precise closed orbits are measured by multiplexing BPMs. Data are acquired using multi-channel 16-bit ADC modules. Orbit data are sampled every millisecond. Fast orbit data are shared in a reflective memory network to support fast orbit feedback. Averaged data are updated to the control database at a rate of 10 Hz. A few new-generation digital BPMs were tested to evaluate their performance and functionality. This report summarizes the system structure, the software environment and the preliminary beam test of the BPM system.
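The 1 kHz orbit sampling and 10 Hz database updates described above amount to a simple block-averaging decimation. A minimal sketch follows; the rates come from the abstract, while the function, variable names, and simulated data are invented for illustration:

```python
import numpy as np

# Hypothetical sketch of averaging 1 kHz orbit samples down to 10 Hz updates.
FAST_RATE_HZ = 1000   # orbit samples every millisecond
SLOW_RATE_HZ = 10     # averaged updates to the control database
BLOCK = FAST_RATE_HZ // SLOW_RATE_HZ  # 100 fast samples per slow update

def average_orbit(fast_samples: np.ndarray) -> np.ndarray:
    """Block-average a 1 kHz position stream down to 10 Hz."""
    n = (len(fast_samples) // BLOCK) * BLOCK  # drop any partial block
    return fast_samples[:n].reshape(-1, BLOCK).mean(axis=1)

# One second of simulated BPM readings: an offset plus random noise (mm).
rng = np.random.default_rng(0)
fast = 0.5 + rng.normal(scale=0.05, size=FAST_RATE_HZ)
slow = average_orbit(fast)
print(len(slow))  # 10 averaged points per second
```

Block averaging both decimates the stream and suppresses uncorrelated noise in each 10 Hz sample.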
Thalamic inhibition: diverse sources, diverse scales
Halassa, Michael M.; Acsády, László
2016-01-01
The thalamus is the major source of cortical inputs shaping sensation, action and cognition. Thalamic circuits are targeted by two major inhibitory systems: the thalamic reticular nucleus (TRN) and extra-thalamic inhibitory (ETI) inputs. A unifying framework of how these systems operate is currently lacking. Here, we propose that TRN circuits are specialized to exert thalamic control at different spatiotemporal scales. Local inhibition of thalamic spike rates prevails during attentional selection whereas global inhibition more likely during sleep. In contrast, the ETI (arising from basal ganglia, zona incerta, anterior pretectum and pontine reticular formation) provides temporally-precise and focal inhibition, impacting spike timing. Together, these inhibitory systems allow graded control of thalamic output, enabling thalamocortical operations to dynamically match ongoing behavioral demands. PMID:27589879
Current Status of the Beam Position Monitoring System at TLS
NASA Astrophysics Data System (ADS)
Kuo, C. H.; Hu, K. H.; Chen, Jenny; Lee, Demi; Wang, C. J.; Hsu, S. Y.; Hsu, K. T.
2006-11-01
The beam position monitoring system is an important part of a synchrotron light source that supports its routine operation and studies of beam physics. The Taiwan Light Source is equipped with 59 BPMs. Highly precise closed orbits are measured by multiplexing BPMs. Data are acquired using multi-channel 16-bit ADC modules. Orbit data are sampled every millisecond. Fast orbit data are shared in a reflective memory network to support fast orbit feedback. Averaged data are updated to the control database at a rate of 10 Hz. A few new-generation digital BPMs were tested to evaluate their performance and functionality. This report summarizes the system structure, the software environment and the preliminary beam test of the BPM system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, T. S.; DePoy, D. L.; Marshall, J. L.
Here, we report that meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is both stable in time and uniform over the sky to 1% precision or better. Past and current surveys have achieved photometric precision of 1%–2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors (SCEs) using photometry from the Dark Energy Survey (DES) as an example. We first define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the SCEs caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane can be up to 2% in some bandpasses. We then compare the calculated SCEs with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput from auxiliary calibration systems. In conclusion, the residual after correction is less than 0.3%.
Moreover, we calculate such SCEs for Type Ia supernovae and elliptical galaxies and find that the chromatic errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.
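The source-color dependence of the calibration error described in this abstract can be illustrated with a toy synthetic-photometry calculation. In the sketch below the bandpasses and spectral energy distributions are invented stand-ins (top hats and power laws), not DES throughputs; it only demonstrates that the same bandpass perturbation shifts blue and red sources by different amounts:

```python
import numpy as np

# Toy illustration of a systematic chromatic error: a 2 nm bandpass shift
# changes the synthetic magnitude of a blue source and a red source by
# different amounts. All curves below are invented for demonstration.
lam = np.linspace(500.0, 600.0, 1001)  # wavelength grid, nm (uniform)

def synth_mag(flux, throughput):
    """Photon-counting synthetic magnitude (arbitrary zeropoint)."""
    num = (flux * throughput * lam).sum()
    den = (throughput * lam).sum()
    return -2.5 * np.log10(num / den)

natural = ((lam > 520) & (lam < 580)).astype(float)  # reference bandpass
shifted = ((lam > 522) & (lam < 582)).astype(float)  # perturbed bandpass

blue_sed = (lam / 550.0) ** -2.0  # flux falling with wavelength
red_sed = (lam / 550.0) ** +2.0   # flux rising with wavelength

# Chromatic error: magnitude change induced by the throughput perturbation.
d_blue = synth_mag(blue_sed, shifted) - synth_mag(blue_sed, natural)
d_red = synth_mag(red_sed, shifted) - synth_mag(red_sed, natural)
print(d_blue, d_red)  # opposite signs: the error depends on source color
```

A flat (gray) zeropoint correction cannot remove this difference, which is why the paper turns to auxiliary measurements of the atmospheric and instrumental transmission.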
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, T. S.; DePoy, D. L.; Marshall, J. L.
Meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is both stable in time and uniform over the sky to 1% precision or better. Past and current surveys have achieved photometric precision of 1%–2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors (SCEs) using photometry from the Dark Energy Survey (DES) as an example. We first define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the SCEs caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane can be up to 2% in some bandpasses. We then compare the calculated SCEs with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput from auxiliary calibration systems. The residual after correction is less than 0.3%.
Moreover, we calculate such SCEs for Type Ia supernovae and elliptical galaxies and find that the chromatic errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.
An ultra-stable voltage source for precision Penning-trap experiments
NASA Astrophysics Data System (ADS)
Böhm, Ch.; Sturm, S.; Rischka, A.; Dörr, A.; Eliseev, S.; Goncharov, M.; Höcker, M.; Ketter, J.; Köhler, F.; Marschall, D.; Martin, J.; Obieglo, D.; Repp, J.; Roux, C.; Schüssler, R. X.; Steigleder, M.; Streubel, S.; Wagner, Th.; Westermann, J.; Wieder, V.; Zirpel, R.; Melcher, J.; Blaum, K.
2016-08-01
An ultra-stable and low-noise 25-channel voltage source providing 0 to -100 V has been developed. It will supply stable bias potentials for Penning-trap electrodes used in high-precision experiments. The voltage source generates all its supply voltages via a specially designed transformer. Each channel can be operated either in a precision mode or can be dynamically ramped. A reference module provides reference voltages for all the channels, each of which includes a low-noise amplifier providing a gain of 10 in the output stage. A relative voltage stability of δV/V ≈ 2 × 10⁻⁸ has been demonstrated at -89 V over about 10 min.
Research on precise modeling of buildings based on multi-source data fusion of air to ground
NASA Astrophysics Data System (ADS)
Li, Yongqiang; Niu, Lubiao; Yang, Shasha; Li, Lixue; Zhang, Xitong
2016-03-01
To address the accuracy problem of precise building modeling, a test study was conducted on multi-source data covering the same test area, including roof data from airborne LiDAR, aerial orthophotos, and façade data from vehicle-borne LiDAR. After the top and bottom outlines of building clusters were accurately extracted, a series of qualitative and quantitative analyses of the 2D interval between the outlines was carried out. The results provide reliable accuracy support for precise building modeling based on air-ground multi-source data fusion; solutions to key technical problems are also discussed.
D'Avolio, Leonard W; Nguyen, Thien M; Goryachev, Sergey; Fiore, Louis D
2011-01-01
Despite at least 40 years of promising empirical performance, very few clinical natural language processing (NLP) or information extraction systems currently contribute to medical science or care. The authors address this gap by reducing the need for custom software and rules development with a graphical user interface-driven, highly generalizable approach to concept-level retrieval. A 'learn by example' approach combines features derived from open-source NLP pipelines with open-source machine learning classifiers to automatically and iteratively evaluate top-performing configurations. The Fourth i2b2/VA Shared Task Challenge's concept extraction task provided the data sets and metrics used to evaluate performance. Top F-measure scores for each of the tasks were medical problems (0.83), treatments (0.82), and tests (0.83). Recall lagged precision in all experiments. Precision was near or above 0.90 in all tasks. With no customization for the tasks and less than 5 min of end-user time to configure and launch each experiment, the average F-measure was 0.83, one point behind the mean F-measure of the 22 entrants in the competition. Strong precision scores indicate the potential of applying the approach to more specific clinical information extraction tasks. There was not one best configuration, supporting an iterative approach to model creation. Acceptable levels of performance can be achieved using fully automated and generalizable approaches to concept-level information extraction. The described implementation and related documentation are available for download.
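The metrics quoted above relate in the standard way. A minimal sketch follows, with invented true-positive/false-positive/false-negative counts chosen to reproduce the reported pattern (precision near 0.90, lagging recall, F-measure around 0.83):

```python
# Standard precision / recall / F-measure definitions for an extraction task.
# The counts below are hypothetical, chosen only to mirror the abstract's figures.
def f_measure(tp: int, fp: int, fn: int) -> tuple:
    precision = tp / (tp + fp)          # fraction of extracted concepts that are correct
    recall = tp / (tp + fn)             # fraction of true concepts that were extracted
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

p, r, f1 = f_measure(tp=830, fp=92, fn=248)
print(round(p, 2), round(r, 2), round(f1, 2))
```

The harmonic mean penalizes the lagging recall, which is why strong precision alone does not lift the F-measure above the mid-0.8 range.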
Precision absolute-value amplifier for a precision voltmeter
Hearn, W.E.; Rondeau, D.J.
1982-10-19
Bipolar inputs are afforded by the plus inputs of first and second differential input amplifiers. A first gain determining resistor is connected between the minus inputs of the differential amplifiers. First and second diodes are connected between the respective minus inputs and the respective outputs of the differential amplifiers. First and second FETs have their gates connected to the outputs of the amplifiers, while their respective source and drain circuits are connected between the respective minus inputs and an output lead extending to a load resistor. The output current through the load resistor is proportional to the absolute value of the input voltage difference between the bipolar input terminals. A third differential amplifier has its plus input terminal connected to the load resistor. A second gain determining resistor is connected between the minus input of the third differential amplifier and a voltage source. A third FET has its gate connected to the output of the third amplifier. The source and drain circuit of the third transistor is connected between the minus input of the third amplifier and a voltage-frequency converter, constituting an output device. A polarity detector is also provided, comprising a pair of transistors having their inputs connected to the outputs of the first and second differential amplifiers. The outputs of the polarity detector are connected to gates which switch the output of the voltage-frequency converter between up and down counting outputs.
Precision absolute value amplifier for a precision voltmeter
Hearn, William E.; Rondeau, Donald J.
1985-01-01
Bipolar inputs are afforded by the plus inputs of first and second differential input amplifiers. A first gain determining resistor is connected between the minus inputs of the differential amplifiers. First and second diodes are connected between the respective minus inputs and the respective outputs of the differential amplifiers. First and second FETs have their gates connected to the outputs of the amplifiers, while their respective source and drain circuits are connected between the respective minus inputs and an output lead extending to a load resistor. The output current through the load resistor is proportional to the absolute value of the input voltage difference between the bipolar input terminals. A third differential amplifier has its plus input terminal connected to the load resistor. A second gain determining resistor is connected between the minus input of the third differential amplifier and a voltage source. A third FET has its gate connected to the output of the third amplifier. The source and drain circuit of the third transistor is connected between the minus input of the third amplifier and a voltage-frequency converter, constituting an output device. A polarity detector is also provided, comprising a pair of transistors having their inputs connected to the outputs of the first and second differential amplifiers. The outputs of the polarity detector are connected to gates which switch the output of the voltage-frequency converter between up and down counting outputs.
Moody, Jonathan B; Lee, Benjamin C; Corbett, James R; Ficaro, Edward P; Murthy, Venkatesh L
2015-10-01
A number of exciting advances in PET/CT technology and improvements in methodology have recently converged to enhance the feasibility of routine clinical quantification of myocardial blood flow and flow reserve. Recent promising clinical results are pointing toward an important role for myocardial blood flow in the care of patients. Absolute blood flow quantification can be a powerful clinical tool, but its utility will depend on maintaining precision and accuracy in the face of numerous potential sources of methodological errors. Here we review recent data and highlight the impact of PET instrumentation, image reconstruction, and quantification methods, and we emphasize ⁸²Rb cardiac PET, which currently has the widest clinical application. It will be apparent that more data are needed, particularly in relation to newer PET technologies, as well as clinical standardization of PET protocols and methods. We provide recommendations for the methodological factors considered here. At present, myocardial flow reserve appears to be remarkably robust to various methodological errors; however, with greater attention to and more detailed understanding of these sources of error, the clinical benefits of stress-only blood flow measurement may eventually be more fully realized.
INSPECTION SHOP: PLAN TO PROVIDE UNCERTAINTY ANALYSIS WITH MEASUREMENTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nederbragt, W W
The LLNL inspection shop is chartered to make dimensional measurements of components for critical programmatic experiments. These measurements ensure that components are within tolerance and provide geometric details that can be used to further refine simulations. For these measurements to be useful, they must be significantly more accurate than the tolerances that are being checked. For example, if a part has a specified dimension of 100 millimeters and a tolerance of 1 millimeter, then the precision and/or accuracy of the measurement should be less than 1 millimeter. Using the "10-to-1 gaugemaker's rule of thumb", the desired precision of the measurement should be less than 100 micrometers. Currently, the process for associating measurement uncertainty with data is not standardized, nor is the uncertainty based on a thorough uncertainty analysis. The goal of this project is to begin providing measurement uncertainty statements with critical measurements performed in the inspection shop. To accomplish this task, comprehensive knowledge about the underlying sources of uncertainty for measurement instruments needs to be understood and quantified. Moreover, measurements of elemental uncertainties for each physical source need to be combined in a meaningful way to obtain an overall measurement uncertainty.
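Combining elemental uncertainties "in a meaningful way" is conventionally done by root-sum-square of independent 1-sigma components, as in the GUM approach. A minimal sketch with a hypothetical uncertainty budget (the component values are invented, only the 100-micrometer target comes from the text above):

```python
import math

# Root-sum-square combination of independent elemental uncertainties.
# The budget entries are hypothetical stand-ins for illustration.
def combined_uncertainty(components_um):
    """Combine independent 1-sigma components (micrometers) by root-sum-square."""
    return math.sqrt(sum(u * u for u in components_um))

# Hypothetical budget: instrument repeatability, thermal drift, fixturing.
components = [3.0, 4.0, 12.0]  # micrometers
u_c = combined_uncertainty(components)
print(u_c)  # 13.0 micrometers, well under the 100-micrometer target
```

Note how the largest component dominates: halving the 3 or 4 micrometer terms barely moves the total, which is the usual argument for attacking the biggest term in the budget first.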
Mechanisms underlying the temporal precision of sound coding at the inner hair cell ribbon synapse
Moser, Tobias; Neef, Andreas; Khimich, Darina
2006-01-01
Our auditory system is capable of perceiving the azimuthal location of a low frequency sound source with a precision of a few degrees. This requires the auditory system to detect time differences in sound arrival between the two ears down to tens of microseconds. The detection of these interaural time differences relies on network computation by auditory brainstem neurons sharpening the temporal precision of the afferent signals. Nevertheless, the system requires the hair cell synapse to encode sound with the highest possible temporal acuity. In mammals, each auditory nerve fibre receives input from only one inner hair cell (IHC) synapse. Hence, this single synapse determines the temporal precision of the fibre. As if this was not enough of a challenge, the auditory system is also capable of maintaining such high temporal fidelity with acoustic signals that vary greatly in their intensity. Recent research has started to uncover the cellular basis of sound coding. Functional and structural descriptions of synaptic vesicle pools and estimates for the number of Ca²⁺ channels at the ribbon synapse have been obtained, as have insights into how the receptor potential couples to the release of synaptic vesicles. Here, we review current concepts about the mechanisms that control the timing of transmitter release in inner hair cells of the cochlea. PMID:16901948
Precision and manufacturing at the Lawrence Livermore National Laboratory
NASA Technical Reports Server (NTRS)
Saito, Theodore T.; Wasley, Richard J.; Stowers, Irving F.; Donaldson, Robert R.; Thompson, Daniel C.
1994-01-01
Precision Engineering is one of the Lawrence Livermore National Laboratory's core strengths. This paper discusses the past and present technology transfer efforts of LLNL's Precision Engineering program and the Livermore Center for Advanced Manufacturing and Productivity (LCAMP). More than a year ago, the Precision Machine Commercialization project embodied several successful methods of transferring high technology from the National Laboratories to industry. Currently, LCAMP has already demonstrated successful technology transfer and is involved in a broad spectrum of current programs. In addition, this paper discusses other technologies ripe for future transition, including the Large Optics Diamond Turning Machine.
Precision and manufacturing at the Lawrence Livermore National Laboratory
NASA Astrophysics Data System (ADS)
Saito, Theodore T.; Wasley, Richard J.; Stowers, Irving F.; Donaldson, Robert R.; Thompson, Daniel C.
1994-02-01
Precision Engineering is one of the Lawrence Livermore National Laboratory's core strengths. This paper discusses the past and present technology transfer efforts of LLNL's Precision Engineering program and the Livermore Center for Advanced Manufacturing and Productivity (LCAMP). More than a year ago, the Precision Machine Commercialization project embodied several successful methods of transferring high technology from the National Laboratories to industry. Currently, LCAMP has already demonstrated successful technology transfer and is involved in a broad spectrum of current programs. In addition, this paper discusses other technologies ripe for future transition, including the Large Optics Diamond Turning Machine.
Precision control of multiple quantum cascade lasers for calibration systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taubman, Matthew S., E-mail: Matthew.Taubman@pnnl.gov; Myers, Tanya L.; Pratt, Richard M.
We present a precision, 1-A, digitally interfaced current controller for quantum cascade lasers, with demonstrated temperature coefficients for continuous and 40-kHz full-depth square-wave modulated operation of 1–2 ppm/°C and 15 ppm/°C, respectively. High-precision digital-to-analog converters (DACs) together with an ultra-precision voltage reference produce highly stable, precision voltages, which are selected by a multiplexer (MUX) chip to set output currents via a linear current regulator. The controller is operated in conjunction with a power multiplexing unit, allowing one of three lasers to be driven by the controller, while ensuring protection of the controller and all lasers during operation, standby, and switching. Simple ASCII commands sent over a USB connection to a microprocessor located in the current controller operate both the controller (via the DACs and MUX chip) and the power multiplexer.
NASA Astrophysics Data System (ADS)
Clayton, Steven; Chupp, Tim; Cude-Woods, Christopher; Currie, Scott; Ito, Takeyasu; Liu, Chen-Yu; Long, Joshua; MacDonald, Stephen; Makela, Mark; O'Shaughnessy, Christopher; Plaster, Brad; Ramsey, John; Saunders, Andy; LANL nEDM Collaboration
2017-09-01
The Los Alamos National Laboratory ultracold neutron (UCN) source was recently upgraded for a factor of 5 improvement in stored density, providing the statistical precision needed for a room-temperature neutron electric dipole moment (nEDM) measurement with sensitivity 3 × 10⁻²⁷ e·cm, a factor of 10 better than the limit set by the Sussex-RAL-ILL experiment. Here, we show results of a demonstration of Ramsey's separated oscillatory fields method on stored UCNs at the LANL UCN source and in a geometry relevant for an nEDM measurement. We argue that a world-leading nEDM experiment could be performed at LANL with existing technology and a short lead time, providing a physics result with sensitivity intermediate between the current limit set by Sussex-RAL-ILL and the anticipated limit from the complex, cryogenic nEDM experiment planned for the next decade at the ORNL Spallation Neutron Source (SNS-nEDM). This work was supported by the Los Alamos LDRD Program, Project 20140015DR.
Xu, Peng; Tian, Yin; Lei, Xu; Hu, Xiao; Yao, Dezhong
2008-12-01
How to localize neural electric activities within the brain effectively and precisely from scalp electroencephalogram (EEG) recordings is a critical issue for current studies in clinical neurology and cognitive neuroscience. In this paper, based on the charge source model and the iterative re-weighted strategy, a new maximum-neighbor-weight-based iterative sparse source imaging method is proposed, termed CMOSS (Charge source model based Maximum neighbOr weight Sparse Solution). Unlike the weight used in the focal underdetermined system solver (FOCUSS), where the weight for each point in the discrete solution space is independently updated in iterations, the newly designed weight for each point in each iteration is determined by the source solution of the last iteration at both the point and its neighbors. Using such a weight, the next iteration has a better chance of rectifying the local source location bias present in the previous iteration's solution. Simulation studies comparing CMOSS with FOCUSS and LORETA for various source configurations were conducted on a realistic 3-shell head model, and the results confirmed the validity of CMOSS for sparse EEG source localization. Finally, CMOSS was applied to localize sources elicited in a visual stimuli experiment, and the result was consistent with the source areas involved in visual processing reported in previous studies.
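The reweighting idea above can be sketched as a FOCUSS-style iteration in which each point's weight is taken from the largest |x| in its neighborhood rather than from the point alone. This is only a toy illustration of the weighting scheme the abstract describes: the lead-field matrix, the 1D wrap-around neighborhood, the source values, and the small regularizing floor are all invented stand-ins, not the paper's head model or algorithmic details:

```python
import numpy as np

# Toy iteratively-reweighted sparse solver with a neighbor-maximum weight.
rng = np.random.default_rng(1)
n_sensors, n_sources = 8, 40
A = rng.normal(size=(n_sensors, n_sources))   # stand-in "lead field" matrix
x_true = np.zeros(n_sources)
x_true[[5, 20]] = [1.0, -0.8]                 # two sparse simulated sources
b = A @ x_true                                # simulated scalp measurement

def neighbor_max(x, k=1):
    """Weight each point by the largest |x| in its (wrap-around, 1D) neighborhood."""
    ax = np.abs(x)
    return np.max([np.roll(ax, s) for s in range(-k, k + 1)], axis=0)

x = np.ones(n_sources)                        # flat initial estimate
for _ in range(30):
    w = neighbor_max(x) + 1e-6                # neighbor weight plus a small floor
    Aw = A * w                                # column scaling, i.e. A @ diag(w)
    y = np.linalg.pinv(Aw) @ b                # min-norm solution of (A diag(w)) y = b
    x = w * y                                 # map back to source space

print(np.flatnonzero(np.abs(x) > 0.1))        # indices of the recovered support
```

Because the weight of a point survives as long as any neighbor is active, a source whose location is slightly wrong in one iteration can migrate to the adjacent point in the next, which is the bias-correction behavior the abstract attributes to the neighbor weighting.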
Single-ping ADCP measurements in the Strait of Gibraltar
NASA Astrophysics Data System (ADS)
Sammartino, Simone; García Lafuente, Jesús; Naranjo, Cristina; Sánchez Garrido, José Carlos; Sánchez Leal, Ricardo
2016-04-01
In most Acoustic Doppler Current Profiler (ADCP) user manuals, it is recommended to apply ensemble averaging of the single-ping measurements in order to obtain reliable observations of the current speed. The random error of a single-ping measurement is typically too high for it to be used directly, while the averaging operation reduces the ensemble error by a factor of approximately √N, with N the number of averaged pings. A 75 kHz ADCP moored at the western exit of the Strait of Gibraltar, included in the long-term monitoring of the Mediterranean outflow, has recently served as a test setup for a different approach to current measurements. The ensemble averaging was disabled, while maintaining the internal coordinate conversion made by the instrument, and a series of single-ping measurements was collected every 36 seconds during a period of approximately 5 months. The huge amount of data was handled smoothly by the instrument, and no abnormal battery consumption was recorded. At the same time, a long and unique series of very high frequency current measurements was collected. Results of this novel approach have been exploited in a dual way: from a statistical point of view, the availability of single-ping measurements allows a real estimate of the (a posteriori) ensemble-average error of both current and ancillary variables. While the theoretical random error for horizontal velocity is estimated a priori as ~2 cm s⁻¹ for a 50-ping ensemble, the value obtained by the a posteriori averaging is ~15 cm s⁻¹, with an asymptotic behavior starting from an averaging size of 10 pings per ensemble. This result suggests the presence of external sources of random error (e.g., turbulence) of higher magnitude than the internal sources (ADCP intrinsic precision), which cannot be reduced by the ensemble averaging.
On the other hand, although the instrumental configuration is clearly not suitable for a precise estimation of turbulent parameters, some hints of the turbulent structure of the flow can be obtained by the empirical computation of zonal Reynolds stress (along the predominant direction of the current) and rate of production and dissipation of turbulent kinetic energy. All the parameters show a clear correlation with tidal fluctuations of the current, with maximum values coinciding with flood tides, during the maxima of the outflow Mediterranean current.
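The statistical point in the record above can be reproduced with a toy simulation: ensemble averaging shrinks white instrumental noise like 1/√N, but a slowly varying external contribution such as turbulence survives the averaging almost intact. The noise levels below are loosely based on the figures quoted in the abstract; the data, correlation length, and function names are invented:

```python
import numpy as np

# Sketch: why a-posteriori ensemble errors can plateau far above the
# theoretical 1/sqrt(N) prediction when an external correlated signal is present.
rng = np.random.default_rng(2)
N_PINGS = 50_000
instr = rng.normal(scale=14.0, size=N_PINGS)  # white instrumental noise, cm/s
# Slowly varying "turbulence": constant over stretches of 500 pings.
turb = np.repeat(rng.normal(scale=15.0, size=N_PINGS // 500), 500)

def ensemble_std(series, n):
    """Std of ensemble means formed over consecutive blocks of n pings."""
    m = (len(series) // n) * n
    return series[:m].reshape(-1, n).mean(axis=1).std(ddof=1)

# White noise alone shrinks like 1/sqrt(N) (14/sqrt(50) ≈ 2 cm/s) ...
e_instr = ensemble_std(instr, 50)
# ... but the correlated component passes through 50-ping averaging.
e_total = ensemble_std(instr + turb, 50)
print(e_instr, e_total)
```

The second number stays near the amplitude of the correlated component, mirroring the ~15 cm s⁻¹ plateau the mooring data showed against the ~2 cm s⁻¹ white-noise prediction.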
OVERVIEW OF MONO-ENERGETIC GAMMA-RAY SOURCES & APPLICATIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hartemann, F V; Albert, F; Anderson, G G
2010-05-18
Recent progress in accelerator physics and laser technology has enabled the development of a new class of tunable gamma-ray light sources based on Compton scattering between a high-brightness, relativistic electron beam and a high-intensity laser pulse produced via chirped-pulse amplification (CPA). A precision, tunable Mono-Energetic Gamma-ray (MEGa-ray) source driven by a compact, high-gradient X-band linac is currently under development and construction at LLNL. High-brightness, relativistic electron bunches produced by an X-band linac designed in collaboration with SLAC NAL will interact with a Joule-class, 10 ps, diode-pumped CPA laser pulse to generate tunable γ-rays in the 0.5-2.5 MeV photon energy range via Compton scattering. This MEGa-ray source will be used to excite nuclear resonance fluorescence in various isotopes. Applications include homeland security, stockpile science and surveillance, nuclear fuel assay, and waste imaging and assay. The source design, key parameters, and current status are presented, along with important applications, including nuclear resonance fluorescence. In conclusion, we have optimized the design of a high-brightness Compton scattering gamma-ray source, specifically designed for NRF applications. Two different parameter sets have been considered: one where the number of photons scattered in a single shot reaches approximately 7.5 × 10⁸, with a focal spot size around 8 µm; in the second set, the spectral brightness is optimized by using a 20 µm spot size, with 0.2% relative bandwidth.
Detecting misinformation and knowledge conflicts in relational data
NASA Astrophysics Data System (ADS)
Levchuk, Georgiy; Jackobsen, Matthew; Riordan, Brian
2014-06-01
Information fusion is required for many mission-critical intelligence analysis tasks. Using knowledge extracted from various sources, including entities, relations, and events, intelligence analysts respond to commanders' information requests, integrate facts into summaries about current situations, augment existing knowledge with inferred information, make predictions about the future, and develop action plans. However, information fusion solutions often fail because of conflicting and redundant knowledge contained in multiple sources. Most knowledge conflicts in the past were due to translation errors and reporter bias, and thus could be managed. Current and future intelligence analysis, especially in denied areas, must deal with open-source data processing, where there is a much greater presence of intentional misinformation. In this paper, we describe a model for detecting conflicts in multi-source textual knowledge. Our model is based on constructing semantic graphs representing patterns of multi-source knowledge conflicts and anomalies, and detecting these conflicts by matching pattern graphs against the data graph constructed using soft co-reference between entities and events in multiple sources. The conflict detection process maintains the uncertainty throughout all phases, providing full traceability and enabling incremental updates of the detection results as new knowledge or modifications to previously analyzed information are obtained. Detected conflicts are presented to analysts for further investigation. In the experimental study with the SYNCOIN dataset, our algorithms achieved perfect conflict detection in the ideal situation (no missing data) while producing 82% recall and 90% precision in a realistic noise situation (15% of missing attributes).
Precision half-life measurement of ¹¹C: The most precise mirror transition Ft value
NASA Astrophysics Data System (ADS)
Valverde, A. A.; Brodeur, M.; Ahn, T.; Allen, J.; Bardayan, D. W.; Becchetti, F. D.; Blankstein, D.; Brown, G.; Burdette, D. P.; Frentz, B.; Gilardy, G.; Hall, M. R.; King, S.; Kolata, J. J.; Long, J.; Macon, K. T.; Nelson, A.; O'Malley, P. D.; Skulski, M.; Strauss, S. Y.; Vande Kolk, B.
2018-03-01
Background: The precise determination of the Ft value in T = 1/2 mixed mirror decays is an important avenue for testing the standard model of the electroweak interaction through the determination of Vud in nuclear β decays. ¹¹C is an interesting case, as its low mass and small Q_EC value make it particularly sensitive to violations of the conserved vector current hypothesis. The present dominant source of uncertainty in the ¹¹C Ft value is the half-life. Purpose: A high-precision measurement of the ¹¹C half-life was performed, and a new world-average half-life was calculated. Method: ¹¹C was created by transfer reactions and separated using the TwinSol facility at the Nuclear Science Laboratory at the University of Notre Dame. It was then implanted into a tantalum foil, and β counting was used to determine the half-life. Results: The new half-life, t1/2 = 1220.27(26) s, is consistent with the previous values but significantly more precise. A new world average was calculated, t1/2(world) = 1220.41(32) s, and a new estimate for the Gamow-Teller to Fermi mixing ratio ρ is presented along with standard model correlation parameters. Conclusions: The new ¹¹C world-average half-life allows the calculation of an Ft(mirror) value that is now the most precise for all superallowed mixed mirror transitions. This gives a strong impetus for an experimental determination of ρ, to allow the determination of Vud from this decay.
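A "world average" of the kind computed above is conventionally an inverse-variance weighted mean of the individual measurements. In the sketch below only the new 1220.27(26) s value comes from the abstract; the older measurement is a hypothetical stand-in, so the output is illustrative and will not reproduce the published 1220.41(32) s average:

```python
import math

# Inverse-variance weighted average of independent measurements.
# values/errors: new 1220.27(26) s result plus one hypothetical older point.
def weighted_average(values, errors):
    """Return (mean, 1-sigma error) of an inverse-variance weighted average."""
    weights = [1.0 / e**2 for e in errors]       # precise points weigh more
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    err = 1.0 / math.sqrt(sum(weights))          # combined 1-sigma uncertainty
    return mean, err

values = [1220.27, 1221.8]   # seconds
errors = [0.26, 1.0]         # seconds, 1-sigma
mean, err = weighted_average(values, errors)
print(round(mean, 2), round(err, 2))
```

Note that the combined error is smaller than the best individual error, and the mean sits close to the most precise measurement, which is why a single high-precision half-life can dominate the world average.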
Guo, Feng; Zhou, Weijie; Li, Peng; Mao, Zhangming; Yennawar, Neela H; French, Jarrod B; Huang, Tony Jun
2015-06-01
Advances in modern X-ray sources and detector technology have made it possible for crystallographers to collect usable data on crystals of only a few micrometers or less in size. Despite these developments, sample handling techniques have significantly lagged behind and often prevent the full realization of current beamline capabilities. In order to address this shortcoming, a surface acoustic wave-based method for manipulating and patterning crystals is developed. This method, which does not damage the fragile protein crystals, can precisely manipulate and pattern micrometer- and submicrometer-sized crystals for data collection and screening. The technique is robust, inexpensive, and easy to implement. This method not only promises to significantly increase efficiency and throughput of both conventional and serial crystallography experiments, but will also make it possible to collect data on samples that were previously intractable. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Chen, Xiaojun; Xu, Lu; Wang, Wei; Li, Xing; Sun, Yi; Politis, Constantinus
2016-09-01
A surgical template is a guide for directing implant placement, tumor resection, osteotomy and bone repositioning. Using it, preoperative planning can be transferred to the actual surgical site, and the precision, safety and reliability of the surgery can be improved. However, the actual workflow of surgical template design and manufacturing is quite complicated before the final clinical application. The major goal of this paper is to provide a comprehensive reference source on the current and future development of template design and manufacturing for relevant researchers. Expert commentary: This paper presents a review of the necessary procedures in template-guided surgery, including image processing, 3D visualization, preoperative planning, and surgical guide design and manufacturing. In addition, template-guided clinical applications for various kinds of surgeries are reviewed, demonstrating that surgical precision is improved compared with non-guided operations.
Femtosecond timing distribution and control for next generation accelerators and light sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Li -Jin
Femtosecond Timing Distribution at LCLS: Free-electron lasers (FELs) have the capability of producing high photon flux from the IR to the hard x-ray wavelength range and of emitting femtosecond and eventually even attosecond pulses. This makes them an ideal tool for fundamental as well as applied research. Timing precision at the Linac Coherent Light Source (LCLS) between the x-ray FEL (XFEL) and ultrafast optical lasers is currently no better than 100 fs RMS. Ideally this precision should be much better, limited only by the x-ray pulse duration, which can be as short as a few femtoseconds. An increasing variety of science problems involving electron and nuclear dynamics in chemical and material systems will become accessible as the timing improves to a few femtoseconds. Advanced methods of electron beam conditioning or pulse injection could allow the FEL to achieve pulse durations of less than one femtosecond. The objective of the work described in this proposal is to set up an optical timing distribution system based on mode-locked erbium-doped fiber lasers at the LCLS facility to improve the timing precision in the facility and allow time stamping with 10 fs precision. The primary commercial applications for optical timing distribution systems are in the worldwide accelerator facilities and next-generation light sources community. It is reasonable to expect that at least three major XFELs will be built in the next decade. In addition there will be up to 10 smaller machines, such as FERMI in Italy and Maxlab in Sweden, plus the market for upgrading already existing facilities like Jefferson Lab. The total market is estimated to be on the order of 100 million US dollars. The company owns the exclusive rights to the IP covering the technology enabling sub-10 fs synchronization systems.
Testing this technology, which has set records in a laboratory environment, at LCLS, and hence in a real-world scenario, is an important cornerstone of bringing the technology to market.
Global GNSS processing based on the raw observation approach
NASA Astrophysics Data System (ADS)
Strasser, Sebastian; Zehentner, Norbert; Mayer-Gürr, Torsten
2017-04-01
Many global navigation satellite system (GNSS) applications, e.g. Precise Point Positioning (PPP), require high-quality GNSS products, such as precise GNSS satellite orbits and clocks. These products are routinely determined by analysis centers of the International GNSS Service (IGS). The current processing methods of the analysis centers make use of the ionosphere-free linear combination to reduce the ionospheric influence. Some of the analysis centers also form observation differences, in general double-differences, to eliminate several additional error sources. The raw observation approach is a new GNSS processing approach that was developed at Graz University of Technology for kinematic orbit determination of low Earth orbit (LEO) satellites and subsequently adapted to global GNSS processing in general. This new approach offers some benefits compared to well-established approaches, such as a straightforward incorporation of new observables due to the avoidance of observation differences and linear combinations. This becomes especially important in view of the changing GNSS landscape with two new systems, the European system Galileo and the Chinese system BeiDou, currently in deployment. GNSS products generated at Graz University of Technology using the raw observation approach currently comprise precise GNSS satellite orbits and clocks, station positions and clocks, code and phase biases, and Earth rotation parameters. To evaluate the new approach, products generated using the Global Positioning System (GPS) constellation and observations from the global IGS station network are compared to those of the IGS analysis centers. The comparisons show that the products generated at Graz University of Technology are on a similar level of quality to the products determined by the IGS analysis centers. This confirms that the raw observation approach is applicable to global GNSS processing. 
Some areas requiring further work have been identified, enabling future improvements of the method.
Factors controlling precision and accuracy in isotope-ratio-monitoring mass spectrometry
NASA Technical Reports Server (NTRS)
Merritt, D. A.; Hayes, J. M.
1994-01-01
The performance of systems in which picomole quantities of sample are mixed with a carrier gas and passed through an isotope-ratio mass spectrometer system was examined experimentally and theoretically. Two different mass spectrometers were used, both having electron-impact ion sources and Faraday cup collector systems. One had an accelerating potential of 10 kV and accepted 0.2 mL of He/min, producing, under those conditions, a maximum efficiency of 1 CO2 molecular ion collected per 700 molecules introduced. Comparable figures for the second instrument were 3 kV, 0.5 mL of He/min, and 14000 molecules/ion. Signal pathways were adjusted so that response times were <200 ms. Sample-related ion currents appeared as peaks with widths of 3-30 s. Isotope ratios were determined by comparison to signals produced by standard gases. In spite of rapid variations in signals, observed levels of performance were within a factor of 2 of shot-noise limits. For the 10-kV instrument, sample requirements for standard deviations of 0.1 and 0.5‰ were 45 and 1.7 pmol, respectively. Comparable requirements for the 3-kV instrument were 900 and 36 pmol. Drifts in instrumental characteristics were adequately neutralized when standards were observed at 20-min intervals. For the 10-kV instrument, computed isotopic compositions were independent of sample size and signal strength over the ranges examined. Nonlinearities of <0.04‰/V were observed for the 3-kV system. Procedures for observation and subtraction of background ion currents were examined experimentally and theoretically. For sample/background ratios varying from >10 to 0.3, precision is expected and observed to decrease approximately 2-fold and to depend only weakly on the precision with which background ion currents have been measured.
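The shot-noise limit referred to above can be estimated by counting the ions actually collected. A back-of-envelope sketch, assuming the limiting beam is the minor (m/z 45) CO2 ion at roughly 1.1% abundance; the function and numbers are illustrative, not taken from the paper:

```python
import math

AVOGADRO = 6.022e23

def shot_noise_rsd(pmol, molecules_per_ion, minor_fraction=0.011):
    """Shot-noise-limited relative standard deviation of an isotope
    ratio, dominated by counting statistics of the minor-ion beam.
    minor_fraction ~ 1.1% approximates the m/z 45 beam of CO2."""
    ions_total = pmol * 1e-12 * AVOGADRO / molecules_per_ion
    ions_minor = ions_total * minor_fraction
    return 1.0 / math.sqrt(ions_minor)  # fractional, 1-sigma

# 10-kV instrument: 700 molecules per ion, 45 pmol sample
rsd = shot_noise_rsd(45, 700)
print(f"shot-noise limit: {rsd * 1000:.3f} per mil")  # → 0.048 per mil
```

For the 10-kV instrument's 45 pmol sample this gives roughly 0.05 per mil, i.e. within a factor of 2 of the smallest quoted standard deviation for that sample size, matching the paper's statement about near-shot-noise performance.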
Determination of the mass of globular cluster X-ray sources
NASA Technical Reports Server (NTRS)
Grindlay, J. E.; Hertz, P.; Steiner, J. E.; Murray, S. S.; Lightman, A. P.
1984-01-01
The precise positions of the luminous X-ray sources in eight globular clusters have been measured with the Einstein X-Ray Observatory. When combined with similarly precise measurements of the dynamical centers and core radii of the globular clusters, the distribution of the X-ray source mass is determined to be in the range 0.9-1.9 solar mass. The X-ray source positions and the detailed optical studies indicate that (1) the sources are probably all of similar mass, (2) the gravitational potentials in these high-central density clusters are relatively smooth and isothermal, and (3) the X-ray sources are compact binaries and are probably formed by tidal capture.
Power supply system for negative ion source at IPR
NASA Astrophysics Data System (ADS)
Gahlaut, Agrajit; Sonara, Jashwant; Parmar, K. G.; Soni, Jignesh; Bandyopadhyay, M.; Singh, Mahendrajit; Bansal, Gourab; Pandya, Kaushal; Chakraborty, Arun
2010-02-01
The first step in the Indian program on negative ion beams is the setting up of the Negative Ion Experimental Assembly - RF based, where 100 kW of RF power shall be coupled to a plasma source producing plasma of density ~5 × 10^12 cm^-3, from which ~10 A of negative ion beam shall be produced and accelerated to 35 kV through an electrostatic ion accelerator. The experimental system is modelled on the RF-based negative ion source BATMAN, presently operating at IPP, Garching, Germany. The mechanical system for the Negative Ion Source Assembly is close to the IPP source; the remaining systems are designed and procured principally from indigenous sources, keeping the IPP configuration as a baseline. High voltage (HV) and low voltage (LV) power supplies are two key constituents of the experimental setup. The HV power supplies for extraction and acceleration are rated for high voltage (~15 to 35 kV) and high current (~15 to 35 A). Other attributes are fast rate of voltage rise (<5 ms), good regulation (<±1%), low ripple (<±2%), isolation (~50 kV), low energy content (<10 J) and fast cut-off (<100 μs). The LV supplies, required for biasing and for providing heating power to the cesium oven and the plasma grids, have attributes of low ripple, high stability, fast and precise regulation, programmability and remote operation. These power supplies are also equipped with over-voltage, over-current and current-limit (CC mode) protections. Fault diagnostics, to distinguish abnormal rises in current (breakdown faults) from over-currents, are enabled using a fast-response breakdown and over-current protection scheme. To restrict the fault energy deposited on the ion source, specially designed snubbers are implemented in each high voltage path (extraction and acceleration) to absorb the surge energy. Moreover, the monitoring, status and control signals from these power supplies are required to be electrically isolated (~50 kV) from the system.
The paper shall present the design basis, topology selection, manufacturing, testing, commissioning, integration and control strategy of these HVPS. A complete power interconnection scheme, which includes all protective and measuring devices, low- and high-voltage power supplies, monitoring and control signals etc., shall also be discussed. The paper also discusses the protocols involved in grounding and shielding, particularly in operating the system in an RF environment.
Precision Experiments with Ultraslow Muons
NASA Astrophysics Data System (ADS)
Mills, Allen P.
A source of ~10^5 ultraslow muons (USM) per second (~0.2 eV energy spread and 40 mm source diameter) reported by Miyake et al., and the demonstration of 100 K thermal muonium in vacuum by Antognini et al., suggest possibilities for substantial improvements in the experimental precision of the muonium 1S-2S interval and the muon g-2 measurements.
Chebabhi, Ali; Fellah, Mohammed Karim; Kessal, Abdelhalim; Benkhoris, Mohamed F
2016-07-01
In this paper, a new balancing three-level three-dimensional space vector modulation (B3L-3DSVM) strategy is proposed which uses redundant voltage vectors to realize precise, high-performance control of a three-phase three-level four-leg neutral point clamped (NPC) inverter based Shunt Active Power Filter (SAPF), in order to eliminate source current harmonics, reduce the magnitude of the neutral-wire current (eliminating the zero-sequence current produced by single-phase nonlinear loads), and compensate the reactive power in three-phase four-wire electrical networks. This strategy is used to generate the gate switching pulses, balance the dc bus capacitor voltages (keeping the voltages of the two dc bus capacitors equal), and at the same time reduce and fix the switching frequency of the inverter switches. Nonlinear Back-Stepping Controllers (NBSC) are used to regulate the dc bus capacitor voltages and the SAPF injected currents, ensuring robustness, stabilizing the system, and improving the response by eliminating the overshoot and undershoot of a traditional PI (Proportional-Integral) controller. Conventional three-level three-dimensional space vector modulation (C3L-3DSVM) and B3L-3DSVM are calculated and compared in terms of the error between the two dc bus capacitor voltages, SAPF output voltages, THDv and THDi of the source currents, magnitude of the source neutral-wire current, and reactive power compensation under unbalanced single-phase nonlinear loads. The success, robustness, and effectiveness of the proposed control strategies are demonstrated through simulation using Sim Power Systems and S-Function of MATLAB/SIMULINK. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
A review of computer-aided oral and maxillofacial surgery: planning, simulation and navigation.
Chen, Xiaojun; Xu, Lu; Sun, Yi; Politis, Constantinus
2016-11-01
Currently, oral and maxillofacial surgery (OMFS) still poses a significant challenge for surgeons due to the anatomic complexity and limited field of view of the oral cavity. With the great development of computer technologies, computer-aided surgery has been widely used for minimizing the risks and improving the precision of surgery. Areas covered: The major goal of this paper is to provide a comprehensive reference source on the current and future development of computer-aided OMFS, including surgical planning, simulation and navigation, for relevant researchers. Expert commentary: Compared with traditional OMFS, computer-aided OMFS overcomes the disadvantage that treatment of the anatomically complex maxillofacial region depends almost exclusively on the experience of the surgeon.
Analysis of impulse signals with Hylaty ELF station
NASA Astrophysics Data System (ADS)
Kulak, A.; Mlynarczyk, J.; Ostrowski, M.; Kubisz, J.; Michalec, A.
2012-04-01
Lightning discharges generate electromagnetic field pulses that propagate in the Earth-ionosphere waveguide. The attenuation in the ELF range is so small that pulses originating from strong atmospheric discharges can be observed even several thousand kilometers away from the individual discharge. The recorded waveform depends on the discharge process, the Earth-ionosphere waveguide properties on the source-receiver path, and the transfer function of the receiver. If the distance from the source is known, an inverse method can be used for reconstructing the current moment waveform and the charge moment of the discharge. In order to reconstruct the source parameters from the recorded signal, a reliable model of radio wave propagation in the Earth-ionosphere waveguide as well as practical signal processing techniques are necessary. We present two methods, both based on analytical formulas. The first method allows fast calculation of the charge moment of relatively short atmospheric discharges. It is based on peak amplitude measurement of the recorded magnetic component of the ELF EM field and takes into account the receiver characteristics. The second method, called the "inverse channel method", allows reconstructing the complete current moment waveform of strong atmospheric discharges that exhibit a continuing current phase, such as Gigantic Jets and Sprites. The method makes it possible to fully remove from the observed waveform the distortions related to the receiver's impulse response as well as the influence of the Earth-ionosphere propagation channel. Our ELF station is equipped with two magnetic antennas for measurement of the Bx and By components in the 0.03 to 55 Hz frequency range. ELF data recording has been carried out since 1993, with continuous data acquisition since 2005. The station features a low noise level and precise timing.
It is battery powered and located in a sparsely populated area, far from major electric power lines, which results in high-quality signal recordings and allows for precise calculations of the charge moments of upward discharges and strong cloud-to-ground discharges originating from distant sources. The same data is used for Schumann resonance observation. We demonstrate the use of our methods on recent recordings from the Hylaty ELF station. We include examples of GJ (Gigantic Jet) and TGF (Terrestrial Gamma-ray Flash) related discharges.
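The "inverse channel" reconstruction described in this abstract is based on analytical formulas; a generic numerical analogue of removing a known receiver impulse response is frequency-domain deconvolution, sketched here with entirely synthetic waveforms (none of the signals below come from the paper):

```python
import numpy as np

# Generic frequency-domain deconvolution: recover a source waveform
# from a recording that has been convolved with a known receiver
# impulse response. A numerical analogue of the analytical
# "inverse channel" idea, using synthetic data.
n = 1024
t = np.arange(n)

source = np.exp(-t / 50.0) * (t < 300)   # synthetic current-moment pulse
impulse = np.exp(-t / 10.0)              # synthetic receiver response
impulse /= impulse.sum()                 # normalize to unit DC gain
recorded = np.convolve(source, impulse)[:n]

# Water-level (regularized) deconvolution to avoid dividing by ~0
S = np.fft.rfft(recorded)
H = np.fft.rfft(impulse, n)
eps = 1e-3 * np.abs(H).max()
recovered = np.fft.irfft(S * np.conj(H) / (np.abs(H) ** 2 + eps**2), n)

err = np.abs(recovered - source).max() / source.max()
print(f"max reconstruction error: {err:.4f}")
```

The water-level term `eps` stands in for the noise handling a real receiver chain would need; without it, frequencies where the response is nearly zero would blow up the reconstruction.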
Lineal energy calibration of mini tissue-equivalent gas-proportional counters (TEPC)
NASA Astrophysics Data System (ADS)
Conte, V.; Moro, D.; Grosswendt, B.; Colautti, P.
2013-07-01
Mini TEPCs are cylindrical gas proportional counters with a sensitive-volume diameter of 1 mm or less. The lineal energy calibration of these tiny counters can be performed with an external gamma-ray source. However, to do that, first a method of obtaining a simple and precise spectral mark has to be found, and then the keV/μm value of this mark determined. A precise method (less than 1% uncertainty) to identify this mark is described here, and the lineal energy value of the mark has been measured for different simulated site sizes by using a 137Cs gamma source and a cylindrical TEPC equipped with a precision internal 244Cm alpha-particle source and filled with a propane-based tissue-equivalent gas mixture. Mini TEPCs can be calibrated in terms of lineal energy, by exposing them to 137Cs sources, with an overall uncertainty of about 5%.
Field Trials of the Multi-Source Approach for Resistivity and Induced Polarization Data Acquisition
NASA Astrophysics Data System (ADS)
LaBrecque, D. J.; Morelli, G.; Fischanger, F.; Lamoureux, P.; Brigham, R.
2013-12-01
Implementing systems of distributed receivers and transmitters for resistivity and induced polarization data is an almost inevitable result of the availability of wireless data communication modules and GPS modules offering precise timing and instrument locations. Such systems have a number of advantages; for example, they can be deployed around obstacles such as rivers, canyons, or mountains, which would be difficult with traditional 'hard-wired' systems. However, deploying a system of identical, small, battery-powered transceivers, each capable of injecting a known current and measuring the induced potential, has an additional and less obvious advantage in that multiple units can inject current simultaneously. The original purpose for using multiple simultaneous current sources (multi-source) was to increase signal levels. In traditional systems, to double the received signal you inject twice the current, which requires you to apply twice the voltage and thus four times the power. Alternatively, one approach to increasing signal levels for large-scale surveys collected using small, battery-powered transceivers is to allow multiple units to transmit in parallel. In theory, using four 400 watt transmitters on separate, parallel dipoles yields roughly the same signal as a single 6400 watt transmitter. Furthermore, implementing the multi-source approach creates the opportunity to apply more complex current flow patterns than simple, parallel dipoles. For a perfect, noise-free system, multi-sourcing adds no new information to a data set that contains a comprehensive set of data collected using single sources. However, for realistic, noisy systems, it appears that multi-source data can substantially impact survey results. In preliminary model studies, the multi-source data produced such startling improvements in subsurface images that even the authors questioned their veracity.
Between December 2012 and July 2013, we completed multi-source surveys at five sites with depths of exploration ranging from 150 to 450 m. The sites included shallow geothermal sites near Reno, Nevada, Pomarance, Italy, and Volterra, Italy; a mineral exploration site near Timmins, Ontario; and a landslide investigation near Vajont Dam in northern Italy. These sites provided a series of challenges in survey design and deployment, including some extremely difficult terrain and a broad range of background resistivity and induced polarization values. Despite these challenges, comparison of multi-source results to resistivity and induced polarization data collected with more traditional methods supports the thesis that the multi-source approach is capable of providing substantial improvements in both depth of penetration and resolution over conventional approaches.
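The 4 × 400 W ≈ 6400 W equivalence in this abstract follows from the quadratic relation between transmitter power and injected current that the abstract itself notes; a one-line check:

```python
# Scaling argument from the abstract: received signal is proportional
# to injected current, but transmitter power grows as current squared.
# So N equal transmitters on parallel dipoles give N times the signal
# of one unit, which a single transmitter could only match with
# N-squared times the power.
def equivalent_single_power(n_units, watts_each):
    # N units -> N x signal; one source needs (N x I)^2 -> N^2 x P
    return n_units**2 * watts_each

print(equivalent_single_power(4, 400))  # -> 6400 (watts), as in the text
print(equivalent_single_power(2, 100))  # doubling signal costs 4x power
```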
Development of a novel gamma probe for detecting radiation direction
NASA Astrophysics Data System (ADS)
Pani, R.; Pellegrini, R.; Cinti, M. N.; Longo, M.; Donnarumma, R.; D'Alessio, A.; Borrazzo, C.; Pergola, A.; Ridolfi, S.; De Vincentis, G.
2016-01-01
Spatial localization of radioactive sources is currently a main issue of interest in different fields, including the nuclear industry, homeland security, and medical imaging. It is currently achieved using different systems, but the development of technologies for detecting and characterizing radiation is becoming especially important in medical imaging. In this latter field, radiation detection probes have long been used to guide surgery, thanks to their ability to localize and quantify radiopharmaceutical uptake even deep in tissue. Radiolabelled colloid is injected into, or near to, the tumor, and the surgeon uses a hand-held radiation detector, the gamma probe, to identify lymph nodes with radiopharmaceutical uptake. The present work describes a novel scintigraphic goniometric probe to identify gamma radiation and its direction. The probe incorporates several scintillation crystals joined together in a particular configuration to provide data related to the position of a gamma source. The main technical characteristics of the gamma locator prototype, i.e. sensitivity, spatial resolution and detection efficiency, are investigated. Moreover, a specific procedure applied to the images permits retrieval of the source position with high precision with respect to currently used gamma probes. The presented device shows high sensitivity and efficiency in identifying gamma radiation in a short time (from 30 to 60 s). Even though it was designed for applications in radio-guided surgery, it could be used for other purposes, for example homeland security.
Pratte, Michael S.; Park, Young Eun; Rademaker, Rosanne L.; Tong, Frank
2016-01-01
If we view a visual scene that contains many objects, then momentarily close our eyes, some details persist while others seem to fade. Discrete models of visual working memory (VWM) assume that only a few items can be actively maintained in memory, beyond which pure guessing will emerge. Alternatively, continuous resource models assume that all items in a visual scene can be stored with some precision. Distinguishing between these competing models is challenging, however, as resource models that allow for stochastically variable precision (across items and trials) can produce error distributions that resemble random guessing behavior. Here, we evaluated the hypothesis that a major source of variability in VWM performance arises from systematic variation in precision across the stimuli themselves; such stimulus-specific variability can be incorporated into both discrete-capacity and variable-precision resource models. Participants viewed multiple oriented gratings, and then reported the orientation of a cued grating from memory. When modeling the overall distribution of VWM errors, we found that the variable-precision resource model outperformed the discrete model. However, VWM errors revealed a pronounced “oblique effect”, with larger errors for oblique than cardinal orientations. After this source of variability was incorporated into both models, we found that the discrete model provided a better account of VWM errors. Our results demonstrate that variable precision across the stimulus space can lead to an unwarranted advantage for resource models that assume stochastically variable precision. When these deterministic sources are adequately modeled, human working memory performance reveals evidence of a discrete capacity limit. PMID:28004957
Pratte, Michael S; Park, Young Eun; Rademaker, Rosanne L; Tong, Frank
2017-01-01
If we view a visual scene that contains many objects, then momentarily close our eyes, some details persist while others seem to fade. Discrete models of visual working memory (VWM) assume that only a few items can be actively maintained in memory, beyond which pure guessing will emerge. Alternatively, continuous resource models assume that all items in a visual scene can be stored with some precision. Distinguishing between these competing models is challenging, however, as resource models that allow for stochastically variable precision (across items and trials) can produce error distributions that resemble random guessing behavior. Here, we evaluated the hypothesis that a major source of variability in VWM performance arises from systematic variation in precision across the stimuli themselves; such stimulus-specific variability can be incorporated into both discrete-capacity and variable-precision resource models. Participants viewed multiple oriented gratings, and then reported the orientation of a cued grating from memory. When modeling the overall distribution of VWM errors, we found that the variable-precision resource model outperformed the discrete model. However, VWM errors revealed a pronounced "oblique effect," with larger errors for oblique than cardinal orientations. After this source of variability was incorporated into both models, we found that the discrete model provided a better account of VWM errors. Our results demonstrate that variable precision across the stimulus space can lead to an unwarranted advantage for resource models that assume stochastically variable precision. When these deterministic sources are adequately modeled, human working memory performance reveals evidence of a discrete capacity limit. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
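The discrete-capacity account described in these abstracts is usually formalized as a mixture of a von Mises error distribution (remembered items) and a uniform distribution (guesses). A minimal simulation sketch, with illustrative parameter values rather than fitted ones:

```python
import numpy as np

# Minimal sketch of the discrete-capacity mixture model: with
# probability p_mem an item is in memory and the report error is von
# Mises distributed around zero; otherwise the response is a random
# guess, uniform on the circle. Parameter values are illustrative.
rng = np.random.default_rng(1)

def simulate_errors(n, p_mem=0.7, kappa=8.0):
    in_memory = rng.random(n) < p_mem
    errors = np.where(in_memory,
                      rng.vonmises(0.0, kappa, n),     # remembered items
                      rng.uniform(-np.pi, np.pi, n))   # pure guesses
    return errors

errors = simulate_errors(100_000)
# Guessing flattens the tails: far-from-target errors are far more
# common than a single von Mises distribution would predict.
frac_far = np.mean(np.abs(errors) > np.pi / 2)
print(f"fraction of responses beyond 90 deg: {frac_far:.3f}")
```

With these parameters roughly half of the 30% guessed trials land beyond 90°, producing the heavy-tailed error distribution that makes the guessing and variable-precision accounts hard to tell apart.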
A comparison of cosmological models using strong gravitational lensing galaxies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Melia, Fulvio; Wei, Jun-Jie; Wu, Xue-Feng, E-mail: fmelia@email.arizona.edu, E-mail: jjwei@pmo.ac.cn, E-mail: xfwu@pmo.ac.cn
2015-01-01
Strongly gravitationally lensed quasar-galaxy systems allow us to compare competing cosmologies as long as one can be reasonably sure of the mass distribution within the intervening lens. In this paper, we assemble a catalog of 69 such systems from the Sloan Lens ACS and Lens Structure and Dynamics surveys suitable for this analysis, and carry out a one-on-one comparison between the standard model, ΛCDM, and the R_h = ct universe, which has thus far been favored by the application of model selection tools to other kinds of data. We find that both models account for the lens observations quite well, though the precision of these measurements does not appear to be good enough to favor one model over the other. Part of the reason is the so-called bulge-halo conspiracy that, on average, results in a baryonic velocity dispersion within a fraction of the optical effective radius virtually identical to that expected for the whole luminous-dark matter distribution modeled as a singular isothermal ellipsoid, though with some scatter among individual sources. Future work can greatly improve the precision of these measurements by focusing on lensing systems with galaxies as close as possible to the background sources. Given the limitations of doing precision cosmological testing using the current sample, we also carry out Monte Carlo simulations based on the current lens measurements to estimate how large the source catalog would have to be in order to rule out either model at a ∼99.7% confidence level. We find that if the real cosmology is ΛCDM, a sample of ∼200 strong gravitational lenses would be sufficient to rule out R_h = ct at this level of accuracy, while ∼300 strong gravitational lenses would be required to rule out ΛCDM if the real universe were instead R_h = ct. The difference in required sample size reflects the greater number of free parameters available to fit the data with ΛCDM.
We point out that, should the R_h = ct universe eventually emerge as the correct cosmology, its lack of any free parameters for this kind of work will provide a remarkably powerful probe of the mass structure in lensing galaxies, and a means of better understanding the origin of the bulge-halo conspiracy.
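The isothermal lens model in this abstract ties the image geometry directly to the velocity dispersion; for the simpler singular isothermal sphere the Einstein radius has a standard closed form. A sketch with illustrative numbers (the dispersion and distance ratio below are assumptions, not values from the paper):

```python
import math

# Einstein radius of a singular isothermal sphere (SIS) lens:
#   theta_E = 4*pi*(sigma/c)^2 * D_ls/D_s
# sigma and the distance ratio below are illustrative values only.
C_LIGHT = 299_792_458.0  # speed of light, m/s

def einstein_radius_arcsec(sigma_m_per_s, dls_over_ds):
    theta_rad = 4.0 * math.pi * (sigma_m_per_s / C_LIGHT) ** 2 * dls_over_ds
    return math.degrees(theta_rad) * 3600.0

# A typical lens galaxy: sigma = 250 km/s, D_ls/D_s = 0.5
theta_e = einstein_radius_arcsec(250e3, 0.5)
print(f"{theta_e:.2f} arcsec")  # → 0.90 arcsec
```

The quadratic dependence on σ is why the bulge-halo conspiracy matters: any bias in the inferred velocity dispersion propagates directly into the lensing constraint on the cosmology.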
Precision Spectrophotometric Calibration System for Dark Energy Instruments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schubnell, Michael S.
2015-06-30
For this research we built a precision calibration system and carried out measurements to demonstrate the precision that can be achieved with a high-precision spectrophotometric calibration system. It was shown that the system is capable of providing a complete spectrophotometric calibration at the sub-pixel level. The calibration system uses a fast, high-precision monochromator that can quickly and efficiently scan over an instrument's entire spectral range with a spectral line width of less than 0.01 nm, corresponding to a fraction of a pixel on the CCD. The system was extensively evaluated in the laboratory. Our research showed that a complete spectrophotometric calibration standard for spectroscopic survey instruments such as DESI is possible. The monochromator's precision and repeatability to a small fraction of the DESI spectrograph LSF were demonstrated, with re-initialization on every scan and thermal drift compensation by locking to multiple external line sources. A projector system that mimics the telescope aperture for a point source at infinity was also demonstrated.
A Review of Issues Related to Data Acquisition and Analysis in EEG/MEG Studies
Puce, Aina; Hämäläinen, Matti S.
2017-01-01
Electroencephalography (EEG) and magnetoencephalography (MEG) are non-invasive electrophysiological methods, which record electric potentials and magnetic fields due to electric currents in synchronously-active neurons. With MEG being more sensitive to neural activity from tangential currents and EEG being able to detect both radial and tangential sources, the two methods are complementary. Over the years, neurophysiological studies have changed considerably: high-density recordings are becoming de rigueur; there is interest in both spontaneous and evoked activity; and sophisticated artifact detection and removal methods are available. Improved head models for source estimation have also increased the precision of the current estimates, particularly for EEG and combined EEG/MEG. Because of their complementarity, more investigators are beginning to perform simultaneous EEG/MEG studies to gain more complete information about neural activity. Given the increase in methodological complexity in EEG/MEG, it is important to gather data that are of high quality and that are as artifact free as possible. Here, we discuss some issues in data acquisition and analysis of EEG and MEG data. Practical considerations for different types of EEG and MEG studies are also discussed. PMID:28561761
Method and apparatus for providing a precise amount of gas at a precise humidity
Hallman, Jr., Russell L.; Truett, James C.
2001-02-06
A fluid transfer system includes a permeable fluid carrier, a constant-temperature source of a first fluid, and a constant-pressure source of a second fluid. The fluid carrier has a length, an inlet end, and an outlet end. The constant-pressure source connects to the inlet end and communicates the second fluid into the fluid carrier, and the constant-temperature source surrounds at least a portion of the length. A mixture of the first fluid and the second fluid exits via the outlet end. A method of making a mixture of two fluids is also disclosed.
Double sided grating fabrication for high energy X-ray phase contrast imaging
Hollowell, Andrew E.; Arrington, Christian L.; Finnegan, Patrick; ...
2018-04-19
State of the art grating fabrication currently limits the maximum source energy that can be used in lab based x-ray phase contrast imaging (XPCI) systems. In order to move to higher source energies, and image high density materials or image through encapsulating barriers, new grating fabrication methods are needed. In this work we have analyzed a new modality for grating fabrication that involves precision alignment of etched gratings on both sides of a substrate, effectively doubling the thickness of the grating. Furthermore, we have achieved a front-to-backside feature alignment accuracy of 0.5 µm, demonstrating a methodology that can be applied to any grating fabrication approach, extending the attainable aspect ratios and allowing higher energy lab based XPCI systems.
Llinás, Rodolfo R.; Ustinin, Mikhail N.; Rykunov, Stanislav D.; Boyko, Anna I.; Sychev, Vyacheslav V.; Walton, Kerry D.; Rabello, Guilherme M.; Garcia, John
2015-01-01
A new method for the analysis and localization of brain activity has been developed, based on multichannel magnetic field recordings, over minutes, superimposed on the MRI of the individual. Here, a high-resolution Fourier transform is obtained over the entire recording period, leading to a detailed multi-frequency spectrum. Further analysis implements a total decomposition of the frequency components into functionally invariant entities, each having an invariant field pattern localizable in recording space. The method, addressed as functional tomography, makes it possible to find the distribution of magnetic field sources in space. Here, the method is applied to the analysis of simulated data, to oscillating signals activating a physical current-dipole phantom, and to recordings of spontaneous brain activity in 10 healthy adults. In the analysis of simulated data, 61 dipoles are localized with 0.7 mm precision. For the physical phantom, the method is able to localize three simultaneously activated current dipoles with 1 mm precision. A spatial resolution of 3 mm was attained when localizing spontaneous alpha rhythm activity in 10 healthy adults, where the alpha peak was specified for each subject individually. Co-registration of the functional tomograms with each subject's head MRI localized alpha-range activity to the occipital and/or posterior parietal brain region. This is the first application of this new functional tomography to human brain activity. The method successfully provides an overall view of brain electrical activity, a detailed spectral description and, combined with MRI, the localization of sources in anatomical brain space. PMID:26528119
Continuous-waveform constant-current isolated physiological stimulator
NASA Astrophysics Data System (ADS)
Holcomb, Mark R.; Devine, Jack M.; Harder, Rene; Sidorov, Veniamin Y.
2012-04-01
We have developed an isolated continuous-waveform constant-current physiological stimulator that is powered and controlled by a universal serial bus (USB) interface. The stimulator is composed of a custom printed circuit board (PCB), a 16-MHz MSP430F2618 microcontroller with two integrated 12-bit digital-to-analog converters (DAC0, DAC1), a high-speed H-bridge, a voltage-controlled current source (VCCS), isolated USB communication and power circuitry, two isolated transistor-transistor logic (TTL) inputs, and a serial 16 × 2 character liquid crystal display. The stimulators are designed to produce current stimuli in the range of ±15 mA indefinitely using a 20 V source and to be used in ex vivo cardiac experiments, but they are suitable for use in a wide variety of research or student experiments that require precision control of continuous waveforms or synchronization with external events. The device was designed with customization in mind and has features that allow it to be integrated into current and future experimental setups. The dual TTL inputs allow it to replace two or more traditional stimulators in common experimental configurations. The MSP430 software is written in C++ and compiled with IAR Embedded Workbench 5.20.2. A control program written in C++ runs on a Windows personal computer and has a graphical user interface that allows the user to control all aspects of the device.
Garcia-Cossio, Eliana; Witkowski, Matthias; Robinson, Stephen E; Cohen, Leonardo G; Birbaumer, Niels; Soekadar, Surjo R
2016-10-15
Transcranial direct current stimulation (tDCS) can influence cognitive, affective or motor brain functions. Whereas previous imaging studies demonstrated widespread tDCS effects on brain metabolism, direct impact of tDCS on electric or magnetic source activity in task-related brain areas could not be confirmed due to the difficulty of recording such activity simultaneously during tDCS. The aim of this proof-of-principle study was to demonstrate the feasibility of whole-head source localization and reconstruction of neuromagnetic brain activity during tDCS and to confirm the direct effect of tDCS on ongoing neuromagnetic activity in task-related brain areas. Here we show for the first time that tDCS has an immediate impact on slow cortical magnetic fields (SCF, 0-4 Hz) of task-related areas that are identical with brain regions previously described in metabolic neuroimaging studies. Fourteen healthy volunteers performed a choice reaction time (RT) task while whole-head magnetoencephalography (MEG) was recorded. Task-related source activity of SCFs was calculated using synthetic aperture magnetometry (SAM) in the absence of stimulation and while anodal, cathodal or sham tDCS was delivered over the right primary motor cortex (M1). Source reconstruction revealed task-related SCF modulations in brain regions that precisely matched prior metabolic neuroimaging studies. Anodal and cathodal tDCS had a polarity-dependent impact on RT and SCF in primary sensorimotor and medial centro-parietal cortices. Combining tDCS and whole-head MEG is a powerful approach to investigate the direct effects of transcranial electric currents on ongoing neuromagnetic source activity, brain function and behavior. Copyright © 2015 Elsevier Inc. All rights reserved.
Garcia-Cossio, Eliana; Witkowski, Matthias; Robinson, Stephen E.; Cohen, Leonardo G.; Birbaumer, Niels; Soekadar, Surjo R.
2016-01-01
Transcranial direct current stimulation (tDCS) can influence cognitive, affective or motor brain functions. Whereas previous imaging studies demonstrated widespread tDCS effects on brain metabolism, direct impact of tDCS on electric or magnetic source activity in task-related brain areas could not be confirmed due to the difficulty of recording such activity simultaneously during tDCS. The aim of this proof-of-principle study was to demonstrate the feasibility of whole-head source localization and reconstruction of neuromagnetic brain activity during tDCS and to confirm the direct effect of tDCS on ongoing neuromagnetic activity in task-related brain areas. Here we show for the first time that tDCS has an immediate impact on slow cortical magnetic fields (SCF, 0–4 Hz) of task-related areas that are identical with brain regions previously described in metabolic neuroimaging studies. Fourteen healthy volunteers performed a choice reaction time (RT) task while whole-head magnetoencephalography (MEG) was recorded. Task-related source activity of SCFs was calculated using synthetic aperture magnetometry (SAM) in the absence of stimulation and while anodal, cathodal or sham tDCS was delivered over the right primary motor cortex (M1). Source reconstruction revealed task-related SCF modulations in brain regions that precisely matched prior metabolic neuroimaging studies. Anodal and cathodal tDCS had a polarity-dependent impact on RT and SCF in primary sensorimotor and medial centro-parietal cortices. Combining tDCS and whole-head MEG is a powerful approach to investigate the direct effects of transcranial electric currents on ongoing neuromagnetic source activity, brain function and behavior. PMID:26455796
NASA Astrophysics Data System (ADS)
Fleck, Derek; Hoffnagle, John; Yiu, John; Chong, Johnston; Tan, Sze
2017-04-01
Methane source pinpointing and attribution is increasingly important because the vast natural gas distribution network has become a very large emission source. Ethane can be used as a tracer to distinguish biogenic from natural-gas sources. With sufficient sensitivity, this measurement can even distinguish between gas distributors, or indicate maturity through gas wetness. Here we present data obtained using a portable cavity ring-down spectrometer weighing less than 11 kg and consuming less than 35 W that simultaneously measures methane and ethane with raw 1-σ precisions of 50 ppb and 4.5 ppb, respectively, at 2 Hz. These precisions allow a C2:C1 ratio 1-σ measurement of <0.1% above 10 ppm in a single measurement. A second onboard laser allows a high-precision methane-only mode used for surveying and pinpointing; this mode measures at a rate faster than 4 Hz with a 1-σ precision of <3 ppb. Because methane seepages are highly variable due to air turbulence and mixing right above the ground, correlations between the variations in C2H6 and CH4 are used to derive a source C2:C1 ratio; additional hardware would be needed to measure the C2:C1 ratio instantaneously from steady-state concentrations. Source discrimination data of local leaks and methane sources using this analysis method are presented. Additionally, two-dimensional plume snapshots are constructed using an integrated onboard GPS to visualize horizontal-plane gas propagation.
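The correlation approach in the abstract above — deriving a source C2:C1 ratio from co-varying CH4 and C2H6 fluctuations rather than from absolute concentrations — can be illustrated with a minimal least-squares sketch (the function name and data layout are hypothetical, not the instrument's software):

```python
import numpy as np

def source_ratio(ch4_ppb, c2h6_ppb):
    """Estimate a source C2:C1 ratio from correlated fluctuations.

    Turbulence dilutes the plume, but the ethane/methane enhancement
    ratio above background is preserved, so the least-squares slope of
    ethane fluctuations against methane fluctuations recovers it.
    """
    ch4 = np.asarray(ch4_ppb, dtype=float)
    c2h6 = np.asarray(c2h6_ppb, dtype=float)
    dx = ch4 - ch4.mean()      # background-subtracted CH4 fluctuations
    dy = c2h6 - c2h6.mean()    # background-subtracted C2H6 fluctuations
    return float(np.dot(dx, dy) / np.dot(dx, dx))
```

Because both gases are diluted by the same turbulent mixing, the background-subtracted slope is insensitive to the highly variable absolute enhancements.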
Absolute calorimetric calibration of low energy brachytherapy sources
NASA Astrophysics Data System (ADS)
Stump, Kurt E.
In the past decade there has been a dramatic increase in the use of permanent radioactive source implants in the treatment of prostate cancer. A small radioactive source encapsulated in a titanium shell is used in this type of treatment. The radioisotopes used are generally 125I or 103Pd. Both of these isotopes have relatively short half-lives, 59.4 days and 16.99 days, respectively, and have low-energy emissions and a low dose rate. These factors make these sources well suited for this application, but the calibration of these sources poses significant metrological challenges. The current standard calibration technique involves the measurement of ionization in air to determine the source air-kerma strength. While this has proved to be an improvement over previous techniques, the method has been shown to be metrologically impure and may not be the ideal means of calibrating these sources. Calorimetric methods have long been viewed as the most fundamental means of determining source strength for a radiation source. This is because calorimetry provides a direct measurement of source energy. However, due to the low energy and low power of the sources described above, current calorimetric methods are inadequate. This thesis presents work oriented toward developing novel methods to provide direct and absolute measurements of source power for low-energy low-dose-rate brachytherapy sources. The method is the first use of an actively temperature-controlled radiation absorber using the electrical substitution method to determine the total contained source power of these sources. The instrument described operates at cryogenic temperatures. The method employed provides a direct measurement of source power. The work presented here is focused upon building a metrological foundation upon which to establish power-based calibrations of clinical-strength sources. To that end instrument performance has been assessed for these source strengths.
The intent is to establish the limits of the current instrument to direct further work in this field. It has been found that for sources with powers above approximately 2 μW the instrument is able to determine the source power in agreement to within 7% of what is expected based upon the current source strength standard. For lower power sources, the agreement is still within the uncertainty of the power measurement, but the calorimeter noise dominates. Thus, to provide absolute calibration of lower power sources additional measures must be taken. The conclusion of this thesis describes these measures and how they will improve the factors that limit the current instrument. The results of the work presented in this thesis establish the methodology of active radiometric calorimetry for the absolute calibration of radioactive sources. The method is an improvement over previous techniques in that the source measurements do not rely upon the thermal properties of the materials used or the heat flow pathways. The initial work presented here will help to shape future refinements of this technique to allow lower power sources to be calibrated with high precision and high accuracy.
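A minimal sketch of the electrical-substitution principle the thesis relies on (names and numbers are illustrative, not the instrument's actual data path): with the absorber servoed to a constant temperature, the source power equals the drop in electrical heater power, and the calorimeter noise enters through the scatter of the two heater-power averages.

```python
import math
import statistics

def source_power_uw(baseline_uw, with_source_uw):
    """Electrical-substitution calorimetry sketch.

    The absorber is held at a fixed temperature by a servoed electrical
    heater; inserting the radioactive source reduces the heater power by
    exactly the source power.  Returns (power, standard error) in µW.
    """
    power = statistics.mean(baseline_uw) - statistics.mean(with_source_uw)
    # Calorimeter noise enters via the scatter of the two averages.
    stderr = math.sqrt(
        statistics.variance(baseline_uw) / len(baseline_uw)
        + statistics.variance(with_source_uw) / len(with_source_uw))
    return power, stderr
```

This makes the noise-dominance point above concrete: for sub-µW sources the standard error term can exceed the substituted power itself.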
High precision locating control system based on VCM for Talbot lithography
NASA Astrophysics Data System (ADS)
Yao, Jingwei; Zhao, Lixin; Deng, Qian; Hu, Song
2016-10-01
To meet the high-precision and high-efficiency requirements of Z-direction locating in Talbot lithography, a control system based on a Voice Coil Motor (VCM) was designed. In this paper, we built a mathematical model of the VCM and analyzed its motion characteristics. A double closed-loop control strategy comprising a position loop and a current loop was implemented. The current loop was implemented by the driver, in order to achieve rapid following of the system current. The position loop was handled by a digital signal processor (DSP), with position feedback provided by high-precision linear scales. Feedforward control and position-feedback proportional-integral-derivative (PID) control were applied to compensate for dynamic lag and improve the response speed of the system. The high precision and efficiency of the system were verified by simulation and experiment. The results demonstrated that the performance of the Z-direction gantry was markedly improved, with high precision, quick response, strong real-time performance, and easy extension to higher precision.
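The position loop described above — feedforward to compensate dynamic lag plus position-feedback PID — can be sketched as a generic discrete-time controller (gains, names, and the interface are illustrative, not the DSP firmware):

```python
class FeedforwardPID:
    """Discrete position-loop controller: velocity feedforward + PID."""

    def __init__(self, kp, ki, kd, kff, dt):
        self.kp, self.ki, self.kd, self.kff = kp, ki, kd, kff
        self.dt = dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, setpoint_rate, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        # Feedforward on the commanded velocity compensates dynamic lag;
        # the PID terms act on the residual position error.
        return (self.kff * setpoint_rate + self.kp * error
                + self.ki * self.integral + self.kd * derivative)
```

The controller's output would drive the current-loop setpoint, matching the inner/outer loop split in the abstract.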
Characterization benches for neutrino telescope Optical Modules at the APC laboratory
NASA Astrophysics Data System (ADS)
Avgitas, Theodore; Creusot, Alexandre; Kouchner, Antoine
2016-04-01
As has been demonstrated by the first generation of neutrino telescopes, Antares and IceCube, precise knowledge of the photon detection efficiency of optical modules is of fundamental importance for the understanding of the instrument and accurate event reconstruction. Dedicated test benches have been developed to measure all related quantities for the Digital Optical Modules of the KM3NeT neutrino telescope currently being deployed in the Mediterranean Sea. The first bench is a black box with robotic arms equipped with a calibrated single-photon source or laser, which enables a precise mapping of the detection efficiency at arbitrary incident angles as well as precise measurements of the time delays induced by the photodetection chain. These measurements can be incorporated into and compared with full GEANT Monte Carlo simulations of the optical modules. The second bench is a 2 m × 2 m × 2 m water tank equipped with muon hodoscopes on top and bottom. It enables study and measurement of the angular dependence of the DOM's detection efficiency for the Cherenkov light produced in water by relativistic muons, thus reproducing in situ detection conditions. We describe these two benches and present their first results and status.
The Multi-energy High precision Data Processor Based on AD7606
NASA Astrophysics Data System (ADS)
Zhao, Chen; Zhang, Yanchi; Xie, Da
2017-11-01
This paper designs an information collector based on the AD7606 to realize high-precision simultaneous acquisition of multi-source information in multi-energy systems, forming the information platform of the energy Internet at Laogang, with electricity as its major energy source. Combined with information fusion technologies, the collected data are analyzed to improve the overall energy system scheduling capability and reliability.
Compact Short-Pulsed Electron Linac Based Neutron Sources for Precise Nuclear Material Analysis
NASA Astrophysics Data System (ADS)
Uesaka, M.; Tagi, K.; Matsuyama, D.; Fujiwara, T.; Dobashi, K.; Yamamoto, M.; Harada, H.
2015-10-01
An X-band (11.424 GHz) electron linac is under development as a neutron source for nuclear data studies for melted-fuel debris analysis and nuclear security in Fukushima. We originally developed the linac as a Compton scattering X-ray source. Quantitative material analysis and forensics for nuclear security will start several years after the safe settlement of the accident is established. For that purpose, we should now accumulate more precise nuclear data for U, Pu, etc., especially with epithermal (0.1-10 eV) neutrons. Therefore, we have decided to modify and install the linac in the core space of the experimental nuclear reactor "Yayoi", which is now under decommissioning. Owing to the compactness of the X-band linac, the electron gun, accelerating tube, and other components can be installed in a small space in the core. First we plan to perform time-of-flight (TOF) transmission measurements to study the total cross sections of the nuclei for 0.1-10 eV neutrons. For a TOF line shorter than 10 m, the pulse length of the generated neutrons should therefore be shorter than 100 ns. The electron energy, pulse length, power, and neutron yield are ~30 MeV, 100 ns - 1 μs, ~0.4 kW, and ~10^11 n/s (~10^3 n/cm^2/s at the samples), respectively. Optimization of the design of the neutron target (Ta, W, 238U), TOF line, and neutron detector (Ce:LiCAF, chosen for high sensitivity and fast response) is underway. We are upgrading the electron gun and buncher to realize higher current and beam power with a reasonable beam size in order to avoid damage to the neutron target. Although the neutron flux is limited for an X-band electron linac based source, we take advantage of its short pulses and its availability for nuclear data measurement with a short TOF system. First, we will set up a tentative configuration in the current Compton scattering experimental room in 2014.
Then, after the decommissioning has been finished, we will move the system to the "Yayoi" room and carry out operation and measurement.
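For the TOF transmission measurement described above, neutron energy follows from the non-relativistic kinetic energy over a known flight path, which is why a 10 m line and 0.1-10 eV neutrons demand pulses much shorter than the flight time. A small sketch (constants are CODATA/SI values; the function name is illustrative):

```python
NEUTRON_MASS_KG = 1.67492749804e-27   # CODATA neutron mass
J_PER_EV = 1.602176634e-19            # exact, by SI definition

def neutron_energy_ev(flight_path_m, tof_s):
    """Neutron kinetic energy from time of flight over a known path.

    Non-relativistic E = m v^2 / 2, an excellent approximation for the
    epithermal (0.1-10 eV) neutrons discussed above.
    """
    v = flight_path_m / tof_s
    return 0.5 * NEUTRON_MASS_KG * v * v / J_PER_EV
```

A 1 eV neutron travels about 13.8 km/s, so it crosses a 10 m line in roughly 0.7 ms; a 100 ns pulse therefore contributes only a ~10^-4 fractional timing smear.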
Multicore fibre photonic lanterns for precision radial velocity science
NASA Astrophysics Data System (ADS)
Gris-Sánchez, Itandehui; Haynes, Dionne M.; Ehrlich, Katjana; Haynes, Roger; Birks, Tim A.
2018-04-01
Incomplete fibre scrambling and fibre modal noise can degrade high-precision spectroscopic applications (typically high spectral resolution and high signal to noise). For example, it can be the dominating error source for exoplanet-finding spectrographs, limiting the maximum measurement precision possible with such facilities. This limitation is exacerbated in the next generation of infrared-based systems, as the number of modes supported by the fibre scales inversely with the wavelength squared and more modes typically equate to better scrambling. Substantial effort has been made by major research groups in this area to improve the fibre link performance by employing non-circular fibres, double scramblers, fibre shakers, and fibre stretchers. We present an original design of a multicore fibre (MCF) terminated with multimode photonic lantern ports. It is designed to act as a relay fibre with the coupling efficiency of a multimode fibre (MMF), modal stability similar to a single-mode fibre, and low loss over a wide range of wavelengths (380 nm to 860 nm). It provides phase and amplitude scrambling to achieve a stable near-field and far-field output illumination pattern despite input coupling variations, and low modal noise for increased stability in high signal-to-noise applications such as precision radial velocity (PRV) science. Preliminary results are presented for a 511-core MCF and compared with current state-of-the-art octagonal fibre.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rioja, M.; Dodson, R., E-mail: maria.rioja@icrar.org
2011-04-15
We describe a new method which achieves high-precision very long baseline interferometry (VLBI) astrometry in observations at millimeter (mm) wavelengths. It combines fast frequency-switching observations, to correct for the dominant non-dispersive tropospheric fluctuations, with slow source-switching observations, for the remaining ionospheric dispersive terms. We call this method source-frequency phase referencing. Provided that the switching cycles match the properties of the propagation media, one can recover the source astrometry. We present an analytic description of the two-step calibration strategy, along with an error analysis to characterize its performance. Also, we provide observational demonstrations of a successful application with observations using the Very Long Baseline Array at 86 GHz of the pairs of sources 3C274 and 3C273 and 1308+326 and 1308+328 under various conditions. We conclude that this method is widely applicable to mm-VLBI observations of many target sources, and unique in providing bona fide astrometrically registered images and high-precision relative astrometric measurements in mm-VLBI using existing and newly built instruments, including space VLBI.
NASA Astrophysics Data System (ADS)
Alvarez, Jose; Massey, Steven; Kalitsov, Alan; Velev, Julian
Nanopore sequencing via transverse current has emerged as a competitive candidate for mapping DNA methylation without the need for bisulfite treatment, fluorescent tags, or PCR amplification. By eliminating the error-producing amplification step, long read lengths become feasible, which greatly simplifies the assembly process and reduces the time and cost inherent in current technologies. However, due to the large error rates of nanopore sequencing, single-base resolution has not been reached. A very important source of noise is the intrinsic structural noise in the electric signature of the nucleotide, arising from the influence of neighboring nucleotides. In this work we perform calculations of the tunneling current through DNA molecules in nanopores using the non-equilibrium electron transport method within an effective multi-orbital tight-binding model derived from first-principles calculations. We develop a base-calling algorithm that accounts for the correlations of the current through neighboring bases, which in principle can reduce the error rate below any desired precision. Using this method we show that we can clearly distinguish DNA methylation and other base modifications based on the reading of the tunneling current.
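A base-calling scheme that exploits neighbor correlations can be sketched as a Viterbi decode over overlapping base pairs, where each current sample is modeled as depending on the (previous, current) pair. This is an illustrative reconstruction, not the authors' algorithm; the Gaussian emission model and the pair-mean table are assumptions:

```python
BASES = "ACGT"

def call_bases(signal, pair_means, sigma=1.0):
    """Viterbi base calling over overlapping base pairs.

    Each current sample is modeled as a Gaussian whose mean depends on
    the (previous, current) base pair, so neighbor correlations are
    decoded jointly instead of base by base.
    """
    pairs = [a + b for a in BASES for b in BASES]

    def loglik(p, s):
        return -((s - pair_means[p]) ** 2) / (2.0 * sigma ** 2)

    scores = {p: loglik(p, signal[0]) for p in pairs}
    backptrs = []
    for s in signal[1:]:
        new_scores, bp = {}, {}
        for p in pairs:
            # A pair q may precede p only if they overlap by one base.
            best_q = max((q for q in pairs if q[1] == p[0]),
                         key=lambda q: scores[q])
            new_scores[p] = scores[best_q] + loglik(p, s)
            bp[p] = best_q
        scores, backptrs = new_scores, backptrs + [bp]
    # Trace back from the best final pair, prepending one base per step.
    p = max(pairs, key=lambda q: scores[q])
    seq = p
    for bp in reversed(backptrs):
        p = bp[p]
        seq = p[0] + seq
    return seq
```

A sequence of n + 1 bases is recovered from n pair-dependent current samples; richer context (triplets, methylated variants) would enlarge the state alphabet in the same way.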
A design of optical measurement laboratory for space-based illumination condition emulation
NASA Astrophysics Data System (ADS)
Xu, Rong; Zhao, Fei; Yang, Xin
2015-10-01
Space Objects Identification (SOI) and related technology have aroused wide attention from spacefaring nations due to the increasingly severe space environment. Multiple ground-based assets have been employed to acquire statistical survey data, detect faint debris, and acquire photometric and spectroscopic data. Great efforts have been made to characterize different space objects using the statistical data acquired by telescopes. Furthermore, detailed laboratory data are needed to optimize the characterization of orbital debris and satellites via material composition and potential rotation axes, which calls for a high-precision and flexible optical measurement system. A typical method of taking optical measurements of a space object (or model) is to move the light source and sensors through every possible orientation around it while keeping the target still. However, moving equipment to accurate orientations in the air is difficult, especially for large precise instruments sensitive to vibration. Here, a rotation structure of "3+1" axes, with a three-axis turntable manipulating the attitude of the target and the sensor revolving around a single axis, is utilized to emulate every possible illumination condition in space, which also avoids the inconvenience of moving large apparatus. Firstly, the source-target-sensor orientation of a real satellite was analyzed, with vectors and coordinate systems built to illustrate their spatial relationship. By bending the Reference Coordinate Frame to the Phase Angle plane, the sensor only needs to revolve around a single axis while the other three degrees of freedom (DOF) are associated with the Euler angles of the satellite. Then, according to practical engineering requirements, an integrated rotation system of four-axis structure is brought forward. Schematic diagrams of the three-axis turntable and other equipment give an overview of the future laboratory layout.
Finally, proposals on environment arrangements, light source precautions and sensor selections are provided. Compared with current methods, this design shows better effects on device simplification, automatic control and high-precision measurement.
NASA Astrophysics Data System (ADS)
Moreenthaler, George W.; Khatib, Nader; Kim, Byoungsoo
2003-08-01
For two decades now, the use of Remote Sensing/Precision Agriculture to improve farm yields while reducing the use of polluting chemicals and the limited water supply has been a major goal. With world population growing exponentially, arable land being consumed by urbanization, and an unfavorable farm economy, farm efficiency must increase to meet future food requirements and to make farming a sustainable, profitable occupation. "Precision Agriculture" refers to a farming methodology that applies nutrients and moisture only where and when they are needed in the field. The real goal is to increase farm profitability by identifying the additional treatments of chemicals and water that increase revenues more than they increase costs and do not exceed pollution standards (constrained optimization). Even though the economic and environmental benefits appear to be great, Remote Sensing/Precision Agriculture has not grown as rapidly as early advocates envisioned. Technology for a successful Remote Sensing/Precision Agriculture system is now in place, but other needed factors have been missing. Commercial satellite systems can now image the Earth (multi-spectrally) with a resolution as fine as 2.5 m. Precision variable dispensing systems using GPS are now available and affordable. Crop models that predict yield as a function of soil, chemical, and irrigation parameter levels have been developed. Personal computers and internet access are now in place in most farm homes and can provide a mechanism for periodically disseminating advice on what quantities of water and chemicals are needed in specific regions of each field. Several processes have been selected that fuse the disparate sources of information on the current and historic states of the crop and soil, and the remaining resource levels available, with the critical decisions that farmers are required to make. These are done in a way that is easy for the farmer to understand and profitable to implement.
A "Constrained Optimization Algorithm" to further improve these processes will be presented. The objective function of the model will be used to maximize the farmer's profit by increasing yields while decreasing environmental damage and decreasing applications of costly treatments. This model will incorporate information from Remote Sensing, from in-situ weather sources, from soil history, and from tacit farmer knowledge of the relative productivity of selected "Management Zones" of the farm, to provide incremental advice throughout the growing season on the optimum usage of water and chemical treatments.
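The constrained optimization described above can be illustrated with a toy grid search: maximize revenue minus treatment cost subject to a pollution cap on nitrogen. The profit structure, yield function, and all numbers are hypothetical placeholders for the paper's crop models:

```python
def optimize_treatments(price, water_cost, n_cost, n_cap,
                        water_grid, n_grid, yield_fn):
    """Constrained profit maximization by exhaustive grid search.

    Chooses water and nitrogen levels maximizing revenue minus input
    cost, subject to a pollution cap on nitrogen application.
    """
    best = None
    for w in water_grid:
        for n in n_grid:
            if n > n_cap:
                continue  # violates the pollution standard
            profit = price * yield_fn(w, n) - water_cost * w - n_cost * n
            if best is None or profit > best[0]:
                best = (profit, w, n)
    return best
```

With a concave (diminishing-returns) yield model, the pollution cap binds exactly when the unconstrained profit-maximizing nitrogen level exceeds it, which is the trade-off the abstract names.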
Combination spindle-drive system for high precision machining
Gerth, Howard L.
1977-07-26
A combination spindle-drive is provided for fabrication of optical quality surface finishes. Both the spindle and drive utilize the spindle bearings for support, thereby removing the conventional drive-means bearings as a source of vibration. An air-bearing spindle is modified to carry at the drive end a highly conductive cup-shaped rotor which is aligned with a stationary stator to produce torque in the cup-shaped rotor through the reaction of eddy currents induced in the rotor. This arrangement eliminates magnetic attraction forces and all force is in the form of torque on the cup-shaped rotor.
Area estimation using multiyear designs and partial crop identification
NASA Technical Reports Server (NTRS)
Sielken, R. L., Jr.
1984-01-01
Statistical procedures were developed for large area assessments using both satellite and conventional data. Crop acreages, other ground cover indices, and measures of change were the principal characteristics of interest. These characteristics can be estimated from samples collected, possibly from several sources, at varying times and with different levels of identification. Multiyear analysis techniques were extended to include partially identified samples; the best current-year sampling design corresponding to a given sampling history was determined; weights reflecting the precision or confidence in each observation were identified and utilized; and the variation in estimates incorporating partially identified samples was quantified.
Study of the choice of the decoupling layout for the ITER ICRH system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vervier, M., E-mail: michel.vervier@rma.ac.be; Messiaen, A.; Ongena, J.
Ten decouplers are used to neutralize the mutual coupling effects and to control the current amplitudes of the 24-strap array of the ITER ICRH antenna in the case of current drive phasing. In the case of heating phasing only 4 decouplers are active and the array current control needs to act on the ratio between the powers delivered by the 4 generators. This ratio is very sensitive to the precise adjustment of the antenna array phasing. The maximum total radiated power capability is then limited when the power of one generator reaches its maximum value. With the addition of four switches all 10 installed decouplers are made active and can act on all mutual coupling effects with equal source power from the 4 generators. With four more switches the current drive phasing could work with a reduced poloidal phasing, resulting in a 35% increase of its coupling to the plasma.
High-Precision Measurement of the Ne19 Half-Life and Implications for Right-Handed Weak Currents
NASA Astrophysics Data System (ADS)
Triambak, S.; Finlay, P.; Sumithrarachchi, C. S.; Hackman, G.; Ball, G. C.; Garrett, P. E.; Svensson, C. E.; Cross, D. S.; Garnsworthy, A. B.; Kshetri, R.; Orce, J. N.; Pearson, M. R.; Tardiff, E. R.; Al-Falou, H.; Austin, R. A. E.; Churchman, R.; Djongolov, M. K.; D'Entremont, R.; Kierans, C.; Milovanovic, L.; O'Hagan, S.; Reeve, S.; Sjue, S. K. L.; Williams, S. J.
2012-07-01
We report a precise determination of the Ne19 half-life to be T1/2=17.262±0.007s. This result disagrees with the most recent precision measurements and is important for placing bounds on predicted right-handed interactions that are absent in the current standard model. We are able to identify and disentangle two competing systematic effects that influence the accuracy of such measurements. Our findings prompt a reassessment of results from previous high-precision lifetime measurements that used similar equipment and methods.
High-precision measurement of the 19Ne half-life and implications for right-handed weak currents.
Triambak, S; Finlay, P; Sumithrarachchi, C S; Hackman, G; Ball, G C; Garrett, P E; Svensson, C E; Cross, D S; Garnsworthy, A B; Kshetri, R; Orce, J N; Pearson, M R; Tardiff, E R; Al-Falou, H; Austin, R A E; Churchman, R; Djongolov, M K; D'Entremont, R; Kierans, C; Milovanovic, L; O'Hagan, S; Reeve, S; Sjue, S K L; Williams, S J
2012-07-27
We report a precise determination of the (19)Ne half-life to be T(1/2)=17.262±0.007 s. This result disagrees with the most recent precision measurements and is important for placing bounds on predicted right-handed interactions that are absent in the current standard model. We are able to identify and disentangle two competing systematic effects that influence the accuracy of such measurements. Our findings prompt a reassessment of results from previous high-precision lifetime measurements that used similar equipment and methods.
Precision of working memory for visual motion sequences and transparent motion surfaces
Zokaei, Nahid; Gorgoraptis, Nikos; Bahrami, Bahador; Bays, Paul M; Husain, Masud
2012-01-01
Recent studies investigating working memory for location, colour and orientation support a dynamic resource model. We examined whether this might also apply to motion, using random dot kinematograms (RDKs) presented sequentially or simultaneously. Mean precision for motion direction declined as sequence length increased, with precision being lower for earlier RDKs. Two alternative models of working memory were compared specifically to distinguish between the contributions of different sources of error that corrupt memory (Zhang & Luck (2008) vs. Bays et al (2009)). The latter provided a significantly better fit for the data, revealing that decrease in memory precision for earlier items is explained by an increase in interference from other items in a sequence, rather than random guessing or a temporal decay of information. Misbinding feature attributes is an important source of error in working memory. Precision of memory for motion direction decreased when two RDKs were presented simultaneously as transparent surfaces, compared to sequential RDKs. However, precision was enhanced when one motion surface was prioritized, demonstrating that selective attention can improve recall precision. These results are consistent with a resource model that can be used as a general conceptual framework for understanding working memory across a range of visual features. PMID:22135378
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gair, Jonathan R.; Tang, Christopher; Volonteri, Marta
One of the sources of gravitational waves for the proposed space-based gravitational wave detector, the Laser Interferometer Space Antenna (LISA), is the inspiral of compact objects into supermassive black holes in the centers of galaxies--extreme-mass-ratio inspirals (EMRIs). Using LISA observations, we will be able to measure the parameters of each EMRI system detected to very high precision. However, the statistics of the set of EMRI events observed by LISA will be more important in constraining astrophysical models than extremely precise measurements for individual systems. The black holes to which LISA is most sensitive are in a mass range that is difficult to probe using other techniques, so LISA provides an almost unique window onto these objects. In this paper we explore, using Bayesian techniques, the constraints that LISA EMRI observations can place on the mass function of black holes at low redshift. We describe a general framework for approaching inference of this type--using multiple observations in combination to constrain a parametrized source population. Assuming that the scaling of the EMRI rate with the black-hole mass is known and taking a black-hole distribution given by a simple power law, dn/dlnM = A_0 (M/M_*)^alpha_0, we find that LISA could measure the parameters to a precision of Delta(ln A_0) ~ 0.08 and Delta(alpha_0) ~ 0.03 for a reference model that predicts ~1000 events. Even with as few as 10 events, LISA should constrain the slope to a precision of ~0.3, which is the current level of observational uncertainty in the low-mass slope of the black-hole mass function. We also consider a model in which A_0 and alpha_0 evolve with redshift, but find that EMRI observations alone do not have much power to probe such an evolution.
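As a toy illustration of the population inference the abstract describes, the sketch below draws simulated events from a truncated power law dn/dlnM ∝ exp(alpha·lnM) and recovers the slope by maximum likelihood. All numbers here (mass range, event count, slope) are invented for illustration and are not the paper's reference model.

```python
import math
import random

def sample_lnM(alpha, a, b, n, rng):
    """Draw u = lnM from the truncated density p(u) ∝ exp(alpha*u) on [a, b]
    by inverting the CDF (alpha must be nonzero here)."""
    ea, eb = math.exp(alpha * a), math.exp(alpha * b)
    return [math.log(ea + rng.random() * (eb - ea)) / alpha for _ in range(n)]

def mle_alpha(us, a, b, lo=-3.0, hi=3.0):
    """Maximum-likelihood slope via ternary search; the log-likelihood of this
    exponential family is concave in alpha, so the search is well posed."""
    def loglik(alpha):
        if abs(alpha) < 1e-12:
            norm = b - a
        else:
            norm = (math.exp(alpha * b) - math.exp(alpha * a)) / alpha
        return alpha * sum(us) - len(us) * math.log(norm)
    for _ in range(80):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if loglik(m1) < loglik(m2):
            lo = m1
        else:
            hi = m2
    return (lo + hi) / 2

# Hypothetical mass window and slope, chosen only to exercise the code.
rng = random.Random(1)
a, b = math.log(1e4), math.log(1e7)
events = sample_lnM(-0.9, a, b, 2000, rng)
slope_hat = mle_alpha(events, a, b)
```

With 2000 simulated events the recovered slope lands within a few percent of the input, echoing the abstract's point that the constraint tightens with event count.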
Precision Control of Multiple Quantum Cascade Lasers for Calibration Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taubman, Matthew S.; Myers, Tanya L.; Pratt, Richard M.
We present a precision, digitally interfaced current controller for quantum cascade lasers, with demonstrated DC and modulated temperature coefficients of 1-2 ppm/°C and 15 ppm/°C, respectively. High-linearity digital-to-analog converters (DACs), together with an ultra-precision voltage reference, produce highly stable, precision voltages. These are in turn selected by a low charge-injection multiplexer (MUX) chip and used to set output currents via a linear current regulator. The controller is operated in conjunction with a power multiplexing unit, allowing one of three lasers to be driven by the controller while ensuring protection of the controller and all lasers during operation, standby, and switching. Simple ASCII commands sent over a USB connection to a microprocessor located in the current controller operate both the controller (via the DACs and MUX chip) and the power multiplexer.
A novel method for assessing chronic cortisol concentrations in dogs using the nail as a source.
Mack, Z; Fokidis, H B
2017-04-01
Cortisol, a glucocorticoid secreted in response to stress, is used to assess adrenal function and mental health in clinical settings. Current methods assess cortisol sources that reflect short-term secretion that can vary with current stress state. Here, we present a novel method for the extraction and quantification of cortisol from the dog nail using solid phase extraction coupled to enzyme-linked immunosorbent assay. Validation experiments demonstrated accuracy (r = 0.836, P < 0.001) precision (15.1% coefficients of variation), and repeatability (14.4% coefficients of variation) with this method. Furthermore, nail cortisol concentrations were positively correlated to an established hair cortisol method (r = 0.736, P < 0.001). Nail cortisol concentrations did not differ with dog sex, breed, age, or weights; however, sample size limitations may preclude statistical significance. Nail cortisol may provide information on cortisol secretion integrated over the time corresponding to nail growth and may be useful as a tool for diagnosing stress and adrenal disorders in dogs. Copyright © 2016 Elsevier Inc. All rights reserved.
Bimodal exciton-plasmon light sources controlled by local charge carrier injection.
Merino, Pablo; Rosławska, Anna; Große, Christoph; Leon, Christopher C; Kuhnke, Klaus; Kern, Klaus
2018-05-01
Electrical charges can generate photon emission in nanoscale quantum systems by two independent mechanisms. First, radiative recombination of pairs of oppositely charged carriers generates sharp excitonic lines. Second, coupling between currents and collective charge oscillations results in broad plasmonic bands. Both luminescence modes can be simultaneously generated upon charge carrier injection into thin C 60 crystallites placed in the plasmonic nanocavity of a scanning tunneling microscope (STM). Using the sharp tip of the STM as a subnanometer-precise local electrode, we show that the two types of electroluminescence are induced by two separate charge transport channels. Holes injected into the valence band promote exciton generation, whereas electrons extracted from the conduction band cause plasmonic luminescence. The different dynamics of the two mechanisms permit controlling their relative contribution in the combined bimodal emission. Exciton recombination prevails for low charge injection rates, whereas plasmon decay outshines for high tunneling currents. The continuous transition between both regimes is described by a rate model characterizing emission dynamics on the nanoscale. Our work provides the basis for developing blended exciton-plasmon light sources with advanced functionalities.
Precise Ortho Imagery as the Source for Authoritative Airport Mapping
NASA Astrophysics Data System (ADS)
Howard, H.; Hummel, P.
2016-06-01
As the aviation industry moves from paper maps and charts to the digital cockpit and electronic flight bag, producers of these products need current and accurate data to ensure flight safety. The FAA (Federal Aviation Administration) and ICAO (International Civil Aviation Organization) require certified suppliers to follow a defined protocol to produce authoritative map data for the aerodrome. Typical airport maps have been produced to meet 5 m accuracy requirements. The new digital aviation world is moving to 1 m accuracy maps to provide better situational awareness on the aerodrome. The commercial availability of 0.5 m satellite imagery combined with accurate ground control is enabling the production of avionics-certified 0.85 m orthophotos of airports around the globe. CompassData maintains an archive of over 400 airports as source data to support producers of the 1 m certified Aerodrome Mapping Database (AMDB) critical to flight safety and automated situational awareness. CompassData is a DO-200A certified supplier of authoritative orthoimagery, and attendees will learn how to utilize current airport imagery to build digital aviation mapping products.
NASA Astrophysics Data System (ADS)
Jiang, Ying; Zeng, Jie; Liang, Dakai; Ni, Xiaoyu; Luo, Wenyong
2013-06-01
Fiber alignment is very important in the fusion splicing process. The core of polarization-maintaining photonic crystal fiber (PM-PCF) cannot be seen in the splicer due to the microhole structure of its cross-section, so it is difficult to precisely align PM-PCF with conventional single-mode fiber (SMF). We demonstrate a novel method for precisely aligning PM-PCF and conventional SMF by online spectrum monitoring. First, a halogen lamp light source is connected to one end face of a conventional SMF. One end face of the PM-PCF and the other end face of the conventional SMF are then roughly aligned by observing visible light at the far end face of the PM-PCF: if visible light is present, the fibers are considered roughly aligned. The other end face of the PM-PCF and one end face of a second conventional SMF are aligned precisely in the other splicer by online spectrum monitoring. The halogen lamp is now replaced with a broadband light source with a 52 nm wavelength range, and the far end face of the second conventional SMF is connected to an optical spectrum analyzer. The fibers are translationally and rotationally adjusted in the splicer while monitoring the spectrum; the alignment is precise when the transmitted spectral power is at its maximum.
A Low-cost Environmental Control System for Precise Radial Velocity Spectrometers
NASA Astrophysics Data System (ADS)
Sliski, David H.; Blake, Cullen H.; Halverson, Samuel
2017-12-01
We present an environmental control system (ECS) designed to achieve millikelvin (mK) level temperature stability for small-scale astronomical instruments. This ECS is inexpensive and is primarily built from commercially available components. The primary application for our ECS is the high-precision Doppler spectrometer MINERVA-Red, where thermal variations of the optical components within the instrument represent a major source of systematic error. We demonstrate ±2 mK temperature stability within a 0.5 m^3 thermal enclosure using resistive heaters in conjunction with a commercially available PID controller and off-the-shelf thermal sensors. The enclosure is maintained above ambient temperature, enabling rapid cooling through heat dissipation into the surrounding environment. We demonstrate peak-to-valley (PV) temperature stability of better than 5 mK within the MINERVA-Red vacuum chamber, which is located inside the thermal enclosure, despite large temperature swings in the ambient laboratory environment. During periods of stable laboratory conditions, the PV variations within the vacuum chamber are less than 3 mK. This temperature stability is comparable to the best stability demonstrated for Doppler spectrometers currently achieving m s^-1 radial velocity precision. We discuss the challenges of using commercially available thermoelectrically cooled CCD cameras in a temperature-stabilized environment, and demonstrate that the effects of variable heat output from the CCD camera body can be mitigated using PID-controlled chilled water systems. The ECS presented here could potentially provide the stable operating environment required for future compact “astrophotonic” precise radial velocity (PRV) spectrometers to achieve high Doppler measurement precision with a modest budget.
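The heater-plus-PID scheme described above can be sketched with a toy first-order thermal model of an enclosure held above ambient. The heat capacity, loss coefficient, gains, and setpoint below are invented for illustration; a real system would be tuned against the actual enclosure, not this model.

```python
def simulate_pid(setpoint=303.0, ambient=295.0, steps=20000, dt=1.0,
                 kp=40.0, ki=0.5, kd=0.0):
    """Toy enclosure: heat capacity c (J/K) and loss coefficient k (W/K)
    are assumed values. A PID loop drives a resistive heater; the heater
    can only add heat, so commanded power is clamped at zero."""
    c = 5000.0   # heat capacity, J/K (assumed)
    k = 2.0      # heat loss to ambient, W/K (assumed)
    temp, integral, prev_err = ambient, 0.0, setpoint - ambient
    history = []
    for _ in range(steps):
        err = setpoint - temp
        integral += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        power = max(0.0, kp * err + ki * integral + kd * deriv)
        # forward-Euler step of dT/dt = (P_heater - k*(T - T_amb)) / c
        temp += (power - k * (temp - ambient)) / c * dt
        history.append(temp)
    return history

hist = simulate_pid()
```

The integral term is what removes the steady-state offset: in equilibrium the heater must still supply k*(setpoint - ambient) watts, and only the accumulated integral can hold that output with zero error, mirroring why the real ECS keeps the enclosure above ambient.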
Airborne Precision Spacing: A Trajectory-based Approach to Improve Terminal Area Operations
NASA Technical Reports Server (NTRS)
Barmore, Bryan
2006-01-01
Airborne Precision Spacing (APS) has been developed by the National Aeronautics and Space Administration (NASA) over the past seven years to take advantage of the flight deck's capability to precisely space an aircraft relative to another aircraft. This development has leveraged decades of work on improving terminal area operations, especially the arrival phase. With APS operations, the air traffic controller instructs the participating aircraft to achieve an assigned inter-arrival spacing interval at the runway threshold, relative to another aircraft. The flight crew then uses airborne automation to manage the aircraft's speed to achieve the goal. The spacing tool is designed to keep the speed within acceptable operational limits, promote system-wide stability, and meet the assigned goal. This reallocation of tasks, with the controller issuing strategic goals and the flight crew managing the tactical achievement of those goals, has been shown to be feasible through simulation and flight test. A precision of plus or minus 2-3 seconds is generally achievable. Simulations of long strings of arriving traffic show no signs of instabilities or compression waves. Subject pilots have rated the workload to be similar to current-day operations, and eye-tracking data substantiate this result. This paper will present a high-level review of research results over the past seven years from a variety of tests and experiments. The results will focus on the precision and accuracy achievable, flow stability, and some major sources of uncertainty. The paper also includes a summary of the flight crew's procedures and interface and a brief concept overview.
Evaluating measurements of carbon dioxide emissions using a precision source--A natural gas burner.
Bryant, Rodney; Bundy, Matthew; Zong, Ruowen
2015-07-01
A natural gas burner has been used as a precise and accurate source for generating large quantities of carbon dioxide (CO2) to evaluate emissions measurements at near-industrial scale. Two methods for determining carbon dioxide emissions from stationary sources are considered here: predicting emissions based on fuel consumption measurements-predicted emissions measurements, and direct measurement of emissions quantities in the flue gas-direct emissions measurements. Uncertainty for the predicted emissions measurement was estimated at less than 1%. Uncertainty estimates for the direct emissions measurement of carbon dioxide were on the order of ±4%. The relative difference between the direct emissions measurements and the predicted emissions measurements was within the range of the measurement uncertainty, therefore demonstrating good agreement. The study demonstrates how independent methods are used to validate source emissions measurements, while also demonstrating how a fire research facility can be used as a precision test-bed to evaluate and improve carbon dioxide emissions measurements from stationary sources. Fossil-fuel-consuming stationary sources such as electric power plants and industrial facilities account for more than half of the CO2 emissions in the United States. Therefore, accurate emissions measurements from these sources are critical for evaluating efforts to reduce greenhouse gas emissions. This study demonstrates how a surrogate for a stationary source, a fire research facility, can be used to evaluate the accuracy of measurements of CO2 emissions.
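The two accounting routes compared in the study can be sketched as a simple mass balance. The fuel composition and flow values below are placeholders (pure methane, arbitrary flows) chosen only to illustrate the comparison, not the facility's actual data.

```python
M_C, M_CO2, M_CH4 = 12.011, 44.010, 16.043  # molar masses, g/mol

def predicted_co2_kg_s(fuel_kg_s, carbon_mass_fraction=M_C / M_CH4):
    """Predicted-emissions route: fuel consumption times carbon content,
    scaled to CO2 by the molar-mass ratio (assumes complete combustion
    of pure methane unless another carbon fraction is given)."""
    return fuel_kg_s * carbon_mass_fraction * (M_CO2 / M_C)

def direct_co2_kg_s(flue_flow_mol_s, co2_mole_fraction):
    """Direct-emissions route: CO2 mole fraction measured in the flue gas
    times total molar flow, converted to mass."""
    return flue_flow_mol_s * co2_mole_fraction * M_CO2 / 1000.0

def relative_difference(direct, predicted):
    """Fractional disagreement between the two routes."""
    return (direct - predicted) / predicted

pred = predicted_co2_kg_s(0.050)        # 50 g/s of methane (placeholder)
meas = direct_co2_kg_s(88.0, 0.036)     # hypothetical flue measurement
```

Comparing `relative_difference(meas, pred)` against the direct-route uncertainty (±4% in the study) is the validation logic the abstract describes: agreement within the measurement uncertainty means the two independent routes corroborate each other.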
Analyzing γ rays of the Galactic Center with deep learning
NASA Astrophysics Data System (ADS)
Caron, Sascha; Gómez-Vargas, Germán A.; Hendriks, Luc; Ruiz de Austri, Roberto
2018-05-01
We present the application of convolutional neural networks to a particular problem in gamma ray astronomy. Explicitly, we use this method to investigate the origin of an excess emission of GeV γ rays in the direction of the Galactic Center, reported by several groups by analyzing Fermi-LAT data. Interpretations of this excess include γ rays created by the annihilation of dark matter particles and γ rays originating from a collection of unresolved point sources, such as millisecond pulsars. We train and test convolutional neural networks with simulated Fermi-LAT images based on point and diffuse emission models of the Galactic Center tuned to measured γ ray data. Our new method allows precise measurements of the contribution and properties of an unresolved population of γ ray point sources in the interstellar diffuse emission model. The current model predicts the fraction of unresolved point sources with an error of up to 10% and this is expected to decrease with future work.
Hart, Reece K; Rico, Rudolph; Hare, Emily; Garcia, John; Westbrook, Jody; Fusaro, Vincent A
2015-01-15
Biological sequence variants are commonly represented in scientific literature, clinical reports and databases of variation using the mutation nomenclature guidelines endorsed by the Human Genome Variation Society (HGVS). Despite the widespread use of the standard, no freely available and comprehensive programming libraries are available. Here we report an open-source and easy-to-use Python library that facilitates the parsing, manipulation, formatting and validation of variants according to the HGVS specification. The current implementation focuses on the subset of the HGVS recommendations that precisely describe sequence-level variation relevant to the application of high-throughput sequencing to clinical diagnostics. The package is released under the Apache 2.0 open-source license. Source code, documentation and issue tracking are available at http://bitbucket.org/hgvs/hgvs/. Python packages are available at PyPI (https://pypi.python.org/pypi/hgvs). Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.
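To illustrate the kind of parsing the library performs, here is a minimal, stdlib-only sketch that recognizes one narrow HGVS form: a simple coding-sequence substitution such as "NM_000518.4:c.118A>G". This is a hand-rolled approximation for illustration only; the actual hgvs package implements a full grammar covering far more of the specification (indels, duplications, intronic offsets, protein variants, and so on) and validates variants rather than merely tokenizing them.

```python
import re

# Illustrative pattern only: accession, sequence type, position, substitution.
HGVS_SUB = re.compile(
    r"^(?P<ac>[A-Z]+_\d+(?:\.\d+)?)"     # accession, e.g. NM_000518.4
    r":(?P<type>[cgmnpr])\."              # sequence type prefix
    r"(?P<pos>\d+)"                       # 1-based position
    r"(?P<ref>[ACGT])>(?P<alt>[ACGT])$"   # reference>alternate base
)

def parse_simple_substitution(s):
    """Return the components of a simple HGVS substitution as a dict,
    or raise ValueError for anything outside this narrow form."""
    m = HGVS_SUB.match(s)
    if not m:
        raise ValueError(f"not a simple HGVS substitution: {s!r}")
    return m.groupdict()

v = parse_simple_substitution("NM_000518.4:c.118A>G")
```

A real validator must also check the variant against the referenced sequence, which is exactly the sequence-level focus the abstract attributes to the package.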
Hart, Reece K.; Rico, Rudolph; Hare, Emily; Garcia, John; Westbrook, Jody; Fusaro, Vincent A.
2015-01-01
Summary: Biological sequence variants are commonly represented in scientific literature, clinical reports and databases of variation using the mutation nomenclature guidelines endorsed by the Human Genome Variation Society (HGVS). Despite the widespread use of the standard, no freely available and comprehensive programming libraries are available. Here we report an open-source and easy-to-use Python library that facilitates the parsing, manipulation, formatting and validation of variants according to the HGVS specification. The current implementation focuses on the subset of the HGVS recommendations that precisely describe sequence-level variation relevant to the application of high-throughput sequencing to clinical diagnostics. Availability and implementation: The package is released under the Apache 2.0 open-source license. Source code, documentation and issue tracking are available at http://bitbucket.org/hgvs/hgvs/. Python packages are available at PyPI (https://pypi.python.org/pypi/hgvs). Contact: reecehart@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25273102
NASA Astrophysics Data System (ADS)
WANG, Qingrong; ZHU, Changfeng
2017-06-01
Integration of distributed heterogeneous data sources is the key issues under the big data applications. In this paper the strategy of variable precision is introduced to the concept lattice, and the one-to-one mapping mode of variable precision concept lattice and ontology concept lattice is constructed to produce the local ontology by constructing the variable precision concept lattice for each subsystem, and the distributed generation algorithm of variable precision concept lattice based on ontology heterogeneous database is proposed to draw support from the special relationship between concept lattice and ontology construction. Finally, based on the standard of main concept lattice of the existing heterogeneous database generated, a case study has been carried out in order to testify the feasibility and validity of this algorithm, and the differences between the main concept lattice and the standard concept lattice are compared. Analysis results show that this algorithm above-mentioned can automatically process the construction process of distributed concept lattice under the heterogeneous data sources.
Principles of Precision Spectrophotometry: An Advanced Undergraduate Experiment
ERIC Educational Resources Information Center
Billmeyer, Fred W., Jr.
1974-01-01
Describes an experiment designed to familiarize students with the operation of a precision spectrophotometer, the effects of changes in operating variables, and the characteristics of such components as sources and detectors. (SLH)
Symons, William O.; Sumner, Esther J.; Paull, Charles K.; Cartigny, Matthieu J.B.; Xu, Jingping; Maier, Katherine L.; Lorenson, Thomas; Talling, Peter J.
2017-01-01
Submarine turbidity currents create some of the largest sediment accumulations on Earth, yet there are few direct measurements of these flows. Instead, most of our understanding of turbidity currents results from analyzing their deposits in the sedimentary record. However, the lack of direct flow measurements means that there is considerable debate regarding how to interpret flow properties from ancient deposits. This novel study combines detailed flow monitoring with unusually precisely located cores at different heights, and multiple locations, within the Monterey submarine canyon, offshore California, USA. Dating demonstrates that the cores include the time interval during which flows were monitored in the canyon, although individual layers cannot be tied to specific flows. There is good correlation between grain sizes collected by traps within the flow and grain sizes measured in cores from similar heights on the canyon walls. Synthesis of flow and deposit data suggests that turbidity currents sourced from the upper reaches of Monterey Canyon comprise three flow phases. Initially, a thin (38–50 m) powerful flow in the upper canyon can transport, tilt, and break the most proximal moorings and deposit chaotic sands and gravel on the canyon floor. The initially thin flow front then thickens and deposits interbedded sands and silty muds on the canyon walls as much as 62 m above the canyon floor. Finally, the flow thickens along its length, thus lofting silty mud and depositing it at greater altitudes than the previous deposits and in excess of 70 m altitude.
Ogden R. Lindsley and the historical development of precision teaching
Potts, Lisa; Eshleman, John W.; Cooper, John O.
1993-01-01
This paper presents the historical developments of precision teaching, a technological offshoot of radical behaviorism and free-operant conditioning. The sequence progresses from the scientific precursors of precision teaching and the beginnings of precision teaching to principal developments since 1965. Information about the persons, events, and accomplishments presented in this chronology was compiled in several ways. Journals, books, and conference presentations provided the essential information. The most important source for this account was Ogden Lindsley himself, because Lindsley and his students established the basic practices that define precision teaching. PMID:22478145
LandingNav: a precision autonomous landing sensor for robotic platforms on planetary bodies
NASA Astrophysics Data System (ADS)
Katake, Anup; Bruccoleri, Chrisitian; Singla, Puneet; Junkins, John L.
2010-01-01
Increased interest in the exploration of extraterrestrial planetary bodies calls for an increase in the number of spacecraft landing on remote planetary surfaces. Currently, imaging and radar based surveys are used to determine regions of interest and a safe landing zone. The purpose of this paper is to introduce LandingNav, a sensor system solution for autonomous landing on planetary bodies that enables landing on unknown terrain. LandingNav is based on a novel multiple field-of-view imaging system that leverages the integration of different state-of-the-art technologies for feature detection, tracking, and 3D dense stereo map creation. In this paper we present the test flight results of the LandingNav system prototype. Sources of error due to hardware limitations and processing algorithms were identified and will be discussed. This paper also shows that addressing the issues identified during the post-flight test data analysis will reduce the error to 1-2%, thus providing a high-precision 3D range map sensor system.
Sokol, Serguei; Millard, Pierre; Portais, Jean-Charles
2012-03-01
The problem of stationary metabolic flux analysis based on isotope labelling experiments first appeared in the early 1950s and was essentially solved in the early 2000s. Several algorithms and software packages are available for this problem. However, the generic stochastic algorithms (simulated annealing or evolutionary algorithms) currently used in this software require a lot of time to achieve acceptable precision. For deterministic algorithms, a common drawback is the lack of convergence stability for ill-conditioned systems or when started from a random point. In this article, we present a new deterministic algorithm with significantly increased numerical stability and accuracy of flux estimation compared with commonly used algorithms. It requires relatively short CPU time (from several seconds to several minutes on a standard PC architecture) to estimate fluxes in the central carbon metabolism network of Escherichia coli. The software package influx_s implementing this algorithm is distributed under an open-source licence at http://metasys.insa-toulouse.fr/software/influx/. Supplementary data are available at Bioinformatics online.
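The deterministic approach contrasted here with stochastic search is, at its core, derivative-based nonlinear least squares. As a generic illustration (not the influx_s algorithm itself, which adds stabilization for ill-conditioned systems), a bare Gauss-Newton iteration on a toy two-parameter model looks like this:

```python
import math

def gauss_newton(residual, jacobian, x0, iters=30):
    """Bare dense Gauss-Newton for tiny problems: solve (J^T J) dx = -J^T r
    at each step with Gaussian elimination and partial pivoting."""
    x = list(x0)
    for _ in range(iters):
        r, J = residual(x), jacobian(x)
        m, n = len(r), len(x)
        A = [[sum(J[k][i] * J[k][j] for k in range(m)) for j in range(n)]
             for i in range(n)]
        b = [-sum(J[k][i] * r[k] for k in range(m)) for i in range(n)]
        for c in range(n):                      # forward elimination
            p = max(range(c, n), key=lambda q: abs(A[q][c]))
            A[c], A[p] = A[p], A[c]
            b[c], b[p] = b[p], b[c]
            for q in range(c + 1, n):
                f = A[q][c] / A[c][c]
                for cc in range(c, n):
                    A[q][cc] -= f * A[c][cc]
                b[q] -= f * b[c]
        dx = [0.0] * n
        for c in range(n - 1, -1, -1):          # back substitution
            dx[c] = (b[c] - sum(A[c][q] * dx[q]
                                for q in range(c + 1, n))) / A[c][c]
        x = [xi + di for xi, di in zip(x, dx)]
    return x

# Toy model y = a * exp(-b * t), exact data generated from (a, b) = (2.0, 0.5).
ts = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2.0 * math.exp(-0.5 * t) for t in ts]
res = lambda x: [x[0] * math.exp(-x[1] * t) - y for t, y in zip(ts, ys)]
jac = lambda x: [[math.exp(-x[1] * t), -x[0] * t * math.exp(-x[1] * t)]
                 for t in ts]
fit = gauss_newton(res, jac, [1.5, 0.7])
```

The abstract's point about convergence stability maps directly onto this sketch: plain Gauss-Newton can stall or diverge when J^T J is ill-conditioned or the start point is poor, which is the weakness the published algorithm is designed to address.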
Capabilities and prospects of the East Asia Very Long Baseline Interferometry Network
NASA Astrophysics Data System (ADS)
An, T.; Sohn, B. W.; Imai, H.
2018-02-01
The very long baseline interferometry (VLBI) technique offers angular resolutions superior to any other instruments at other wavelengths, enabling unique science applications of high-resolution imaging of radio sources and high-precision astrometry. The East Asia VLBI Network (EAVN) is a collaborative effort in the East Asian region. The EAVN currently consists of 21 telescopes with diverse equipment configurations and frequency setups, allowing flexible subarrays for specific science projects. The EAVN provides the highest resolution of 0.5 mas at 22 GHz, allowing the fine imaging of jets in active galactic nuclei, high-accuracy astrometry of masers and pulsars, and precise spacecraft positioning. The soon-to-be-operational Five-hundred-meter Aperture Spherical radio Telescope (FAST) will open a new era for the EAVN. This state-of-the-art VLBI array also provides easy access to and crucial training for the burgeoning Asian astronomical community. This Perspective summarizes the status, capabilities and prospects of the EAVN.
Selective-area growth and controlled substrate coupling of transition metal dichalcogenides
NASA Astrophysics Data System (ADS)
Bersch, Brian M.; Eichfeld, Sarah M.; Lin, Yu-Chuan; Zhang, Kehao; Bhimanapati, Ganesh R.; Piasecki, Aleksander F.; Labella, Michael, III; Robinson, Joshua A.
2017-06-01
Developing a means for true bottom-up, selective-area growth of two-dimensional (2D) materials on device-ready substrates will enable synthesis in regions only where they are needed. Here, we demonstrate seed-free, site-specific nucleation of transition metal dichalcogenides (TMDs) with precise control over lateral growth by utilizing an ultra-thin polymeric surface functionalization capable of precluding nucleation and growth. This polymer functional layer (PFL) is derived from conventional photoresists and lithographic processing, and is compatible with multiple growth techniques, precursors (metal organics, solid-source) and TMDs. Additionally, we demonstrate that the substrate can play a major role in TMD transport properties. With proper TMD/substrate decoupling, top-gated field-effect transistors (FETs) fabricated with selectively-grown monolayer MoS2 channels are competitive with current reported MoS2 FETs. The work presented here demonstrates that substrate surface engineering is key to realizing precisely located and geometrically-defined 2D layers via unseeded chemical vapor deposition techniques.
SATELLITE-MOUNTED LIGHT SOURCES AS PHOTOMETRIC CALIBRATION STANDARDS FOR GROUND-BASED TELESCOPES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Albert, J., E-mail: jalbert@uvic.ca
2012-01-15
A significant and growing portion of the systematic error on a number of fundamental parameters in astrophysics and cosmology is due to uncertainties from absolute photometric and flux standards. A path toward a major reduction in such uncertainties may be provided by satellite-mounted light sources, resulting in an improved ability to precisely characterize atmospheric extinction, and thus helping to usher in the coming generation of precision results in astronomy. Using a campaign of observations of the 532 nm pulsed laser aboard the CALIPSO satellite, collected using a portable network of cameras and photodiodes, we obtain initial measurements of atmospheric extinction, which can apparently be greatly improved by further data of this type. For a future satellite-mounted precision light source, a high-altitude balloon platform under development (together with colleagues) can provide testing as well as observational data for calibration of atmospheric uncertainties.
Variable ratio beam splitter for laser applications
NASA Technical Reports Server (NTRS)
Brown, R. M.
1971-01-01
Beam splitter employing birefringent optics provides either widely different or precisely equal beam ratios, it can be used with laser light source systems for interferometry of lossy media, holography, scattering measurements, and precise beam ratio applications.
Constant-current control method of multi-function electromagnetic transmitter.
Xue, Kaichang; Zhou, Fengdao; Wang, Shuang; Lin, Jun
2015-02-01
Based on the requirements of controlled-source audio-frequency magnetotellurics, DC resistivity, and induced polarization surveys, a constant-current control method is proposed. Using the current waveforms required in prospecting as a standard, the causes of current waveform distortion and its effects on prospecting are analyzed. A cascaded topology is adopted to realize a 40 kW constant-current transmitter, and its response speed and precision are analyzed. According to the power circuit of the transmitting system, the circuit structure of the pulse width modulation (PWM) constant-current controller is designed. After establishing models of the power circuit of the transmitting system and of the PWM constant-current controller, analyzing the influence of ripple current, and designing an open-loop transfer function according to the amplitude-frequency characteristic curves, the parameters of the PWM constant-current controller are determined. The open-loop transfer function indicates that the loop gain is no less than 28 dB below 160 Hz, which ensures the response speed of the transmitting system; the phase margin is 45°, which ensures the stability of the transmitting system. Experimental results verify that the proposed constant-current control method keeps the control error below 4% and effectively suppresses load changes caused by the capacitance of the earth load.
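Loop-gain and phase-margin figures like those quoted above can be checked numerically for any candidate open-loop transfer function. The sketch below evaluates a hypothetical L(s) = K / (s(1 + sτ)), an integrator with one parasitic pole; the values of K and τ are invented for illustration and are not the controller design from the paper. It reports the loop gain at 160 Hz and the phase margin at the unity-gain crossover.

```python
import cmath
import math

# Hypothetical open-loop transfer function (illustrative values only):
# an integrator with one parasitic pole.
K, TAU = 3.0e4, 1.0e-4

def loop_tf(f_hz):
    s = 2j * math.pi * f_hz
    return K / (s * (1 + s * TAU))

def gain_db(f_hz):
    return 20.0 * math.log10(abs(loop_tf(f_hz)))

# Loop gain at 160 Hz (the abstract requires >= 28 dB below 160 Hz).
g160 = gain_db(160.0)

# Find the unity-gain crossover by bisection in log-frequency,
# then read the phase margin as 180 deg plus the loop phase there.
lo, hi = 1.0, 1.0e6
for _ in range(100):
    mid = math.sqrt(lo * hi)
    if gain_db(mid) > 0.0:
        lo = mid
    else:
        hi = mid
f_cross = math.sqrt(lo * hi)
phase_margin = 180.0 + math.degrees(cmath.phase(loop_tf(f_cross)))
```

With these illustrative values the gain at 160 Hz comes out just above 28 dB, mirroring the requirement in the abstract, though the resulting phase margin (about 32°) differs from the paper's 45° design point.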
Constant-current control method of multi-function electromagnetic transmitter
NASA Astrophysics Data System (ADS)
Xue, Kaichang; Zhou, Fengdao; Wang, Shuang; Lin, Jun
2015-02-01
Based on the requirements of controlled-source audio-frequency magnetotellurics, DC resistivity, and induced polarization surveys, a constant-current control method is proposed. Using the current waveforms required in prospecting as a standard, the causes of current waveform distortion and its effects on prospecting are analyzed. A cascaded topology is adopted to realize a 40 kW constant-current transmitter, and its response speed and precision are analyzed. According to the power circuit of the transmitting system, the circuit structure of the pulse width modulation (PWM) constant-current controller is designed. After establishing models of the power circuit of the transmitting system and of the PWM constant-current controller, analyzing the influence of ripple current, and designing an open-loop transfer function according to the amplitude-frequency characteristic curves, the parameters of the PWM constant-current controller are determined. The open-loop transfer function indicates that the loop gain is no less than 28 dB below 160 Hz, which ensures the response speed of the transmitting system; the phase margin is 45°, which ensures the stability of the transmitting system. Experimental results verify that the proposed constant-current control method keeps the control error below 4% and effectively suppresses load changes caused by the capacitance of the earth load.
Defining Uncertainty and Error in Planktic Foraminiferal Oxygen Isotope Measurements
NASA Astrophysics Data System (ADS)
Fraass, A. J.; Lowery, C.
2016-12-01
Foraminifera are the backbone of paleoceanography, and planktic foraminifera are one of the leading tools for reconstructing water column structure. Currently, there are unconstrained variables when dealing with the reproducibility of oxygen isotope measurements. This study presents the first results from a simple model of foraminiferal calcification (Foraminiferal Isotope Reproducibility Model; FIRM), designed to estimate the precision and accuracy of oxygen isotope measurements. FIRM produces synthetic isotope data using parameters including location, depth habitat, season, number of individuals included in a measurement, diagenesis, misidentification, size variation, and vital effects. Reproducibility is then tested using Monte Carlo simulations. The results from a series of experiments show that reproducibility is largely controlled by the number of individuals in each measurement, but is also strongly a function of local oceanography if the number of individuals is held constant. Parameters like diagenesis or misidentification affect both the precision and the accuracy of the data. Currently, FIRM is a tool for estimating isotopic error values that is best employed in the Holocene. It is also a tool for exploring the impact of myriad factors on the fidelity of paleoceanographic records. FIRM was constructed in the open-source computing environment R and is freely available via GitHub. We invite modification and expansion, and have planned inclusions for benthic foraminiferal reproducibility and stratigraphic uncertainty.
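FIRM itself is written in R and distributed via GitHub; the Python stand-in below only illustrates the core Monte Carlo idea, that reproducibility is dominated by the number of individuals averaged into each measurement. The distributions and every parameter value here are invented for the example, not taken from FIRM.

```python
import random
import statistics

random.seed(42)

def measure(n_individuals, pop_sd=0.5, analytical_sd=0.08, true_mean=-1.2):
    """One synthetic d18O measurement: the average of n individual shells
    drawn from a population (ecological/seasonal scatter, pop_sd),
    plus mass-spectrometer noise (analytical_sd). Units: per mil."""
    shells = [random.gauss(true_mean, pop_sd) for _ in range(n_individuals)]
    return sum(shells) / n_individuals + random.gauss(0.0, analytical_sd)

def reproducibility(n_individuals, replicates=2000):
    """FIRM-style Monte Carlo: standard deviation across replicate
    synthetic measurements, i.e. the expected reproducibility."""
    return statistics.stdev(measure(n_individuals) for _ in range(replicates))

sd_1 = reproducibility(1)    # single-shell measurements scatter widely
sd_20 = reproducibility(20)  # averaging 20 shells shrinks the scatter
```

The population scatter shrinks as 1/sqrt(n) while the analytical term does not, which is why precision saturates once enough individuals are pooled.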
Canadian Penning Trap Mass Measurements using a Position Sensitive MCP
NASA Astrophysics Data System (ADS)
Kuta, Trenton; Aprahamian, Ani; Marley, Scott; Nystrom, Andrew; Clark, Jason; Perez Galvan, Adrian; Hirsh, Tsviki; Savard, Guy; Orford, Rodney; Morgan, Graeme
2015-10-01
The primary focus of the Canadian Penning Trap (CPT) located at Argonne National Laboratory is to determine the masses of various isotopes produced in the spontaneous fission of californium. Currently, the CPT operates in conjunction with CARIBU at the ATLAS facility in an attempt to measure neutron-rich nuclei produced by a 1.5 Ci californium-252 source. The masses of nuclei produced in fission are determined by measuring the cyclotron frequency of the isotopes circling within the trap. This frequency is determined with a position-sensitive MCP, which records the relative position of the isotope in the trap at different times. Using these position changes over time in connection with a center spot, the angles between these positions are calculated and used to determine the frequency. Most of the work currently being conducted on the CPT is focused on the precision of these frequency measurements. The use of traps has revolutionized the measurement of nuclear masses to very high precision. The optimization methods employed here include focusing the beam in order to reduce the spread in the position of the isotope, as well as tuning the MR-ToF, a mass separator intended to remove contaminants from the beam. This work was supported by Grant PHY-1419765 for the University of Notre Dame.
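The angle-based frequency determination sketched above can be illustrated as follows: given two spot positions on the MCP, a center spot, and the integer number of full revolutions between the two times (assumed known, e.g. from a rough prior mass estimate), the cyclotron frequency follows from the accumulated phase. This is a simplified illustration, not the CPT analysis code; all names and numbers are invented.

```python
import math

def spot_angle(p, center):
    """Angle of an MCP spot relative to the center spot."""
    return math.atan2(p[1] - center[1], p[0] - center[0])

def cyclotron_frequency(p1, t1, p2, t2, center, n_full_turns):
    """Frequency from the phase advance between two MCP spot positions,
    assuming the integer number of full revolutions in between is known."""
    dphi = (spot_angle(p2, center) - spot_angle(p1, center)) % (2 * math.pi)
    total_phase = n_full_turns * 2 * math.pi + dphi
    return total_phase / (2 * math.pi * (t2 - t1))

# Synthetic ion: 1 MHz circular motion, 3 mm radius, centered at (0, 0).
F_TRUE = 1.0e6
def spot(t):
    return (3.0 * math.cos(2 * math.pi * F_TRUE * t),
            3.0 * math.sin(2 * math.pi * F_TRUE * t))

t1, t2 = 0.0, 1.00025e-3   # 1000.25 revolutions elapse between the spots
f_est = cyclotron_frequency(spot(t1), t1, spot(t2), t2, (0.0, 0.0), 1000)
```

The precision grows with the accumulated phase, which is why long observation times (many turns) between the two position snapshots pay off.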
NASA Astrophysics Data System (ADS)
Nottrott, A.; Hoffnagle, J.; Farinas, A.; Rella, C.
2014-12-01
Carbon monoxide (CO) is an urban pollutant generated by internal combustion engines which contributes to the formation of ground level ozone (smog). CO is also an excellent tracer for emissions from mobile combustion sources. In this work we present an optimized spectroscopic sampling scheme that enables enhanced precision CO measurements. The scheme was implemented on the Picarro G2401 Cavity Ring-Down Spectroscopy (CRDS) analyzer which measures CO2, CO, CH4 and H2O at 0.2 Hz. The optimized scheme improved the raw precision of CO measurements by 40% from 5 ppb to 3 ppb. Correlations of measured CO2, CO, CH4 and H2O from an urban tower were partitioned by wind direction and combined with a concentration footprint model for source attribution. The application of a concentration footprint for source attribution has several advantages. The upwind extent of the concentration footprint for a given sensor is much larger than the flux footprint. Measurements of mean concentration at the sensor location can be used to estimate source strength from a concentration footprint, while measurements of the vertical concentration flux are necessary to determine source strength from the flux footprint. Direct measurement of vertical concentration flux requires high frequency temporal sampling and increases the cost and complexity of the measurement system.
NASA Astrophysics Data System (ADS)
Scordo, A.; Curceanu, C.; Miliucci, M.; Shi, H.; Sirghi, F.; Zmeskal, J.
2018-04-01
Bragg spectroscopy is one of the best established experimental methods for high energy resolution X-ray measurements and has been widely used in several fields, ranging from fundamental physics and quantum mechanics tests to synchrotron radiation and X-FEL applications, astronomy, medicine and industry. However, this technique is limited to the measurement of photons produced from well collimated or point-like sources and becomes quite inefficient for photons coming from extended and diffused sources like those, for example, emitted in the radiative transitions of exotic atoms. The VOXES project's goal is to realise a prototype of a high resolution and high precision X-ray spectrometer, using Highly Annealed Pyrolitic Graphite (HAPG) crystals in the Von Hamos configuration, working also for extended sources. The aim is to deliver a cost effective system having an energy resolution at the level of eV for X-ray energies from about 2 keV up to tens of keV, able to perform sub-eV precision measurements with non point-like sources. In this paper, the working principle of VOXES is presented, together with first results.
Wang, Zhaohui; Witte, Russell S.
2015-01-01
Ultrasound current source density imaging (UCSDI), which has application to the heart and brain, exploits the acoustoelectric (AE) effect and Ohm's law to detect and map an electrical current distribution. In this study, we describe 4-D UCSDI simulations of a dipole field for comparison and validation with bench-top experiments. The simulations consider the properties of the ultrasound pulse as it passes through a conductive medium, the electric field of the injected dipole, and the lead field of the detectors. In the simulation, the lead fields of detectors and electric field of the dipole were calculated by the finite element (FE) method, and the convolution and correlation in the computation of the detected AE voltage signal were accelerated using 3-D fast Fourier transforms. In the bench-top experiment, an electric dipole was produced in a bath of 0.9% NaCl solution containing two electrodes, which injected an ac pulse (200 Hz, 3 cycles) ranging from 0 to 140 mA. Stimulating and recording electrodes were placed in a custom electrode chamber made on a rapid prototype printer. Each electrode could be positioned anywhere on an x-y grid (5 mm spacing) and individually adjusted in the depth direction for precise control of the geometry of the current sources and detecting electrodes. A 1-MHz ultrasound beam was pulsed and focused through a plastic film to modulate the current distribution inside the saline-filled tank. AE signals were simultaneously detected at a sampling frequency of 15 MHz on multiple recording electrodes. A single recording electrode is sufficient to form volume images of the current flow and electric potentials. The AE potential is sensitive to the distance from the dipole, but is less sensitive to the angle between the detector and the dipole. 
Multi-channel UCSDI potentially improves 4-D mapping of bioelectric sources in the body at high spatial resolution, which is especially important for diagnosing and guiding treatment of cardiac and neurologic disorders, including arrhythmia and epilepsy. PMID:24569247
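The FFT acceleration mentioned in the UCSDI abstract rests on the convolution theorem: a circular convolution becomes a pointwise product in the frequency domain. The sketch below demonstrates the identity in 1-D with a naive DFT (standing in for the 3-D FFTs used in the simulation); it illustrates the principle only and is not the UCSDI code.

```python
import cmath

def dft(x, inverse=False):
    """Naive O(n^2) DFT; an FFT computes the same transform faster."""
    n = len(x)
    sign = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(sign * 2j * cmath.pi * j * k / n)
               for k in range(n))
           for j in range(n)]
    return [v / n for v in out] if inverse else out

def circular_convolve(a, b):
    """conv(a, b) = IDFT(DFT(a) * DFT(b)); UCSDI applies the same identity
    with 3-D FFTs to accelerate the AE voltage-signal computation."""
    fa, fb = dft(a), dft(b)
    return [v.real for v in dft([x * y for x, y in zip(fa, fb)], inverse=True)]

def direct_circular_convolve(a, b):
    """Reference O(n^2) circular convolution for comparison."""
    n = len(a)
    return [sum(a[k] * b[(j - k) % n] for k in range(n)) for j in range(n)]

a = [1.0, 2.0, 0.0, -1.0]
b = [0.5, 0.0, 1.0, 0.0]
fast = circular_convolve(a, b)
slow = direct_circular_convolve(a, b)
```

For an n-point signal the frequency-domain route costs O(n log n) with a true FFT versus O(n^2) directly, which is what makes the 4-D simulations tractable.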
Proceedings of the Workshop on Improvements to Photometry
NASA Technical Reports Server (NTRS)
Borucki, W. J. (Editor); Young, A. T. (Editor)
1984-01-01
The purposes of the workshop were to determine what astronomical problems would benefit by increased photometric precision, determine the current level of precision, identify the processes limiting the precision, and recommend approaches to improving photometric precision. Twenty representatives of the university, industry, and government communities participated. Results and recommendations are discussed.
[Implementation of precision control to achieve the goal of schistosomiasis elimination in China].
Zhou, Xiao-nong
2016-02-01
The integrated strategy for schistosomiasis control with a focus on infectious source control, implemented since 2004, accelerated progress towards schistosomiasis control in China and achieved transmission control of the disease across the country by the end of 2015, meeting the overall objective of the Mid- and Long-term National Plan for Prevention and Control of Schistosomiasis (2004-2015) on schedule. In 2014, China proposed the new goal of schistosomiasis elimination by 2025. To achieve this new goal on schedule, we have to address the key remaining issues and implement precision control measures with more precise identification of control targets, so that the potential factors leading to a resurgence of schistosomiasis transmission can be completely eradicated and elimination achieved on schedule. Precision schistosomiasis control, a theoretical innovation applying precision medicine to schistosomiasis control, will provide new insights into schistosomiasis control. This paper describes the definition, interventions, and role of precision schistosomiasis control in the elimination of schistosomiasis in China, and argues that sustainable improvement of professionals and integrated control capability at the grass-roots level is a prerequisite for implementation, that precision schistosomiasis control is key to the further implementation of the integrated strategy with its focus on infectious source control, and that precision schistosomiasis control is a guarantee of curing schistosomiasis patients and implementing schistosomiasis control programs and interventions.
Near-IR trigonometric parallaxes of nearby stars in the Galactic plane using the VVV survey
NASA Astrophysics Data System (ADS)
Beamín, J. C.; Mendez, R. A.; Smart, R. L.; Jara, R.; Kurtev, R.; Gromadzki, M.; Villanueva, V.; Minniti, D.; Smith, L. C.; Lucas, P. W.
2017-07-01
We use multi-epoch KS-band observations covering a ˜5 year baseline to obtain milli- and sub-milliarcsecond precision astrometry for a sample of eighteen previously known high proper motion sources, including precise parallaxes for these sources for the first time. In this pioneering study we show the capability of the VVV project to measure high precision trigonometric parallaxes for very low mass stars (VLMS) out to distances of ˜400 pc, reaching farther than most other ground-based surveys or space missions for these types of stars. Two stars in our sample are low mass companions to sources in the TGAS catalog; the VVV astrometry of the fainter source is consistent within 1σ with the TGAS astrometry for the primary source, confirming the excellent astrometric quality of the VVV data even near saturated sources, as in these cases. Additionally, we used spectral energy distributions to search for evidence of unresolved binary systems and cool sub-dwarfs. We detected five systems that are most likely VLMS belonging to the Galactic halo based on their tangential velocities, and four objects within 60 pc that are likely members of the thick disk. A more comprehensive study of high proper motion sources and parallaxes of VLMS and brown dwarfs with the VVV is ongoing, including thousands of newly discovered objects (Kurtev et al. 2016).
Precision of working memory for visual motion sequences and transparent motion surfaces.
Zokaei, Nahid; Gorgoraptis, Nikos; Bahrami, Bahador; Bays, Paul M; Husain, Masud
2011-12-01
Recent studies investigating working memory for location, color, and orientation support a dynamic resource model. We examined whether this might also apply to motion, using random dot kinematograms (RDKs) presented sequentially or simultaneously. Mean precision for motion direction declined as sequence length increased, with precision being lower for earlier RDKs. Two alternative models of working memory were compared specifically to distinguish between the contributions of different sources of error that corrupt memory (W. Zhang & S. J. Luck, 2008 vs. P. M. Bays, R. F. G. Catalao, & M. Husain, 2009). The latter provided a significantly better fit for the data, revealing that decrease in memory precision for earlier items is explained by an increase in interference from other items in a sequence rather than random guessing or a temporal decay of information. Misbinding feature attributes is an important source of error in working memory. Precision of memory for motion direction decreased when two RDKs were presented simultaneously as transparent surfaces, compared to sequential RDKs. However, precision was enhanced when one motion surface was prioritized, demonstrating that selective attention can improve recall precision. These results are consistent with a resource model that can be used as a general conceptual framework for understanding working memory across a range of visual features.
Metering gun for dispensing precisely measured charges of fluid
NASA Technical Reports Server (NTRS)
Cook, T. A.; Scheibe, H. (Inventor)
1974-01-01
A cyclically operable fluid dispenser for use in dispensing precisely measured charges of potable water aboard spacecraft is described. The dispenser is characterized by (1) a sealed housing adapted to be held within a crewman's palm and coupled with a pressurized source of potable water; (2) a dispensing jet projected from the housing and configured to be received within a crewman's lips; (3) an expansible measuring chamber for measuring charges of drinking water received from the source; (4) and a dispenser actuator including a lever extended from the housing to be digitated for initiating operational cycles, whereby precisely measured charges of potable water selectively are delivered for drinking purposes in a weightless environment.
Precision Orbit Derived Atmospheric Density: Development and Performance
NASA Astrophysics Data System (ADS)
McLaughlin, C.; Hiatt, A.; Lechtenberg, T.; Fattig, E.; Mehta, P.
2012-09-01
Precision orbit ephemerides (POE) are used to estimate atmospheric density along the orbits of CHAMP (Challenging Minisatellite Payload) and GRACE (Gravity Recovery and Climate Experiment). The densities are calibrated against accelerometer derived densities, taking ballistic coefficient estimation results into account. The 14-hour density solutions are stitched together using a linear weighted blending technique to obtain continuous solutions over the entire mission life of CHAMP and through 2011 for GRACE. POE derived densities outperform the High Accuracy Satellite Drag Model (HASDM), Jacchia 71 model, and NRLMSISE-2000 model densities when comparing cross correlation and RMS with accelerometer derived densities. Drag is the largest error source for estimating and predicting orbits for low Earth orbit satellites. This is one of the major areas that should be addressed to improve overall space surveillance capabilities, in particular catalog maintenance. Generally, density is the largest error source in satellite drag calculations, and current empirical density models such as Jacchia 71 and NRLMSISE-2000 have significant errors. Dynamic calibration of the atmosphere (DCA) has provided measurable improvements to the empirical density models, and accelerometer derived densities of extremely high precision are available for a few satellites. However, DCA generally relies on observations of limited accuracy, and accelerometer derived densities are extremely limited in terms of measurement coverage at any given time. The goal of this research is to provide an additional data source using satellites that have precision orbits available, derived from Global Positioning System measurements and/or satellite laser ranging. These measurements strike a balance between the global coverage provided by DCA and the precise measurements of accelerometers.
The temporal resolution of the POE derived density estimates is around 20-30 minutes, which is significantly worse than that of accelerometer derived density estimates. However, major variations in density are observed in the POE derived densities. These POE derived densities in combination with other data sources can be assimilated into physics based general circulation models of the thermosphere and ionosphere with the possibility of providing improved density forecasts for satellite drag analysis. POE derived density estimates were initially developed using CHAMP and GRACE data so comparisons could be made with accelerometer derived density estimates. This paper presents the results of the most extensive calibration of POE derived densities compared to accelerometer derived densities and provides the reasoning for selecting certain parameters in the estimation process. The factors taken into account for these selections are the cross correlation and RMS performance compared to the accelerometer derived densities and the output of the ballistic coefficient estimation that occurs simultaneously with the density estimation. This paper also presents the complete data set of CHAMP and GRACE results and shows that the POE derived densities match the accelerometer densities better than empirical models or DCA. This paves the way to expand the POE derived densities to include other satellites with quality GPS and/or satellite laser ranging observations.
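The two figures of merit used above to score POE derived densities against accelerometer derived densities, cross correlation and RMS, can be written down directly. The sketch below is a generic implementation of both metrics applied to synthetic series; the data and names are invented for the example.

```python
import math

def cross_correlation(a, b):
    """Zero-lag Pearson cross correlation between two density series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def rms_error(a, b):
    """Root-mean-square difference between the two series."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

# Synthetic check: a scaled-and-shifted copy of a series correlates
# perfectly (CC = 1) even though its RMS difference is nonzero,
# which is why both metrics are reported together.
acc = [1.0, 1.4, 0.9, 1.8, 1.1]
poe = [2.0 * x + 0.3 for x in acc]
cc = cross_correlation(acc, poe)
rms = rms_error(acc, poe)
```

Cross correlation rewards matching the shape of density variations, while RMS penalizes absolute offsets; a density product must do well on both to beat the empirical models.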
A critical assessment of Mus musculus gene function prediction using integrated genomic evidence
Peña-Castillo, Lourdes; Tasan, Murat; Myers, Chad L; Lee, Hyunju; Joshi, Trupti; Zhang, Chao; Guan, Yuanfang; Leone, Michele; Pagnani, Andrea; Kim, Wan Kyu; Krumpelman, Chase; Tian, Weidong; Obozinski, Guillaume; Qi, Yanjun; Mostafavi, Sara; Lin, Guan Ning; Berriz, Gabriel F; Gibbons, Francis D; Lanckriet, Gert; Qiu, Jian; Grant, Charles; Barutcuoglu, Zafer; Hill, David P; Warde-Farley, David; Grouios, Chris; Ray, Debajyoti; Blake, Judith A; Deng, Minghua; Jordan, Michael I; Noble, William S; Morris, Quaid; Klein-Seetharaman, Judith; Bar-Joseph, Ziv; Chen, Ting; Sun, Fengzhu; Troyanskaya, Olga G; Marcotte, Edward M; Xu, Dong; Hughes, Timothy R; Roth, Frederick P
2008-01-01
Background: Several years after sequencing the human genome and the mouse genome, much remains to be discovered about the functions of most human and mouse genes. Computational prediction of gene function promises to help focus limited experimental resources on the most likely hypotheses. Several algorithms using diverse genomic data have been applied to this task in model organisms; however, the performance of such approaches in mammals has not yet been evaluated. Results: In this study, a standardized collection of mouse functional genomic data was assembled; nine bioinformatics teams used this data set to independently train classifiers and generate predictions of function, as defined by Gene Ontology (GO) terms, for 21,603 mouse genes; and the best performing submissions were combined in a single set of predictions. We identified strengths and weaknesses of current functional genomic data sets and compared the performance of function prediction algorithms. This analysis inferred functions for 76% of mouse genes, including 5,000 currently uncharacterized genes. At a recall rate of 20%, a unified set of predictions averaged 41% precision, with 26% of GO terms achieving a precision better than 90%. Conclusion: We performed a systematic evaluation of diverse, independently developed computational approaches for predicting gene function from heterogeneous data sources in mammals. The results show that currently available data for mammals allows predictions with both breadth and accuracy. Importantly, many highly novel predictions emerge for the 38% of mouse genes that remain uncharacterized. PMID:18613946
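The "precision at 20% recall" metric reported above can be computed by ranking predictions by score and reading off precision at the threshold where recall first reaches the target. The sketch below is a generic illustration with invented scores and labels, not the evaluation code used in the study.

```python
def precision_at_recall(scores, labels, target_recall=0.20):
    """Scan predictions from highest score down; return precision (tp/rank)
    at the first rank where recall (tp/total positives) reaches the target."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    total_pos = sum(labels)
    tp = 0
    for rank, i in enumerate(order, start=1):
        tp += labels[i]
        if tp / total_pos >= target_recall:
            return tp / rank
    return 0.0

# Invented toy data: 10 gene-function predictions, 5 true positives.
scores = [0.95, 0.90, 0.80, 0.70, 0.60, 0.50, 0.40, 0.30, 0.20, 0.10]
truth  = [1,    1,    0,    1,    0,    0,    1,    0,    1,    0]
p_at_20 = precision_at_recall(scores, truth, 0.20)
p_at_60 = precision_at_recall(scores, truth, 0.60)
```

Reporting precision at a fixed, modest recall (here 20%) reflects the intended use: focusing limited experimental resources on the most confident predictions rather than covering every gene.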
Status and outlook of CHIP-TRAP: The Central Michigan University high precision Penning trap
NASA Astrophysics Data System (ADS)
Redshaw, M.; Bryce, R. A.; Hawks, P.; Gamage, N. D.; Hunt, C.; Kandegedara, R. M. E. B.; Ratnayake, I. S.; Sharp, L.
2016-06-01
At Central Michigan University we are developing a high-precision Penning trap mass spectrometer (CHIP-TRAP) that will focus on measurements with long-lived radioactive isotopes. CHIP-TRAP will consist of a pair of hyperbolic precision-measurement Penning traps, and a cylindrical capture/filter trap in a 12 T magnetic field. Ions will be produced by external ion sources, including a laser ablation source, and transported to the capture trap at low energies enabling ions of a given m / q ratio to be selected via their time-of-flight. In the capture trap, contaminant ions will be removed with a mass-selective rf dipole excitation and the ion of interest will be transported to the measurement traps. A phase-sensitive image charge detection technique will be used for simultaneous cyclotron frequency measurements on single ions in the two precision traps, resulting in a reduction in statistical uncertainty due to magnetic field fluctuations.
Conley, Stephen; Faloona, Ian; Mehrotra, Shobhit; ...
2017-09-13
Airborne estimates of greenhouse gas emissions are becoming more prevalent with the advent of rapid commercial development of trace gas instrumentation featuring increased measurement accuracy, precision, and frequency, and the swelling interest in the verification of current emission inventories. Multiple airborne studies have indicated that emission inventories may underestimate some hydrocarbon emission sources in US oil- and gas-producing basins. Consequently, a proper assessment of the accuracy of these airborne methods is crucial to interpreting the meaning of such discrepancies. We present a new method of sampling surface sources of any trace gas for which fast and precise measurements can be made and apply it to methane, ethane, and carbon dioxide on spatial scales of ~1000 m, where consecutive loops are flown around a targeted source region at multiple altitudes. Using Reynolds decomposition for the scalar concentrations, along with Gauss's theorem, we show that the method accurately accounts for the smaller-scale turbulent dispersion of the local plume, which is often ignored in other average mass balance methods. With the help of large eddy simulations (LES) we further show how the circling radius can be optimized for the micrometeorological conditions encountered during any flight. Furthermore, by sampling controlled releases of methane and ethane on the ground we can ascertain that the accuracy of the method, in appropriate meteorological conditions, is often better than 10 %, with limits of detection below 5 kg h-1 for both methane and ethane. Because of the FAA-mandated minimum flight safe altitude of 150 m, placement of the aircraft is critical to preventing a large portion of the emission plume from flowing underneath the lowest aircraft sampling altitude, which is generally the leading source of uncertainty in these measurements. Finally, we show how the accuracy of the method is strongly dependent on the number of sampling loops and/or time spent sampling the source plume.
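The flux estimate described above, Reynolds decomposition plus Gauss's theorem, amounts to summing the outward advective transport of the concentration perturbation through a closed surface around the source. The sketch below is a heavily simplified discrete version with a uniform wind and an invented synthetic plume; it is not the authors' algorithm, which also handles the turbulent flux terms and variable winds.

```python
import math

def loop_emission_rate(samples, wind, background, dz, circle_radius, n_seg):
    """Discrete divergence-theorem estimate: sum the outward advective flux
    of the concentration perturbation c' = c - background through a cylinder
    around the source. samples[k][j] = concentration (kg/m^3) at altitude
    level k, loop segment j; wind = (u, v) in m/s, assumed uniform."""
    dl = 2 * math.pi * circle_radius / n_seg
    q = 0.0
    for level in samples:
        for j, c in enumerate(level):
            theta = 2 * math.pi * j / n_seg
            normal = (math.cos(theta), math.sin(theta))      # outward unit normal
            u_n = wind[0] * normal[0] + wind[1] * normal[1]  # normal wind component
            q += (c - background) * u_n * dz * dl
    return q  # kg/s

# Invented synthetic plume: excess concentration only on the downwind
# (+x) half of a single 1 km-radius loop, one 50 m altitude bin.
n_seg, radius, dz = 36, 1000.0, 50.0
wind = (5.0, 0.0)          # m/s, blowing toward +x
bg = 1.9e-6                # background concentration, kg/m^3
level = [bg + (2.0e-8 * math.cos(2 * math.pi * j / n_seg)
               if math.cos(2 * math.pi * j / n_seg) > 0 else 0.0)
         for j in range(n_seg)]
q_est = loop_emission_rate([level], wind, bg, dz, radius, n_seg)
```

Because the background concentration advects in on the upwind side and out on the downwind side in equal measure, only the perturbation c' contributes, which is the point of the Reynolds decomposition.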
NASA Astrophysics Data System (ADS)
Conley, Stephen; Faloona, Ian; Mehrotra, Shobhit; Suard, Maxime; Lenschow, Donald H.; Sweeney, Colm; Herndon, Scott; Schwietzke, Stefan; Pétron, Gabrielle; Pifer, Justin; Kort, Eric A.; Schnell, Russell
2017-09-01
Airborne estimates of greenhouse gas emissions are becoming more prevalent with the advent of rapid commercial development of trace gas instrumentation featuring increased measurement accuracy, precision, and frequency, and the swelling interest in the verification of current emission inventories. Multiple airborne studies have indicated that emission inventories may underestimate some hydrocarbon emission sources in US oil- and gas-producing basins. Consequently, a proper assessment of the accuracy of these airborne methods is crucial to interpreting the meaning of such discrepancies. We present a new method of sampling surface sources of any trace gas for which fast and precise measurements can be made and apply it to methane, ethane, and carbon dioxide on spatial scales of ˜ 1000 m, where consecutive loops are flown around a targeted source region at multiple altitudes. Using Reynolds decomposition for the scalar concentrations, along with Gauss's theorem, we show that the method accurately accounts for the smaller-scale turbulent dispersion of the local plume, which is often ignored in other average mass balance
methods. With the help of large eddy simulations (LES) we further show how the circling radius can be optimized for the micrometeorological conditions encountered during any flight. Furthermore, by sampling controlled releases of methane and ethane on the ground we can ascertain that the accuracy of the method, in appropriate meteorological conditions, is often better than 10 %, with limits of detection below 5 kg h-1 for both methane and ethane. Because of the FAA-mandated minimum flight safe altitude of 150 m, placement of the aircraft is critical to preventing a large portion of the emission plume from flowing underneath the lowest aircraft sampling altitude, which is generally the leading source of uncertainty in these measurements. Finally, we show how the accuracy of the method is strongly dependent on the number of sampling loops and/or time spent sampling the source plume.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Henderson, Calen B., E-mail: henderson@astronomy.ohio-state.edu
2015-02-10
I investigate the possibility of constraining the flux of the lens (i.e., host star) for the types of planetary systems the Korean Microlensing Telescope Network is predicted to find. I examine the potential to obtain lens flux measurements by (1) imaging the lens once it is spatially resolved from the source, (2) measuring the elongation of the point-spread function of the microlensing target (lens+source) when the lens and source are still unresolved, and (3) taking prompt follow-up photometry. In each case I simulate the observing programs for a representative example of current ground-based adaptive optics (AO) facilities (specifically NACO on the Very Large Telescope), future ground-based AO facilities (GMTIFS on the Giant Magellan Telescope, GMT), and future space telescopes (NIRCAM on the James Webb Space Telescope, JWST). Given the predicted distribution of relative lens-source proper motions, I find that the lens flux could be measured to a precision of σ_{H_ℓ} ≤ 0.1 for ≳60% of planet detections ≥5 yr after each microlensing event for a simulated observing program using GMT, which images resolved lenses. NIRCAM on JWST would be able to carry out equivalently high-precision measurements for ∼28% of events Δt = 10 yr after each event by imaging resolved lenses. I also explore the effects various blend components would have on the mass derived from prompt follow-up photometry, including companions to the lens, companions to the source, and unassociated interloping stars. I find that undetected blend stars would cause catastrophic failures (i.e., >50% fractional uncertainty in the inferred lens mass) for ≲(16·f_bin)% of planet detections, where f_bin is the binary fraction, with the majority of these failures occurring for host stars with mass ≲0.3 M_☉.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conley, Stephen; Faloona, Ian; Mehrotra, Shobhit
Airborne estimates of greenhouse gas emissions are becoming more prevalent with the advent of rapid commercial development of trace gas instrumentation featuring increased measurement accuracy, precision, and frequency, and the swelling interest in the verification of current emission inventories. Multiple airborne studies have indicated that emission inventories may underestimate some hydrocarbon emission sources in US oil- and gas-producing basins. Consequently, a proper assessment of the accuracy of these airborne methods is crucial to interpreting the meaning of such discrepancies. We present a new method of sampling surface sources of any trace gas for which fast and precise measurements can be mademore » and apply it to methane, ethane, and carbon dioxide on spatial scales of ~1000 m, where consecutive loops are flown around a targeted source region at multiple altitudes. Using Reynolds decomposition for the scalar concentrations, along with Gauss's theorem, we show that the method accurately accounts for the smaller-scale turbulent dispersion of the local plume, which is often ignored in other average mass balance methods. With the help of large eddy simulations (LES) we further show how the circling radius can be optimized for the micrometeorological conditions encountered during any flight. Furthermore, by sampling controlled releases of methane and ethane on the ground we can ascertain that the accuracy of the method, in appropriate meteorological conditions, is often better than 10 %, with limits of detection below 5 kg h -1 for both methane and ethane. Because of the FAA-mandated minimum flight safe altitude of 150 m, placement of the aircraft is critical to preventing a large portion of the emission plume from flowing underneath the lowest aircraft sampling altitude, which is generally the leading source of uncertainty in these measurements. 
Finally, we show how the accuracy of the method is strongly dependent on the number of sampling loops and/or time spent sampling the source plume.
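The loop-integral mass balance described above lends itself to a compact numerical sketch. The function below is an illustrative discretization of Gauss's theorem over the wall of the virtual cylinder traced by the stacked flight loops; the function name, argument names, and the simple background subtraction are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def cylinder_flux(conc, u_normal, seg_len, layer_dz, background):
    """Illustrative sketch: net outward trace-gas flux through the wall of a
    virtual cylinder traced by stacked flight loops (Gauss's theorem).
    conc, u_normal : per-segment concentration (kg m^-3) and outward-normal
    wind component (m s^-1); seg_len, layer_dz : segment length and layer
    height (m); background : upwind concentration subtracted as baseline.
    Returns an emission-rate estimate in kg s^-1."""
    c_excess = np.asarray(conc) - background      # enhancement above background
    return float(np.sum(c_excess * np.asarray(u_normal) * seg_len * layer_dz))
```

In practice the Reynolds decomposition in the paper also retains the turbulent covariance terms that this baseline-subtraction sketch folds together.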
Personalized medicine and chronic obstructive pulmonary disease.
Wouters, E F M; Wouters, B B R A F; Augustin, I M L; Franssen, F M E
2017-05-01
The current review summarizes ongoing developments in personalized medicine and precision medicine in chronic obstructive pulmonary disease (COPD). Our current approach is far from personalized management algorithms, as current recommendations for COPD are largely based on a reductionist disease description, operationally defined by the results of spirometry. Besides precision medicine developments, a personalized medicine approach in COPD is described based on a holistic view of the patient, considering illness as the consequence of dynamic interactions within and between multiple interacting and self-adjusting systems. Pulmonary rehabilitation is described as a model of personalized medicine. Largely based on the current understanding of inflammatory processes in COPD, targeted interventions in COPD are reviewed. Augmentation therapy for α-1-antitrypsin deficiency is described as a model of precision medicine in COPD based on a profound understanding of the related genetic endotype. Future developments of precision medicine in COPD require identification of relevant endotypes combined with proper identification of the phenotypes involved in the complex and heterogeneous manifestations of COPD.
Urine biomarkers in the early stages of diseases: current status and perspective.
Jing, Jian; Gao, Youhe
2018-02-01
As a noninvasive and easily available biological fluid, the urine is becoming an important source for disease biomarker study. Change is essential for the usefulness of a biomarker. Without homeostasis mechanisms, urine can accommodate more changes, especially in the early stages of diseases. In this review, we summarize current status and discuss perspectives on the discovery of urine biomarkers in the early stages of diseases. We emphasize the advantages of urine biomarkers compared to plasma biomarkers for the diagnosis of diseases at early stages, propose a urine biomarker research roadmap, and highlight a novel membrane storage technique that enables large-scale urine sample collection and storage efficiently and economically. It is anticipated that urine biomarker studies will greatly promote early diagnosis, prevention, treatment, and prognosis of a variety of diseases, and provide strong support for translational and precision medicine.
UNDULATOR-BASED LASER WAKEFIELD ACCELERATOR ELECTRON BEAM DIAGNOSTIC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bakeman, M.S.; Fawley, W.M.; Leemans, W. P.
to couple the THUNDER undulator to the LOASIS Lawrence Berkeley National Laboratory (LBNL) laser wakefield accelerator (LWFA). Currently the LWFA has achieved quasi-monoenergetic electron beams with energies up to 1 GeV. These ultra-short, high-peak-current electron beams are ideal for driving a compact XUV free electron laser (FEL). Understanding the electron beam properties such as the energy spread and emittance is critical for achieving high quality light sources with high brightness. By using an insertion device such as an undulator and observing changes in the spontaneous emission spectrum, the electron beam energy spread and emittance can be measured with high precision. The initial experiments will use spontaneous emission from 1.5 m of undulator. Later experiments will use up to 5 m of undulator with a goal of a high gain, XUV FEL.
Precision disablement aiming system
Monda, Mark J.; Hobart, Clinton G.; Gladwell, Thomas Scott
2016-02-16
A disrupter to a target may be precisely aimed by positioning a radiation source to direct radiation towards the target, and a detector is positioned to detect radiation that passes through the target. An aiming device is positioned between the radiation source and the target, wherein a mechanical feature of the aiming device is superimposed on the target in a captured radiographic image. The location of the aiming device in the radiographic image is used to aim a disrupter towards the target.
Inversion of Acoustic and Electromagnetic Recordings for Mapping Current Flow in Lightning Strikes
NASA Astrophysics Data System (ADS)
Anderson, J.; Johnson, J.; Arechiga, R. O.; Thomas, R. J.
2012-12-01
Acoustic recordings can be used to map current-carrying conduits in lightning strikes. Unlike stepped leaders, whose very high frequency (VHF) radio emissions have short (meter-scale) wavelengths and can be located by lightning-mapping arrays, current pulses emit longer (kilometer-scale) waves and cannot be mapped precisely by electromagnetic observations alone. While current pulses are constrained to conductive channels created by stepped leaders, these leaders often branch as they propagate, and most branches fail to carry current. Here, we present a method to use thunder recordings to map current pulses, and we apply it to acoustic and VHF data recorded in 2009 in the Magdalena mountains in central New Mexico, USA. Thunder is produced by rapid heating and expansion of the atmosphere along conductive channels in response to current flow, and therefore can be used to recover the geometry of the current-carrying channel. Toward this goal, we use VHF pulse maps to identify candidate conductive channels where we treat each channel as a superposition of finely-spaced acoustic point sources. We apply ray tracing in variable atmospheric structures to forward model the thunder that our microphone network would record for each candidate channel. Because multiple channels could potentially carry current, a non-linear inversion is performed to determine the acoustic source strength of each channel. For each combination of acoustic source strengths, synthetic thunder is modeled as a superposition of thunder signals produced by each channel, and a power envelope of this stack is then calculated. The inversion iteratively minimizes the misfit between power envelopes of recorded and modeled thunder. Because the atmospheric sound speed structure through which the waves propagate during these events is unknown, we repeat the procedure on many plausible atmospheres to find an optimal fit. 
We then determine the candidate channel, or channels, that minimizes residuals between synthetic and acoustic recordings. We demonstrate the usefulness of this method on both intracloud and cloud-to-ground strikes, and discuss factors affecting our ability to replicate recorded thunder.
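The channel-attribution step above, in which acoustic source strengths are assigned to candidate conductive channels, can be illustrated with a deliberately simplified linear sketch. The paper's inversion is non-linear (it compares power envelopes of superposed waveforms and iterates); the version below assumes, for illustration only, that the recorded power envelope is approximately a nonnegative linear combination of per-channel synthetic envelopes. All names are hypothetical.

```python
import numpy as np

def channel_strengths(envelopes, recorded):
    """Linearized sketch of the channel inversion: model the recorded
    thunder power envelope as a nonnegative combination of per-channel
    synthetic envelopes, solving by least squares and clipping negative
    strengths to zero. Channels assigned near-zero strength are the leader
    branches that failed to carry current.
    envelopes : (n_channels, n_samples); recorded : (n_samples,)."""
    A = np.asarray(envelopes, dtype=float).T      # (n_samples, n_channels)
    x, *_ = np.linalg.lstsq(A, np.asarray(recorded, dtype=float), rcond=None)
    return np.clip(x, 0.0, None)
```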
Kim, Do-Won; Lee, Seung-Hwan; Shim, Miseon; Im, Chang-Hwan
2017-01-01
Precise diagnosis of psychiatric diseases and a comprehensive assessment of a patient's symptom severity are important in order to establish a successful treatment strategy for each patient. Although great efforts have been devoted to searching for diagnostic biomarkers of schizophrenia over the past several decades, no study has yet investigated how accurately these biomarkers are able to estimate an individual patient's symptom severity. In this study, we applied electrophysiological biomarkers obtained from electroencephalography (EEG) analyses to an estimation of symptom severity scores of patients with schizophrenia. EEG signals were recorded from 23 patients while they performed a facial affect discrimination task. Based on the source current density analysis results, we extracted voxels that showed a strong correlation between source activity and symptom scores. We then built a prediction model to estimate the symptom severity scores of each patient using the source activations of the selected voxels. The symptom scores of the Positive and Negative Syndrome Scale (PANSS) were estimated using the linear prediction model. The results of leave-one-out cross validation (LOOCV) showed that the mean errors of the estimated symptom scores were 3.34 ± 2.40 and 3.90 ± 3.01 for the Positive and Negative PANSS scores, respectively. The current pilot study is the first attempt to estimate symptom severity scores in schizophrenia using quantitative EEG features. It is expected that the present method can be extended to other cognitive paradigms or other psychological illnesses.
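The leave-one-out cross validation (LOOCV) used above to evaluate the symptom-score prediction model can be sketched generically. This is a plain LOOCV of an ordinary-least-squares predictor, not the study's code; extracting features from EEG source activations is outside the scope of the sketch, and the function name is an assumption.

```python
import numpy as np

def loocv_errors(X, y):
    """Leave-one-out cross validation of a linear prediction model:
    for each subject, fit ordinary least squares (with intercept) on the
    remaining subjects and predict the held-out score.
    X : (n_subjects, n_features) feature matrix (e.g. source activations);
    y : (n_subjects,) symptom scores. Returns absolute prediction errors."""
    n = len(y)
    errs = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        A = np.column_stack([X[mask], np.ones(mask.sum())])  # add intercept
        coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
        pred = np.append(X[i], 1.0) @ coef                   # held-out prediction
        errs[i] = abs(pred - y[i])
    return errs
```

The paper's reported values (3.34 ± 2.40 and 3.90 ± 3.01 for Positive and Negative PANSS) correspond to the mean and spread of such per-subject errors.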
NASA Astrophysics Data System (ADS)
Choi, S. G.; Kim, S. H.; Choi, W. K.; Moon, G. C.; Lee, E. S.
2017-06-01
Shape memory alloy (SMA) is an important material in the medical and aerospace industries due to the shape memory effect, in which a deformed alloy recovers its original state through the application of temperature or stress. Modern applications demand dimensionally stable parts, and electrochemical machining is one method of meeting these requirements. Some SMA applications require fine patterns, for which electrochemical machining is well suited. For precision electrochemical machining with shaped electrodes, the current density must be controlled precisely, and the electrode shape must match the feature to be machined. Precise square holes can be obtained on the SMA if an insulation layer suppresses the unnecessary (stray) current between the electrode and the workpiece; controlling this stray current to obtain the desired shape would be a significant contribution to the medical and aerospace industries. With a square electrode lacking an insulation layer, the stray current produces inexact square holes, whereas electrodes insulated on the sides only produce precise square holes. The removal rate is also higher for the insulated electrode, because the insulation layer concentrates the applied current in the machining zone.
Huang, Yu; Parra, Lucas C.; Haufe, Stefan
2018-01-01
In source localization of electroencephalographic (EEG) signals, as well as in targeted transcranial electric current stimulation (tES), a volume conductor model is required to describe the flow of electric currents in the head. Boundary element models (BEM) can be readily computed to represent major tissue compartments, but cannot encode detailed anatomical information within compartments. Finite element models (FEM) can capture more tissue types and intricate anatomical structures, but with the higher precision also comes the need for semiautomated segmentation, and a higher computational cost. In either case, adjusting to the individual human anatomy requires costly magnetic resonance imaging (MRI), and thus head modeling is often based on the anatomy of an ‘arbitrary’ individual (e.g. Colin27). Additionally, existing reference models for the human head often do not include the cerebrospinal fluid (CSF), and their field of view excludes portions of the head and neck—two factors that demonstrably affect current-flow patterns. Here we present a highly detailed FEM, which we call ICBM-NY, or “New York Head”. It is based on the ICBM152 anatomical template (a non-linear average of the MRI of 152 adult human brains) defined in MNI coordinates, for which we extended the field of view to the neck and performed a detailed segmentation of six tissue types (scalp, skull, CSF, gray matter, white matter, air cavities) at 0.5 mm³ resolution. The model was solved for 231 electrode locations. To evaluate its performance, additional FEMs and BEMs were constructed for four individual subjects. Each of the four individual FEMs (regarded as the ‘ground truth’) is compared to its BEM counterpart, the ICBM-NY, a BEM of the ICBM anatomy, an ‘individualized’ BEM of the ICBM anatomy warped to the individual head surface, and FEMs of the other individuals. Performance is measured in terms of EEG source localization and tES targeting errors. 
Results show that the ICBM-NY outperforms FEMs of mismatched individual anatomies as well as the BEM of the ICBM anatomy according to both criteria. We therefore propose the New York Head as a new standard head model to be used in future EEG and tES studies whenever an individual MRI is not available. We release all model data online at neuralengr.com/nyhead/ to facilitate broad adoption. PMID:26706450
Huang, Yu; Parra, Lucas C; Haufe, Stefan
2016-10-15
In source localization of electroencephalographic (EEG) signals, as well as in targeted transcranial electric current stimulation (tES), a volume conductor model is required to describe the flow of electric currents in the head. Boundary element models (BEM) can be readily computed to represent major tissue compartments, but cannot encode detailed anatomical information within compartments. Finite element models (FEM) can capture more tissue types and intricate anatomical structures, but with the higher precision also comes the need for semi-automated segmentation, and a higher computational cost. In either case, adjusting to the individual human anatomy requires costly magnetic resonance imaging (MRI), and thus head modeling is often based on the anatomy of an 'arbitrary' individual (e.g. Colin27). Additionally, existing reference models for the human head often do not include the cerebrospinal fluid (CSF), and their field of view excludes portions of the head and neck—two factors that demonstrably affect current-flow patterns. Here we present a highly detailed FEM, which we call ICBM-NY, or "New York Head". It is based on the ICBM152 anatomical template (a non-linear average of the MRI of 152 adult human brains) defined in MNI coordinates, for which we extended the field of view to the neck and performed a detailed segmentation of six tissue types (scalp, skull, CSF, gray matter, white matter, air cavities) at 0.5 mm³ resolution. The model was solved for 231 electrode locations. To evaluate its performance, additional FEMs and BEMs were constructed for four individual subjects. Each of the four individual FEMs (regarded as the 'ground truth') is compared to its BEM counterpart, the ICBM-NY, a BEM of the ICBM anatomy, an 'individualized' BEM of the ICBM anatomy warped to the individual head surface, and FEMs of the other individuals. Performance is measured in terms of EEG source localization and tES targeting errors. 
Results show that the ICBM-NY outperforms FEMs of mismatched individual anatomies as well as the BEM of the ICBM anatomy according to both criteria. We therefore propose the New York Head as a new standard head model to be used in future EEG and tES studies whenever an individual MRI is not available. We release all model data online at neuralengr.com/nyhead/ to facilitate broad adoption. Published by Elsevier Inc.
Precision Medicine in Gastrointestinal Pathology.
Wang, David H; Park, Jason Y
2016-05-01
Precision medicine is the promise of individualized therapy and management of patients based on their personal biology. There are now multiple global initiatives to perform whole-genome sequencing on millions of individuals. In the United States, an early program was the Million Veteran Program, and a more recent proposal in 2015 by the president of the United States is the Precision Medicine Initiative. To implement precision medicine in routine oncology care, genetic variants present in tumors need to be matched with effective clinical therapeutics. When we focus on the current state of precision medicine for gastrointestinal malignancies, it becomes apparent that there is a mixed history of success and failure. The objective of this review is to present the current state of precision medicine using gastrointestinal oncology as a model: currently available targeted therapeutics, promising new findings in clinical genomic oncology, remaining quality issues in genomic testing, and emerging oncology clinical trial designs, drawing on the literature of clinical genomic studies on gastrointestinal malignancies and clinical oncology trials of therapeutics targeted to molecular alterations. Translating our ability to sequence thousands of genes into meaningful improvements in patient survival will be the challenge for the next decade.
Bloomgarden, Z T; Inzucchi, S E; Karnieli, E; Le Roith, D
2008-07-01
The proposed use of a more precise standard for glycated (A(1c)) and non-glycated haemoglobin would lead to an A(1c) value, when expressed as a percentage, that is lower than that currently in use. One approach advocated to address the potential confusion that would ensue is to replace 'HbA(1c)' with a new term, 'A(1c)-derived average glucose.' We review evidence from several sources suggesting that A(1c) is, in fact, inherently imprecise as a measure of average glucose, so that the proposed terminology should not be adopted.
Toward Millimagnitude Photometric Calibration (Abstract)
NASA Astrophysics Data System (ADS)
Dose, E.
2014-12-01
(Abstract only) Asteroid rotation, exoplanet transits, and similar measurements will increasingly call for photometric precisions better than about 10 millimagnitudes, often between nights and ideally between distant observers. The present work applies detailed spectral simulations to test popular photometric calibration practices, and to test new extensions of these practices. Using 107 synthetic spectra of stars of diverse colors, detailed atmospheric transmission spectra computed by solar-energy software, realistic spectra of popular astronomy gear, and the option of three sources of noise added at realistic millimagnitude levels, we find that certain adjustments to current calibration practices can help remove small systematic errors, especially for imperfect filters, high airmasses, and possibly passing thin cirrus clouds.
NEID Port Adapter: Design and Verification Plan
NASA Astrophysics Data System (ADS)
Logsdon, Sarah E.; McElwain, Michael; McElwain, Michael W.; Gong, Qian; Bender, Chad; Halverson, Samuel; Hearty, Fred; Hunting, Emily; Jaehnig, Kurt; Liang, Ming; Mahadevan, Suvrath; Monson, A. J.; Percival, Jeffrey; Rajagopal, Jayadev; Ramsey, Lawrence; Roy, Arpita; Santoro, Fernando; Schwab, Christian; Smith, Michael; Wolf, Marsha; Wright, Jason
2018-01-01
The NEID spectrograph is an optical (380-930 nm), fiber-fed, precision Doppler spectrograph currently in development for the 3.5 m WIYN Telescope at Kitt Peak National Observatory. Designed to achieve a radial velocity precision of <30 cm/s, NEID will be sensitive enough to detect terrestrial-mass exoplanets around low-mass stars. Light from the target stars is focused by the telescope to a bent-Cassegrain port at the edge of the primary mirror mechanical support. The specialized NEID “Port Adapter” system is mounted at this bent-Cassegrain port and is responsible for delivering the incident light from the telescope to the NEID fibers. In order to provide stable, high-quality images to the science instrument, the Port Adapter houses several subcomponents designed to acquire the target stars, correct for atmospheric dispersion, stabilize the light onto the science fibers, and calibrate the spectrograph by injecting known wavelength sources such as a laser frequency comb. Here we describe the overall design of the Port Adapter and outline the development of calibration tools and an on-sky test plan to verify the performance of the atmospheric dispersion corrector (ADC). We also discuss the development of an error budget and test requirements to ensure high-precision centroiding onto the NEID science fibers using a system of coherent fiber bundles.
The tracking analysis in the Q-weak experiment
NASA Astrophysics Data System (ADS)
Pan, J.; Androic, D.; Armstrong, D. S.; Asaturyan, A.; Averett, T.; Balewski, J.; Beaufait, J.; Beminiwattha, R. S.; Benesch, J.; Benmokhtar, F.; Birchall, J.; Carlini, R. D.; Cates, G. D.; Cornejo, J. C.; Covrig, S.; Dalton, M. M.; Davis, C. A.; Deconinck, W.; Diefenbach, J.; Dowd, J. F.; Dunne, J. A.; Dutta, D.; Duvall, W. S.; Elaasar, M.; Falk, W. R.; Finn, J. M.; Forest, T.; Gaskell, D.; Gericke, M. T. W.; Grames, J.; Gray, V. M.; Grimm, K.; Guo, F.; Hoskins, J. R.; Johnston, K.; Jones, D.; Jones, M.; Jones, R.; Kargiantoulakis, M.; King, P. M.; Korkmaz, E.; Kowalski, S.; Leacock, J.; Leckey, J.; Lee, A. R.; Lee, J. H.; Lee, L.; MacEwan, S.; Mack, D.; Magee, J. A.; Mahurin, R.; Mammei, J.; Martin, J. W.; McHugh, M. J.; Meekins, D.; Mei, J.; Michaels, R.; Micherdzinska, A.; Mkrtchyan, A.; Mkrtchyan, H.; Morgan, N.; Myers, K. E.; Narayan, A.; Ndukum, L. Z.; Nelyubin, V.; Nuruzzaman; van Oers, W. T. H.; Opper, A. K.; Page, S. A.; Pan, J.; Paschke, K. D.; Phillips, S. K.; Pitt, M. L.; Poelker, M.; Rajotte, J. F.; Ramsay, W. D.; Roche, J.; Sawatzky, B.; Seva, T.; Shabestari, M. H.; Silwal, R.; Simicevic, N.; Smith, G. R.; Solvignon, P.; Spayde, D. T.; Subedi, A.; Subedi, R.; Suleiman, R.; Tadevosyan, V.; Tobias, W. A.; Tvaskis, V.; Waidyawansa, B.; Wang, P.; Wells, S. P.; Wood, S. A.; Yang, S.; Young, R. D.; Zhamkochyan, S.
2016-12-01
The Q-weak experiment at Jefferson Laboratory measured the parity-violating asymmetry (A_PV) in elastic electron-proton scattering at small momentum transfer squared (Q² = 0.025 (GeV/c)²), with the aim of extracting the proton's weak charge (Q^p_W) to an accuracy of 5%. As one of the major sources of uncertainty in Q^p_W, Q² needs to be determined to ~1% so as to reach the proposed experimental precision. For this purpose, two sets of high-resolution tracking chambers were employed in the experiment to measure tracks before and after the magnetic spectrometer. Data collected by the tracking system were then reconstructed with dedicated software into individual electron trajectories for experimental kinematics determination. The Q-weak kinematics and the analysis scheme for tracking data are briefly described here. The sources that contribute to the uncertainty of Q² are discussed, and the current analysis status is reported.
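For context, Q² in elastic electron-proton scattering follows from standard kinematics. The sketch below is the textbook relation, not part of the Q-weak analysis software; the beam energy and scattering angle in the usage note are nominal, illustrative values rather than the experiment's exact settings.

```python
import math

def q_squared(E_gev, theta_deg, m_p=0.9383):
    """Four-momentum transfer squared for elastic electron-proton
    scattering (electron mass neglected): Q^2 = 4 E E' sin^2(theta/2),
    where elastic kinematics fix the scattered electron energy
    E' = E / (1 + (2E/M) sin^2(theta/2)). Energies in GeV, angle in degrees."""
    s2 = math.sin(math.radians(theta_deg) / 2.0) ** 2
    e_prime = E_gev / (1.0 + 2.0 * E_gev * s2 / m_p)
    return 4.0 * E_gev * e_prime * s2
```

With a nominal beam energy near 1.16 GeV and a forward scattering angle of roughly 8°, this relation gives a Q² close to the quoted 0.025 (GeV/c)².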
NASA Astrophysics Data System (ADS)
Kim-Hak, D.; Fleck, D.
2017-12-01
Natural gas analysis, and methane analysis specifically, has become increasingly important by virtue of methane's 28-36x greenhouse warming potential compared to CO2 and its contribution of 10% of total greenhouse gas emissions in the US alone. Additionally, large uncontrolled leaks, such as the recent one from Aliso Canyon in Southern California, originating from uncapped wells, storage facilities, and coal mines, have increased the total global contribution of methane emissions even further. Determining the specific fingerprint of methane sources by quantifying the ethane-to-methane (C2:C1) ratio provides a means to understand the processes yielding methane and allows sources of methane to be mapped and classified through these processes, i.e. biogenic or thermogenic, oil- vs. gas- vs. coal-gas-related. Here we present data obtained using a portable cavity ring-down spectrometry analyzer weighing less than 25 lbs and consuming less than 35 W that simultaneously measures methane and ethane in real time with raw 1-σ precisions of <30 ppb and <10 ppb, respectively, at <1 Hz. These precisions allow for a C2:C1 ratio 1-σ measurement of <0.1% above 10 ppm in a single measurement. Furthermore, a high-precision methane-only mode is available for surveying and locating leakage with a 1-σ precision of <3 ppb. Source discrimination data of local leaks and methane sources using this analysis method are presented. Additionally, two-dimensional plume snapshots are constructed using an integrated onboard GPS in order to visualize horizontal-plane gas propagation.
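The C2:C1 fingerprinting described above amounts to regressing ethane enhancements against methane enhancements; the slope is the plume's ethane-to-methane ratio. The function below is a hypothetical sketch of that computation, not the analyzer's firmware; the background values and variable names are illustrative assumptions.

```python
import numpy as np

def c2c1_ratio(ethane_ppb, methane_ppb,
               background_ch4_ppb=1900.0, background_c2h6_ppb=1.0):
    """Sketch of plume fingerprinting: regress the ethane enhancement
    against the methane enhancement above assumed ambient backgrounds.
    The least-squares slope through the origin is the C2:C1 ratio
    (near zero for biogenic sources, a few percent for thermogenic gas)."""
    dch4 = np.asarray(methane_ppb, dtype=float) - background_ch4_ppb
    dc2h6 = np.asarray(ethane_ppb, dtype=float) - background_c2h6_ppb
    return float(np.sum(dch4 * dc2h6) / np.sum(dch4 * dch4))
```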
Department of Defense Precise Time and Time Interval program improvement plan
NASA Technical Reports Server (NTRS)
Bowser, J. R.
1981-01-01
The United States Naval Observatory is responsible for ensuring uniformity in precise time and time interval (PTTI) operations, including measurements, the establishment of overall DOD requirements for time and time interval, and the accomplishment of objectives requiring precise time and time interval at minimum cost. An overview of the objectives, the approach to the problem, and the schedule is presented, along with a status report that includes significant findings on organizational relationships, current directives, principal PTTI users, and future requirements as currently identified by the users.
Methods for applying accurate digital PCR analysis on low copy DNA samples.
Whale, Alexandra S; Cowen, Simon; Foy, Carole A; Huggett, Jim F
2013-01-01
Digital PCR (dPCR) is a highly accurate molecular approach, capable of precise measurements, offering a number of unique opportunities. However, in its current format dPCR can be limited by the amount of sample that can be analysed and consequently additional considerations such as performing multiplex reactions or pre-amplification can be considered. This study investigated the impact of duplexing and pre-amplification on dPCR analysis by using three different assays targeting a model template (a portion of the Arabidopsis thaliana alcohol dehydrogenase gene). We also investigated the impact of different template types (linearised plasmid clone and more complex genomic DNA) on measurement precision using dPCR. We were able to demonstrate that duplex dPCR can provide a more precise measurement than uniplex dPCR, while applying pre-amplification or varying template type can significantly decrease the precision of dPCR. Furthermore, we also demonstrate that the pre-amplification step can introduce measurement bias that is not consistent between experiments for a sample or assay and so could not be compensated for during the analysis of this data set. We also describe a model for estimating the prevalence of molecular dropout and identify this as a source of dPCR imprecision. Our data have demonstrated that the precision afforded by dPCR at low sample concentration can exceed that of the same template post pre-amplification thereby negating the need for this additional step. Our findings also highlight the technical differences between different template types containing the same sequence that must be considered if plasmid DNA is to be used to assess or control for more complex templates like genomic DNA.
Methods for Applying Accurate Digital PCR Analysis on Low Copy DNA Samples
Whale, Alexandra S.; Cowen, Simon; Foy, Carole A.; Huggett, Jim F.
2013-01-01
Digital PCR (dPCR) is a highly accurate molecular approach, capable of precise measurements, offering a number of unique opportunities. However, in its current format dPCR can be limited by the amount of sample that can be analysed and consequently additional considerations such as performing multiplex reactions or pre-amplification can be considered. This study investigated the impact of duplexing and pre-amplification on dPCR analysis by using three different assays targeting a model template (a portion of the Arabidopsis thaliana alcohol dehydrogenase gene). We also investigated the impact of different template types (linearised plasmid clone and more complex genomic DNA) on measurement precision using dPCR. We were able to demonstrate that duplex dPCR can provide a more precise measurement than uniplex dPCR, while applying pre-amplification or varying template type can significantly decrease the precision of dPCR. Furthermore, we also demonstrate that the pre-amplification step can introduce measurement bias that is not consistent between experiments for a sample or assay and so could not be compensated for during the analysis of this data set. We also describe a model for estimating the prevalence of molecular dropout and identify this as a source of dPCR imprecision. Our data have demonstrated that the precision afforded by dPCR at low sample concentration can exceed that of the same template post pre-amplification thereby negating the need for this additional step. Our findings also highlight the technical differences between different template types containing the same sequence that must be considered if plasmid DNA is to be used to assess or control for more complex templates like genomic DNA. PMID:23472156
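Although not spelled out in the abstract, dPCR quantification rests on a Poisson correction: a partition reads negative only if it received zero target copies, so the mean copies per partition follows from the negative fraction. A minimal sketch of that relation, with hypothetical argument names:

```python
import math

def dpcr_copies_per_partition(n_total, n_negative):
    """Poisson correction used in digital PCR: if a fraction
    n_negative / n_total of partitions is negative, and negatives occur
    only when a partition received zero copies (P(0) = exp(-lambda)),
    then the mean copies per partition is lambda = -ln(n_negative / n_total)."""
    return -math.log(n_negative / n_total)
```

At low concentrations the negative fraction approaches one and the statistical precision of lambda degrades, which is the regime the study probes.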
Precision CW laser automatic tracking system investigated
NASA Technical Reports Server (NTRS)
Lang, K. T.; Lucy, R. F.; Mcgann, E. J.; Peters, C. J.
1966-01-01
A precision laser tracker capable of tracking a low-acceleration target to an accuracy of about 20 microradians rms is being constructed and tested. This laser tracker has the advantage of discriminating against other optical sources and the capability of simultaneously measuring range.
Precision Mass Property Measurements Using a Five-Wire Torsion Pendulum
NASA Technical Reports Server (NTRS)
Swank, Aaron J.
2012-01-01
A method for measuring the moment of inertia of an object using a five-wire torsion pendulum design is described here. Typical moment of inertia measurement devices are capable of 1 part in 10³ accuracy, and current state-of-the-art techniques have capabilities of about one part in 10⁴. The five-wire apparatus design shows the prospect of improving on the current state of the art. Current measurements using a laboratory prototype indicate a moment of inertia measurement precision better than a part in 10⁴. In addition, the apparatus is shown to be capable of measuring the mass center offset from the geometric center. Typical mass center measurement devices exhibit a measurement precision up to approximately 1 micrometer. Although the five-wire pendulum was not originally designed for mass center measurements, preliminary results indicate an apparatus with a similar design may have the potential of achieving state-of-the-art precision.
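The physics underlying any torsion-pendulum inertia measurement is the relation T = 2π·sqrt(I/κ). The sketch below simply inverts that basic single-fiber relation; the actual five-wire apparatus uses a more elaborate geometry and calibration, so this is an illustration of the measurement principle only, with hypothetical names.

```python
import math

def moment_of_inertia(period_s, kappa):
    """For a simple torsion pendulum of torsional stiffness kappa (N*m/rad),
    the oscillation period is T = 2*pi*sqrt(I/kappa), so a measured period
    yields the moment of inertia of the suspended object:
    I = kappa * T**2 / (4 * pi**2), in kg*m^2."""
    return kappa * period_s**2 / (4.0 * math.pi**2)
```

Since I scales with T², a fractional period uncertainty dT/T translates to roughly 2·dT/T in I, which is why timing precision dominates such measurements.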
A research on the positioning technology of vehicle navigation system from single source to "ASPN"
NASA Astrophysics Data System (ADS)
Zhang, Jing; Li, Haizhou; Chen, Yu; Chen, Hongyue; Sun, Qian
2017-10-01
Due to the suddenness and complexity of modern warfare, land-based weapon systems need precision strike capability on roads and railways. The vehicle navigation system is one of the most important pieces of equipment for land-based weapon systems with precision strike capability. Single-source navigation systems have inherent shortcomings in providing continuous and stable navigation information. To overcome these shortcomings, multi-source positioning technology has been developed. The All Source Positioning and Navigation (ASPN) program, proposed in 2010, seeks to enable low-cost, robust, and seamless navigation solutions for military use on any operational platform and in any environment, with or without GPS. The development trend of vehicle positioning technology is reviewed in this paper. The trend indicates that positioning technology has developed from single-source and multi-source approaches toward ASPN. The data fusion techniques based on multi-source positioning and ASPN are analyzed in detail.
Into the deep: Evaluation of SourceTracker for assessment of faecal contamination of coastal waters.
Henry, Rebekah; Schang, Christelle; Coutts, Scott; Kolotelo, Peter; Prosser, Toby; Crosbie, Nick; Grant, Trish; Cottam, Darren; O'Brien, Peter; Deletic, Ana; McCarthy, David
2016-04-15
Faecal contamination of recreational waters is an increasing global health concern. Tracing the source of the contaminant is a vital step towards mitigation and disease prevention. Total 16S rRNA amplicon data for a specific environment (faeces, water, soil) and computational tools such as the Markov-Chain Monte Carlo based SourceTracker can be applied to microbial source tracking (MST) and attribution studies. The current study applied artificial and in-laboratory derived bacterial communities to define the potential and limitations associated with the use of SourceTracker, prior to its application for faecal source tracking at three recreational beaches near Port Phillip Bay (Victoria, Australia). The results demonstrated that at minimum multiple model runs of the SourceTracker modelling tool (i.e. technical replicates) were required to identify potential false positive predictions. The calculation of relative standard deviations (RSDs) for each attributed source improved overall predictive confidence in the results. In general, default parameter settings provided high sensitivity, specificity, accuracy and precision. Application of SourceTracker to recreational beach samples identified treated effluent as major source of human-derived faecal contamination, present in 69% of samples. Site-specific sources, such as raw sewage, stormwater and bacterial populations associated with the Yarra River estuary were also identified. Rainfall and associated sand resuspension at each location correlated with observed human faecal indicators. The results of the optimised SourceTracker analysis suggests that local sources of contamination have the greatest effect on recreational coastal water quality. Copyright © 2016 Elsevier Ltd. All rights reserved.
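The replicate-based confidence measure described above, the relative standard deviation of each source's attributed proportion across repeated SourceTracker runs, can be sketched in a few lines. The function name and input layout are assumptions for illustration; SourceTracker itself is a Markov-Chain Monte Carlo tool and produces the per-run attributions this sketch consumes.

```python
import numpy as np

def replicate_rsd(attributions):
    """Relative standard deviation (RSD, %) of each source's attributed
    proportion across technical replicates (repeated SourceTracker runs).
    attributions : array-like of shape (n_runs, n_sources). A high RSD
    flags a source whose attribution is unstable between runs, i.e. a
    likely false-positive prediction."""
    a = np.asarray(attributions, dtype=float)
    return 100.0 * a.std(axis=0, ddof=1) / a.mean(axis=0)
```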
On the role of differenced phase-delays in high-precision wide-field multi-source astrometry
NASA Astrophysics Data System (ADS)
Martí-Vidal, I.; Marcaide, J. M.; Guirado, J. C.
2007-07-01
Phase-delay is, by far, the most precise observable used in interferometry. In typical very-long-baseline interferometry (VLBI) observations, the uncertainties of the phase delays can be about 100 times smaller than those of the group delays. However, the phase delays have an important handicap: they are ambiguous, since they are computed from the relative phases of the signals at the different antennas, and an indeterminate number of complete 2π cycles can be added to those phases leaving them unchanged. There are different approaches to solving the ambiguity problem of the phase delays (Shapiro et al., 1979; Beasley & Conway, 1995), but none of them has ever been used in observations involving more than 2-3 sources. In this contribution, we report on the first wide-field multi-source astrometric analysis performed on a complete set of radio sources using the phase-delay observable. The target of our analysis is the S5 polar cap sample, consisting of 13 bright ICRF sources near the North Celestial Pole. We have developed new algorithms and updated existing software to correct, in an automatic way, the ambiguities of the phase delays and, therefore, perform a phase-delay astrometric analysis of all the sources in the sample. We also discuss the impact of the use of phase delays on the astrometric precision.
NASA Astrophysics Data System (ADS)
Morgenthaler, George; Khatib, Nader; Kim, Byoungsoo
with information to improve their crop's vigor has been a major topic of interest. With world population growing exponentially, arable land being consumed by urbanization, and an unfavorable farm economy, the efficiency of farming must increase to meet future food requirements and to make farming a sustainable occupation for the farmer. "Precision Agriculture" refers to a farming methodology that applies nutrients and moisture only where and when they are needed in the field. The goal is to increase farm revenue by increasing crop yield and decreasing applications of costly chemical and water treatments. In addition, this methodology will decrease the environmental costs of farming, i.e., reduce air, soil, and water pollution. Remote Sensing/Precision Agriculture has not grown as rapidly as early advocates envisioned. Technology for a successful Remote Sensing/Precision Agriculture system is now available. Commercial satellite systems can image (multi-spectral) the Earth with a resolution of approximately 2.5 m. Variable precision dispensing systems using GPS are available and affordable. Crop models that predict yield as a function of soil, chemical, and irrigation parameter levels have been formulated. Personal computers and internet access are in place in most farm homes and can provide a mechanism to periodically disseminate, e.g. bi-weekly, advice on what quantities of water and chemicals are needed in individual regions of the field. What is missing is a model that fuses the disparate sources of information on the current states of the crop and soil, and the remaining resource levels available with the decisions farmers are required to make. This must be a product that is easy for the farmer to understand and to implement. A "Constrained Optimization Feed-back Control Model" to fill this void will be presented. 
The objective function of the model will be used to maximize the farmer's profit by increasing yields while decreasing environmental costs and decreasing application of costly treatments. This model will incorporate information from remote sensing, in-situ weather sources, soil measurements, crop models, and tacit farmer knowledge of the relative productivity of the selected control regions of the farm to provide incremental advice throughout the growing season on water and chemical treatments. Genetic and meta-heuristic algorithms will be used to solve the constrained optimization problem that possesses complex constraints and a non-linear objective function.
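As a toy illustration of the kind of constrained, non-linear profit maximization this abstract describes, the sketch below uses a simple random-search meta-heuristic (a stand-in for the genetic algorithms mentioned) over water and chemical inputs under box constraints. The yield model, prices, and budgets are invented for illustration only.

```python
import random

def profit(x, price=3.0, water_cost=0.4, chem_cost=0.9):
    """Toy concave yield model: diminishing returns in water w and chemicals c."""
    w, c = x
    yield_est = 10.0 * (w ** 0.5) * (c ** 0.3)  # illustrative crop response
    return price * yield_est - water_cost * w - chem_cost * c

def random_search(budget_w=100.0, budget_c=50.0, iters=5000, seed=1):
    """Meta-heuristic sketch: random search inside the box constraints,
    keeping the best (water, chemicals) allocation found."""
    rng = random.Random(seed)
    best_x, best_f = None, float("-inf")
    for _ in range(iters):
        x = (rng.uniform(0.0, budget_w), rng.uniform(0.0, budget_c))
        f = profit(x)
        if f > best_f:
            best_x, best_f = x, f
    return best_x, best_f
```

In practice the objective would be driven by remote-sensing, weather, and crop-model inputs rather than a closed-form yield curve, and a genetic algorithm would replace the blind search.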
Yamashita, Tatsuya; Oida, Takenori; Hamada, Shoji; Kobayashi, Tetsuo
2012-02-01
In recent years, there has been considerable interest in developing an ultra-low-field magnetic resonance imaging (ULF-MRI) system using an optically pumped atomic magnetometer (OPAM). However, a precise estimation of the signal-to-noise ratio (SNR) of ULF-MRI has not been carried out. Conventionally, to calculate the SNR of an MR image, thermal noise, also called Nyquist noise, has been estimated by considering a resistor that is electrically equivalent to a biological-conductive sample and is connected in series to a pickup coil. However, this method has major limitations in that the receiver has to be a coil and that it cannot be applied directly to a system using OPAM. In this paper, we propose a method to estimate the thermal noise of an MRI system using OPAM. We calculate the thermal noise from the variance of the magnetic sensor output produced by current-dipole moments that simulate thermally fluctuating current sources in a biological sample. We assume that the random magnitude of the current dipole in each volume element of the biological sample is described by the Maxwell-Boltzmann distribution. The sensor output produced by each current-dipole moment is calculated either by an analytical formula or a numerical method based on the boundary element method. We validate the proposed method by comparing our results with those obtained by conventional methods that consider resistors connected in series to a pickup coil using single-layered sphere, multi-layered sphere, and realistic head models. Finally, we apply the proposed method to the ULF-MRI model using OPAM as the receiver with multi-layered sphere and realistic head models and estimate their SNR. Copyright © 2011 Elsevier Inc. All rights reserved.
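The core Monte Carlo idea in this abstract (estimating sensor thermal noise as the output variance produced by randomly fluctuating current dipoles, with Maxwell-Boltzmann distributed magnitudes) can be sketched as follows. The lead-field values and dipole scale below are placeholders; real use would compute the sensor sensitivities from a head model, e.g. via the boundary element method.

```python
import numpy as np

def sensor_noise_variance(lead_fields, sigma_p, n_samples=20000, seed=0):
    """Monte Carlo estimate of magnetometer output variance produced by
    thermally fluctuating current dipoles.

    lead_fields: (n_dipoles, 3) sensitivity of the sensor to a unit dipole
                 along x, y, z at each volume element.
    sigma_p:     scale of each Cartesian dipole component; drawing the three
                 components i.i.d. Gaussian makes the dipole magnitude follow
                 a Maxwell-Boltzmann distribution.
    """
    rng = np.random.default_rng(seed)
    L = np.asarray(lead_fields, dtype=float)
    # dipole components ~ N(0, sigma_p^2)
    p = rng.normal(0.0, sigma_p, size=(n_samples, *L.shape))
    outputs = np.einsum('nij,ij->n', p, L)  # sensor output per noise sample
    return outputs.var()
```

For this linear model the variance has the analytic value sigma_p² · Σ L², which gives a direct sanity check on the Monte Carlo estimate.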
NASA Technical Reports Server (NTRS)
Glenar, D.; Kostiuk, T.; Jennings, D. E.; Mumma, M. J.
1980-01-01
A diode laser based IR heterodyne spectrometer for laboratory and field use was developed for high efficiency operation between 7.5 and 8.5 microns. The local oscillator is a PbSSe tunable diode laser kept continuously at operating temperatures of 12-60 K using a closed cycle cooler. The laser output frequency is controlled and stabilized using a high precision diode current supply, constant temperature controller, and a shock isolator mounted between the refrigerator cold tip and the diode mount. Single laser modes are selected by a grating placed in the local oscillator beam. The system employs reflecting optics throughout to minimize losses from internal reflection and absorption, and to eliminate chromatic effects. Spectral analysis of the diode laser output between 0 and 1 GHz reveals excess noise at many diode current settings, which limits the infrared spectral regions over which useful heterodyne operation can be achieved. System performance has been studied by making heterodyne measurements of etalon fringes and several Freon 13 (CF3Cl) absorption lines against a laboratory blackbody source. Preliminary field tests have also been performed using the Sun as a source.
Pikin, A; Beebe, E N; Raparia, D
2013-03-01
Increasing the current density of the electron beam in the ion trap of the Electron Beam Ion Source (EBIS) in BNL's Relativistic Heavy Ion Collider facility would confer several essential benefits. They include increasing the ions' charge states, and therefore, the ions' energy out of the Booster for NASA applications, reducing the influx of residual ions in the ion trap, lowering the average power load on the electron collector, and possibly also reducing the emittance of the extracted ion beam. Here, we discuss our findings from a computer simulation of an electron gun with electrostatic compression for electron current up to 10 A that can deliver a high-current-density electron beam for EBIS. The magnetic field in the cathode-anode gap is formed with a magnetic shield surrounding the gun electrodes and the residual magnetic field on the cathode is (5 ÷ 6) Gs. It was demonstrated that for optimized gun geometry within the electron beam current range of (0.5 ÷ 10) A the amplitude of radial beam oscillations can be maintained close to 4% of the beam radius by adjusting the injection magnetic field generated by a separate magnetic coil. Simulating the performance of the gun by varying geometrical parameters indicated that the original gun model is close to optimum and the requirements to the precision of positioning the gun elements can be easily met with conventional technology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pikin, A.; Beebe, E. N.; Raparia, D.
Increasing the current density of the electron beam in the ion trap of the Electron Beam Ion Source (EBIS) in BNL's Relativistic Heavy Ion Collider facility would confer several essential benefits. They include increasing the ions' charge states, and therefore, the ions' energy out of the Booster for NASA applications, reducing the influx of residual ions in the ion trap, lowering the average power load on the electron collector, and possibly also reducing the emittance of the extracted ion beam. Here, we discuss our findings from a computer simulation of an electron gun with electrostatic compression for electron current up to 10 A that can deliver a high-current-density electron beam for EBIS. The magnetic field in the cathode-anode gap is formed with a magnetic shield surrounding the gun electrodes and the residual magnetic field on the cathode is (5 ÷ 6) Gs. It was demonstrated that for optimized gun geometry within the electron beam current range of (0.5 ÷ 10) A the amplitude of radial beam oscillations can be maintained close to 4% of the beam radius by adjusting the injection magnetic field generated by a separate magnetic coil. Simulating the performance of the gun by varying geometrical parameters indicated that the original gun model is close to optimum and the requirements to the precision of positioning the gun elements can be easily met with conventional technology.
Airborne and satellite remote sensors for precision agriculture
USDA-ARS?s Scientific Manuscript database
Remote sensing provides an important source of information to characterize soil and crop variability for both within-season and after-season management despite the availability of numerous ground-based soil and crop sensors. Remote sensing applications in precision agriculture have been steadily inc...
Current status and future trends of precision agricultural aviation technologies
USDA-ARS?s Scientific Manuscript database
Modern technologies and information tools can be used to maximize agricultural aviation productivity allowing for precision application of agrochemical products. This paper reviews and summarizes the state-of-the-art in precision agricultural aviation technology highlighting remote sensing, aerial s...
The parametrization of radio source coordinates in VLBI and its impact on the CRF
NASA Astrophysics Data System (ADS)
Karbon, Maria; Heinkelmann, Robert; Mora-Diaz, Julian; Xu, Minghui; Nilsson, Tobias; Schuh, Harald
2016-04-01
Usually celestial radio sources in the celestial reference frame (CRF) catalog are divided into three categories: defining, special handling, and others. The defining sources are those used for the datum realization of the celestial reference frame, i.e. they are included in the No-Net-Rotation (NNR) constraints to maintain the axis orientation of the CRF, and are modeled with one set of totally constant coordinates. At the current level of precision, the choice of the defining sources has a significant effect on the coordinates. For the ICRF2, 295 sources were chosen as defining sources, based on their geometrical distribution, statistical properties, and stability. The number of defining sources is a compromise between the reliability of the datum, which increases with the number of sources, and the noise which is introduced by each source. Thus, the optimal number of defining sources is a trade-off between reliability, geometry, and precision. In the ICRF2 only 39 sources were sorted into the special handling group, as they show large fluctuations in their position; they are therefore excluded from the NNR conditions and their positions are normally estimated for each VLBI session instead of as global parameters. However, a large fraction of these unstable sources show other favorable characteristics, e.g. large flux density (brightness) and a long history of observations. Thus, it would be advantageous to include these sources in the NNR condition, but the instability of these objects inhibits this. If the coordinate model of these sources were extended, it would be possible to use them for the NNR condition as well. All remaining sources are placed in the "others" group. This is the largest group of sources, containing those which have not shown any very problematic behavior but still do not fulfill the requirements for defining sources. 
Studies show that the behavior of each source can vary dramatically in time. Hence, each source would have to be modeled individually. Considering this, the sheer number of sources (more than 600 are included in our study) sets practical limitations. We decided to use the multivariate adaptive regression splines (MARS) procedure to parametrize the source coordinates, as it allows a great deal of automation by combining recursive partitioning and spline fitting in an optimal way. The algorithm finds the ideal knot positions for the splines and thus the best number of polynomial pieces to fit the data. We compare linear and cubic splines determined by MARS with manually determined linear splines and investigate their impact on the CRF. Within this work we try to answer the following questions: How can we find optimal criteria for the definition of the defining and unstable sources? What are the best polynomials for the individual categories? How much can we improve the CRF by extending the parametrization of the sources?
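The knot-finding step at the heart of MARS can be illustrated with a minimal single-knot search: for each candidate knot, fit a pair of hinge basis functions by least squares and keep the knot with the smallest residual. This is a didactic sketch of the recursive-partitioning idea, not the software used in the study; the example time series is invented.

```python
import numpy as np

def best_hinge_fit(t, y):
    """Greedy MARS-style step: scan candidate knots and keep the one whose
    hinge pair max(0, t-k), max(0, k-t) gives the least-squares fit with
    the smallest residual sum of squares."""
    best = (None, np.inf, None)
    for k in t[1:-1]:  # candidate knots at interior data sites
        X = np.column_stack([np.ones_like(t),
                             np.maximum(0.0, t - k),
                             np.maximum(0.0, k - t)])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = ((X @ coef - y) ** 2).sum()
        if rss < best[1]:
            best = (k, rss, coef)
    return best  # (knot, residual sum of squares, coefficients)

# A coordinate time series with a slope change at t = 5
t = np.linspace(0.0, 10.0, 41)
y = np.where(t < 5.0, 1.0 + 0.2 * t, 2.0 - 0.3 * (t - 5.0))
knot, rss, coef = best_hinge_fit(t, y)
```

MARS applies this kind of search recursively, which is what makes the automated parametrization of hundreds of sources practical.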
The prospects of pulsar timing with new-generation radio telescopes and the Square Kilometre Array
NASA Astrophysics Data System (ADS)
Stappers, B. W.; Keane, E. F.; Kramer, M.; Possenti, A.; Stairs, I. H.
2018-05-01
Pulsars are highly magnetized and rapidly rotating neutron stars. As they spin, the lighthouse-like beam of radio emission from their magnetic poles sweeps across the Earth with a regularity approaching that of the most precise clocks known. This precision combined with the extreme environments in which they are found, often in compact orbits with other neutron stars and white dwarfs, makes them excellent tools for studying gravity. Present and near-future pulsar surveys, especially those using the new generation of telescopes, will find more extreme binary systems and pulsars that are more precise `clocks'. These telescopes will also greatly improve the precision to which we can measure the arrival times of the pulses. The Square Kilometre Array will revolutionize pulsar searches and timing precision. The increased number of sources will reveal rare sources, including possibly a pulsar-black hole binary, which can provide the most stringent tests of strong-field gravity. The improved timing precision will reveal new phenomena and also allow us to make a detection of gravitational waves in the nanohertz frequency regime. It is here where we expect to see the signature of the binary black holes that are formed as galaxies merge throughout cosmological history. This article is part of a discussion meeting issue `The promises of gravitational-wave astronomy'.
Truss Assembly and Welding by Intelligent Precision Jigging Robots
NASA Technical Reports Server (NTRS)
Komendera, Erik; Dorsey, John T.; Doggett, William R.; Correll, Nikolaus
2014-01-01
This paper describes an Intelligent Precision Jigging Robot (IPJR) prototype that enables the precise alignment and welding of titanium space telescope optical benches. The IPJR, equipped with micron-accuracy sensors and actuators, worked in tandem with a lower precision remote-controlled manipulator. The combined system assembled and welded a 2 m truss from stock titanium components. The calibration of the IPJR, and the differences between the predicted and as-built truss dimensions, identified additional sources of error that should be addressed in the next generation of IPJRs in 2D and 3D.
Liu, Jen-Pei; Lu, Li-Tien; Liao, C T
2009-09-01
Intermediate precision is one of the most important characteristics in the evaluation of precision for assay validation. The current methods for evaluation of within-device precision recommended by the Clinical and Laboratory Standards Institute (CLSI) guideline EP5-A2 are based on point estimators. On the other hand, in addition to point estimators, confidence intervals can provide a range for the within-device precision with a probability statement. Therefore, we suggest a confidence interval approach for assessment of the within-device precision. Furthermore, under the two-stage nested random-effects model recommended by the approved CLSI guideline EP5-A2, in addition to the current Satterthwaite's approximation and the modified large sample (MLS) methods, we apply the technique of generalized pivotal quantities (GPQ) to derive the confidence interval for the within-device precision. Data from the approved CLSI guideline EP5-A2 illustrate the application of the confidence interval approach and a comparison of results between the three methods. Results of a simulation study on the coverage probability and expected length of the three methods are reported. The proposed method of GPQ-based confidence intervals is also extended to consider the between-laboratories variation for precision assessment.
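For a balanced run-by-replicate design, the Satterthwaite-style interval that this abstract takes as its baseline can be sketched as below. This is a generic nested-ANOVA construction, assuming SciPy for chi-square quantiles; it is not the paper's GPQ or MLS code, and the example data are invented.

```python
import numpy as np
from scipy.stats import chi2

def within_device_ci(data, alpha=0.05):
    """Within-device precision (SD) with a Satterthwaite chi-square
    confidence interval, for a balanced one-way layout:
    rows = runs, columns = replicates within run."""
    y = np.asarray(data, dtype=float)
    k, n = y.shape
    run_means = y.mean(axis=1)
    ms_within = ((y - run_means[:, None]) ** 2).sum() / (k * (n - 1))
    ms_between = n * ((run_means - y.mean()) ** 2).sum() / (k - 1)
    s2_run = max((ms_between - ms_within) / n, 0.0)  # between-run component
    s2_wd = s2_run + ms_within                       # within-device variance
    # Satterthwaite effective degrees of freedom for (1/n)MSB + ((n-1)/n)MSW
    num = (ms_between / n + (n - 1) / n * ms_within) ** 2
    den = ((ms_between / n) ** 2 / (k - 1)
           + ((n - 1) / n * ms_within) ** 2 / (k * (n - 1)))
    df = num / den
    lo = np.sqrt(df * s2_wd / chi2.ppf(1 - alpha / 2, df))
    hi = np.sqrt(df * s2_wd / chi2.ppf(alpha / 2, df))
    return np.sqrt(s2_wd), (lo, hi)
```

The interval reports a range rather than a bare point estimate, which is exactly the improvement the abstract argues for.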
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pan, Bo; Shibutani, Yoji, E-mail: sibutani@mech.eng.osaka-u.ac.jp; Zhang, Xu
2015-07-07
Recent research has explained that the steeply increasing yield strength in metals depends on decreasing sample size. In this work, we derive a statistical physical model of the yield strength of finite single-crystal micro-pillars that depends on single-ended dislocation pile-up inside the micro-pillars. We show that this size effect can be explained almost completely by considering the stochastic lengths of the dislocation source and the dislocation pile-up length in the single-crystal micro-pillars. The Hall–Petch-type relation holds even in a microscale single-crystal, which is characterized by its dislocation source lengths. Our quantitative conclusions suggest that the number of dislocation sources and pile-ups are significant factors for the size effect. They also indicate that starvation of dislocation sources is another reason for the size effect. Moreover, we investigated the explicit relationship between the stacking fault energy and the dislocation "pile-up" effect inside the sample: materials with low stacking fault energy exhibit an obvious dislocation pile-up effect. Our proposed physical model predicts a sample strength that agrees well with experimental data, and our model can give a more precise prediction than the current single arm source model, especially for materials with low stacking fault energy.
Non-solenoidal startup and low-β operations in Pegasus
NASA Astrophysics Data System (ADS)
Schlossberg, D. J.; Battaglia, D. J.; Bongard, M. W.; Fonck, R. J.; Redd, A. J.
2009-11-01
Non-solenoidal startup using point-source DC helicity injectors (plasma guns) has been achieved in the Pegasus Toroidal Experiment for plasmas with Ip in excess of 100 kA using Iinj < 4 A. The maximum achieved Ip tentatively scales as √(ITF·Iinj)/w, where w is the radial thickness of the gun-driven edge. The Ip limits appear to conform to a simple stationary model involving helicity conservation and Taylor relaxation. However, observed MHD activity reveals the additional dynamics of the relaxation process, evidenced by intermittent bursts of n=1 activity correlated with rapid redistribution of the current channel. Recent upgrades to the gun system provide higher helicity injection rates, smaller w, a more constrained gun current path, and more precise diagnostics. Experimental goals include extending parametric scaling studies, determining the conditions where parallel conduction losses dominate the helicity dissipation, and building the physics understanding of helicity injection to confidently design gun systems for larger, future tokamaks.
Quantifying Uncertainties in Navigation and Orbit Propagation Analyses
NASA Technical Reports Server (NTRS)
Krieger, Andrew W.; Welch, Bryan W.
2017-01-01
A tool used to calculate dilution of precision (DOP) was created in order to assist the Space Communications and Navigation (SCaN) program to analyze current and future user missions. The SCaN Center for Engineering, Networks, Integration, and Communication (SCENIC) is developing a new user interface (UI) to augment and replace the capabilities of currently used commercial software, such as Systems Tool Kit (STK). The DOP tool will be integrated in the SCENIC UI and will be used to analyze the accuracy of navigation solutions. This tool was developed using MATLAB and free and open-source tools to save cost and to use already existing orbital software libraries. GPS DOP data was collected and used for validation purposes. The similarities between the DOP tool results and GPS data show that the DOP tool is performing correctly. Additional improvements can be made in the DOP tool to improve its accuracy and performance in analyzing navigation solutions.
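The DOP computation such a tool performs can be sketched from first principles: build the unit line-of-sight geometry matrix H and read the DOP terms off the diagonal of (HᵀH)⁻¹. This is a generic GPS-textbook construction, not the SCENIC tool itself; the coordinates below are arbitrary illustrative positions.

```python
import numpy as np

def dilution_of_precision(sat_positions, receiver):
    """GDOP, PDOP, TDOP from satellite and receiver positions (same frame).

    Each row of H is [unit line-of-sight x, y, z, 1]; the DOP terms come
    from the diagonal of the covariance factor (H^T H)^-1.
    """
    sats = np.asarray(sat_positions, dtype=float)
    rx = np.asarray(receiver, dtype=float)
    los = sats - rx
    u = los / np.linalg.norm(los, axis=1, keepdims=True)
    H = np.hstack([u, np.ones((len(sats), 1))])
    Q = np.linalg.inv(H.T @ H)
    gdop = np.sqrt(np.trace(Q))
    pdop = np.sqrt(Q[0, 0] + Q[1, 1] + Q[2, 2])
    tdop = np.sqrt(Q[3, 3])
    return gdop, pdop, tdop
```

By construction GDOP² = PDOP² + TDOP², which provides a built-in consistency check when validating against recorded GPS DOP data.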
Possible Nuclear Safeguards Applications: Workshop on Next-Generation Laser Compton Gamma Source
DOE Office of Scientific and Technical Information (OSTI.GOV)
Durham, J. Matthew
2016-11-17
These are a set of slides for the development of a next-generation photon source white paper. The following topics are covered in these slides: Nuclear Safeguards; The Nuclear Fuel Cycle; Precise isotopic determination via NRF; UF6 Enrichment Assay; and Non-Destructive Assay of Spent Nuclear Fuel. In summary: A way to non-destructively measure precise isotopics of ~kg and larger samples has multiple uses in nuclear safeguards; Ideally this is a compact, fieldable device that can be used by international inspectors. Must be rugged and reliable; A next-generation source can be used as a testing ground for these techniques as technology develops.
Yang, Heewon; Han, Byungheon; Shin, Junho; Hou, Dong; Chung, Hayun; Baek, In Hyung; Jeong, Young Uk; Kim, Jungwon
2017-01-01
Ultrafast electron-based coherent radiation sources, such as free-electron lasers (FELs), ultrafast electron diffraction (UED) and Thomson-scattering sources, are becoming more important sources in today’s ultrafast science. The photocathode laser is an indispensable common subsystem of these sources, generating the ultrafast electron pulses. To fully exploit the potentials of these sources, especially for pump-probe experiments, it is important to achieve high-precision synchronization between the photocathode laser and radio-frequency (RF) sources that manipulate electron pulses. So far, most precision laser-RF synchronization has been achieved by using specially designed low-noise Er-fibre lasers at telecommunication wavelength. Here we show a modular method that achieves long-term (>1 day) stable 10-fs-level synchronization between a commercial 79.33-MHz Ti:sapphire laser oscillator and an S-band (2.856-GHz) RF oscillator. This is an important first step toward a photocathode laser-based femtosecond RF timing and synchronization system that is suitable for various small- to mid-scale ultrafast X-ray and electron sources. PMID:28067288
NASA Astrophysics Data System (ADS)
Yang, Heewon; Han, Byungheon; Shin, Junho; Hou, Dong; Chung, Hayun; Baek, In Hyung; Jeong, Young Uk; Kim, Jungwon
2017-01-01
Ultrafast electron-based coherent radiation sources, such as free-electron lasers (FELs), ultrafast electron diffraction (UED) and Thomson-scattering sources, are becoming more important sources in today’s ultrafast science. The photocathode laser is an indispensable common subsystem of these sources, generating the ultrafast electron pulses. To fully exploit the potentials of these sources, especially for pump-probe experiments, it is important to achieve high-precision synchronization between the photocathode laser and radio-frequency (RF) sources that manipulate electron pulses. So far, most precision laser-RF synchronization has been achieved by using specially designed low-noise Er-fibre lasers at telecommunication wavelength. Here we show a modular method that achieves long-term (>1 day) stable 10-fs-level synchronization between a commercial 79.33-MHz Ti:sapphire laser oscillator and an S-band (2.856-GHz) RF oscillator. This is an important first step toward a photocathode laser-based femtosecond RF timing and synchronization system that is suitable for various small- to mid-scale ultrafast X-ray and electron sources.
Yang, Heewon; Han, Byungheon; Shin, Junho; Hou, Dong; Chung, Hayun; Baek, In Hyung; Jeong, Young Uk; Kim, Jungwon
2017-01-09
Ultrafast electron-based coherent radiation sources, such as free-electron lasers (FELs), ultrafast electron diffraction (UED) and Thomson-scattering sources, are becoming more important sources in today's ultrafast science. The photocathode laser is an indispensable common subsystem of these sources, generating the ultrafast electron pulses. To fully exploit the potentials of these sources, especially for pump-probe experiments, it is important to achieve high-precision synchronization between the photocathode laser and radio-frequency (RF) sources that manipulate electron pulses. So far, most precision laser-RF synchronization has been achieved by using specially designed low-noise Er-fibre lasers at telecommunication wavelength. Here we show a modular method that achieves long-term (>1 day) stable 10-fs-level synchronization between a commercial 79.33-MHz Ti:sapphire laser oscillator and an S-band (2.856-GHz) RF oscillator. This is an important first step toward a photocathode laser-based femtosecond RF timing and synchronization system that is suitable for various small- to mid-scale ultrafast X-ray and electron sources.
Precision medicine: In need of guidance and surveillance.
Lin, Jian-Zhen; Long, Jun-Yu; Wang, An-Qiang; Zheng, Ying; Zhao, Hai-Tao
2017-07-28
Precision medicine, currently a hotspot in mainstream medicine, has been strongly promoted in recent years. With rapid technological development, such as next-generation sequencing, and fierce competition in molecular targeted drug exploitation, precision medicine represents an advance in science and technology; it also fulfills needs in public health care. The clinical translation and application of precision medicine - especially in the prevention and treatment of tumors - is far from satisfactory; however, the aims of precision medicine deserve approval. Thus, this medical approach is currently in its infancy; it has promising prospects, but it needs to overcome a number of problems and deficiencies. It is expected that in addition to conventional symptoms and signs, precision medicine will define disease in terms of the underlying molecular characteristics and other environmental susceptibility factors. Those expectations should be realized by constructing a novel data network, integrating clinical data from individual patients and personal genomic background with existing research on the molecular makeup of diseases. In addition, multi-omics analysis and multi-discipline collaboration will become crucial elements in precision medicine. Precision medicine deserves strong support, and its development demands directed momentum. We propose three kinds of impetus (research, application and collaboration impetus) for such directed momentum toward promoting precision medicine and accelerating its clinical translation and application.
Precision medicine: In need of guidance and surveillance
Lin, Jian-Zhen; Long, Jun-Yu; Wang, An-Qiang; Zheng, Ying; Zhao, Hai-Tao
2017-01-01
Precision medicine, currently a hotspot in mainstream medicine, has been strongly promoted in recent years. With rapid technological development, such as next-generation sequencing, and fierce competition in molecular targeted drug exploitation, precision medicine represents an advance in science and technology; it also fulfills needs in public health care. The clinical translation and application of precision medicine - especially in the prevention and treatment of tumors - is far from satisfactory; however, the aims of precision medicine deserve approval. Thus, this medical approach is currently in its infancy; it has promising prospects, but it needs to overcome a number of problems and deficiencies. It is expected that in addition to conventional symptoms and signs, precision medicine will define disease in terms of the underlying molecular characteristics and other environmental susceptibility factors. Those expectations should be realized by constructing a novel data network, integrating clinical data from individual patients and personal genomic background with existing research on the molecular makeup of diseases. In addition, multi-omics analysis and multi-discipline collaboration will become crucial elements in precision medicine. Precision medicine deserves strong support, and its development demands directed momentum. We propose three kinds of impetus (research, application and collaboration impetus) for such directed momentum toward promoting precision medicine and accelerating its clinical translation and application. PMID:28811702
NASA Astrophysics Data System (ADS)
Takahashi, Kazunori; Ando, Akira
2017-05-01
The forces exerted on an upstream back wall, on a radial source wall, and on the magnetic field of a helicon plasma thruster, which has two solenoids upstream and downstream of a radiofrequency antenna, are individually and precisely measured. Two different structures of magnetic field lines in the source are tested, where the solenoid current is supplied either only to the downstream solenoid or to both solenoids. It is observed that high-density plasma exists upstream of the rf antenna when both solenoids are powered, while the maximum density occurs near the rf antenna when only the downstream solenoid is powered. Although the force exerted on the back wall is increased in the two-solenoid case, the axial momentum lost to the radial wall is simultaneously enhanced; the total force exerted on the whole structure of the thruster is therefore found to be very similar for the two magnetic field configurations. It is shown that the individual force measurements provide useful information on the plasma momentum interacting with the physical boundaries and the magnetic fields.
Götz, Th I; Lahmer, G; Strnad, V; Bert, Ch; Hensel, B; Tomé, A M; Lang, E W
2017-01-01
During High Dose Rate Brachytherapy (HDR-BT) the spatial position of the radiation source inside catheters implanted into a female breast is determined via electromagnetic tracking (EMT). Dwell positions and dwell times of the radiation source are established, relative to the patient's anatomy, from an initial X-ray-CT-image. During the irradiation treatment, catheter displacements can occur due to patient movements. The current study develops an automatic analysis tool of EMT data sets recorded with a solenoid sensor to assure concordance of the source movement with the treatment plan. The tool combines machine learning techniques such as multi-dimensional scaling (MDS), ensemble empirical mode decomposition (EEMD), singular spectrum analysis (SSA) and particle filter (PF) to precisely detect and quantify any mismatch between the treatment plan and actual EMT measurements. We demonstrate that movement artifacts as well as technical signal distortions can be removed automatically and reliably, resulting in artifact-free reconstructed signals. This is a prerequisite for a highly accurate determination of any deviations of dwell positions from the treatment plan.
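Of the techniques this abstract lists, singular spectrum analysis is the easiest to sketch compactly: embed the 1-D EMT signal in a Hankel trajectory matrix, keep the leading singular components, and average anti-diagonals back into a signal. This is a generic SSA denoiser under assumed window and rank parameters, not the authors' MDS/EEMD/SSA/PF pipeline.

```python
import numpy as np

def ssa_denoise(x, window, rank):
    """Basic singular spectrum analysis: build the Hankel trajectory matrix,
    keep the leading `rank` singular components, and reconstruct a 1-D
    signal by diagonal averaging (Hankelization)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    k = n - window + 1
    X = np.column_stack([x[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    out = np.zeros(n)
    counts = np.zeros(n)
    for j in range(k):                    # average the anti-diagonals
        out[j:j + window] += Xr[:, j]
        counts[j:j + window] += 1
    return out / counts
```

Applied to a sensor trace, the low-rank reconstruction keeps the smooth source trajectory while suppressing measurement noise, a prerequisite for comparing dwell positions against the treatment plan.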
Unrecognized astrometric confusion in the Galactic Centre
NASA Astrophysics Data System (ADS)
Plewa, P. M.; Sari, R.
2018-06-01
The Galactic Centre is a crowded stellar field and frequent unrecognized events of source confusion, which involve undetected faint stars, are expected to introduce astrometric noise on a sub-mas level. This confusion noise is the main non-instrumental effect limiting the astrometric accuracy and precision of current near-infrared imaging observations and the long-term monitoring of individual stellar orbits in the vicinity of the central supermassive black hole. We self-consistently simulate the motions of the known and the yet unidentified stars to characterize this noise component and show that a likely consequence of source confusion is a bias in estimates of the stellar orbital elements, as well as the inferred mass and distance of the black hole, in particular if stars are being observed at small projected separations from it, such as the star S2 during pericentre passage. Furthermore, we investigate modelling the effect of source confusion as an additional noise component that is time-correlated, demonstrating a need for improved noise models to obtain trustworthy estimates of the parameters of interest (and their uncertainties) in future astrometric studies.
Electron cyclotron resonance ion source experience at the Heidelberg Ion Beam Therapy Centera)
NASA Astrophysics Data System (ADS)
Winkelmann, T.; Cee, R.; Haberer, T.; Naas, B.; Peters, A.; Scheloske, S.; Spädtke, P.; Tinschert, K.
2008-02-01
Radiotherapy with heavy ions is an emerging cancer treatment method with unparalleled precision to date. It combines higher control rates, particularly for radiation-resistant tumor species, with reduced adverse effects compared to conventional photon therapy. The accelerator beam lines and structures of the Heidelberg Ion Beam Therapy Center (HIT) have been designed under the leadership of GSI, Darmstadt, with contributions from the IAP Frankfurt. Currently, the accelerator is under commissioning, and the injector linac has been completed. When patient treatment begins in 2008, HIT will be the first medical heavy ion accelerator in Europe. This presentation provides an overview of the project, with special attention given to the 14.5 GHz electron cyclotron resonance (ECR) ion sources in operation with carbon, hydrogen, helium, and oxygen, and to the experience of one year of continuous operation. It also presents examples of beam emittances measured in the low-energy beam transport. In addition to an outlook on further developments of the ECR ion sources for continuously stable operation, this paper focuses on some of the technical developments of the past year.
Beekman, Madeleine; Doyen, Laurent; Oldroyd, Benjamin P
2005-12-01
Honey bee foragers communicate the direction and distance of both food sources and new nest sites to nest mates by means of a symbolic dance language. Interestingly, the precision with which dancers transfer directional information is negatively correlated with the distance to the advertised food source. The 'tuned-error' hypothesis suggests that colonies benefit from this imprecision because it spreads recruits out over a patch of constant size irrespective of the distance to the advertised site. An alternative to the tuned-error hypothesis is that dancers are physically incapable of dancing with great precision for nearby sources. Here we revisit the tuned-error hypothesis by studying the change in dance precision with increasing foraging distance over relatively short distances while controlling for environmental influences. We show that bees indeed increase their dance precision as foraging distance increases. However, we also show that dances performed by swarm scouts for a nearby (30 m) nest site, where there could be no benefit to imprecision, carry either no or only limited directional information. This result suggests that imprecision in dance communication is caused primarily by physical constraints on the ability of dancers to turn around quickly enough when the advertised site is nearby.
Comparison of parameters affecting GNP-loaded choroidal melanoma dosimetry; Monte Carlo study
NASA Astrophysics Data System (ADS)
Sharabiani, Marjan; Asadi, Somayeh; Barghi, Amir Rahnamai; Vaezzadeh, Mehdi
2018-04-01
The current study reports the results of tumor dosimetry in the presence of gold nanoparticles (GNPs) of different sizes and concentrations. Because few studies have addressed brachytherapy of choroidal melanoma in combination with GNPs, this study was performed to determine the optimum GNP size and concentration, i.e., those contributing the highest dose deposition in the tumor region, using two phantom test cases: a water phantom and a full Monte Carlo model of the human eye. Both phantoms were simulated with the MCNP5 code. Tumor dosimetry was performed for a typical point photon source with an energy of 0.38 MeV as a high-energy source in the water phantom, and for a 103Pd brachytherapy source with an average energy of 0.021 MeV as a low-energy source in the eye phantom. The dosimetry was repeated for different sizes and concentrations of GNPs. For all diameters, an increase in GNP concentration resulted in an increase in the dose deposited in the region of interest. At a given concentration, GNPs with larger diameters contributed more dose to the tumor region, an effect that was more pronounced in the eye phantom. A size of 100 nm was found to be optimal for achieving the highest energy deposition within the target. This work investigated the optimum parameters affecting macroscopic dose enhancement in GNP-aided brachytherapy of choroidal melanoma, and it also has implications for using low-energy photon sources in the presence of GNPs to achieve the highest dose enhancement. The study covered four different sizes and concentrations of GNPs; given the sensitivity of human eye tissue, a comprehensive study over a wide range of sizes and concentrations is required to report the precise optimum parameters affecting radiosensitivity.
Diagnostics for a 1.2 kA, 1 MeV, electron induction injector
NASA Astrophysics Data System (ADS)
Houck, T. L.; Anderson, D. E.; Eylon, S.; Henestroza, E.; Lidia, S. M.; Vanecek, D. L.; Westenskow, G. A.; Yu, S. S.
1998-12-01
We are constructing a 1.2 kA, 1 MeV, electron induction injector as part of the RTA program, a collaborative effort between LLNL and LBNL to develop relativistic klystrons for Two-Beam Accelerator applications. The RTA injector will also be used in the development of a high-gradient, low-emittance, electron source and beam diagnostics for the second axis of the Dual Axis Radiographic Hydrodynamic Test (DARHT) Facility. The electron source will be a 3.5″-diameter, thermionic, flat-surface, m-type cathode with a maximum shroud field stress of approximately 165 kV/cm. Additional design parameters for the injector include a pulse length of over 150 ns flat top (1% energy variation), and a normalized edge emittance of less than 200 π-mm-mr. Precise measurement of the beam parameters is required so that performance of the RTA injector can be confidently scaled to the 4 kA, 3 MeV, and 2-microsecond pulse parameters of the DARHT injector. Planned diagnostics include an isolated cathode with resistive divider for direct measurement of current emission, resistive wall and magnetic probe current monitors for measuring beam current and centroid position, capacitive probes for measuring A-K gap voltage, an energy spectrometer, and a pepperpot emittance diagnostic. Details of the injector, beam line, and diagnostics are presented.
Improving the frequency precision of oscillators by synchronization.
Cross, M C
2012-04-01
Improving the frequency precision by synchronizing a lattice of N oscillators with disparate frequencies is studied in the phase reduction limit. In the general case where the coupling is not purely dissipative the synchronized state consists of targetlike waves radiating from a local source, which is a region of higher-frequency oscillators. In this state the improvement of the frequency precision is shown to be independent of N for large N, but instead depends on the disorder and reflects the dependence of the frequency of the synchronized state on just those oscillators in the source region of the waves. These results are obtained by a mapping of the nonlinear phase dynamics onto the linear Anderson problem of the quantum mechanics of electrons on a random lattice in the tight-binding approximation.
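A minimal numerical sketch of the setting described above: a 1-D lattice of phase oscillators with disparate natural frequencies and purely dissipative (sine) coupling, integrated in the phase-reduction limit. The lattice size, disorder range, and coupling strength are illustrative assumptions; in the synchronized state the spread of long-time effective frequencies collapses well below the spread of natural frequencies.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 32
omega = rng.uniform(-0.2, 0.2, N)  # disparate natural frequencies (disorder)
K = 2.0                            # nearest-neighbour coupling strength
dt = 0.01
theta = np.zeros(N)

def step(theta):
    # Purely dissipative sine coupling on a 1-D open chain (Euler step).
    left, right = np.roll(theta, 1), np.roll(theta, -1)
    drive = np.sin(left - theta) + np.sin(right - theta)
    drive[0] = np.sin(theta[1] - theta[0])     # open boundary: one neighbour
    drive[-1] = np.sin(theta[-2] - theta[-1])  # open boundary: one neighbour
    return theta + dt * (omega + K * drive)

for _ in range(50000):        # relax toward the synchronized state
    theta = step(theta)
theta0 = theta.copy()
for _ in range(1000):         # measure over a trailing window
    theta = step(theta)
freqs = (theta - theta0) / (1000 * dt)  # long-time effective frequencies
```

Once the chain phase-locks, every oscillator runs at a common frequency, which is the sense in which synchronization sharpens the collective frequency against the individual disorder.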
Small animal radiotherapy research platforms
NASA Astrophysics Data System (ADS)
Verhaegen, Frank; Granton, Patrick; Tryggestad, Erik
2011-06-01
Advances in conformal radiation therapy and advancements in pre-clinical radiotherapy research have recently stimulated the development of precise micro-irradiators for small animals such as mice and rats. These devices are often kilovolt x-ray radiation sources combined with high-resolution CT imaging equipment for image guidance, as the latter allows precise and accurate beam positioning. This is similar to modern human radiotherapy practice. These devices are considered a major step forward compared to the current standard of animal experimentation in cancer radiobiology research. The availability of this novel equipment enables a wide variety of pre-clinical experiments on the synergy of radiation with other therapies, complex radiation schemes, sub-target boost studies, hypofractionated radiotherapy, contrast-enhanced radiotherapy and studies of relative biological effectiveness, to name just a few examples. In this review we discuss the required irradiation and imaging capabilities of small animal radiation research platforms. We describe the need for improved small animal radiotherapy research and highlight pioneering efforts, some of which led recently to commercially available prototypes. From this, it will be clear that much further development is still needed, on both the irradiation side and imaging side. We discuss at length the need for improved treatment planning tools for small animal platforms, and the current lack of a standard therein. Finally, we mention some recent experimental work using the early animal radiation research platforms, and the potential they offer for advancing radiobiology research.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santos-Villalobos, Hector J; Gregor, Jens; Bingham, Philip R
2014-01-01
At present, neutron sources cannot be fabricated small and powerful enough to achieve high-resolution radiography while maintaining an adequate flux. One solution is to employ computational imaging techniques such as a Magnified Coded Source Imaging (CSI) system. A coded mask is placed between the neutron source and the object. The system resolution is increased by reducing the size of the mask holes, and the flux is increased by increasing the size of the coded mask and/or the number of holes. One limitation of such a system is that the resolution of current state-of-the-art scintillator-based detectors caps at around 50 µm. To overcome this challenge, the coded mask and object are magnified by making the distance from the coded mask to the object much smaller than the distance from the object to the detector. In previous work, we have shown via synthetic experiments that our least squares method outperforms other methods in image quality and reconstruction precision because it models the CSI system components. However, those validation experiments were limited to simplistic neutron sources. In this work, we aim to model the flux distribution of a real neutron source and incorporate such a model in our least squares computational system. We provide a full description of the methodology used to characterize the neutron source and validate the method with synthetic experiments.
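As a toy sketch of the least-squares reconstruction idea behind coded source imaging (reduced here to 1-D and noise-free, with an assumed random binary mask rather than a real coded-mask design), the linear forward model y = Ax can be inverted by linear least squares:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 16                                      # object pixels (toy 1-D scene)
mask = rng.integers(0, 2, size=2 * n - 1)   # pseudo-random binary coded mask
# System matrix: each detector pixel sees the scene through a shifted mask.
A = np.array([[mask[i + j] for j in range(n)] for i in range(n)], float)
x_true = rng.random(n)                      # unknown source/object distribution
y = A @ x_true                              # ideal (noise-free) detector data
# Least-squares estimate of the object from the coded measurement.
x_hat = np.linalg.lstsq(A, y, rcond=None)[0]
```

In the real system the matrix additionally encodes magnification and the measured source flux distribution; here the point is only that reconstruction quality hinges on how faithfully A models the imaging chain.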
DataMed - an open source discovery index for finding biomedical datasets.
Chen, Xiaoling; Gururaj, Anupama E; Ozyurt, Burak; Liu, Ruiling; Soysal, Ergin; Cohen, Trevor; Tiryaki, Firat; Li, Yueling; Zong, Nansu; Jiang, Min; Rogith, Deevakar; Salimi, Mandana; Kim, Hyeon-Eui; Rocca-Serra, Philippe; Gonzalez-Beltran, Alejandra; Farcas, Claudiu; Johnson, Todd; Margolis, Ron; Alter, George; Sansone, Susanna-Assunta; Fore, Ian M; Ohno-Machado, Lucila; Grethe, Jeffrey S; Xu, Hua
2018-01-13
Finding relevant datasets is important for promoting data reuse in the biomedical domain, but it is challenging given the volume and complexity of biomedical data. Here we describe the development of an open source biomedical data discovery system called DataMed, with the goal of promoting the building of additional data indexes in the biomedical domain. DataMed, which can efficiently index and search diverse types of biomedical datasets across repositories, is developed through the National Institutes of Health-funded biomedical and healthCAre Data Discovery Index Ecosystem (bioCADDIE) consortium. It consists of 2 main components: (1) a data ingestion pipeline that collects and transforms original metadata information to a unified metadata model, called DatA Tag Suite (DATS), and (2) a search engine that finds relevant datasets based on user-entered queries. In addition to describing its architecture and techniques, we evaluated individual components within DataMed, including the accuracy of the ingestion pipeline, the prevalence of the DATS model across repositories, and the overall performance of the dataset retrieval engine. Our manual review shows that the ingestion pipeline could achieve an accuracy of 90% and core elements of DATS had varied frequency across repositories. On a manually curated benchmark dataset, the DataMed search engine achieved an inferred average precision of 0.2033 and a precision at 10 (P@10, the number of relevant results in the top 10 search results) of 0.6022, by implementing advanced natural language processing and terminology services. Currently, we have made the DataMed system publicly available as an open source package for the biomedical community.
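The two retrieval metrics quoted above have standard definitions; a small sketch (with made-up relevance judgements, not the benchmark's) shows how P@10 and average precision are computed from a ranked result list:

```python
def precision_at_k(relevances, k=10):
    """Fraction of the top-k results judged relevant (1) vs not relevant (0)."""
    return sum(relevances[:k]) / k

def average_precision(relevances):
    """Mean of precision@i taken at each rank i where a relevant item appears."""
    hits, score = 0, 0.0
    for i, rel in enumerate(relevances, start=1):
        if rel:
            hits += 1
            score += hits / i
    return score / hits if hits else 0.0

# Toy relevance judgements for one query's top-10 ranked results.
ranked = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]
```

"Inferred" average precision, as used on the bioCADDIE benchmark, estimates this same quantity from incomplete (sampled) judgements rather than a fully judged list.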
Jia, Min; Chew, Wade M; Feinstein, Yelena; Skeath, Perry; Sternberg, Esther M
2016-03-21
Cortisol has long been recognized as the "stress biomarker" in evaluating stress-related disorders. Plasma, urine, and saliva are the current sources for cortisol analysis. The sampling of these biofluids is either invasive or has reliability problems that can lead to inaccurate results. Sweat has drawn increasing attention as a promising source for non-invasive stress analysis. A sensitive HPLC-MS/MS method was developed for the quantitation of cortisol ((11β)-11,17,21-trihydroxypregn-4-ene-3,20-dione) in human eccrine sweat. At least one previously unreported isomer that could potentially interfere with quantification was separated from cortisol with mixed-mode RP HPLC. Detection of cortisol was carried out using atmospheric pressure chemical ionization (APCI) and selected reaction monitoring (SRM) in positive ion mode, using cortisol-9,11,12,12-D4 as internal standard. The LOD and LOQ were estimated to be 0.04 ng ml⁻¹ and 0.1 ng ml⁻¹, respectively, and a linear range of 0.10-25.00 ng ml⁻¹ was obtained. Intraday precision (2.5%-9.7%) and accuracy (0.5%-2.1%), and interday precision (12.3%-18.7%) and accuracy (7.1%-15.1%), were achieved. The method has been successfully applied to the cortisol analysis of human eccrine sweat samples. This is the first demonstration that HPLC-MS/MS can be used for the sensitive and highly specific determination of cortisol in human eccrine sweat in the presence of at least one isomer of similar hydrophobicity. The study demonstrates that human eccrine sweat is a promising source for non-invasive assessment of stress biomarkers such as cortisol and other steroid hormones.
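The precision and accuracy figures quoted above are conventionally computed from replicate measurements as a coefficient of variation (%CV) and a relative error; a small sketch with hypothetical replicate values (not the study's data):

```python
import statistics

def percent_cv(replicates):
    """Coefficient of variation (%), a common precision measure in assay validation."""
    return 100.0 * statistics.stdev(replicates) / statistics.fmean(replicates)

def percent_accuracy_error(measured_mean, nominal):
    """Relative deviation (%) of the measured mean from the nominal concentration."""
    return 100.0 * abs(measured_mean - nominal) / nominal

# Hypothetical intraday replicates (ng/ml) at a nominal 5.00 ng/ml spike level.
reps = [4.92, 5.10, 5.03, 4.88, 5.07]
```

Intraday values come from replicates within one run; interday values pool runs across days, which is why the interday %CV range reported above is wider.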
Klein-Fedyshin, Michele; Ketchum, Andrea M; Arnold, Robert M; Fedyshin, Peter J
2014-12-01
MEDLINE offers the Core Clinical Journals filter to limit searches to clinically useful journals. To determine its effectiveness for searching and patient-centric decision making, this study compared the literature used for Morning Report in Internal Medicine with the journals in the filter. An EndNote library with references answering 327 patient-related questions during Morning Report from 2007 to 2012 was exported to a file listing variables including designated Core Clinical Journal, Impact Factor, date used, and medical subject. Bradford's law of scattering was applied, ranking the journals and reflecting their clinical utility. Recall (sensitivity) and precision of the Core Morning Report journals and the non-Core set were calculated. This study applied bibliometrics to compare the 628 articles used against these criteria to determine the journals impacting decision making. Analysis shows that 30% of clinically used articles are from the Core Clinical Journals filter and 16% of the journals represented are Core titles. When Bradford-ranked, 55% of the top 20 journals are Core. Articles <5 years old furnish 63% of sources used. Among the 63 Morning Report subjects, 55 have <50% precision and 41 have <50% recall, including 37 subjects with 0% precision and 0% recall. Low usage of publications within the Core Clinical Journals filter indicates less relevance for hospital-based care. The divergence from high-impact medicine titles suggests that clinically valuable journals differ from academically important titles. With few subjects demonstrating high recall or precision, the MEDLINE Core Clinical Journals filter may require a review and update to better align with current clinical needs.
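As a rough sketch of how per-subject recall and precision of a journal filter can be computed from usage counts (toy definitions and numbers; the study's exact operationalization may differ):

```python
def recall_precision(core_used, noncore_used, core_total):
    """
    Toy per-subject metrics for a journal filter:
      precision - share of the articles actually used that came from Core titles;
      recall    - share of the Core title set that was actually used.
    """
    used = core_used + noncore_used
    precision = core_used / used if used else 0.0
    recall = core_used / core_total if core_total else 0.0
    return recall, precision
```

A subject with 0% precision and 0% recall is simply one where none of the articles consulted came from any Core Clinical Journal.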
Weighing Rocky Exoplanets with Improved Radial Velocimetry
NASA Astrophysics Data System (ADS)
Xuesong Wang, Sharon; Wright, Jason; California Planet Survey Consortium
2016-01-01
The synergy between Kepler and the ground-based radial velocity (RV) surveys has enabled numerous discoveries of small and rocky exoplanets, opening the age of Earth analogs. However, most (29/33) of the RV-detected exoplanets that are smaller than 3 Earth radii do not have their masses constrained to better than 20%, limited by the current RV precision (1-2 m/s). Our work improves the RV precision of the Keck telescope, which is responsible for most of the mass measurements for small Kepler exoplanets. We have discovered and verified, for the first time, two of the dominant terms in Keck's RV systematic error budget: modeling errors (mostly in deconvolution) and telluric contamination. These two terms contribute 1 m/s and 0.6 m/s, respectively, to the RV error budget (RMS in quadrature), and they create spurious signals at periods of one sidereal year and its harmonics with amplitudes of 0.2-1 m/s. Left untreated, these errors can mimic the signals of Earth-like or super-Earth planets in the Habitable Zone. Removing these errors will bring better precision to ten years' worth of Keck data and better constraints on the masses and compositions of small Kepler planets. As more precise RV instruments come online, we need advanced data analysis tools to overcome issues like these in order to detect an Earth twin (RV amplitude 8 cm/s). We are developing a new, open-source RV data analysis tool in Python, which uses Bayesian MCMC and Gaussian processes, to fully exploit the hardware improvements brought by new instruments like MINERVA and NASA's WIYN/EPDS.
NASA Astrophysics Data System (ADS)
Shu, D.; Liu, W.; Kearney, S.; Anton, J.; Tischler, J. Z.
2015-09-01
The 3-D X-ray diffraction microscope is a new nondestructive tool for the three-dimensional characterization of mesoscopic materials structure. A flexural-pivot-based precision linear stage has been designed to perform a wire scan as a differential aperture for the 3-D diffraction microscope at the Advanced Photon Source, Argonne National Laboratory. The mechanical design and finite element analyses of the flexural stage, as well as its initial mechanical test results with a laser interferometer, are described in this paper.
A Guide for Collecting Seismic, Acoustic, and Magnetic Data for Multiple Uses
1975-01-01
...time, simultaneously for the analog technique or sequentially for the digital technique. Both methods require that precision timing networks be... ...with a precise voltage proportional to the sensitivity of the magnetometer. Whenever any electronic equipment affecting calibration has to be replaced... ...described as precisely as possible, including (but not limited to) the following: a. Name of source, b. Continuous or transient, c. Distance from geophone
[Estimation of desert vegetation coverage based on multi-source remote sensing data].
Wan, Hong-Mei; Li, Xia; Dong, Dao-Rui
2012-12-01
Taking the lower reaches of the Tarim River in Xinjiang, Northwest China, as the study area, and based on ground investigation and multi-source remote sensing data of different resolutions, estimation models for desert vegetation coverage were built, and the precisions of the different estimation methods and models were compared. The results showed that with increasing spatial resolution of the remote sensing data, the precisions of the estimation models increased. The estimation precisions of the models based on high, middle-high, and middle-low resolution remote sensing data were 89.5%, 87.0%, and 84.56%, respectively, and the precisions of the remote sensing models were higher than that of the vegetation index method. This study revealed how the estimation precision of desert vegetation coverage changes with the spatial resolution of the remote sensing data, and realized the quantitative conversion of parameters and scales among high, middle, and low spatial resolution remote sensing data of desert vegetation coverage, which provides direct evidence for establishing and implementing a comprehensive remote sensing monitoring scheme for ecological restoration in the study area.
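A minimal sketch of a coverage-estimation model of the kind described: an ordinary least-squares fit of ground-truth coverage against a vegetation index, scored by an illustrative precision measure. The synthetic data, coefficients, and precision definition here are assumptions, not the study's.

```python
import numpy as np

rng = np.random.default_rng(3)
ndvi = rng.uniform(0.05, 0.6, 50)                        # hypothetical index values
coverage = 140.0 * ndvi + 2.0 + rng.normal(0, 1.5, 50)   # "ground truth" coverage (%)

# Ordinary least-squares fit of coverage against the vegetation index.
A = np.column_stack([ndvi, np.ones_like(ndvi)])
(slope, intercept), *_ = np.linalg.lstsq(A, coverage, rcond=None)
pred = slope * ndvi + intercept
# Illustrative "estimation precision": 100% minus the mean relative absolute error.
precision = 100.0 * (1.0 - np.mean(np.abs(pred - coverage) / coverage))
```

Higher-resolution imagery effectively tightens the scatter between index and coverage, which is why the fitted models' precision climbs with resolution in the study.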
ERIC Educational Resources Information Center
National Alliance of Business, Inc., Washington, DC.
CertainTeed's Precision Strike training program was designed to close the gaps between the current status of its workplace and where that work force needed to be to compete successfully in global markets. Precision Strike included Skills and Knowledge in Lifelong Learning (SKILL) customized, computerized lessons in basic skills, one-on-one…
NASA Astrophysics Data System (ADS)
Kim, D.; Shin, S.; Ha, J.; Lee, D.; Lim, Y.; Chung, W.
2017-12-01
Seismic physical modeling is a laboratory-scale experiment that deals with the actual physical phenomena that may occur in the field. In seismic physical modeling, field conditions are downscaled. For this reason, even a small error may lead to a big error in the actual field, so the positions of the source and the receiver must be precisely controlled in scale modeling. In this study, we have developed a seismic physical modeling system capable of precisely controlling position along three axes. For automatic and precise position control of an ultrasonic transducer (source and receiver) in the directions of the three axes (x, y, and z), a motor was mounted on each axis. The motors provide positional precision of 2'' for the x and y axes and 0.05 mm for the z axis. Because the system can automatically and precisely control positions along all three axes, simulations can be carried out using the latest exploration techniques, such as OBS and Broadband Seismic. For the signal generation section, a waveform generator that can produce a maximum of two sources was used, and for the data acquisition section, which receives and stores reflected signals, an A/D converter that can receive a maximum of four signals was used. Because multiple sources and receivers can be used at the same time, the system supports diverse exploration methods, such as single-channel, multichannel, and 3-D exploration. A computer control program based on LabVIEW was created to control the position of the transducer, set the data acquisition parameters, and check the exploration data and progress in real time. A marine environment was simulated using a water tank 1 m wide, 1 m long, and 0.9 m high.
To evaluate the performance and applicability of the seismic physical modeling system developed in this study, single-channel and multichannel explorations were carried out in the simulated marine environment, and the accuracy of the modeling system was verified by comparing the acquired exploration data with numerical modeling data.
Modeling of a Compact Terahertz Source based on the Two-Stream Instability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Svimonishvili, Tengiz
2016-05-17
THz radiation straddles the microwave and infrared bands of the electromagnetic spectrum, thus combining the penetrating power of lower-frequency waves with the imaging capabilities of higher-energy infrared radiation. THz radiation is employed in various fields such as cancer research, biology, agriculture, homeland security, and environmental monitoring. Conventional vacuum electronic sources of THz radiation (e.g., fast- and slow-wave devices) either require very small structures or are bulky and expensive to operate. Optical sources necessitate cryogenic cooling and are presently capable of producing milliwatt levels of power at THz frequencies. We propose a millimeter and sub-millimeter wave source based on a well-known phenomenon called the two-stream instability. The two-beam source relies on low-energy and low-current electron beams for operation. It is also compact, simple in design, and does not contain expensive parts that require complex machining and precise alignment. In this dissertation, we perform 2-D particle-in-cell (PIC) simulations of the interaction region of the two-beam source. The interaction region consists of a beam pipe of radius ra and two electron beams of radius rb co-propagating and interacting inside the pipe. The simulations involve the interaction of unmodulated (no initial energy modulation) and modulated (energy-modulated, seeded at a given frequency) electron beams. In addition, both cold (monoenergetic) and warm (Gaussian) beams are treated.
NASA Astrophysics Data System (ADS)
Singh, U. N.; Refaat, T. F.; Ismail, S.; Davis, K. J.; Kawa, S. R.; Menzies, R. T.; Petros, M.; Yu, J.
2016-12-01
Carbon dioxide (CO2) is recognized as the most important anthropogenic greenhouse gas. While the CO2 concentration is rapidly increasing, understanding of the global carbon cycle remains a primary scientific challenge, mainly because CO2 sources and sinks are not fully characterized. Quantifying the current global distribution of CO2 sources and sinks with sufficient accuracy and spatial resolution is a critical requirement for improving models of carbon-climate interactions and for attributing fluxes to specific biogeochemical processes. This requires sustained atmospheric CO2 observations with high precision and low bias for high accuracy, and with dense spatial and temporal coverage that cannot be fully realized with current CO2 observing systems, including existing satellite CO2 passive remote sensors. Progress in 2-micron instrument technologies, airborne testing, and system performance simulations indicates that the necessary lower-tropospheric-weighted CO2 measurements can be achieved from space using new high-pulse-energy 2-micron direct-detection active remote sensing. Advantages of CO2 active remote sensing include low-bias measurements that are independent of sunlight or Earth's radiation, and day/night coverage over all latitudes and seasons. In addition, the direct detection system provides precise ranging with simultaneous measurement of aerosol and cloud distributions. The 2-micron band offers strong CO2 absorption lines with optimum low-tropospheric and near-surface weighting. A feasibility study, including system optimization and sensitivity analysis of a space-based 2-micron pulsed IPDA lidar for CO2 measurement, is presented. It builds on the successful demonstration of the CO2 double-pulse IPDA lidar and the technology maturation of the triple-pulse IPDA lidar currently under development at NASA Langley Research Center.
Preliminary simulations indicate CO2 random measurement errors of 0.71, 0.35 and 0.13 ppm for snow, ocean surface, and desert surface reflectivity, respectively. These simulations assume a 400 km altitude polar orbit, 100 mJ pulse energy, a 1.5 m telescope, a 6.2 MHz detection bandwidth, 0.05 aerosol optical depth and 7 second data average.
High Precision Ranging and Range-Rate Measurements over Free-Space-Laser Communication Link
NASA Technical Reports Server (NTRS)
Yang, Guangning; Lu, Wei; Krainak, Michael; Sun, Xiaoli
2016-01-01
We present a high-precision ranging and range-rate measurement system operating over an optical-ranging or combined ranging-communication link. A complete bench-top optical communication system was built, comprising a ground terminal and a space terminal. Ranging and range-rate tests were conducted in two configurations. In the communication configuration with a 622 Mbps data rate, we achieved a two-way range-rate error of 2 microns/s, or a modified Allan deviation of 9 × 10⁻¹⁵ with 10 second averaging time. Ranging and range-rate performance as a function of the bit error rate of the communication link is reported; it is not sensitive to the link error rate. In the single-frequency amplitude modulation mode, we report a two-way range-rate error of 0.8 microns/s, or a modified Allan deviation of 2.6 × 10⁻¹⁵ with 10 second averaging time. We identified the major noise sources in the current system as transmitter modulation injected noise and receiver electronics noise. A new improved system will be constructed to further improve performance in both operating modes.
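The stability figures above are quoted as modified Allan deviations. Below is a compact implementation of the standard estimator from time-error (phase) samples, with an illustrative check that a pure linear ramp (a constant range-rate) contributes nothing; the sample interval and ramp values are made up.

```python
import math

def mod_allan_dev(phase, tau0, m):
    """
    Modified Allan deviation from time-error (phase) samples x_i taken every
    tau0 seconds, evaluated at averaging time tau = m * tau0:
      Mod sigma_y^2(tau) = (1 / (2 tau^2 m^2 M)) * sum over j of
        ( sum_{i=j}^{j+m-1} (x_{i+2m} - 2 x_{i+m} + x_i) )^2,  M = N - 3m + 1.
    """
    N = len(phase)
    tau = m * tau0
    terms = []
    for j in range(N - 3 * m + 1):
        s = sum(phase[i + 2 * m] - 2 * phase[i + m] + phase[i]
                for i in range(j, j + m))
        terms.append(s * s)
    return math.sqrt(sum(terms) / (2 * tau ** 2 * m ** 2 * len(terms)))

# A constant range-rate appears as a linear phase ramp; its second differences
# vanish, so it does not register as instability in this statistic.
ramp = [2.5e-6 * i for i in range(200)]
```

This is why the statistic isolates noise-driven range-rate error from the deterministic motion of the link.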
Web-based visualization of very large scientific astronomy imagery
NASA Astrophysics Data System (ADS)
Bertin, E.; Pillay, R.; Marmo, C.
2015-04-01
Visualizing and navigating through large astronomy images from a remote location with current astronomy display tools can be a frustrating experience in terms of speed and ergonomics, especially on mobile devices. In this paper, we present a high performance, versatile and robust client-server system for remote visualization and analysis of extremely large scientific images. Applications of this work include survey image quality control, interactive data query and exploration, citizen science, as well as public outreach. The proposed software is entirely open source and is designed to be generic and applicable to a variety of datasets. It provides access to floating point data at terabyte scales, with the ability to precisely adjust image settings in real-time. The proposed clients are light-weight, platform-independent web applications built on standard HTML5 web technologies and compatible with both touch and mouse-based devices. We put the system to the test and assess its performance, showing that a single server can comfortably handle more than a hundred simultaneous users accessing full-precision 32-bit astronomy data.
Askari, Sina; Zhang, Mo; Won, Deborah S
2010-01-01
Current methods for assessing the efficacy of treatments for Parkinson's disease (PD) rely on physician rated scores. These methods pose three major shortcomings: 1) the subjectivity of the assessments, 2) the lack of precision on the rating scale (6 discrete levels), and 3) the inability to assess symptoms except under very specific conditions and/or for very specific tasks. To address these shortcomings, a portable system was developed to continuously monitor Parkinsonian symptoms with quantitative measures based on electrical signals from muscle activity (EMG). Here, we present the system design and the implementation of methods for system validation. This system was designed to provide continuous measures of tremor, rigidity, and bradykinesia which are related to the neurophysiological source without the need for multiple bulky experimental apparatuses, thus allowing more precise, quantitative indicators of the symptoms which can be measured during practical daily living tasks. This measurement system has the potential to improve the diagnosis of PD as well as the evaluation of PD treatments, which is an important step in the path to improving PD treatments.
Development of Models for High Precision Simulation of the Space Mission Microscope
NASA Astrophysics Data System (ADS)
Bremer, Stefanie; List, Meike; Selig, Hanns; Lämmerzahl, Claus
MICROSCOPE is a French space mission for testing the Weak Equivalence Principle (WEP). The mission goal is the determination of the Eötvös parameter with an accuracy of 10⁻¹⁵. This will be achieved by means of two high-precision capacitive differential accelerometers built by the French institute ONERA. At the German institute ZARM, drop tower tests are carried out to verify the payload performance. Additionally, the mission data evaluation is prepared in close cooperation with the French partners CNES, ONERA, and OCA. Therefore, a comprehensive simulation of the real system, including the science signal and all error sources, is being built for the development and testing of data reduction and data analysis algorithms to extract the WEP violation signal. Currently, the High Performance Satellite Dynamics Simulator (HPS), a cooperation project of ZARM and the DLR Institute of Space Systems, is being adapted to the MICROSCOPE mission for the simulation of test mass and satellite dynamics. Models of environmental disturbances such as solar radiation pressure are considered as well. Furthermore, the on-board capacitive sensors are modeled in detail.
Relaxation of the composite Higgs little hierarchy
NASA Astrophysics Data System (ADS)
Batell, Brian; Fedderke, Michael A.; Wang, Lian-Tao
2017-12-01
We describe a composite Higgs scenario in which a cosmological relaxation mechanism naturally gives rise to a hierarchy between the weak scale and the scale of spontaneous global symmetry breaking. This is achieved through the scanning of sources of explicit global symmetry breaking by a relaxion field during an exponentially long period of inflation in the early universe. We explore this mechanism in detail in a specific composite Higgs scenario with QCD-like dynamics, based on an ultraviolet SU(N)_TC 'technicolor' confining gauge theory with three Dirac technifermion flavors. We find that we can successfully generate a hierarchy of scales ξ ≡ ⟨h⟩²/F_π² ≳ 1.2 × 10⁻⁴ (i.e., compositeness scales F_π ~ 20 TeV) without tuning. This evades all current electroweak precision bounds on our (custodial-violating) model. While directly observing the heavy composite states in this model will be challenging, a future electroweak precision measurement program can probe most of the natural parameter space for the model. We also highlight signatures of more general composite Higgs models in the cosmological relaxation framework, including some implications for flavor and dark matter.
The Visibility of Earth Transits
NASA Technical Reports Server (NTRS)
Castellano, Tim; DeVincenzi, Donald L. (Technical Monitor)
2000-01-01
The recent detection of planetary transits of the solar-like star HD 209458 at a distance of 47 parsecs suggests that transits can reveal the presence of Jupiter-size planetary companions in the solar neighborhood. Recent space-based transit searches have achieved photometric precision within an order of magnitude of that required to detect the much smaller transit signal of an earth-size planet around a solar-size star. Laboratory experiments in the presence of realistic noise sources have shown that CCDs can achieve photometric precision adequate to detect the 9.6 × 10⁻⁵ dimming of the Sun due to a transit of the Earth. Space-based solar irradiance monitoring has shown that the intrinsic variability of the Sun would not preclude such a detection. Transits of the Sun by the Earth would be detectable by observers that reside within a narrow band of sky positions near the ecliptic plane, if the observers possess current Earth-epoch levels of technology and astronomical expertise. A catalog of candidate target stars, their properties, and simulations of the photometric Earth transit signal detectability at each target is presented.
The Visibility of Earth Transits
NASA Technical Reports Server (NTRS)
Castellano, Timothy P.; Doyle, Laurance; McIntosh, Dawn; DeVincenzi, Donald (Technical Monitor)
2000-01-01
The recent photometric detection of planetary transits of the solar-like star HD 209458 at a distance of 47 parsecs suggests that transits can reveal the presence of Jupiter-size planetary companions in the solar neighborhood. Recent space-based transit searches have achieved photometric precision within an order of magnitude of that required to detect the much smaller transit signal of an earth-size planet across a solar-size star. Laboratory experiments in the presence of realistic noise sources have shown that CCDs can achieve photometric precision adequate to detect the 9.6 × 10⁻⁵ dimming of the Sun due to a transit of the Earth. Space-based solar irradiance monitoring has shown that the intrinsic variability of the Sun would not preclude such a detection. Transits of the Sun by the Earth would be detectable by observers that reside within a narrow band of sky positions near the ecliptic plane, if the observers possess current Earth-epoch levels of technology and astronomical expertise. A catalog of solar-like stars that satisfy the geometric condition for Earth transit visibility is presented.
Silicon photodiode as a detector in the rocket-borne photometry of the near infrared airglow.
Schaeffer, R C
1976-11-01
The application of a silicon P-I-N photodiode to the dc measurement of low levels of near-infrared radiation is described. It is shown that the threshold of signal detection is set by the current-amplifier voltage noise, whose effect at the output is determined by the source resistance of the photodiode. The photodiode was used as the detector in a compact interference-filter photometer designed for rocket-borne studies of the airglow. Flight results have proved the instrument's capability to provide measurements sufficiently precise to yield an accurate height profile of the (0-0) atmospheric band of O2 night airglow at λ762 nm.
Lessons from non-canonical splicing
Ule, Jernej
2016-01-01
Recent improvements in experimental and computational techniques used to study the transcriptome have enabled an unprecedented view of RNA processing, revealing many previously unknown non-canonical splicing events. This includes cryptic events located far from the currently annotated exons, and unconventional splicing mechanisms that have important roles in regulating gene expression. These non-canonical splicing events are a major source of newly emerging transcripts during evolution, especially when they involve sequences derived from transposable elements. They are therefore under precise regulation and quality control, which minimises their potential to disrupt gene expression. While non-canonical splicing can lead to aberrant transcripts that cause many diseases, we also explain how it can be exploited for new therapeutic strategies. PMID:27240813
Mobile mapping of methane emissions and isoscapes
NASA Astrophysics Data System (ADS)
Takriti, Mounir; Ward, Sue; Wynn, Peter; Elias, Dafydd; McNamara, Niall
2017-04-01
Methane (CH4) is a potent greenhouse gas emitted from a variety of natural and anthropogenic sources. It is crucial to detect CH4 emissions accurately and efficiently and to identify their sources, both to improve our understanding of changing emission patterns and to identify ways to curtail their release into the atmosphere. With established methods, however, this can be challenging as well as time- and resource-intensive, owing to the temporal and spatial heterogeneity of many sources. To address this problem, we have developed a vehicle-mounted mobile system that combines high-precision CH4 measurements with isotopic mapping and dual-isotope source characterisation. Here we present details of the development and testing of a unique system for the detection and isotopic analysis of CH4 plumes, built around a Picarro isotopic (13C/12C) gas analyser and a high-precision Los Gatos greenhouse gas analyser. Combined with micrometeorological measurements and a mechanism for collecting discrete samples for high-precision dual-isotope (13C/12C, 2H/1H) analysis, the system enables mapping of concentrations as well as directional and isotope-based source verification. We then present findings from our mobile methane surveys around the North West of England. This area includes a variety of natural and anthropogenic methane sources within a relatively small geographical area, including livestock farming, urban and industrial gas infrastructure, landfills and waste water treatment facilities, and wetlands. We show that the system was able to locate leaks from natural gas infrastructure and emissions from agricultural activities and to distinguish the isotope signatures of these sources.
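Isotope-based source characterisation of the kind described is commonly done with a Keeling plot: regressing δ13C against 1/[CH4], so that the intercept estimates the isotopic signature of the source mixing into background air. A small sketch with made-up plume numbers (the data points and the −60‰ source value are illustrative, not survey results):

```python
import numpy as np

# Hypothetical samples across a plume: CH4 mole fraction (ppm) and delta-13C (permil).
# Constructed from two-member mixing of background air with a -60 permil source.
ch4  = np.array([2.0,    2.5,    3.2,    4.0,    5.0])
d13c = np.array([-47.84, -50.27, -52.40, -53.92, -55.14])

# Keeling plot: delta_obs = delta_source + b * (1/C);
# the intercept at 1/C -> 0 is the source signature.
slope, intercept = np.polyfit(1.0 / ch4, d13c, 1)
# intercept ~ -60 permil here, i.e. the assumed biogenic-like source signature
```

In a real survey, the quality of the intercept estimate depends on sampling a wide enough concentration range through the plume.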
NASA Astrophysics Data System (ADS)
Sánchez, Daniel; Nieh, James C.; Hénaut, Yann; Cruz, Leopoldo; Vandame, Rémy
Several studies have examined the existence of recruitment communication mechanisms in stingless bees. However, the spatial accuracy of location-specific recruitment has not been examined. Moreover, the location-specific recruitment of reactivated foragers, i.e., foragers that have previously experienced the same food source at a different location and time, has not been explicitly examined. However, such foragers may also play a significant role in colony foraging, particularly in small colonies. Here we report that reactivated Scaptotrigona mexicana foragers can recruit with high precision to a specific food location. The recruitment precision of reactivated foragers was evaluated by placing control feeders to the left and the right of the training feeder (direction-precision tests) and between the nest and the training feeder and beyond it (distance-precision tests). Reactivated foragers arrived at the correct location with high precision: 98.44% arrived at the training feeder in the direction trials (five-feeder fan-shaped array, accuracy of at least ±6° of azimuth at 50 m from the nest), and 88.62% arrived at the training feeder in the distance trials (five-feeder linear array, accuracy of at least ±5 m or ±10% at 50 m from the nest). Thus, S. mexicana reactivated foragers can find the indicated food source at a specific distance and direction with high precision, higher than that shown by honeybees, Apis mellifera, which do not communicate food location at such close distances to the nest.
Osmium isotopes demonstrate distal transport of contaminated sediments in Chesapeake Bay
Helz, G.R.; Adelson, J.M.; Miller, C.V.; Cornwell, J.C.; Hill, J.M.; Horan, M.; Walker, R.J.
2000-01-01
Because the isotopic composition of anthropogenic Os is normally distinctive in comparison to continental crust and is precisely measurable, this platinum-group element is attractive as a tracer of transport pathways for contaminated sediments in estuaries. Evidence herein and elsewhere suggests that biomedical research institutions are the chief source of anthropogenic Os. In the Chesapeake Bay region, uncontaminated sediments bear a crustal 187Os/188Os signature of 0.73 ± 0.10. Slightly higher 187Os/188Os ratios occur in Re-rich Coastal Plain deposits due to post-Miocene 187Re decay. The upper Susquehanna Basin also yields sediments with higher 187Os/188Os. Beginning in the late 1970s, this signal was overprinted by a low-187Os/188Os (anthropogenic) source in the lower Susquehanna Basin. In the vicinity of Baltimore, which is a major center of heavy industry as well as biomedical research, anthropogenic Os has been found only in sediments impacted by the principal wastewater treatment plant. Surprisingly, a mid-Bay site distant from anthropogenic sources contains the strongest anthropogenic Os signal in the data set, having received anthropogenic Os sporadically since the mid-20th century. Transport of particles to this site overrode the northward-flowing bottom currents. Finding anthropogenic Os at this site cautions that other particle-borne substances, including hazardous ones, could be dispersed broadly in this estuary.
Saltabayeva, Ulbosin; Garib, Victoria; Morenko, Marina; Rosenson, Rafail; Ispayeva, Zhanat; Gatauova, Madina; Zulus, Loreta; Karaulov, Alexander; Gastager, Felix; Valenta, Rudolf
2017-01-01
Background: Allergen molecule-based diagnosis has been suggested to facilitate the identification of disease-causing allergen sources and the prescription of allergen-specific immunotherapy (AIT). The aim of the current study was to compare allergen molecule-based IgE serology with allergen extract-based skin testing for the identification of the disease-causing allergen sources. The study was conducted in an area where patients are exposed to pollen from multiple sources (trees, grasses, and weeds) at the same time, to compare the diagnostic efficiency of the two forms of diagnosis. Methods: Patients from Astana, Kazakhstan, who suffered from pollen-induced allergy (n = 95) were subjected to skin prick testing (SPT) with a local panel of tree pollen, grass pollen, and weed pollen allergen extracts, and IgE antibodies specific for marker allergen molecules (nArt v 1, nArt v 3, rAmb a 1, rPhl p 1, rPhl p 5, rBet v 1) were measured by ImmunoCAP. Direct and indirect costs for diagnosis based on SPT and on marker allergen-based IgE serology, as well as direct costs for immunotherapy depending on SPT and serological test results, were calculated. Results: The costs for SPT-based diagnosis per patient were lower than the costs for allergen molecule-based IgE serology. However, allergen molecule-based serology was more precise in detecting the disease-causing allergen sources. A lower number of immunotherapy treatments (n = 119) was needed according to molecular diagnosis as compared to extract-based diagnosis (n = 275), which considerably reduced the total costs for diagnosis and a 3-year treatment from EUR 1,112.30 to EUR 521.77 per patient. Conclusions: The results from this real-life study show that SPT is less expensive than allergen molecule-based diagnostic testing, but molecular diagnosis allowed more precise prescription of immunotherapy, which substantially reduced treatment costs and the combined costs of diagnosis and treatment. PMID:28654920
NASA Astrophysics Data System (ADS)
Turner, Alexander J.; Jacob, Daniel J.; Benmergui, Joshua; Brandman, Jeremy; White, Laurent; Randles, Cynthia A.
2018-06-01
Anthropogenic methane emissions originate from a large number of fine-scale and often transient point sources. Satellite observations of atmospheric methane columns are an attractive approach for monitoring these emissions but are limited by instrument precision, pixel resolution, and measurement frequency. Dense observations will soon be available in both low-Earth and geostationary orbits, but the extent to which they can provide fine-scale information on methane sources has yet to be explored. Here we present an observing system simulation experiment (OSSE) to assess the capabilities of different satellite observing-system configurations. We conduct a 1-week WRF-STILT simulation to generate methane column footprints at 1.3 × 1.3 km² spatial resolution and hourly temporal resolution over a 290 × 235 km² domain in the Barnett Shale, a major oil and gas field in Texas with a large number of point sources. We sub-sample these footprints to match the observing characteristics of the recently launched TROPOMI instrument (7 × 7 km² pixels, 11 ppb precision, daily frequency), the planned GeoCARB instrument (2.7 × 3.0 km² pixels, 4 ppb precision, nominal twice-daily frequency), and other proposed observing configurations. The information content of the various observing systems is evaluated using the Fisher information matrix and its eigenvalues. We find that a week of TROPOMI observations should provide information on temporally invariant emissions at ~30 km spatial resolution. GeoCARB should provide information on temporally invariant emissions at ~2-7 km spatial resolution, depending on sampling frequency (hourly to daily). Improvements to instrument precision yield greater increases in information content than improved sampling frequency; a precision better than 6 ppb is critical for GeoCARB to achieve fine resolution of emissions. Transient emissions would be missed with either TROPOMI or GeoCARB. An aspirational high-resolution geostationary instrument with 1.3 × 1.3 km² pixel resolution, hourly return time, and 1 ppb precision would effectively constrain the temporally invariant emissions in the Barnett Shale at the kilometer scale and provide some information on the hourly variability of sources.
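The Fisher-information ranking described can be illustrated for a linear Gaussian observing system, where F = KᵀK/σ² and the number of well-constrained emission modes is read off the eigenvalue spectrum. The matrix sizes, noise level, and unit threshold below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_src = 200, 50                # hypothetical counts of observations / emission elements
K = rng.normal(size=(n_obs, n_src))   # Jacobian: column j = column footprint of source j
sigma_ppb = 11.0                      # single-sounding precision (TROPOMI-like, per the text)

# Fisher information matrix for uncorrelated Gaussian observation noise
F = K.T @ K / sigma_ppb**2
eigvals = np.linalg.eigvalsh(F)

# Eigenvalues well above the prior precision (taken here as 1 in normalized units)
# correspond to emission modes the observing system actually constrains.
n_constrained = int(np.sum(eigvals > 1.0))
```

Comparing `n_constrained` (or the full spectrum) across candidate configurations is one way such an OSSE can rank pixel size, precision, and revisit-frequency trade-offs.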
ERIC Educational Resources Information Center
Reid, Robert L.; And Others
This guide outlines the competency-based, two-year precision optics curriculum that the American Precision Optics Manufacturers Association has proposed to fill the void that it suggests will soon exist as many of the master opticians currently employed retire. The model, which closely resembles the old European apprenticeship model, calls for 300…
NASA Astrophysics Data System (ADS)
Krawczynski, M.; McLean, N.
2017-12-01
One of the most accurate and useful ways of determining the age of rocks that formed more than about 500,000 years ago is uranium-lead (U-Pb) geochronology. Earth scientists use U-Pb geochronology to put together the geologic history of entire regions and of specific events, like the mass extinction of all non-avian dinosaurs about 66 million years ago or the catastrophic eruptions of supervolcanoes like the one currently centered at Yellowstone. The mineral zircon is often utilized because it is abundant, durable, and readily incorporates uranium into its crystal structure. But it excludes thorium, whose isotope 230Th is part of the naturally occurring isotopic decay chain from 238U to 206Pb. Calculating a date from the relative abundances of 206Pb and 238U therefore requires a correction for the missing 230Th. The behavior of U and Th when zircon crystallizes from a melt is not constrained precisely enough by existing experiments and observations, and thus the uncertainty introduced by this 'Th correction' is currently one of the largest sources of systematic error in determining dates. Here we present preliminary results from our study of actinide partitioning between zircon and melt. Experiments have been conducted to grow zircon from melts doped with U and Th that mimic natural magmas over a range of temperatures and compositions. Synthetic zircons are separated from their coexisting glass, and the abundance and distribution of U and Th in each phase are determined using high-precision, high-spatial-resolution techniques. These preliminary experiments are the beginning of a study that will result in precise determination of the zircon/melt uranium and thorium partition coefficients under a wide variety of naturally occurring conditions. These data will be fit to a multidimensional surface using maximum-likelihood regression techniques, so that the ratio of partition coefficients can be calculated for any set of known parameters. The results of this study will reduce the largest source of uncertainty in dating young zircons and improve the accuracy of U-Pb dates, improving our ability to tell time during geologic processes. A more accurately calibrated geologic timescale is important to geologists of all disciplines, from paleontology to planetary cosmochemistry to geobiology.
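As background to the 'Th correction' discussed above: the commonly used first-order expression for the 206Pb/238U age offset caused by initial 230Th disequilibrium depends only on f, the zircon/melt ratio of Th/U (the quantity these partitioning experiments constrain). This expression is standard geochronology background, not something stated in the abstract:

```python
import math

# 230Th decay constant (1/yr), from its 75.584 kyr half-life
LAM_230 = math.log(2) / 7.5584e4

def age_offset_yr(f):
    """First-order offset (measured minus true) of a 206Pb/238U date
    due to initial 230Th disequilibrium, with
        f = (Th/U)_zircon / (Th/U)_melt.
    f < 1 (Th excluded by zircon) makes the measured date too young;
    the limiting case f = 0 gives a deficit of 1/lambda_230 ~ 109 kyr."""
    return (f - 1.0) / LAM_230
```

The ~109 kyr maximum offset is why the Th correction matters most for young, high-precision zircon dates, exactly the regime the abstract targets.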
2002-12-01
applications, vibration sources are numerous, such as: launch loading; man-induced accelerations, as on the Shuttle or space station; solar ... However, the lack of significant tracking errors during times when other actuators were stationary, and the fact that the local maximum tracking...
Attending to Precision with Secret Messages
ERIC Educational Resources Information Center
Starling, Courtney; Whitacre, Ian
2016-01-01
Mathematics is a language that is characterized by words and symbols that have precise definitions. Many opportunities exist for miscommunication in mathematics if the words and symbols are interpreted incorrectly or used in imprecise ways. In fact, it is found that imprecision is a common source of mathematical disagreements and misunderstandings…
Happel, Max F K; Jeschke, Marcus; Ohl, Frank W
2010-08-18
Primary sensory cortex integrates sensory information from afferent feedforward thalamocortical projection systems and convergent intracortical microcircuits. Both input systems have been demonstrated to provide different aspects of sensory information. Here we have used high-density recordings of laminar current source density (CSD) distributions in primary auditory cortex of Mongolian gerbils in combination with pharmacological silencing of cortical activity and analysis of the residual CSD, to dissociate the feedforward thalamocortical contribution and the intracortical contribution to spectral integration. We found a temporally highly precise integration of both types of inputs when the stimulation frequency was in close spectral neighborhood of the best frequency of the measurement site, in which the overlap between both inputs is maximal. Local intracortical connections provide both directly feedforward excitatory and modulatory input from adjacent cortical sites, which determine how concurrent afferent inputs are integrated. Through separate excitatory horizontal projections, terminating in cortical layers II/III, information about stimulus energy in greater spectral distance is provided even over long cortical distances. These projections effectively broaden spectral tuning width. Based on these data, we suggest a mechanism of spectral integration in primary auditory cortex that is based on temporally precise interactions of afferent thalamocortical inputs and different short- and long-range intracortical networks. The proposed conceptual framework allows integration of different and partly controversial anatomical and physiological models of spectral integration in the literature.
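The laminar current source density (CSD) analysis described above is conventionally computed as the sign-inverted second spatial derivative of the LFP depth profile. A minimal sketch of that standard estimator, where the conductivity value is an assumed placeholder, not a parameter from the study:

```python
import numpy as np

def csd(lfp, dz, sigma=0.3):
    """Standard 1-D CSD estimate across a laminar probe:
    -sigma * d2(phi)/dz2, by central second differences.
    lfp:   potentials (V) at equally spaced depths
    dz:    contact spacing (m)
    sigma: assumed homogeneous tissue conductivity (S/m)"""
    lfp = np.asarray(lfp, dtype=float)
    return -sigma * (lfp[:-2] - 2.0 * lfp[1:-1] + lfp[2:]) / dz**2
```

A purely linear depth profile carries no sources or sinks, so its CSD is (numerically) zero; sinks appear as negative and sources as positive deflections under this sign convention.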
Simultaneous Mass Determination for Gravitationally Coupled Asteroids
NASA Astrophysics Data System (ADS)
Baer, James; Chesley, Steven R.
2017-08-01
The conventional least-squares asteroid mass determination algorithm allows us to solve for the mass of a large subject asteroid that is perturbing the trajectory of a smaller test asteroid. However, this algorithm is necessarily a first approximation, ignoring the possibility that the subject asteroid may itself be perturbed by the test asteroid, or that the encounter’s precise geometry may be entangled with encounters involving other asteroids. After reviewing the conventional algorithm, we use it to calculate the masses of 30 main-belt asteroids. Compared to our previous results, we find new mass estimates for eight asteroids (11 Parthenope, 27 Euterpe, 51 Neimausa, 76 Freia, 121 Hermione, 324 Bamberga, 476 Hedwig, and 532 Herculina) and significantly more precise estimates for six others (2 Pallas, 3 Juno, 4 Vesta, 9 Metis, 16 Psyche, and 88 Thisbe). However, we also find that the conventional algorithm yields questionable results in several gravitationally coupled cases. To address such cases, we describe a new algorithm that allows the epoch state vectors of the subject asteroids to be included as solve-for parameters, allowing for the simultaneous solution of the masses and epoch state vectors of multiple subject and test asteroids. We then apply this algorithm to the same 30 main-belt asteroids and conclude that mass determinations resulting from current and future high-precision astrometric sources (such as Gaia) should conduct a thorough search for possible gravitational couplings and account for their effects.
Wang, Dan; Silkie, Sarah S; Nelson, Kara L; Wuertz, Stefan
2010-09-01
Cultivation- and library-independent, quantitative PCR-based methods have become the method of choice in microbial source tracking. However, these qPCR assays are neither 100% specific nor 100% sensitive for the target sequence in their respective hosts' genomes. The factors that can lead to false-positive and false-negative information in qPCR results are well defined, so it is highly desirable to have a way of removing such false information to estimate the true concentration of host-specific genetic markers and to help guide the interpretation of environmental monitoring studies. Here we propose a statistical model based on the Law of Total Probability to predict the true concentration of these markers. The distributions of the probabilities of obtaining false information are estimated from representative fecal samples of known origin, and measurement error is derived from the sample precision error of replicated qPCR reactions. The Monte Carlo method is then applied to sample from these distributions of probabilities and measurement error. The set of equations given by the Law of Total Probability allows one to calculate the distribution of true concentrations, from which the expected value, confidence interval, and other statistical characteristics can be easily evaluated. The output distributions of predicted true concentrations can then be used as input to watershed-wide total maximum daily load determinations, quantitative microbial risk assessment, and other environmental models. This model was validated by both statistical simulations and real-world samples. It was able to correct the intrinsic false information associated with qPCR assays and output the distribution of true concentrations of Bacteroidales for each animal host group. Model performance was strongly affected by the precision error: the model performed reliably and precisely when the standard deviation of the precision error was small (≤ 0.1). Further improvements in the precision of sample processing and qPCR reactions would greatly improve the performance of the model. This methodology, built upon Bacteroidales assays, is readily transferable to any other microbial source indicator for which a universal assay for fecal sources of that indicator exists. Copyright © 2010 Elsevier Ltd. All rights reserved.
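A heavily simplified sketch of the Monte Carlo / law-of-total-probability idea: draw assay error rates and measurement error from their estimated distributions, invert for the true marker concentration, and summarize the resulting distribution. All distribution parameters below are invented placeholders, and the one-line inversion stands in for the study's fuller set of equations:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

obs_log10 = 3.0      # observed log10 marker concentration (hypothetical)
precision_sd = 0.1   # replicate-qPCR precision error in log10 units (<= 0.1, per the text)

# Error-rate distributions, as if estimated from reference fecal samples (invented shapes)
sensitivity = rng.beta(80, 5, n)    # P(detect | marker present)
false_pos   = rng.beta(2, 60, n)    # P(detect | marker absent)

# Propagate measurement error, then invert the (simplified) total-probability relation
#   measured ~ sensitivity * true + false_pos * background
# with background folded into the false-positive term.
measured = 10.0 ** rng.normal(obs_log10, precision_sd, n)
true_conc = measured * (1.0 - false_pos) / sensitivity

lo, med, hi = np.percentile(true_conc, [2.5, 50.0, 97.5])
```

The point of the exercise is that the output is a full distribution of plausible true concentrations, suitable as input to downstream risk or loading models, rather than a single corrected number.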
Optimal Measurement Conditions for Spatiotemporal EEG/MEG Source Analysis.
ERIC Educational Resources Information Center
Huizenga, Hilde M.; Heslenfeld, Dirk J.; Molenaar, Peter C. M.
2002-01-01
Developed a method to determine the required number and position of sensors for human brain electromagnetic source analysis. Studied the method through a simulation study and an empirical study on visual evoked potentials in one adult male. Results indicate the method is fast and reliable and improves source precision. (SLD)
Sources of the Medical Vocabulary.
ERIC Educational Resources Information Center
Butler, Roy F.
1980-01-01
In an attempt to determine as precisely as possible just how much of medical vocabulary is derived from every source, the vocabulary defined in the 24th edition of "Dorland's Illustrated Medical Dictionary" was analyzed. Results indicate that medical vocabulary is relying increasingly upon the Greek and Latin languages as the sources of…
Proceedings of the Fourth Precise Time and Time Interval Planning Meeting
NASA Technical Reports Server (NTRS)
Acrivos, H. N. (Compiler); Wardrip, S. C. (Compiler)
1972-01-01
The proceedings of a conference on Precise Time and Time Interval Planning are presented. The subjects discussed include the following: (1) satellite timing techniques, precision frequency sources, and very long baseline interferometry, (2) frequency stabilities and communications, and (3) very low frequency and ultrahigh frequency propagation and use. Emphasis is placed on the accuracy of time discrimination obtained with time measuring equipment and specific applications of time measurement to military operations and civilian research projects.
The prospects of pulsar timing with new-generation radio telescopes and the Square Kilometre Array.
Stappers, B W; Keane, E F; Kramer, M; Possenti, A; Stairs, I H
2018-05-28
Pulsars are highly magnetized and rapidly rotating neutron stars. As they spin, the lighthouse-like beam of radio emission from their magnetic poles sweeps across the Earth with a regularity approaching that of the most precise clocks known. This precision, combined with the extreme environments in which they are found, often in compact orbits with other neutron stars and white dwarfs, makes them excellent tools for studying gravity. Present and near-future pulsar surveys, especially those using the new generation of telescopes, will find more extreme binary systems and pulsars that are more precise 'clocks'. These telescopes will also greatly improve the precision to which we can measure the arrival times of the pulses. The Square Kilometre Array will revolutionize pulsar searches and timing precision. The increased number of sources will reveal rare systems, including possibly a pulsar-black hole binary, which can provide the most stringent tests of strong-field gravity. The improved timing precision will reveal new phenomena and also allow us to make a detection of gravitational waves in the nanohertz frequency regime. It is here that we expect to see the signature of the binary black holes that are formed as galaxies merge throughout cosmological history. This article is part of a discussion meeting issue 'The promises of gravitational-wave astronomy'. © 2018 The Author(s).
Advanced control of neutral beam injected power in DIII-D
Pawley, Carl J.; Crowley, Brendan J.; Pace, David C.; ...
2017-03-23
In the DIII-D tokamak, one of the most powerful techniques for controlling density, temperature and plasma rotation is the use of eight independently modulated neutral beam sources with a total power of 20 MW. The rapid modulation requires a high degree of reproducibility and precise control of the ion source plasma and beam acceleration voltage. Recent changes have been made to the controls to provide a new capability to smoothly vary the beam current and beam voltage during a discharge, while maintaining the modulation capability. The ion source plasma inside the arc chamber is controlled through feedback from the Langmuir probes measuring plasma density near the extraction end. To provide the new capability, the plasma control system (PCS) has been enabled to change the Langmuir probe set point and the beam voltage set point in real time. When the PCS varies the Langmuir set point, the plasma density is directly controlled in the arc chamber, thus changing the beam current (perveance) and power going into the tokamak. Alternatively, the PCS can sweep the beam voltage set point by 20 kV or more and adjust the Langmuir probe setting to match, keeping the perveance constant and the beam divergence at a minimum. This changes the beam power and average neutral particle energy, which changes deposition in the tokamak plasma. The ion-separating magnetic field must accurately match the beam voltage to protect the beam line. To do this, the magnet current control accurately tracks the beam voltage set point. In conclusion, these new capabilities allow continuous in-shot variation of neutral beam ion energy to complement the existing modulation capability.
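The constant-perveance sweep described in the abstract follows the Child-Langmuir scaling I ∝ V^(3/2). A minimal sketch of the matching current setpoint, using hypothetical operating values (not actual DIII-D parameters):

```python
def beam_current_setpoint(perveance, voltage):
    """Beam current (A) that keeps the perveance P = I / V**1.5 constant,
    the Child-Langmuir scaling for a space-charge-limited ion source."""
    return perveance * voltage ** 1.5

# Hypothetical operating point: 60 A extracted at 80 kV.
P = 60.0 / 80e3 ** 1.5          # perveance in A / V**1.5

# Matching current setpoint after sweeping the voltage down by 20 kV,
# which preserves beam optics (minimum divergence):
I_at_60kV = beam_current_setpoint(P, 60e3)   # about 39 A
```

Holding P fixed while sweeping voltage is what lets the PCS change beam power and particle energy without spoiling the beam divergence.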
On-sky calibration performance of a monolithic Michelson interferometer filtered source
NASA Astrophysics Data System (ADS)
Ge, Jian; Ma, Bo; Powell, Scott; Varosi, Frank; Schofield, Sidney; Grieves, Nolan; Liu, Jian
2014-07-01
In the new era of searching for Earth-like planets, new-generation radial velocity (RV) high-resolution spectrographs require ~0.1 m/s Doppler calibration accuracy in the visible band and similar calibration precision in the near infrared. The patented stable monolithic Michelson interferometer filtered source, called the Sine source, has emerged as a very promising calibration device. The Sine source has the potential to cover the practical working wavelengths (~0.38-2.5 μm) for Doppler measurements with high-resolution optical and near-infrared spectrographs at ground-based telescopes. The single-frame calibration precision can reach <0.1 m/s for state-of-the-art spectrographs, and it can easily be designed to match the intrinsic sensitivities of future Doppler instruments. The Sine source also offers great practical advantages in its compact (portable) size and low cost. Here we report early results from on-sky calibration of a Sine source measured with two state-of-the-art instruments, the TOU optical high-resolution spectrograph (R=100,000, 0.38-0.9 microns) and the FIRST near-infrared spectrograph (R=50,000, 0.8-1.8 microns), at a 2-meter robotic telescope at Fairborn Observatory in Arizona. The results with the TOU spectrograph, monitored over seven days, show that the Sine source has produced ~3 times better calibration precision than the ThAr calibration (RMS = 2.7 m/s vs. 7.4 m/s) at 0.49-0.62 microns, where calibration data have been processed by our preliminary data pipeline, and ~1.4 times better than the iodine absorption spectra (RMS = 3.6 m/s) in the same wavelength region. As both ThAr and iodine have reached sub-m/s calibration accuracy with existing Doppler instruments (such as HARPS and HIRES), it is likely that the Sine source would provide similar improvement once a better data pipeline and an upgraded version of the Sine source are developed. Reaching ~0.1 m/s in the optical wavelength region appears entirely feasible.
In addition, the Sine source offers potentially very accurate calibration at 0.7-0.9 μm, where ThAr lines are dominated by strong, saturated argon lines and the ThAr calibration data are nearly useless. Early measurements with the FIRST near-infrared spectrograph show that the Sine source produces very homogeneous fringe modulations over 0.8-1.8 μm, which can potentially provide better precision than the UrNe lamp for instrument drift measurements.
Toward precision medicine in primary biliary cholangitis.
Carbone, Marco; Ronca, Vincenzo; Bruno, Savino; Invernizzi, Pietro; Mells, George F
2016-08-01
Primary biliary cholangitis is a chronic, cholestatic liver disease characterized by heterogeneous presentation, symptomatology, disease progression and response to therapy. In contrast, the clinical management and treatment of PBC is homogeneous, with a 'one size fits all' approach. The evolving research landscape, with the emergence of the -omics field and the availability of large patient cohorts, is creating a unique opportunity for translational epidemiology. Furthermore, several novel disease- and symptom-modifying agents for PBC are currently in development. The time is therefore ripe for precision medicine in PBC. In this manuscript, we describe the concept of precision medicine; review current approaches to risk stratification in PBC; and speculate how precision medicine in PBC might develop in the near future. Copyright © 2016 Editrice Gastroenterologica Italiana S.r.l. Published by Elsevier Ltd. All rights reserved.
Ion current as a precise measure of the loading rate of a magneto-optical trap
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, W.; Bailey, K.; Lu, Z. -T.
2014-01-01
We have demonstrated that the ion current resulting from collisions between metastable krypton atoms in a magneto-optical trap can be used to precisely measure the trap loading rate. We measured both the ion current of the abundant isotope Kr-83 (isotopic abundance = 11%) and the single-atom counting rate of the rare isotope Kr-85 (isotopic abundance ~ 1 x 10(-11)), and found the two quantities to be proportional at a precision level of 0.9%. This work results in a significant improvement in using the magneto-optical trap as an analytical tool for noble-gas isotope ratio measurements, and will benefit both atomic physics studies and applications in the earth sciences. (C) 2014 Optical Society of America
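A proportionality test of the kind quoted above (two quantities "proportional at a precision level of 0.9%") can be illustrated with a through-origin least-squares fit and an RMS relative-deviation figure. The paired readings below are purely illustrative numbers, not the Kr-83/Kr-85 data:

```python
def fit_through_origin(x, y):
    """Least-squares slope k for the proportional model y = k * x."""
    return sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)

def relative_scatter(x, y):
    """RMS relative deviation of y from the fitted proportional line;
    a simple figure of merit for 'proportional at the X% level'."""
    k = fit_through_origin(x, y)
    resid = [(b - k * a) / (k * a) for a, b in zip(x, y)]
    return (sum(r * r for r in resid) / len(resid)) ** 0.5

# Hypothetical paired readings: ion current (pA) vs. single-atom count rate (1/s).
ion_current = [10.0, 20.0, 30.0, 40.0]
count_rate = [5.1, 10.0, 14.9, 20.1]
scatter = relative_scatter(ion_current, count_rate)   # about 1%
```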
NASA Astrophysics Data System (ADS)
Jacobsen, Jurma; Edlich, Stefan
2009-02-01
There is a broad range of potentially useful mobile location-based applications. One crucial point is making them available to the public at large. This case illustrates the ability of Android, the operating system for mobile devices, to meet this demand in mashup fashion, using several geocoding web services and one integrated web service that supplies data on the nearest cash machines. It shows an exemplary approach for building mobile location-based mashups for everyone: 1. As a basis for reaching as many people as possible, the open-source Android OS is assumed to spread widely. 2. 'Everyone' also means that the handset need not be an expensive GPS device. This is achieved by re-using the existing GSM infrastructure with the Cell of Origin (COO) method, which looks up the CellID in one of the growing number of web-accessible CellID databases. Some of these databases are still undocumented and not yet published. In addition, the Google Maps API for Mobile (GMM) and the open-source counterpart OpenCellID are used. Localizing the user's current position by looking up the cell to which the handset is currently connected (COO) is not as precise as GPS, but appears sufficient for many applications. GPS users are best served, as for them the system is fully automated. Users without a GPS-capable handset instead refine their location with one click on the map inside the determined circular region. Users are then shown a path and guided to the nearest cash machine via a Google Maps API overlay. Additionally, GPS users can keep track of themselves through a frequently updated view based on continuously requested precise GPS position data.
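The "guide to the nearest cash machine" step reduces to a nearest-neighbor search under great-circle distance. A minimal sketch with hypothetical coordinates (not taken from the described service):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius, km
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * r * asin(sqrt(a))

def nearest(position, places):
    """Return the (name, lat, lon) tuple closest to the user's position."""
    return min(places, key=lambda p: haversine_km(position[0], position[1], p[1], p[2]))

# Hypothetical COO-derived position and two candidate cash machines.
pos = (52.5200, 13.4050)
atms = [("ATM A", 52.5205, 13.4120), ("ATM B", 52.5310, 13.3840)]
best = nearest(pos, atms)
```

The circular COO uncertainty region only shifts `pos`; the search itself is unchanged whether the fix comes from GPS or a CellID lookup.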
Gręda, Krzysztof; Jamróz, Piotr; Pohl, Paweł
2013-04-15
A low-power direct-current atmospheric glow discharge, sustained in the open air in contact with a small flowing liquid cathode, was used as an excitation source in optical emission spectrometry. The composition of the electrolyte solutions serving as the liquid cathode was modified by the addition of non-ionic surfactants, namely Triton x-45, Triton x-100, Triton x-405 and Triton x-705. The effect of the concentration of each surfactant on the emission characteristics of the molecular bands identified in the spectra, on the atomic emission lines of the 16 metals studied, and on the background level was thoroughly examined. It was found that the presence of either of the two heavier surfactants results in a significant increase in the net intensity of the analytical lines of the metals and a notable reduction in the intensity of the diatomic molecular bands and the background. Under conditions considered a compromise for all metals, selected figures of merit for this excitation source combined with optical emission spectrometry detection were determined. Limits of detection for all metals were within the range 0.0003-0.05 mg L(-1), precision was better than 6%, and calibration curves were linear over two orders of magnitude in concentration or more, e.g., for K, Li, Mg, Na and Rb. The discharge system with the surfactant-modified liquid cathode was applied to the determination of Ca, Cu, Fe, K, Mg, Mn, Na and Zn in selected environmental samples, i.e., waters, soils and spruce needles, with quite good precision and accuracy comparable to that of flame atomic absorption spectrometry (FAAS) and flame atomic emission spectrometry (FAES). Copyright © 2013 Elsevier B.V. All rights reserved.
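Figures of merit like those quoted above (detection limits, linear calibration range) rest on standard calibration-curve arithmetic. A minimal sketch using the common 3σ-of-blank definition of the detection limit, with hypothetical calibration data (not the paper's measurements):

```python
def linear_fit(x, y):
    """Ordinary least-squares slope and intercept for y = m * x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    m = (sum((a - mx) * (b - my) for a, b in zip(x, y))
         / sum((a - mx) ** 2 for a in x))
    return m, my - m * mx

def limit_of_detection(blank_sd, slope):
    """3-sigma detection limit: the concentration whose signal sits three
    blank standard deviations above the blank mean."""
    return 3.0 * blank_sd / slope

# Hypothetical calibration of one emission line: concentration (mg/L) vs. net intensity.
conc = [0.0, 0.5, 1.0, 2.0, 5.0]
signal = [2.0, 52.0, 103.0, 201.0, 502.0]
slope, intercept = linear_fit(conc, signal)
lod = limit_of_detection(blank_sd=0.4, slope=slope)   # in mg/L
```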
NASA Astrophysics Data System (ADS)
Chang, En-Chih
2018-02-01
This paper presents a high-performance AC power source for precision material machining (PMM) based on robust stability control technology. The proposed technology combines the benefits of a finite-time convergent sliding function (FTCSF) and the firefly optimization algorithm (FOA). The FTCSF retains the robustness of conventional sliding mode control while speeding up the convergence of the system state. Unfortunately, when a highly nonlinear load is applied, chatter occurs. Chatter produces high total harmonic distortion (THD) in the output voltage of the AC power source and can even degrade the stability of the PMM. The FOA is therefore used to remove the chatter, while the FTCSF preserves finite system-state convergence time. By combining the FTCSF with the FOA, the AC power source of the PMM yields good steady-state and transient performance. Experimental results are presented in support of the proposed technology.
Colman, John A.; Nogueira, Jacob I.; Pancorbo, Oscar C.; Batdorf, Carol A.; Block, Barbara A.
2015-01-01
Pacific bluefin tuna (Thunnus orientalis) have the largest home range of any tuna species and are well known for the capacity to make transoceanic migrations. We report the measurement of mercury (Hg) concentrations in wild Pacific bluefin tuna (PBFT), the first reported with known size-of-fish and capture location. The results indicate juvenile PBFT that are recently arrived in the California Current from the western Pacific Ocean have significantly higher Hg concentrations in white muscle (0.51 μg/g wet mass, wm) than PBFT of longer California Current residency (0.41 μg/g wm). These new arrivals are also higher in Hg concentration than PBFT in farm pens (0.43 μg/g wm) that were captured on arrival in the California Current and raised in pens on locally derived feed. Analysis by direct Hg analyzer and attention to Hg by tissue type and location on the fish allowed precise comparisons of mercury among wild and captive fish populations. Analysis of migration and nearshore residency, determined through extensive archival tagging, bioaccumulation models, trophic investigations, and potential coastal sources of methylmercury, indicates Hg bioaccumulation is likely greater for PBFT juvenile habitats in the western Pacific Ocean (East China Sea, Yellow Sea) than in the eastern Pacific Ocean (California Current). Differential bioaccumulation may be a trophic effect or reflect methylmercury availability, with potential sources for coastal China (large hypoxic continental shelf receiving discharge of three large rivers, and island-arc volcanism) different from those for coastal Baja California (small continental shelf, no large rivers, spreading-center volcanism).
[Progress in precision medicine: a scientific perspective].
Wang, B; Li, L M
2017-01-10
Precision medicine is a new strategy for disease prevention and treatment that takes into account individual differences in genetics, environment and lifestyle, enabling precise disease classification and diagnosis and providing patients with personalized, targeted prevention and treatment. Large-scale population cohort studies are fundamental to precision medicine research and can produce the best evidence for precision medicine practice. Current criticisms of precision medicine mainly focus on the very small proportion of patients who benefit, the neglect of social determinants of health, and the possible waste of limited medical resources. In spite of this, precision medicine remains one of the most promising research areas and is likely to become a model of health care practice in the future.
Sub-microradian Surface Slope Metrology with the ALS Developmental Long Trace Profiler
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yashchuk, Valeriy V.; Barber, Samuel; Domning, Edward E.
2009-06-15
Development of X-ray optics for 3rd and 4th generation X-ray light sources with a level of surface slope precision of 0.1-0.2 µrad requires the development of adequate fabrication technologies and dedicated metrology instrumentation and methods. Currently, the best performance of surface slope measurement has been achieved with the NOM (Nanometer Optical Component Measuring Machine) slope profiler at BESSY (Germany) [1] and the ESAD (Extended Shear Angle Difference) profiler at the PTB (Germany) [2]. Both instruments are based on electronic autocollimators (AC) precisely calibrated for the specific application [3] with small apertures of 2.5-5 mm in diameter. In the present work, we describe the design, initial alignment and calibration procedures, the instrumental control and data acquisition system, as well as the measurement performance of the Developmental Long Trace Profiler (DLTP) slope measuring instrument recently brought into operation at the Advanced Light Source (ALS) Optical Metrology Laboratory (OML). Similar to the NOM and ESAD, the DLTP is based on a precisely calibrated autocollimator. However, it is a reasonably low-budget instrument used at the ALS OML for the development and testing of new measuring techniques and methods. Some of the developed methods have been implemented in the ALS LTP-II (slope measuring long trace profiler [4]), which was recently upgraded and has demonstrated a capability for 0.25 µrad surface metrology [5]. Performance of the DLTP was verified via a number of measurements with high quality reference mirrors. A comparison with the corresponding results obtained with the world's best slope measuring instrument, the BESSY NOM, proves the accuracy of the DLTP measurements at the level of 0.1-0.2 µrad, depending on the curvature of the surface under test. The directions of future work to develop a surface slope measuring profiler with nano-radian performance are also discussed.
Mohan, Shalini V; Chang, Anne Lynn S
2014-06-01
Precision medicine and precision therapeutics is currently in its infancy with tremendous potential to improve patient care by better identifying individuals at risk for skin cancer and predict tumor responses to treatment. This review focuses on the Hedgehog signaling pathway, its critical role in the pathogenesis of basal cell carcinoma, and the emergence of targeted treatments for advanced basal cell carcinoma. Opportunities to utilize precision medicine are outlined, such as molecular profiling to predict basal cell carcinoma response to targeted therapy and to inform therapeutic decisions.
NASA Technical Reports Server (NTRS)
Kuzin, Alexander V.; Holmes, Michael L.; Behrouzjou, Roxana; Trumper, David L.
1994-01-01
The results of an analysis of the disturbance attenuation achievable for obtaining Angstrom-level motion-control resolution together with macroscopic travel in a precision magnetically suspended motion control system are presented in this paper. Noise sources in the transducers and electronics, along with mechanical vibrations, are used to develop the control design.
Smith, Kate E; Shafer, Martin M; Weiss, Debora; Anderson, Henry A; Gorski, Patrick R
2017-05-01
Exposure to the neurotoxic element lead (Pb) continues to be a major human health concern, particularly for children in US urban settings, and the need for robust tools for assessment of exposure sources has never been greater. The latest generation of multicollector inductively coupled plasma mass spectrometry (MC-ICPMS) instrumentation offers the capability of using Pb isotopic signatures as a tool for environmental source tracking in public health. We present a case where MC-ICPMS was applied to isotopically resolve Pb sources in human clinical samples. An adult male and his child residing in Milwaukee, Wisconsin, presented to care in August 2015 with elevated blood lead levels (BLLs) (>200 μg/dL for the adult and 10 μg/dL for the child). The adult subject is a gunshot victim who had multiple bullet fragments embedded in soft tissue of his thigh for approximately 10 years. This study compared the high-precision isotopic fingerprints (<1 ‰ 2σ external precision) of Pb in the adult's and child's whole blood (WB) to the following possible Pb sources: a surgically extracted bullet fragment, household paint samples and tap water, and a Pb water-distribution pipe removed from servicing a house in the same neighborhood. Pb in the bullet and adult WB were nearly isotopically indistinguishable (matching within 0.05-0.56 ‰), indicating that bullet fragments embedded in soft tissue could be the cause of both acute and chronic elevated blood Pb levels. Among other sources investigated, no single source dominated the child's exposure profile as reflected in the elevated BLL.
The tracking analysis in the Q-weak experiment
Pan, J.; Androic, D.; Armstrong, D. S.; ...
2016-11-21
Here, the Q-weak experiment at Jefferson Laboratory measured the parity-violating asymmetry (A_PV) in elastic electron-proton scattering at small momentum transfer squared (Q^2 = 0.025 (GeV/c)^2), with the aim of extracting the proton's weak charge (Q_W^p) to an accuracy of 5%. As one of the major sources of uncertainty in Q_W^p, Q^2 needs to be determined to ~1% so as to reach the proposed experimental precision. For this purpose, two sets of high-resolution tracking chambers were employed in the experiment to measure tracks before and after the magnetic spectrometer. Data collected by the tracking system were then reconstructed with dedicated software into individual electron trajectories for experimental kinematics determination. The Q-weak kinematics and the analysis scheme for tracking data are briefly described here. The sources that contribute to the uncertainty of Q^2 are discussed, and the current analysis status is reported.
Data mining: childhood injury control and beyond.
Tepas, Joseph J
2009-08-01
Data mining is defined as the automatic extraction of useful, often previously unknown information from large databases or data sets. It has become a major part of modern life and is extensively used in industry, banking, government, and health care delivery. The process requires a data collection system that integrates input from multiple sources containing critical elements that define outcomes of interest. Appropriately designed data mining processes identify and adjust for confounding variables. The statistical modeling used to manipulate accumulated data may involve any number of techniques. As predicted results are periodically analyzed against those observed, the model is consistently refined to optimize precision and accuracy. Whether applying integrated sources of clinical data to inferential probabilistic prediction of risk of ventilator-associated pneumonia or population surveillance for signs of bioterrorism, it is essential that modern health care providers have at least a rudimentary understanding of what the concept means, how it basically works, and what it means to current and future health care.
Determination of the excess noise of avalanche photodiodes integrated in 0.35-μm CMOS technologies
NASA Astrophysics Data System (ADS)
Jukić, Tomislav; Brandl, Paul; Zimmermann, Horst
2018-04-01
The excess noise of avalanche photodiodes (APDs) integrated in a high-voltage (HV) CMOS process and in a pin-photodiode CMOS process, both with 0.35-μm structure sizes, is described. A precise excess noise measurement technique is applied using a laser source, a spectrum analyzer, a voltage source, a current meter, a cheap transimpedance amplifier, and a personal computer with a MATLAB program. In addition, usage for on-wafer measurements is demonstrated. The measurement technique is verified with a low excess noise APD as a reference device with known ratio k = 0.01 of the impact ionization coefficients. The k-factor of an APD developed in HV CMOS is determined more accurately than known before. In addition, it is shown that the excess noise of the pin-photodiode CMOS APD depends on the optical power for avalanche gains above 35 and that modulation doping can suppress this power dependence. Modulation doping, however, increases the excess noise.
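The k-factor determined in the abstract enters the excess-noise factor through McIntyre's local-field model, the standard relation for avalanche photodiodes (quoted here as general background, not a device-specific result), with k the ratio of the impact-ionization coefficients and M the mean avalanche gain:

```latex
F(M) \;=\; k\,M \;+\; \left(2 - \frac{1}{M}\right)(1 - k)
```

For the reference device's k = 0.01 at a gain of M = 35, this gives F ≈ 0.35 + (2 − 1/35)(0.99) ≈ 2.3, which is why low-k APDs can run at high gain with modest noise penalty.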
Detecting gravity waves from binary black holes
NASA Technical Reports Server (NTRS)
Wahlquist, Hugo D.
1989-01-01
One of the most attractive possible sources of strong gravitational waves would be a binary system comprising massive black holes (BH). The gravitational radiation from a binary is an elliptically polarized, periodic wave which could be observed continuously - or at intervals whenever a detector was available. This continuity of the signal is certainly appealing compared to waiting for individual pulses from infrequent random events. It also has the advantage over pulses that continued observation can increase the signal-to-noise ratio almost indefinitely. Furthermore, this system is dynamically simple; the theory of the generation of the radiation is unambiguous; all characteristics of the signal can be precisely related to the dynamical parameters of the source. The current situation is that while there is no observational evidence as yet for the existence of massive binary BH, their formation is theoretically plausible, and within certain coupled constraints of mass and location, their existence cannot be observationally excluded. Detecting gravitational waves from these objects might be the first observational proof of their existence.
Contemporary State of the Elbrus Volcanic Center (The Northern Caucasus)
NASA Astrophysics Data System (ADS)
Milyukov, Vadim; Rogozhin, Eugeny; Gorbatikov, Andrey; Mironov, Alexey; Myasnikov, Andrey; Stepanova, Marina
2018-05-01
The Elbrus volcanic center is located in southern Russia on the northern slope of the main ridge of the Greater Caucasus. Current classifications define Elbrus as a dormant volcano that could become active even after millennia of quiescence. In this study, we use two new geophysical methods to assess the contemporary state of the Elbrus volcano. The first method is based on an evaluation of parameters of resonant modes "reemitted" by the resonant structure (i.e., volcanic chamber) in response to the excitation of a seismic impact and recorded by a precise laser interferometer-strainmeter. The second method is based on low-frequency microseismic sounding and allows determination of the deep structure of complicated geological objects. Our study locates the magma chamber at depths of 1-8 km and extended magma source at depths of 15-40 km beneath the Elbrus eastern summit. An unknown magmatic structure, comparable to the Elbrus magmatic structure but currently much colder, was also identified 50 km from Mt. Elbrus. Based on our analysis, we assess the Elbrus volcano to be currently in a quasi-stable state of thermodynamic equilibrium.
Quantum Theory of Superresolution for Incoherent Optical Imaging
NASA Astrophysics Data System (ADS)
Tsang, Mankei
Rayleigh's criterion for resolving two incoherent point sources has been the most influential measure of optical imaging resolution for over a century. In the context of statistical image processing, violation of the criterion is especially detrimental to the estimation of the separation between the sources, and modern far-field superresolution techniques rely on suppressing the emission of close sources to enhance the localization precision. Using quantum optics, quantum metrology, and statistical analysis, here we show that, even if two close incoherent sources emit simultaneously, measurements with linear optics and photon counting can estimate their separation from the far field almost as precisely as conventional methods do for isolated sources, rendering Rayleigh's criterion irrelevant to the problem. Our results demonstrate that superresolution can be achieved not only for fluorophores but also for stars. Recent progress in generalizing our theory for multiple sources and spectroscopy will also be discussed. This work is supported by the Singapore National Research Foundation under NRF Grant No. NRF-NRFF2011-07 and the Singapore Ministry of Education Academic Research Fund Tier 1 Project R-263-000-C06-112.
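The claim that separation can be estimated "almost as precisely as for isolated sources" has a compact estimation-theoretic statement. For two equally bright incoherent points separated by d, imaged with a Gaussian point-spread function of r.m.s. width σ, the published Gaussian-PSF analysis (sketched here under those assumptions) gives, per detected photon,

```latex
\mathcal{K}(d) \;=\; \frac{1}{4\sigma^{2}} \quad \text{(quantum Fisher information, independent of } d\text{)},
\qquad
J_{\mathrm{direct}}(d) \;\xrightarrow[\;d \to 0\;]{}\; 0 ,
```

so direct imaging suffers "Rayleigh's curse" as the sources merge, while the quantum Cramér–Rao bound on d from N photons, δd ≥ 2σ/√N, does not depend on the separation at all.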
Simmons, Michael; Singhal, Ayush; Lu, Zhiyong
2018-01-01
The key question of precision medicine is whether it is possible to find clinically actionable granularity in diagnosing disease and classifying patient risk. The advent of next generation sequencing and the widespread adoption of electronic health records (EHRs) have provided clinicians and researchers a wealth of data and made possible the precise characterization of individual patient genotypes and phenotypes. Unstructured text — found in biomedical publications and clinical notes — is an important component of genotype and phenotype knowledge. Publications in the biomedical literature provide essential information for interpreting genetic data. Likewise, clinical notes contain the richest source of phenotype information in EHRs. Text mining can render these texts computationally accessible and support information extraction and hypothesis generation. This chapter reviews the mechanics of text mining in precision medicine and discusses several specific use cases, including database curation for personalized cancer medicine, patient outcome prediction from EHR-derived cohorts, and pharmacogenomic research. Taken as a whole, these use cases demonstrate how text mining enables effective utilization of existing knowledge sources and thus promotes increased value for patients and healthcare systems. Text mining is an indispensable tool for translating genotype-phenotype data into effective clinical care that will undoubtedly play an important role in the eventual realization of precision medicine. PMID:27807747
Simmons, Michael; Singhal, Ayush; Lu, Zhiyong
2016-01-01
The key question of precision medicine is whether it is possible to find clinically actionable granularity in diagnosing disease and classifying patient risk. The advent of next-generation sequencing and the widespread adoption of electronic health records (EHRs) have provided clinicians and researchers a wealth of data and made possible the precise characterization of individual patient genotypes and phenotypes. Unstructured text-found in biomedical publications and clinical notes-is an important component of genotype and phenotype knowledge. Publications in the biomedical literature provide essential information for interpreting genetic data. Likewise, clinical notes contain the richest source of phenotype information in EHRs. Text mining can render these texts computationally accessible and support information extraction and hypothesis generation. This chapter reviews the mechanics of text mining in precision medicine and discusses several specific use cases, including database curation for personalized cancer medicine, patient outcome prediction from EHR-derived cohorts, and pharmacogenomic research. Taken as a whole, these use cases demonstrate how text mining enables effective utilization of existing knowledge sources and thus promotes increased value for patients and healthcare systems. Text mining is an indispensable tool for translating genotype-phenotype data into effective clinical care that will undoubtedly play an important role in the eventual realization of precision medicine.
Chen, Guoli; Yang, Zhaohai; Eshleman, James R; Netto, George J; Lin, Ming-Tseh
2016-01-01
Precision medicine, a concept that has recently emerged and been widely discussed, emphasizes tailoring medical care to individuals largely on the basis of information acquired from molecular diagnostic testing. As a vital aspect of precision cancer medicine, targeted therapy has been proven efficacious and less toxic for cancer treatment. Colorectal cancer (CRC) is one of the most common cancers and among the leading causes of cancer-related death in the United States and worldwide. So far, CRC has been one of the most successful examples in the field of precision cancer medicine, applying molecular tests to guide targeted therapy. In this review, we summarize the current guidelines for anti-EGFR therapy, revisit the roles of pathologists in the era of precision cancer medicine, demonstrate the transition from traditional "one test-one drug" assays to multiplex assays, especially using next-generation sequencing platforms in clinical diagnostic laboratories, and discuss future perspectives on the tumor heterogeneity associated with anti-EGFR resistance and on immune checkpoint blockade therapy in CRC.
Prospects for Precision Neutrino Cross Section Measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, Deborah A.
2016-01-28
The need for precision cross section measurements is more urgent now than ever before, given the central role neutrino oscillation measurements play in the field of particle physics. The definition of precision is something worth considering, however. In order to build the best model for an oscillation experiment, cross section measurements should span a broad range of energies, neutrino interaction channels, and target nuclei. Precision might better be defined not in the final uncertainty associated with any one measurement but rather in the breadth of measurements that are available to constrain models. Current experience shows that models are better constrained by 10 measurements across different processes and energies with 10% uncertainties than by one measurement of one process on one nucleus with a 1% uncertainty. This article describes the current status of and future prospects for the field of precision cross section measurements, considering the metric of how many processes, energies, and nuclei have been studied.
Correction of Spatial Bias in Oligonucleotide Array Data
Lemieux, Sébastien
2013-01-01
Background. Oligonucleotide microarrays allow for high-throughput gene expression profiling assays. The technology relies on the fundamental assumption that observed hybridization signal intensities (HSIs) for each intended target, on average, correlate with their target's true concentration in the sample. However, systematic, nonbiological variation from several sources undermines this hypothesis. Background hybridization signal has been previously identified as one such important source, one manifestation of which appears in the form of spatial autocorrelation. Results. We propose an algorithm, pyn, for the elimination of spatial autocorrelation in HSIs, exploiting the duality of desirable mutual information shared by probes in a common probe set and undesirable mutual information shared by spatially proximate probes. We show that this correction procedure reduces spatial autocorrelation in HSIs; increases HSI reproducibility across replicate arrays; increases differentially expressed gene detection power; and performs better than previously published methods. Conclusions. The proposed algorithm increases both precision and accuracy, while requiring virtually no changes to users' current analysis pipelines: the correction consists merely of a transformation of raw HSIs (e.g., CEL files for Affymetrix arrays). A free, open-source implementation is provided as an R package, compatible with standard Bioconductor tools. The approach may also be tailored to other platform types and other sources of bias. PMID:23573083
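The correction described above is a transformation of raw hybridization signal intensities that removes the spatially autocorrelated background while keeping per-probe signal. The `pyn` algorithm itself is not reproduced here; the following is a minimal illustrative sketch, in which a smooth additive spatial trend stands in for the background bias and is removed by subtracting row and column means from synthetic log-intensities:

```python
import numpy as np

def neighbor_corr(img):
    """Pearson correlation between horizontally adjacent values,
    a crude proxy for spatial autocorrelation."""
    return np.corrcoef(img[:, :-1].ravel(), img[:, 1:].ravel())[0, 1]

def remove_spatial_trend(log_hsi):
    """Subtract row and column means (a smooth additive spatial background)
    from the log-intensities, keeping the per-probe residual signal."""
    grand = log_hsi.mean()
    row = log_hsi.mean(axis=1, keepdims=True)
    col = log_hsi.mean(axis=0, keepdims=True)
    return log_hsi - row - col + grand

# Synthetic chip: a smooth spatial gradient (bias) plus independent probe signal
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:200, 0:200]
bias = 0.01 * (xx + yy)                   # spatially autocorrelated background
raw = bias + rng.normal(size=(200, 200))  # observed log-intensities

corrected = remove_spatial_trend(raw)
print(neighbor_corr(raw), neighbor_corr(corrected))  # autocorrelation drops toward zero
```

The detrending used here is only a stand-in: the published method exploits probe-set structure, which a simple additive model does not capture.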
Johnson, Raymond H.; DeWitt, Ed; Wirt, Laurie; Arnold, L. Rick; Horton, John D.
2011-01-01
The National Park Service (NPS) seeks additional information to better understand the source(s) of groundwater and associated groundwater flow paths to Montezuma Well in Montezuma Castle National Monument, central Arizona. The source of water to Montezuma Well, a flowing sinkhole in a desert setting, is poorly understood. Water emerges from the middle limestone facies of the lacustrine Verde Formation, but the precise origin of the water and its travel path are largely unknown. Some have proposed artesian flow to Montezuma Well through the Supai Formation, which is exposed along the eastern margin of the Verde Valley and underlies the Verde Formation. The groundwater recharge zone likely lies above the floor of the Verde Valley somewhere to the north or east of Montezuma Well, where precipitation is more abundant. Additional data from groundwater, surface water, and bedrock geology are required for Montezuma Well and the surrounding region to test the current conceptual ideas, to provide new details on the groundwater flow in the area, and to assist in future management decisions. The results of this research will provide information for long-term water resource management and the protection of water rights.
Simultaneous EEG and MEG source reconstruction in sparse electromagnetic source imaging.
Ding, Lei; Yuan, Han
2013-04-01
Electroencephalography (EEG) and magnetoencephalography (MEG) have different sensitivities to differently configured brain activations, making them complementary in providing independent information for better detection and inverse reconstruction of brain sources. In the present study, we developed an integrative approach, which integrates a novel sparse electromagnetic source imaging method, i.e., variation-based cortical current density (VB-SCCD), together with the combined use of EEG and MEG data in reconstructing complex brain activity. To perform simultaneous analysis of multimodal data, we proposed to normalize EEG and MEG signals according to their individual noise levels to create unit-free measures. Our Monte Carlo simulations demonstrated that this integrative approach is capable of reconstructing complex cortical brain activations (up to 10 simultaneously activated and randomly located sources). Results from experimental data showed that complex brain activations evoked in a face recognition task were successfully reconstructed using the integrative approach, which were consistent with other research findings and validated by independent data from functional magnetic resonance imaging using the same stimulus protocol. Reconstructed cortical brain activations from both simulations and experimental data provided precise source localizations as well as accurate spatial extents of localized sources. In comparison with studies using EEG or MEG alone, the performance of cortical source reconstructions using combined EEG and MEG was significantly improved. We demonstrated that this new sparse ESI methodology with integrated analysis of EEG and MEG data could accurately probe spatiotemporal processes of complex human brain activations. This is promising for noninvasively studying large-scale brain networks of high clinical and scientific significance. Copyright © 2011 Wiley Periodicals, Inc.
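The noise normalization described above, scaling each modality by its own noise level so that EEG (volts) and MEG (teslas) become unit-free and can be stacked in one inverse problem, can be sketched as follows. The channel counts, baseline lengths, and noise model are illustrative assumptions, not the study's actual data:

```python
import numpy as np

def noise_normalize(data, baseline):
    """Scale each sensor channel by its noise standard deviation, estimated
    from a baseline (pre-stimulus) window, yielding unit-free measurements."""
    noise_sd = baseline.std(axis=1, ddof=1)   # per-channel noise estimate
    return data / noise_sd[:, None]

rng = np.random.default_rng(1)
# Hypothetical sensor arrays: 32 EEG channels (~1e-6 V), 100 MEG channels (~1e-13 T)
eeg = 1e-6 * rng.normal(size=(32, 500))
eeg_base = 1e-6 * rng.normal(size=(32, 200))
meg = 1e-13 * rng.normal(size=(100, 500))
meg_base = 1e-13 * rng.normal(size=(100, 200))

# After normalization, both modalities live on a common unit-free scale
# and can be stacked into a single measurement matrix for the inverse solver.
combined = np.vstack([noise_normalize(eeg, eeg_base),
                      noise_normalize(meg, meg_base)])
print(combined.shape)  # (132, 500)
```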
Precision time distribution within a deep space communications complex
NASA Technical Reports Server (NTRS)
Curtright, J. B.
1972-01-01
The Precision Time Distribution System (PTDS) at the Goldstone Deep Space Communications Complex is a practical application of existing technology to the solution of a local problem. The problem was to synchronize four station timing systems to a master source with a relative accuracy consistently and significantly better than 10 microseconds. The solution involved combining a precision timing source, an automatic error detection assembly and a microwave distribution network into an operational system. Upon activation of the completed PTDS two years ago, synchronization accuracy at Goldstone (two-station relative) was improved by an order of magnitude. The validation of the PTDS mechanization is now considered complete. Other facilities which have site dispersion and synchronization accuracy requirements similar to Goldstone's may find the PTDS mechanization useful in solving their problem. At present, the two-station relative synchronization accuracy at Goldstone is better than one microsecond.
Calibration of a DSSSD detector with radioactive sources
NASA Astrophysics Data System (ADS)
Guadilla, V.; Taín, J. L.; Agramunt, J.; Algora, A.; Domingo-Pardo, C.; Rubio, B.
2013-06-01
The energy calibration of a DSSSD is carried out with the spectra produced by a 207Bi conversion-electron source, a 137Cs gamma source and a 239Pu/241Am/244Cm triple-alpha source, as well as by employing a precision pulse generator over the whole dynamic range. Multiplicity and coincidence of signals in different strips for the same event are also studied.
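A calibration of this kind typically reduces, per strip, to a linear fit of known source energies against fitted peak centroids. In the sketch below the channel centroids are hypothetical; the alpha energies are the commonly tabulated principal lines of the triple-alpha source:

```python
import numpy as np

# Principal alpha energies (keV) of the 239Pu / 241Am / 244Cm triple-alpha source
energies = np.array([5156.6, 5485.6, 5804.8])
# Hypothetical fitted peak centroids (ADC channels) from one strip's spectrum
channels = np.array([2581.4, 2745.9, 2905.6])

# Linear calibration E = gain * channel + offset, fitted per strip
gain, offset = np.polyfit(channels, energies, 1)

def channel_to_energy(ch):
    """Convert an ADC channel to energy (keV) with this strip's calibration."""
    return gain * ch + offset

residuals = energies - channel_to_energy(channels)
print(np.max(np.abs(residuals)) < 5.0)  # linear fit good to a few keV
```

A pulser sweep across the dynamic range, as in the abstract, would add further points to the same fit and check its linearity beyond the alpha region.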
Guided mass spectrum labelling in atom probe tomography.
Haley, D; Choi, P; Raabe, D
2015-12-01
Atom probe tomography (APT) is a valuable near-atomic scale imaging technique, which yields mass spectrographic data. Experimental correctness can often pivot on the identification of peaks within a dataset; this is a manual process in which subjectivity and errors can arise. The limitations of manual procedures complicate APT experiments for the operator and, furthermore, are a barrier to technique standardisation. In this work we explore the capabilities of computer-guided ranging to aid identification and analysis of mass spectra. We propose a fully robust algorithm for enumeration of the possible identities of detected peak positions, which assists labelling. Furthermore, a simple ranking scheme is developed to allow for evaluation of the likelihood of each possible identity being the correct assignment from the enumerated set. We demonstrate a simple, yet complete work-chain that allows for the conversion of mass spectra to fully identified APT spectra, with the goal of minimising identification errors and the inter-operator variance within APT experiments. This work-chain is compared to current procedures via experimental trials with different APT operators, to determine the relative effectiveness and precision of the two approaches. It is found that there is little loss of precision (and occasionally gain) when participants are given computer assistance. We find that in either case, inter-operator precision for ranging varies between 0 and 2 "significant figures" (2σ confidence in the first n digits of the reported value) when reporting compositions. Intra-operator precision is weakly tested and found to vary between 1 and 3 significant figures, depending upon species composition levels. Finally it is suggested that inconsistencies in inter-operator peak labelling may be the largest source of scatter when reporting composition data in APT. Copyright © 2015 Elsevier B.V. All rights reserved.
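The enumeration step (listing every isotope/charge-state combination whose mass-to-charge ratio is consistent with a detected peak position) can be sketched with a toy mass table. The table, charge states, and tolerance below are illustrative assumptions, not the authors' algorithm:

```python
from itertools import product

# Minimal isotope mass table (Da); a real tool would use a full isotope database
ISOTOPES = {"Al-27": 26.9815, "Cr-52": 51.9405, "Fe-56": 55.9349, "O-16": 15.9949}

def candidate_ids(peak_mz, charges=(1, 2, 3), tol=0.05):
    """Enumerate single-isotope identities whose mass/charge falls within
    `tol` Da of the observed peak, over the given charge states."""
    hits = []
    for (name, mass), z in product(ISOTOPES.items(), charges):
        if abs(mass / z - peak_mz) <= tol:
            hits.append((name, z))
    return hits

print(candidate_ids(27.97))  # Fe-56 at 2+ (55.9349 / 2 = 27.967)
```

The paper's ranking scheme would then score each candidate (e.g. by natural abundance and mass error); here the enumeration alone is shown.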
Schneider, Frank; Bludau, Frederic; Clausen, Sven; Fleckenstein, Jens; Obertacke, Udo; Wenz, Frederik
2017-05-01
To date, IORT has been eye- and hand-guided, without treatment planning or tissue heterogeneity correction. This limits the precision of the application and the precise documentation of the location and the deposited dose in the tissue. Here we present a set-up in which we use image guidance by intraoperative cone beam computed tomography (CBCT) for precise online Monte Carlo treatment planning, including tissue heterogeneity correction. An IORT was performed during balloon kyphoplasty using a dedicated Needle Applicator. An intraoperative CBCT was registered with a pre-op CT. Treatment planning was performed in Radiance using a hybrid Monte Carlo algorithm simulating dose in homogeneous (MCwater) and heterogeneous (MChet) media. Dose distributions on the CBCT and the pre-op CT were compared with each other. Spinal cord and metastasis doses were evaluated. The MCwater calculations showed a spherical dose distribution, as expected. The minimum target dose for the MChet simulations on the pre-op CT was increased by 40%, while the maximum spinal cord dose was decreased by 35%. Due to the artefacts on the CBCT, the comparison between MChet simulations on the CBCT and the pre-op CT showed differences of up to 50% in dose. igIORT and online treatment planning improve the accuracy of IORT. However, the current set-up is limited by CT artefacts. Fusing an intraoperative CBCT with a pre-op CT allows the combination of an accurate dose calculation with knowledge of the correct source/applicator position. This method can also be used for pre-operative treatment planning followed by image-guided surgery. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoegg, Edward D.; Marcus, R. Kenneth; Hager, George J.
RATIONALE: The field of highly accurate and precise isotope ratio (IR) analysis has been dominated by inductively coupled plasma and thermal ionization mass spectrometers. While these instruments are considered the gold standard for IR analysis, the International Atomic Energy Agency desires a field-deployable instrument capable of accurately and precisely measuring U isotope ratios. METHODS: The proposed system interfaces the liquid sampling – atmospheric pressure glow discharge (LS-APGD) ion source with a high-resolution Exactive Orbitrap mass spectrometer. With this experimental setup, certified U isotope standards and unknown samples were analyzed, and the accuracy and precision of the system were then determined. RESULTS: The LS-APGD/Exactive instrument measures a certified reference material of natural U (235U/238U = 0.007258) as 0.007041 with a relative standard deviation of 0.158%, meeting the International Target Values for Uncertainty for the destructive analysis of U. Additionally, when three unknowns were measured and compared to the results from an ICP multi-collector instrument, there was no statistical difference between the two instruments. CONCLUSIONS: The LS-APGD/Orbitrap system, while still in the preliminary stages of development, offers highly accurate and precise IR analysis that suggests a paradigm shift in the world of IR analysis. Furthermore, the portability of the LS-APGD as an elemental ion source, combined with the low overhead and small size of the Orbitrap, suggests that the instrumentation is capable of being field deployable. With liquid sampling glow discharge-Orbitrap MS, isotope ratio and precision performance improves with rejection of concomitant ion species.
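Figures of merit like those reported (a measured ratio against a certified 0.007258, with a percent relative standard deviation) follow from standard replicate statistics. A minimal sketch with hypothetical replicate ion intensities, not the paper's data:

```python
import numpy as np

# Hypothetical replicate ion-intensity measurements (arbitrary units)
i235 = np.array([7251.0, 7263.0, 7244.0, 7258.0, 7249.0])
i238 = np.array([999100.0, 999600.0, 998800.0, 999300.0, 998900.0])

ratios = i235 / i238                             # per-replicate 235U/238U ratio
ratio = ratios.mean()                            # reported isotope ratio
rsd_percent = 100 * ratios.std(ddof=1) / ratio   # relative standard deviation (%)

print(round(ratio, 6), round(rsd_percent, 3))
```

Real IR work adds corrections (dead time, mass bias against the certified standard) that this sketch omits.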
Samani, Mohsen Mosayebi; Mahnam, Amin; Hosseini, Nasrin
2014-04-01
Portable wireless neuro-stimulators have been developed to facilitate long-term cognitive and behavioral studies on the central nervous system in freely moving animals. These stimulators can provide precisely controllable input(s) to the nervous system without distracting the animal's attention with cables connected to its body. In this study, a low-power backpack neuro-stimulator was developed for animal brain research that can provide arbitrary stimulus waveforms, while being small and lightweight enough to be used with small animals, including rats. The system consists of a controller that uses an RF link to program and activate a small and light microprocessor-based stimulator. A Howland current source was implemented to produce precise current-controlled arbitrary-waveform stimulation. The system was optimized for ultra-low power consumption and small size. The stimulator was first tested for its electrical specifications. Then its performance was evaluated in a rat experiment in which electrical stimulation of the medial longitudinal fasciculus induced circling behavior. The stimulator is capable of delivering programmed stimulations up to ± 2 mA with adjustment steps of 1 μA, accuracy of 0.7% and compliance of 6 V. The stimulator is 15 mm × 20 mm × 40 mm in size, weighs 13.5 g without battery and consumes a total power of only 5.1 mW. In the experiment, the rat could easily carry the stimulator and demonstrated the circling behavior for 0.1 ms current pulses above 400 μA. The developed system has a competitive size and weight, while providing a wide range of operation and the flexibility of generating arbitrary stimulation patterns ideal for long-term experiments in the field of cognitive and neuroscience research.
Swarm: ESA's Magnetic Field Mission
NASA Astrophysics Data System (ADS)
Plank, G.; Floberghagen, R.; Menard, Y.; Haagmans, R.
2012-12-01
Swarm is the fifth Earth Explorer mission in ESA's Living Planet Programme, and is scheduled for launch in fall 2012. The objective of the Swarm mission is to provide the best-ever survey of the geomagnetic field and its temporal evolution using a constellation of three identical satellites. The mission shall deliver data that allow access to new insights into the Earth system by improved scientific understanding of the Earth's interior and near-Earth electromagnetic environment. After launch and triple satellite release at an initial altitude of about 490 km, a pair of the satellites will fly side-by-side with slowly decaying altitude, while the third satellite will be lifted to 530 km to complete the Swarm constellation. High-precision and high-resolution measurements of the strength, direction and variation of the magnetic field, complemented by precise navigation, accelerometer and electric field measurements, will provide the observations required to separate and model various sources of the geomagnetic field and near-Earth current systems. The mission science goals are to provide a unique view into Earth's core dynamics, mantle conductivity, crustal magnetisation, ionospheric and magnetospheric current systems and upper atmosphere dynamics - ranging from understanding the geodynamo to contributing to space weather. The scientific objectives and results from recent scientific studies will be presented. In addition the current status of the project, which is presently in the final stage of the development phase, will be addressed. A consortium of European scientific institutes is developing a distributed processing system to produce geophysical (Level 2) data products for the Swarm user community. The setup of the Swarm ground segment and the contents of the data products will be addressed. In case the Swarm satellites are already in orbit, a summary of the on-going mission operations activities will be given.
Laser Interferometry for Gravitational Wave Observation: LISA and LISA Pathfinder
NASA Technical Reports Server (NTRS)
Guzman, Felipe
2010-01-01
The Laser Interferometer Space Antenna (LISA) is a planned NASA-ESA gravitational wave observatory in the frequency range of 0.1 mHz-100 mHz. This observation band is inaccessible to ground-based detectors due to the large ground motions of the Earth. Gravitational wave sources for LISA include galactic binaries, mergers of supermassive black-hole binaries, extreme-mass-ratio inspirals, and possibly as-yet-unimagined sources. LISA is a constellation of three spacecraft separated by 5 million km in an equilateral triangle, whose center follows the Earth in a heliocentric orbit with an orbital phase offset of 20 degrees. Challenging technology is required to ensure pure geodetic trajectories of the six onboard test masses, whose distance fluctuations will be measured by inter-spacecraft laser interferometers with picometer accuracy. LISA Pathfinder is an ESA-launched technology demonstration mission of key LISA subsystems such as spacecraft control with micro-newton thrusters, test mass drag-free control, and precision laser interferometry between free-flying test masses. Ground testing of flight hardware of the Gravitational Reference Sensor and Optical Metrology subsystems of LISA Pathfinder is currently ongoing. An introduction to laser interferometric gravitational wave detection and ground-based observatories, a detailed description of the two missions, and an overview of current investigations conducted by the community will be discussed. The current status in development and implementation of LISA Pathfinder pre-flight systems and the latest results of the ongoing ground testing efforts will also be presented.
Demonstration of an ethane spectrometer for methane source identification.
Yacovitch, Tara I; Herndon, Scott C; Roscioli, Joseph R; Floerchinger, Cody; McGovern, Ryan M; Agnese, Michael; Pétron, Gabrielle; Kofler, Jonathan; Sweeney, Colm; Karion, Anna; Conley, Stephen A; Kort, Eric A; Nähle, Lars; Fischer, Marc; Hildebrandt, Lars; Koeth, Johannes; McManus, J Barry; Nelson, David D; Zahniser, Mark S; Kolb, Charles E
2014-07-15
Methane is an important greenhouse gas and tropospheric ozone precursor. Simultaneous observation of ethane with methane can help identify specific methane source types. Aerodyne Ethane-Mini spectrometers, employing recently available mid-infrared distributed feedback tunable diode lasers (DFB-TDL), provide 1 s ethane measurements with sub-ppb precision. In this work, an Ethane-Mini spectrometer has been integrated into two mobile sampling platforms, a ground vehicle and a small airplane, and used to measure ethane/methane enhancement ratios downwind of methane sources. Methane emissions with precisely known sources are shown to have ethane/methane enhancement ratios that differ greatly depending on the source type. Large differences between biogenic and thermogenic sources are observed. Variations within thermogenic sources are detected and tabulated. Methane emitters are classified by their expected ethane content. Categories include the following: biogenic (<0.2%), dry gas (1-6%), wet gas (>6%), pipeline grade natural gas (<15%), and processed natural gas liquids (>30%). Regional scale observations in the Dallas/Fort Worth area of Texas show two distinct ethane/methane enhancement ratios bridged by a transitional region. These results demonstrate the usefulness of continuous and fast ethane measurements in experimental studies of methane emissions, particularly in the oil and natural gas sector.
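The source categories quoted above map directly onto ethane/methane enhancement-ratio bins. A sketch of such a classifier follows; note that the published bins overlap (e.g. pipeline-grade gas is simply anything below 15%), so this sketch returns all matching labels rather than forcing a single one:

```python
def classify_methane_source(ethane_pct):
    """Bin a methane plume by its ethane/methane enhancement ratio (%),
    following the categories listed in the abstract. The bins overlap,
    so every matching label is returned."""
    labels = []
    if ethane_pct < 0.2:
        labels.append("biogenic")
    if 1.0 <= ethane_pct <= 6.0:
        labels.append("dry gas")
    if ethane_pct > 6.0:
        labels.append("wet gas")
    if ethane_pct < 15.0:
        labels.append("pipeline grade natural gas")
    if ethane_pct > 30.0:
        labels.append("processed natural gas liquids")
    return labels

print(classify_methane_source(0.05))   # landfill/agricultural-type plume
print(classify_methane_source(3.0))    # dry-gas-type plume
print(classify_methane_source(40.0))   # wet gas / natural gas liquids
```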
Spectral studies of cosmic X-ray sources
NASA Astrophysics Data System (ADS)
Blissett, R. J.
1980-01-01
The conventional "indirect" method of reduction and data analysis of spectral data from non-dispersive X-ray detectors, by the fitting of assumed spectral models, is examined. The limitations of this procedure are presented, and alternative schemes are considered in which the derived spectra are not biased to an astrophysical source model. A new method is developed in detail to directly restore incident photon spectra from the detected count histograms. This Spectral Restoration Technique allows an increase in resolution, to a degree dependent on the statistical precision of the data. This is illustrated by numerical simulations. Proportional counter data from Ariel 5 are analysed using this technique. The results obtained for the sources Cas A and the Crab Nebula are consistent with previous analyses and show that increases in resolution of up to a factor three are possible in practice. The source Cyg X-3 is closely examined. Complex spectral variability is found, with the continuum and iron-line emission modulated with the 4.8 hour period of the source. The data suggest multi-component emission in the source. Comparing separate Ariel 5 observations and published data from other experiments, a correlation between the spectral shape and source intensity is evident. The source behaviour is discussed with reference to proposed source models. Data acquired by the low-energy detectors on-board HEAO-1 are analysed using the Spectral Restoration Technique. This treatment explicitly demonstrates the existence of oxygen K-absorption edges in the soft X-ray spectra of the Crab Nebula and Sco X-1. These results are considered with reference to current theories of the interstellar medium. The thesis commences with a review of cosmic X-ray sources and the mechanisms responsible for their spectral signatures, and continues with a discussion of the instruments appropriate for spectral studies in X-ray astronomy.
Precision medicine needs pioneering clinical bioinformaticians.
Gómez-López, Gonzalo; Dopazo, Joaquín; Cigudosa, Juan C; Valencia, Alfonso; Al-Shahrour, Fátima
2017-10-25
Success in precision medicine depends on accessing high-quality genetic and molecular data from large, well-annotated patient cohorts that couple biological samples to comprehensive clinical data, which in conjunction can lead to effective therapies. From such a scenario emerges the need for a new professional profile, an expert bioinformatician with training in clinical areas who can make sense of multi-omics data to improve therapeutic interventions in patients, and the design of optimized basket trials. In this review, we first describe the main policies and international initiatives that focus on precision medicine. Secondly, we review the currently ongoing clinical trials in precision medicine, introducing the concept of 'precision bioinformatics', and we describe current pioneering bioinformatics efforts aimed at implementing tools and computational infrastructures for precision medicine in health institutions around the world. Thirdly, we discuss the challenges related to the clinical training of bioinformaticians, and the urgent need for computational specialists capable of assimilating medical terminologies and protocols to address real clinical questions. We also propose some skills required to carry out common tasks in clinical bioinformatics and some tips for emergent groups. Finally, we explore the future perspectives and the challenges faced by precision medicine bioinformatics. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Micro-combs: A novel generation of optical sources
NASA Astrophysics Data System (ADS)
Pasquazi, Alessia; Peccianti, Marco; Razzari, Luca; Moss, David J.; Coen, Stéphane; Erkintalo, Miro; Chembo, Yanne K.; Hansson, Tobias; Wabnitz, Stefan; Del'Haye, Pascal; Xue, Xiaoxiao; Weiner, Andrew M.; Morandotti, Roberto
2018-01-01
The quest towards the integration of ultra-fast, high-precision optical clocks is reflected in the large number of high-impact papers on the topic published in the last few years. This interest has been catalysed by the impact that high-precision optical frequency combs (OFCs) have had on metrology and spectroscopy in the last decade [1-5]. OFCs are often referred to as optical rulers: their spectra consist of a precise sequence of discrete and equally-spaced spectral lines that represent precise marks in frequency. Their importance was recognised worldwide with the 2005 Nobel Prize being awarded to T.W. Hänsch and J. Hall for their breakthrough in OFC science [5]. They demonstrated that a coherent OFC source with a large spectrum - covering at least one octave - can be stabilised with a self-referenced approach, where the frequency and the phase do not vary and are completely determined by the source physical parameters. These fully stabilised OFCs solved the challenge of directly measuring optical frequencies and are now exploited as the most accurate time references available, ready to replace the current standard for time. Very recent advancements in the fabrication technology of optical micro-cavities [6] are contributing to the development of OFC sources. These efforts may open up the way to realise ultra-fast and stable optical clocks and pulsed sources with extremely high repetition-rates, in the form of compact and integrated devices. Indeed, the fabrication of high-quality factor (high-Q) micro-resonators, capable of dramatically amplifying the optical field, can be considered a photonics breakthrough that has boosted not only the scientific investigation of OFC sources [7-13] but also of optical sensors and compact light modulators [6,14]. 
In this framework, the demonstration of planar high-Q resonators, compatible with silicon technology [10-14], has opened up a unique opportunity for these devices to provide entirely new capabilities for photonic-integrated technologies. Indeed, it is well acknowledged by the electronics industry that future generations of computer processing chips will inevitably require an extremely high density of copper-based interconnections, significantly increasing the chip power dissipation to beyond practical levels [15-17]; hence, conventional approaches to chip design must undergo radical changes. On-chip optical networks, or optical interconnects, can offer high speed and low energy per-transferred-bit, and micro-resonators are widely seen as a key component to interface the electronic world with photonics. Many information technology industries have recently focused on the development of integrated ring resonators to be employed for electrically-controlled light modulators [14-17], greatly advancing the maturity of micro-resonator technology as a whole. Recently [11-13], the demonstration of OFC sources in micro-resonators fabricated in electronic (i.e. in complementary metal oxide semiconductor (CMOS)) compatible platforms has given micro-cavities an additional appeal, with the possibility of exploiting them as light sources in microchips. This scenario is creating fierce competition in developing highly efficient OFC generators based on micro-cavities which can radically change the nature of information transport and processing. Even in telecommunications, perhaps a more conventional environment for optical technologies, novel time-division multiplexed optical systems will require extremely stable optical clocks at ultra-high pulse repetition-rates towards the THz scale. Furthermore, arbitrary pulse generators based on OFC [18,19] are seen as one of the most promising solutions for this next generation of high-capacity optical coherent communication systems. 
This review will summarise the recent exciting achievements in the field of micro-combs, namely optical frequency combs based on high-Q micro-resonators, with a perspective on both the potential of this technology, as well as the open questions and challenges that remain.
Understanding the amplitudes of noise correlation measurements
Tsai, Victor C.
2011-01-01
Cross correlation of ambient seismic noise is known to result in time series from which station-station travel-time measurements can be made. Part of the reason that these cross-correlation travel-time measurements are reliable is that there exists a theoretical framework that quantifies how these travel times depend on the features of the ambient noise. However, corresponding theoretical results do not currently exist to describe how the amplitudes of the cross correlation depend on such features. For example, currently it is not possible to take a given distribution of noise sources and calculate the cross correlation amplitudes one would expect from such a distribution. Here, we provide a ray-theoretical framework for calculating cross correlations. This framework differs from previous work in that it explicitly accounts for attenuation as well as the spatial distribution of sources and therefore can address the issue of quantifying amplitudes in noise correlation measurements. After introducing the general framework, we apply it to two specific problems. First, we show that we can quantify the amplitudes of coherency measurements, and find that the decay of coherency with station-station spacing depends crucially on the distribution of noise sources. We suggest that researchers interested in performing attenuation measurements from noise coherency should first determine how the dominant sources of noise are distributed. Second, we show that we can quantify the signal-to-noise ratio of noise correlations more precisely than previous work, and that these signal-to-noise ratios can be estimated for given situations prior to the deployment of seismometers. It is expected that there are applications of the theoretical framework beyond the two specific cases considered, but these applications await future work.
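The basic travel-time measurement that this amplitude theory builds on can be sketched numerically: two stations record a common noise wavefield, and the lag of the cross-correlation peak recovers the inter-station delay. All values below are synthetic; this illustrates only the measurement, not the paper's ray-theoretical amplitude framework:

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 100.0                    # sampling rate (Hz)
n = 4000
source = rng.normal(size=n)   # shared ambient-noise wavefield

delay = 100                   # true inter-station delay: 100 samples = 1.0 s
sta_a = source + 0.5 * rng.normal(size=n)                  # station A record
sta_b = np.roll(source, delay) + 0.5 * rng.normal(size=n)  # station B: delayed copy

# Cross-correlate the two records; the peak lag is the travel-time estimate
xcorr = np.correlate(sta_b, sta_a, mode="full")
lag = np.argmax(xcorr) - (n - 1)
print(lag / fs)  # recovered inter-station travel time in seconds
```

Quantifying the *amplitude* of that peak, the paper's subject, additionally requires the source distribution and attenuation terms that this sketch leaves out.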
NASA Technical Reports Server (NTRS)
1988-01-01
Macrodyne, Inc.'s laser velocimeter (LV) is a system used in wind tunnel testing of aircraft, missiles and spacecraft. It employs electro-optical techniques to probe the flow field as the tunnel blows air over a model of a flight vehicle and to determine the velocity and direction of the air at many points around the model. However, current state-of-the-art minicomputers cannot handle the massive flow of real-time data from several sources simultaneously. Langley therefore developed the Laser Velocimeter Autocovariance Buffer Interface (LVABI), an instrument that interconnects the LV and the computer. It acquires data from as many as six LV channels at high real-time data rates, stores the data in memory and sends them to the computer on command. The LVABI has applications in a variety of research, industrial and defense functions requiring precise flow measurement.
Liquid Biopsy for Cancer: Circulating Tumor Cells, Circulating Free DNA or Exosomes?
Zhang, Wei; Xia, Wenjie; Lv, Zhengye; Ni, Chao; Xin, Yin; Yang, Liu
2017-01-01
Precision medicine and personalized medicine are based on the development of biomarkers, and liquid biopsy has been reported to be able to detect biomarkers that carry information on tumor development and progression. Compared with traditional 'solid biopsy', which cannot always be performed to determine tumor dynamics, liquid biopsy has notable advantages in that it is a noninvasive modality that can provide diagnostic and prognostic information prior to treatment, during treatment and during progression. In this review, we describe the source, characteristics, technology for detection and current situation of circulating tumor cells, circulating free DNA and exosomes used for diagnosis, recurrence monitoring, prognosis assessment and medication planning. © 2017 The Author(s). Published by S. Karger AG, Basel.
NASA Technical Reports Server (NTRS)
Crisp, David
2008-01-01
The Orbiting Carbon Observatory (OCO) and the Greenhouse Gases Observing Satellite (GOSAT) are the first two satellites designed to make global measurements of atmospheric carbon dioxide (CO2) with the precision and sampling needed to identify and monitor surface sources and sinks of this important greenhouse gas. Because the operational phases of the OCO and GOSAT missions overlap in time, there are numerous opportunities for comparing and combining the data from these two satellites to improve our understanding of the natural processes and human activities that control atmospheric CO2 and its variability over time. Opportunities for cross-calibration, cross-validation, and coordinated observations that are currently under consideration are summarized here.
Crystalline multiwall carbon nanotubes and their application as a field emission electron source.
Liu, Peng; Zhou, Duanliang; Zhang, Chunhai; Wei, Haoming; Yang, Xinhe; Wu, Yang; Li, Qingwei; Liu, Changhong; Du, Bingchu; Liu, Liang; Jiang, Kaili; Fan, Shoushan
2018-05-18
Using super-aligned carbon nanotube (CNT) film, we have fabricated van der Waals crystalline multiwall CNTs (MWCNTs) by adopting high-pressure and high-temperature processing. The CNTs remain parallel to each other and are distributed uniformly. X-ray diffraction characterization shows peaks in the small-angle range, which can be assigned to the spacing of the MWCNT crystals. The mechanical, electrical, and thermal properties are all greatly improved compared with the original CNT film. The field emission properties of the van der Waals crystalline MWCNTs were tested, and they show better surface-morphology stability at large emission currents. We have further fabricated a field emission x-ray tube and demonstrated precise-resolution imaging capability.
Meinlschmidt, Gunther; Tegethoff, Marion
2017-08-01
Background: The science and practice of psychotherapy is continuously developing. The goal of this article is to describe new impulses guiding current advancements in the field. Methods: This paper provides a selective narrative review, synthesizing and condensing relevant literature identified through various sources, including MEDLINE, EMBASE, PsycINFO, and Web of Science, as well as citation tracking, to elaborate key developments in the field of psychotherapy. Results: We describe several dynamics: 1) Following the so-called "third wave of cognitive behavioral therapy", new interventions are arising that have at their core the fostering of interpersonal virtues, such as compassion, forgiveness, and gratitude; 2) Based on technological quantum leaps, new interventions are arising that exploit current developments in new media, information and communication technologies, and brain imaging, such as digital interventions for mental disorders and new forms of neurofeedback; 3) Inspired by the field of positive psychology, there is a revival of the promotion of strength and resilience in therapeutic contexts; 4) In light of the new paradigm of "precision medicine", the issue of differential and adaptive indication of psychotherapy, addressed with new methods, regains relevance and drives a new field of "precision psychotherapy"; 5) Last but not least, the "embodied turn" opens the door for body psychotherapy to gain relevance in academic psychotherapy. Conclusion: These and further developments, such as the use of systemic and network approaches as well as machine learning techniques, outline the vivid activity in the field of psychotherapy. Georg Thieme Verlag KG Stuttgart · New York.
A Very High Order, Adaptable MESA Implementation for Aeroacoustic Computations
NASA Technical Reports Server (NTRS)
Dyson, Roger W.; Goodrich, John W.
2000-01-01
Since computational efficiency and wave resolution scale with accuracy, the ideal would be infinitely high accuracy for problems with widely varying wavelength scales. Currently, many computational aeroacoustics methods are limited to 4th-order accurate Runge-Kutta methods in time, which limits their resolution and efficiency. However, a new procedure for implementing the Modified Expansion Solution Approximation (MESA) schemes, based upon Hermitian divided differences, is presented which extends the effective accuracy of the MESA schemes to 57th order in space and time when using 128-bit floating point precision. This new approach has the advantages of reducing round-off error, being easy to program, and being more computationally efficient than previous approaches. Its accuracy is limited only by the floating point hardware. The advantages of this new approach are demonstrated by solving the linearized Euler equations in an open bi-periodic domain. A 500th-order MESA scheme can now be created in seconds, making these schemes ideally suited for the next generation of high-performance 256-bit (double quadruple) or higher precision computers. This ease of creation makes it possible to adapt the algorithm to the mesh in time instead of its converse: this is ideal for resolving the varying wavelength scales which occur in noise generation simulations. Finally, the sources of round-off error that affect the very high order methods are examined and remedies provided that effectively increase the accuracy of the MESA schemes while using current computer technology.
Peak Determination of ^35Cl(n,γ) Lines
NASA Astrophysics Data System (ADS)
Young, Patrick
2004-10-01
To achieve our goal of performing a stringent test of the Isobaric Multiplet Mass Equation (IMME) for the lowest T = 2 quintuplet, an accurate measurement of the mass of ^32S in its lowest T = 2 state is needed, as the masses of the other members of the quintuplet are well known [1]. To achieve the desired precision, several calibration reactions are required, including ^35Cl(n,γ). A proton beam of 1.912 MeV is incident upon a Li2O target to create neutrons via ^7Li(p,n). The neutrons are then moderated and absorbed by a volume of NaCl. The resulting radiation is measured with a Ge(Li) detector. Due to differences in the position of the source during calibration versus data runs, a source of mis-calibration may arise from the detector's orientation relative to, and distance from, the source [2]. We are currently measuring the centroid shifts with respect to detector angle to determine their influence upon our data collection. [1] K. Blaum, G. Audi et al., Phys. Rev. Lett. 91, 260801 (2003). [2] R. G. Helmer, R. J. Gehrke, R. C. Greenwood, Nucl. Instr. and Meth. 123 (1975) 51-59.
Developments in high-precision gamma-ray burst source studies
NASA Technical Reports Server (NTRS)
Cline, T. L.
1982-01-01
The source location data analyzed by the first and second interplanetary gamma ray burst spacecraft networks are reviewed. The possibilities of additional networks and of related studies in other disciplines, and the prospects for real time optical transient observations and for the definition of gamma ray burst sources by optical transient astronomy are also reviewed.
Smith, Winchell
1971-01-01
Current-meter measurements of high accuracy will be required for calibration of an acoustic flow-metering system proposed for installation in the Sacramento River at Chipps Island in California. This report presents an analysis of the problem of making continuous accurate current-meter measurements in this channel where the flow regime is changing constantly in response to tidal action. Gaging-system requirements are delineated, and a brief description is given of the several applicable techniques that have been developed by others. None of these techniques provides the accuracies required for the flowmeter calibration. A new system is described--one which has been assembled and tested in prototype and which will provide the matrix of data needed for accurate continuous current-meter measurements. Analysis of a large quantity of data on the velocity distribution in the channel of the Sacramento River at Chipps Island shows that adequate definition of the velocity can be made during the dominant flow periods--that is, at times other than slack-water periods--by use of current meters suspended at elevations 0.2 and 0.8 of the depth below the water surface. However, additional velocity surveys will be necessary to determine whether or not small systematic corrections need be applied during periods of rapidly changing flow. In the proposed system all gaged parameters, including velocities, depths, position in the stream, and related times, are monitored continuously as a boat moves across the river on the selected cross section. Data are recorded photographically and transferred later onto punchcards for computer processing. Computer programs have been written to permit computation of instantaneous discharges at any selected time interval throughout the period of the current meter measurement program. It is anticipated that current-meter traverses will be made at intervals of about one-half hour over periods of several days. 
Capability of performance for protracted periods was, consequently, one of the important elements in system design. Analysis of error sources in the proposed system indicates that errors in individual computed discharges can be kept smaller than 1.5 percent if the expected precision in all measured parameters is maintained.
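The two-point method described above (current meters at 0.2 and 0.8 of the depth) lends itself to a short sketch. The subsection widths, depths, and velocities below are hypothetical illustration values, not data from the Chipps Island program:

```python
def mean_velocity(v02, v08):
    """Two-point method: the mean velocity in a vertical is taken as the
    average of the velocities observed at 0.2 and 0.8 of the depth."""
    return 0.5 * (v02 + v08)

def discharge(subsections):
    """Sum width * depth * mean velocity over the subsections of the
    cross section to get total discharge (m^3/s for SI inputs)."""
    return sum(w * d * mean_velocity(v02, v08) for w, d, v02, v08 in subsections)

# Hypothetical cross section: (width m, depth m, v at 0.2 depth, v at 0.8 depth m/s)
sections = [(10.0, 4.0, 1.2, 0.8), (10.0, 5.0, 1.4, 1.0)]
print(discharge(sections))  # -> 100.0
```

The simple per-subsection summation here is one common convention; the computer programs described in the report may partition the cross section differently.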
Morgan, Gilberto; Aftimos, Philippe; Awada, Ahmad
2016-09-01
Precision oncology has been a strategy of prevention, screening, and treatment. Although much has been invested, have the results fallen short of the promise? The advancement of technology and research has opened new doors, yet a variety of pitfalls remain. This review presents the successes, failures, and opportunities of precision oncology in the current landscape. The use of targeted gene sequencing and the remarkable results of super-responders have generated much excitement and support for precision oncology from the medical community. Despite notable successes, many challenges still stand in the way of precision oncology: intratumoral heterogeneity, the need for serial biopsies, availability of treatments, target prioritization, ethical issues with germline incidental findings, medical education, clinical trial design, and costs. Precision oncology shows much potential through the use of next-generation sequencing and molecular advances, but does this potential warrant the investment? There are many obstacles in the way of this technology that should make us question whether the investment (both monetary and in man-hours) will live up to the promise. This review aims not to criticize the technology, but to give a realistic view of where we are, especially regarding cancer treatment and prevention.
Scott, Jessica A; Hoffmeister, Robert J
2018-04-01
Academic English is an essential literacy skill area for success in post-secondary education and in many work environments. Despite its importance, academic English is understudied with deaf and hard of hearing (DHH) students. Nascent research in this area suggests that academic English, alongside American Sign Language (ASL) fluency, may play an important role in the reading proficiency of DHH students in middle and high school. The current study expands this research to investigate academic English by examining student proficiency with a sub-skill of academic writing called superordinate precision, the taxonomical categorization of a term. Currently there is no research that examines DHH students' proficiency with superordinate precision. Middle and high school DHH students enrolled in bilingual schools for the deaf were assessed on their ASL proficiency, academic English proficiency, reading comprehension, and use of superordinate precision in definitions writing. Findings indicate that student use of superordinate precision in definitions writing was correlated with ASL proficiency, reading comprehension, and academic English proficiency. It is possible that degree of mastery of superordinate precision may indicate a higher overall level of proficiency with academic English. This may have important implications for assessment of and instruction in academic English literacy.
Mass spectrometric measurements of the isotopic anatomies of molecules (Invited)
NASA Astrophysics Data System (ADS)
Eiler, J. M.; Krumwiede, D.; Schlueter, H.
2013-12-01
Site-specific and multiple isotopic substitutions in molecular structures potentially provide an extraordinarily rich set of constraints on their sources, conditions of formation, reaction and transport histories, and perhaps other issues. Examples include carbonate 'clumped isotope' thermometry; clumped isotope measurements of CO2, O2 and, recently, methane, ethane and N2O; site-specific 15N measurements in N2O; and 13C and D analyses of fatty acids, sugars, cellulose, food products and, recently, n-alkanes. Extension of the principles behind these tools to the very large number of isotopologues of complex molecules could potentially lead to new uses of isotope chemistry, similar to proteomics, metabolomics and genomics in their complexity and depth of detail ('isotomics'?). Several technologies are potentially useful for this field, including 'SNIF-NMR', gas source mass spectrometry and IR absorption spectroscopy. However, all well-established methods have restrictive limits on the sizes of samples, types of analyses, and the sorts of isotopologues that can be measured with useful precision. We will present an overview of several emerging instruments and techniques of high-resolution gas source mass spectrometry that may enable study of a large proportion of the isotopologues of a wide range of volatile and semi-volatile compounds, including many organics, with precisions and sample sizes suitable for a range of applications. A variety of isotopologues can be measured by combining information from the Thermo 253 Ultra (a new high-resolution, multi-collector gas source mass spectrometer) and the Thermo DFS (a very high resolution single-collector instrument, used here in a novel mode to achieve ~per mil precision ratio measurements), sometimes supplemented by conventional bulk isotopic measurements.
It is possible to design methods in which no single one of these sources of data meaningfully constrains the abundances of specific isotopologues, but their combination fully and precisely constrains a large number. We have assembled a suite of instruments (including the prototype of the Ultra and a modified version of the DFS that is capable of dual-inlet analyses) that make it logistically straightforward to perform such multi-instrument analyses. Examples will be presented documenting the accuracy of these techniques for systems that are independently well known (e.g., isotopologues of methane), and the precision and internal consistency of results for larger, more complex molecules (e.g., a suite of singly and doubly substituted isotopologues of hexane and other moderate-molecular-weight organics).
Surface characterization protocol for precision aspheric optics
NASA Astrophysics Data System (ADS)
Sarepaka, RamaGopal V.; Sakthibalan, Siva; Doodala, Somaiah; Panwar, Rakesh S.; Kotaria, Rajendra
2017-10-01
In advanced optical instrumentation, aspherics provide an effective performance alternative. Aspheric fabrication and surface metrology, followed by aspheric design, are complementary iterative processes for precision aspheric development. A holistic approach to aspheric surface characterization is adopted to evaluate the actual surface error and to aim at the delivery of aspheric optics with the desired surface quality. Precision optical surfaces are characterized by profilometry or by interferometry. Aspheric profiles are characterized by contact profilometers, through linear surface scans, to analyze their form, figure, and finish errors. One must ensure that the surface characterization procedure does not add to the resident profile errors (generated during aspheric surface fabrication). This presentation examines the errors introduced after surface generation and during profilometry of aspheric profiles. The effort is to identify sources of error and to optimize the metrology process. The sources of error during profilometry may include profilometer settings, work-piece placement on the profilometer stage, selection of zenith/nadir points of the aspheric profile, metrology protocols, clear-aperture diameter analysis, computational limitations of the profiler, software issues, etc. At OPTICA, a PGI 1200 FTS contact profilometer (Taylor-Hobson make) is used for this study. Precision optics of various profiles are studied, with due attention to possible sources of error during characterization, using a multi-directional scan approach for uniformity and repeatability of error estimation. This study provides insight into aspheric surface characterization and helps establish an optimal aspheric surface production methodology.
Kylander, M E; Weiss, D J; Jeffries, T E; Kober, B; Dolgopolova, A; Garcia-Sanchez, R; Coles, B J
2007-01-16
An analytical protocol for rapid and reliable laser ablation-quadrupole (LA-Q)- and multi-collector (MC-) inductively coupled plasma-mass spectrometry (ICP-MS) analysis of Pb isotope ratios ((207)Pb/(206)Pb and (208)Pb/(206)Pb) in peats and lichens is developed. This technique is applicable to source tracing atmospheric Pb deposition in biomonitoring studies and sample screening. Reference materials and environmental samples were dry ashed and pressed into pellets for introduction by laser ablation. No binder was used to reduce contamination. LA-MC-ICP-MS internal and external precisions were <1.1% and <0.3%, respectively, on both (207)Pb/(206)Pb and (208)Pb/(206)Pb ratios. LA-Q-ICP-MS internal precisions on (207)Pb/(206)Pb and (208)Pb/(206)Pb ratios were lower with values for the different sample sets <14.3% while external precisions were <2.9%. The level of external precision acquired in this study is high enough to distinguish between most modern Pb sources. LA-MC-ICP-MS measurements differed from thermal ionisation mass spectrometry (TIMS) values by 1% or less while the accuracy obtained using LA-Q-ICP-MS compared to solution MC-ICP-MS was 3.1% or better using a run bracketing (RB) mass bias correction method. Sample heterogeneity and detector switching when measuring (208)Pb by Q-ICP-MS are identified as sources of reduced analytical performance.
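The run-bracketing (RB) mass bias correction mentioned above can be sketched as follows; the ratio values are hypothetical illustrations, not measurements from this study, and the "true" ratio stands in for a certified reference value rather than an actual SRM number:

```python
def bracket_correct(r_meas, r_std_before, r_std_after, r_std_true):
    """Run-bracketing mass-bias correction: scale the measured sample
    ratio by true/measured for the bracketing standard, interpolated
    (here: simple mean) between the runs before and after the sample."""
    r_std_meas = 0.5 * (r_std_before + r_std_after)
    return r_meas * (r_std_true / r_std_meas)

# Hypothetical 208Pb/206Pb values
corrected = bracket_correct(2.100, 2.080, 2.084, 2.0635)
print(round(corrected, 4))  # -> 2.0813
```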
Development of an X-Ray Catheter Final Report CRADA No. TC-1265-96
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trebes, J.; Schlossberg, M.
2017-11-01
The goal of this CRADA project was to develop a catheter-based x-ray source to provide treatment of restenosis in arteries with a radiation source that can be precisely controlled and turned on and off at will.
Airborne LiDAR: a new source of traffic flow data: executive summary.
DOT National Transportation Integrated Search
2005-10-01
LiDAR (or airborne laser scanning) systems became a dominant player in high-precision spatial data acquisition in the late 90s. This new technology quickly established itself as the main source of surface information in commercial mapping,...
Hasson, Uri; Skipper, Jeremy I; Wilde, Michael J; Nusbaum, Howard C; Small, Steven L
2008-01-15
The increasingly complex research questions addressed by neuroimaging research impose substantial demands on computational infrastructures. These infrastructures need to support management of massive amounts of data in a way that affords rapid and precise data analysis, to allow collaborative research, and to achieve these aims securely and with minimum management overhead. Here we present an approach that overcomes many current limitations in data analysis and data sharing. This approach is based on open source database management systems that support complex data queries as an integral part of data analysis, flexible data sharing, and parallel and distributed data processing using cluster computing and Grid computing resources. We assess the strengths of these approaches as compared to current frameworks based on storage of binary or text files. We then describe in detail the implementation of such a system and provide a concrete description of how it was used to enable a complex analysis of fMRI time series data.
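A minimal sketch of the query-driven analysis style the authors describe, using an in-memory SQLite database; the schema and values are hypothetical, not the system's actual design (which uses server-class open source DBMSs with cluster and Grid resources):

```python
import sqlite3

# Hypothetical schema: one row per fMRI sample (subject, region, time, signal)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE timeseries (subject TEXT, region TEXT, t REAL, signal REAL)")
rows = [("s1", "STG", 0.0, 1.0), ("s1", "STG", 2.0, 3.0), ("s2", "STG", 0.0, 5.0)]
conn.executemany("INSERT INTO timeseries VALUES (?, ?, ?, ?)", rows)

# The query itself is part of the analysis: per-region mean signal across subjects
mean_stg = conn.execute(
    "SELECT AVG(signal) FROM timeseries WHERE region = ?", ("STG",)
).fetchone()[0]
print(mean_stg)  # -> 3.0
```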
Diagnostic for a high-repetition rate electron photo-gun and first measurements
NASA Astrophysics Data System (ADS)
Filippetto, D.; Doolittle, L.; Huang, G.; Norum, E.; Portmann, G.; Qian, H.; Sannibale, F.
2015-05-01
The APEX electron source at LBNL combines a high repetition rate with the high beam brightness typical of photoguns, delivering low-emittance electron pulses at MHz frequency. Demonstrating the high quality of the beam is an essential step for the success of the experiment, opening the door for high-average-power beams to brightness-hungry applications such as X-ray FELs and MHz ultrafast electron diffraction. As a first step, a complete characterization of the beam parameters is foreseen at the gun beam energy of 750 keV. Diagnostics for low- and high-current measurements have been installed and tested, and measurements of cathode lifetime and thermal emittance in an RF environment with mA current have been performed. The recent installation of a double-slit system, a deflecting cavity, and a high-precision spectrometer allows exploration of the full 6D phase space. Here we discuss the present layout of the machine and future upgrades, showing the latest results at low and high repetition rates, together with the tools and techniques used.
Determining the neutrino mass with cyclotron radiation emission spectroscopy—Project 8
NASA Astrophysics Data System (ADS)
Ashtari Esfahani, Ali; Asner, David M.; Böser, Sebastian; Cervantes, Raphael; Claessens, Christine; de Viveiros, Luiz; Doe, Peter J.; Doeleman, Shepard; Fernandes, Justin L.; Fertl, Martin; Finn, Erin C.; Formaggio, Joseph A.; Furse, Daniel; Guigue, Mathieu; Heeger, Karsten M.; Jones, A. Mark; Kazkaz, Kareem; Kofron, Jared A.; Lamb, Callum; LaRoque, Benjamin H.; Machado, Eric; McBride, Elizabeth L.; Miller, Michael L.; Monreal, Benjamin; Mohanmurthy, Prajwal; Nikkel, James A.; Oblath, Noah S.; Pettus, Walter C.; Hamish Robertson, R. G.; Rosenberg, Leslie J.; Rybka, Gray; Rysewyk, Devyn; Saldaña, Luis; Slocum, Penny L.; Sternberg, Matthew G.; Tedeschi, Jonathan R.; Thümmler, Thomas; VanDevender, Brent A.; E Vertatschitsch, Laura; Wachtendonk, Megan; Weintroub, Jonathan; Woods, Natasha L.; Young, André; Zayas, Evan M.
2017-05-01
The most sensitive direct method to establish the absolute neutrino mass is observation of the endpoint of the tritium beta-decay spectrum. Cyclotron radiation emission spectroscopy (CRES) is a precision spectrographic technique that can probe much of the unexplored neutrino mass range with O(eV) resolution. A lower bound of m(ν_e) ≳ 9(0.1) meV is set by observations of neutrino oscillations, while the KATRIN experiment—the current-generation tritium beta-decay experiment that is based on magnetic adiabatic collimation with an electrostatic (MAC-E) filter—will achieve a sensitivity of m(ν_e) ≲ 0.2 eV. The CRES technique aims to avoid the difficulties in scaling up a MAC-E filter-based experiment to achieve a lower mass sensitivity. In this paper we review the current status of the CRES technique and describe Project 8, a phased absolute neutrino mass experiment that has the potential to reach sensitivities down to m(ν_e) ≲ 40 meV using an atomic tritium source.
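The principle behind CRES can be illustrated with the relativistic cyclotron relation f = eB/(2π γ m_e); the numbers below are a rough sketch of the kinematics, not Project 8 analysis code:

```python
import math

E_CHARGE = 1.602176634e-19    # elementary charge, C
M_E = 9.1093837015e-31        # electron mass, kg
MEC2_KEV = 510.99895          # electron rest energy, keV

def kinetic_energy_kev(f_hz, b_tesla):
    """Invert the relativistic cyclotron relation f = e*B / (2*pi*gamma*m_e)
    to recover the electron kinetic energy (gamma - 1)*m_e*c^2 from the
    observed cyclotron frequency in a known magnetic field."""
    f0 = E_CHARGE * b_tesla / (2.0 * math.pi * M_E)  # frequency at gamma = 1
    gamma = f0 / f_hz
    return (gamma - 1.0) * MEC2_KEV

# A tritium-endpoint electron (~18.6 keV) in a 1 T trap radiates near 27 GHz
print(round(kinetic_energy_kev(27.01e9, 1.0), 1))  # -> 18.6
```

The keV-scale energy resolution of CRES follows from how precisely this frequency shift can be measured.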
Precise charge measurement for laser plasma accelerators
NASA Astrophysics Data System (ADS)
Nakamura, Kei; Gonsalves, Anthony; Lin, Chen; Sokollik, Thomas; Shiraishi, Satomi; van Tilborg, Jeroen; Smith, Alan; Rodgers, Dave; Donahue, Rick; Byrne, Warren; Leemans, Wim
2011-10-01
A comprehensive study of charge diagnostics was conducted to verify their validity for measuring electron beams produced by laser plasma accelerators (LPAs). The electron-energy dependence of a scintillating screen (Lanex Fast) was studied with sub-nanosecond electron beams ranging from 106 MeV to 1522 MeV at the Lawrence Berkeley National Laboratory Advanced Light Source (ALS) synchrotron booster accelerator. Using an integrating current transformer as a calibration reference, the sensitivity of the Lanex Fast screen was found to decrease by 1% per 100 MeV increase in energy. Using electron beams from an LPA, cross-calibrations of the charge were carried out with an integrating current transformer, a scintillating screen (Lanex from Kodak), and an activation-based measurement. The diagnostics agreed within ~8%, showing that they can all provide accurate charge measurements for LPAs provided the necessary care is taken. Work supported by the Office of Science, Office of High Energy Physics, of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231.
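The reported ~1% per 100 MeV decrease in Lanex Fast sensitivity implies an energy-dependent correction of the kind sketched below; the calibration constant, signal values, and units are hypothetical, not the paper's actual calibration numbers:

```python
def lanex_charge_pc(signal, energy_mev, cal_counts_per_pc, cal_energy_mev):
    """Correct a Lanex screen signal for its energy-dependent response:
    sensitivity is taken to fall by 1% per 100 MeV above the energy at
    which the screen was calibrated against the integrating current
    transformer."""
    sensitivity = cal_counts_per_pc * (1.0 - 0.01 * (energy_mev - cal_energy_mev) / 100.0)
    return signal / sensitivity

# Hypothetical: screen calibrated at 500 MeV, yielding 10 counts per pC
q = lanex_charge_pc(signal=980.0, energy_mev=700.0,
                    cal_counts_per_pc=10.0, cal_energy_mev=500.0)
print(round(q, 1))  # -> 100.0 (pC)
```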
Enhancing Spin Filters by Use of Bulk Inversion Asymmetry
NASA Technical Reports Server (NTRS)
Ting, David; Cartoixa, Xavier
2007-01-01
Theoretical calculations have shown that the degrees of spin polarization in proposed nonmagnetic semiconductor resonant tunneling spin filters could be increased through exploitation of bulk inversion asymmetry (BIA). These enhancements would be effected through suitable orientation of spin collectors (or of spin-polarization-inducing lateral electric fields), as described below. Spin filters -- more precisely, sources of spin-polarized electron currents -- have been sought for research on, and development of, the emerging technological discipline of spintronics (spin-transport electronics). The proposed spin filters were to be based on the Rashba effect, which is an energy splitting of what would otherwise be degenerate quantum states, caused by a spin-orbit interaction in conjunction with a structural-inversion asymmetry (SIA) in the presence of interfacial electric fields in a semiconductor heterostructure. The magnitude of the energy splitting is proportional to the electron wave number. In a spin filter, the spin-polarized currents produced by the Rashba effect would be extracted by quantum-mechanical resonant tunneling.
High-precision measurement of chlorine stable isotope ratios
Long, A.; Eastoe, C.J.; Kaufmann, R.S.; Martin, J.G.; Wirt, L.; Finley, J.B.
1993-01-01
We present an analysis procedure that allows stable isotopes of chlorine to be analyzed with precision sufficient for geological and hydrological studies. The total analytical precision is ±0.09‰, and the presently known range of δ37Cl of chloride in the surface and near-surface environment is 3.5‰. As Cl- is essentially nonreactive in natural aquatic environments, it is a conservative tracer and its δ37Cl is also conservative. Thus, the δ37Cl parameter is valuable for quantitative evaluation of mixing of different sources of chloride in brines and aquifers. © 1993.
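Because chloride and its δ37Cl are both conservative, two-component mixing can be evaluated with concentration-weighted averages. A sketch with hypothetical end-member values (not data from this study):

```python
def mix_chloride(f1, c1, d1, c2, d2):
    """Conservative two-component mixing: for mass fraction f1 of water 1
    and (1 - f1) of water 2, chloride concentration mixes linearly and
    delta37Cl mixes as a chloride-weighted average."""
    f2 = 1.0 - f1
    c_mix = f1 * c1 + f2 * c2
    d_mix = (f1 * c1 * d1 + f2 * c2 * d2) / c_mix
    return c_mix, d_mix

# Hypothetical end members: 100 mg/L at +1.0 permil and 300 mg/L at -0.5 permil
c, d = mix_chloride(0.5, 100.0, 1.0, 300.0, -0.5)
print(c, d)  # -> 200.0 -0.125
```

Note that the mixture's δ37Cl is pulled toward the more chloride-rich end member, which is why the tracer can apportion chloride sources quantitatively.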
Precision Editing of Large Animal Genomes
Tan, Wenfang (Spring); Carlson, Daniel F.; Walton, Mark W.; Fahrenkrug, Scott C.; Hackett, Perry B.
2013-01-01
Transgenic animals are an important source of protein and nutrition for most humans and will play key roles in satisfying the increasing demand for food in an ever-increasing world population. The past decade has experienced a revolution in the development of methods that permit the introduction of specific alterations to complex genomes. This precision will enhance genome-based improvement of farm animals for food production. Precision genetics also will enhance the development of therapeutic biomaterials and models of human disease as resources for the development of advanced patient therapies. PMID:23084873
DOE Office of Scientific and Technical Information (OSTI.GOV)
Artaud, J.; Chaput, M.; Gerstenkorn, S.
1961-01-01
Isotopic analyses of mixtures of plutonium-239 and -240 were carried out by means of a photoelectric spectrometer, the source being a hollow cathode cooled by liquid nitrogen. The relative precision is of the order of 2% for samples containing 3% of Pu-240. The study of the reproducibility of the measurements should make it possible to increase the precision; the relative precision that can be expected from the method should be 1% for mixtures containing 1% of Pu-240. (auth)
Precision phase estimation based on weak-value amplification
NASA Astrophysics Data System (ADS)
Qiu, Xiaodong; Xie, Linguo; Liu, Xiong; Luo, Lan; Li, Zhaoxue; Zhang, Zhiyou; Du, Jinglei
2017-02-01
In this letter, we propose a precision method for phase estimation based on the weak-value amplification (WVA) technique using a monochromatic light source. The anomalous WVA significantly suppresses the technical noise with respect to the intensity-difference signal induced by the phase delay when the post-selection procedure comes into play. The phase measurement precision of this method is proportional to the weak value of a polarization operator in the experimental range. Our results compare well with wide-spectrum-light phase weak measurements and outperform the standard homodyne phase-detection technique.
A precision analogue integrator system for heavy current measurement in MFDC resistance spot welding
NASA Astrophysics Data System (ADS)
Xia, Yu-Jun; Zhang, Zhong-Dian; Xia, Zhen-Xin; Zhu, Shi-Liang; Zhang, Rui
2016-02-01
In order to control and monitor the quality of middle-frequency direct current (MFDC) resistance spot welding (RSW), precision measurement of welding currents up to 100 kA is required, for which Rogowski coils are at present the only viable current transducers. Thus, a highly accurate analogue integrator is the key to restoring the converted signals collected from the Rogowski coils. Previous studies emphasised that integration drift is a major factor influencing the performance of analogue integrators, but capacitive leakage error also has a significant impact on the result, especially in long-time pulse integration. In this article, new methods of measuring and compensating capacitive leakage error are proposed to fabricate a precision analogue integrator system for MFDC RSW. A voltage-holding test is carried out to measure the integration error caused by capacitive leakage, and an original integrator with a feedback adder is designed to compensate capacitive leakage error in real time. The experimental results and statistical analysis show that the new analogue integrator system constrains both drift and capacitive leakage error, and its effect is robust to different voltage levels of the output signals. The total integration error is limited to within ±0.09 mV s-1 (0.005% s-1 of full scale) at a 95% confidence level, which makes it possible to achieve precision measurement of the welding current of MFDC RSW with Rogowski coils of 0.1% accuracy class.
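The Rogowski-coil principle underlying the integrator (coil voltage proportional to di/dt, so the current is the integral of the voltage divided by the mutual inductance) can be sketched numerically. This digital trapezoidal version with a constant-offset correction is only an illustration of the relation, not the article's analogue feedback-adder design:

```python
def integrate_rogowski(voltages, dt, mutual_inductance, offset=0.0):
    """Recover current from Rogowski-coil voltage samples: the coil output
    is proportional to di/dt, so i(t) = (1/M) * integral of v dt.  A constant
    offset (a stand-in for a drift source) is subtracted before the
    trapezoidal integration."""
    current = [0.0]
    for k in range(1, len(voltages)):
        v0 = voltages[k - 1] - offset
        v1 = voltages[k] - offset
        step = 0.5 * (v0 + v1) * dt / mutual_inductance
        current.append(current[-1] + step)
    return current

# A constant 1 mV coil output with M = 1 uH: the recovered current ramps linearly
ramp = integrate_rogowski([1e-3] * 5, dt=1e-3, mutual_inductance=1e-6)
```

An uncorrected offset grows linearly in the output, which is why drift and leakage dominate the error budget in long-time pulse integration.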
Wang, Zheng; Zhou, Di; Wang, Hui; Jia, Zhenjun; Liu, Jing; Qian, Xiaoqin; Li, Chengtao; Hou, Yiping
2017-11-01
Massively parallel sequencing (MPS) technologies have proved capable of sequencing the majority of the key forensic STR markers. With MPS, not only the repeat-length size but also sequence variations can be detected. Recently, Thermo Fisher Scientific has designed an advanced 32-plex MPS panel, named the Precision ID GlobalFiler™ NGS STR Panel, whose primer set has been designed specifically for MPS technologies and whose data analysis is supported by a new version of the HID STR Genotyper Plugin (v4.0). In this study, a series of experiments evaluating concordance, reliability, sensitivity of detection, mixture analysis, and the ability to analyze case-type and challenged samples was conducted. In addition, 106 unrelated Han individuals were sequenced to perform genetic analyses of allelic diversity. As expected, MPS detected broader allele variation and gained a higher power of discrimination and exclusion rate. MPS results were found to be concordant with current capillary electrophoresis methods, and complete single-source profiles could be obtained stably using as little as 100 pg of input DNA. Moreover, this MPS panel could be adapted to case-type samples, and partial STR genotypes of the minor contributor could be detected in mixtures of up to 19:1. The aforementioned results indicate that the Precision ID GlobalFiler™ NGS STR Panel is reliable, robust, and reproducible and has the potential to be used as a tool for human forensics. Copyright © 2017 Elsevier B.V. All rights reserved.
Is flat fielding safe for precision CCD astronomy?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baumer, Michael; Davis, Christopher P.; Roodman, Aaron
The ambitious goals of precision cosmology with wide-field optical surveys such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope (LSST) demand precision CCD astronomy as their foundation. This in turn requires an understanding of previously uncharacterized sources of systematic error in CCD sensors, many of which manifest themselves as static effective variations in pixel area. Such variation renders a critical assumption behind the traditional procedure of flat fielding—that a sensor's pixels comprise a uniform grid—invalid. In this work, we present a method to infer a curl-free model of a sensor's underlying pixel grid from flat-field images, incorporating the superposition of all electrostatic sensor effects—both known and unknown—present in flat-field data. We use these pixel grid models to estimate the overall impact of sensor systematics on photometry, astrometry, and PSF shape measurements in a representative sensor from the Dark Energy Camera (DECam) and a prototype LSST sensor. Applying the method to DECam data recovers known significant sensor effects for which corrections are currently being developed within DES. For an LSST prototype CCD with pixel-response non-uniformity (PRNU) of 0.4%, we find the impact of "improper" flat fielding on these observables is negligible in nominal 0.7'' seeing conditions. Furthermore, these errors scale linearly with the PRNU, so for future LSST production sensors, which may have larger PRNU, our method provides a way to assess whether pixel-level calibration beyond flat fielding will be required.
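For context, a toy sketch of the conventional procedure the abstract questions (not the paper's inference method): flat fielding assumes every count deficit in a uniformly illuminated frame is a sensitivity deficit and divides it out. The PRNU level is taken from the abstract; everything else is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Conventional flat fielding: illuminate the sensor uniformly, record a
# "flat", and divide science frames by it.  This is exactly right when
# the pixel-to-pixel variation is a sensitivity (QE) variation.
qe = 1 + 0.004 * rng.standard_normal((64, 64))   # 0.4% PRNU, cf. abstract
flat = qe / qe.mean()                            # normalised flat field

science = 1000.0 * qe                  # uniform sky seen through same QE
corrected = science / flat             # pixel structure divides out

# If the same flat structure were instead caused by pixel-AREA variation
# (the paper's concern), a larger pixel genuinely collects more sky AND
# more flux from a star, so the same division would misassign flux and
# bias photometry/astrometry of point sources.
```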
Constraining the mass–richness relationship of redMaPPer clusters with angular clustering
Baxter, Eric J.; Rozo, Eduardo; Jain, Bhuvnesh; ...
2016-08-04
The potential of using cluster clustering for calibrating the mass–richness relation of galaxy clusters has been recognized theoretically for over a decade. In this paper, we demonstrate the feasibility of this technique to achieve high-precision mass calibration using redMaPPer clusters in the Sloan Digital Sky Survey North Galactic Cap. By including cross-correlations between several richness bins in our analysis, we significantly improve the statistical precision of our mass constraints. The amplitude of the mass–richness relation is constrained to 7 per cent statistical precision by our analysis. However, the error budget is systematics dominated, reaching a 19 per cent total error that is dominated by theoretical uncertainty in the bias–mass relation for dark matter haloes. We confirm the result from Miyatake et al. that the clustering amplitude of redMaPPer clusters depends on galaxy concentration as defined therein, and we provide additional evidence that this dependence cannot be sourced by mass dependences: some other effect must account for the observed variation in clustering amplitude with galaxy concentration. Assuming that the observed dependence of redMaPPer clustering on galaxy concentration is a form of assembly bias, we find that such effects introduce a systematic error on the amplitude of the mass–richness relation that is comparable to the error bar from statistical noise. Finally, the results presented here demonstrate the power of cluster clustering for mass calibration and cosmology provided the current theoretical systematics can be ameliorated.
High density scintillating glass proton imaging detector
NASA Astrophysics Data System (ADS)
Wilkinson, C. J.; Goranson, K.; Turney, A.; Xie, Q.; Tillman, I. J.; Thune, Z. L.; Dong, A.; Pritchett, D.; McInally, W.; Potter, A.; Wang, D.; Akgun, U.
2017-03-01
In recent years, proton therapy has achieved remarkable precision in delivering doses to cancerous cells while avoiding healthy tissue. However, in order to utilize this high-precision treatment, greater accuracy in patient positioning is needed. An accepted approximate uncertainty of ±3% exists in the current practice of proton therapy due to conversions between x-ray and proton stopping power. The use of protons in imaging would eliminate this source of error and lessen the radiation exposure of the patient. To this end, this study focuses on developing a novel proton-imaging detector built with high-density glass scintillator. The model described herein contains a compact homogeneous proton calorimeter composed of scintillating, high-density glass as the active medium. The unique geometry of this detector allows for the measurement of both the position and residual energy of protons, eliminating the need for a separate set of position trackers in the system. The average position and energy of a pencil beam of 10⁶ protons are used to reconstruct the image, rather than analyzing individual proton data. Simplicity and efficiency were major objectives in this model in order to present an imaging technique that is compact, cost-effective, and precise, as well as practical for a clinical setting with pencil-beam scanning proton therapy equipment. In this work, the development of the novel high-density glass scintillator and the unique conceptual design of the imager are discussed; a proof-of-principle Monte Carlo simulation study is performed; and preliminary two-dimensional images reconstructed from the Geant4 simulation are presented.
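A quick numerical aside on why averaging over a spot of ~10⁶ protons can replace per-proton tracking: the standard error of the beam-average position falls as 1/√N. The spread value below is hypothetical, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
sigma_single = 5.0          # mm, hypothetical per-proton lateral spread
N = 10**6                   # protons per pencil-beam spot, cf. abstract

# Simulate 50 independent spots and look at the scatter of their means.
spot_means = [rng.normal(0.0, sigma_single, N).mean() for _ in range(50)]
sem_expected = sigma_single / np.sqrt(N)    # 0.005 mm = 5 microns
```

Even a millimetre-scale per-proton spread yields a micron-scale uncertainty on the spot average, which is the statistical basis for imaging from beam averages.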
Simultaneous Mass Determination for Gravitationally Coupled Asteroids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baer, James; Chesley, Steven R., E-mail: jimbaer1@earthlink.net
The conventional least-squares asteroid mass determination algorithm allows us to solve for the mass of a large subject asteroid that is perturbing the trajectory of a smaller test asteroid. However, this algorithm is necessarily a first approximation, ignoring the possibility that the subject asteroid may itself be perturbed by the test asteroid, or that the encounter’s precise geometry may be entangled with encounters involving other asteroids. After reviewing the conventional algorithm, we use it to calculate the masses of 30 main-belt asteroids. Compared to our previous results, we find new mass estimates for eight asteroids (11 Parthenope, 27 Euterpe, 51 Nemausa, 76 Freia, 121 Hermione, 324 Bamberga, 476 Hedwig, and 532 Herculina) and significantly more precise estimates for six others (2 Pallas, 3 Juno, 4 Vesta, 9 Metis, 16 Psyche, and 88 Thisbe). However, we also find that the conventional algorithm yields questionable results in several gravitationally coupled cases. To address such cases, we describe a new algorithm that allows the epoch state vectors of the subject asteroids to be included as solve-for parameters, allowing for the simultaneous solution of the masses and epoch state vectors of multiple subject and test asteroids. We then apply this algorithm to the same 30 main-belt asteroids and conclude that mass determinations resulting from current and future high-precision astrometric sources (such as Gaia) should conduct a thorough search for possible gravitational couplings and account for their effects.
Measuring cosmic shear and birefringence using resolved radio sources
NASA Astrophysics Data System (ADS)
Whittaker, Lee; Battye, Richard A.; Brown, Michael L.
2018-02-01
We develop a new method of extracting simultaneous measurements of weak lensing shear and a local rotation of the plane of polarization using observations of resolved radio sources. The basis of the method is an assumption that the direction of the polarization is statistically linked with that of the gradient of the total intensity field. Using a number of sources spread over the sky, this method will allow constraints to be placed on cosmic shear and birefringence, and it can be applied to any resolved radio sources for which such a correlation exists. Assuming that the rotation and shear are constant across the source, we use this relationship to construct a quadratic estimator and investigate its properties using simulated observations. We develop a calibration scheme using simulations based on the observed images to mitigate a bias which occurs in the presence of measurement errors and an astrophysical scatter on the polarization. The method is applied directly to archival data of radio galaxies where we measure a mean rotation signal of $\omega = -2.02^{\circ} \pm 0.75^{\circ}$ and an average shear compatible with zero using 30 reliable sources. This level of constraint on an overall rotation is comparable with current leading constraints from CMB experiments and is expected to increase by at least an order of magnitude with future high precision radio surveys, such as those performed by the SKA. We also measure the shear and rotation two-point correlation functions and estimate the number of sources required to detect shear and rotation correlations in future surveys.
Montcalm, Claude [Livermore, CA; Folta, James Allen [Livermore, CA; Tan, Swie-In [San Jose, CA; Reiss, Ira [New City, NY
2002-07-30
A method and system for producing a film (preferably a thin film with highly uniform or highly accurate custom graded thickness) on a flat or graded substrate (such as concave or convex optics), by sweeping the substrate across a vapor deposition source operated with time-varying flux distribution. In preferred embodiments, the source is operated with time-varying power applied thereto during each sweep of the substrate to achieve the time-varying flux distribution as a function of time. A user selects a source flux modulation recipe for achieving a predetermined desired thickness profile of the deposited film. The method relies on precise modulation of the deposition flux to which a substrate is exposed to provide a desired coating thickness distribution.
Software designs of image processing tasks with incremental refinement of computation.
Anastasia, Davide; Andreopoulos, Yiannis
2010-08-01
Software realizations of computationally-demanding image processing tasks (e.g., image transforms and convolution) do not currently provide graceful degradation when their clock-cycle budgets are reduced, e.g., when delay deadlines are imposed in a multitasking environment to meet throughput requirements. This is an important obstacle in the quest for full utilization of modern programmable platforms' capabilities since worst-case considerations must be in place for reasonable quality of results. In this paper, we propose (and make available online) platform-independent software designs performing bitplane-based computation combined with an incremental packing framework in order to realize block transforms, 2-D convolution and frame-by-frame block matching. The proposed framework realizes incremental computation: progressive processing of input-source increments improves the output quality monotonically. Comparisons with the equivalent nonincremental software realization of each algorithm reveal that, for the same precision of the result, the proposed approach can lead to comparable or faster execution, while it can be arbitrarily terminated and provide the result up to the computed precision. Application examples with region-of-interest based incremental computation, task scheduling per frame, and energy-distortion scalability verify that our proposal provides significant performance scalability with graceful degradation.
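The core idea of bitplane-based incremental computation can be sketched in a few lines (a simplification of the paper's designs, which also pack bits for fast execution): feed the most significant bitplanes first, and the approximation error of the input, and hence of any computation on it, shrinks monotonically with each increment.

```python
import numpy as np

def bitplanes(img, nbits=8):
    # Most significant plane first, so early increments matter most.
    return [((img >> b) & 1) << b for b in range(nbits - 1, -1, -1)]

def approx_from(img, k):
    """Reconstruct the input from its top-k bitplanes (the state after
    k increments of progressive processing)."""
    return sum(bitplanes(img)[:k])

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)

# Input-reconstruction error after each increment: monotone improvement.
errs = [np.abs(img.astype(int) - approx_from(img, k)).mean()
        for k in range(1, 9)]
```

Terminating after k increments yields a valid result at the precision computed so far, which is the graceful-degradation property the abstract describes.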
The many flavours of photometric redshifts
NASA Astrophysics Data System (ADS)
Salvato, Mara; Ilbert, Olivier; Hoyle, Ben
2018-06-01
For more than 70 years, the colours of galaxies derived from flux measurements at different wavelengths have been used to estimate their cosmological distances. Such distance measurements, called photometric redshifts, are necessary for many scientific projects, ranging from investigations of the formation and evolution of galaxies and active galactic nuclei to precision cosmology. The primary benefit of photometric redshifts is that distance estimates can be obtained relatively cheaply for all sources detected in photometric images. The drawback is that these cheap estimates have low precision compared with resource-expensive spectroscopic ones. The methodology for estimating redshifts has been through several revolutions in recent decades, triggered by increasingly stringent requirements on the photometric redshift accuracy. Here, we review the various techniques for obtaining photometric redshifts, from template-fitting to machine learning and hybrid schemes. We also describe state-of-the-art results on current extragalactic samples and explain how survey strategy choices affect redshift accuracy. We close with a description of the photometric redshift efforts planned for upcoming wide-field surveys, which will collect data on billions of galaxies, aiming to investigate, among other matters, the stellar mass assembly and the nature of dark energy.
A broadband chip-scale optical frequency synthesizer at 2.7 × 10−16 relative uncertainty
Huang, Shu-Wei; Yang, Jinghui; Yu, Mingbin; McGuyer, Bart H.; Kwong, Dim-Lee; Zelevinsky, Tanya; Wong, Chee Wei
2016-01-01
Optical frequency combs—coherent light sources that connect optical frequencies with microwave oscillations—have become the enabling tool for precision spectroscopy, optical clockwork, and attosecond physics over the past decades. Current benchmark systems are self-referenced femtosecond mode-locked lasers, but Kerr nonlinear dynamics in high-Q solid-state microresonators has recently demonstrated promising features as alternative platforms. The advance not only fosters studies of chip-scale frequency metrology but also extends the realm of optical frequency combs. We report the full stabilization of chip-scale optical frequency combs. The microcomb’s two degrees of freedom, one of the comb lines and the native 18-GHz comb spacing, are simultaneously phase-locked to known optical and microwave references. Active comb spacing stabilization improves long-term stability by six orders of magnitude, reaching a record instrument-limited residual instability of 3.6mHz/τ. Comparing 46 nitride frequency comb lines with a fiber laser frequency comb, we demonstrate the unprecedented microcomb tooth-to-tooth relative frequency uncertainty down to 50 mHz and 2.7 × 10−16, heralding novel solid-state applications in precision spectroscopy, coherent communications, and astronomical spectrography. PMID:27152341
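The "two degrees of freedom" statement follows from the comb equation f_n = f_0 + n·f_rep: once one line and the spacing are locked, every tooth is known absolutely. A small arithmetic sketch, with the 18 GHz spacing taken from the abstract and all other numbers purely illustrative:

```python
# Every comb tooth sits at f_n = f_0 + n * f_rep, so locking one line
# and the spacing pins down the whole comb.
f_rep = 18e9                  # comb spacing from the abstract, 18 GHz
f_ref = 193.4e12              # hypothetical optical reference (~1550 nm)
n_ref = round(f_ref / f_rep)  # index of the tooth locked to the reference
f_0 = f_ref - n_ref * f_rep   # implied carrier-envelope-style offset

# Any other tooth is then known absolutely, e.g. 46 teeth away
# (cf. the 46-line comparison in the abstract):
f_m = f_0 + (n_ref + 46) * f_rep
```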
MicroRNA applications for prostate, ovarian and breast cancer in the era of precision medicine
Smith, Bethany; Agarwal, Priyanka
2017-01-01
The high degree of conservation in microRNA from Caenorhabditis elegans to humans has enabled relatively rapid implementation of findings in model systems to the clinic. The convergence of the capacity for genomic screening being implemented in the prevailing precision medicine initiative and the capabilities of microRNA to address these changes holds significant promise. However, prostate, ovarian and breast cancers are heterogeneous and face issues of evolving therapeutic resistance. The transforming growth factor-beta (TGFβ) signaling axis plays an important role in the progression of these cancers by regulating microRNAs. Reciprocally, microRNAs regulate TGFβ actions during cancer progression. The expression of miRNA in the tumor microenvironment must be considered both as a source of biomarkers of disease progression and as a viable therapeutic target. The differential expression pattern of microRNAs in health and disease, therapeutic response and resistance has resulted in their application as robust biomarkers. With two microRNA mimetics in ongoing restorative clinical trials, the paradigm for future clinical studies rests on the current observational trials to validate microRNA markers of disease progression. Some of today’s biomarkers can be translated to the next generation of microRNA-based therapies. PMID:28289080
NASA Astrophysics Data System (ADS)
Ding, Xiang; Li, Fei; Zhang, Jiyan; Liu, Wenli
2016-10-01
Raman spectrometers are usually calibrated periodically to ensure their measurement accuracy of Raman shift. A combination of a monocrystalline silicon chip and a low-pressure discharge lamp is proposed as a candidate for the reference standard of Raman shift. A high-precision calibration technique is developed to accurately determine the standard value of the silicon's Raman shift around 520 cm-1. The technique is described and illustrated by measuring a piece of silicon chip against three atomic spectral lines of a neon lamp. A commercial Raman spectrometer is employed and its error characteristics of Raman shift are investigated. Error sources are evaluated based on theoretical analysis and experiments, including the sample factor, the instrumental factor, the laser factor and random factors. Experimental results show that the expanded uncertainty of the silicon's Raman shift around 520 cm-1 can achieve 0.3 cm-1 (k=2), which is more accurate than most currently used reference materials. The results are validated by comparison measurement between three Raman spectrometers. It is proved that the technique can remarkably enhance the accuracy of Raman shift, making it possible to use the silicon and the lamp to calibrate Raman spectrometers.
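The underlying conversion is simple: Raman shift is the wavenumber difference between the excitation laser and the scattered light, which is why well-known atomic emission wavelengths can anchor the wavenumber axis. A sketch with illustrative (not certified) wavelengths:

```python
# Raman shift (cm^-1) = 1e7/lambda_laser(nm) - 1e7/lambda_scattered(nm),
# since 1 nm = 1e-7 cm.  Calibrating against atomic lamp lines works
# because their wavelengths are known to high accuracy.

def raman_shift_cm1(laser_nm, scattered_nm):
    return 1e7 / laser_nm - 1e7 / scattered_nm

laser = 532.0                       # hypothetical 532 nm excitation
si = raman_shift_cm1(laser, 547.1)  # scattered line near silicon's peak
```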
Microholography of Living Organisms.
ERIC Educational Resources Information Center
Solem, Johndale C.; Baldwin, George C.
1982-01-01
By using intense pulsed coherent x-ray sources it will be possible to obtain magnified three-dimensional images of living elementary biological structures at precisely defined instants. Discussed are sources/geometrics for x-ray holography, x-radiation interactions, factors affecting resolution, recording the hologram, high-intensity holography,…
Airborne LiDAR : a new source of traffic flow data, research implementation plan.
DOT National Transportation Integrated Search
2005-10-01
LiDAR (or airborne laser scanning) systems became a dominant player in high-precision spatial data acquisition in the late 1990s. This new technology quickly established itself as the main source of surface information in commercial mapping, deliverin...
NASA Astrophysics Data System (ADS)
Huang, Y. W.; Berman, E. S.; Owano, T. G.; Verfaillie, J. G.; Oikawa, P. Y.; Baldocchi, D. D.; Still, C. J.; Gardner, A.; Baer, D. S.; Rastogi, B.
2015-12-01
Stable CO2 isotopes provide information on biogeochemical processes that occur at the soil-plant-atmosphere interface. While δ13C measurement can provide information on the sources of the CO2, be it photosynthesis, natural gas combustion, other fossil fuel sources, landfills or other sources, δ18O and δ17O are thought to be determined by the hydrological cycling of the CO2. Though researchers have called for analytical tools for CO2 isotope measurements that are reliable and field-deployable, developing such an instrument remains a challenge. The carbon dioxide isotope analyzer developed by Los Gatos Research (LGR) uses LGR's patented Off-Axis ICOS (Integrated Cavity Output Spectroscopy) technology and incorporates proprietary internal thermal control for high sensitivity and optimal instrument stability. This new and improved analyzer measures CO2 concentration as well as δ13C, δ18O, and δ17O from CO2 at natural abundance (150-2500 ppm). The laboratory precision is ±200 ppb (1σ) in CO2 at 1 s, with a long-term (2 min) precision of ±20 ppb. The 1-second precision for both δ13C and δ18O is 0.7 ‰, and for δ17O is 1.8 ‰. The long-term (2 min) precision for both δ13C and δ18O is 0.08 ‰, and for δ17O is 0.18 ‰. The instrument has improved precision, stability and user interface over previous LGR CO2 isotope instruments and can be easily programmed for periodic referencing and sampling from different sources when coupled with LGR's multiport inlet unit (MIU). We have deployed two of these instruments at two different field sites, one at Twitchell Island in Sacramento County, CA to monitor the CO2 isotopic fluxes from an alfalfa field from 6/29/2015-7/13/2015, and the other at the Wind River Experimental Forest in Washington to monitor primarily the oxygen isotopes of CO2 within the canopy from 8/4/2015 through mid-November 2015. Methodology, laboratory development and testing and field performance are presented.
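For readers new to the δ notation used throughout this record: isotope ratios are reported as per-mil deviations of a sample ratio from a reference standard. A minimal sketch for δ13C (the reference ratio quoted is the commonly cited VPDB value; the example sample is illustrative):

```python
# delta (permil) = (R_sample / R_standard - 1) * 1000
R_VPDB = 0.0111802          # 13C/12C of the VPDB standard (commonly cited)

def delta13C_permil(r_sample):
    return (r_sample / R_VPDB - 1.0) * 1000.0

# A sample depleted in 13C gives a negative delta; plant-respired CO2
# is typically around -25 permil:
r = R_VPDB * (1 - 25e-3)
d = delta13C_permil(r)
```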
Influence of local topography on precision irrigation management
USDA-ARS?s Scientific Manuscript database
Precision irrigation management is currently accomplished using spatial information about soil properties through soil series maps or electrical conductivity (EC) measurements. Crop yield, however, is consistently influenced by local topography, both in rain-fed and irrigated environments. Utilizing ...
Relocation of Groningen seismicity using refracted waves
NASA Astrophysics Data System (ADS)
Ruigrok, E.; Trampert, J.; Paulssen, H.; Dost, B.
2015-12-01
The Groningen gas field is a giant natural gas accumulation in the northeast of the Netherlands. The gas is in a reservoir at a depth of about 3 km. The naturally-fractured gas-filled sandstone extends roughly 45 by 25 km laterally and 140 m vertically. Decades of production have led to significant compaction of the sandstone. The (differential) compaction is thought to have reactivated existing faults and to be the main driver of induced seismicity. Precise earthquake location is difficult due to a complicated subsurface, which is likely why the current hypocentre estimates do not clearly correlate with the well-known fault network. The seismic velocity model down to reservoir depth is quite well known from extensive seismic surveys and borehole data. Most earthquake detections to date, however, were made with a sparse pre-2015 seismic network. For shallow seismicity (<5 km depth), horizontal source-receiver distances tend to be much larger than vertical distances. Consequently, preferred source-receiver travel paths are refractions over high-velocity layers below the reservoir. However, the seismic velocities of the layers below the reservoir are poorly known. We estimated an effective velocity model of the main refracting layer below the reservoir and use this for relocating past seismicity. We took advantage of vertical-borehole recordings to estimate precise P-wave (refraction) onset times and used a tomographic approach to find the laterally varying velocity field of the refracting layer. This refracting layer is then added to the known velocity model, and the combined model is used to relocate the past seismicity. From the resulting relocations we assess which of the faults are being reactivated.
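Why refracted paths dominate at large horizontal offsets can be seen from the textbook two-layer travel-time sketch below (not the paper's tomography): beyond a crossover offset, the head wave refracted along the fast layer arrives before the direct wave. Velocities and thickness are illustrative only.

```python
import math

v1, v2 = 3.0, 5.0      # km/s: slow overburden vs deeper fast refractor
h = 3.0                # km: thickness of the slow layer (hypothetical)

theta_c = math.asin(v1 / v2)              # critical angle of incidence
tau = 2 * h * math.cos(theta_c) / v1      # head-wave intercept time, s
x_cross = tau / (1.0 / v1 - 1.0 / v2)     # crossover offset, km

def t_direct(x):
    return x / v1                          # direct wave in the slow layer

def t_refract(x):
    return x / v2 + tau                    # head wave along the refractor
```

At the crossover offset the two arrivals coincide; beyond it the refraction is the first arrival, which is why precise refraction onset times constrain the layer below the reservoir.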
Chen, Guoli; Yang, Zhaohai; Eshleman, James R.; Netto, George J.
2016-01-01
Precision medicine, a concept that has recently emerged and has been widely discussed, emphasizes tailoring medical care to individuals largely based on information acquired from molecular diagnostic testing. As a vital aspect of precision cancer medicine, targeted therapy has been proven to be efficacious and less toxic for cancer treatment. Colorectal cancer (CRC) is one of the most common cancers and among the leading causes for cancer related deaths in the United States and worldwide. By far, CRC has been one of the most successful examples in the field of precision cancer medicine, applying molecular tests to guide targeted therapy. In this review, we summarize the current guidelines for anti-EGFR therapy, revisit the roles of pathologists in an era of precision cancer medicine, demonstrate the transition from traditional “one test-one drug” assays to multiplex assays, especially by using next-generation sequencing platforms in the clinical diagnostic laboratories, and discuss the future perspectives of tumor heterogeneity associated with anti-EGFR resistance and immune checkpoint blockage therapy in CRC. PMID:27699178
Automatic Generation of Data Types for Classification of Deep Web Sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ngu, A H; Buttler, D J; Critchlow, T J
2005-02-14
A Service Class Description (SCD) is an effective meta-data based approach for discovering Deep Web sources whose data exhibit some regular patterns. However, it is tedious and error prone to create an SCD description manually. Moreover, a manually created SCD is not adaptive to the frequent changes of Web sources. It requires its creator to identify all the possible input and output types of a service a priori. In many domains, it is impossible to exhaustively list all the possible input and output data types of a source in advance. In this paper, we describe machine learning approaches for automatic generation of the data types of an SCD. We propose two different approaches for learning data types of a class of Web sources. The Brute-Force Learner is able to generate data types that can achieve high recall, but with low precision. The Clustering-based Learner generates data types that have a high precision rate, but with a lower recall rate. We demonstrate the feasibility of these two learning-based solutions for automatic generation of data types for citation Web sources and present a quantitative evaluation of these two solutions.
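The precision/recall trade-off between the two learners can be made concrete with a toy example (the data-type names below are invented for illustration, not from the paper):

```python
def precision_recall(predicted, actual):
    """Precision and recall of a predicted set of data types."""
    tp = len(predicted & actual)
    return tp / len(predicted), tp / len(actual)

# Hypothetical ground-truth data types of a citation source:
actual  = {"author", "title", "year", "venue", "pages"}
# A brute-force learner over-generates; a clustering learner under-generates:
brute   = {"author", "title", "year", "venue", "pages", "noise1", "noise2"}
cluster = {"author", "title", "year"}

p_b, r_b = precision_recall(brute, actual)    # high recall, low precision
p_c, r_c = precision_recall(cluster, actual)  # high precision, low recall
```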
[Current situation and thoughts on precision medicine in the treatment of tumors in China].
Guo, J C; Yuan, D
2016-07-01
With the United States launching its "precision medicine initiative", the concept has spread worldwide and opened a new direction for the development of medicine. China has also started such a plan, trying to seize the opportunity. At present, tumors threaten human health with high incidence and mortality, and in China both have been on the rise, so oncology has become one of the most important fields for precision medicine. Precision medicine in tumor treatment is discussed in this paper, hoping to reveal the Chinese characteristics of precision medicine and to maximize individual and social health benefits.
Dual current readout for precision plating
NASA Technical Reports Server (NTRS)
Iceland, W. F.
1970-01-01
Bistable amplifier prevents damage in the low range circuitry of a dual scale ammeter. It senses the current and switches automatically to the high range circuitry as the current rises above a preset level.
MR-based source localization for MR-guided HDR brachytherapy
NASA Astrophysics Data System (ADS)
Beld, E.; Moerland, M. A.; Zijlstra, F.; Viergever, M. A.; Lagendijk, J. J. W.; Seevinck, P. R.
2018-04-01
For the purpose of MR-guided high-dose-rate (HDR) brachytherapy, a method for real-time localization of an HDR brachytherapy source was developed, which requires high spatial and temporal resolutions. MR-based localization of an HDR source serves two main aims. First, it enables real-time treatment verification by determination of the HDR source positions during treatment. Second, when using a dummy source, MR-based source localization provides an automatic detection of the source dwell positions after catheter insertion, allowing elimination of the catheter reconstruction procedure. Localization of the HDR source was conducted by simulation of the MR artifacts, followed by a phase correlation localization algorithm applied to the MR images and the simulated images, to determine the position of the HDR source in the MR images. To increase the temporal resolution of the MR acquisition, the spatial resolution was decreased, and a subpixel localization operation was introduced. Furthermore, parallel imaging (sensitivity encoding) was applied to further decrease the MR scan time. The localization method was validated by a comparison with CT, and the accuracy and precision were investigated. The results demonstrated that the described method could be used to determine the HDR source position with a high accuracy (0.4–0.6 mm) and a high precision (⩽0.1 mm), at high temporal resolutions (0.15–1.2 s per slice). This would enable real-time treatment verification as well as an automatic detection of the source dwell positions.
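The phase correlation step the abstract mentions can be sketched with a generic implementation (a standard technique; the simulated artifact template here is just random data, not an MR artifact model): the peak of the inverse-transformed cross-power spectrum gives the translation between template and observation.

```python
import numpy as np

def phase_correlation_shift(template, image):
    """Return the integer shift d such that np.roll(template, d) matches
    `image`, found from the phase of the cross-power spectrum."""
    F1, F2 = np.fft.fft2(template), np.fft.fft2(image)
    cross = F2 * np.conj(F1)
    cross /= np.abs(cross) + 1e-12        # keep phase information only
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrapped FFT indices to signed shifts
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))

# A stand-in "artifact template" and a translated observation of it:
rng = np.random.default_rng(2)
template = rng.standard_normal((64, 64))
observed = np.roll(template, shift=(5, -3), axis=(0, 1))
```

In practice the peak neighbourhood is interpolated to reach the subpixel precision the paper reports; the integer-pixel version above shows the core of the method.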
Development of an opsonophagocytic killing assay for group a streptococcus.
Jones, Scott; Moreland, Nicole J; Zancolli, Marta; Raynes, Jeremy; Loh, Jacelyn M S; Smeesters, Pierre R; Sriskandan, Shiranee; Carapetis, Jonathan R; Fraser, John D; Goldblatt, David
2018-05-15
Group A Streptococcus (GAS) or Streptococcus pyogenes is responsible for an estimated 500,000 deaths worldwide each year. Protection against GAS infection is thought to be mediated by phagocytosis, enhanced by bacteria-specific antibody. There are no licenced GAS vaccines, despite many promising candidates in preclinical and early stage clinical development, the most advanced of which are based on the GAS M-protein. Vaccine progress has been hindered, in part, by the lack of a standardised functional assay suitable for vaccine evaluation. Current assays, developed over 50 years ago, rely on non-immune human whole blood as a source of neutrophils and complement. Variations in complement and neutrophil activity between donors result in variable data that is difficult to interpret. We have developed an opsonophagocytic killing assay (OPKA) for GAS that utilises dimethylformamide (DMF)-differentiated human promyelocytic leukemia cells (HL-60) as a source of neutrophils and baby rabbit complement, thus removing the major sources of variation in current assays. We have standardised the OPKA for several clinically relevant GAS strain types (emm1, emm6 and emm12) and have shown antibody-specific killing for each emm-type using M-protein specific rabbit antisera. Specificity was demonstrated by pre-incubation of the antisera with homologous M-protein antigens that blocked antibody-specific killing. Additional qualifications of the GAS OPKA, including the assessment of the accuracy, precision, linearity and the lower limit of quantification, were also performed. This GAS OPKA assay has the potential to provide a robust and reproducible platform to accelerate GAS vaccine development. Copyright © 2018 Elsevier Ltd. All rights reserved.
Precision medicine for nurses: 101.
Lemoine, Colleen
2014-05-01
To introduce the key concepts and terms associated with precision medicine and support understanding of future developments in the field by providing an overview and history of precision medicine, related ethical considerations, and nursing implications. Current nursing, medical and basic science literature. Rapid progress in understanding the oncogenic drivers associated with cancer is leading to a shift toward precision medicine, where treatment is based on targeting specific genetic and epigenetic alterations associated with a particular cancer. Nurses will need to embrace the paradigm shift to precision medicine, expend the effort necessary to learn the essential terminology, concepts and principles, and work collaboratively with physician colleagues to best position our patients to maximize the potential that precision medicine can offer. Copyright © 2014 Elsevier Inc. All rights reserved.
Review on the progress of ultra-precision machining technologies
NASA Astrophysics Data System (ADS)
Yuan, Julong; Lyu, Binghai; Hang, Wei; Deng, Qianfa
2017-06-01
Ultra-precision machining technologies are essential for obtaining the highest form accuracy and surface quality. As more research findings are published, such technologies now involve complicated systems engineering and have been widely used in the production of components for aerospace, national defense, optics, mechanics, electronics, and other high-tech applications. The conception, applications and history of ultra-precision machining are introduced in this article, and developments in ultra-precision machining technologies, especially ultra-precision grinding, ultra-precision cutting and polishing, are also reviewed. The current state and problems of this field in China are analyzed. Finally, the development trends of this field and the coping strategies employed in China to keep up with these trends are discussed.
Forming Mandrels for X-Ray Mirror Substrates
NASA Technical Reports Server (NTRS)
Blake, Peter N.; Saha, Timo; Zhang, Will; O'Dell, Stephen; Kester, Thomas; Jones, William
2011-01-01
Precision forming mandrels are one element of X-ray mirror development at NASA. The current mandrel fabrication process is capable of meeting the allocated precision requirements for a 5 arcsec telescope. A manufacturing plan is outlined for a large IXO-scale program.
NASA Astrophysics Data System (ADS)
Koch, Karl
2010-05-01
Quantitative modeling of infrasound signals and development and verification of the corresponding atmospheric propagation models requires the use of well-calibrated sources. Numerous sources have been detected by the currently installed network of about 40 of the final 60 IMS infrasound stations. Besides non-nuclear explosions such as mining and quarry blasts and atmospheric phenomena like auroras, these sources include meteorites, volcanic eruptions and supersonic aircraft including re-entering spacecraft and rocket launches. All these sources of infrasound have one feature in common, in that their source parameters are not precisely known and the quantitative interpretation of the corresponding signals is therefore somewhat ambiguous. A source considered well-calibrated has been identified producing repeated infrasound signals at the IMS infrasound station IS26 in the Bavarian forest. The source results from propulsion tests of the ARIANE-5 rocket's main engine at a testing facility near Heilbronn, southern Germany. The test facility is at a range of 320 km and a backazimuth of ~280° from IS26. Ground-truth information was obtained for nearly 100 tests conducted in a 5-year period. Review of the available data for IS26 revealed that at least 28 of these tests show signals above the background noise level. These signals are verified based on the consistency of various signal parameters, e.g., arrival times, durations, and estimates of propagation characteristics (backazimuth, apparent velocity). Signal levels observed are a factor of 2-8 above the noise and reach values of up to 250 mPa for peak amplitudes, and a factor of 2-3 less for RMS measurements. Furthermore, only tests conducted during the months from October to April produce observable signals, indicating a significant change in infrasound propagation conditions between summer and winter months.
Alternative Solvents and Technologies for Precision Cleaning of Aerospace Components
NASA Technical Reports Server (NTRS)
Grandelli, Heather; Maloney, Phillip; DeVor, Robert; Hintze, Paul
2014-01-01
Precision cleaning solvents for aerospace components and oxygen fuel systems, including currently used Vertrel-MCA, have a negative environmental legacy, high global warming potential, and have polluted cleaning sites. Thus, alternative solvents and technologies are being investigated with the aim of achieving precision contamination levels of less than 1 mg/sq ft. The technologies being evaluated are ultrasonic bath cleaning, plasma cleaning and supercritical carbon dioxide cleaning.
NASA Astrophysics Data System (ADS)
Hishikawa, Yoshihiro; Doi, Takuya; Higa, Michiya; Ohshima, Hironori; Takenouchi, Takakazu; Yamagoe, Kengo
2017-08-01
Precise outdoor measurement of the current-voltage (I-V) curves of photovoltaic (PV) modules is desired for many applications such as low-cost onsite performance measurement, monitoring, and diagnosis. Conventional outdoor measurement technologies have low precision when the solar irradiance is unstable, limiting the opportunity for precise measurement to clear sunny days. The purpose of this study is to investigate an outdoor measurement procedure that can improve both the measurement opportunity and the precision. Fast I-V curve measurements within 0.2 s and synchronous measurement of irradiance using a PV-module irradiance sensor very effectively improved the precision. A small standard deviation (σ) of the module's maximum output power (Pmax), in the range of 0.7-0.9%, is demonstrated on the basis of a 6-month experiment that mainly included partly sunny and cloudy days, during which the solar irradiance was unstable. The σ was further improved to 0.3-0.5% by correcting the curves for small variations in irradiance. This indicates that the procedure of this study enables much more reproducible I-V curve measurements than a conventional procedure under various climatic conditions. Factors that affect the measurement results are discussed with a view to further improving the precision.
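The irradiance correction mentioned in the abstract can be sketched in a few lines. This is not the authors' exact procedure; it assumes the common first-order model in which module current scales linearly with irradiance while voltage is left unchanged (temperature effects ignored), and the function and variable names are ours.

```python
# Sketch (assumed first-order model, not the authors' exact method):
# correct a measured I-V curve to a reference irradiance by scaling
# each current sample; voltage points are left as measured.
def correct_iv_to_reference(currents, irradiance_measured, irradiance_ref=1000.0):
    """Scale currents by the ratio of reference to measured irradiance (W/m^2)."""
    scale = irradiance_ref / irradiance_measured
    return [i * scale for i in currents]

# Example sweep measured at 950 W/m^2, corrected to 1000 W/m^2
currents = [5.0, 4.8, 4.0, 0.0]  # amperes along the I-V sweep
corrected = correct_iv_to_reference(currents, irradiance_measured=950.0)
```

A synchronous irradiance reading per I-V sweep, as the abstract describes, is what makes such a per-curve correction meaningful when irradiance is unstable.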
Toward the use of precision medicine for the treatment of head and neck squamous cell carcinoma.
Gong, Wang; Xiao, Yandi; Wei, Zihao; Yuan, Yao; Qiu, Min; Sun, Chongkui; Zeng, Xin; Liang, Xinhua; Feng, Mingye; Chen, Qianming
2017-01-10
Precision medicine is a new strategy that aims at preventing and treating human diseases by focusing on individual variations in people's genes, environment and lifestyle. Precision medicine has been used for cancer diagnosis and treatment and shows evident clinical efficacy. Rapid developments in molecular biology, genetics and sequencing technologies, as well as computational technology, have enabled the establishment of "big data", such as the Human Genome Project, which provides a basis for precision medicine. Head and neck squamous cell carcinoma (HNSCC) is an aggressive cancer with a high incidence rate and low survival rate. Current therapies are often aggressive and carry considerable side effects. Much research now indicates that precision medicine can be used for HNSCC and may achieve improved results. From this perspective, we present an overview of the current status, potential strategies, and challenges of precision medicine in HNSCC. We focus on targeted therapy based on the cell surface signaling receptors epidermal growth factor receptor (EGFR), vascular endothelial growth factor (VEGF) and human epidermal growth factor receptor-2 (HER2), and on the PI3K/AKT/mTOR, JAK/STAT3 and RAS/RAF/MEK/ERK cellular signaling pathways. Gene therapy for the treatment of HNSCC is also discussed.
Toward the use of precision medicine for the treatment of head and neck squamous cell carcinoma
Gong, Wang; Xiao, Yandi; Wei, Zihao; Yuan, Yao; Qiu, Min; Sun, Chongkui; Zeng, Xin; Liang, Xinhua; Feng, Mingye; Chen, Qianming
2017-01-01
Precision medicine is a new strategy that aims at preventing and treating human diseases by focusing on individual variations in people's genes, environment and lifestyle. Precision medicine has been used for cancer diagnosis and treatment and shows evident clinical efficacy. Rapid developments in molecular biology, genetics and sequencing technologies, as well as computational technology, have enabled the establishment of "big data", such as the Human Genome Project, which provides a basis for precision medicine. Head and neck squamous cell carcinoma (HNSCC) is an aggressive cancer with a high incidence rate and low survival rate. Current therapies are often aggressive and carry considerable side effects. Much research now indicates that precision medicine can be used for HNSCC and may achieve improved results. From this perspective, we present an overview of the current status, potential strategies, and challenges of precision medicine in HNSCC. We focus on targeted therapy based on the cell surface signaling receptors epidermal growth factor receptor (EGFR), vascular endothelial growth factor (VEGF) and human epidermal growth factor receptor-2 (HER2), and on the PI3K/AKT/mTOR, JAK/STAT3 and RAS/RAF/MEK/ERK cellular signaling pathways. Gene therapy for the treatment of HNSCC is also discussed. PMID:27924064
Saltabayeva, Ulbosin; Garib, Victoria; Morenko, Marina; Rosenson, Rafail; Ispayeva, Zhanat; Gatauova, Madina; Zulus, Loreta; Karaulov, Alexander; Gastager, Felix; Valenta, Rudolf
2017-01-01
Allergen molecule-based diagnosis has been suggested to facilitate the identification of disease-causing allergen sources and the prescription of allergen-specific immunotherapy (AIT). The aim of the current study was to compare allergen molecule-based IgE serology with allergen extract-based skin testing for the identification of the disease-causing allergen sources. The study was conducted in an area where patients are exposed to pollen from multiple sources (trees, grasses, and weeds) at the same time to compare the diagnostic efficiency of the 2 forms of diagnosis. Patients from Astana, Kazakhstan, who suffered from pollen-induced allergy (n = 95) were subjected to skin prick testing (SPT) with a local panel of tree pollen, grass pollen, and weed pollen allergen extracts and IgE antibodies specific for marker allergen molecules (nArt v 1, nArt v 3, rAmb a 1, rPhl p 1, rPhl p 5, rBet v 1) were measured by ImmunoCAP. Direct and indirect costs for diagnosis based on SPT and marker allergen-based IgE serology as well as direct costs for immunotherapy depending on SPT and serological test results were calculated. The costs for SPT-based diagnosis per patient were lower than the costs for allergen molecule-based IgE serology. However, allergen molecule-based serology was more precise in detecting the disease-causing allergen sources. A lower number of immunotherapy treatments (n = 119) was needed according to molecular diagnosis as compared to extract-based diagnosis (n = 275), which considerably reduced the total costs for diagnosis and for a 3-year treatment from EUR 1,112.30 to 521.77 per patient. The results from this real-life study show that SPT is less expensive than allergen molecule-based diagnostic testing, but molecular diagnosis allowed more precise prescription of immunotherapy which substantially reduced treatment costs and combined costs for diagnosis and treatment. © 2017 The Author(s) Published by S. Karger AG, Basel.
Cagliani, Alberto; Østerberg, Frederik W; Hansen, Ole; Shiv, Lior; Nielsen, Peter F; Petersen, Dirch H
2017-09-01
We present a breakthrough in micro-four-point probe (M4PP) metrology to substantially improve precision of transmission line (transfer length) type measurements by application of advanced electrode position correction. In particular, we demonstrate this methodology for the M4PP current-in-plane tunneling (CIPT) technique. The CIPT method has been a crucial tool in the development of magnetic tunnel junction (MTJ) stacks suitable for magnetic random-access memories for more than a decade. On two MTJ stacks, the measurement precision of resistance-area product and tunneling magnetoresistance was improved by up to a factor of 3.5 and the measurement reproducibility by up to a factor of 17, thanks to our improved position correction technique.
MuSET, A High Precision Logging Sensor For Downhole Spontaneous Electrical Potential.
NASA Astrophysics Data System (ADS)
Pezard, P. A.; Gautier, S.; Le Borgne, T.; Deltombe, J.
2008-12-01
MuSET has been designed by ALT and CNRS in the context of the EC ALIANCE research project. It is based on an existing multi-parameter borehole fluid sensor (p, T, Cw, pH, Eh) built by ALT. The new downhole geophysical tool aims to measure subsurface spontaneous electrical potentials (SP) in situ with great precision (< µV). For this, the device includes an unpolarizable Pb/PbCl2 electrode referenced to a similar one at the surface. Initial field testing in Montpellier (Languedoc, France), Ploemeur (Brittany, France) and Campos (Mallorca, Spain) took advantage of the set of field sites developed as part of ALIANCE and then as part of the environmental research observatory (ORE) network for hydrogeology "H+". While the Cretaceous marly limestone at Lavalette (Montpellier) proved to be almost exclusively a source of membrane potential, the clay-starved Miocene reefal carbonates of Campos generate a signal dominated by electrokinetic potential. This signal is generated by nearby agricultural pumping and the associated strong horizontal flow. At the top of the salt-to-fresh-water transition, a discrepancy between the SP signal and the absence of vertical flow measured with a heat-pulse flowmeter hints at a capacity to detect the "fluid-junction", or diffusion, potential. At Ploemeur, the altered granite found in the vicinity of faults and fractures is also the source of an SP signal, mostly surface related, while most fractures appear to be closed. In all, MuSET demonstrates a capacity to identify several subsurface sources of natural electrical potential, such as diffusion potentials (membrane potential in the presence of clays, Fickian processes due to pore-fluid salinity gradients) and the electrokinetic potential associated with pore-fluid pressure gradients. While spontaneous electrical currents often loop out of the borehole, MuSET might be used as a radial electrical flowmeter once the diffusion components are taken into account.
A novel design measuring method based on linearly polarized laser interference
NASA Astrophysics Data System (ADS)
Cao, Yanbo; Ai, Hua; Zhao, Nan
2013-09-01
The interferometric method is widely used in precision measurement, including measurement of the surface quality of large-aperture mirrors. Laser interference technology has developed rapidly as laser sources have become more mature and reliable. We adopted a laser diode as the source because its short coherence length suits a system whose optical path difference is only a few wavelengths, and its power is sufficient for measurement while remaining safe to the human eye. A 673 nm linearly polarized laser was selected, and we constructed a novel interferometric configuration we call a 'Closed Loop', comprising polarizing optical components such as a polarizing prism and quartz wave plates. Light from the source is split into a measuring beam and a reference beam, both of which are reflected by the mirror under test. After the two beams are transformed into circular polarizations spinning in opposite directions, we apply polarized-light synchronous phase-shift interference to obtain the detected fringes. This transfers the phase shifting from the time domain to space, so no precisely controlled shift of the optical path difference is needed, avoiding the disturbances introduced by air currents and vibration. We obtain interference fringes from four well-aligned CCD cameras, the fringes being shifted to four different phases of 0, π/2, π, and 3π/2. After acquiring the images, we align the fringes pixel to pixel across the cameras, synthesize the rough morphology, and, after removing systematic error, calculate the surface accuracy of the mirror under test. This detection method can be applied to measuring optical-system aberrations and could be developed into a portable structural interferometer for use in a variety of measurement settings.
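The four phase-shifted frames described above (0, π/2, π, 3π/2) are the ingredients of the standard four-step phase-shifting calculation. The sketch below shows that calculation for a single pixel; it is an illustration of the textbook algorithm, not code from the paper, and the frame and variable names are ours.

```python
import math

# Standard four-step phase-shifting recovery for one pixel.
# Model: I_k = A + B*cos(phi + k*pi/2) for k = 0..3, where A is the
# background, B the modulation, and phi the wavefront phase sought.
def four_step_phase(i1, i2, i3, i4):
    """Recover phi from four frames shifted by 0, pi/2, pi, 3*pi/2."""
    # I4 - I2 = 2B*sin(phi),  I1 - I3 = 2B*cos(phi)
    return math.atan2(i4 - i2, i1 - i3)

# Synthetic pixel with A = 2.0, B = 1.0, phi = 0.7 rad
A, B, phi = 2.0, 1.0, 0.7
frames = [A + B * math.cos(phi + k * math.pi / 2) for k in range(4)]
recovered = four_step_phase(*frames)  # recovers ~0.7 rad
```

Because all four frames are captured simultaneously on separate cameras, the phase at each pixel can be computed from one instant in time, which is what removes the sensitivity to vibration and air currents.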
Sherrell, Darren A.; Foster, Andrew J.; Hudson, Lee; ...
2015-01-01
The design and implementation of a compact and portable sample alignment system suitable for use at both synchrotron and free-electron laser (FEL) sources, and its performance, are described. The system provides the ability to quickly and reliably deliver large numbers of samples using the minimum amount of sample possible, through positioning of fixed-target arrays into the X-ray beam. The combination of high-precision stages, high-quality sample viewing, a fast controller and a software layer overcomes many of the challenges associated with sample alignment. A straightforward interface that minimizes setup and sample changeover time as well as simplifying communication with the stages during the experiment is also described, together with an intuitive naming convention for defining, tracking and locating sample positions. Lastly, the setup allows the precise delivery of samples in predefined locations to a specific position in space and time, reliably and simply.
Deng, Xi; Schröder, Simone; Redweik, Sabine; Wätzig, Hermann
2011-06-01
Gel electrophoresis (GE) is a very common analytical technique for proteome research and protein analysis. Although it was developed decades ago, there is still a considerable need to improve its precision. Using the fluorescence of Colloidal Coomassie Blue-stained proteins in the near-infrared (NIR), the major error source caused by unpredictable background staining is strongly reduced. This result was generalized for various types of detectors. Since GE is a multi-step procedure, standardization of every single step is required. After detailed analysis of all steps, the staining and destaining were identified as the major sources of the remaining variation. By employing standardized protocols, pooled percent relative standard deviations of 1.2-3.1% for band intensities were achieved for one-dimensional separations in repetitive experiments. The analysis of variance suggests that the same batch of staining solution should be used for gels of one experimental series to minimize day-to-day variation and to obtain high precision. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
An open-source, extensible system for laboratory timing and control
NASA Astrophysics Data System (ADS)
Gaskell, Peter E.; Thorn, Jeremy J.; Alba, Sequoia; Steck, Daniel A.
2009-11-01
We describe a simple system for timing and control, which provides control of analog, digital, and radio-frequency signals. Our system differs from most common laboratory setups in that it is open source, built from off-the-shelf components, synchronized to a common and accurate clock, and connected over an Ethernet network. A simple bus architecture facilitates creating new and specialized devices with only moderate experience in circuit design. Each device operates independently, requiring only an Ethernet network connection to the controlling computer, a clock signal, and a trigger signal. This makes the system highly robust and scalable. The devices can all be connected to a single external clock, allowing synchronous operation of a large number of devices for situations requiring precise timing of many parallel control and acquisition channels. Provided an accurate enough clock, these devices are capable of triggering events separated by one day with near-microsecond precision. We have achieved precisions of ~0.1 ppb (parts per 10^9) over 16 s.
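The day-scale claim follows directly from the quoted fractional accuracy. A quick back-of-envelope check (our arithmetic, not the authors'):

```python
# A clock with fractional accuracy of 0.1 ppb (1e-10) accumulates timing
# error proportional to the elapsed interval; over one day this stays
# below 10 microseconds, consistent with "near-microsecond" precision
# for events separated by a day.
fractional_error = 0.1e-9        # 0.1 parts per 10^9
seconds_per_day = 86_400
drift = fractional_error * seconds_per_day  # ~8.64e-6 s, i.e. ~8.64 us
```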
THE IMPACT OF POINT-SOURCE SUBTRACTION RESIDUALS ON 21 cm EPOCH OF REIONIZATION ESTIMATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trott, Cathryn M.; Wayth, Randall B.; Tingay, Steven J., E-mail: cathryn.trott@curtin.edu.au
Precise subtraction of foreground sources is crucial for detecting and estimating 21 cm H I signals from the Epoch of Reionization (EoR). We quantify how imperfect point-source subtraction due to limitations of the measurement data set yields structured residual signal in the data set. We use the Cramér-Rao lower bound, as a metric for quantifying the precision with which a parameter may be measured, to estimate the residual signal in a visibility data set due to imperfect point-source subtraction. We then propagate these residuals into two metrics of interest for 21 cm EoR experiments, the angular power spectrum and the two-dimensional power spectrum, using a combination of full analytic covariant derivation, analytic variant derivation, and covariant Monte Carlo simulations. This methodology differs from previous work in two ways: (1) it uses information theory to set the point-source position error, rather than assuming a global rms error, and (2) it describes a method for propagating the errors analytically, thereby obtaining the full correlation structure of the power spectra. The methods are applied to two upcoming low-frequency instruments that are proposing to perform statistical EoR experiments: the Murchison Widefield Array and the Precision Array for Probing the Epoch of Reionization. In addition to the actual antenna configurations, we apply the methods to minimally redundant and maximally redundant configurations. We find that for peeling sources above 1 Jy, the amplitude of the residual signal, and its variance, will be smaller than the contribution from thermal noise for the observing parameters proposed for upcoming EoR experiments, and that optimal subtraction of bright point sources will not be a limiting factor for EoR parameter estimation. We then use the formalism to provide an ab initio analytic derivation motivating the 'wedge' feature in the two-dimensional power spectrum, complementing previous discussion in the literature.
Plasmonic micropillars for precision cell force measurement across a large field-of-view
NASA Astrophysics Data System (ADS)
Xiao, Fan; Wen, Ximiao; Tan, Xing Haw Marvin; Chiou, Pei-Yu
2018-01-01
A plasmonic micropillar platform with self-organized gold nanospheres is reported for the precision cell traction force measurement across a large field-of-view (FOV). Gold nanospheres were implanted into the tips of polymer micropillars by annealing gold microdisks with nanosecond laser pulses. Each gold nanosphere is physically anchored in the center of a pillar tip and serves as a strong, point-source-like light scattering center for each micropillar. This allows a micropillar to be clearly observed and precisely tracked even under a low magnification objective lens for the concurrent and precision measurement across a large FOV. A spatial resolution of 30 nm for the pillar deflection measurement has been accomplished on this platform with a 20× objective lens.
Canyval-x: Cubesat Astronomy by NASA and Yonsei Using Virtual Telescope Alignment Experiment
NASA Technical Reports Server (NTRS)
Shah, Neerav
2016-01-01
CANYVAL-X is a technology demonstration CubeSat mission with a primary objective of validating technologies that allow two spacecraft to fly in formation along an inertial line-of-sight (i.e., align two spacecraft to an inertial source). Demonstration of precision dual-spacecraft alignment achieving fine angular precision enables a variety of cutting-edge heliophysics and astrophysics science.
NASA Astrophysics Data System (ADS)
Saikia, C. K.; Roman-nieves, J. I.; Woods, M. T.
2013-12-01
Source parameters of nuclear and chemical explosions are often estimated by matching either the corner frequency and spectral level of a single event or the spectral ratio when spectra from two events are available with known source parameters for one. In this study, we propose an alternative method in which waveforms from two or more events can be simultaneously equalized by setting the difference of the processed seismograms at one station from any two individual events to zero. The method involves convolving the equivalent Mueller-Murphy displacement source time function (MMDSTF) of one event with the seismogram of the second event and vice versa, and then computing their difference seismogram. The MMDSTF is computed at the elastic radius, including both near- and far-field terms. For this method to yield accurate source parameters, an inherent assumption is that the Green's functions for any paired events from the source to a receiver are the same. In the frequency limit of the seismic data this is a reasonable assumption, based on comparison of Green's functions computed for flat-earth models at source depths ranging from 100 m to 1 km. Frequency-domain analysis of the initial P wave is, however, sensitive to the depth-phase interaction and, if tracked meticulously, can help estimate the event depth. We applied this method to the local waveforms recorded from the three SPE shots and precisely determined their yields. These high-frequency seismograms exhibit significant lateral path effects in spectrogram analysis and 3D numerical computations, but the source equalization technique is independent of any such variation as long as the instrument characteristics are well preserved. We are currently estimating the uncertainty in the derived source parameters, treating the yields of the SPE shots as unknown. We also collected regional waveforms from 95 NTS explosions at regional stations ALQ, ANMO, CMB, COR, JAS, LON, PAS, PFO and RSSD.
We are currently employing a station-based analysis using the equalization technique to estimate the depths and yields of many events relative to those of the announced explosions, and to develop their relationship with Mw and M0 for the NTS explosions.
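The equalization identity that the method exploits can be illustrated numerically: if two events share the same path response, then convolving each event's recording with the other event's source time function yields identical traces, so their difference vanishes. The sketch below uses synthetic stand-ins (our own random Green's function and made-up source time functions), not the Mueller-Murphy functions or SPE data.

```python
import numpy as np

# Synthetic demonstration of source equalization: seismogram = green * stf
# (convolution). For a common Green's function g,
#   (g * stf_a) * stf_b == (g * stf_b) * stf_a
# by commutativity/associativity of convolution, so the difference is zero.
rng = np.random.default_rng(0)
green = rng.standard_normal(200)            # shared path response (assumed)
stf_a = np.array([0.0, 1.0, 0.5, 0.1])      # stand-in source time functions
stf_b = np.array([0.0, 0.6, 0.9, 0.3])

seis_a = np.convolve(green, stf_a)          # "recorded" seismograms
seis_b = np.convolve(green, stf_b)

# Equalize and difference: vanishes up to floating-point error
diff = np.convolve(seis_b, stf_a) - np.convolve(seis_a, stf_b)
residual = np.max(np.abs(diff))             # ~0
```

In practice the residual is driven to zero only when the trial source parameters inside the source time functions are correct, which is what turns the identity into an estimator.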
Grabar, Natalia; Krivine, Sonia; Jaulent, Marie-Christine
2007-10-11
Making the distinction between expert and non-expert health documents can help users select the information most suitable for them, according to whether or not they are familiar with medical terminology. This issue is particularly important for information retrieval. In our work we address this purpose through stylistic corpus analysis and the application of machine learning algorithms. Our hypothesis is that this distinction can be made on the basis of a small number of features and that such features can be language and domain independent. The features were acquired from a source corpus (Russian language, diabetes topic) and then tested on both the source corpus and a target corpus (French language, pneumology topic). These cross-language features show 90% precision and 93% recall for non-expert documents in the source language, and 85% precision and 74% recall for expert documents in the target language.
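The reported figures are the standard precision and recall metrics for a binary classifier. A minimal sketch of how such figures are computed from classification counts (the counts below are made up for illustration, not taken from the study):

```python
# Precision = fraction of predicted positives that are correct;
# Recall    = fraction of actual positives that are found.
def precision_recall(tp, fp, fn):
    """Compute (precision, recall) from true/false positives and false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Illustrative counts: 90 true positives, 10 false positives, 7 false negatives
p, r = precision_recall(tp=90, fp=10, fn=7)
print(round(p, 2), round(r, 2))  # 0.9 0.93
```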
Reporting the accuracy of biochemical measurements for epidemiologic and nutrition studies.
McShane, L M; Clark, L C; Combs, G F; Turnbull, B W
1991-06-01
Procedures for reporting and monitoring the accuracy of biochemical measurements are presented. They are proposed as standard reporting procedures for laboratory assays in epidemiologic and clinical-nutrition studies. The recommended procedures require identification and estimation of all major sources of variability and explanations of the laboratory quality control procedures employed. Variance-components techniques are used to model the total variability and to calculate a maximum percent error that provides an easily understandable measure of laboratory precision accounting for all sources of variability. This avoids ambiguities encountered when reporting an SD that may take into account only a few of the potential sources of variability. Other proposed uses of the total-variability model include estimating the precision of laboratory methods for various replication schemes and developing effective quality-control checking schemes. These procedures are demonstrated with an example of the analysis of alpha-tocopherol in human plasma using high-performance liquid chromatography.
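The variance-components idea can be sketched briefly: independent sources of variability add on the variance scale, and the combined SD can then be expressed as a percentage of the mean. The component values and function name below are illustrative assumptions, not figures from the paper.

```python
import math

# Combine independent variance components (e.g. between-day, between-run,
# within-run) into a total SD, reported as a percent of the assay mean.
def total_percent_cv(mean, *variances):
    """Sum independent variance components; return SD as % of mean."""
    total_sd = math.sqrt(sum(variances))
    return 100.0 * total_sd / mean

# Illustration: mean analyte level 10.0, components 0.04 + 0.09 + 0.12
cv = total_percent_cv(10.0, 0.04, 0.09, 0.12)
print(cv)  # 5.0
```

Reporting the combined figure, rather than a single within-run SD, is what avoids the ambiguity the abstract describes.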
Glazer, A M; Collins, S P; Zekria, D; Liu, J; Golshan, M
2004-03-01
In 1947 Kathleen Lonsdale conducted a series of experiments on X-ray diffraction using a divergent beam external to a crystal sample. Unlike the Kossel technique, where divergent X-rays are excited by the presence of fluorescing atoms within the crystal, the use of an external divergent source made it possible to study non-fluorescing crystals. The resulting photographs not only illustrated the complexity of X-ray diffraction from crystals in a truly beautiful way, but also demonstrated unprecedented experimental precision. This long-forgotten work is repeated here using a synchrotron radiation source and, once again, considerable merit is found in Lonsdale's technique. The results of this experiment suggest that, through the use of modern 'third-generation' synchrotron sources, divergent-beam diffraction could soon enjoy a renaissance for high-precision lattice-parameter determination and the study of crystal perfection.
Araujo, Saulo de Freitas
2014-02-01
Wilhelm Wundt's biography is one of the main domains in Wundt scholarship that deserves more detailed attention. The few existing biographical works present many problems, ranging from vagueness to chronological inaccuracies, among others. One of the important gaps concerns the so-called Heidelberg period (1852-1874), during which he went from being a medical student to holding a professorship at the University of Heidelberg. The aim of this article is to dispel a very common confusion in the secondary literature, which refers to Wundt's assistantship with Helmholtz at the Physiological Institute, by establishing the precise dates of his assistantship. Contrary to what is generally repeated in the secondary literature, the primary sources allow us to determine precisely this period from October 1858 to March 1865. I conclude by pointing out the indispensability of the primary sources not only to Wundt scholarship but also to the historiography of psychology in general.
Carbon Nanotube Patterning on a Metal Substrate
NASA Technical Reports Server (NTRS)
Nguyen, Cattien V. (Inventor)
2016-01-01
A CNT electron source, a method of manufacturing a CNT electron source, and a solar cell utilizing a CNT patterned sculptured substrate are disclosed. Embodiments utilize a metal substrate which enables CNTs to be grown directly from the substrate. An inhibitor may be applied to the metal substrate to inhibit growth of CNTs from the metal substrate. The inhibitor may be precisely applied to the metal substrate in any pattern, thereby enabling the positioning of the CNT groupings to be more precisely controlled. The surface roughness of the metal substrate may be varied to control the density of the CNTs within each CNT grouping. Further, an absorber layer and an acceptor layer may be applied to the CNT electron source to form a solar cell, where a voltage potential may be generated between the acceptor layer and the metal substrate in response to sunlight exposure.
Shelly, David R.; Hardebeck, Jeanne L.
2010-01-01
We precisely locate 88 tremor families along the central San Andreas Fault using a 3D velocity model and numerous P and S wave arrival times estimated from seismogram stacks of up to 400 events per tremor family. Maximum tremor amplitudes vary along the fault by at least a factor of 7, with by far the strongest sources along a 25 km section of the fault southeast of Parkfield. We also identify many weaker tremor families, which have largely escaped prior detection. Together, these sources extend 150 km along the fault, beneath creeping, transitional, and locked sections of the upper crustal fault. Depths are mostly between 18 and 28 km, in the lower crust. Epicenters are concentrated within 3 km of the surface trace, implying a nearly vertical fault. A prominent gap in detectible activity is located directly beneath the region of maximum slip in the 2004 magnitude 6.0 Parkfield earthquake.
Pretorius, Etheresia
2017-01-01
The latest statistics from the 2016 heart disease and stroke statistics update show that cardiovascular disease is the leading global cause of death, currently accounting for more than 17.3 million deaths per year. Type II diabetes is also on the rise, with out-of-control numbers. To address these pandemics, we need to treat patients using an individualized patient care approach while simultaneously gathering data to support the precision medicine initiative. Last year the NIH announced the precision medicine initiative to generate novel knowledge regarding diseases, with a near-term focus on cancers, followed by a longer-term aim applicable to a whole range of health applications and diseases. The focus of this paper is to suggest a combined effort between the latest precision medicine initiative, researchers and clinicians, whereby novel techniques could immediately make a difference in patient care while adding, in the long term, to the knowledge base for precision medicine. We discuss the intricate relationship between individualized patient care and precision medicine and current thinking on which data are actually suitable for precision medicine data gathering. The uses of viscoelastic techniques in precision medicine are discussed, and we explore how these techniques might give novel perspectives on the success of treatment regimes for cardiovascular patients. Thrombo-embolic stroke, rheumatoid arthritis and type II diabetes are used as examples of diseases where precision medicine and a patient-orientated approach can possibly be implemented. In conclusion, it is suggested that only if all role players work together, embracing a new way of thinking about treating and managing cardiovascular disease and diabetes, will we be able to adequately address these out-of-control conditions. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
Design and characterization of a nano-Newton resolution thrust stand
NASA Astrophysics Data System (ADS)
Soni, J.; Roy, S.
2013-09-01
The paper describes the design, calibration, and characterization of a thrust stand capable of nano-Newton resolution. A low uncertainty calibration method is proposed and demonstrated. A passive eddy current based damper, which is non-contact and vacuum compatible, is employed. Signal analysis techniques are used to perform noise characterization, and potential sources are identified. Calibrated system noise floor suggests thrust measurement resolution of the order of 10 nN is feasible under laboratory conditions. Force measurement from this balance for a standard macroscale dielectric barrier discharge (DBD) plasma actuator is benchmarked with a commercial precision balance of 9.8 μN resolution and is found to be in good agreement. Published results of a microscale DBD plasma actuator force measurement and low pressure characterization of conventional plasma actuators are presented for completeness.
[Münchausen syndrome by proxy].
Berent, Dominika; Florkowski, Antoni; Gałecki, Piotr
2010-01-01
Münchausen syndrome by proxy (MSBP) is a psychiatric disorder and a particular form of child abuse, in which an impaired emotional relationship exists mainly between the mother and her child. Given the variety of victims' symptoms, physicians of all specialties may encounter this syndrome in everyday clinical practice. Still-insufficient knowledge about the syndrome, and its rare consideration in differential diagnosis, mean that only severe, potentially lethal cases are recognized; the remainder can be a source of long-term physical and mental injuries in victims for many years. About 30 years after the first attempt to precisely identify the signalling symptoms for a proper diagnosis, we present the current knowledge on epidemiology, aetiology, diagnostic criteria, and advised management, together with a psychological portrait of the mother with the syndrome and of her child, the syndrome's victim.
Nanosensor Technology Applied to Living Plant Systems.
Kwak, Seon-Yeong; Wong, Min Hao; Lew, Tedrick Thomas Salim; Bisker, Gili; Lee, Michael A; Kaplan, Amir; Dong, Juyao; Liu, Albert Tianxiang; Koman, Volodymyr B; Sinclair, Rosalie; Hamann, Catherine; Strano, Michael S
2017-06-12
An understanding of plant biology is essential to solving many long-standing global challenges, including sustainable and secure food production and the generation of renewable fuel sources. Nanosensor platforms, sensors with a characteristic dimension that is nanometer in scale, have emerged as important tools for monitoring plant signaling pathways and metabolism that are nondestructive, minimally invasive, and capable of real-time analysis. This review outlines the recent advances in nanotechnology that enable these platforms, including the measurement of chemical fluxes even at the single-molecule level. Applications of nanosensors to plant biology are discussed in the context of nutrient management, disease assessment, food production, detection of DNA and proteins, and the regulation of plant hormones. Current trends and future needs are discussed with respect to the emerging trends of precision agriculture, urban farming, and plant nanobionics.
Precise estimation of tropospheric path delays with GPS techniques
NASA Technical Reports Server (NTRS)
Lichten, S. M.
1990-01-01
Tropospheric path delays are a major source of error in deep space tracking. However, the tropospheric-induced delay at tracking sites can be calibrated using measurements of Global Positioning System (GPS) satellites. A series of experiments has demonstrated the high sensitivity of GPS to tropospheric delays. A variety of tests and comparisons indicates that current accuracy of the GPS zenith tropospheric delay estimates is better than 1-cm root-mean-square over many hours, sampled continuously at intervals of six minutes. These results are consistent with expectations from covariance analyses. The covariance analyses also indicate that by the mid-1990s, when the GPS constellation is complete and the Deep Space Network is equipped with advanced GPS receivers, zenith tropospheric delay accuracy with GPS will improve further to 0.5 cm or better.
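The way a zenith delay estimate calibrates a line-of-sight tracking observation can be sketched with the simplest elevation mapping function, m(E) = 1/sin(E). This is an illustrative textbook model only; operational GPS analyses use more refined mapping functions, especially at low elevations.

```python
import math

def slant_delay_cm(zenith_delay_cm, elevation_deg):
    """Map a zenith tropospheric delay to the slant (line-of-sight) delay
    with the simple cosecant mapping function m(E) = 1/sin(E)."""
    return zenith_delay_cm / math.sin(math.radians(elevation_deg))

# A 1 cm zenith delay maps to 1 cm at zenith but doubles at 30 degrees elevation:
print(round(slant_delay_cm(1.0, 90.0), 2))  # 1.0
print(round(slant_delay_cm(1.0, 30.0), 2))  # 2.0
```

This illustrates why sub-centimeter zenith delay accuracy matters: the error is amplified along low-elevation tracking geometries.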
Role of Lidar Technology in Future NASA Space Missions
NASA Technical Reports Server (NTRS)
Amzajerdian, Farzin
2008-01-01
The past success of lidar instruments in space, combined with the potential of laser remote sensing techniques to improve measurements traditionally performed by other instrument technologies and to enable new measurements, has expanded the role of lidar technology in future NASA missions. Compared with passive optical and active radar/microwave instruments, lidar systems produce substantially more accurate and precise data without reliance on natural light sources and with much greater spatial resolution. NASA pursues lidar technology not only for science instruments, providing atmospheric and surface topography data of Earth and other solar system bodies, but also as viable guidance and navigation sensors for space vehicles. This paper summarizes the current NASA lidar missions and describes the lidar systems being considered for deployment in space in the near future.
NASA Astrophysics Data System (ADS)
Subashchandran, Shanthi; Okamoto, Ryo; Zhang, Labao; Tanaka, Akira; Okano, Masayuki; Kang, Lin; Chen, Jian; Wu, Peiheng; Takeuchi, Shigeki
2013-10-01
The realization of an ultralow-dark-count rate (DCR) along with the conservation of high detection efficiency (DE) is critical for many applications using single photon detectors in quantum information technologies, material sciences, and biological sensing. For this purpose, a fiber-coupled superconducting nanowire single-photon detector (SNSPD) with a meander-type niobium nitride nanowire (width: 50 nm) is studied. Precise measurements of the bias current dependence of DE are carried out for a wide spectral range (from 500 to 1650 nm in steps of 50 nm) using a white light source and a laser line Bragg tunable band-pass filter. An ultralow DCR (0.0015 cps) and high DE (32%) are simultaneously achieved by the SNSPD at a wavelength of 500 nm.
NASA Technical Reports Server (NTRS)
Edwards, C. D.
1990-01-01
Connected-element interferometry (CEI) has the potential to provide high-accuracy angular spacecraft tracking on short baselines by making use of the very precise phase delay observable. Within the Goldstone Deep Space Communications Complex (DSCC), one of three tracking complexes in the NASA Deep Space Network, baselines of up to 21 km in length are available. Analysis of data from a series of short-baseline phase-delay interferometry experiments is presented to demonstrate the potential tracking accuracy on these baselines. Repeated differential observations of pairs of angularly close extragalactic radio sources were made to simulate differential spacecraft-quasar measurements. Fiber-optic data links and a correlation processor are currently being developed and installed at Goldstone for a demonstration of real-time CEI in 1990.
High Power Helicon Plasma Source for Plasma Processing
NASA Astrophysics Data System (ADS)
Prager, James; Ziemba, Timothy; Miller, Kenneth E.
2015-09-01
Eagle Harbor Technologies (EHT), Inc. is developing a high power helicon plasma source. The high power nature and pulsed neutral gas make this source unique compared to traditional helicon sources. These properties produce a plasma flow along the magnetic field lines, and therefore allow the source to be decoupled from the reaction chamber. Neutral gas can be injected downstream, which allows for precision control of the ion-neutral ratio at the surface of the sample. Although operated at high power, the source has demonstrated very low impurity production. This source has applications to nanoparticle production, surface modification, and ionized physical vapor deposition.
Radiant Temperature Nulling Radiometer
NASA Technical Reports Server (NTRS)
Ryan, Robert (Inventor)
2003-01-01
A self-calibrating nulling radiometer for non-contact temperature measurement of an object, such as a body of water, employs a black body source as a temperature reference, an optomechanical mechanism, e.g., a chopper, to switch back and forth between measuring the temperature of the black body source and that of a test source, and an infrared detection technique. The radiometer functions by measuring the radiance of both the test and the reference black body sources; adjusting the temperature of the reference black body so that its radiance is equivalent to that of the test source; and measuring the temperature of the reference black body at this point using a precision contact-type temperature sensor, to determine the radiative temperature of the test source. The radiation from both sources is detected by an infrared detector that converts the detected radiation to an electrical signal that is fed, with a chopper reference signal, to an error signal generator, such as a synchronous detector, that creates a precision rectified signal approximately proportional to the difference between the temperature of the reference black body and that of the test infrared source. This error signal is then used in a feedback loop to adjust the reference black body temperature until it equals that of the test source, at which point the error signal is nulled to zero. The chopper mechanism operates at one or more hertz, allowing minimization of 1/f noise. It also provides pure chopping between the black body and the test source and allows continuous measurements.
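The nulling feedback principle can be sketched as a simple loop. This is an illustrative model only, not the patented design: radiance is approximated with the Stefan-Boltzmann law, and the loop gain and starting temperature are made up for the sketch.

```python
# Illustrative nulling-radiometer loop: drive the reference black body
# temperature until its radiance matches the test source's radiance.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiance(T):
    """Total radiant exitance of an ideal black body at temperature T (K)."""
    return SIGMA * T**4

def null_reference(T_test, T_ref=280.0, gain=0.05, tol=1e-6, max_iter=10000):
    for _ in range(max_iter):
        error = radiance(T_test) - radiance(T_ref)  # synchronous-detector output
        if abs(error) < tol:
            break
        T_ref += gain * error  # feedback: nudge reference toward the test source
    return T_ref

# Once the error signal is nulled, reading the reference temperature with a
# precision contact sensor gives the radiative temperature of the test source.
print(round(null_reference(293.15), 3))  # converges to ~293.15 K
```

The point of the design is that the infrared detector is only ever asked "which source is brighter?", so detector nonlinearity and drift cancel out; absolute accuracy comes from the contact sensor on the reference.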
Single-Electron Detection and Spectroscopy via Relativistic Cyclotron Radiation.
Asner, D M; Bradley, R F; de Viveiros, L; Doe, P J; Fernandes, J L; Fertl, M; Finn, E C; Formaggio, J A; Furse, D; Jones, A M; Kofron, J N; LaRoque, B H; Leber, M; McBride, E L; Miller, M L; Mohanmurthy, P; Monreal, B; Oblath, N S; Robertson, R G H; Rosenberg, L J; Rybka, G; Rysewyk, D; Sternberg, M G; Tedeschi, J R; Thümmler, T; VanDevender, B A; Woods, N L
2015-04-24
It has been understood since 1897 that accelerating charges must emit electromagnetic radiation. Although first derived in 1904, cyclotron radiation from a single electron orbiting in a magnetic field has never been observed directly. We demonstrate single-electron detection in a novel radio-frequency spectrometer. The relativistic shift in the cyclotron frequency permits a precise electron energy measurement. Precise beta electron spectroscopy from gaseous radiation sources is a key technique in modern efforts to measure the neutrino mass via the tritium decay end point, and this work demonstrates a fundamentally new approach to precision beta spectroscopy for future neutrino mass experiments.
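The energy measurement principle, that the cyclotron frequency of a single trapped electron is lowered by the relativistic factor gamma, can be illustrated numerically. The field strength and rounded constants below are illustrative assumptions, not the experiment's actual configuration.

```python
# Sketch of the relativistic cyclotron frequency shift used for electron
# spectroscopy: f = f_c / gamma, so a frequency measurement gives the energy.
E_REST_KEV = 510.999    # electron rest energy, keV
F_C_GHZ_AT_1T = 27.992  # non-relativistic cyclotron frequency eB/(2*pi*m_e) at 1 T

def cyclotron_freq_ghz(kinetic_kev, b_tesla=1.0):
    gamma = 1.0 + kinetic_kev / E_REST_KEV
    return F_C_GHZ_AT_1T * b_tesla / gamma

# Near the tritium beta-decay endpoint (~18.6 keV) the emitted frequency is
# shifted down by ~3.5% relative to a zero-energy electron in the same field:
f0 = cyclotron_freq_ghz(0.0)
f_end = cyclotron_freq_ghz(18.6)
print(round(100 * (f0 - f_end) / f0, 2))  # percent shift, ~3.51
```

Because frequencies can be measured very precisely in the radio-frequency band, this few-percent shift translates into a sharp electron energy determination.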
Delre, Antonio; Mønster, Jacob; Samuelsson, Jerker; Fredenslund, Anders M; Scheutz, Charlotte
2018-09-01
The tracer gas dispersion method (TDM) is a remote sensing method used for quantifying fugitive emissions by relying on the controlled release of a tracer gas at the source, combined with concentration measurements of the tracer and target gas plumes. The TDM was tested at a wastewater treatment plant for plant-integrated methane emission quantification, using four analytical instruments simultaneously and four different tracer gases. Measurements performed using a combination of an analytical instrument and a tracer gas, with a high ratio between the tracer gas release rate and instrument precision (a high release-precision ratio), resulted in well-defined plumes with a high signal-to-noise ratio and a high methane-to-tracer gas correlation factor. Measured methane emission rates differed by up to 18% from the mean value when measurements were performed using seven different instrument and tracer gas combinations. Analytical instruments with a high detection frequency and good precision were established as the most suitable for successful TDM application. The application of an instrument with a poor precision could only to some extent be overcome by applying a higher tracer gas release rate. A sideward misplacement of the tracer gas release point of about 250 m resulted in an emission rate comparable to those obtained using a tracer gas correctly simulating the methane emission. Conversely, an upwind misplacement of about 150 m resulted in an emission rate overestimation of almost 50%, showing the importance of proper emission source simulation when applying the TDM. Copyright © 2018 Elsevier B.V. All rights reserved.
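The core TDM calculation can be sketched as a plume-ratio estimate: the unknown emission rate is the known tracer release rate scaled by the ratio of the plume-integrated excess mixing ratios and the molar mass ratio. The choice of acetylene as tracer and all numbers below are illustrative, not values from the study.

```python
# Minimal sketch of a tracer gas dispersion method (TDM) emission estimate.
M_CH4 = 16.04   # molar mass of methane, g/mol
M_C2H2 = 26.04  # molar mass of acetylene (example tracer), g/mol

def methane_emission_kg_h(q_tracer_kg_h, ch4_excess_ppb, tracer_excess_ppb):
    """Q_CH4 = Q_tracer * (plume-integrated CH4 excess / tracer excess)
               * (M_CH4 / M_tracer).
    The two excesses are above-background mixing ratios integrated across the
    plume transect; they share units, so only their ratio matters."""
    return q_tracer_kg_h * (ch4_excess_ppb / tracer_excess_ppb) * (M_CH4 / M_C2H2)

# Example: 2 kg/h tracer release, CH4 plume integral 5x the tracer's:
print(round(methane_emission_kg_h(2.0, 500.0, 100.0), 2))  # ~6.16 kg/h
```

The release-precision ratio discussed above matters because both excesses sit in the numerator and denominator of the same ratio: a noisy tracer measurement propagates directly into the emission estimate.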
Continuous Mapping of Tunnel Walls in a Gnss-Denied Environment
NASA Astrophysics Data System (ADS)
Chapman, Michael A.; Min, Cao; Zhang, Deijin
2016-06-01
The need for reliable systems for capturing precise detail in tunnels has increased as the number of tunnels (e.g., for cars and trucks, trains, subways, mining and other infrastructure) has grown and as the aging and subsequent deterioration of these structures has introduced structural degradations and eventual failures. Due to the hostile environments encountered in tunnels, mobile mapping systems are plagued with various problems such as loss of GNSS signals, drift of inertial measurement systems, low lighting conditions, dust and poor surface textures for feature identification and extraction. A tunnel mapping system using alternate sensors and algorithms that can deliver precise coordinates and feature attributes from surfaces along the entire tunnel path is presented. This system employs image bridging, or visual odometry, to estimate precise sensor positions and orientations. The fundamental concept is the use of image sequences to geometrically extend the control information in the absence of absolute positioning data sources. This is a non-trivial problem due to changes in scale, perceived resolution, image contrast and lack of salient features. The sensors employed include forward-looking high resolution digital frame cameras coupled with auxiliary light sources. In addition, a high frequency lidar system and a thermal imager are included to offer three dimensional point clouds of the tunnel walls along with thermal images for moisture detection. The mobile mapping system is equipped with an array of 16 cameras and light sources to capture the tunnel walls. Continuous images are produced using a semi-automated mosaicking process. Results of preliminary experimentation are presented to demonstrate the effectiveness of the system for the generation of seamless precise tunnel maps.
El-Amrawy, Fatema
2015-01-01
Objectives The new wave of wireless technologies, fitness trackers, and body sensor devices can have great impact on healthcare systems and the quality of life. However, there have not been enough studies to prove the accuracy and precision of these trackers. The objective of this study was to evaluate the accuracy, precision, and overall performance of seventeen wearable devices currently available compared with direct observation of step counts and heart rate monitoring. Methods Each participant in this study used three accelerometers at a time, running the three corresponding applications of each tracker on an Android or iOS device simultaneously. Each participant was instructed to walk 200, 500, and 1,000 steps. Each set was repeated 40 times. Data were recorded after each trial, and the mean step count, standard deviation, accuracy, and precision were estimated for each tracker. Heart rate was measured by all trackers that support heart rate monitoring and compared to a positive control, the Onyx Vantage 9590 professional clinical pulse oximeter. Results The accuracy of the tested products ranged between 79.8% and 99.1%, while the coefficient of variation (precision) ranged between 4% and 17.5%. MisFit Shine showed the highest accuracy and precision (along with Qualcomm Toq), while Samsung Gear 2 showed the lowest accuracy, and Jawbone UP showed the lowest precision. However, the Xiaomi Mi Band offered the best overall package for its price. Conclusions The accuracy and precision of the selected fitness trackers are reasonable and can indicate the average level of activity and thus average energy expenditure. PMID:26618039
El-Amrawy, Fatema; Nounou, Mohamed Ismail
2015-10-01
The new wave of wireless technologies, fitness trackers, and body sensor devices can have great impact on healthcare systems and the quality of life. However, there have not been enough studies to prove the accuracy and precision of these trackers. The objective of this study was to evaluate the accuracy, precision, and overall performance of seventeen wearable devices currently available compared with direct observation of step counts and heart rate monitoring. Each participant in this study used three accelerometers at a time, running the three corresponding applications of each tracker on an Android or iOS device simultaneously. Each participant was instructed to walk 200, 500, and 1,000 steps. Each set was repeated 40 times. Data were recorded after each trial, and the mean step count, standard deviation, accuracy, and precision were estimated for each tracker. Heart rate was measured by all trackers that support heart rate monitoring and compared to a positive control, the Onyx Vantage 9590 professional clinical pulse oximeter. The accuracy of the tested products ranged between 79.8% and 99.1%, while the coefficient of variation (precision) ranged between 4% and 17.5%. MisFit Shine showed the highest accuracy and precision (along with Qualcomm Toq), while Samsung Gear 2 showed the lowest accuracy, and Jawbone UP showed the lowest precision. However, the Xiaomi Mi Band offered the best overall package for its price. The accuracy and precision of the selected fitness trackers are reasonable and can indicate the average level of activity and thus average energy expenditure.
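The per-tracker metrics can be sketched from trial data: accuracy as closeness of the mean step count to the true count, and precision as the coefficient of variation. This is our reading of the abstract, not necessarily the paper's exact formulas, and the trial counts below are made up.

```python
import statistics

def tracker_metrics(counts, true_steps):
    """Return (accuracy %, coefficient of variation %) for one tracker.
    Accuracy: how close the mean measured count is to the true count.
    Precision (CV): spread of repeated measurements relative to their mean."""
    mean = statistics.mean(counts)
    sd = statistics.stdev(counts)           # sample standard deviation
    accuracy_pct = 100.0 * (1.0 - abs(mean - true_steps) / true_steps)
    cv_pct = 100.0 * sd / mean
    return accuracy_pct, cv_pct

counts = [498, 503, 490, 510, 495]          # hypothetical 500-step trials
acc, cv = tracker_metrics(counts, 500)
print(round(acc, 1), round(cv, 1))
```

Note that the two metrics are independent: a tracker can be precise (low CV) yet inaccurate if it is consistently biased, which is why the study reports both.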
Li, Tingting; Wang, Wei; Zhao, Haijian; He, Falin; Zhong, Kun; Yuan, Shuai; Wang, Zhiguo
2017-09-07
This study aimed to investigate the status of internal quality control (IQC) for cardiac biomarkers from 2011 to 2016, so that we can have overall knowledge of the precision level of measurements in China and set appropriate precision specifications. Internal quality control data of cardiac biomarkers, including creatine kinase-MB (CK-MB) (μg/L), CK-MB (U/L), myoglobin (Mb), cardiac troponin I (cTnI), cardiac troponin T (cTnT), and homocysteine (HCY), were collected by a web-based external quality assessment (EQA) system. Percentages of laboratories meeting five precision quality specifications for current coefficients of variation (CVs) were calculated. Then, appropriate precision specifications were chosen for these six analytes. Finally, the CVs and IQC practice were further analyzed with different grouping methods. The current CVs remained nearly constant for 6 years. cTnT had the highest pass rates every year against five specifications, whereas HCY had the lowest pass rates. Overall, most analytes had a satisfactory performance (pass rates >80%), except for HCY, if one-third TEa or the minimum specification was employed. When the optimal specification was applied, the performance of most analytes was frustrating (pass rates <60%), except for cTnT. The appropriate precision specifications of CK-MB (μg/L), CK-MB (U/L), Mb, cTnI, cTnT, and HCY were set as current CVs less than 9.20%, 9.90%, 7.50%, 10.54%, 7.63%, and 6.67%, respectively. The data of IQC practices indicated wide variation and substantial progress. The precision performance of cTnT was already satisfying, while the other five analytes, especially HCY, were still frustrating; thus, ongoing investigation and continuous improvement for IQC are still needed. © 2017 Wiley Periodicals, Inc.
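The pass/fail comparison behind the survey can be sketched directly: a laboratory's current CV from its IQC results is checked against the allowable CV for that analyte. The control values and the use of the 7.50% cTnT limit below are illustrative only.

```python
import statistics

def passes_spec(qc_results, allowable_cv_pct):
    """Compute the current CV (%) of a run of IQC results and check it
    against a chosen precision specification."""
    cv = 100.0 * statistics.stdev(qc_results) / statistics.mean(qc_results)
    return cv, cv < allowable_cv_pct

ctnt_qc = [0.102, 0.098, 0.105, 0.099, 0.101]  # hypothetical cTnT controls, μg/L
cv, ok = passes_spec(ctnt_qc, 7.50)
print(round(cv, 2), ok)
```

Running the same check with progressively tighter limits (minimum, one-third TEa, optimal) reproduces the pattern reported above: the fraction of passing laboratories shrinks as the specification tightens.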
Critical Care and Personalized or Precision Medicine: Who needs whom?
Sugeir, Shihab; Naylor, Stephen
2018-02-01
The current paradigm of modern healthcare is a reactive response to patient symptoms, subsequent diagnosis and corresponding treatment of the specific disease(s). This approach is predicated on methodologies first espoused by the Cnidean School of Medicine approximately 2,500 years ago. More recently, escalating healthcare costs and relatively poor disease treatment outcomes have fomented a rethink in how we carry out medical practices. This has led to the emergence of "P-Medicine" in the form of Personalized and Precision Medicine. The terms are used interchangeably, but in fact there are significant differences in the way they are implemented. The former relies on an "N-of-1" model whereas the latter uses a "1-in-N" model. Personalized Medicine is still in a fledgling and evolutionary phase and there has been much debate over its current status and future prospects. A confounding factor has been the sudden development of Precision Medicine, which has currently captured the imagination of policymakers responsible for modern healthcare systems. There is some confusion over the terms Personalized versus Precision Medicine. Here we attempt to define the key differences and working definitions of each P-Medicine approach, as well as a taxonomic relationship tree. Finally, we discuss the impact of Personalized and Precision Medicine on the practice of Critical Care Medicine (CCM). Practitioners of CCM have been participating in Personalized Medicine unknowingly, as it takes the protocols of sepsis, mechanical ventilation, and daily awakening trials and applies them to each individual patient. However, the immediate next step for CCM should be an active development of Precision Medicine. This developmental process should break down the silos of modern medicine and create a multidisciplinary approach between clinicians and basic/translational scientists. Copyright © 2017 Elsevier Inc. All rights reserved.
Gaia, an all-sky survey for standard photometry
NASA Astrophysics Data System (ADS)
Carrasco, J. M.; Weiler, M.; Jordi, C.; Fabricius, C.
2017-03-01
Gaia, ESA's space mission launched in 2013, includes two low resolution spectroscopic instruments (one in the blue, BP, and another in the red, RP, wavelength domains) to classify and derive the astrophysical parameters of the observed sources. As is well known, Gaia is a full-sky unbiased survey down to about 20th magnitude. The scanning law yields a rather uniform coverage of the sky over the full extent (a minimum of 5 years) of the mission. Gaia data reduction is global over the full mission. Both the sky coverage and the data reduction strategy ensure an unprecedented all-sky homogeneous spectrophotometric survey. Certainly, that survey is of interest for current and future ground-based and space projects, like LSST, PLATO, EUCLID and J-PAS/J-PLUS among others. These projects will benefit from the large number (more than one billion) and wide variety of objects observed by Gaia with good quality spectrophotometry. Synthetic photometry derived from Gaia spectrophotometry for any passband can be used to expand the set of standard sources for these new instruments to come. In the current Gaia data release scenario, BP/RP spectrophotometric data will be available in the third release (in 2018, TBC). Current preliminary results allow us to estimate the precision of synthetic photometry derived from the Gaia data. This already allows the preparation of on-going and future surveys and space missions. We discuss here the exploitation of the Gaia spectrophotometry as a standard reference, owing to its full-sky coverage and the expected photometric uncertainties derived from the low resolution Gaia spectra.
Convolutional neural networks for transient candidate vetting in large-scale surveys
NASA Astrophysics Data System (ADS)
Gieseke, Fabian; Bloemen, Steven; van den Bogaard, Cas; Heskes, Tom; Kindler, Jonas; Scalzo, Richard A.; Ribeiro, Valério A. R. M.; van Roestel, Jan; Groot, Paul J.; Yuan, Fang; Möller, Anais; Tucker, Brad E.
2017-12-01
Current synoptic sky surveys monitor large areas of the sky to find variable and transient astronomical sources. As the number of detections per night at a single telescope easily exceeds several thousand, current detection pipelines make intensive use of machine learning algorithms to classify the detected objects and to filter out the most interesting candidates. A number of upcoming surveys will produce up to three orders of magnitude more data, which renders high-precision classification systems essential to reduce the manual and, hence, expensive vetting by human experts. We present an approach based on convolutional neural networks to discriminate between true astrophysical sources and artefacts in reference-subtracted optical images. We show that relatively simple networks are already competitive with state-of-the-art systems and that their quality can further be improved via slightly deeper networks and additional pre-processing steps - eventually yielding models outperforming state-of-the-art systems. In particular, our best model correctly classifies about 97.3 per cent of all 'real' and 99.7 per cent of all 'bogus' instances on a test set containing 1942 'bogus' and 227 'real' instances in total. Furthermore, the networks considered in this work can also successfully classify these objects without relying on difference images, which might pave the way for future detection pipelines not containing image subtraction steps at all.
NASA Astrophysics Data System (ADS)
Yamazaki, Ken'ichi
2016-07-01
Fault ruptures in the Earth's crust generate both elastic and electromagnetic (EM) waves. If the corresponding EM signals can be observed, then earthquakes could be detected before the first seismic waves arrive. In this study, I consider the piezomagnetic effect as a mechanism that converts elastic waves to EM energy, and I derive analytical formulas for the conversion process. The situation considered in this study is a whole-space model, in which elastic and EM properties are uniform and isotropic. In this situation, the governing equations of the elastic and EM fields, combined with the piezomagnetic constitutive law, can be solved analytically in the time domain by ignoring the displacement current term. Using the derived formulas, numerical examples are investigated, and the corresponding characteristics of the expected magnetic signals are resolved. I show that temporal variations in the magnetic field depend strongly on the electrical conductivity of the medium, meaning that precise detection of signals generated by the piezomagnetic effect is generally difficult. Expected amplitudes of piezomagnetic signals are estimated to be no larger than 0.3 nT for earthquakes with a moment magnitude of ≥7.0 at a source distance of 25 km; however, this conclusion may not extend to the detection of real earthquakes, because piezomagnetic stress sensitivity is currently poorly constrained.
Estimating Biases for Regional Methane Fluxes using Co-emitted Tracers
NASA Astrophysics Data System (ADS)
Bambha, R.; Safta, C.; Michelsen, H. A.; Cui, X.; Jeong, S.; Fischer, M. L.
2017-12-01
Methane is a powerful greenhouse gas, and the development and improvement of emissions models rely on understanding the flux of methane released from anthropogenic sources relative to releases from other sources. Increasing production of shale oil and gas in the mid-latitudes and associated fugitive emissions are suspected to be a dominant contributor to the global methane increase. Landfills, sewage treatment, and other sources may be dominant sources in some parts of the U.S. Large discrepancies between emissions models present a great challenge to reconciling atmospheric measurements with inventory-based estimates for various emissions sectors. Current approaches for measuring regional emissions yield highly uncertain estimates because of the sparsity of measurement sites and the presence of multiple simultaneous sources. Satellites can provide wide spatial coverage at the expense of much lower measurement precision compared to ground-based instruments. Methods for effective assimilation of data from a variety of sources are critically needed to perform regional GHG attribution with existing measurements and to determine how to structure future measurement systems, including satellites. We present a hierarchical Bayesian framework to estimate surface methane fluxes based on atmospheric concentration measurements and a Lagrangian transport model (Weather Research and Forecasting and Stochastic Time-Inverted Lagrangian Transport). Structural errors in the transport model are estimated with the help of co-emitted tracer species with well-defined decay rates. We conduct the analyses at regional scales that are based on similar geographical and meteorological conditions. For regions where data are informative, we further refine flux estimates by emissions sector and infer spatially and temporally varying biases parameterized as spectral random field representations.
ExpertEyes: open-source, high-definition eyetracking.
Parada, Francisco J; Wyatte, Dean; Yu, Chen; Akavipat, Ruj; Emerick, Brandi; Busey, Thomas
2015-03-01
ExpertEyes is a low-cost, open-source package of hardware and software that is designed to provide portable high-definition eyetracking. The project involves several technological innovations, including portability, high-definition video recording, and multiplatform software support. It was designed for challenging recording environments, and all processing is done offline to allow for optimization of parameter estimation. The pupil and corneal reflection are estimated using a novel forward eye model that simultaneously fits both the pupil and the corneal reflection with full ellipses, addressing a common situation in which the corneal reflection sits at the edge of the pupil and therefore breaks the contour of the ellipse. The accuracy and precision of the system are comparable to or better than what is available in commercial eyetracking systems, with a typical accuracy of less than 0.4° and best accuracy below 0.3°, and with a typical precision (SD method) around 0.3° and best precision below 0.2°. Part of the success of the system comes from a high-resolution eye image. The high image quality results from uncasing common digital camcorders and recording directly to SD cards, which avoids the limitations of the analog NTSC format. The software is freely downloadable, and complete hardware plans are available, along with sources for custom parts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhai, Y. John
2016-06-15
Purpose: To obtain an improved precise gamma efficiency calibration curve of HPGe (High Purity Germanium) detector with a new comprehensive approach. Methods: Both of radioactive sources and Monte Carlo simulation (CYLTRAN) are used to determine HPGe gamma efficiency for energy range of 0–8 MeV. The HPGe is a GMX coaxial 280 cm{sup 3} N-type 70% gamma detector. Using Momentum Achromat Recoil Spectrometer (MARS) at the K500 superconducting cyclotron of Texas A&M University, the radioactive nucleus {sup 24} Al was produced and separated. This nucleus has positron decays followed by gamma transitions up to 8 MeV from {sup 24} Mg excitedmore » states which is used to do HPGe efficiency calibration. Results: With {sup 24} Al gamma energy spectrum up to 8MeV, the efficiency for γ ray 7.07 MeV at 4.9 cm distance away from the radioactive source {sup 24} Al was obtained at a value of 0.194(4)%, by carefully considering various factors such as positron annihilation, peak summing effect, beta detector efficiency and internal conversion effect. The Monte Carlo simulation (CYLTRAN) gave a value of 0.189%, which was in agreement with the experimental measurements. Applying to different energy points, then a precise efficiency calibration curve of HPGe detector up to 7.07 MeV at 4.9 cm distance away from the source {sup 24} Al was obtained. Using the same data analysis procedure, the efficiency for the 7.07 MeV gamma ray at 15.1 cm from the source {sup 24} Al was obtained at a value of 0.0387(6)%. MC simulation got a similar value of 0.0395%. This discrepancy led us to assign an uncertainty of 3% to the efficiency at 15.1 cm up to 7.07 MeV. The MC calculations also reproduced the intensity of observed single-and double-escape peaks, providing that the effects of positron annihilation-in-flight were incorporated. 
Conclusion: The improved-precision gamma efficiency calibration curve provides more accurate radiation detection and dose calculation for cancer radiotherapy treatment.
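The efficiency values quoted above follow from the basic counting relation: efficiency = detected full-energy-peak counts divided by the gammas emitted during the counting period, times correction factors. A minimal sketch in Python, with purely illustrative numbers (the function name and the lumped `correction` argument are this sketch's own, not the paper's):

```python
def peak_efficiency(net_counts, live_time_s, activity_bq, branching_ratio,
                    correction=1.0):
    """Full-energy-peak efficiency = detected peak counts / gammas emitted.

    `correction` lumps together effects such as peak summing, beta-detector
    efficiency, and internal conversion (all treated in the abstract above).
    All arguments here are illustrative, not the paper's data.
    """
    emitted = activity_bq * live_time_s * branching_ratio
    return net_counts / emitted * correction

# Illustrative: 1.94e3 peak counts from a 1 kBq source counted for 1000 s
# on a 100% branch gives an efficiency of 0.194%.
eff = peak_efficiency(net_counts=1.94e3, live_time_s=1000.0,
                      activity_bq=1.0e3, branching_ratio=1.0)
print(f"{eff:.3%}")  # → 0.194%
```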
Analysis of de-noising methods to improve the precision of the ILSF BPM electronic readout system
NASA Astrophysics Data System (ADS)
Shafiee, M.; Feghhi, S. A. H.; Rahighi, J.
2016-12-01
In order to achieve optimum operation and precise control of particle accelerators, the beam position must be measured with sub-μm precision. We developed a BPM electronic readout system at the Iranian Light Source Facility and tested it experimentally at the ALBA accelerator facility. The results show a precision of 0.54 μm in beam position measurements. To improve the precision of this beam position monitoring system to the sub-μm level, we studied different de-noising methods, such as principal component analysis, wavelet transforms, FIR filtering, and direct averaging. An evaluation of the noise reduction was carried out to assess the ability of these methods. The results show that noise reduction based on the Daubechies wavelet transform outperforms the other algorithms, and that the method is suitable for signal noise reduction in beam position monitoring systems.
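As a rough illustration of the wavelet-based de-noising the study favours, here is a single-level Haar soft-threshold denoiser. This is a deliberately simplified stand-in: the study used Daubechies wavelets, and `haar_denoise` and its threshold choice are assumptions of this sketch, not the ILSF implementation:

```python
import numpy as np

def haar_denoise(x, threshold):
    """One-level Haar wavelet denoising with soft thresholding.

    A minimal stand-in for the wavelet-transform de-noising compared in the
    study above; x must have even length.
    """
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # low-pass coefficients
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # high-pass coefficients
    # Soft-threshold the detail coefficients, where broadband noise sits.
    detail = np.sign(detail) * np.maximum(np.abs(detail) - threshold, 0.0)
    # Inverse Haar transform back to the signal domain.
    y = np.empty_like(x)
    y[0::2] = (approx + detail) / np.sqrt(2.0)
    y[1::2] = (approx - detail) / np.sqrt(2.0)
    return y

# A constant (noise-free) signal passes through unchanged.
sig = np.ones(8)
out = haar_denoise(sig, threshold=0.5)
```

With threshold 0 the transform is exactly invertible, so the threshold alone controls how much high-frequency content is discarded.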
An Improved Method of AGM for High Precision Geolocation of SAR Images
NASA Astrophysics Data System (ADS)
Zhou, G.; He, C.; Yue, T.; Huang, W.; Huang, Y.; Li, X.; Chen, Y.
2018-05-01
In order to take full advantage of SAR images, it is necessary to obtain high-precision locations for them. During the geometric correction of images, precise image geolocation is important for ensuring the accuracy of the correction and for extracting effective mapping information. This paper presents an improved analytical geolocation method (IAGM) that determines the high-precision geolocation of each pixel in a digital SAR image. The method builds on the analytical geolocation method (AGM) proposed by X. K. Yuan, which aims at solving the range-Doppler (RD) model. Tests are conducted using a RADARSAT-2 SAR image. Comparing the predicted feature geolocation with the position determined from a high-precision orthophoto, the results indicate that an accuracy of 50 m is attainable with this method. Error sources are analyzed, and recommendations are given for improving image location accuracy in future spaceborne SARs.
A novel comparison of Møller and Compton electron-beam polarimeters
Magee, J. A.; Narayan, A.; Jones, D.; ...
2017-01-19
We have performed a novel comparison between electron-beam polarimeters based on Møller and Compton scattering. A sequence of electron-beam polarization measurements was performed at low beam currents (< 5 μA) during the Q_weak experiment in Hall C at Jefferson Lab. These low-current measurements were bracketed by the regular high-current (180 μA) operation of the Compton polarimeter. All measurements were found to be consistent within experimental uncertainties of 1% or less, demonstrating that electron polarization does not depend significantly on the beam current. This result lends confidence to the common practice of applying Møller measurements made at low beam currents to physics experiments performed at higher beam currents. The agreement between two polarimetry techniques based on independent physical processes sets an important benchmark for future precision asymmetry measurements that require sub-1% precision in polarimetry.
Precision engineering: an evolutionary perspective.
Evans, Chris J
2012-08-28
Precision engineering is a relatively new name for a technology with roots going back over a thousand years; those roots span astronomy, metrology, fundamental standards, manufacturing and money-making (literally). Throughout that history, precision engineers have created links across disparate disciplines to generate innovative responses to society's needs and wants. This review combines historical and technological perspectives to illuminate precision engineering's current character and directions. It first provides a working definition of precision engineering and then reviews the subject's roots. Examples are given showing the contributions of the technology to society, while simultaneously showing the creative tension between the technological convergence that spurs new directions and the vertical disintegration that optimizes manufacturing economics.
Precision medicine for psychopharmacology: a general introduction.
Shin, Cheolmin; Han, Changsu; Pae, Chi-Un; Patkar, Ashwin A
2016-07-01
Precision medicine is an emerging medical model that can provide accurate diagnoses and tailored therapeutic strategies for patients based on data pertaining to genes, microbiomes, environment, family history and lifestyle. Here, we provide basic information about precision medicine and newly introduced concepts, such as the precision medicine ecosystem and big data processing, and omics technologies including pharmacogenomics, pharmacometabolomics, pharmacoproteomics, pharmacoepigenomics, connectomics and exposomics. The authors review the current state of omics in psychiatry and the future direction of psychopharmacology as it moves towards precision medicine. Expert commentary: Advances in precision medicine have been facilitated by achievements in multiple fields, including large-scale biological databases, powerful methods for characterizing patients (such as genomics, proteomics, metabolomics, diverse cellular assays, and even social networks and mobile health technologies), and computer-based tools for analyzing large amounts of data.
Time-of-flight scattering and recoiling spectrometer (TOF-SARS) for surface analysis
NASA Astrophysics Data System (ADS)
Grizzi, O.; Shi, M.; Bu, H.; Rabalais, J. W.
1990-02-01
A UHV spectrometer system has been designed and constructed for time-of-flight scattering and recoiling spectrometry (TOF-SARS). The technique uses a pulsed primary ion beam and TOF methods for analysis of both scattered and recoiled neutrals (N) and ions (I) simultaneously, with continuous scattering-angle variation over a flight path of ≊1 m. The pulsed ion beam line uses an electron impact ionization source with acceleration up to 5 keV; pulse widths down to 20 ns with average current densities of 0.05-5.0 nA/mm2 have been obtained. Typical current densities used herein are ≊0.1 nA/mm2, and TOF spectra can be collected with a total ion dose of <10-3 ions/surface atom. A channel electron multiplier detector, which is sensitive to both ions and fast neutrals, is mounted on a long tube connected to a precision rotary motion feedthrough, allowing continuous rotation over a scattering angular range 0°<θ<165°. The sample is mounted on a precision manipulator, allowing azimuthal δ and incident α angle rotation, as well as translation along three orthogonal axes. The system also accommodates standard surface analysis instrumentation for LEED, AES, XPS, and UPS. The capabilities of the system are demonstrated by the following examples: (A) TOF spectra versus scattering angle θ; (B) comparison to LEED and AES; (C) surface and adsorbate structure determinations; (D) monitoring surface roughness; (E) surface semichanneling measurements; (F) measurements of scattered ion fractions; and (G) ion induced Auger electron emission.
Experiments and simulation of thermal behaviors of the dual-drive servo feed system
NASA Astrophysics Data System (ADS)
Yang, Jun; Mei, Xuesong; Feng, Bin; Zhao, Liang; Ma, Chi; Shi, Hu
2015-01-01
A machine tool equipped with a dual-drive servo feed system can realize high feed speed as well as high precision. To date there has been no report on the thermal behaviors of dual-drive machines, and current research on the thermal characteristics of machine tools mainly focuses on steady-state simulation. To explore the influence of thermal characteristics on the precision of a jib boring machine fitted with a dual-drive feed system, thermal equilibrium tests and research on transient thermal-mechanical behaviors are carried out. A laser interferometer, infrared thermography, and a temperature-displacement acquisition system are applied to measure the temperature distribution and thermal deformation at different feed speeds. Subsequently, the finite element method (FEM) is used to analyze the transient thermal behaviors of the boring machine. Complex boundary conditions, such as heat sources and convective heat-transfer coefficients, are calculated. Finally, transient variations in temperatures and deformations are compared with the measured values; the discrepancies between measurement and simulation are 2 °C for temperature and 2.5 μm for thermal error. The results demonstrate that the FEM model predicts the thermal error and temperature distribution well under the specified operating conditions. Moreover, the uneven temperature gradient is due to the asynchronous dual-drive structure, which results in thermal deformation. Additionally, the positioning accuracy decreases as the measured point moves farther from the motor, and both the thermal error and the equilibrium period increase with feed speed. The research proposes a systematic method to measure and simulate the transient thermal behaviors of the boring machine.
Nearby Dwarf Stars: Duplicity, Binarity, and Masses
NASA Astrophysics Data System (ADS)
Mason, Brian D.; Hartkopf, William I.; Raghavan, Deepak
2008-02-01
Double stars have proven to be both a blessing and a curse for astronomers since their discovery over two centuries ago. They remain the only reliable source of masses, the most fundamental parameter defining stars. On the other hand, their sobriquet ``vermin of the sky'' is well-earned, due to the complications they present to both observers and theoreticians. These range from non-linear proper motions to stray light in detectors, to confusion in pointing of instruments due to non-symmetric point spread functions, to angular momentum conservation in multiple stars which results in binaries closer than allowed by evolution of two single stars. This proposal is an effort to address both their positive and negative aspects, through speckle interferometric observations, targeting ~1200 systems where useful information can be obtained with only a single additional observation. The proposed work will refine current statistics regarding duplicity (chance alignments of nearby point sources) and binarity (actual physical relationships), and improve the precisions and accuracies of stellar masses. Several targets support Raghavan's Ph.D. thesis, which is a comprehensive survey aimed at determining the multiplicity fraction among solar-type stars.
First Results From A Multi-Ion Beam Lithography And Processing System At The University Of Florida
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gila, Brent; Appleton, Bill R.; Fridmann, Joel
2011-06-01
The University of Florida (UF) has collaborated with Raith to develop a version of the Raith ionLiNE IBL system capable of delivering multiple ion species in addition to the Ga ions normally available. The UF system is currently equipped with an AuSi liquid metal alloy ion source (LMAIS) and an ExB filter, making it capable of delivering Au and Si ions and ion clusters for ion beam processing. Other LMAIS sources could be developed in the future to deliver additional ion species. The system is capable of high-performance ion beam lithography, sputter profiling, maskless ion implantation, ion beam mixing, and spatial and temporal ion-beam-assisted writing and processing over large areas (100 mm²), all with selected ion species at voltages from 15 to 40 kV and with nanometer precision. We discuss the performance of the system with the AuSi LMAIS source and ExB mass separator. We report initial results from basic system characterization and ion beam lithography, as well as from basic ion-solid interactions.
Probing electric and magnetic fields with a Moiré deflectometer
NASA Astrophysics Data System (ADS)
Lansonneur, P.; Bräunig, P.; Demetrio, A.; Müller, S. R.; Nedelec, P.; Oberthaler, M. K.
2017-08-01
A new contact-free approach for simultaneously measuring electric and magnetic fields is reported, based on a low-energy ion source, a set of three transmission gratings, and a position-sensitive detector. Recently tested with antiprotons at the CERN Antiproton Decelerator facility (Aghion et al., 2014) [1], the technique extends the proof of principle of the moiré deflectometer (Oberthaler et al., 1996) [2] to distinguishing electric from magnetic fields and opens the route to precision measurements when one is not limited by the ion source intensity. The apparatus presented, whose resolution is mainly limited by shot noise, is able to measure fields as low as 9 mV m⁻¹ Hz⁻¹/² for the electric component and 100 μG Hz⁻¹/² for the magnetic component. Scaled to a 100 nm grating pitch, accessible with current state-of-the-art technology [3], the moiré fieldmeter would be able to measure fields as low as 22 μV m⁻¹ Hz⁻¹/² and 0.2 μG Hz⁻¹/².
Spatially-Scanned Dual Comb Spectroscopy for Atmospheric Measurements
NASA Astrophysics Data System (ADS)
Cossel, K.; Waxman, E.; Giorgetta, F.; Cermak, M.; Coddington, I.; Hesselius, D.; Ruben, S.; Swann, W.; Rieker, G. B.; Newbury, N.
2017-12-01
Measuring trace-gas emissions from sources that are spatially complex and temporally variable, such as leaking natural gas infrastructure, is challenging with current measurement systems. Here, we present a new technique that provides the path-integrated concentrations of multiple gas species between a ground station and a retroreflector mounted on a small quadcopter. Such a system could make it possible to quantify small-area emission sources as well as to measure vertical mixing within the boundary layer. The system is based on a near-infrared dual frequency-comb spectroscopy (DCS) system covering 1.58-1.7 microns, which enables rapid, accurate measurements of CO2, CH4, H2O, and HDO. The eye-safe laser light is launched from a telescope on a fast azimuth-elevation gimbal to a small quadcopter carrying a lightweight retroreflector, a high-precision real-time kinematic GPS receiver (for real-time cm-level path-length measurements), and pressure, temperature, and humidity sensors. Here, we show results of test measurements from controlled releases of CH4 as well as from test vertical profiles.
Accelerator-based neutrino oscillation experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, Deborah A.; /Fermilab
2007-12-01
Neutrino oscillations were first discovered by experiments looking at neutrinos from extraterrestrial sources, namely the Sun and the atmosphere, but we will be depending on earth-based sources to take many of the next steps in this field. This article describes what has been learned so far from accelerator-based neutrino oscillation experiments and then describes, in general terms, the next accelerator-based steps. Section 2 discusses how one uses an accelerator to make a neutrino beam, in particular one made from in-flight decays of charged pions. Several different neutrino detection methods are currently in use or under development; section 3 presents these, with a description of the general concept, an example of such a detector, and a brief discussion of the outstanding issues associated with each detection technique. Finally, section 4 describes how measurements of oscillation probabilities are made, including a description of the near-detector technique and how it can be used to make the most precise measurements of neutrino oscillations.
BRDF Calibration of Sintered PTFE in the SWIR
NASA Technical Reports Server (NTRS)
Georgiev, Georgi T.; Butler, James J.
2009-01-01
Satellite instruments operating in the reflective solar wavelength region often require accurate and precise determination of the Bidirectional Reflectance Distribution Function (BRDF) of laboratory-based diffusers used in their pre-flight calibrations and in ground-based support of on-orbit remote sensing instruments. The Diffuser Calibration Facility at NASA's Goddard Space Flight Center has served as a secondary diffuser calibration standard after NIST for over two decades, providing numerous NASA projects with BRDF data in the UV, visible, and NIR spectral regions. The Diffuser Calibration Facility has now extended the covered spectral range from 900 nm up to 1.7 microns. The measurements were made using the existing scatterometer by replacing the Si-photodiode-based receiver with an InGaAs-based one. The BRDF data were recorded at normal incidence and scatter zenith angles from 10 to 60 deg. A tunable coherent light source was set up; a broadband light source capability is under development. Gray-scale sintered PTFE samples were used in these first trials, illuminated with P- and S-polarized incident light. The results are discussed and compared to BRDF data empirically generated from a simple model based on 8 deg directional/hemispherical measurements.
A 24 hr global campaign to assess precision timing of the millisecond pulsar J1713+0747
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dolch, T.; Lam, M. T.; Cordes, J.
2014-10-10
The radio millisecond pulsar J1713+0747 is regarded as one of the highest-precision clocks in the sky and is regularly timed for the purpose of detecting gravitational waves. The International Pulsar Timing Array Collaboration undertook a 24 hr global observation of PSR J1713+0747 in an effort to better quantify sources of timing noise in this pulsar, particularly on intermediate (1-24 hr) timescales. We observed the pulsar continuously over 24 hr with the Arecibo, Effelsberg, GMRT, Green Bank, LOFAR, Lovell, Nançay, Parkes, and WSRT radio telescopes. The combined pulse times-of-arrival presented here provide an estimate of what sources of timing noise, excluding DM variations, would be present as compared to an idealized √N improvement in timing precision, where N is the number of pulses analyzed. In the case of this particular pulsar, we find that intrinsic pulse phase jitter dominates arrival time precision when the signal-to-noise ratio of single pulses exceeds unity, as measured using the eight telescopes that observed at L band/1.4 GHz. We present first results of specific phenomena probed on the unusually long timescale (for a single continuous observing session) of tens of hours, in particular interstellar scintillation, and discuss the degree to which scintillation and profile evolution affect precision timing. This paper presents the data set as a basis for future, deeper studies.
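The idealized √N scaling the campaign compares against can be stated in one line: averaging N pulses should shrink the single-pulse arrival-time uncertainty by a factor of √N. A hedged sketch with illustrative numbers and a hypothetical function name:

```python
import math

def idealized_toa_precision(sigma_single_us, n_pulses):
    """Idealized sqrt(N) improvement in time-of-arrival precision.

    sigma_single_us: uncertainty from a single pulse (microseconds).
    Numbers are illustrative only; the campaign above measures how real
    noise sources (e.g. pulse phase jitter) cause departures from this.
    """
    return sigma_single_us / math.sqrt(n_pulses)

# Averaging 10,000 pulses ideally shrinks a 1 us single-pulse
# uncertainty to 10 ns.
print(idealized_toa_precision(1.0, 10_000))  # → 0.01
```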
Progress towards Low Energy Neutrino Spectroscopy (LENS)
NASA Astrophysics Data System (ADS)
Blackmon, Jeff
2011-10-01
The Low-Energy Neutrino Spectroscopy (LENS) experiment will precisely measure the energy spectrum of low-energy solar neutrinos via charged-current neutrino reactions on indium. LENS will test solar physics through the fundamental equality of the neutrino fluxes and the precisely known solar luminosity in photons, will probe the metallicity of the solar core through the CNO neutrino fluxes, and will test for the existence of mass-varying neutrinos. The LENS detector concept applies indium-loaded scintillator in an optically-segmented lattice geometry to achieve precise time and spatial resolution and unprecedented sensitivity for low-energy neutrino events. The LENS collaboration is currently developing a prototype, miniLENS, in the Kimballton Underground Research Facility (KURF). The miniLENS program aims to demonstrate the performance and selectivity of the technology and to benchmark Monte Carlo simulations that will guide scaling to the full LENS instrument. We will present the motivation and concept for LENS and will provide an overview of the R&D efforts currently centered around miniLENS at KURF.
Information content analysis: the potential for methane isotopologue retrieval from GOSAT-2
NASA Astrophysics Data System (ADS)
Malina, Edward; Yoshida, Yukio; Matsunaga, Tsuneo; Muller, Jan-Peter
2018-02-01
Atmospheric methane comprises multiple isotopic molecules, the most abundant being 12CH4 and 13CH4, which make up 98% and 1.1% of atmospheric methane respectively. It has been shown that it is possible to distinguish between sources of methane (biogenic methane, e.g. marshland, or abiogenic methane, e.g. fracking) via the ratio of these main methane isotopologues, known as the δ13C value. δ13C values typically range between -10 and -80 ‰, with abiogenic sources closer to zero and biogenic sources showing more negative values. We initially suggest that a δ13C difference of 10 ‰ is sufficient to differentiate between methane source types; on this basis we derive that a precision of 0.2 ppbv on 13CH4 retrievals may achieve the target δ13C variance. Using an application of the well-established information content analysis (ICA) technique for assumed clear-sky conditions, this paper shows that, using a combination of the shortwave infrared (SWIR) bands on the planned Greenhouse gases Observing SATellite (GOSAT-2) mission, 13CH4 can be measured with sufficient information content to a precision of between 0.7 and 1.2 ppbv from a single sounding (assuming a total-column average value of 19.14 ppbv), which can then be reduced to the target precision through spatial and temporal averaging. We therefore suggest that GOSAT-2 can be used to differentiate between methane source types. We find that large unconstrained covariance matrices are required to achieve sufficient information content, while the solar zenith angle has limited impact on the information content.
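The δ13C notation used above is conventionally defined relative to a reference ratio: δ13C = (R_sample/R_standard − 1) × 1000 ‰, where R = 13C/12C. A small sketch under that assumption; the VPDB reference value quoted is the commonly cited one, not a number from the abstract:

```python
R_VPDB = 0.0112372  # 13C/12C of the VPDB reference standard (commonly
                    # quoted value; treat it as an assumption of this sketch)

def delta13C_permil(ratio_13C_12C):
    """delta-13C in per mil: (R_sample / R_standard - 1) * 1000."""
    return (ratio_13C_12C / R_VPDB - 1.0) * 1000.0

# A sample whose 13C/12C ratio sits 5% below VPDB gives delta-13C of
# -50 permil, squarely in the biogenic range discussed above.
print(round(delta13C_permil(R_VPDB * 0.95), 6))  # → -50.0
```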
Stelzer, Erin A.; Strickler, Kriston M.; Schill, William B.
2012-01-01
During summer and early fall 2010, 15 river samples and 6 fecal-source samples were collected in West Virginia. These samples were analyzed by three laboratories for three microbial source tracking (MST) markers: AllBac, a general fecal indicator; BacHum, a human-associated fecal indicator; and BoBac, a ruminant-associated fecal indicator. MST markers were analyzed by means of the quantitative polymerase chain reaction (qPCR) method. The aim was to assess interlaboratory precision when the three laboratories used the same MST marker and shared deoxyribonucleic acid (DNA) extracts of the samples but different equipment, reagents, and analyst experience levels; the term assay refers to both the markers and these procedural differences. Interlaboratory precision for the three MST assays was assessed using the geometric mean absolute relative percent difference (ARPD) and Friedman's statistical test. Adjustment factors (one for each MST assay) were calculated using results from fecal-source samples analyzed by all three laboratories and applied retrospectively to sample concentrations to account for differences in qPCR results among labs using different standards and procedures. Following the application of adjustment factors to the qPCR results, ARPDs were lower; however, statistically significant differences between labs were still observed for the BacHum and BoBac assays. This was a small study, and two of the MST assays had 52 percent of samples with concentrations at or below the limit of accurate quantification; hence, more testing could be done to determine whether the adjustment factors would perform better if the majority of sample concentrations were above the quantification limit.
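The abstract does not spell out the ARPD formula; a commonly used form is the absolute difference of a paired result divided by the pair mean, times 100, with a geometric mean taken over pairs to summarize an assay. A hedged sketch under that assumption, with all concentrations hypothetical:

```python
import math

def arpd(a, b):
    """Absolute relative percent difference between two paired results:
    |a - b| / mean(a, b) * 100. A common definition, assumed here; the
    report may use a variant."""
    return abs(a - b) / ((a + b) / 2.0) * 100.0

def geometric_mean(values):
    """Geometric mean of positive values."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical paired marker concentrations (copies/100 mL) from two labs
# analyzing the same DNA extracts.
lab1 = [120.0, 450.0, 80.0]
lab2 = [100.0, 500.0, 95.0]
arpds = [arpd(x, y) for x, y in zip(lab1, lab2)]
summary = geometric_mean(arpds)  # one interlaboratory-precision number
```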
Role of Imaging in the Era of Precision Medicine.
Giardino, Angela; Gupta, Supriya; Olson, Emmi; Sepulveda, Karla; Lenchik, Leon; Ivanidze, Jana; Rakow-Penner, Rebecca; Patel, Midhir J; Subramaniam, Rathan M; Ganeshan, Dhakshinamoorthy
2017-05-01
Precision medicine is an emerging approach for treating medical disorders that takes into account individual variability in genetic and environmental factors. Preventive or therapeutic interventions can then be directed to those who will benefit most from targeted interventions, thereby maximizing benefits and minimizing costs and complications. Precision medicine is gaining increasing recognition from clinicians, healthcare systems, pharmaceutical companies, patients, and governments. Imaging plays a critical role in precision medicine, including screening, early diagnosis, guiding treatment, evaluating response to therapy, and assessing the likelihood of disease recurrence. The Association of University Radiologists Radiology Research Alliance Precision Imaging Task Force convened to explore the current and future role of imaging in the era of precision medicine and summarizes its findings in this article. We review the increasingly important role of imaging in various oncological and non-oncological disorders. We also highlight the challenges for radiology in the era of precision medicine. Published by Elsevier Inc.
Why precision medicine is not the best route to a healthier world.
Rey-López, Juan Pablo; Sá, Thiago Herick de; Rezende, Leandro Fórnias Machado de
2018-02-05
Precision medicine has been announced as a new health revolution. The term precision implies more accuracy in healthcare and disease prevention, which could yield substantial cost savings. However, scientific debate about precision medicine is needed to avoid hype and the waste of economic resources. In this commentary, we set out the reasons why precision medicine cannot be a health revolution for population health. Advocates of precision medicine neglect the limitations of individual-centred, high-risk strategies (reduced population health impact) and the current crisis of evidence-based medicine. Overrated "precision medicine" promises may be serving vested interests, dictating priorities in the research agenda and justifying exorbitant healthcare expenditure in our finance-based medicine. If societies aspire to address strong risk factors for non-communicable diseases (such as air pollution, smoking, poor diets, or physical inactivity), they need less medicine and more investment in population prevention strategies.
High-Precision Half-Life Measurement for the Superallowed β+ Emitter 22Mg
NASA Astrophysics Data System (ADS)
Dunlop, Michelle
2017-09-01
High precision measurements of the Ft values for superallowed Fermi beta transitions between 0+ isobaric analogue states allow for stringent tests of the electroweak interaction. These transitions provide an experimental probe of the Conserved-Vector-Current hypothesis, the most precise determination of the up-down element of the Cabibbo-Kobayashi-Maskawa matrix, and set stringent limits on the existence of scalar currents in the weak interaction. To calculate the Ft values several theoretical corrections must be applied to the experimental data, some of which have large model dependent variations. Precise experimental determinations of the ft values can be used to help constrain the different models. The uncertainty in the 22Mg superallowed Ft value is dominated by the uncertainty in the experimental ft value. The adopted half-life of 22Mg is determined from two measurements which disagree with one another, resulting in the inflation of the weighted-average half-life uncertainty by a factor of 2. The 22Mg half-life was measured with a precision of 0.02% via direct β counting at TRIUMF's ISAC facility, leading to an improvement in the world-average half-life by more than a factor of 3.
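The "inflation of the weighted-average half-life uncertainty by a factor of 2" mentioned above refers to the standard treatment of discrepant data: when the scale factor S = √(χ²/(n−1)) of a weighted average exceeds 1, the quoted uncertainty is multiplied by S. A sketch of that procedure with hypothetical numbers, not the actual 22Mg measurements:

```python
import math

def weighted_average_with_scale(values, sigmas):
    """Uncertainty-weighted mean, with the uncertainty inflated by the
    scale factor S = sqrt(chi2 / (n - 1)) when the inputs disagree (S > 1).
    This is the standard procedure alluded to in the abstract above; the
    input numbers below are hypothetical.
    """
    w = [1.0 / s**2 for s in sigmas]
    mean = sum(wi * x for wi, x in zip(w, values)) / sum(w)
    err = math.sqrt(1.0 / sum(w))
    chi2 = sum(wi * (x - mean) ** 2 for wi, x in zip(w, values))
    scale = math.sqrt(chi2 / (len(values) - 1)) if len(values) > 1 else 1.0
    return mean, err * max(scale, 1.0), scale

# Two discrepant hypothetical half-life measurements (seconds): the scale
# factor exceeds 1, so the averaged uncertainty is inflated accordingly.
mean, err, S = weighted_average_with_scale([3.8755, 3.8775],
                                           [0.0005, 0.0005])
```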
Eddy-Current Reference Standard
NASA Technical Reports Server (NTRS)
Ambrose, H. H., Jr.
1985-01-01
Magnetic properties of metallic reference standards duplicated and stabilized for eddy-current coil measurements over long times. Concept uses precisely machined notched samples of known annealed materials as reference standards.
Sloan, D.H.; Yockey, H.P.; Schmidt, F.H.
1959-04-14
An improvement in the mounting arrangement for an ion source within the vacuum tank of a calutron device is reported. The cathode and arc block of the source are independently supported from a stem passing through the tank wall. The arc block may be pivoted and moved longitudinally with respect to the stem to align the arc chamber in the block with the cathode and the magnetic field in the tank. With this arrangement the elements of the ion source are capable of precise adjustment with respect to one another, promoting increased source efficiency.
NASA Astrophysics Data System (ADS)
Redshaw, Matthew
This dissertation describes high precision measurements of atomic masses by measuring the cyclotron frequency of ions trapped singly, or in pairs, in a precision, cryogenic Penning trap. By building on techniques developed at MIT for measuring the cyclotron frequency of single trapped ions, the atomic masses of 84,86Kr and 129,132,136Xe have been measured to less than a part in 10¹⁰ fractional precision. By developing a new technique for measuring the cyclotron frequency ratio of a pair of simultaneously trapped ions, the atomic masses of 28Si, 31P and 32S have been measured to 2 or 3 parts in 10¹¹. This new technique has also been used to measure the dipole moment of PH+. During the course of these measurements, two significant, but previously unsuspected sources of systematic error were discovered, characterized and eliminated. Extensive tests for other sources of systematic error were performed and are described in detail. The mass measurements presented here provide a significant increase in precision over previous values for these masses, by factors of 3 to 700. The results have a broad range of physics applications: The mass of 136Xe is important for searches for neutrinoless double-beta-decay; the mass of 28Si is relevant to the re-definition of the artifact kilogram in terms of an atomic mass standard; the masses of 84,86Kr and 129,132,136Xe provide convenient reference masses for less precise mass spectrometers in diverse fields such as nuclear physics and chemistry; and the dipole moment of PH+ provides a test of molecular structure calculations.
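The measurement principle rests on the cyclotron relation ω_c = qB/m: for two ions of equal charge in the same magnetic field, the mass ratio is the inverse of the cyclotron-frequency ratio. A one-line sketch with illustrative frequencies only:

```python
def mass_ratio_from_frequencies(f_c1_hz, f_c2_hz):
    """For ions of equal charge q in the same field B, omega_c = qB/m
    implies m1/m2 = f_c2/f_c1: the heavier ion has the lower frequency.
    This is the relation underlying cyclotron-frequency-ratio mass
    measurements like those described above (numbers illustrative)."""
    return f_c2_hz / f_c1_hz

# An ion with half the cyclotron frequency of its partner is twice
# as massive.
ratio = mass_ratio_from_frequencies(1.0e6, 2.0e6)  # m1/m2 = 2.0
```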
New precision measurements of free neutron beta decay with cold neutrons
Baeßler, Stefan; Bowman, James David; Penttilä, Seppo I.; ...
2014-10-14
Precision measurements in free neutron beta decay serve to determine the coupling constants of beta decay and offer several stringent tests of the standard model. This study describes the free neutron beta decay program planned for the Fundamental Physics Beamline at the Spallation Neutron Source at Oak Ridge National Laboratory and puts it in the context of other recent and planned measurements of neutron beta decay observables.
Molecular Profiling of Liquid Biopsy Samples for Precision Medicine.
Campos, Camila D M; Jackson, Joshua M; Witek, Małgorzata A; Soper, Steven A
In the context of oncology, liquid biopsies consist of harvesting cancer biomarkers, such as circulating tumor cells, tumor-derived cell-free DNA, and extracellular vesicles, from bodily fluids. These biomarkers provide a source of clinically actionable molecular information that can enable precision medicine. Herein, we review technologies for the molecular profiling of liquid biopsy markers with special emphasis on the analysis of low abundant markers from mixed populations.
High-Precision Distribution of Highly Stable Optical Pulse Trains with 8.8 × 10⁻¹⁹ Instability
Ning, B.; Zhang, S. Y.; Hou, D.; Wu, J. T.; Li, Z. B.; Zhao, J. Y.
2014-01-01
The high-precision distribution of optical pulse trains via fibre links has had a considerable impact in many fields. In most published work, the accuracy is still fundamentally limited by unavoidable noise sources, such as thermal and shot noise from conventional photodiodes and thermal noise from mixers. Here, we demonstrate a new high-precision timing distribution system that uses a highly precise phase detector to markedly reduce the effect of these limitations. Instead of using photodiodes and microwave mixers, we use several fibre Sagnac-loop-based optical-microwave phase detectors (OM-PDs) to achieve optical-electrical conversion and phase measurements, thereby suppressing the sources of noise and achieving ultra-high accuracy. The results of a distribution experiment using a 10-km fibre link indicate that our system exhibits a residual instability of 2.0 × 10⁻¹⁵ at 1 s and 8.8 × 10⁻¹⁹ at 40,000 s and an integrated timing jitter as low as 3.8 fs in a bandwidth of 1 Hz to 100 kHz. This low instability and timing jitter make it possible for our system to be used in the distribution of optical-clock signals or in applications that require extremely accurate frequency/time synchronisation. PMID:24870442
The Effect of Neural Noise on Spike Time Precision in a Detailed CA3 Neuron Model
Kuriscak, Eduard; Marsalek, Petr; Stroffek, Julius; Wünsch, Zdenek
2012-01-01
Experimental and computational studies emphasize the role of the millisecond precision of neuronal spike times as an important coding mechanism for transmitting and representing information in the central nervous system. We investigate the spike time precision of a multicompartmental pyramidal neuron model of the CA3 region of the hippocampus under the influence of various sources of neuronal noise. We describe differences in the contribution to noise originating from voltage-gated ion channels, synaptic vesicle release, and vesicle quantal size. We analyze the effect of interspike intervals and of the voltage course preceding the firing of spikes on the spike-timing jitter. The main finding of this study is a ranking of the different noise sources according to their contribution to spike time precision. The most influential is synaptic vesicle release noise, which causes the spike jitter to vary from 1 ms to 7 ms around a mean value of 2.5 ms. Of second importance is the noise incurred by vesicle quantal size variation, which causes the spike time jitter to vary from 0.03 ms to 0.6 ms. Least influential is the voltage-gated channel noise, generating spike jitter from 0.02 ms to 0.15 ms. PMID:22778784
NASA Astrophysics Data System (ADS)
Leen, J. B.; Gupta, M.
2014-12-01
Nitrate contamination in water is a worldwide environmental problem, and source apportionment is critical to managing nitrate pollution. Fractionation caused by physical, chemical and biological processes alters the isotope ratios of nitrates (15N/14N, 18O/16O and 17O/16O), and biochemical nitrification and denitrification impart different intramolecular site preferences (15N14NO vs. 14N15NO). Additionally, atmospheric nitrate is anomalously enriched in 17O compared to other nitrate sources. This anomaly (Δ17O) is conserved during fractionation processes, providing a tracer of atmospheric nitrate. All of these effects can be used to apportion nitrate in soil. Current technology for measuring nitrate isotopes is complicated and costly: it involves conversion of nitrate to nitrous oxide (N2O), purification, preconcentration and measurement by isotope ratio mass spectrometer (IRMS). Site-specific measurements require a custom IRMS. There is a pressing need to make this measurement simpler and more accessible. Los Gatos Research has developed a next-generation mid-infrared Off-Axis Integrated Cavity Output Spectroscopy (OA-ICOS) analyzer to quantify all stable isotope ratios of N2O (δ15N, δ15Nα, δ15Nβ, δ18O, δ17O). We present the latest performance data demonstrating the precision and accuracy of the OA-ICOS-based measurement. At an N2O concentration of 322 ppb, the analyzer quantifies [N2O], δ15N, δ15Nα, δ15Nβ, and δ18O with precisions of ±0.05 ppb, ±0.4 ‰, ±0.45 ‰, ±0.6 ‰, and ±0.8 ‰, respectively (1σ, 100 s; 1σ, 1000 s for δ18O). Measurements of gas standards demonstrate accuracy better than ±1 ‰ for isotope ratios over a wide dynamic range (200 - 100,000 ppb). The measurement of δ17O requires a higher concentration (1 - 50 ppm), easily obtainable through conversion of nitrates in water. For 10 ppm of N2O, the instrument achieves a δ17O precision of ±0.05 ‰ (1σ, 1000 s).
This performance is sufficient to quantify atmospheric nitrate in soil and groundwater and may be used to differentiate other sources of nitrate for which the range of Δ17O values is much smaller. By measuring δ15N, δ15Nα, δ15Nβ, δ18O and δ17O, our instrument will help researchers unravel the complicated nitrate mixing problem to determine the sources and sinks of nitrate pollution.
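The δ values quoted above use the standard per-mil delta notation, δ = (R_sample/R_standard − 1) × 1000, where R is an isotope ratio such as 15N/14N. A minimal sketch of that arithmetic follows; the sample ratio is made up, and the AIR reference value is the commonly tabulated 15N/14N of atmospheric N2.

```python
R_STANDARD_15N = 0.0036765  # 15N/14N of atmospheric N2 (AIR reference)

def delta_permil(r_sample: float, r_standard: float) -> float:
    """Delta value in per mil: (R_sample / R_standard - 1) * 1000."""
    return (r_sample / r_standard - 1.0) * 1000.0

# Hypothetical measured 15N/14N ratio, slightly enriched relative to AIR
r_sample = 0.0036950
d15N = delta_permil(r_sample, R_STANDARD_15N)  # positive: heavier than AIR
```

The same formula applies to δ18O and δ17O with the appropriate oxygen reference ratios; Δ17O is then defined as the deviation of δ17O from the mass-dependent relation with δ18O.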
Microwave tunable laser source: A stable, precision tunable heterodyne local oscillator
NASA Technical Reports Server (NTRS)
Sachse, G. W.
1980-01-01
The development and capabilities of a tunable laser source utilizing a wideband electro-optic modulator and a CO2 laser are described. The precision tunability and high stability of the device are demonstrated with examples of laboratory spectroscopy. Heterodyne measurements are also presented to demonstrate the performance of the laser source as a heterodyne local oscillator. With the use of five CO2 isotope lasers and the 8 to 18 GHz sideband offset tunability of the modulator, calculations indicate that 50 percent spectral coverage in the 9 to 12 micron region is achievable. The wavelength accuracy and stability of this laser source are limited by the CO2 laser and are more than adequate for the measurement of narrow Doppler-broadened line profiles. The room-temperature operating capability and the programmability of the microwave tunable laser source are attractive features for in-the-field implementation. Although heterodyne measurements indicated some S/N degradation when using the device as a local oscillator, there does not appear to be any fundamental limitation to the heterodyne efficiency of this laser source. Through the use of a lower-noise-figure traveling wave tube amplifier and optical matching of the output beam with the photomixer, a substantial increase in the heterodyne S/N is expected.
Machine Learning and Decision Support in Critical Care
Johnson, Alistair E. W.; Ghassemi, Mohammad M.; Nemati, Shamim; Niehaus, Katherine E.; Clifton, David A.; Clifford, Gari D.
2016-01-01
Clinical data management systems typically provide caregiver teams with useful information, derived from large, sometimes highly heterogeneous, data sources that are often changing dynamically. Over the last decade there has been a significant surge in interest in using these data sources, from simply re-using the standard clinical databases for event prediction or decision support, to including dynamic and patient-specific information into clinical monitoring and prediction problems. However, in most cases, commercial clinical databases have been designed to document clinical activity for reporting, liability and billing reasons, rather than for developing new algorithms. With increasing excitement surrounding “secondary use of medical records” and “Big Data” analytics, it is important to understand the limitations of current databases and what needs to change in order to enter an era of “precision medicine.” This review article covers many of the issues involved in the collection and preprocessing of critical care data. The three challenges in critical care are considered: compartmentalization, corruption, and complexity. A range of applications addressing these issues are covered, including the modernization of static acuity scoring; on-line patient tracking; personalized prediction and risk assessment; artifact detection; state estimation; and incorporation of multimodal data sources such as genomic and free text data. PMID:27765959
A fiber-coupled incoherent light source for ultra-precise optical trapping
NASA Astrophysics Data System (ADS)
Menke, Tim; Schittko, Robert; Mazurenko, Anton; Tai, M. Eric; Lukin, Alexander; Rispoli, Matthew; Kaufman, Adam M.; Greiner, Markus
2017-04-01
The ability to engineer arbitrary optical potentials using spatial light modulation has opened up exciting possibilities in ultracold quantum gas experiments. Yet, despite the high trap quality currently achievable, interference-induced distortions caused by scattering along the optical path continue to impede more sensitive measurements. We present a design for a high-power, spatially and temporally incoherent light source that has the potential to reduce the impact of such distortions. The device is based on an array of non-lasing semiconductor emitters mounted on a single chip whose optical output is coupled into a multi-mode fiber. Coupling into a large number of fiber modes further reduces the already low spatial coherence of the input light, owing to the differing optical path lengths among the modes and the short coherence length of the light. In addition to theoretical calculations showcasing the feasibility of this approach, we present experimental measurements verifying the low degree of spatial coherence achievable with such a source, including a detailed analysis of the speckle contrast at the fiber end. We acknowledge support from the National Science Foundation, the Gordon and Betty Moore Foundation's EPiQS Initiative, an Air Force Office of Scientific Research MURI program and an Army Research Office MURI program.
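The benefit of populating many modes can be quantified with a textbook result: the speckle contrast of N equal-intensity, mutually incoherent modes scales as 1/√N. This sketch only illustrates that scaling; it does not model the actual emitter array or fiber of the paper.

```python
import math

def speckle_contrast(n_modes: int) -> float:
    """Speckle contrast C = sigma_I / <I> = 1 / sqrt(N) for N independent,
    equal-intensity, mutually incoherent modes (standard statistical-optics
    result for summed uncorrelated speckle patterns)."""
    if n_modes < 1:
        raise ValueError("need at least one mode")
    return 1.0 / math.sqrt(n_modes)

# A single coherent mode gives fully developed speckle (contrast 1.0);
# populating ~100 independent modes reduces the contrast tenfold.
single_mode = speckle_contrast(1)
many_modes = speckle_contrast(100)
```

This is why a multi-mode fiber fed by many incoherent emitters washes out the interference distortions that a single-mode coherent beam would produce.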
Demonstrating High-Accuracy Orbital Access Using Open-Source Tools
NASA Technical Reports Server (NTRS)
Gilbertson, Christian; Welch, Bryan
2017-01-01
Orbit propagation is fundamental to almost every space-based analysis. Currently, many system analysts use commercial software to predict the future positions of orbiting satellites. This is one of many capabilities that can be replicated, with great accuracy, without using expensive, proprietary software. NASA's SCaN (Space Communication and Navigation) Center for Engineering, Networks, Integration, and Communications (SCENIC) project plans to provide its analysis capabilities using a combination of internal and open-source software, allowing for a much greater measure of customization and flexibility, while reducing recurring software license costs. MATLAB and the open-source Orbit Determination Toolbox created by Goddard Space Flight Center (GSFC) were utilized to develop tools with the capability to propagate orbits, perform line-of-sight (LOS) availability analyses, and visualize the results. The developed programs are modular and can be applied for mission planning and viability analysis in a variety of Solar System applications. The tools can perform two- and N-body orbit propagation, find inter-satellite and satellite-to-ground-station LOS access (accounting for intermediate oblate-spheroid body blocking, geometric restrictions of the antenna field-of-view (FOV), and relativistic corrections), and create animations of planetary movement, satellite orbits, and LOS accesses. The code is the basis for SCENIC's broad analysis capabilities, including dynamic link analysis, dilution-of-precision navigation analysis, and orbital availability calculations.
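The two-body propagation at the core of such tools reduces to integrating d(r)/dt = v, d(v)/dt = −μ r/|r|³. A minimal from-scratch sketch with a classical Runge-Kutta 4 integrator follows; it is not the GSFC Orbit Determination Toolbox API, and the orbit is an invented circular LEO case.

```python
import math

MU_EARTH = 398600.4418  # km^3/s^2, Earth's gravitational parameter

def derivative(state):
    """Two-body dynamics: d(r)/dt = v, d(v)/dt = -mu * r / |r|^3."""
    x, y, z, vx, vy, vz = state
    r3 = (x * x + y * y + z * z) ** 1.5
    k = -MU_EARTH / r3
    return (vx, vy, vz, k * x, k * y, k * z)

def rk4_step(state, dt):
    """Advance the state by dt seconds with one classical RK4 step."""
    def shifted(s, deriv, h):
        return tuple(si + h * di for si, di in zip(s, deriv))
    k1 = derivative(state)
    k2 = derivative(shifted(state, k1, dt / 2.0))
    k3 = derivative(shifted(state, k2, dt / 2.0))
    k4 = derivative(shifted(state, k3, dt))
    return tuple(s + dt / 6.0 * (a + 2.0 * b + 2.0 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

# Circular orbit at 7000 km radius: speed v = sqrt(mu / r)
r0 = 7000.0
state = (r0, 0.0, 0.0, 0.0, math.sqrt(MU_EARTH / r0), 0.0)
for _ in range(60):              # propagate for 60 seconds, 1 s steps
    state = rk4_step(state, 1.0)
radius = math.hypot(state[0], state[1], state[2])
# for a circular orbit the radius should stay essentially constant
```

N-body propagation, oblateness, and relativistic corrections add further acceleration terms to `derivative` but leave this integration structure unchanged.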
Classification scheme and prevention measures for caught-in-between occupational fatalities.
Chi, Chia-Fen; Lin, Syuan-Zih
2018-04-01
The current study analyzed 312 caught-in-between fatalities caused by machinery and vehicles. A comprehensive and mutually exclusive coding scheme was developed to analyze and code each caught-in-between fatality in terms of age, gender, experience of the victim, type of industry, source of injury, and causes of the accident. Boolean algebra analysis was applied to these 312 caught-in-between fatalities to derive the minimal cut set (MCS) causes associated with each source of injury. Eventually, contributing factors and common accident patterns associated with (1) special process machinery, including textile, printing, and packaging machinery, (2) metal, woodworking, and special material machinery, (3) conveyors, (4) vehicles, (5) cranes, (6) construction machinery, and (7) elevators could be divided into three major groups through Boolean algebra and MCS analysis. The MCS causes associated with conveyors share the same primary causes as those of the special process machinery (textile, printing, packaging) and the metal, woodworking, and special material machinery. These fatalities can be eliminated by focusing on the prevention measures associated with lack of safeguards, working on a running machine or process, unintentional activation, unsafe posture or position, unsafe clothing, and defective safeguards. Other precise and effective interventions can be developed based on the identified groups of accident causes associated with each source of injury.
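A minimal cut set, in this context, is a combination of causes sufficient for an accident such that no proper subset of it is also sufficient. The reduction step can be sketched as subset filtering over observed cause combinations; the cause names below are invented for illustration and are not the paper's coding scheme.

```python
def minimal_cut_sets(cut_sets):
    """Keep only cut sets that contain no other cut set as a proper subset."""
    sets = [frozenset(c) for c in cut_sets]
    minimal, seen = [], set()
    for s in sets:
        # s is minimal if no strictly smaller cut set is contained in it
        if not any(other < s for other in sets) and s not in seen:
            seen.add(s)
            minimal.append(s)
    return minimal

# Hypothetical cause combinations extracted from fatality reports
observed = [
    {"no_safeguard", "running_machine"},
    {"no_safeguard", "running_machine", "unsafe_clothing"},  # superset, dropped
    {"unintentional_activation"},
]
mcs = minimal_cut_sets(observed)  # two minimal combinations remain
```

Prevention measures that break every minimal cut set (e.g. guarding plus lockout) then cover all the observed accident patterns.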
The Use of Scale-Dependent Precision to Increase Forecast Accuracy in Earth System Modelling
NASA Astrophysics Data System (ADS)
Thornes, Tobias; Duben, Peter; Palmer, Tim
2016-04-01
At the current pace of development, it may be decades before the 'exa-scale' computers needed to resolve individual convective clouds in weather and climate models become available to forecasters, and such machines will incur very high power demands. But resolution could be improved today by switching to more efficient, 'inexact' hardware with which variables can be represented in 'reduced precision'. Currently, all numbers in our models are represented as double-precision floating-point numbers - each requiring 64 bits of memory - to minimise rounding errors, regardless of spatial scale. Yet observational and modelling constraints mean that values of atmospheric variables are inevitably known less precisely on smaller scales, suggesting that this may be a waste of computer resources. More accurate forecasts might therefore be obtained by taking a scale-selective approach whereby the precision of variables is gradually decreased at smaller spatial scales to optimise the overall efficiency of the model. To study the effect of reducing precision to different levels on multiple spatial scales, we here introduce a new model atmosphere developed by extending the Lorenz '96 idealised system to encompass, for the first time, three tiers of variables representing large-, medium- and small-scale features. In this chaotic but computationally tractable system, the 'true' state can be defined by explicitly resolving all three tiers. The abilities of low-resolution (single-tier) double-precision models and of similar-cost high-resolution (two-tier) mixed-precision models to produce accurate forecasts of this 'truth' are compared. The high-resolution models outperform the low-resolution ones even when small-scale variables are resolved in half precision (16 bits). This suggests that using scale-dependent levels of precision in more complicated real-world Earth System models could allow forecasts to be made at higher resolution and with improved accuracy.
If adopted, this new paradigm would represent a revolution in numerical modelling that could be of great benefit to the world.
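The effect of reduced precision is easy to demonstrate on the single-tier Lorenz '96 system itself. The sketch below integrates the same initial state in 64-bit and 16-bit arithmetic and measures their divergence; it is only illustrative and does not reproduce the paper's three-tier extension or its scale-selective precision scheme.

```python
import numpy as np

def lorenz96_tendency(x, forcing=8.0):
    """Lorenz '96 tendency: dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F,
    with cyclic indices implemented via np.roll."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def euler_step(x, dt=0.01, dtype=np.float64):
    """One forward-Euler step with state and tendency rounded to dtype."""
    x = x.astype(dtype)
    return (x + dtype(dt) * lorenz96_tendency(x).astype(dtype)).astype(dtype)

x0 = np.full(40, 8.0)
x0[0] += 0.01                    # small perturbation off the fixed point
x_double = x0.copy()
x_half = x0.copy()
for _ in range(100):
    x_double = euler_step(x_double, dtype=np.float64)
    x_half = euler_step(x_half, dtype=np.float16)  # 16-bit half precision
# rounding-induced divergence between the two precisions after t = 1
drift = float(np.max(np.abs(x_double - x_half.astype(np.float64))))
```

In a scale-selective scheme, only the small-scale tier would be held in `np.float16` while the large-scale tier keeps higher precision, trading rounding error on small scales for affordable extra resolution.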
Sepúlveda, Nuno; Paulino, Carlos Daniel; Drakeley, Chris
2015-12-30
Several studies have highlighted the use of serological data in detecting a reduction in malaria transmission intensity. These studies have typically used serology as an adjunct measure, and no formal examination of sample size calculations for this approach has been conducted. A sample size calculator is proposed for cross-sectional surveys, based on simulating data from a reverse catalytic model that assumes a reduction in the seroconversion rate (SCR) at a given change point before sampling. The calculator uses logistic approximations to the underlying power curves for detecting a reduction in SCR relative to the hypothesis of a stable SCR for the same data. Sample sizes are illustrated for a hypothetical cross-sectional survey of an African population, assuming a known or unknown change point. Overall, the simulations demonstrate that power is strongly affected by whether the change point is assumed known or unknown. Small sample sizes are sufficient to detect strong reductions in SCR, but invariably lead to poor precision of estimates of the current SCR; in this situation, sample size is better determined by controlling the precision of SCR estimates. Conversely, larger sample sizes are required to detect more subtle reductions in malaria transmission, but these invariably increase precision while reducing putative estimation bias. The proposed sample size calculator, although based on data simulation, shows promise of being easily applicable to a range of populations and survey types. Since the change point is a major source of uncertainty, obtaining or assuming prior information about this parameter might reduce both the sample size and the chance of generating biased SCR estimates.
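The reversible catalytic model underlying such simulations gives age-dependent seroprevalence p(a) = λ/(λ+ρ) · (1 − e^{−(λ+ρ)a}), where λ is the SCR and ρ the seroreversion rate. The sketch below simulates one cross-sectional survey from a stable-SCR version of this model; the rates, age range, and sample size are illustrative, not the paper's, and the change-point machinery is omitted.

```python
import math
import random

def seroprevalence(age, scr, srr):
    """P(seropositive | age) under the reversible catalytic model:
    p(a) = scr / (scr + srr) * (1 - exp(-(scr + srr) * a))."""
    total = scr + srr
    return scr / total * (1.0 - math.exp(-total * age))

def simulate_survey(n, scr, srr, max_age=60.0, seed=0):
    """Draw n individuals with uniform ages in [1, max_age] and Bernoulli
    serostatus; return the observed seroprevalence."""
    rng = random.Random(seed)
    positives = 0
    for _ in range(n):
        age = rng.uniform(1.0, max_age)
        if rng.random() < seroprevalence(age, scr, srr):
            positives += 1
    return positives / n

# Hypothetical survey: SCR of 0.05/yr, seroreversion of 0.01/yr, n = 1000
observed = simulate_survey(1000, scr=0.05, srr=0.01)
```

A power calculation would repeat such simulations under reduced- versus stable-SCR scenarios and count how often the reduction is detected at a given significance level.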
Performance Simulations for a Spaceborne Methane Lidar Mission
NASA Technical Reports Server (NTRS)
Kiemle, C.; Kawa, Stephan Randolph; Quatrevalet, Mathieu; Browell, Edward V.
2014-01-01
Future spaceborne lidar measurements of key anthropogenic greenhouse gases are expected to close current observational gaps, particularly over remote, polar, and aerosol-contaminated regions, where current in situ and passive remote sensing techniques have difficulties. For methane, a "Methane Remote Lidar Mission" was proposed by Deutsches Zentrum fuer Luft- und Raumfahrt and Centre National d'Etudes Spatiales in the frame of a German-French climate monitoring initiative. Simulations assess the performance of this mission with the help of Moderate Resolution Imaging Spectroradiometer and Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations measurements of the earth's surface albedo and atmospheric optical depth. These are key environmental parameters for integrated path differential absorption lidar, which uses the surface backscatter to measure the total atmospheric methane column. Results show that a lidar with an average optical power of 0.45 W at 1.6 µm wavelength and a telescope diameter of 0.55 m, installed on a low Earth orbit platform (506 km), will measure methane columns at precisions of 1.2%, 1.7%, and 2.1% over land, water, and snow or ice surfaces, respectively, for monthly aggregated measurement samples within areas of 50 × 50 km². Globally, the mean precision for the simulated year 2007 is 1.6%, with a standard deviation of 0.7%. At high latitudes, lower reflectance due to snow and ice is compensated by denser measurements, owing to the orbital pattern. Over key methane source regions such as densely populated areas, boreal and tropical wetlands, or permafrost, our simulations show that the measurement precision will be between 1 and 2%.
mMass 3: a cross-platform software environment for precise analysis of mass spectrometric data.
Strohalm, Martin; Kavan, Daniel; Novák, Petr; Volný, Michael; Havlícek, Vladimír
2010-06-01
While tools for the automated analysis of MS and LC-MS/MS data are continuously improving, it is still often the case that at the end of an experiment, the mass spectrometrist will spend time carefully examining individual spectra. Current software support is mostly provided only by the instrument vendors, and the available software tools are often instrument-dependent. Here we present a new generation of mMass, a cross-platform environment for the precise analysis of individual mass spectra. The software covers a wide range of processing tasks such as import from various data formats, smoothing, baseline correction, peak picking, deisotoping, charge determination, and recalibration. Functions presented in the earlier versions such as in silico digestion and fragmentation were redesigned and improved. In addition to Mascot, an interface for ProFound has been implemented. A specific tool is available for isotopic pattern modeling to enable precise data validation. The largest available lipid database (from the LIPID MAPS Consortium) has been incorporated and together with the new compound search tool lipids can be rapidly identified. In addition, the user can define custom libraries of compounds and use them analogously. The new version of mMass is based on a stand-alone Python library, which provides the basic functionality for data processing and interpretation. This library can serve as a good starting point for other developers in their projects. Binary distributions of mMass, its source code, a detailed user's guide, and video tutorials are freely available from www.mmass.org.
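One of the processing tasks listed above, peak picking, can be illustrated as a simple local-maximum search with an intensity threshold. This is a toy sketch with invented data, not mMass's actual (far more careful) peak-picking algorithm.

```python
def pick_peaks(mz, intensity, threshold=0.0):
    """Return (m/z, intensity) pairs at interior local maxima whose
    intensity exceeds the threshold."""
    peaks = []
    for i in range(1, len(intensity) - 1):
        if (intensity[i] > threshold
                and intensity[i] > intensity[i - 1]
                and intensity[i] >= intensity[i + 1]):
            peaks.append((mz[i], intensity[i]))
    return peaks

# Tiny synthetic spectrum: two peaks above a noise floor
mz = [100.0, 100.1, 100.2, 100.3, 100.4, 100.5]
inten = [2.0, 10.0, 3.0, 1.0, 8.0, 2.0]
peaks = pick_peaks(mz, inten, threshold=5.0)
```

Real implementations add centroiding, signal-to-noise estimation, and baseline handling on top of this basic idea.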