Data Friction Meets Social Friction: Challenges for standardization in emerging fields of geoscience
NASA Astrophysics Data System (ADS)
Darch, P. T.
2017-12-01
Many interdisciplinary endeavors in the geosciences occur in emergent scientific fields. These fields are often characterized by heterogeneity of methods for the production and collection of data, and by data scarcity. This paper presents findings about processes of methods standardization from a long-term case study of an emergent, data-scarce field, the deep subseafloor biosphere. Researchers come from many physical and life science backgrounds to study interactions between microbial life in the seafloor and the physical environment it inhabits. Standardization of methods for collecting data promises multiple benefits to this field, including addressing data scarcity by enabling greater data reuse and promoting better interoperability with large-scale infrastructures, and fostering stronger collaborative links between researchers distributed across institutions and backgrounds. Ongoing standardization efforts in the field involve more than scientific judgments about which among a range of methods is most efficient, least biased, or most reliable. These efforts also encounter multiple difficult social challenges, including: lack of agreed-upon criteria for judging competing methods (should efficiency, bias, or reliability take priority?); lack of resources to carry out the work necessary to determine standards, a problem particularly acute in emergent fields; concerns that standardization is premature in such a new field, foreclosing the possibility of better methods being developed in the future; concerns that standardization could prematurely shut down important scientific debates; and concerns among some researchers that their own work may become obsolete should the methods chosen as standard differ from their own. The success of these standardization efforts will depend on addressing both scientific and social dimensions, to ensure widespread acceptance among researchers in the field.
Verification of the ISO calibration method for field pyranometers under tropical sky conditions
NASA Astrophysics Data System (ADS)
Janjai, Serm; Tohsing, Korntip; Pattarapanitchai, Somjet; Detkhon, Pasakorn
2017-02-01
Field pyranometers need to be calibrated annually, and the International Organization for Standardization (ISO) has defined a standard method (ISO 9847) for calibrating them. According to this standard method for outdoor calibration, the field pyranometers have to be compared to a reference pyranometer for a period of 2 to 14 days, depending on sky conditions. In this work, the ISO 9847 standard method was verified under tropical sky conditions. To verify the standard method, calibration of field pyranometers was conducted at a tropical site located in Nakhon Pathom (13.82° N, 100.04° E), Thailand, under various sky conditions. The sky conditions were monitored by using a sky camera. The calibration results for different time periods used for the calibration under various sky conditions were analyzed. It was found that the calibration periods given by this standard method could be reduced without significant change in the final calibration result. In addition, recommendations and a discussion on the use of this standard method in the tropics are also presented.
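For readers unfamiliar with series-comparison calibration, the arithmetic reduces to a ratio of integrated signals; the sketch below is an illustrative implementation with made-up readings, not the normative ISO 9847 procedure:

```python
# Hedged sketch: responsivity of a field pyranometer from synchronized series
# readings against a reference pyranometer, as a ratio of summed signals.
import numpy as np

v_field = np.array([8.21, 8.05, 7.90])    # field pyranometer output, mV (made up)
g_ref = np.array([812.0, 798.0, 781.0])   # reference irradiance, W/m^2 (made up)

responsivity = v_field.sum() / g_ref.sum()   # mV per (W/m^2)
calibration_factor = 1.0 / responsivity      # (W/m^2) per mV
print(f"responsivity = {responsivity * 1000:.2f} uV/(W/m^2)")
```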
Issues concerning international comparison of free-field calibrations of acoustical standards
NASA Astrophysics Data System (ADS)
Nedzelnitsky, Victor
2002-11-01
Primary free-field calibrations of laboratory standard microphones by the reciprocity method establish these microphones as reference standard devices for calibrating working standard microphones, other measuring microphones, and practical instruments such as sound level meters and personal sound exposure meters (noise dosimeters). These primary, secondary, and other calibrations are indispensable to the support of regulatory requirements, standards, and product characterization and quality control procedures important for industry, commerce, health, and safety. International Electrotechnical Commission (IEC) Technical Committee 29 Electroacoustics produces international documentary standards, including standards for primary and secondary free-field calibration and measurement procedures and their critically important application to practical instruments. This paper addresses some issues concerning calibrations, standards activities, and the international key comparison of primary free-field calibrations of IEC-type LS2 laboratory standard microphones that is being planned by the Consultative Committee for Acoustics, Ultrasound, and Vibration (CCAUV) of the International Committee for Weights and Measures (CIPM). This comparison will include free-field calibrations by the reciprocity method at participating major national metrology laboratories throughout the world.
Determination of antenna factors using a three-antenna method at open-field test site
NASA Astrophysics Data System (ADS)
Masuzawa, Hiroshi; Tejima, Teruo; Harima, Katsushige; Morikawa, Takao
1992-09-01
Recently, NIST has used the three-antenna method for calibration of the antenna factor of antennas used for EMI measurements. This method does not require the specially designed standard antennas that are necessary in the standard field method or the standard antenna method, and can be used at an open-field test site. This paper theoretically and experimentally examines the measurement errors of this method and evaluates the precision of the antenna-factor calibration. It is found that the main source of error is the non-ideal propagation characteristics of the test site, which should therefore be measured before the calibration. The precision of the antenna-factor calibration at the test site used in these experiments is estimated to be 0.5 dB.
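The structure of the three-antenna solve can be sketched as follows: each pairwise measurement yields, in dB, the sum of two unknown antenna factors plus a known site/frequency term, so the three pairings determine all three factors. The constant k and the measured values below are placeholders, not values from the paper:

```python
# Hedged sketch of the three-antenna linear system: m_ij = AF_i + AF_j + k (dB).
import numpy as np

m12, m13, m23 = 42.1, 43.7, 44.9   # pairwise site measurements, dB (made up)
k = 10.0                           # known site/frequency term, dB (assumed)

A = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1]], dtype=float)
af1, af2, af3 = np.linalg.solve(A, np.array([m12 - k, m13 - k, m23 - k]))
```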
On standardization of low symmetry crystal fields
NASA Astrophysics Data System (ADS)
Gajek, Zbigniew
2015-07-01
Standardization methods for low symmetry - orthorhombic, monoclinic and triclinic - crystal fields are formulated and discussed. Two alternative approaches are presented: the conventional one, based on the second-rank parameters, and a standardization based on the fourth-rank parameters. Mainly f-electron systems are considered, but some guidelines for d-electron systems and the spin Hamiltonian describing the zero-field splitting are given. The discussion focuses on premises for choosing the most suitable method, in particular on the inadequacy of the conventional one. A few examples from the literature illustrate this situation.
Ultrawide-field Fluorescein Angiography for Evaluation of Diabetic Retinopathy
Kong, Mingui; Lee, Mee Yon
2012-01-01
Purpose: To investigate the advantages of ultrawide-field fluorescein angiography (FA) over the standard fundus examination in the evaluation of diabetic retinopathy (DR). Methods: Ultrawide-field FAs were obtained in 118 eyes of 59 diabetic patients: 11 eyes with no DR, 71 eyes with nonproliferative diabetic retinopathy (NPDR), and 36 eyes with proliferative diabetic retinopathy (PDR), diagnosed by the standard method. The presence of peripheral abnormal lesions beyond the standard seven fields was examined. Results: Ultrawide-field FA images demonstrated peripheral microaneurysms in six (54.5%) of 11 eyes with no DR and in all eyes with moderate to severe NPDR and PDR. Peripheral retinal neovascularizations were detected in three (4.2%) of 71 eyes with NPDR and in 13 (36.1%) of 36 eyes with PDR. Peripheral vascular nonperfusion and vascular leakage were found in two-thirds of eyes with severe NPDR and PDR. Conclusions: Ultrawide-field FA demonstrates peripheral lesions beyond the standard fields, which can allow early detection and close evaluation of DR. PMID:23204797
29 CFR 1926.752 - Site layout, site-specific erection plan and construction sequence.
Code of Federal Regulations, 2011 CFR
2011-07-01
... standard test method of field-cured samples, either 75 percent of the intended minimum compressive design... the basis of an appropriate ASTM standard test method of field-cured samples, either 75 percent of the intended minimum compressive design strength or sufficient strength to support the loads imposed during...
Quantifying Uncertainty in Near Surface Electromagnetic Imaging Using Bayesian Methods
NASA Astrophysics Data System (ADS)
Blatter, D. B.; Ray, A.; Key, K.
2017-12-01
Geoscientists commonly use electromagnetic methods to image the Earth's near surface. Field measurements of EM fields are made (often with the aid of an artificial EM source) and then used to infer near surface electrical conductivity via a process known as inversion. In geophysics, the standard inversion tool kit is robust and can provide an estimate of the Earth's near surface conductivity that is both geologically reasonable and compatible with the measured field data. However, standard inverse methods struggle to provide a sense of the uncertainty in the estimate they provide. This is because the task of finding an Earth model that explains the data to within measurement error is non-unique: there are many such models, but the standard methods provide only one "answer." An alternative method, known as Bayesian inversion, seeks to explore the full range of Earth model parameters that can adequately explain the measured data, rather than attempting to find a single, "ideal" model. Bayesian inverse methods can therefore provide a quantitative assessment of the uncertainty inherent in trying to infer near surface conductivity from noisy, measured field data. This study applies a Bayesian inverse method (trans-dimensional Markov chain Monte Carlo) to transient airborne EM data previously collected over Taylor Valley - one of the McMurdo Dry Valleys in Antarctica. Our results confirm the reasonableness of previous estimates (made using standard methods) of near surface conductivity beneath Taylor Valley, and quantify the uncertainty associated with those estimates, demonstrating that Bayesian inverse methods can attach quantitative uncertainty to estimates of near surface conductivity.
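As a hedged illustration of the Bayesian idea (the study's actual method is trans-dimensional MCMC over layered conductivity models; this fixed-dimension toy with made-up numbers only shows how a posterior spread arises):

```python
# Toy Metropolis sampler for theta = log10(conductivity) with a flat prior.
import numpy as np

rng = np.random.default_rng(0)

def log_post(theta, data, noise=0.05):
    # Gaussian likelihood around a trivial stand-in forward model
    return -0.5 * np.sum(((data - theta) / noise) ** 2)

data = np.array([-1.02, -0.97, -1.05])   # synthetic "observations"
theta, chain = 0.0, []
for _ in range(5000):
    prop = theta + 0.1 * rng.normal()    # symmetric random-walk proposal
    if np.log(rng.uniform()) < log_post(prop, data) - log_post(theta, data):
        theta = prop
    chain.append(theta)
# The spread of `chain` after burn-in quantifies the posterior uncertainty.
```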
Modified Drop Tower Impact Tests for American Football Helmets.
Rush, G Alston; Prabhu, R; Rush, Gus A; Williams, Lakiesha N; Horstemeyer, M F
2017-02-19
A modified National Operating Committee on Standards for Athletic Equipment (NOCSAE) test method for American football helmet drop impact test standards is presented that would provide better assessment of a helmet's on-field impact performance by including a faceguard on the helmet. In this study, a merger of faceguard and helmet test standards is proposed. The need for a more robust systematic approach to football helmet testing procedures is emphasized by comparing representative results of the Head Injury Criterion (HIC), Severity Index (SI), and peak acceleration values for different helmets at different helmet locations under modified NOCSAE standard drop tower tests. Essentially, these comparative drop test results revealed that the faceguard adds a stiffening kinematic constraint to the shell that lessens total energy absorption. The current NOCSAE standard test methods can be improved to represent on-field helmet hits by attaching the faceguards to helmets and by including two new helmet impact locations (Front Top and Front Top Boss). The reported football helmet test method gives a more accurate representation of a helmet's performance and its ability to mitigate on-field impacts while promoting safer football helmets.
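The two metrics named above have standard integral definitions, sketched below for a sampled resultant head-acceleration trace a(t) in g's (a generic implementation, not NOCSAE's code):

```python
# Gadd Severity Index: SI = integral of a(t)^2.5 dt.
# Head Injury Criterion: HIC = max over [t1, t2] of (t2 - t1) * (mean a)^2.5.
import numpy as np

def severity_index(a, dt):
    return float(np.sum(np.asarray(a, dtype=float) ** 2.5) * dt)

def hic(a, dt, max_window_s=0.015):
    a = np.asarray(a, dtype=float)           # resultant acceleration, >= 0
    cum = np.concatenate([[0.0], np.cumsum(a) * dt])  # running integral of a
    w_max, best = int(max_window_s / dt), 0.0
    for i in range(len(a)):
        for j in range(i + 1, min(len(a), i + w_max) + 1):
            T = (j - i) * dt
            best = max(best, T * ((cum[j] - cum[i]) / T) ** 2.5)
    return best
```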
29 CFR Appendix A to Subpart Q of... - References to subpart Q of Part 1926
Code of Federal Regulations, 2010 CFR
2010-07-01
... (ASTM C39-86). • Standard Test Method for Making and Curing Concrete Test Specimens in the Field (ASTM C31-85). • Standard Test Method for Penetration Resistance of Hardened Concrete (ASTM C803-82... (ASTM C873-85). • Standard Method for Developing Early Age Compressive Test Values and Projecting Later...
Hydrogen Field Test Standard: Laboratory and Field Performance
Pope, Jodie G.; Wright, John D.
2015-01-01
The National Institute of Standards and Technology (NIST) developed a prototype field test standard (FTS) that incorporates three test methods that could be used by state weights and measures inspectors to periodically verify the accuracy of retail hydrogen dispensers, much as gasoline dispensers are tested today. The three field test methods are: 1) gravimetric, 2) Pressure, Volume, Temperature (PVT), and 3) master meter. The FTS was tested in NIST's Transient Flow Facility with helium gas and in the field at a hydrogen dispenser location. All three methods agree within 0.57 % and 1.53 % for all test drafts of helium gas in the laboratory setting and of hydrogen gas in the field, respectively. The time required to perform six test drafts is similar for all three methods, ranging from 6 h for the gravimetric and master meter methods to 8 h for the PVT method. The laboratory tests show that 1) it is critical to wait for thermal equilibrium to achieve density measurements in the FTS that meet the desired uncertainty requirements for the PVT and master meter methods; in general, we found that a wait time of 20 minutes introduces errors < 0.1 % and < 0.04 % in the PVT and master meter methods, respectively, and 2) buoyancy corrections are important for the lowest uncertainty gravimetric measurements. The field tests show that sensor drift can become the largest component of uncertainty, one that is not present in the laboratory setting. The scale was calibrated after it was set up at the field location. Checks of the calibration throughout testing showed drift of 0.031 %. Calibration of the master meter and the pressure sensors prior to travel to the field location and upon return showed significant drifts in their calibrations: 0.14 % and up to 1.7 %, respectively. This highlights the need for better sensor selection and/or more robust sensor testing prior to putting sensors into field service. All three test methods are capable of being successfully performed in the field and give equivalent answers if proper sensors without drift are used. PMID:26722192
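A hedged sketch of the PVT method's mass bookkeeping: the mass of gas in a fixed tank volume follows a real-gas law, m = PVM/(ZRT), and the dispensed mass is the difference between end and start states. The state values and compressibility factors below are placeholders; a real implementation would use an equation of state for hydrogen:

```python
# Dispensed mass by the PVT method: m = P*V*M / (Z*R*T), end minus start.
R = 8.314462618      # J/(mol K)
M_H2 = 2.016e-3      # kg/mol

def mass_kg(p_pa, v_m3, t_k, z):
    return p_pa * v_m3 * M_H2 / (z * R * t_k)

# Placeholder states (pressures, temperatures, Z) for illustration only:
m_start = mass_kg(2.0e6, 0.05, 295.0, 1.01)
m_end = mass_kg(35.0e6, 0.05, 300.0, 1.23)
print(f"dispensed: {m_end - m_start:.3f} kg")
```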
Steering Quantum Dynamics of a Two-Qubit System via Optimal Bang-Bang Control
NASA Astrophysics Data System (ADS)
Hu, Juju; Ke, Qiang; Ji, Yinghua
2018-02-01
The optimization of control time for quantum systems, which benefits efficiency and helps suppress environment-induced decoherence, has been an important topic in control science for decades. Based on an analysis of the advantages and disadvantages of existing Lyapunov control, and using a bang-bang optimal control technique, we investigate fast state control in a closed two-qubit quantum system and give three optimized control field design methods. Numerical simulation experiments indicate the effectiveness of the methods. Compared to the standard Lyapunov control or the standard bang-bang control method, the optimized control field design methods effectively shorten the state control time and avoid the high-frequency oscillation that occurs in bang-bang control.
Endoscope field of view measurement.
Wang, Quanzeng; Khanicheh, Azadeh; Leiner, Dennis; Shafer, David; Zobel, Jurgen
2017-03-01
The current International Organization for Standardization (ISO) standard (ISO 8600-3: 1997 including Amendment 1: 2003) for determining endoscope field of view (FOV) does not accurately characterize some novel endoscopic technologies such as endoscopes with a close focus distance and capsule endoscopes. We evaluated the endoscope FOV measurement method (the FOV WS method) in the current ISO 8600-3 standard and proposed a new method (the FOV EP method). We compared the two methods by measuring the FOV of 18 models of endoscopes (one device for each model) from seven key international manufacturers. We also estimated the device to device variation of two models of colonoscopes by measuring several hundreds of devices. Our results showed that the FOV EP method was more accurate than the FOV WS method, and could be used for all endoscopes. We also found that the labelled FOV values of many commercial endoscopes are significantly overstated. Our study can help endoscope users understand endoscope FOV and identify a proper method for FOV measurement. This paper can be used as a reference to revise the current endoscope FOV measurement standard.
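As background, target-based FOV measurement reduces to simple geometry: an endoscope at working distance d that just sees a target of width W has a full angular FOV of 2·atan(W/2d). The sketch below illustrates that relation only; it is not the ISO 8600-3 procedure or the proposed FOV EP method:

```python
import math

def fov_degrees(target_width_mm: float, distance_mm: float) -> float:
    """Full angular field of view subtended by a just-visible flat target."""
    return math.degrees(2.0 * math.atan(target_width_mm / (2.0 * distance_mm)))

print(fov_degrees(100.0, 50.0))  # 90.0 degrees for a 100 mm target at 50 mm
```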
40 CFR 60.52Da - Recordkeeping requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Electric Utility... opacity field data sheets; (2) For each performance test conducted using Method 22 of appendix A-4 of this... performance test; (iii) Copies of all visible emission observer opacity field data sheets; and (iv...
Preparation and application of in-fibre internal standardization solid-phase microextraction.
Zhao, Wennan; Ouyang, Gangfeng; Pawliszyn, Janusz
2007-03-01
The in-fibre standardization method is a novel approach that has been developed for field sampling/sample preparation, in which an internal standard is pre-loaded onto a solid-phase microextraction (SPME) fibre for calibration of the extraction of target analytes in field samples. The same method can also be used for in-vial sample analysis. In this study, different techniques to load the standard to a non-porous SPME fibre were investigated. It was found that the appropriateness of the technique depends on the physical properties of the standards that are used for the analysis. Headspace extraction of the standard dissolved in pumping oil works well for volatile compounds. Conversely, headspace extraction of the pure standard is an effective approach for semi-volatile compounds. For compounds with low volatility, a syringe-fibre transfer method and direct extraction of the standard dissolved in a solvent exhibited a good reproducibility (<5% RSD). The main advantage of the approaches investigated in this study is that the standard generation vials can be reused for hundreds of analyses without exhibiting significant loss. Moreover, most of the standard loading processes studied can be performed automatically, which is efficient and precise. Finally, the standard loading technique and in-fibre standardization method were applied to a complex matrix (milk) and the results illustrated that the matrix effect can be effectively compensated for with this approach.
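The calibration arithmetic behind in-fibre standardization can be sketched from the kinetic-calibration literature (symbols here are illustrative, not the authors' notation): desorption of the pre-loaded standard and absorption of the analyte are assumed isotropic, so

$$ \frac{q(t)}{q_0}=e^{-at}, \qquad \frac{n(t)}{n_e}=1-e^{-at} \;\;\Rightarrow\;\; n_e=\frac{n(t)}{1-q(t)/q_0}, $$

where $q_0$ is the pre-loaded amount of standard, $q(t)$ the amount remaining after sampling, $n(t)$ the analyte extracted, and $n_e$ the equilibrium amount used for quantification.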
This SOP describes the method for conducting internal field audits and quality control procedures. Internal field audits will be conducted to ensure the collection of high quality data. Internal field audits will be conducted by Field Auditors (the Field QA Officer and the Field...
Luce, T. C.; Petty, C. C.; Meyer, W. H.; ...
2016-11-02
An approximate method to correct motional Stark effect (MSE) spectroscopy for the effects of intrinsic plasma electric fields has been developed. The motivation for using an approximate method is to incorporate electric field effects into between-pulse or real-time analysis of the current density or safety factor profile. The toroidal velocity term in the momentum balance equation is normally the dominant contribution to the electric field orthogonal to the flux surface over most of the plasma. When this approximation is valid, the correction to the MSE data can be included in a form like that used when electric field effects are neglected. This allows measurements of the toroidal velocity to be integrated into the interpretation of the MSE polarization angles without changing how the data are treated in existing codes. In some cases, such as the DIII-D system, the correction is especially simple, due to the details of the neutral beam and MSE viewing geometry. The correction method is compared, using DIII-D data in a variety of plasma conditions, to analysis that assumes no radial electric field is present and to analysis that uses the standard correction method, which involves significant human intervention for profile fitting. The comparison shows that the new correction method is close to the standard one, and in all cases appears to offer a better result than use of the uncorrected data. Lastly, the method has been integrated into the standard DIII-D equilibrium reconstruction code in use for analysis between plasma pulses and is sufficiently fast that it will be implemented in real-time equilibrium analysis for control applications.
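A hedged sketch of the underlying relation (the generic radial force balance; signs and geometry are machine-specific, and this is not the DIII-D-specific expression):

$$ E_r=\frac{1}{Z_i e n_i}\frac{\partial p_i}{\partial r}+v_\phi B_\theta-v_\theta B_\phi \;\approx\; v_\phi B_\theta, $$

so a measured toroidal rotation profile $v_\phi$, together with the poloidal field $B_\theta$, supplies the dominant electric-field term folded into the MSE polarization-angle analysis.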
US Fish and Wildlife Service biomonitoring operations manual, Appendices A--K
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gianotto, D.F.; Rope, R.C.; Mondecar, M.
1993-04-01
Volume 2 contains Appendices and Summary Sheets for the following areas: A-Legislative Background and Key to Relevant Legislation, B-Biomonitoring Operations Workbook, C-Air Monitoring, D-Introduction to the Flora and Fauna for Biomonitoring, E-Decontamination Guidance Reference Field Methods, F-Documentation Guidance, Sample Handling, and Quality Assurance/Quality Control Standard Operating Procedures, G-Field Instrument Measurements Reference Field Methods, H-Ground Water Sampling Reference Field Methods, I-Sediment Sampling Reference Field Methods, J-Soil Sampling Reference Field Methods, K-Surface Water Reference Field Methods. Appendix B explains how to set up a strategy to enter information in the "disk workbook". Appendix B is enhanced by DE97006389, an on-line workbook for users to be able to make revisions to their own biomonitoring data.
WOODSTOVE EMISSION MEASUREMENT METHODS COMPARISON AND EMISSION FACTORS UPDATE
This paper compares various field and laboratory woodstove emission measurement methods. In 1988, the U.S. EPA promulgated performance standards for residential wood heaters (woodstoves). Over the past several years, a number of field studies have been undertaken to determine the a...
Hearing Aid–Related Standards and Test Systems
Ravn, Gert; Preves, David
2015-01-01
Many documents describe standardized methods and standard equipment requirements in the field of audiology and hearing aids. These standards ensure a uniform level and a high quality of both the methods and the equipment used in audiological work. The standards create the basis for measuring performance in a reproducible manner, independently of how, when, and by whom parameters have been measured. This article explains and focuses on relevant acoustic and electromagnetic compatibility parameters and describes several available test systems. PMID:27516709
Field manual for the collection of Navajo Nation streamflow-gage data
Hart, Robert J.; Fisk, Gregory G.
2014-01-01
The Field Manual for the Collection of Navajo Nation Streamflow-Gage Data (Navajo Field Manual) is based on established (standard) U.S. Geological Survey streamflow-gaging methods and provides guidelines specifically designed for the Navajo Department of Water Resources personnel who establish and maintain streamflow gages. The Navajo Field Manual addresses field visits, including essential field equipment and the selection of and routine visits to streamflow-gaging stations, examines surveying methods for determining peak flows (indirect measurements), discusses safety considerations, and defines basic terms.
NASA Astrophysics Data System (ADS)
Yi, Chen; Isaev, A. E.; Yuebing, Wang; Enyakov, A. M.; Teng, Fei; Matveev, A. N.
2011-01-01
A description is given of the COOMET project 473/RU-a/09: a pilot comparison of hydrophone calibrations at frequencies from 250 Hz to 200 kHz between Hangzhou Applied Acoustics Research Institute (HAARI, China)—pilot laboratory—and Russian National Research Institute for Physicotechnical and Radio Engineering Measurements (VNIIFTRI, Designated Institute of Russia of the CIPM MRA). Two standard hydrophones, B&K 8104 and TC 4033, were calibrated and compared to assess the current state of hydrophone calibration of HAARI (China) and Russia. Three different calibration methods were applied: a vibrating column method, a free-field reciprocity method and a comparison method. The standard facilities of each laboratory were used, and three different sound fields were applied: pressure field, free-field and reverberant field. The maximum deviation of the sensitivities of two hydrophones between the participants' results was 0.36 dB.
Comparison of Field Methods and Models to Estimate Mean Crown Diameter
William A. Bechtold; Manfred E. Mielke; Stanley J. Zarnoch
2002-01-01
The direct measurement of crown diameters with logger's tapes adds significantly to the cost of extensive forest inventories. We undertook a study of 100 trees to compare this measurement method to four alternatives: two field instruments, ocular estimates, and regression models. Using the taping method as the standard of comparison, accuracy of the tested...
Coherent and Semiclassical States of a Charged Particle in a Constant Electric Field
NASA Astrophysics Data System (ADS)
Adorno, T. C.; Pereira, A. S.
2018-05-01
The method of integrals of motion is used to construct families of generalized coherent states of a nonrelativistic spinless charged particle in a constant electric field. Families of states, differing in the values of their standard deviations at the initial time, are obtained. Depending on the initial values of the standard deviations, and also on the electric field, it turns out to be possible to identify some families with semiclassical states.
Bolinsson, Hans; Lu, Yi; Hall, Stephen; Nilsson, Lars; Håkansson, Andreas
2018-01-19
This study suggests a novel method for determination of the channel height in asymmetrical flow field-flow fractionation (AF4), which can be used for calibration of the channel for hydrodynamic radius determinations. The novel method uses an oil-in-water nanoemulsion together with multi angle light scattering (MALS) and elution theory to determine channel height from an AF4 experiment. The method is validated using two orthogonal methods; first, by using standard particle elution experiments and, secondly, by imaging an assembled and carrier liquid filled channel by x-ray computed tomography (XCT). It is concluded that the channel height can be determined with approximately the same accuracy as with the traditional channel height determination technique. However, the nanoemulsion method can be used under more challenging conditions than standard particles, as the nanoemulsion remains stable in a wider pH range than the previously used standard particles. Moreover, the novel method is also more cost effective.
Evaluation and Field Assessment of Bifacial Photovoltaic Module Power Rating Methodologies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deline, Chris; MacAlpine, Sara; Marion, Bill
2016-11-21
1-sun power ratings for bifacial modules are currently undefined. This is partly because there is no standard definition of rear irradiance given 1000 W/m² on the front. Using field measurements and simulations, we evaluate multiple deployment scenarios for bifacial modules and provide details on the amount of irradiance that could be expected. A simplified case that represents a single module deployed under conditions consistent with existing 1-sun irradiance standards leads to a bifacial reference condition of 1000 W/m² Gfront and 130-140 W/m² Grear. For fielded systems of bifacial modules, Grear magnitude and spatial uniformity will be affected by self-shade from adjacent modules, varied ground cover, and ground-clearance height. A standard measurement procedure for bifacial modules is also currently undefined. A proposed international standard is under development, which provides the motivation for this work. Here, we compare outdoor field measurements of bifacial modules with irradiance on both sides with proposed indoor test methods where irradiance is only applied to one side at a time. The indoor method has multiple advantages, including controlled and repeatable irradiance and thermal environment, along with allowing the use of conventional single-sided flash test equipment. The comparison results are promising, showing that the indoor and outdoor methods agree within 1%-2% for multiple rear-irradiance conditions and bifacial module types.
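A hedged note on the rating arithmetic: proposed bifacial test methods commonly fold the rear irradiance into an equivalent front-side level through the module's bifaciality coefficient $\varphi$ (this states the general idea, not a quotation of the draft standard):

$$ G_E=G_{\mathrm{front}}+\varphi\,G_{\mathrm{rear}}, $$

so the reference condition above corresponds to an equivalent irradiance of roughly $1000+\varphi\,(130\text{-}140)$ W/m² for a module with bifaciality $\varphi$.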
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richmond, Neil, E-mail: neil.richmond@stees.nhs.uk; Brackenridge, Robert
2014-04-01
Tissue-phantom ratios (TPRs) are a common dosimetric quantity used to describe the change in dose with depth in tissue. These can be challenging and time consuming to measure. The conversion of percentage depth dose (PDD) data using standard formulae is widely employed as an alternative method in generating TPR. However, the applicability of these formulae for small fields has been questioned in the literature. Functional representation has also been proposed for small-field TPR production. This article compares measured TPR data for small 6 MV photon fields against that generated by conversion of PDD using standard formulae to assess the efficacy of the conversion data. By functionally fitting the measured TPR data for square fields greater than 4 cm in length, the TPR curves for smaller fields are generated and compared with measurements. TPRs and PDDs were measured in a water tank for a range of square field sizes. The PDDs were converted to TPRs using standard formulae. TPRs for fields of 4 × 4 cm² and larger were used to create functional fits. The parameterization coefficients were used to construct extrapolated TPR curves for 1 × 1-cm², 2 × 2-cm², and 3 × 3-cm² fields. The TPR data generated using standard formulae were in excellent agreement with direct TPR measurements. The TPR data for 1 × 1-cm², 2 × 2-cm², and 3 × 3-cm² fields created by extrapolation of the larger field functional fits gave inaccurate initial results. The corresponding mean differences for the 3 fields were 4.0%, 2.0%, and 0.9%. Generation of TPR data using a standard PDD-conversion methodology has been shown to give good agreement with our directly measured data for small fields. However, extrapolation of TPR data using the functional fit to fields of 4 × 4 cm² or larger resulted in generation of TPR curves that did not compare well with the measured data.
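For context, one common textbook form of the depth-dose-to-tissue-ratio conversion (after Khan; notation illustrative, and not necessarily the exact formulae used in this study) is

$$ \mathrm{TMR}(d,r_d)=\frac{\mathrm{PDD}(d,r,f)}{100}\left(\frac{f+d}{f+d_{\max}}\right)^{2}\frac{S_p(r_{d_{\max}})}{S_p(r_d)}, $$

where $f$ is the SSD, $r$ the field size at the surface, $r_d$ the field size at depth $d$, and $S_p$ the phantom scatter factor.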
CTEPP STANDARD OPERATING PROCEDURE FOR SETTING UP A HOUSEHOLD SAMPLING SCHEDULE (SOP-2.10)
This SOP describes the method for scheduling study subjects for field sampling activities in North Carolina (NC) and Ohio (OH). There are three field sampling teams with two staff members on each team. Two field sampling teams collect the field data simultaneously. A third fiel...
NHEXAS PHASE I MARYLAND STUDY--STANDARD OPERATING PROCEDURE FOR TRAINING OF FIELD TECHNICIANS (G07)
The purpose of this SOP is to describe the method used for training field technicians. The SOP outlines the responsibilities of the Field Technician (FT) and the Field Coordination Center Supervisor (FCC-S) before, during, and after sampling at residences, and the training syste...
Melanins and melanogenesis: methods, standards, protocols.
d'Ischia, Marco; Wakamatsu, Kazumasa; Napolitano, Alessandra; Briganti, Stefania; Garcia-Borron, José-Carlos; Kovacs, Daniela; Meredith, Paul; Pezzella, Alessandro; Picardo, Mauro; Sarna, Tadeusz; Simon, John D; Ito, Shosuke
2013-09-01
Despite considerable advances in the past decade, melanin research still suffers from the lack of universally accepted and shared nomenclature, methodologies, and structural models. This paper stems from the joint efforts of chemists, biochemists, physicists, biologists, and physicians with recognized and consolidated expertise in the field of melanins and melanogenesis, who critically reviewed and experimentally revisited methods, standards, and protocols to provide for the first time a consensus set of recommended procedures to be adopted and shared by researchers involved in pigment cell research. The aim of the paper was to define an unprecedented frame of reference built on cutting-edge knowledge and state-of-the-art methodology, to enable reliable comparison of results among laboratories and new progress in the field based on standardized methods and shared information.
Accelerating 4D flow MRI by exploiting vector field divergence regularization.
Santelli, Claudio; Loecher, Michael; Busch, Julia; Wieben, Oliver; Schaeffter, Tobias; Kozerke, Sebastian
2016-01-01
To improve velocity vector field reconstruction from undersampled four-dimensional (4D) flow MRI, divergence of the measured flow field was penalized during reconstruction. Iterative image reconstruction in which magnitude and phase are regularized separately in alternating iterations was implemented. The approach allows incorporating prior knowledge of the flow field being imaged. In the present work, velocity data were regularized to reduce divergence, using either divergence-free wavelets (DFW) or a finite difference (FD) method using the ℓ1-norm of divergence and curl. The reconstruction methods were tested on a numerical phantom and in vivo data. Results of the DFW and FD approaches were compared with data obtained with standard compressed sensing (CS) reconstruction. Relative to standard CS, directional errors of vector fields and divergence were reduced by 55-60% and 38-48% for three- and six-fold undersampled data with the DFW and FD methods. Velocity vector displays of the numerical phantom and in vivo data were found to be improved upon DFW or FD reconstruction. Regularization of vector field divergence in image reconstruction from undersampled 4D flow data is a valuable approach to improve reconstruction accuracy of velocity vector fields.
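The finite-difference flavor of the penalty is easy to sketch (array shapes and the weight lam are assumptions here, not the paper's implementation):

```python
# l1 penalty on the divergence of a 3-D velocity field, by central differences.
import numpy as np

def divergence(vx, vy, vz, dx=1.0, dy=1.0, dz=1.0):
    return (np.gradient(vx, dx, axis=0)
            + np.gradient(vy, dy, axis=1)
            + np.gradient(vz, dz, axis=2))

def l1_div_penalty(vx, vy, vz, lam=1e-2):
    """Term lam * ||div v||_1 added to the data-fidelity cost."""
    return lam * np.abs(divergence(vx, vy, vz)).sum()
```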
Quality assurance, training, and certification in ozone air pollution studies
Susan Schilling; Paul Miller; Brent Takemoto
1996-01-01
Uniform, or standard, measurement methods are critical to projects monitoring change in forest systems. Standardized methods, with known or estimable errors, contribute greatly to the confidence associated with decisions made on the basis of field data collections (Zedaker and Nicholas 1990). Quality assurance (QA) for the measurement process includes operations and...
OBJECTIVE: Devise a method to standardize responses of cells to MF-exposure in different incubator environments. METHODS: We compared the cell responses to generated MF in a standard cell-culture incubator (Forma, model #3158) with cell responses to the same exposure when a mu-m...
Evaluation of Breast Sentinel Lymph Node Coverage by Standard Radiation Therapy Fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rabinovitch, Rachel; Ballonoff, Ari; Newman, Francis M.S.
2008-04-01
Background: Biopsy of the breast sentinel lymph node (SLN) is now a standard staging procedure for early-stage invasive breast cancer. The anatomic location of the breast SLN and its relationship to standard radiation fields has not been described. Methods and Materials: A retrospective review of radiotherapy treatment planning data sets was performed in patients with breast cancer who had undergone SLN biopsy, and those with a surgical clip at the SLN biopsy site were identified. The location of the clip was evaluated relative to vertebral body level on an anterior-posterior digitally reconstructed radiograph, treated whole-breast tangential radiation fields, and standard axillary fields in 106 data sets meeting these criteria. Results: The breast SLN varied in vertebral body level position, ranging from T2 to T7 but most commonly opposite T4. The SLN clip was located below the base of the clavicle in 90%, and hence would be excluded from standard axillary radiotherapy fields where the inferior border is placed at this level. The clip was within the irradiated whole-breast tangent fields in 78%, beneath the superior-posterior corner multileaf collimators in 12%, and outside the tangent field borders in 10%. Conclusions: Standard axillary fields do not encompass the lymph nodes at highest risk of containing tumor in breast cancer patients. Elimination of the superior-posterior corner MLCs from the tangent field design would result in inclusion of the breast SLN in 90% of patients treated with standard whole-breast irradiation.
Francy, D.S.; Hart, T.L.; Virosteck, C.M.
1996-01-01
Bacterial injury, survival, and regrowth were investigated by use of replicate flow-through incubation chambers placed in the Cuyahoga River or Lake Erie in the greater Cleveland metropolitan area during seven 4-day field studies. The chambers contained wastewater or combined-sewer-overflow (CSO) effluents treated three ways-unchlorinated, chlorinated, and dechlorinated. At timestep intervals, the chamber contents were analyzed for concentrations of injured and healthy fecal coliforms by use of standard selective and enhanced-recovery membrane-filtration methods. Mean percent injuries and survivals were calculated from the fecal-coliform concentration data for each field study. The results of analysis of variance (ANOVA) indicated that treatment affected mean percent injury and survival, whereas site did not. In the warm-weather Lake Erie field study, but not in the warm-weather Cuyahoga River studies, the results of ANOVA indicated that dechlorination enhanced the repair of injuries and regrowth of chlorine-injured fecal coliforms on culture media over chlorination alone. The results of ANOVA on the percent injury from CSO effluent field studies indicated that dechlorination reduced the ability of organisms to recover and regrow on culture media over chlorination alone. However, because of atypical patterns of concentration increases and decreases in some CSO effluent samples, more work needs to be done before the effect of dechlorination and chlorination on reducing fecal-coliform concentrations in CSO effluents can be confirmed. The results of ANOVA on percent survivals found statistically significant differences among the three treatment methods for all but one study. Dechlorination was found to be less effective than chlorination alone in reducing the survival of fecal coliforms in wastewater effluent, but not in CSO effluent. If the concentration of fecal coliforms determined by use of the enhanced-recovery method can be predicted accurately from the concentration found by use of the standard method, then increased monitoring and expense to detect chlorine-injured organisms would be unnecessary. The results of linear regression analysis, however, indicated that the relation between enhanced-recovery and standard-method concentrations was best represented when the data were grouped by treatment. The model generated from linear regression of the unchlorinated data set provided an accurate estimate of enhanced-recovery concentrations from standard-method concentrations, whereas the models generated from the chlorinated and dechlorinated data sets did not. In addition, evaluation of fecal-coliform concentrations found in field studies in terms of Ohio recreational water-quality standards showed that concentrations obtained by standard and enhanced-recovery methods were not comparable. Sample treatment and analysis methods were found to affect the percentage of samples meeting and exceeding Ohio's bathing-water, primary-contact, and secondary-contact standards. Therefore, determining the health risk of swimming in receiving waters was often difficult without information on enhanced-recovery method concentrations and was especially difficult in waters receiving high proportions of chlorinated or dechlorinated effluents.
Brown, Richard J C; Beccaceci, Sonya; Butterfield, David M; Quincey, Paul G; Harris, Peter M; Maggos, Thomas; Panteliadis, Pavlos; John, Astrid; Jedynska, Aleksandra; Kuhlbusch, Thomas A J; Putaud, Jean-Philippe; Karanasiou, Angeliki
2017-10-18
The European Committee for Standardisation (CEN) Technical Committee 264 'Air Quality' has recently produced a standard method for the measurement of organic carbon and elemental carbon in PM2.5 within its working group 35, in response to the requirements of European Directive 2008/50/EC. It is expected that this method will be used in future by all Member States making measurements of the carbonaceous content of PM2.5. This paper details the results of a laboratory and field measurement campaign and the statistical analysis performed to validate the standard method, assess its uncertainty and define its working range, to provide clarity and confidence in the underpinning science for future users of the method. The statistical analysis showed that the expanded combined uncertainty for transmittance protocol measurements of OC, EC and TC is expected to be below 25%, at the 95% level of confidence, above filter loadings of 2 μg cm⁻². An estimate of the detection limit of the method for total carbon was 2 μg cm⁻². As a result of the laboratory and field measurement campaign, the EUSAAR2 transmittance measurement protocol was chosen as the basis of the standard method EN 16909:2017.
NASA Astrophysics Data System (ADS)
Muji Susantoro, Tri; Wikantika, Ketut; Saepuloh, Asep; Handoyo Harsolumakso, Agus
2018-05-01
Selection of vegetation indices for plant mapping is needed to provide the best information on plant conditions. The methods used in this research are the standard deviation and linear regression. This research sought to determine the vegetation indices best suited for mapping sugarcane conditions around oil and gas fields. The data used in this study are Landsat 8 OLI/TIRS. The standard deviation analysis on the 23 vegetation indices with 27 samples resulted in the six highest-standard-deviation vegetation indices, namely GRVI, SR, NLI, SIPI, GEMI and LAI, with standard deviation values of 0.47, 0.43, 0.30, 0.17, 0.16 and 0.13. Regression correlation analysis on the 23 vegetation indices with 280 samples resulted in six vegetation indices, namely NDVI, ENDVI, GDVI, VARI, LAI and SIPI, selected on the basis of the regression correlations, with the lowest R² value being 0.8. The combined analysis of the standard deviation and the regression correlation yielded five vegetation indices: NDVI, ENDVI, GDVI, LAI and SIPI. The results of both methods show that a combination of the two is needed to produce a good analysis of sugarcane conditions. This has been verified through field surveys and showed good results for the prediction of microseepages.
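For reference, the indices are simple band arithmetic; NDVI, for example, is (NIR - Red)/(NIR + Red), computed here for Landsat 8 OLI band 5 (NIR) and band 4 (red) with made-up reflectances, followed by the standard-deviation screening described in the text:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red + 1e-12)   # epsilon guards divide-by-zero

samples = {"NDVI": ndvi([0.42, 0.35, 0.50], [0.08, 0.12, 0.06])}
# Rank candidate indices by their standard deviation across field samples:
ranked = sorted(samples, key=lambda name: np.std(samples[name]), reverse=True)
```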
Non-perturbative background field calculations
NASA Astrophysics Data System (ADS)
Stephens, C. R.
1988-01-01
New methods are developed for calculating one loop functional determinants in quantum field theory. Instead of relying on a calculation of all the eigenvalues of the small fluctuation equation, these techniques exploit the ability of the proper time formalism to reformulate an infinite dimensional field theoretic problem into a finite dimensional covariant quantum mechanical analog, thereby allowing powerful tools such as the method of Jacobi fields to be used advantageously in a field theory setting. More generally the methods developed herein should be extremely valuable when calculating quantum processes in non-constant background fields, offering a utilitarian alternative to the two standard methods of calculation—perturbation theory in the background field or taking the background field into account exactly. The formalism developed also allows for the approximate calculation of covariances of partial differential equations from a knowledge of the solutions of a homogeneous ordinary differential equation.
Chou, Ching-Yu; Ferrage, Fabien; Aubert, Guy; Sakellariou, Dimitris
2015-07-17
Standard Magnetic Resonance magnets produce a single homogeneous field volume, where the analysis is performed. Nonetheless, several modern applications could benefit from the generation of multiple homogeneous field volumes along the axis and inside the bore of the magnet. In this communication, we propose a straightforward method using a combination of ring structures of permanent magnets in order to cancel the gradient of the stray field in a series of distinct volumes. These concepts were demonstrated numerically on an experimentally measured magnetic field profile. We discuss advantages and limitations of our method and present the key steps required for an experimental validation.
Samuel V. Glass; Stanley D. Gatland II; Kohta Ueno; Christopher J. Schumacher
2017-01-01
ASHRAE Standard 160, Criteria for Moisture-Control Design Analysis in Buildings, was published in 2009. The standard sets criteria for moisture design loads, hygrothermal analysis methods, and satisfactory moisture performance of the building envelope. One of the evaluation criteria specifies conditions necessary to avoid mold growth. The current standard requires that...
Martin, Thomas E.; Riordan, Margaret M.; Repin, Rimi; Mouton, James C.; Blake, William M.
2017-01-01
Aim: Adult survival is central to theories explaining latitudinal gradients in life history strategies. Life history theory predicts higher adult survival in tropical than north temperate regions given lower fecundity and parental effort. Early studies were consistent with this prediction, but standard-effort netting studies in recent decades suggested that apparent survival rates in temperate and tropical regions strongly overlap. Such results do not fit with life history theory. Targeted marking and resighting of breeding adults yielded higher survival estimates in the tropics, but this approach is thought to overestimate survival because it does not sample social and age classes with lower survival. We compared the effect of field methods on tropical survival estimates and their relationships with life history traits. Location: Sabah, Malaysian Borneo. Time period: 2008–2016. Major taxon: Passeriformes. Methods: We used standard-effort netting and resighted individuals of all social and age classes of 18 tropical songbird species over 8 years. We compared apparent survival estimates between these two field methods with differing analytical approaches. Results: Estimated detection and apparent survival probabilities from standard-effort netting were similar to those from other tropical studies that used standard-effort netting. Resighting data verified that a high proportion of individuals that were never recaptured in standard-effort netting remained in the study area, and many were observed breeding. Across all analytical approaches, addition of resighting yielded substantially higher survival estimates than did standard-effort netting alone. These apparent survival estimates were higher than for temperate zone species, consistent with latitudinal differences in life histories. Moreover, apparent survival estimates from addition of resighting, but not from standard-effort netting alone, were correlated with parental effort as measured by egg temperature across species. Main conclusions: Inclusion of resighting showed that standard-effort netting alone can negatively bias apparent survival estimates and obscure life history relationships across latitudes and among tropical species.
A field protocol to monitor cavity-nesting birds
J. Dudley; V. Saab
2003-01-01
We developed a field protocol to monitor populations of cavity-nesting birds in burned and unburned coniferous forests of western North America. Standardized field methods are described for implementing long-term monitoring strategies and for conducting field research to evaluate the effects of habitat change on cavity-nesting birds. Key references (but not...
In July 1997, EPA promulgated a new National Ambient Air Quality Standard (NAAQS) for fine particulate matter (PM2.5). This new standard was based on collection of an integrated mass sample on a filter. Field studies have demonstrated that the collection of semivolatile compoun...
Hsu, Shu-Hui; Kulasekere, Ravi; Roberson, Peter L
2010-08-05
Film calibration is time-consuming work when dose accuracy is essential while working in a range of photon scatter environments. This study uses the single-target single-hit model of film response to fit the calibration curves as a function of calibration method, processor condition, field size and depth. Kodak XV film was irradiated perpendicular to the beam axis in a solid water phantom. Standard calibration films (one dose point per film) were irradiated at 90 cm source-to-surface distance (SSD) for various doses (16-128 cGy), depths (0.2, 0.5, 1.5, 5, 10 cm) and field sizes (5 × 5, 10 × 10 and 20 × 20 cm²). The 8-field calibration method (eight dose points per film) was used as a reference for each experiment, taken at 95 cm SSD and 5 cm depth. The delivered doses were measured using an Attix parallel plate chamber for improved accuracy of dose estimation in the buildup region. Three fitting methods with one to three dose points per calibration curve were investigated for the field sizes of 5 × 5, 10 × 10 and 20 × 20 cm². The inter-day variations of the model parameters (background, saturation and slope) were 1.8%, 5.7%, and 7.7% (1σ) using the 8-field method. The saturation parameter ratio of standard to 8-field curves was 1.083 ± 0.005. The slope parameter ratio of standard to 8-field curves ranged from 0.99 to 1.05, depending on field size and depth. The slope parameter ratio decreases with increasing depth below 0.5 cm for the three field sizes, and increases with increasing depth above 0.5 cm. A calibration curve with one to three dose points fitted with the model is possible with 2% accuracy in film dosimetry for various irradiation conditions. The proposed fitting methods may reduce workload while providing energy dependence correction in radiographic film dosimetry. This study is limited to radiographic XV film with a Lumisys scanner.
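A sketch of fitting such a response, assuming the single-target single-hit curve takes the saturating-exponential form OD(D) = OD_bg + OD_sat(1 - exp(-kD)); parameter names and data values are illustrative, not the authors':

```python
import numpy as np
from scipy.optimize import curve_fit

def single_hit(dose, od_bg, od_sat, k):
    # Saturating response: background + saturation * (1 - exp(-slope * dose))
    return od_bg + od_sat * (1.0 - np.exp(-k * dose))

dose = np.array([16.0, 32.0, 64.0, 128.0])   # cGy, as in the study
od = np.array([0.21, 0.39, 0.68, 1.02])      # made-up optical densities
popt, _ = curve_fit(single_hit, dose, od, p0=[0.1, 1.5, 0.01])
```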
Magnetic Fields: Visible and Permanent.
ERIC Educational Resources Information Center
Winkeljohn, Dorothy R.; Earl, Robert D.
1983-01-01
Children will be able to see the concept of a magnetic field translated into a visible reality using the simple method outlined. Standard shelf paper, magnets, iron filings, and paint in a spray can are used to prepare a permanent and well-detailed picture of the magnetic field. (Author/JN)
NASA Astrophysics Data System (ADS)
Chen, Z.; Jones, C. M.
2002-05-01
Microchemistry of fish otoliths (fish ear bones) is a very useful tool for monitoring aquatic environments and fish migration. However, determination of the elemental composition in fish otolith by ICP-MS has been limited to either analysis of dissolved sample solution or measurement of a limited number of trace elements by laser ablation (LA)-ICP-MS due to low sensitivity, lack of available calibration standards, and complexity of polyatomic molecular interference. In this study, a method was developed for in situ determination of trace elements in fish otoliths by laser ablation double focusing sector field ultra high sensitivity Finnigan Element 2 ICP-MS using a solution standard addition calibration method. Due to the lack of matrix-matched solid calibration standards, sixteen trace elements (Na, Mg, P, Cr, Mn, Fe, Ni, Cu, Rb, Sr, Y, Cd, La, Ba, Pb and U) were determined using a solution standard calibration with Ca as an internal standard. Flexibility, easy preparation and stable signals are the advantages of using solution calibration standards. In order to resolve polyatomic molecular interferences, medium resolution (M/ΔM > 4000) was used for some elements (Na, Mg, P, Cr, Mn, Fe, Ni, and Cu). Both external calibration and standard addition quantification strategies are compared and discussed. Precision, accuracy, and limits of detection are presented.
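The standard-addition arithmetic reduces to a linear fit whose x-intercept magnitude is the unknown concentration; a minimal sketch with made-up numbers:

```python
import numpy as np

added = np.array([0.0, 5.0, 10.0, 20.0])     # spiked concentration (ng/g)
signal = np.array([1.10, 1.62, 2.15, 3.18])  # analyte / Ca internal-standard ratio

slope, intercept = np.polyfit(added, signal, 1)
c_unknown = intercept / slope                # |x-intercept| of the fit line
print(f"estimated concentration: {c_unknown:.2f} ng/g")
```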
Sauer, Vernon B.
2002-01-01
Surface-water computation methods and procedures are described in this report to provide standards from which a completely automated electronic processing system can be developed. To the greatest extent possible, the traditional U.S. Geological Survey (USGS) methodology and standards for streamflow data collection and analysis have been incorporated into these standards. Although USGS methodology and standards are the basis for this report, the report is applicable to other organizations doing similar work. The proposed electronic processing system allows field measurement data, including data stored on automatic field recording devices and data recorded by the field hydrographer (a person who collects streamflow and other surface-water data) in electronic field notebooks, to be input easily and automatically. A user of the electronic processing system can easily monitor the incoming data and verify and edit the data, if necessary. Input of the computational procedures, rating curves, shift requirements, and other special methods is an interactive process between the user and the electronic processing system, with much of this processing being automatic. Special computation procedures are provided for complex stations such as velocity-index, slope, control structures, and unsteady-flow models, such as the Branch-Network Dynamic Flow Model (BRANCH). Navigation paths are designed to lead the user through the computational steps for each type of gaging station (stage-only, stage-discharge, velocity-index, slope, rate-of-change in stage, reservoir, tide, structure, and hydraulic model stations). The proposed electronic processing system emphasizes the use of interactive graphics to provide good visual tools for unit values editing, rating curve and shift analysis, hydrograph comparisons, data-estimation procedures, data review, and other needs. Documentation, review, finalization, and publication of records are provided for with the electronic processing system, as well as archiving, quality assurance, and quality control.
Scattering of cylindrical electric field waves from an elliptical dielectric cylindrical shell
NASA Astrophysics Data System (ADS)
Urbanik, E. A.
1982-12-01
This thesis examines the scattering of cylindrical waves by large dielectric scatterers of elliptic cross section. The solution method was the method of moments with a Galerkin approach. Sinusoidal basis and testing functions were used, resulting in a higher convergence rate. The higher rate of convergence made it possible for the program to run on the Aeronautical Systems Division's CYBER computers without any special storage methods. This report includes discussion of moment methods, the solution of integral equations, and the relationship between the electric field and the source region or self-cell singularity. Since the program produced unacceptable run times, no results are contained herein. The importance of this work is the evaluation of the practicality of moment methods using standard techniques. The long run times for a mid-sized scatterer demonstrate the impracticality of moment methods for dielectrics using standard techniques.
Geotechnical Descriptions of Rock and Rock Masses.
1985-04-01
determined in the field on core specimens by the standard Rock Testing Handbook methods... to provide rock strength descriptions from the field. The point-load test has proven to be a reliable method of determining rock strength properties... the report should qualify the reported spacing values by stating the methods used to determine spacing. Preferably the report should make the determination
Chou, Ching-Yu; Ferrage, Fabien; Aubert, Guy; Sakellariou, Dimitris
2015-01-01
Standard Magnetic Resonance magnets produce a single homogeneous field volume, where the analysis is performed. Nonetheless, several modern applications could benefit from the generation of multiple homogeneous field volumes along the axis and inside the bore of the magnet. In this communication, we propose a straightforward method using a combination of ring structures of permanent magnets in order to cancel the gradient of the stray field in a series of distinct volumes. These concepts were demonstrated numerically on an experimentally measured magnetic field profile. We discuss advantages and limitations of our method and present the key steps required for an experimental validation. PMID:26182891
Field methods and data processing techniques associated with mapped inventory plots
William A. Bechtold; Stanley J. Zarnoch
1999-01-01
The U.S. Forest Inventory and Analysis (FIA) and Forest Health Monitoring (FHM) programs utilize a fixed-area mapped-plot design as the national standard for extensive forest inventories. The mapped-plot design is explained, as well as the rationale for its selection as the national standard. Ratio-of-means estimators are presented as a method to process data from...
Towards standardized assessment of endoscope optical performance: geometric distortion
NASA Astrophysics Data System (ADS)
Wang, Quanzeng; Desai, Viraj N.; Ngo, Ying Z.; Cheng, Wei-Chung; Pfefer, Joshua
2013-12-01
Technological advances in endoscopes, such as capsule, ultrathin and disposable devices, promise significant improvements in safety, clinical effectiveness and patient acceptance. Unfortunately, the industry lacks quantitative, objective and well-validated test methods for preclinical evaluation of key optical performance characteristics (OPCs) of endoscopic devices. As a result, it is difficult for researchers and developers to compare image quality and evaluate equivalence to, or improvement upon, prior technologies. While endoscope OPCs include resolution, field of view, and depth of field, among others, our focus in this paper is geometric image distortion. We reviewed specific test methods for distortion and then developed an objective, quantitative test method based on well-defined experimental and data processing steps to evaluate radial distortion in the full field of view of an endoscopic imaging system. Our measurements and analyses showed that a second-degree polynomial equation could well describe the radial distortion curve of a traditional endoscope. The distortion evaluation method was effective for correcting the image and can be used to explain other widely accepted evaluation methods such as picture height distortion. Development of consensus standards based on promising test methods for image quality assessment, such as the method studied here, will facilitate clinical implementation of innovative endoscopic devices.
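A sketch of the polynomial description suggested by those measurements: fit a second-degree polynomial mapping imaged radius to corrected radius from grid-target data, then evaluate local radial distortion. The data and the distortion definition are illustrative, since conventions vary among the reviewed methods.

```python
import numpy as np

# Illustrative grid-target data: true (undistorted) radial position
# vs. imaged radial position, in normalized units.
r_image = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5])
r_object = np.array([0.0, 0.104, 0.217, 0.345, 0.492, 0.664])

# A second-degree polynomial mapping imaged radius to corrected
# radius, as suggested by the measurements described above.
correct = np.poly1d(np.polyfit(r_image, r_object, 2))

# Local radial distortion (percent) at a given image radius.
r = 0.4
print(f"distortion at r={r}: {100.0 * (r - correct(r)) / correct(r):.1f}%")
```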
A standard telemental health evaluation model: the time is now.
Kramer, Greg M; Shore, Jay H; Mishkind, Matt C; Friedl, Karl E; Poropatich, Ronald K; Gahm, Gregory A
2012-05-01
The telehealth field has advanced historic promises to improve access, cost, and quality of care. However, the extent to which it is delivering on its promises is unclear as the scientific evidence needed to justify success is still emerging. Many have identified the need to advance the scientific knowledge base to better quantify success. One method for advancing that knowledge base is a standard telemental health evaluation model. Telemental health is defined here as the provision of mental health services using live, interactive video-teleconferencing technology. Evaluation in the telemental health field largely consists of descriptive and small pilot studies, is often defined by the individual goals of the specific programs, and is typically focused on only one outcome. The field should adopt new evaluation methods that consider the co-adaptive interaction between users (patients and providers), healthcare costs and savings, and the rapid evolution in communication technologies. Acceptance of a standard evaluation model will improve perceptions of telemental health as an established field, promote development of a sounder empirical base, promote interagency collaboration, and provide a framework for more multidisciplinary research that integrates measuring the impact of the technology and the overall healthcare aspect. We suggest that consideration of a standard model is timely given where telemental health is at in terms of its stage of scientific progress. We will broadly recommend some elements of what such a standard evaluation model might include for telemental health and suggest a way forward for adopting such a model.
Detecting glaucomatous change in visual fields: Analysis with an optimization framework.
Yousefi, Siamak; Goldbaum, Michael H; Varnousfaderani, Ehsan S; Belghith, Akram; Jung, Tzyy-Ping; Medeiros, Felipe A; Zangwill, Linda M; Weinreb, Robert N; Liebmann, Jeffrey M; Girkin, Christopher A; Bowd, Christopher
2015-12-01
Detecting glaucomatous progression is an important aspect of glaucoma management. The assessment of longitudinal series of visual fields, measured using Standard Automated Perimetry (SAP), is considered the reference standard for this effort. We seek efficient techniques for determining progression from longitudinal visual fields by formulating the problem as an optimization framework, learned from a population of glaucoma data. The longitudinal data from each patient's eye were used in a convex optimization framework to find a vector that is representative of the progression direction of the sample population, as a whole. Post-hoc analysis of longitudinal visual fields across the derived vector led to optimal progression (change) detection. The proposed method was compared to recently described progression detection methods and to linear regression of instrument-defined global indices, and showed slightly higher sensitivities at the highest specificities than other methods (a clinically desirable result). The proposed approach is simpler, faster, and more efficient for detecting glaucomatous changes, compared to our previously proposed machine learning-based methods, although it provides somewhat less information. This approach has potential application in glaucoma clinics for patient monitoring and in research centers for classification of study participants. Copyright © 2015 Elsevier Inc. All rights reserved.
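The convex optimization step itself is not reproduced here, but a sketch of the post-hoc detection is straightforward: assuming a population-derived progression direction is available, each visit's visual field is projected onto it and the projections are regressed against time. Dimensions, data, and the significance threshold are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Stand-in for the population-derived progression direction (the
# paper learns this by convex optimization; here it is random).
direction = rng.normal(size=52)
direction /= np.linalg.norm(direction)

# One eye's longitudinal SAP visual fields: one 52-point vector per
# visit, with visit times in years (all values illustrative).
years = np.array([0.0, 0.5, 1.1, 1.6, 2.2, 2.8])
fields = rng.normal(loc=28.0, scale=2.0, size=(6, 52))

# Project each visit onto the progression direction and regress the
# projections on time; a significantly negative slope flags change.
proj = fields @ direction
res = stats.linregress(years, proj)
print(f"slope={res.slope:.3f}/yr, p={res.pvalue:.3f}, "
      f"progressing={res.slope < 0 and res.pvalue < 0.05}")
```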
A method for removing arm backscatter from EPID images
DOE Office of Scientific and Technical Information (OSTI.GOV)
King, Brian W.; Greer, Peter B.; School of Mathematical and Physical Sciences, University of Newcastle, Newcastle, New South Wales 2308
2013-07-15
Purpose: To develop a method for removing the support arm backscatter from images acquired using current Varian electronic portal imaging devices (EPIDs). Methods: The effect of arm backscatter on EPID images was modeled using a kernel convolution method. The parameters of the model were optimized by comparing on-arm images to off-arm images. The model was used to develop a method to remove the effect of backscatter from measured EPID images. The performance of the backscatter removal method was tested by comparing backscatter-corrected on-arm images to measured off-arm images for 17 rectangular fields of different sizes and locations on the imager. The method was also tested using on- and off-arm images from 42 intensity modulated radiotherapy (IMRT) fields. Results: Images generated by the backscatter removal method gave consistently better agreement with off-arm images than images without backscatter correction. For the 17 rectangular fields studied, the root mean square difference of in-plane profiles compared to off-arm profiles was reduced from 1.19% (standard deviation 0.59%) on average without backscatter removal to 0.38% (standard deviation 0.18%) when using the backscatter removal method. When comparing to the off-arm images from the 42 IMRT fields, the mean γ and percentage of pixels with γ < 1 were improved by the backscatter removal method in all but one of the images studied. The mean γ value (1%, 1 mm) for the IMRT fields studied was reduced from 0.80 to 0.57 by using the backscatter removal method, while the mean γ pass rate was increased from 72.2% to 84.6%. Conclusions: A backscatter removal method has been developed to estimate the image acquired by the EPID without any arm backscatter from an image acquired in the presence of arm backscatter. The method has been shown to produce consistently reliable results for a wide range of field sizes and jaw configurations.
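A sketch of one way a kernel-convolution backscatter model can be inverted: if the measured image equals the true image plus a convolution of the true image with an arm-scatter kernel, a fixed-point iteration recovers the backscatter-free estimate. The additive model and the asymmetric kernel are assumptions for illustration, not the published fitted parameters.

```python
import numpy as np
from scipy.signal import fftconvolve

def remove_backscatter(image, kernel, n_iter=8):
    """Fixed-point estimate of the backscatter-free image, assuming
    measured = true + conv2d(true, kernel)."""
    estimate = image.copy()
    for _ in range(n_iter):
        estimate = image - fftconvolve(estimate, kernel, mode="same")
    return estimate

# Toy uniform field with a kernel that scatters signal asymmetrically
# along one in-plane (arm) direction.
image = np.zeros((256, 256))
image[64:192, 64:192] = 1.0
y = np.linspace(-1.0, 1.0, 31)[:, None]
kernel = np.clip(-y, 0.0, None) * np.ones((31, 31)) * 1e-4
corrected = remove_backscatter(image, kernel)
print("max correction:", np.max(image - corrected))
```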
Immersed boundary-simplified lattice Boltzmann method for incompressible viscous flows
NASA Astrophysics Data System (ADS)
Chen, Z.; Shu, C.; Tan, D.
2018-05-01
An immersed boundary-simplified lattice Boltzmann method is developed in this paper for simulations of two-dimensional incompressible viscous flows with immersed objects. Assisted by the fractional step technique, the problem is resolved in a predictor-corrector scheme. The predictor step solves the flow field without considering immersed objects, and the corrector step imposes the effect of immersed boundaries on the velocity field. Different from the previous immersed boundary-lattice Boltzmann method which adopts the standard lattice Boltzmann method (LBM) as the flow solver in the predictor step, a recently developed simplified lattice Boltzmann method (SLBM) is applied in the present method to evaluate intermediate flow variables. Compared to the standard LBM, SLBM requires lower virtual memories, facilitates the implementation of physical boundary conditions, and shows better numerical stability. The boundary condition-enforced immersed boundary method, which accurately ensures no-slip boundary conditions, is implemented as the boundary solver in the corrector step. Four typical numerical examples are presented to demonstrate the stability, the flexibility, and the accuracy of the present method.
DOT National Transportation Integrated Search
2015-11-01
Several national standards and specifications have been developed for design, installation, and materials for precast concrete pipe, corrugated metal pipe, and HDPE pipes. However, no nationally accepted installation standard or design method is ava...
Simulating the electrohydrodynamics of a viscous droplet
NASA Astrophysics Data System (ADS)
Theillard, Maxime; Saintillan, David
2016-11-01
We present a novel numerical approach for the simulation of a viscous drop placed in an electric field in two and three spatial dimensions. Our method is constructed as a stable projection method on Quad/Octree grids. Using a modified pressure correction, we were able to alleviate the standard time step restriction incurred by capillary forces. In weak electric fields, our results match remarkably well with the predictions of the Taylor-Melcher leaky dielectric model. In strong electric fields, the so-called Quincke rotation is correctly reproduced.
Ding, Huanjun; Johnson, Travis; Lin, Muqing; Le, Huy Q.; Ducote, Justin L.; Su, Min-Ying; Molloi, Sabee
2013-01-01
Purpose: Quantification of breast density based on three-dimensional breast MRI may provide useful information for the early detection of breast cancer. However, field inhomogeneity can severely challenge the computerized image segmentation process. In this work, the effect of the bias field on breast density quantification has been investigated with a postmortem study. Methods: T1-weighted images of 20 pairs of postmortem breasts were acquired on a 1.5 T breast MRI scanner. Two computer-assisted algorithms were used to quantify the volumetric breast density. First, standard fuzzy c-means (FCM) clustering was used on raw images with the bias field present. Then, the coherent local intensity clustering (CLIC) method estimated and corrected the bias field during the iterative tissue segmentation process. Finally, FCM clustering was performed on the bias-field-corrected images produced by the CLIC method. The left–right correlation for breasts in the same pair was studied for both segmentation algorithms to evaluate the precision of the tissue classification. Finally, the breast densities measured with the three methods were compared to the gold standard tissue compositions obtained from chemical analysis. The linear correlation coefficient, Pearson's r, was used to evaluate the two image segmentation algorithms and the effect of the bias field. Results: The CLIC method successfully corrected the intensity inhomogeneity induced by the bias field. In left–right comparisons, the CLIC method significantly improved the slope and the correlation coefficient of the linear fitting for the glandular volume estimation. The left–right breast density correlation was also increased from 0.93 to 0.98. When compared with the percent fibroglandular volume (%FGV) from chemical analysis, results after bias field correction from both the CLIC and the FCM algorithms showed improved linear correlation. As a result, the Pearson's r increased from 0.86 to 0.92 with the bias field correction. Conclusions: The investigated CLIC method significantly increased the precision and accuracy of breast density quantification using breast MRI images by effectively correcting the bias field. It is expected that a fully automated computerized algorithm for breast density quantification may have great potential in clinical MRI applications. PMID:24320536
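A minimal sketch of the first of the two compared algorithms, standard fuzzy c-means on voxel intensities (the bias-field-correcting CLIC step is not reproduced); the toy data and the assumption that the darker cluster is fibroglandular tissue are illustrative.

```python
import numpy as np

def fcm(x, c=2, m=2.0, n_iter=100, seed=0):
    """Standard fuzzy c-means on a 1D array of voxel intensities;
    returns cluster centers and the (n_voxels x c) membership matrix."""
    rng = np.random.default_rng(seed)
    u = rng.dirichlet(np.ones(c), size=x.size)
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)          # weighted means
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        w = d ** (-2.0 / (m - 1.0))
        u = w / w.sum(axis=1, keepdims=True)           # membership update
    return centers, u

# Toy bimodal intensity histogram standing in for fibroglandular
# tissue (dark) and fat (bright) on non-fat-suppressed T1 images.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.3, 0.05, 5000), rng.normal(0.8, 0.05, 3000)])
centers, u = fcm(x)
dense_fraction = u[:, np.argmin(centers)].mean()
print(f"centers={np.sort(centers)}, dense fraction~{dense_fraction:.2f}")
```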
Measuring and monitoring biological diversity: Standard methods for mammals
Wilson, Don E.; Cole, F. Russell; Nichols, James D.; Rudran, Rasanayagam; Foster, Mercedes S.
1996-01-01
Measuring and Monitoring Biological Diversity: Standard Methods for Mammals provides a comprehensive manual for designing and implementing inventories of mammalian biodiversity anywhere in the world and for any group, from rodents to open-country grazers. The book emphasizes formal estimation approaches, which supply data that can be compared across habitats and over time. Beginning with brief natural histories of the twenty-six orders of living mammals, the book details the field techniques—observation, capture, and sign interpretation—appropriate to different species. The contributors provide guidelines for study design, discuss survey planning, describe statistical techniques, and outline methods of translating field data into electronic formats. Extensive appendixes address such issues as the ethical treatment of animals in research, human health concerns, preserving voucher specimens, and assessing age, sex, and reproductive condition in mammals. Useful in both developed and developing countries, this volume and the Biological Diversity Handbook Series as a whole establish essential standards for a key aspect of conservation biology and resource management.
Statistical analysis of loopy belief propagation in random fields
NASA Astrophysics Data System (ADS)
Yasuda, Muneki; Kataoka, Shun; Tanaka, Kazuyuki
2015-10-01
Loopy belief propagation (LBP), which is equivalent to the Bethe approximation in statistical mechanics, is a message-passing-type inference method that is widely used to analyze systems based on Markov random fields (MRFs). In this paper, we propose a message-passing-type method to analytically evaluate the quenched average of LBP in random fields by using the replica cluster variation method. The proposed analytical method is applicable to general pairwise MRFs with random fields whose distributions differ from each other and can give the quenched averages of the Bethe free energies over random fields, which are consistent with numerical results. The order of its computational cost is equivalent to that of standard LBP. In the latter part of this paper, we describe the application of the proposed method to Bayesian image restoration, in which we observed that our theoretical results are in good agreement with the numerical results for natural images.
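A minimal sketch of the standard LBP baseline whose quenched average the paper evaluates: synchronous message passing on a small pairwise Ising MRF with site-dependent random fields. Lattice size, coupling, and field distribution are illustrative.

```python
import itertools
import numpy as np

# Pairwise Ising MRF on an L x L grid with spins s in {-1, +1},
# uniform coupling J, and site-dependent random fields h[i, j].
L, J, beta = 4, 0.3, 1.0
rng = np.random.default_rng(0)
h = rng.normal(0.0, 0.5, size=(L, L))
spin = np.array([-1.0, 1.0])

def neighbors(i, j):
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= i + di < L and 0 <= j + dj < L:
            yield (i + di, j + dj)

sites = list(itertools.product(range(L), range(L)))
msgs = {(a, b): np.full(2, 0.5) for a in sites for b in neighbors(*a)}
pair = np.exp(beta * J * np.outer(spin, spin))  # pairwise factor

for _ in range(100):  # synchronous message updates
    new = {}
    for (a, b) in msgs:
        prod = np.exp(beta * h[a] * spin)  # local-field factor at a
        for c in neighbors(*a):
            if c != b:
                prod = prod * msgs[(c, a)]
        m = pair.T @ prod  # sum over s_a
        new[(a, b)] = m / m.sum()
    msgs = new

# Beliefs (approximate single-site marginals) and magnetizations.
belief = np.empty((L, L, 2))
for a in sites:
    bel = np.exp(beta * h[a] * spin)
    for c in neighbors(*a):
        bel = bel * msgs[(c, a)]
    belief[a] = bel / bel.sum()
print("mean magnetization:", (belief @ spin).mean())
```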
This data set contains the method performance results. This includes field blanks, method blanks, duplicate samples, analytical duplicates, matrix spikes, and surrogate recovery standards.
The Children’s Total Exposure to Persistent Pesticides and Other Persistent Pollutant (...
A Novel Field Deployable Point-of-Care Diagnostic Test for Cutaneous Leishmaniasis
2015-10-01
include localized cutaneous leishmaniasis (LCL), and destructive nasal and oropharyngeal lesions of mucosal leishmaniasis (ML). LCL in the New World... the high costs, personnel training and need of sophisticated equipment. Therefore, novel methods to detect leishmaniasis at the POC are urgently needed... To date, there is no field-standardized molecular method based on DNA amplification coupled with Lateral Flow reading to detect leishmaniasis
A standard for measuring metadata quality in spectral libraries
NASA Astrophysics Data System (ADS)
Rasaiah, B.; Jones, S. D.; Bellman, C.
2013-12-01
There is an urgent need within the international remote sensing community to establish a metadata standard for field spectroscopy that ensures high quality, interoperable metadata sets that can be archived and shared efficiently within Earth observation data sharing systems. Metadata are an important component in the cataloguing and analysis of in situ spectroscopy datasets because of their central role in identifying and quantifying the quality and reliability of spectral data and the products derived from them. This paper presents approaches to measuring metadata completeness and quality in spectral libraries to determine the reliability, interoperability, and re-usability of a dataset. Explored are quality parameters that meet the unique requirements of in situ spectroscopy datasets across many campaigns. Examined are the challenges of ensuring that data creators, owners, and data users maintain a high level of data integrity throughout the lifecycle of a dataset. Issues such as field measurement methods, instrument calibration, and data representativeness are investigated. The proposed metadata standard incorporates expert recommendations that include metadata protocols critical to all campaigns, and those that are restricted to campaigns for specific target measurements. The implications of semantics and syntax for a robust and flexible metadata standard are also considered. Approaches towards an operational and logistically viable implementation of a quality standard are discussed. This paper also proposes a way forward for adapting and enhancing current geospatial metadata standards to the unique requirements of field spectroscopy metadata quality.
Off disk-center potential field calculations using vector magnetograms
NASA Technical Reports Server (NTRS)
Venkatakrishnan, P.; Gary, G. Allen
1989-01-01
A potential field calculation for off disk-center vector magnetograms that uses all three components of the measured field is investigated. There is no need either for interpolation of grid points between the image plane and the heliographic plane or for an extension or truncation to a heliographic rectangle. Hence, the method provides the maximum information content from the photospheric field as well as the most consistent potential field independent of the viewing angle. The introduction of polarimetric noise produces a less tolerant extrapolation procedure than using the line-of-sight extrapolation, but the resultant standard deviation is still small enough for the practical utility of this method.
Standards for Cell Line Authentication and Beyond
Cole, Kenneth D.; Plant, Anne L.
2016-01-01
Different genomic technologies have been applied to cell line authentication, but only one method (short tandem repeat [STR] profiling) has been the subject of a comprehensive and definitive standard (ASN-0002). Here we discuss the power of this document and why standards such as this are so critical for establishing the consensus technical criteria and practices that can enable progress in the fields of research that use cell lines. We also examine other methods that could be used for authentication and discuss how a combination of methods could be used in a holistic fashion to assess various critical aspects of the quality of cell lines. PMID:27300367
ERIC Educational Resources Information Center
Cramer, Stephen E.
A standard-setting procedure was developed for the Georgia Teacher Certification Testing Program as tests in 30 teaching fields were revised. A list of important characteristics of a standard-setting procedure was derived, drawing on the work of R. A. Berk (1986). The best method was found to be a highly formalized judgmental, empirical Angoff…
A need for standardization in visual acuity measurement.
Patel, Hina; Congdon, Nathan; Strauss, Glenn; Lansingh, Charles
2017-01-01
Standardization of terminologies and methods is increasingly important in all fields including ophthalmology, especially currently when research and new technology are rapidly driving improvements in medicine. This review highlights the range of notations used by vision care professionals around the world for vision measurement, and the challenges resulting from this practice. The global community is urged to move toward a uniform standard.
Time and frequency technology at NIST
NASA Technical Reports Server (NTRS)
Sullivan, D. B.
1994-01-01
The state of development of advanced timing systems at NIST is described. The work on cesium and rubidium frequency standards, stored-ion frequency standards, diode lasers used to pump such standards, time transfer, and methods for characterizing clocks, oscillators, and time distribution systems is presented. The emphasis is on NIST-developed technology rather than the general state of the art in this field.
NASA Astrophysics Data System (ADS)
Iwahashi, Masahiro; Gomez-Tames, Jose; Laakso, Ilkka; Hirata, Akimasa
2017-03-01
This study proposes a method to evaluate the electric field induced in the brain by transcranial magnetic stimulation (TMS) to realize focal stimulation in the target area considering the inter-subject difference of the brain anatomy. The TMS is a non-invasive technique used for treatment/diagnosis, and it works by inducing an electric field in a specific area of the brain via a coil-induced magnetic field. Recent studies that report on the electric field distribution in the brain induced by TMS coils have been limited to simplified human brain models or a small number of detailed human brain models. Until now, no method has been developed that appropriately evaluates the coil performance for a group of subjects. In this study, we first compare the magnetic field and the magnetic vector potential distributions to determine if they can be used as predictors of the TMS focality derived from the electric field distribution. Next, the hotspots of the electric field on the brain surface of ten subjects using six coils are compared. Further, decisive physical factors affecting the focality of the induced electric field by different coils are discussed by registering the computed electric field in a standard brain space for the first time, so as to evaluate coil characteristics for a large population of subjects. The computational results suggest that the induced electric field in the target area cannot be generalized without considering the morphological variability of the human brain. Moreover, there was no remarkable difference between the various coils, although focality could be improved to a certain extent by modifying the coil design (e.g., coil radius). Finally, the focality estimated by the electric field was more correlated with the magnetic vector potential than the magnetic field in a homogeneous sphere.
Fluctuating local field method probed for a description of small classical correlated lattices
NASA Astrophysics Data System (ADS)
Rubtsov, Alexey N.
2018-05-01
Thermally equilibrated finite classical lattices are considered as a minimal model of systems showing an interplay between low-energy collective fluctuations and single-site degrees of freedom. Neither the standard local-field approach nor the classical limit of the bosonic DMFT method provides a satisfactory description of small Ising and Heisenberg lattices subjected to an external polarizing field. We show that a dramatic improvement can be achieved within a simple approach, in which the local field appears as a fluctuating quantity related to the low-energy degree(s) of freedom.
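For contrast, a sketch of the standard (static) local-field self-consistency for a small Ising chain in a polarizing field, i.e. the baseline that the paper improves on; the fluctuating-local-field machinery itself is not reproduced, and all parameters are illustrative.

```python
import numpy as np

# Self-consistent iteration m_i = tanh(beta * (h + J * sum_j m_j))
# on a periodic chain of N spins; parameters are illustrative.
N, J, h, beta = 8, 1.0, 0.2, 1.0
m = np.zeros(N)
for _ in range(500):
    local = h + J * (np.roll(m, 1) + np.roll(m, -1))  # static local field
    m = np.tanh(beta * local)
print("site magnetizations:", np.round(m, 3))
```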
NASA Technical Reports Server (NTRS)
Kaljevic, Igor; Patnaik, Surya N.; Hopkins, Dale A.
1996-01-01
The Integrated Force Method has been developed in recent years for the analysis of structural mechanics problems. This method treats all independent internal forces as unknown variables that can be calculated by simultaneously imposing equations of equilibrium and compatibility conditions. In this paper a finite element library for analyzing two-dimensional problems by the Integrated Force Method is presented. Triangular- and quadrilateral-shaped elements capable of modeling arbitrary domain configurations are presented. The element equilibrium and flexibility matrices are derived by discretizing the expressions for potential and complementary energies, respectively. The displacement and stress fields within the finite elements are independently approximated. The displacement field is interpolated as it is in the standard displacement method, and the stress field is approximated by using complete polynomials of the correct order. A procedure that uses the definitions of stress components in terms of an Airy stress function is developed to derive the stress interpolation polynomials. Such derived stress fields identically satisfy the equations of equilibrium. Moreover, the resulting element matrices are insensitive to the orientation of local coordinate systems. A method is devised to calculate the number of rigid body modes, and the present elements are shown to be free of spurious zero-energy modes. A number of example problems are solved by using the present library, and the results are compared with corresponding analytical solutions and with results from the standard displacement finite element method. The Integrated Force Method not only gives results that agree well with analytical and displacement method results but also outperforms the displacement method in stress calculations.
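A property central to the stress interpolation above can be checked symbolically: stresses derived from any polynomial Airy stress function satisfy the 2D equilibrium equations identically. A short SymPy sketch with generic coefficients and no body forces:

```python
import sympy as sp

x, y = sp.symbols("x y")
a = sp.symbols("a0:7")  # symbolic polynomial coefficients

# A generic polynomial Airy stress function (quadratic + cubic terms).
phi = (a[0]*x**3 + a[1]*x**2*y + a[2]*x*y**2 + a[3]*y**3
       + a[4]*x**2 + a[5]*x*y + a[6]*y**2)

# Stress components defined from the Airy stress function.
sxx = sp.diff(phi, y, 2)
syy = sp.diff(phi, x, 2)
sxy = -sp.diff(phi, x, y)

# Both 2D equilibrium equations vanish identically, for any
# choice of the coefficients.
print(sp.simplify(sp.diff(sxx, x) + sp.diff(sxy, y)))  # 0
print(sp.simplify(sp.diff(sxy, x) + sp.diff(syy, y)))  # 0
```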
This data set contains the method performance results for CTEPP-OH. This includes field blanks, method blanks, duplicate samples, analytical duplicates, matrix spikes, and surrogate recovery standards.
The Children’s Total Exposure to Persistent Pesticides and Other Persisten...
Determination of traces of cobalt in soils: A field method
Almond, H.
1953-01-01
The growing use of geochemical prospecting methods in the search for ore deposits has led to the development of a field method for the determination of cobalt in soils. The determination is based on the fact that cobalt reacts with 2-nitroso-1-naphthol to yield a pink compound that is soluble in carbon tetrachloride. The carbon tetrachloride extract is shaken with dilute cyanide to complex interfering elements and to remove excess reagent. The cobalt content is estimated by comparing the pink color in the carbon tetrachloride with a standard series prepared from standard solutions. The cobalt 2-nitroso-1-naphtholate system in carbon tetrachloride follows Beer's law. As little as 1 p.p.m. can be determined in a 0.1-gram sample. The method is simple and fast and requires only simple equipment. More than 40 samples can be analyzed per man-day with an accuracy within 30% or better.
MULTI-SITE FIELD EVALUATION OF CANDIDATE SAMPLERS FOR MEASURING COARSE-MODE PM
In response to expected changes to the National Ambient Air Quality Standards for particulate matter, comprehensive field studies were conducted to evaluate the performance of sampling methods for measuring coarse mode aerosols (i.e. PMc). Five separate PMc sampling approaches w...
Do we really need in-situ bioassays?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salazar, M.H.; Salazar, S.M.
1995-12-31
In-situ bioassays are needed to validate the results from laboratory testing and to understand biological interactions. Standard laboratory protocols provide reproducible test results, and the precision of those tests can be mathematically defined. Significant correlations between toxic substances and levels of response (bioaccumulation and bioeffects) have also been demonstrated with natural field populations and suggest that laboratory results can accurately predict field responses. An equal number of studies have shown a lack of correlation between laboratory bioassay results and responses of natural field populations. The best way to validate laboratory results is with manipulative field testing; i.e., in-situ bioassays with caged organisms. Bioaccumulation in transplanted bivalves has probably been the most frequently used form of an in-situ bioassay. The authors have refined those methods to include synoptic measurements of bioaccumulation and growth. Growth provides an easily-measured bioeffects endpoint and a means of calibrating bioaccumulation. Emphasis has been on minimizing the size range of test animals, repetitive measurements of individuals, and standardization of test protocols for a variety of applications. They are now attempting to standardize criteria for accepting and interpreting data in the same way that laboratory bioassays have been standardized. Others have developed methods for in-situ bioassays using eggs, larvae, unicellular organisms, crustaceans, benthic invertebrates, bivalves, and fish. In the final analysis, the in-situ approach could be considered as an exposure system where any clinical measurements are possible. The most powerful approach would be to use the same species in laboratory and field experiments with the same endpoints.
NASA Astrophysics Data System (ADS)
Liu, G.; Wu, C.; Li, X.; Song, P.
2013-12-01
The 3D urban geological information system has been a major part of the national urban geological survey project of the China Geological Survey in recent years. Large amounts of multi-source and multi-subject data are to be stored in urban geological databases. Various models and vocabularies have been drafted and applied by industrial companies for urban geological data. Issues such as duplicate and ambiguous definitions of terms and differing coding structures increase the difficulty of information sharing and data integration. To solve this problem, we proposed a national standard-driven information classification and coding method to effectively store and integrate urban geological data, and we applied data dictionary technology to achieve structured and standard data storage. The overall purpose of this work is to set up a common data platform to provide an information sharing service. Research progress is as follows: (1) A unified classification and coding method for multi-source data based on national standards. Underlying national standards include GB 9649-88 for geology and GB/T 13923-2006 for geography. Current industrial models are compared with national standards to build a mapping table. The attributes of various urban geological data entity models are reduced to several categories according to their application phases and domains. Then a logical data model is set up as a standard format to design data file structures for a relational database. (2) A multi-level data dictionary for data standardization constraint. Three levels of data dictionary are designed: the model data dictionary is used to manage system database files and enhance maintenance of the whole database system; the attribute dictionary organizes fields used in database tables; the term and code dictionary provides a standard for the urban information system by adopting appropriate classification and coding methods; a comprehensive data dictionary manages system operation and security. (3) An extension to the system data management function based on the data dictionary. The data item constraint input function makes use of the standard term and code dictionary to produce standard input results. The attribute dictionary organizes all the fields of an urban geological information database to ensure the consistency of term use for fields. The model dictionary is used to generate a database operation interface automatically, with standard semantic content via the term and code dictionary. The above method and technology have been applied to the construction of the Fuzhou Urban Geological Information System, South-East China, with satisfactory results.
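As a minimal illustration of the term-and-code dictionary idea, the sketch below constrains data entry to a standard vocabulary and resolves legacy industry terms through a synonym map; all codes, terms, and synonyms are invented placeholders rather than entries from GB 9649-88 or the Fuzhou system.

```python
# Term-and-code dictionary used to constrain data entry: each
# standard term carries a hierarchical code, and input is rejected
# unless it maps to a standard term. All entries are placeholders.
term_code = {
    "A01.02": "silty clay",
    "A01.03": "fine sand",
    "B02.01": "granite",
}
synonyms = {"silt clay": "A01.02"}  # maps legacy/industry terms

def standardize(raw_term: str) -> tuple[str, str]:
    """Return (code, standard term) for an input term, resolving
    synonyms; raise if the term is not in the dictionary."""
    for code, term in term_code.items():
        if term == raw_term:
            return code, term
    if raw_term in synonyms:
        code = synonyms[raw_term]
        return code, term_code[code]
    raise KeyError(f"non-standard term: {raw_term!r}")

print(standardize("silt clay"))  # ('A01.02', 'silty clay')
```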
A component compensation method for magnetic interferential field
NASA Astrophysics Data System (ADS)
Zhang, Qi; Wan, Chengbiao; Pan, Mengchun; Liu, Zhongyan; Sun, Xiaoyong
2017-04-01
A new component searching with scalar restriction method (CSSRM) is proposed for magnetometers to compensate the magnetic interferential field caused by ferromagnetic material of the platform and to improve measurement performance. In CSSRM, the objective function for parameter estimation minimizes the difference in magnetic field (components and magnitude) between measured and reference values. Two scalar compensation methods are compared with CSSRM, and the simulation results indicate that CSSRM can estimate all interferential parameters and the external magnetic field vector with high accuracy. The magnetic field magnitude and components, compensated with CSSRM, coincide with the true values very well. An experiment was carried out for a tri-axial fluxgate magnetometer mounted in a measurement system together with inertial sensors. After compensation, the error standard deviations of both the magnetic field components and the magnitude were reduced from thousands of nT to less than 20 nT. This suggests that CSSRM provides an effective way to improve the performance of magnetic interferential field compensation.
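The full CSSRM estimates the complete interferential parameter set; as a simplified illustration of the scalar-restriction idea, the sketch below recovers only a constant (hard-iron) offset by requiring the magnitude of the compensated field to be constant during rotation, which reduces to a linear least-squares problem. All values are synthetic.

```python
import numpy as np

def hard_iron_offset(b):
    """Estimate a constant (hard-iron) offset from tri-axial field
    samples b (N x 3), assuming the true field magnitude is constant
    while the platform rotates. From |b_i - o|^2 = r^2 it follows
    that 2 b_i . o + (r^2 - |o|^2) = |b_i|^2, linear in o and
    k = r^2 - |o|^2."""
    A = np.hstack([2.0 * b, np.ones((b.shape[0], 1))])
    y = np.sum(b**2, axis=1)
    sol, *_ = np.linalg.lstsq(A, y, rcond=None)
    offset, k = sol[:3], sol[3]
    radius = np.sqrt(k + offset @ offset)
    return offset, radius

# Synthetic test: rotate a 48000 nT field, add a fixed offset.
rng = np.random.default_rng(0)
v = rng.normal(size=(500, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)
b = 48000.0 * v + np.array([1500.0, -800.0, 300.0])
print(hard_iron_offset(b))  # recovers the offset and field magnitude
```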
NASA Technical Reports Server (NTRS)
Larkin, Paul; Goldstein, Bob
2008-01-01
This paper presents an update to the methods and procedures used in Direct Field Acoustic Testing (DFAT). The paper will discuss some of the recent techniques and developments that are currently being used and the future publication of a reference standard. Acoustic testing using commercial sound system components is becoming a popular and cost effective way of generating a required acoustic test environment both in and out of a reverberant chamber. This paper will present the DFAT test method, the usual setup and procedure and the development and use of a closed-loop, narrow-band control system. Narrow-band control of the acoustic PSD allows all standard techniques and procedures currently used in random control to be applied to acoustics and some examples are given. The paper will conclude with a summary of the development of a standard practice guideline that is hoped to be available in the first quarter of next year.
Monajjemzadeh, Farnaz; Shokri, Javad; Mohajel Nayebi, Ali Reza; Nemati, Mahboob; Azarmi, Yadollah; Charkhpour, Mohammad; Najafi, Moslem
2014-01-01
Purpose: This study aimed to design an Objective Structured Field Examination (OSFE) and to standardize the course plan of the community pharmacy clerkship at the Pharmacy Faculty of Tabriz University of Medical Sciences (Iran). Methods: The study was composed of several stages: evaluation of the old program; standardization and implementation of the new course plan; design and implementation of the OSFE; and finally evaluation of the results. Results: The lack of a fair final assessment protocol and of a properly organized teaching system in various fields of community pharmacy clerkship skills were identified as the main weaknesses of the old program. Educational priorities were determined and students' feedback was assessed to design the new curriculum, consisting of sessions to fulfill a 60-hour training course. More than 70% of the students were satisfied, and the success and efficiency of the new clerkship program were significantly greater than those of the old program (P<0.05). In addition, the students believed that OSFE was a suitable testing method. Conclusion: The defined course plan successfully improved different skills of the students, and OSFE was concluded to be a proper performance-based assessment method. This is easily adoptable by pharmacy faculties to improve the educational outcomes of the clerkship course. PMID:24511477
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kwok, A.G.
This paper examines the comfort criteria of ANSI/ASHRAE Standard 55-1992 for their applicability in tropical classrooms. A field study conducted in Hawaii used a variety of methods to collect the data: survey questionnaires, physical measurements, interviews, and behavioral observations. A total of 3,544 students and teachers completed questionnaires in 29 naturally ventilated and air-conditioned classrooms in six schools during two seasons. The majority of classrooms failed to meet the physical specifications of the Standard 55 comfort zone. Thermal neutrality, preference, and acceptability results are compared with other field studies and the Standard 55 criteria. Acceptability votes by occupants of both naturally ventilated and air-conditioned classrooms exceeded the standard's 80% acceptability criterion, regardless of whether physical conditions were in or out of the comfort zone. Responses from these two school populations suggest not only a basis for separate comfort standards but also energy conservation opportunities through raising thermostat set points.
Standards for plant synthetic biology: a common syntax for exchange of DNA parts.
Patron, Nicola J; Orzaez, Diego; Marillonnet, Sylvestre; Warzecha, Heribert; Matthewman, Colette; Youles, Mark; Raitskin, Oleg; Leveau, Aymeric; Farré, Gemma; Rogers, Christian; Smith, Alison; Hibberd, Julian; Webb, Alex A R; Locke, James; Schornack, Sebastian; Ajioka, Jim; Baulcombe, David C; Zipfel, Cyril; Kamoun, Sophien; Jones, Jonathan D G; Kuhn, Hannah; Robatzek, Silke; Van Esse, H Peter; Sanders, Dale; Oldroyd, Giles; Martin, Cathie; Field, Rob; O'Connor, Sarah; Fox, Samantha; Wulff, Brande; Miller, Ben; Breakspear, Andy; Radhakrishnan, Guru; Delaux, Pierre-Marc; Loqué, Dominique; Granell, Antonio; Tissier, Alain; Shih, Patrick; Brutnell, Thomas P; Quick, W Paul; Rischer, Heiko; Fraser, Paul D; Aharoni, Asaph; Raines, Christine; South, Paul F; Ané, Jean-Michel; Hamberger, Björn R; Langdale, Jane; Stougaard, Jens; Bouwmeester, Harro; Udvardi, Michael; Murray, James A H; Ntoukakis, Vardis; Schäfer, Patrick; Denby, Katherine; Edwards, Keith J; Osbourn, Anne; Haseloff, Jim
2015-10-01
Inventors in the field of mechanical and electronic engineering can access multitudes of components and, thanks to standardization, parts from different manufacturers can be used in combination with each other. The introduction of BioBrick standards for the assembly of characterized DNA sequences was a landmark in microbial engineering, shaping the field of synthetic biology. Here, we describe a standard for Type IIS restriction endonuclease-mediated assembly, defining a common syntax of 12 fusion sites to enable the facile assembly of eukaryotic transcriptional units. This standard has been developed and agreed by representatives and leaders of the international plant science and synthetic biology communities, including inventors, developers and adopters of Type IIS cloning methods. Our vision is of an extensive catalogue of standardized, characterized DNA parts that will accelerate plant bioengineering. © 2015 The Authors. New Phytologist © 2015 New Phytologist Trust.
Peng, Henry T; Savage, Erin; Vartanian, Oshin; Smith, Shane; Rhind, Shawn G; Tenn, Catherine; Bjamason, Stephen
2016-05-01
A convenient biosensor for real-time measurement of biomarkers for in-field psychophysiological stress research and military operations is desirable. We evaluated a hand-held device for measuring salivary amylase as a stress marker in medical technicians undergoing combat casualty care training using two different modalities in operating room and field settings. Salivary amylase activity was measured by two biosensor methods: directly sampling saliva with a test strip placed under the tongue or pipetting a fixed volume of precollected saliva onto the test strip, followed by analyzing the sample on the strip using a biosensor. The two methods were compared for their accuracy and sensitivity to detect the stress response using an enzyme assay method as a standard. The measurements from the under-the-tongue method were not as consistent with those from the standard assay method as the values obtained from the pipetting method. The under-the-tongue method did not detect any significant increase in the amylase activity due to stress in the operating room (P > 0.1), in contrast to the significant increases observed using the pipetting method and assay method with a significance level less than 0.05 and 0.1, respectively. Furthermore, the under-the-tongue method showed no increased amylase activity in the field testing, while both the pipetting method and assay method showed increased amylase activity in the same group (P < 0.1). The accuracy and consistency of the biosensors need to be improved when used to directly measure salivary amylase activity under the tongue for stress assessment in military medical training. © 2015 Her Majesty the Queen in Right of Canada. Journal of Clinical Laboratory Analysis published by Wiley Periodicals, Inc. Reproduced with the permission DRDC Editorial Board.
Information from the previously approved extended abstract: A standardized area source measurement method based on mobile tracer correlation was used for methane emissions assessment in 52 field deployments...
Evaluation of Field-deployed Low Cost PM Sensors
Background Particulate matter (PM) is a pollutant of high public interest regulated by national ambient air quality standards (NAAQS) using federal reference method (FRM) and federal equivalent method (FEM) instrumentation identified for environmental monitoring. PM is present i...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stow, Sarah M.; Causon, Tim J.; Zheng, Xueyun
Collision cross section (CCS) measurements resulting from ion mobility-mass spectrometry (IM-MS) experiments provide a promising orthogonal dimension of structural information in MS-based analytical separations. As with any molecular identifier, interlaboratory standardization must precede broad-range integration into analytical workflows. In this study, we present a reference drift tube ion mobility mass spectrometer (DTIM-MS) where improvements in the measurement accuracy of experimental parameters influencing IM separations provide standardized drift tube, nitrogen CCS values (DTCCSN2) for over 120 unique ion species with the lowest measurement uncertainty to date. The reproducibility of these DTCCSN2 values is evaluated across three additional laboratories on a commercially available DTIM-MS instrument. The traditional stepped field CCS method performs with a relative standard deviation (RSD) of 0.29% for all ion species across the three additional laboratories. The calibrated single field CCS method, which is compatible with a wide range of chromatographic inlet systems, performs with an average absolute bias of 0.54% relative to the standardized stepped field DTCCSN2 values on the reference system. The low RSDs and biases observed in this interlaboratory study illustrate the potential of DTIM-MS for providing a molecular identifier for a broad range of discovery-based analyses.
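A sketch of the single-field calibration idea referred to above: reference CCS values of calibrant ions are regressed against measured arrival times, and the fit is applied to unknown ions. The linear relation and all numbers are illustrative assumptions, not certified reference values or the instrument's actual calibration function.

```python
import numpy as np

# Calibrant ions: measured arrival times (ms) and reference CCS
# values (square angstroms); all values invented for illustration.
t_cal = np.array([12.4, 16.9, 21.3, 25.8])
ccs_cal = np.array([121.3, 153.7, 186.0, 218.4])

slope, intercept = np.polyfit(t_cal, ccs_cal, 1)
t_unknown = 19.1
print(f"CCS estimate: {slope * t_unknown + intercept:.1f} A^2")
```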
49 CFR 325.25 - Calibration of measurement systems.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Standard Institute Standard Methods for Measurements of Sound Pressure Levels (ANSI S1.13-1971) for field... sound level measurement system must be calibrated and appropriately adjusted at one or more frequencies... 5-15 minutes thereafter, until it has been determined that the sound level measurement system has...
49 CFR 325.25 - Calibration of measurement systems.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Standard Institute Standard Methods for Measurements of Sound Pressure Levels (ANSI S1.13-1971) for field... sound level measurement system must be calibrated and appropriately adjusted at one or more frequencies... 5-15 minutes thereafter, until it has been determined that the sound level measurement system has...
77 FR 20217 - Secondary National Ambient Air Quality Standards for Oxides of Nitrogen and Sulfur
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-03
... Planning and Standards (OAQPS), U.S. Environmental Protection Agency, Mail Code C504-06, Research Triangle... of Research 3. Implementation Challenges 4. Monitoring Plan Development and Stakeholder Participation B. Summary of Proposed Evaluation of Monitoring Methods C. Comments on Field Pilot Program and...
Magnetostriction measurement by four probe method
NASA Astrophysics Data System (ADS)
Dange, S. N.; Radha, S.
2018-04-01
The present paper describes the design and setting up of an indigenously developed magnetostriction (MS) measurement setup using the four-probe method at room temperature. A standard strain gauge is pasted with a special glue on the sample, and its change in resistance with applied magnetic field is measured using a Keithley nanovoltmeter and current source. An electromagnet with field up to 1.2 tesla is used to source the magnetic field. The sample is placed between the magnet poles using a self-designed and developed wooden probe stand, capable of moving in three mutually perpendicular directions. The nanovoltmeter and current source are interfaced with a PC using an RS232 serial interface. Software has been developed for logging and processing of data. Proper optimization of the measurement has been done through software to reduce the noise due to thermal emf and electromagnetic induction. The data acquired for some standard magnetic samples are presented. The sensitivity of the setup is 1 microstrain, with an error in measurement up to 5%.
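The conversion from the measured resistance change to magnetostriction is a one-line calculation; the sketch below assumes a typical metallic-foil gauge factor of 2.0, and all values are illustrative.

```python
# Strain-gauge readout arithmetic: fractional resistance change
# divided by the gauge factor gives strain (magnetostriction).
GF = 2.0        # assumed gauge factor of the strain gauge
R0 = 120.0      # unstrained gauge resistance, ohms
dR = 0.00048    # measured resistance change at a given field, ohms

strain = (dR / R0) / GF
print(f"magnetostriction = {strain * 1e6:.1f} microstrain")
```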
Work toward a standardized version of a mobile tracer correlation measurement method is discussed. The method was used for assessment of methane emissions from 15 landfills in 56 field deployments from 2009 to 2013. This general area source measurement method uses advances in instrum...
[Biogeography: geography or biology?].
Kafanov, A I
2009-01-01
General biogeography is an interdisciplinary science, which combines geographic and biological aspects constituting two distinct research fields: biological geography and geographic biology. These fields differ in the nature of their objects of study, employ different methods and represent Earth sciences and biological sciences, respectively. It is suggested therefore that the classification codes for research fields and the state professional education standard should be revised.
Determination of wind from NIMBUS 6 satellite sounding data
NASA Technical Reports Server (NTRS)
Carle, W. E.; Scoggins, J. R.
1981-01-01
Objective methods of computing upper-level and surface wind fields from NIMBUS 6 satellite sounding data are developed. These methods are evaluated by comparing satellite-derived and rawinsonde wind fields on gridded constant-pressure charts in four geographical regions. Satellite-derived and hourly observed surface wind fields are compared. Results indicate that the best satellite-derived wind on constant-pressure charts is a geostrophic wind derived from highly smoothed fields of geopotential height. Satellite-derived winds computed in this manner and rawinsonde winds show similar circulation patterns except in areas of small height gradients. Magnitudes of the standard deviation of the differences between satellite-derived and rawinsonde wind speeds range from approximately 3 to 12 m/sec on constant-pressure charts and peak at the jet stream level. Fields of satellite-derived surface wind computed with the logarithmic wind law agree well with fields of observed surface wind in most regions. Magnitudes of the standard deviation of the differences in surface wind speed range from approximately 2 to 4 m/sec, and satellite-derived surface winds are able to depict flow across a cold front and around a low pressure center.
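A sketch of the geostrophic computation found to work best above, applied to a smoothed height field on a regular grid; the grid spacing, latitude, and height gradient are illustrative.

```python
import numpy as np

def geostrophic_wind(Z, lat_deg, dx, dy, g=9.81):
    """Geostrophic wind from a smoothed geopotential height field Z
    (meters) on a regular grid with spacings dx, dy (meters):
    u_g = -(g/f) dZ/dy,  v_g = (g/f) dZ/dx."""
    f = 2.0 * 7.292e-5 * np.sin(np.radians(lat_deg))  # Coriolis parameter
    dZdy, dZdx = np.gradient(Z, dy, dx)  # axis 0 is y, axis 1 is x
    return -(g / f) * dZdy, (g / f) * dZdx

# Toy 500-hPa height field decreasing toward the pole at 40 N,
# which should yield a purely westerly geostrophic wind.
y, x = np.mgrid[0:10, 0:10]
Z = 5700.0 - 30.0 * y
u, v = geostrophic_wind(Z, lat_deg=40.0, dx=100e3, dy=100e3)
print(f"u = {u.mean():.1f} m/s, v = {v.mean():.1f} m/s")
```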
A loop-gap resonator for chirality-sensitive nuclear magneto-electric resonance (NMER)
NASA Astrophysics Data System (ADS)
Garbacz, Piotr; Fischer, Peer; Krämer, Steffen
2016-09-01
Direct detection of molecular chirality is practically impossible by methods of standard nuclear magnetic resonance (NMR), which are based on interactions involving magnetic-dipole and magnetic-field operators. However, theoretical studies provide a possible direct probe of chirality by exploiting an enantiomer-selective additional coupling involving magnetic-dipole, magnetic-field, and electric-field operators. This offers a way for direct experimental detection of chirality by nuclear magneto-electric resonance (NMER). This method uses both resonant magnetic and electric radiofrequency (RF) fields. The weakness of the chiral interaction, though, requires a large electric RF field and a small transverse RF magnetic field over the sample volume, which is a non-trivial constraint. Here we present a detailed analysis of the NMER concept and a possible experimental realization based on a loop-gap resonator. For this original device, the basic principle and numerical studies as well as fabrication and measurements of the frequency dependence of the scattering parameter are reported. By simulating the NMER spin dynamics for our device and taking the 19F NMER signal of enantiomer-pure 1,1,1-trifluoropropan-2-ol, we predict a chirality-induced NMER signal that accounts for 1%-5% of the standard achiral NMR signal.
Effective-field renormalization-group method for Ising systems
NASA Astrophysics Data System (ADS)
Fittipaldi, I. P.; De Albuquerque, D. F.
1992-02-01
A new, widely applicable effective-field renormalization-group (EFRG) scheme for computing critical properties of Ising spin systems is proposed and used to study the phase diagrams of a quenched bond-mixed spin Ising model on square and Kagomé lattices. The present EFRG approach yields results which improve substantially on those obtained from the standard mean-field renormalization-group (MFRG) method. In particular, it is shown that the EFRG scheme correctly distinguishes the geometry of the lattice structure even when working with the smallest possible clusters, namely N'=1 and N=2.
Bringing Standardized Processes in Atom-Probe Tomography: I Establishing Standardized Terminology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Ian M; Danoix, F; Forbes, Richard
2011-01-01
Defining standardized methods requires careful consideration of the entire field and its applications. The International Field Emission Society (IFES) has elected a Standards Committee, whose task is to determine the steps needed to establish atom-probe tomography as an accepted metrology technique. Specific tasks include developing protocols or standards for: terminology and nomenclature; metrology and instrumentation, including specifications for reference materials; test methodologies; modeling and simulations; and science-based health, safety, and environmental practices. The Committee is currently working on defining terminology related to atom-probe tomography, with the goal of including terms in a document published by the International Organization for Standardization (ISO). Many terms also used in other disciplines have already been defined and will be discussed for adoption in the context of atom-probe tomography.
New approach to estimating variability in visual field data using an image processing technique.
Crabb, D P; Edgar, D F; Fitzke, F W; McNaught, A I; Wynn, H P
1995-01-01
AIMS--A new framework for evaluating pointwise sensitivity variation in computerised visual field data is demonstrated. METHODS--A measure of local spatial variability (LSV) is generated using an image processing technique. Fifty five eyes from a sample of normal and glaucomatous subjects, examined on the Humphrey field analyser (HFA), were used to illustrate the method. RESULTS--Significant correlations between LSV and conventional estimates--namely, HFA pattern standard deviation and short term fluctuation--were found. CONCLUSION--LSV is not dependent on normals' reference data or repeated threshold determinations, thus potentially reducing test time. Also, the illustrated pointwise maps of LSV could provide a method for identifying areas of fluctuation commonly found in early glaucomatous field loss. PMID:7703196
Performance evaluation of infrared imaging system in field test
NASA Astrophysics Data System (ADS)
Wang, Chensheng; Guo, Xiaodong; Ren, Tingting; Zhang, Zhi-jie
2014-11-01
Infrared imaging systems have been applied widely in both military and civilian fields. Because infrared imagers come in various types with different parameters, system manufacturers and customers have a strong demand for evaluating the performance of IR imaging systems with a standard tool or platform. Since the first-generation IR imager was developed, the standard method of assessing performance has been the MRTD or related improved methods, which are not well adapted to current linear scanning imagers or 2D staring imagers based on FPA detectors. To address this problem, this paper describes an evaluation method based on the triangular orientation discrimination (TOD) metric, which is considered an effective and emerging method for evaluating the overall performance of EO systems. To realize the evaluation in field tests, an experimental instrument was developed. Considering the importance of the operational environment, the field test was carried out in a practical atmospheric environment. The tested imagers include a panoramic imaging system and staring imaging systems with different optics and detector parameters (both cooled and uncooled). After describing the instrument and experiment setup, the experimental results are presented, and the target range performance is analyzed and discussed. In the data analysis, the article gives the range prediction values obtained from the TOD method, the MRTD method, and practical experiments, with analysis and discussion of the results. The experimental results prove the effectiveness of this evaluation tool, and it can be taken as a platform to give a uniform performance prediction reference.
Drinking water test methods in crisis-afflicted areas: comparison of methods under field conditions.
Merle, Roswitha; Bleul, Ingo; Schulenburg, Jörg; Kreienbrock, Lothar; Klein, Günter
2011-11-01
To simplify the testing of drinking water in crisis-afflicted areas (as in Kosovo in 2007), rapid test methods were compared with the standard test. For Escherichia coli and coliform pathogens, rapid tests were made available: Colilert(®)-18, the P/A test with 4-methylumbelliferyl-β-D-glucuronide, and m-Endo Broth. Biochemical differentiation was carried out by Enterotube™ II. Enterococci were determined following the standard ISO test and by means of Enterolert™. Four hundred ninety-nine water samples were tested for E. coli and coliforms using four methods. Following the standard method, 20.8% (n=104) of the samples contained E. coli, whereas the rapid tests detected between 19.6% (m-Endo Broth, 92.0% concordance) and 20.0% (concordance: 93.6% Colilert-18 and 94.8% P/A test) positive samples. Regarding coliforms, the percentage of concordant results ranged from 98.4% (P/A test) to 99.0% (Colilert-18). Colilert-18 and m-Endo Broth detected even more positive samples than the standard method did. Enterococci were detected in 93 of 573 samples by the standard method, but in 92 samples by Enterolert (concordance: 99.5%). Considering the high-quality equipment and time requirements of the standard method, the use of rapid tests in crisis-afflicted areas is sufficiently reliable.
Abramyan, Tigran M.; Hyde-Volpe, David L.; Stuart, Steven J.; Latour, Robert A.
2017-01-01
The use of standard molecular dynamics simulation methods to predict the interactions of a protein with a material surface has the inherent limitations of being unable to determine the most likely conformations and orientations of the adsorbed protein on the surface or the level of convergence attained by the simulation. In addition, standard mixing rules are typically applied to combine the nonbonded force field parameters of the solution and solid phases of the system to represent interfacial behavior, without validation. As a means to circumvent these problems, the authors demonstrate the application of an efficient advanced sampling method (TIGER2A) for the simulation of the adsorption of hen egg-white lysozyme on a crystalline (110) high-density polyethylene surface plane. Simulations are conducted to generate a Boltzmann-weighted ensemble of sampled states using force field parameters that were validated to represent interfacial behavior for this system. The resulting ensembles of sampled states were then analyzed using an in-house-developed cluster analysis method to predict the most probable orientations and conformations of the protein on the surface based on the amount of sampling performed, from which free energy differences between the adsorbed states could be calculated. In addition, by conducting two independent sets of TIGER2A simulations combined with cluster analyses, the authors demonstrate a method to estimate the degree of convergence achieved for a given amount of sampling. The results from these simulations demonstrate that these methods enable the most probable orientations and conformations of an adsorbed protein to be predicted, and that the use of our validated interfacial force field parameter set provides closer agreement to available experimental results compared to using standard CHARMM force field parameterization to represent molecular behavior at the interface. PMID:28514864
NASA Astrophysics Data System (ADS)
Lye, Peter G.; Bradbury, Ronald; Lamb, David W.
Silica optical fibres were used to measure colour (mg anthocyanin/g fresh berry weight) in samples of red wine grape homogenates via optical Fibre Evanescent Field Absorbance (FEFA). Colour measurements from 126 samples of grape homogenate were compared against the standard industry spectrophotometric reference method, which involves chemical extraction and subsequent optical absorption measurements of clarified samples at 520 nm. FEFA absorbance on homogenates at 520 nm (FEFA520h) was correlated with the industry reference method measurements of colour (R2 = 0.46, n = 126). Using a simple regression equation, colour could be predicted with a standard error of cross-validation (SECV) of 0.21 mg/g, over a range of 0.6 to 2.2 mg anthocyanin/g and a standard deviation of 0.33 mg/g. With a Ratio of Performance Deviation (RPD) of 1.6, the technique, when utilizing only a single detection wavelength, is not robust enough to apply in a diagnostic sense; however, the results do demonstrate the potential of the FEFA method as a fast and low-cost assay of colour in homogenized samples.
Park, SangWook; Kim, Minhyuk
2016-01-01
In this paper, a numerical exposure assessment method is presented for quasi-static analysis using the finite-difference time-domain (FDTD) algorithm. The proposed method combines the scattered-field FDTD method with a quasi-static approximation for analyzing low-frequency electromagnetic problems. It provides an effective tool to compute induced electric fields in an anatomically realistic human voxel model exposed to an arbitrary non-uniform field source in the low-frequency range. The method is verified, and excellent agreement with theoretical solutions is found for a dielectric sphere model exposed to a magnetic dipole source. As a practical example, the assessment method is applied to the electric fields, current densities, and specific absorption rates induced in a human head and body in close proximity to a 150-kHz wireless power transfer system for cell phone charging. The results are compared to the limits recommended by the International Commission on Non-Ionizing Radiation Protection (ICNIRP) and the IEEE standard guidelines. PMID:27898688
Developing the Precision Magnetic Field for the E989 Muon g{2 Experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Matthias W.
The experimental value of $(g-2)_\mu$ historically has been and remains an important probe of the Standard Model and proposed extensions. Previous measurements of $(g-2)_\mu$ exhibit a persistent statistical tension with calculations using the Standard Model, implying that the theory may be incomplete and constraining possible extensions. The Fermilab Muon g-2 experiment, E989, endeavors to increase the precision over previous experiments by a factor of four and probe more deeply into the tension with the Standard Model. The $(g-2)_\mu$ experimental implementation measures two spin precession frequencies defined by the magnetic field: proton precession and muon precession. The value of $(g-2)_\mu$ is derived from a relationship between the two frequencies. The precision of the magnetic field measurements and the overall magnetic field uniformity achieved over the muon storage volume are thus two undeniably important aspects of the experiment in minimizing uncertainty. The current thesis details the methods employed to achieve the magnetic field goals and the results of that effort.
A Standardized Mean Difference Effect Size for Single Case Designs
ERIC Educational Resources Information Center
Hedges, Larry V.; Pustejovsky, James E.; Shadish, William R.
2012-01-01
Single case designs are a set of research methods for evaluating treatment effects by assigning different treatments to the same individual and measuring outcomes over time and are used across fields such as behavior analysis, clinical psychology, special education, and medicine. Emerging standards for single case designs have focused attention on…
A Knowledge Engineering Approach to Develop Domain Ontology
ERIC Educational Resources Information Center
Yun, Hongyan; Xu, Jianliang; Xiong, Jing; Wei, Moji
2011-01-01
Ontologies are one of the most popular and widespread means of knowledge representation and reuse. A few research groups have proposed a series of methodologies for developing their own standard ontologies. However, because this ontological construction concerns special fields, there is no standard method to build domain ontology. In this paper,…
Preparing Teachers for New Standards: From Content in Core Disciplines to Disciplinary Practices
ERIC Educational Resources Information Center
Boyle, Justin D.; Svihla, Vanessa; Tyson, Kersti; Bowers, Hannah; Buntjer, Jennifer; Garcia-Olp, Michelle; Kvam, Nicholas; Sample, Stephanie
2013-01-01
There are many barriers to the implementation of new practice standards. To implement practices that both prepare and inspire their students, preservice teachers need opportunities to enact reform practices: to prepare and be inspired themselves. These opportunities are found in students' content courses, methods courses, and field placements. In…
Laurin, E; Thakur, K K; Gardner, I A; Hick, P; Moody, N J G; Crane, M S J; Ernst, I
2018-05-01
Design and reporting quality of diagnostic accuracy studies (DAS) are important metrics for assessing utility of tests used in animal and human health. Following standards for designing DAS will assist in appropriate test selection for specific testing purposes and minimize the risk of reporting biased sensitivity and specificity estimates. To examine the benefits of recommending standards, design information from published DAS literature was assessed for 10 finfish, seven mollusc, nine crustacean and two amphibian diseases listed in the 2017 OIE Manual of Diagnostic Tests for Aquatic Animals. Of the 56 DAS identified, 41 were based on field testing, eight on experimental challenge studies and seven on both. Also, we adapted human and terrestrial-animal standards and guidelines for DAS structure for use in aquatic animal diagnostic research. Through this process, we identified and addressed important metrics for consideration at the design phase: study purpose, targeted disease state, selection of appropriate samples and specimens, laboratory analytical methods, statistical methods and data interpretation. These recommended design standards for DAS are presented as a checklist including risk-of-failure points and actions to mitigate bias at each critical step. Adherence to standards when designing DAS will also facilitate future systematic review and meta-analyses of DAS research literature. © 2018 John Wiley & Sons Ltd.
[Modified Delphi method in the constitution of school sanitation standard].
Yin, Xunqiang; Liang, Ying; Tan, Hongzhuan; Gong, Wenjie; Deng, Jing; Luo, Jiayou; Di, Xiaokang; Wu, Yue
2012-11-01
The aim was to constitute a school sanitation standard using a modified Delphi method, and to explore the feasibility and advantages of the Delphi method in the constitution of school sanitation standards. Two rounds of expert consultations were adopted in this study. The data were analyzed with SPSS 15.0 to screen indices of the school sanitation standard. Thirty-two experts completed the two rounds of consultations. The average length of expert service was (24.69±8.53) years. The authority coefficient was 0.729±0.172. The expert positive coefficient was 94.12% (32/34) in the first round and 100% (32/32) in the second round. The harmonious coefficients of importance, feasibility and rationality in the second round were 0.493 (P<0.05), 0.527 (P<0.01), and 0.535 (P<0.01), respectively, suggesting unanimous expert opinions. According to the second round of consultation, 38 indices were included in the framework. Theoretical analysis, literature review, and investigation are currently the methods generally used in health standard constitution; the Delphi method is a rapid, effective and feasible alternative in this field.
NASA Technical Reports Server (NTRS)
Hill, Charles S.; Oliveras, Ovidio M.
2011-01-01
Evolution of the 3D strain field during ASTM-D-7078 v-notch rail shear tests on 8-ply quasi-isotropic carbon fiber/epoxy laminates was determined by optical photogrammetry using an ARAMIS system. Specimens having non-optimal geometry and minor discrepancies in dimensional tolerances were shown to display non-symmetry and/or stress concentration in the vicinity of the notch relative to a specimen meeting the requirements of the standard, but resulting shear strength and modulus values remained within acceptable bounds of standard deviation. Based on these results, and reported difficulty machining specimens to the required tolerances using available methods, it is suggested that a parametric study combining analytical methods and experiment may provide rationale to increase the tolerances on some specimen dimensions, reducing machining costs, increasing the proportion of acceptable results, and enabling a wider adoption of the test method.
Assessment of bifacial photovoltaic module power rating methodologies–inside and out
Deline, Chris; MacAlpine, Sara; Marion, Bill; ...
2017-01-26
One-sun power ratings for bifacial modules are currently undefined. This is partly because there is no standard definition of rear irradiance given 1000 W·m-2 on the front. Using field measurements and simulations, we evaluate multiple deployment scenarios for bifacial modules and provide details on the amount of irradiance that could be expected. A simplified case that represents a single module deployed under conditions consistent with existing one-sun irradiance standards leads to a bifacial reference condition of 1000 W·m-2 Gfront and 130-140 W·m-2 Grear. For fielded systems of bifacial modules, Grear magnitude and spatial uniformity will be affected by self-shade from adjacent modules, varied ground cover, and ground-clearance height. A standard measurement procedure for bifacial modules is also currently undefined. A proposed international standard is under development, which provides the motivation for this paper. Here, we compare field measurements of bifacial modules under natural illumination with proposed indoor test methods, where irradiance is only applied to one side at a time. The indoor method has multiple advantages, including a controlled and repeatable irradiance and thermal environment, along with allowing the use of conventional single-sided flash test equipment. The comparison results are promising, showing that indoor and outdoor methods agree within 1%-2% for multiple rear-irradiance conditions and bifacial module constructions. Furthermore, a comparison with single-diode theory also shows good agreement with indoor measurements, within 1%-2% for power and other current-voltage curve parameters.
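One common way to express such a combined rating is an equivalent front-side irradiance that weights Grear by the module's bifaciality coefficient. The sketch below uses that formulation with an assumed coefficient of 0.7; it is an illustration, not a quotation from the proposed standard.

```python
# Hedged sketch of a bifacial "equivalent irradiance" rating condition.

def equivalent_irradiance(g_front, g_rear, phi):
    """Weight rear irradiance by the bifaciality coefficient phi (assumed form)."""
    return g_front + phi * g_rear

# Reference condition suggested by the field study above (phi = 0.7 assumed):
print(equivalent_irradiance(1000.0, 135.0, phi=0.7))  # ~1095 W/m^2
```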
Vialaret, Jérôme; Picas, Alexia; Delaby, Constance; Bros, Pauline; Lehmann, Sylvain; Hirtz, Christophe
2018-06-01
Hepcidin-25 is a peptide biomarker known to have considerable clinical potential for diagnosing iron-related diseases. Developing analytical methods for the absolute quantification of hepcidin remains a real challenge, however, due to the sensitivity, specificity and reproducibility issues involved. In this study, we compare and discuss two MS-based assays for quantifying hepcidin, which differ only in the type of liquid chromatography (nano LC/MS versus standard LC/MS) involved. The same sample preparation, the same internal standards and the same MS analyzer were used with both approaches. In the field of proteomics, nano LC chromatography is generally known to be more sensitive and less robust than standard LC methods. In this study, we established that the performance of the standard LC method is equivalent to that of our previously developed nano LC method. Given that the analytical performances were very similar in both cases, the standard-flow platform provides the more suitable alternative for accurately determining hepcidin in clinical settings. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Gabai, Haniel; Baranes-Zeevi, Maya; Zilberman, Meital; Shaked, Natan T.
2013-04-01
We propose an off-axis interferometric imaging system as a simple and unique modality for continuous, non-contact and non-invasive wide-field imaging and characterization of drug release from the polymeric devices used in biomedicine. In contrast to the current gold-standard methods in this field, usually based on chromatographic and spectroscopic techniques, our method requires no user intervention during the experiment, and only one test tube is prepared. We experimentally demonstrate imaging and characterization of drug release from a soy-based protein matrix, used as a skin equivalent for wound dressing, with controlled release of the anesthetic drug Bupivacaine. Our preliminary results demonstrate the high potential of our method as a simple and low-cost modality for wide-field imaging and characterization of drug release from drug delivery devices.
Percy, Andrew J; Mohammed, Yassene; Yang, Juncong; Borchers, Christoph H
2015-12-01
An increasingly popular mass spectrometry-based quantitative approach for health-related research in the biomedical field involves the use of stable isotope-labeled standards (SIS) and multiple/selected reaction monitoring (MRM/SRM). To improve inter-laboratory precision and enable more widespread use of this 'absolute' quantitative technique in disease-biomarker assessment studies, methods must be standardized. Results/methodology: Using this MRM-with-SIS-peptide approach, we developed an automated method (encompassing sample preparation, processing and analysis) for quantifying 76 candidate protein markers (spanning >4 orders of magnitude in concentration) in neat human plasma. The assembled biomarker assessment kit - the 'BAK-76' - contains the essential materials (SIS mixes), methods (for acquisition and analysis), and tools (Qualis-SIS software) for performing biomarker discovery or verification studies in a rapid and standardized manner.
Space Technology 5 Multi-point Measurements of Near-Earth Magnetic Fields: Initial Results
NASA Technical Reports Server (NTRS)
Slavin, James A.; Le, G.; Strangeway, R. L.; Wang, Y.; Boardsen, S.A.; Moldwin, M. B.; Spence, H. E.
2007-01-01
The Space Technology 5 (ST-5) mission successfully placed three micro-satellites in a 300 x 4500 km dawn-dusk orbit on 22 March 2006. Each spacecraft carried a boom-mounted vector fluxgate magnetometer that returned highly sensitive and accurate measurements of the geomagnetic field. These data allow, for the first time, the separation of temporal and spatial variations in field-aligned current (FAC) perturbations measured in low-Earth orbit on time scales of approximately 10 sec to 10 min. The constellation measurements are used to directly determine field-aligned current sheet motion, thickness and current density. In doing so, we demonstrate two multi-point methods for the inference of FAC current density that have not previously been possible in low-Earth orbit: 1) the "standard method," based upon s/c velocity, but corrected for FAC current sheet motion, and 2) the "gradiometer method," which uses simultaneous magnetic field measurements at two points with known separation. Future studies will apply these methods to the entire ST-5 data set and expand to include geomagnetic field gradient analyses as well as field-aligned and ionospheric currents.
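Both estimates reduce to Ampere's law for a planar current sheet, mu0*j = dB/dx; they differ in how the spatial derivative is obtained. A minimal sketch with illustrative values (not ST-5 data):

```python
# Two FAC current-density estimates from magnetic-field perturbations.
import math

MU0 = 4e-7 * math.pi   # vacuum permeability (T m/A)

def j_standard(dB_T, dt_s, v_sc_mps, v_sheet_mps=0.0):
    """'Standard method': spacecraft velocity converts dB/dt to dB/dx,
    here corrected for current-sheet motion v_sheet."""
    return (dB_T / dt_s) / (MU0 * (v_sc_mps - v_sheet_mps))

def j_gradiometer(dB_diff_T, separation_m):
    """'Gradiometer method': simultaneous two-point field difference."""
    return dB_diff_T / (MU0 * separation_m)

print(j_gradiometer(50e-9, 100e3))  # 50 nT over 100 km -> ~4e-7 A/m^2
```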
Integration of IEEE 1451 and HL7 exchanging information for patients' sensor data.
Kim, Wooshik; Lim, Suyoung; Ahn, Jinsoo; Nah, Jiyoung; Kim, Namhyun
2010-12-01
HL7 (Health Level 7) is a standard developed for exchanging incompatible healthcare information generated from programs or devices among heterogeneous medical information systems. At present, HL7 is growing into a global standard. However, the HL7 standard does not support effective methods for treating data from various medical sensors, especially from mobile sensors. As ubiquitous systems grow, HL7 must communicate with various medical transducers. In the area of sensor fields, IEEE 1451 is a group of standards for controlling transducers and for communicating data from/to various transducers. In this paper, we examine the possibility of interoperability between the two standards, i.e., HL7 and IEEE 1451. We then present a method to integrate them and show the preliminary results of this approach.
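As a flavor of what such an integration involves, the sketch below wraps a reading from an IEEE 1451-style transducer into an HL7 v2 OBX observation segment. The LOINC code and units are illustrative assumptions; the paper's actual mapping may differ.

```python
# Hypothetical mapping: IEEE 1451-style sensor reading -> HL7 v2 OBX segment.

def teds_to_obx(set_id, loinc, name, value, units):
    fields = ["OBX", str(set_id), "NM", f"{loinc}^{name}^LN", "",
              str(value), units, "", "", "", "", "F"]
    return "|".join(fields)

reading = {"value": 72, "units": "/min"}   # stand-in for a 1451 sensor output
print(teds_to_obx(1, "8867-4", "Heart rate", reading["value"], reading["units"]))
# -> OBX|1|NM|8867-4^Heart rate^LN||72|/min|||||F
```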
This report adapts the standard U.S. EPA methodology for deriving ambient water quality criteria. Rather than use toxicity test results, the adaptation uses field data to determine the loss of 5% of genera from streams. The method is applied to derive effect benchmarks for disso...
A simple, accurate, field-portable mixing ratio generator and Rayleigh distillation device
USDA-ARS?s Scientific Manuscript database
Routine field calibration of water vapor analyzers has always been a challenging problem for those making long-term flux measurements at remote sites. Automated sampling of standard gases from compressed tanks, the method of choice for CO2 calibration, cannot be used for H2O. Calibrations are typica...
On the `simple' form of the gravitational action and the self-interacting graviton
NASA Astrophysics Data System (ADS)
Tomboulis, E. T.
2017-09-01
The so-called ΓΓ-form of the gravitational Lagrangian, long known to provide its most compact expression as well as the most efficient generation of the graviton vertices, is taken as the starting point for discussing General Relativity as a theory of the self-interacting graviton. A straightforward but general method of converting to a covariant formulation by the introduction of a reference metric is given. It is used to recast the Einstein field equation as the equation of motion of a spin-2 particle interacting with the canonical energy-momentum tensor symmetrized by the standard Belinfante method applicable to any field carrying nonzero spin. This represents the graviton field equation in a form complying with the precepts of standard field theory. It is then shown how representations based on other, at face value completely unrelated definitions of energy-momentum (pseudo)tensors are all related by the addition of appropriate superpotential terms. Specifically, the superpotentials are explicitly constructed which connect to: i) the common definition consisting simply of the nonlinear part of the Einstein tensor; ii) the Landau-Lifshitz definition.
Phased-array vector velocity estimation using transverse oscillations.
Pihl, Michael J; Marcher, Jonne; Jensen, Jorgen A
2012-12-01
A method for estimating the 2-D vector velocity of blood using a phased-array transducer is presented. The approach is based on the transverse oscillation (TO) method. The purposes of this work are to expand the TO method to a phased-array geometry and to broaden the potential clinical applicability of the method. A phased-array transducer has a smaller footprint and a larger field of view than a linear array, and is therefore more suited for, e.g., cardiac imaging. The method relies on suitable TO fields, and a beamforming strategy employing diverging TO beams is proposed. The implementation of the TO method using a phased-array transducer for vector velocity estimation is evaluated through simulation, and flow-rig measurements are acquired using an experimental scanner. The vast number of calculations needed to perform flow simulations makes the optimization of the TO fields a cumbersome process. Therefore, three performance metrics are proposed. They are calculated based on the complex TO spectrum of the combined TO fields. It is hypothesized that the performance metrics are related to the performance of the velocity estimates. The simulations show that the squared correlation values range from 0.79 to 0.92, indicating a correlation between the performance metrics of the TO spectrum and the velocity estimates. Because these performance metrics are much more readily computed, the TO fields can be optimized faster for improved velocity estimation in both simulations and measurements. For simulations of a parabolic flow at a depth of 10 cm, a relative (to the peak velocity) bias and standard deviation of 4% and 8%, respectively, are obtained. Overall, the simulations show that the TO method implemented on a phased-array transducer is robust, with relative standard deviations around 10% in most cases. The flow-rig measurements show similar results. At a depth of 9.5 cm using 32 emissions per estimate, the relative standard deviation is 9% and the relative bias is -9%. At the center of the vessel, the velocity magnitude is estimated to be 0.25 ± 0.023 m/s, compared with an expected peak velocity magnitude of 0.25 m/s, and the beam-to-flow angle is calculated to be 89.3° ± 0.77°, compared with an expected angle value between 89° and 90°. For steering angles up to ±20°, the relative standard deviation is less than 20%. The results also show that a 64-element transducer implementation is feasible, but with poorer performance compared with a 128-element transducer. The simulation and experimental results demonstrate that the TO method is suitable for use in conjunction with a phased-array transducer, and that 2-D vector velocity estimation is possible down to a depth of 15 cm.
NASA Astrophysics Data System (ADS)
Frassinetti, L.; Olofsson, K. E. J.; Fridström, R.; Setiadi, A. C.; Brunsell, P. R.; Volpe, F. A.; Drake, J.
2013-08-01
A new method for estimating the wall diffusion time of non-axisymmetric fields is developed. The method, based on rotating external fields and on measurement of the wall frequency response, is developed and tested in EXTRAP T2R. The method allows the experimental estimate of the wall diffusion time for each Fourier harmonic and the estimate of toroidal asymmetries in wall diffusion. The method intrinsically accounts for the effects of three-dimensional structures and of the shell gaps. Far from the gaps, experimental results are in good agreement with the diffusion time estimated with a simple cylindrical model that assumes a homogeneous wall. The method is also applied with non-standard configurations of the coil array, in order to mimic tokamak-relevant settings with partial wall coverage and active coils of large toroidal extent. The comparison with the full-coverage results shows good agreement if the effects of the relevant sidebands are considered.
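The essence of the estimate can be sketched with the thin-shell picture mentioned above, in which the penetrated field responds as H(w) = 1/(1 + i*w*tau_w). The data below are synthetic, and the model form is an assumption consistent with the cylindrical homogeneous-wall approximation.

```python
# Hedged sketch: recover tau_w from a measured frequency response.
import numpy as np

tau_true = 6e-3                                    # 6 ms, illustrative
w = 2 * np.pi * np.array([5.0, 10.0, 20.0, 50.0])  # rotating-field frequencies (rad/s)
H = 1.0 / (1.0 + 1j * w * tau_true)                # synthetic measured response

# For this model, -Im(H)/Re(H) = w*tau_w at every frequency:
tau_est = (-H.imag / (H.real * w)).mean()
print(tau_est)                                     # recovers ~6e-3 s
```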
SU-E-T-804: Verification of the BJR-25 Method of KQ Determination for CyberKnife Absolute Dosimetry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gersh, J; Spectrum Medical Physics, LLC - Greenville, SC; Willett, B
2015-06-15
Purpose: Absolute calibration of the CyberKnife is performed using a 6cm-diameter cone defined at 80cm SAD. Since kQ is defined using PDD values determined with 10×10 cm fields at 100cm SSD, the PDD must be corrected in order to correctly apply the quality conversion factor. The accepted method is based on equivalent field-size conversions of PDD values using BJR25. Using the new InCise MLC system, the CyberKnife is capable of generating a rectangular field equivalent to a 10×10 cm square field. In this study, a comparison is made between kQ values determined using the traditional BJR25 method and the MLC method introduced herein. Methods: First, kQ(BJR) is determined: a PDD is acquired using a 6cm circular field at 100cm SSD, its field size is converted to an equivalent square, and the PDD is converted to a 10×10cm field using the appropriate BJR25 table. Maintaining a consistent setup, the collimator is changed, and the MLC method is used. Finally, kQ is determined using PDDs acquired with a 9.71×10.31cm field at 100cm SSD. This field is produced by setting the field to a size of 7.77×8.25cm (since it is defined at 80cm SAD); an exact 10×10cm field is not possible, since field size is restricted to increments of the leaf width (0.25cm). This comparison is made using an Exradin A1SL, IBA CC08, IBA CC13, and an Exradin A19. For each detector and collimator type, the beam injector was adjusted to give 5 different beam qualities, representing a range of clinical systems. Results: Averaging across all beam qualities, kQ(MLC) differed from kQ(BJR) by less than 0.15%. The difference between the values increased with detector volume. Conclusion: For CyberKnife users with standard cone collimators, the BJR25 method has been verified. For CyberKnife users with the MLC system, a technique is described to determine kQ. The primary author is the President/Owner of Spectrum Medical Physics, LLC, a company which maintains contracts with Siemens Healthcare and Standard Imaging, Inc.
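The equivalent-square step of the BJR25 route is easy to illustrate. The equal-area rule used below (side = diameter × sqrt(pi)/2) is an assumption for illustration; in practice the BJR25 tables supply both the equivalent square and the subsequent PDD mapping to 10×10 cm.

```python
# Sketch: equivalent square of the 6-cm circular cone (equal-area rule assumed).
import math

def equivalent_square_side(diameter_cm):
    return diameter_cm * math.sqrt(math.pi) / 2.0

print(equivalent_square_side(6.0))  # ~5.3 cm; PDDs are then mapped to the
                                    # 10x10 cm reference field via BJR25 tables
```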
Sundar, Vikram; Gelbwaser-Klimovsky, David; Aspuru-Guzik, Alán
2018-04-05
Modeling nuclear quantum effects is required for accurate molecular dynamics (MD) simulations of molecules. The community has paid special attention to water and other biomolecules that show hydrogen bonding. Standard methods of modeling nuclear quantum effects like Ring Polymer Molecular Dynamics (RPMD) are computationally costlier than running classical trajectories. A force-field functor (FFF) is an alternative method that computes an effective force field that replicates quantum properties of the original force field. In this work, we propose an efficient method of computing FFF using the Wigner-Kirkwood expansion. As a test case, we calculate a range of thermodynamic properties of Neon, obtaining the same level of accuracy as RPMD, but with the shorter runtime of classical simulations. By modifying existing MD programs, the proposed method could be used in the future to increase the efficiency and accuracy of MD simulations involving water and proteins.
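To make the effective force field idea concrete, the sketch below adds the lowest-order hbar^2 quantum correction to a Lennard-Jones pair potential using the quadratic Feynman-Hibbs form, a close relative of the Wigner-Kirkwood route the paper takes. The neon parameters and the correction form are assumptions for illustration, not the paper's functor.

```python
# Hedged sketch: lowest-order quantum-corrected pair potential for neon.
import numpy as np

HBAR, KB = 1.0546e-34, 1.3806e-23
m = 20.18 * 1.6605e-27                  # neon atomic mass (kg)
eps, sig = 35.6 * KB, 2.749e-10         # assumed LJ parameters for neon

def lj(r):
    return 4 * eps * ((sig / r) ** 12 - (sig / r) ** 6)

def v_eff(r, T):
    """V_QFH = V + (hbar^2 beta / 24 mu) * (V'' + 2 V'/r), mu = m/2 for a pair."""
    beta, mu, h = 1.0 / (KB * T), m / 2.0, 1e-13
    v1 = (lj(r + h) - lj(r - h)) / (2 * h)             # numerical V'
    v2 = (lj(r + h) - 2 * lj(r) + lj(r - h)) / h ** 2  # numerical V''
    return lj(r) + (HBAR ** 2 * beta / (24 * mu)) * (v2 + 2 * v1 / r)

r = 3.1e-10
print(lj(r) / KB, v_eff(r, T=30.0) / KB)   # correction lifts the well ~2 K at 30 K
```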
NASA Astrophysics Data System (ADS)
Hoefen, T. M.; Kokaly, R. F.; Swayze, G. A.; Livo, K. E.
2015-12-01
Collection of spectroscopic data has expanded with the development of field-portable spectrometers. The most commonly available spectrometers span one or several wavelength ranges: the visible (VIS) and near-infrared (NIR) region from approximately 400 to 1000 nm, and the shortwave infrared (SWIR) region from approximately 1000-2500 nm. Basic characteristics of spectrometer performance are the wavelength position and bandpass of each channel. Bandpass can vary across the wavelength coverage of an instrument, due to spectrometer design and detector materials. Spectrometer specifications can differ from one instrument to the next for a given model and between manufacturers. The USGS Spectroscopy Lab in Denver has developed a simple method to evaluate field spectrometer wavelength accuracy and bandpass values using transmission measurements of materials with intense, narrow absorption features, including Mylar* plastic, praseodymium-doped glass, and National Institute of Standards and Technology Standard Reference Material 2035. The evaluation procedure has been applied in laboratory and field settings for 19 years and used to detect deviations from cited manufacturer specifications. Tracking of USGS spectrometers with transmission standards has revealed several instances of wavelength shifts due to wear in spectrometer components. Since shifts in channel wavelengths and differences in bandpass between instruments can impact the use of field spectrometer data to calibrate and analyze imaging spectrometer data, field protocols to measure wavelength standards can limit data loss due to spectrometer degradation. In this paper, the evaluation procedure will be described and examples of observed wavelength shifts during a spectrometer field season will be presented. The impact of changing wavelength and bandpass characteristics on spectral measurements will be demonstrated and implications for spectral libraries will be discussed. *Any use of trade, firm, or product names is for descriptive purposes only and does not imply endorsement by the U.S. Government.
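One simple numerical form of the check described above is to locate the measured center of a narrow transmission feature and compare it against its certified wavelength. The feature shape and the 585-nm reference below are assumptions; three-point parabolic interpolation recovers the shift to sub-channel precision.

```python
# Sketch: detect a wavelength-scale shift from a narrow transmission feature.
import numpy as np

wl = np.arange(560.0, 610.0, 1.0)       # reported channel centers (nm)
true_center, fwhm, shift = 585.0, 8.0, 1.3
sigma = fwhm / 2.3548
T = 1.0 - 0.6 * np.exp(-0.5 * ((wl - (true_center + shift)) / sigma) ** 2)

absorb = -np.log(T)                     # absorbance-like peak
i = absorb.argmax()
a, b, c = absorb[i - 1], absorb[i], absorb[i + 1]
center = wl[i] + 0.5 * (a - c) / (a - 2 * b + c)   # channel spacing = 1 nm
print(center - true_center)             # ~ +1.3 nm apparent wavelength shift
```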
Standards for dielectric elastomer transducers
NASA Astrophysics Data System (ADS)
Carpi, Federico; Anderson, Iain; Bauer, Siegfried; Frediani, Gabriele; Gallone, Giuseppe; Gei, Massimiliano; Graaf, Christian; Jean-Mistral, Claire; Kaal, William; Kofod, Guggi; Kollosche, Matthias; Kornbluh, Roy; Lassen, Benny; Matysek, Marc; Michel, Silvain; Nowak, Stephan; O'Brien, Benjamin; Pei, Qibing; Pelrine, Ron; Rechenbach, Björn; Rosset, Samuel; Shea, Herbert
2015-10-01
Dielectric elastomer transducers consist of thin electrically insulating elastomeric membranes coated on both sides with compliant electrodes. They are a promising electromechanically active polymer technology that may be used for actuators, strain sensors, and electrical generators that harvest mechanical energy. The rapid development of this field calls for the first standards, collecting guidelines on how to assess and compare the performance of materials and devices. This paper addresses this need, presenting standardized methods for material characterisation, device testing and performance measurement. These proposed standards are intended to have a general scope and a broad applicability to different material types and device configurations. Nevertheless, they also intentionally exclude some aspects where knowledge and/or consensus in the literature were deemed to be insufficient. This is a sign of a young and vital field, whose research development is expected to benefit from this effort towards standardisation.
Coordination and standardization of federal sedimentation activities
Glysson, G. Douglas; Gray, John R.
1997-01-01
... precipitation information critical to water resources management. Memorandum M-92-01 covers primarily freshwater bodies and includes activities such as "development and distribution of consensus standards, field-data collection and laboratory analytical methods, data processing and interpretation, data-base management, quality control and quality assurance, and water-resources appraisals, assessments, and investigations." Research activities are not included.
ERIC Educational Resources Information Center
Gibbone, Anne; Mercier, Kevin
2014-01-01
Teacher candidates' use of technology is a component of physical education teacher education (PETE) program learning goals and accreditation standards. The methods presented in this article can help teacher candidates to learn about and apply technology as an instructional tool prior to and during field or clinical experiences. The goal in…
Comparing Measurement Error between Two Different Methods of Measurement of Various Magnitudes
ERIC Educational Resources Information Center
Zavorsky, Gerald S.
2010-01-01
Measurement error is a common problem in several fields of research such as medicine, physiology, and exercise science. The standard deviation of repeated measurements on the same person is the measurement error. One way of presenting measurement error is called the repeatability, which is 2.77 multiplied by the within subject standard deviation.…
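A worked sketch of that statistic, with synthetic repeated measurements (rows are subjects, columns are repeats):

```python
# Repeatability = 2.77 x within-subject standard deviation.
import numpy as np

x = np.array([[5.1, 5.3], [6.0, 5.8], [7.2, 7.5]])  # synthetic repeat data
sw = np.sqrt(np.mean(np.var(x, axis=1, ddof=1)))    # within-subject SD
print(2.77 * sw)  # ~95% of repeat differences fall within this value
```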
The US Environmental Protection Agency (EPA) published a National Ambient Air Quality Standard (NAAQS) and the accompanying Federal Reference Method (FRM) for PM10 in 1987. The EPA revised the particle standards and FRM in 1997 to include PM2.5. In 2005, EPA...
The repeatability of mean defect with size III and size V standard automated perimetry.
Wall, Michael; Doyle, Carrie K; Zamba, K D; Artes, Paul; Johnson, Chris A
2013-02-15
The mean defect (MD) of the visual field is a global statistical index used to monitor overall visual field change over time. Our goal was to investigate the relationship of MD and its variability for two clinically used strategies (Swedish Interactive Threshold Algorithm [SITA] standard size III and full threshold size V) in glaucoma patients and controls. We tested one eye, at random, for 46 glaucoma patients and 28 ocularly healthy subjects with Humphrey program 24-2 SITA standard for size III and full threshold for size V each five times over a 5-week period. The standard deviation of MD was regressed against the MD for the five repeated tests, and quantile regression was used to show the relationship of variability and MD. A Wilcoxon test was used to compare the standard deviations of the two testing methods following quantile regression. Both types of regression analysis showed increasing variability with increasing visual field damage. Quantile regression showed modestly smaller MD confidence limits. There was a 15% decrease in SD with size V in glaucoma patients (P = 0.10) and a 12% decrease in ocularly healthy subjects (P = 0.08). The repeatability of size V MD appears to be slightly better than size III SITA testing. When using MD to determine visual field progression, a change of 1.5 to 4 decibels (dB) is needed to be outside the normal 95% confidence limits, depending on the size of the stimulus and the amount of visual field damage.
NASA Astrophysics Data System (ADS)
Pan, Kok-Kwei
We have generalized the linked cluster expansion method to treat a wider range of many-body quantum systems, such as quantum spin systems with crystal-field potentials and the Hubbard model. The technique sums all connected diagrams to a given order in the perturbative Hamiltonian. The modified multiple-site Wick reduction theorem and the simple τ dependence of the standard basis operators have been used to facilitate the evaluation of the integration procedures in the perturbation expansion. Computational methods are developed to calculate all terms in the series expansion. As a first example, the perturbation series expansion of thermodynamic quantities of the single-band Hubbard model has been obtained using a linked cluster series expansion technique. We have made corrections to all previous results of several papers (up to fourth order). The behaviors of the three-dimensional simple cubic and body-centered cubic systems have been discussed from qualitative analysis of the perturbation series up to fourth order. We have also calculated the sixth-order perturbation series of this model. As a second example, we present the magnetic properties of the spin-one Heisenberg model with arbitrary crystal-field potential using a linked cluster series expansion. The calculation of the thermodynamic properties using this method covers the whole range of temperature, in both the magnetically ordered and disordered phases. The series for the susceptibility and magnetization have been obtained up to fourth order for this model. The method sums all perturbation terms to a given order and estimates the result using a well-developed and highly successful extrapolation method (the standard ratio method). The dependence of the critical temperature on the crystal-field potential, and the magnetization as a function of temperature and crystal-field potential, are shown. The critical behaviors at zero temperature are also shown. The range of the crystal-field potential for Ni2+ compounds is roughly estimated based on this model using known experimental results.
Broadband standard dipole antenna for antenna calibration
NASA Astrophysics Data System (ADS)
Koike, Kunimasa; Sugiura, Akira; Morikawa, Takao
1995-06-01
Antenna calibration of EMI antennas is mostly performed by the standard antenna method at an open-field test site, using a specially designed dipole antenna as a reference. In order to develop broadband standard antennas, the antenna factors of shortened dipoles are theoretically investigated. First, the effects of the dipole length are analyzed using the induced emf method. Then, baluns and loads are examined to determine their influence on the antenna factors. It is found that transformer-type baluns are very effective for improving the height dependence of the antenna factors. Resistive loads are also useful for flattening the frequency dependence. Based on these studies, a specification is developed for a broadband standard antenna operating in the 30 to 150 MHz frequency range.
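The antenna factor these designs aim to flatten has a compact definition, AF = E/V, usually quoted in dB(1/m). A minimal sketch with illustrative numbers:

```python
# Antenna factor: incident field strength divided by received voltage.
import math

def antenna_factor_db(e_field_v_per_m, received_voltage_v):
    return 20 * math.log10(e_field_v_per_m / received_voltage_v)

print(antenna_factor_db(0.1, 0.01))  # 10 (1/m) -> 20 dB(1/m)
```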
Berlin, R H; Janzon, B; Rybeck, B; Schantz, B; Seeman, T
1982-01-01
A standard methodology for estimating the energy transfer characteristics of small calibre bullets and other fast missiles is proposed, consisting of firings against targets made of soft soap. The target is evaluated by measuring the size of the permanent cavity remaining in it after the shot. The method is very simple to use and does not require access to any sophisticated measuring equipment. It can be applied under all circumstances, even under field conditions. Adequate methods of calibration to ensure good accuracy are suggested. The precision and limitations of the method are discussed.
Comparability of river suspended-sediment sampling and laboratory analysis methods
Groten, Joel T.; Johnson, Gregory D.
2018-03-06
Accurate measurements of suspended sediment, a leading water-quality impairment in many Minnesota rivers, are important for managing and protecting water resources; however, water-quality standards for suspended sediment in Minnesota are based on grab field sampling and total suspended solids (TSS) laboratory analysis methods that have underrepresented concentrations of suspended sediment in rivers compared to U.S. Geological Survey equal-width-increment or equal-discharge-increment (EWDI) field sampling and suspended sediment concentration (SSC) laboratory analysis methods. Because of this underrepresentation, the U.S. Geological Survey, in collaboration with the Minnesota Pollution Control Agency, collected concurrent grab and EWDI samples at eight sites to compare results obtained using different combinations of field sampling and laboratory analysis methods. Study results determined that grab field sampling and TSS laboratory analysis results were biased substantially low compared to EWDI sampling and SSC laboratory analysis results, respectively. Differences in both field sampling and laboratory analysis methods caused grab and TSS methods to be biased substantially low. The difference in laboratory analysis methods was slightly greater than field sampling methods. Sand-sized particles had a strong effect on the comparability of the field sampling and laboratory analysis methods. These results indicated that grab field sampling and TSS laboratory analysis methods fail to capture most of the sand being transported by the stream. The results indicate there is less of a difference among samples collected with grab field sampling and analyzed for TSS and concentration of fines in SSC. Even though differences are present, the presence of strong correlations between SSC and TSS concentrations provides the opportunity to develop site-specific relations to address transport processes not captured by grab field sampling and TSS laboratory analysis methods.
Campbell, Rebecca; Pierce, Steven J; Sharma, Dhruv B; Shaw, Jessica; Feeney, Hannah; Nye, Jeffrey; Schelling, Kristin; Fehler-Cabral, Giannina
2017-01-01
A growing number of U.S. cities have large numbers of untested sexual assault kits (SAKs) in police property facilities. Testing older kits and maintaining current case work will be challenging for forensic laboratories, creating a need for more efficient testing methods. We evaluated selective degradation methods for DNA extraction using actual case work from a sample of previously unsubmitted SAKs in Detroit, Michigan. We randomly assigned 350 kits to either standard or selective degradation testing methods and then compared DNA testing rates and CODIS entry rates between the two groups. Continuation-ratio modeling showed no significant differences, indicating that the selective degradation method had no decrement in performance relative to customary methods. Follow-up equivalence tests indicated that CODIS entry rates for the two methods could differ by more than ±5%. Selective degradation methods required less personnel time for testing and scientific review than standard testing. © 2016 American Academy of Forensic Sciences.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Method employee used to purchase transportation tickets (Method / Indicator): GTR, U.S. Government Transportation Request; Central Billing Account, a contractor centrally billed account; Government Charge Card, In... /Date Fields: Claimant Signature, the traveler's signature or digital representation. The signature signifies...
Background field Landau mode operators for the nucleon
NASA Astrophysics Data System (ADS)
Kamleh, Waseem; Bignell, Ryan; Leinweber, Derek B.; Burkardt, Matthias
2018-03-01
The introduction of a uniform background magnetic field breaks three-dimensional spatial symmetry for a charged particle and introduces Landau mode effects. Standard quark operators are inefficient at isolating the nucleon correlation function at nontrivial field strengths. We introduce novel quark operators constructed from the two-dimensional Laplacian eigenmodes that describe a charged particle on a finite lattice. These eigenmode-projected quark operators provide enhanced precision for calculating nucleon energy shifts in a magnetic field. Preliminary results are obtained for the neutron and proton magnetic polarisabilities using these methods.
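A minimal sketch of the eigenmode construction, assuming a gauge-covariant lattice Laplacian with a boundary twist that keeps the flux per plaquette uniform; the lattice size and flux quanta are illustrative, and the paper's operators would project quark sources onto the lowest such modes.

```python
# 2D lattice Laplacian for a charged particle in a uniform background field.
import numpy as np

Nx = Ny = 8
n_flux = 2                                 # integer flux quanta (assumed)
phi = 2 * np.pi * n_flux / (Nx * Ny)       # flux per plaquette

def idx(x, y):
    return (x % Nx) * Ny + (y % Ny)

def u_x(x, y):  # +x link; twist on the last column keeps the flux uniform
    return np.exp(-1j * phi * Nx * y) if x == Nx - 1 else 1.0

def u_y(x, y):  # +y link carries the Landau-gauge phase
    return np.exp(1j * phi * x)

H = np.zeros((Nx * Ny, Nx * Ny), dtype=complex)
for x in range(Nx):
    for y in range(Ny):
        i = idx(x, y)
        H[i, i] = 4.0
        H[i, idx(x + 1, y)] -= u_x(x, y)
        H[idx(x + 1, y), i] -= np.conj(u_x(x, y))
        H[i, idx(x, y + 1)] -= u_y(x, y)
        H[idx(x, y + 1), i] -= np.conj(u_y(x, y))

evals = np.linalg.eigvalsh(H)
print(evals[:4])  # first n_flux values form the (nearly degenerate) lowest level
```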
Automated installation methods for photovoltaic arrays
NASA Astrophysics Data System (ADS)
Briggs, R.; Daniels, A.; Greenaway, R.; Oster, J., Jr.; Racki, D.; Stoeltzing, R.
1982-11-01
Since installation expenses constitute a substantial portion of the cost of a large photovoltaic power system, methods for reduction of these costs were investigated. The installation of the photovoltaic arrays includes all areas, starting with site preparation (i.e., trenching, wiring, drainage, foundation installation, lightning protection, grounding and installation of the panel) and concluding with the termination of the bus at the power conditioner building. To identify the optimum combination of standard installation procedures and automated/mechanized techniques, the installation process was investigated including the equipment and hardware available, the photovoltaic array structure systems and interfaces, and the array field and site characteristics. Preliminary designs of hardware for both the standard installation method, the automated/mechanized method, and a mix of standard installation procedures and mechanized procedures were identified to determine which process effectively reduced installation costs. In addition, costs associated with each type of installation method and with the design, development and fabrication of new installation hardware were generated.
Testing strong-segregation theory against self-consistent-field theory for block copolymer melts
NASA Astrophysics Data System (ADS)
Matsen, M. W.
2001-06-01
We introduce a highly efficient self-consistent-field theory (SCFT) method for examining the cylindrical and spherical block copolymer morphologies using a standard unit cell approximation (UCA). The method is used to calculate the classical diblock copolymer phase boundaries deep into the strong-segregation regime, where they can be compared with recent improvements to strong-segregation theory (SST). The comparison suggests a significant discrepancy between the two theories indicating that our understanding of strongly stretched polymer brushes is still incomplete.
Role of Microstructure in High Temperature Oxidation.
1980-05-01
[Front matter and figure residue; recoverable fragments: a contents entry "Effect of Surface Preparation Upon Oxidation"; section headings "Experimental Methods" and "Specimen Preparation"; an "angle sectioning method"; and the caption of Figure 3, on applying a test line to an image of NiO scale to count NiO grain-boundary intersections.] ... of knowledge in this field was readily accounted for by extreme experimental difficulty in applying standard methods of microscopy to the thin
Statistical Process Control Charts for Measuring and Monitoring Temporal Consistency of Ratings
ERIC Educational Resources Information Center
Omar, M. Hafidz
2010-01-01
Methods of statistical process control were briefly investigated in the field of educational measurement as early as 1999. However, only the use of a cumulative sum chart was explored. In this article other methods of statistical quality control are introduced and explored. In particular, methods in the form of Shewhart mean and standard deviation…
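For illustration, the sketch below computes 3-sigma limits for the Shewhart mean (X-bar) and standard deviation (S) charts named above, using synthetic subgrouped rating data; c4 is the usual unbiasing constant for the average subgroup SD.

```python
# Shewhart X-bar and S control charts for subgrouped ratings (synthetic data).
import numpy as np
from math import gamma, sqrt

ratings = np.random.default_rng(0).normal(70, 3, size=(20, 5))  # 20 subgroups of 5
n = ratings.shape[1]
c4 = sqrt(2.0 / (n - 1)) * gamma(n / 2) / gamma((n - 1) / 2)

xbar, s = ratings.mean(axis=1), ratings.std(axis=1, ddof=1)
xbarbar, sbar = xbar.mean(), s.mean()

ucl_x = xbarbar + 3 * sbar / (c4 * sqrt(n))    # X-bar chart limits
lcl_x = xbarbar - 3 * sbar / (c4 * sqrt(n))
ucl_s = sbar * (1 + 3 * sqrt(1 - c4**2) / c4)  # S chart upper limit (B4 * sbar)
print((lcl_x, ucl_x), ucl_s)
print("out-of-control subgroups:", np.where((xbar > ucl_x) | (xbar < lcl_x))[0])
```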
Marcotty, T; Billiouw, M; Chaka, G; Berkvens, D; Losson, B; Brandt, J
2001-08-20
Immunisation by the infection and treatment method using the Katete strain is currently the most efficient prophylactic technique to control East Coast fever (ECF) in the endemic areas of the Eastern Province of Zambia. The maintenance of the cold chain in liquid nitrogen up to the time of inoculation and the cost of the reference long-acting oxytetracycline (Terramycin LA, Pfizer) are the main drawbacks of the method. The work presented in this paper aims at reducing the cost of immunisation against ECF by using an ice bath for the field delivery and a cheaper long-acting oxytetracycline formulation as chemotherapeutic agent. In experimental conditions, the results from 40 calves immunised after various periods of storage on ice ranging from 4 to 32 h indicate that deferred immunisation performed with a stabilate kept on ice for up to 6h after thawing has an efficiency of 90%. Moreover, sporozoites kept on ice were still surviving 32 h after thawing. In a field trial, 91 calves were inoculated with a stabilate kept for 3.5-5.5 h after thawing and dilution whereas 86 calves were immunised using the standard method. Clinical and parasitological reactions to immunisation were monitored as well as the seroconversion. In the field trial, the deferred immunisation was more efficient than the standard method. The acid formulation of oxytetracycline that was tested was found as suitable as the reference alkaline formulation for the chemotherapeutic control of the Katete strain in ECF immunisation. One indoor trial was carried out on 10 animals and a field trial involved 93 calves.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Habte, Aron; Sengupta, Manajit; Andreas, Afshin
Accurate solar radiation measurement by radiometers depends on instrument performance specifications, installation method, calibration procedure, measurement conditions, maintenance practices, location, and environmental conditions. This study addresses the effect of different calibration methodologies and the resulting differences provided by radiometric calibration service providers such as the National Renewable Energy Laboratory (NREL) and manufacturers of radiometers. Some of these methods calibrate radiometers indoors and some outdoors. To establish or understand the differences in calibration methodologies, we processed and analyzed field-measured data from radiometers deployed for 10 months at NREL's Solar Radiation Research Laboratory. These different methods of calibration resulted in a difference of ±1% to ±2% in solar irradiance measurements. Analyzing these differences will ultimately assist in determining the uncertainties of the field radiometer data and will help develop a consensus on a standard for calibration. Further advancing procedures for precisely calibrating radiometers to world reference standards that reduce measurement uncertainties will help the accurate prediction of the output of planned solar conversion projects and improve the bankability of financing solar projects.
Riccardi, M; Mele, G; Pulvento, C; Lavini, A; d'Andria, R; Jacobsen, S-E
2014-06-01
Leaf chlorophyll content provides valuable information about the physiological status of plants; it is directly linked to photosynthetic potential and primary production. In vitro assessment by wet chemical extraction is the standard method for leaf chlorophyll determination. This measurement is expensive, laborious, and time consuming. Over the years alternative methods, rapid and non-destructive, have been explored. The aim of this work was to evaluate the applicability of a fast and non-invasive field method for estimation of chlorophyll content in quinoa and amaranth leaves based on analysis of the RGB components of digital images acquired with a standard SLR camera. Digital images of leaves from different genotypes of quinoa and amaranth were acquired directly in the field. Mean values of each RGB component were evaluated via image analysis software and correlated with leaf chlorophyll content provided by the standard laboratory procedure. Single and multiple regression models using the RGB color components as independent variables were tested and validated. The performance of the proposed method was compared to that of the widely used non-destructive SPAD method. The sensitivity of the best regression models for different genotypes of quinoa and amaranth was also checked. Color data acquisition of the leaves in the field with a digital camera was quicker, more effective, and lower cost than SPAD. The proposed RGB models provided better correlation (highest R²) and prediction (lowest RMSEP) of the true value of foliar chlorophyll content, and had a lower amount of noise over the whole range of chlorophyll studied, compared with SPAD and other leaf-image-processing-based models when applied to quinoa and amaranth.
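As an illustration of the regression step described above, the following sketch fits a multiple linear model of chlorophyll content on mean RGB components and reports R² and an in-sample RMSEP. All numbers are hypothetical placeholders, not data from the study.

```python
import numpy as np

# Hypothetical data: mean R, G, B of each leaf image (0-255) and the
# chlorophyll content (e.g. mg/m^2) from the wet-extraction reference.
rgb = np.array([[62, 112, 41], [71, 125, 55], [90, 140, 70],
                [55, 100, 38], [80, 131, 60], [66, 118, 47]], float)
chl = np.array([412.0, 355.0, 270.0, 450.0, 310.0, 380.0])

# Multiple linear regression chl ~ a0 + a1*R + a2*G + a3*B via least squares.
X = np.column_stack([np.ones(len(rgb)), rgb])
coef, *_ = np.linalg.lstsq(X, chl, rcond=None)
pred = X @ coef

# Goodness of fit (R^2) and prediction error (RMSEP; in-sample here, whereas
# the study validates on held-out samples).
ss_res = np.sum((chl - pred) ** 2)
ss_tot = np.sum((chl - chl.mean()) ** 2)
print("R^2   =", 1 - ss_res / ss_tot)
print("RMSEP =", np.sqrt(ss_res / len(chl)))
```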
Topography measurements and applications in ballistics and tool mark identifications
Vorburger, T V; Song, J; Petraco, N
2016-01-01
The application of surface topography measurement methods to the field of firearm and toolmark analysis is fairly new. The field has been boosted by the development of a number of competing optical methods, which has improved the speed and accuracy of surface topography acquisitions. We describe here some of these measurement methods as well as several analytical methods for assessing similarities and differences among pairs of surfaces. We also provide a few examples of research results to identify cartridge cases originating from the same firearm or tool marks produced by the same tool. Physical standards and issues of traceability are also discussed. PMID:27182440
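One common family of similarity metrics in this field is the normalized cross-correlation of surface profiles. The sketch below is a generic implementation of a maximum cross-correlation score over lateral lags, not the specific metrics used in the cited work; the lag range is an assumption.

```python
import numpy as np

def ccf_max(profile_a, profile_b, max_lag=200):
    """Maximum normalized cross-correlation between two surface profiles.

    Values near 1 indicate strong topographic similarity; scanning over
    lags accommodates lateral misalignment of the two traces.
    """
    a = profile_a - profile_a.mean()
    b = profile_b - profile_b.mean()
    best = -1.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            x, y = a[lag:], b[:len(b) - lag]
        else:
            x, y = a[:lag], b[-lag:]
        n = min(len(x), len(y))
        x, y = x[:n], y[:n]
        denom = np.sqrt(np.sum(x * x) * np.sum(y * y))
        if denom > 0:
            best = max(best, float(np.sum(x * y) / denom))
    return best
```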
Untangling Autophagy Measurements: All Fluxed Up
Gottlieb, Roberta A.; Andres, Allen M.; Sin, Jon; Taylor, David
2015-01-01
Autophagy is an important physiological process in the heart, and alterations in autophagic activity can exacerbate or mitigate injury during various pathological processes. Methods to assess autophagy have changed rapidly as the field of research has expanded. As with any new field, methods and standards for data analysis and interpretation evolve as investigators acquire experience and insight. The purpose of this review is to summarize current methods to measure autophagy, selective mitochondrial autophagy (mitophagy), and autophagic flux. We will examine several published studies where confusion arose in data interpretation, in order to illustrate the challenges. Finally, we will discuss methods to assess autophagy in vivo and in patients. PMID:25634973
Single-scale renormalisation group improvement of multi-scale effective potentials
NASA Astrophysics Data System (ADS)
Chataignier, Leonardo; Prokopec, Tomislav; Schmidt, Michael G.; Świeżewska, Bogumiła
2018-03-01
We present a new method for renormalisation group improvement of the effective potential of a quantum field theory with an arbitrary number of scalar fields. The method amounts to solving the renormalisation group equation for the effective potential with the boundary conditions chosen on the hypersurface where quantum corrections vanish. This hypersurface is defined through a suitable choice of a field-dependent value for the renormalisation scale. The method can be applied to any order in perturbation theory and it is a generalisation of the standard procedure valid for the one-field case. In our method, however, the choice of the renormalisation scale does not eliminate individual logarithmic terms but rather the entire loop corrections to the effective potential. It allows us to evaluate the improved effective potential for arbitrary values of the scalar fields using the tree-level potential with running coupling constants as long as they remain perturbative. This opens the possibility of studying various applications which require an analysis of multi-field effective potentials across different energy scales. In particular, the issue of stability of the scalar potential can be easily studied beyond tree level.
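Schematically, and in notation assumed here rather than quoted from the paper, the procedure amounts to solving the RG equation for the effective potential with the boundary condition imposed on a field-dependent scale:

```latex
% The effective potential obeys the RG equation
\begin{equation}
  \Big(\mu\,\partial_\mu
     + \beta_i(\lambda)\,\partial_{\lambda_i}
     - \gamma_a(\lambda)\,\phi_a\,\partial_{\phi_a}\Big)\,
  V_{\mathrm{eff}}(\mu;\lambda_i;\phi_a) = 0 ,
\end{equation}
% with the boundary condition chosen on the hypersurface
% \mu = \mu^*(\phi) where the loop corrections vanish, so that
\begin{equation}
  V_{\mathrm{eff}}\big(\mu^*(\phi);\lambda_i(\mu^*);\phi_a(\mu^*)\big)
  = V_{\mathrm{tree}}\big(\lambda_i(\mu^*),\phi_a(\mu^*)\big).
\end{equation}
```

The improved potential is then the tree-level potential evaluated with couplings run to the field-dependent scale, which is what makes the multi-field case tractable.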
Merging for Particle-Mesh Complex Particle Kinetic Modeling of the Multiple Plasma Beams
NASA Technical Reports Server (NTRS)
Lipatov, Alexander S.
2011-01-01
We suggest a merging procedure for the Particle-Mesh Complex Particle Kinetic (PMCPK) method in the case of inter-penetrating flows (multiple plasma beams). We examine the standard particle-in-cell (PIC) and the PMCPK methods in the case of particle acceleration by shock surfing for a wide range of the control numerical parameters. The plasma dynamics is described by a hybrid (particle-ion-fluid-electron) model. Note that a mesh may be needed if the electromagnetic field is computed in the model. Our calculations use specified, time-independent electromagnetic fields for the shock, rather than self-consistently generated fields. While the particle-mesh method is a well-verified approach, the CPK method seems to be a good approach for multiscale modeling that includes multiple regions with various particle/fluid plasma behavior. However, the CPK method still needs verification for studying basic plasma phenomena: particle heating and acceleration by collisionless shocks, magnetic field reconnection, beam dynamics, etc.
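A minimal sketch of a pairwise macro-particle merge that conserves weight and momentum exactly is shown below. This is a generic construction under assumed conventions, not necessarily the paper's procedure; kinetic energy is not exactly conserved, which is why merging is normally restricted to particles that are close in phase space and belong to the same beam.

```python
import numpy as np

def merge_pair(w1, v1, w2, v2):
    """Merge two macro-particles from the same beam into one.

    Conserves weight (mass/charge) and momentum exactly; the merged
    velocity is the weight-averaged velocity of the pair.
    """
    w = w1 + w2
    v = (w1 * np.asarray(v1, float) + w2 * np.asarray(v2, float)) / w
    return w, v

# Example: two nearby particles of the same beam collapse into one.
w, v = merge_pair(1.0, [0.9, 0.0, 0.1], 1.5, [1.1, 0.0, -0.1])
print(w, v)
```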
NASA Astrophysics Data System (ADS)
Salmasi, Mahbod; Potter, Michael
2018-07-01
Maxwell's equations are discretized on a Face-Centered Cubic (FCC) lattice instead of a simple cubic one as an alternative to the standard Yee method, for improvements in the numerical dispersion characteristics and grid isotropy of the method. Explicit update equations, numerical dispersion expressions, and stability criteria are derived. Also, several tools available to the standard Yee method, such as PEC/PMC boundary conditions, absorbing boundary conditions, and the scattered-field formulation, are extended to this method as well. A comparison between the FCC and the Yee formulations is made, showing that the FCC method exhibits better dispersion compared to its Yee counterpart. Simulations are provided to demonstrate both the accuracy and grid isotropy improvement of the method.
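For reference, the numerical dispersion relation of the standard Yee scheme on a cubic grid is the baseline the FCC formulation is measured against:

```latex
% Standard-Yee numerical dispersion relation on a cubic grid
% (spatial steps \Delta x, \Delta y, \Delta z, time step \Delta t):
\begin{equation}
  \left[\frac{1}{c\,\Delta t}\sin\!\frac{\omega\,\Delta t}{2}\right]^2
  = \left[\frac{1}{\Delta x}\sin\!\frac{k_x \Delta x}{2}\right]^2
  + \left[\frac{1}{\Delta y}\sin\!\frac{k_y \Delta y}{2}\right]^2
  + \left[\frac{1}{\Delta z}\sin\!\frac{k_z \Delta z}{2}\right]^2 .
\end{equation}
```

Its anisotropy (the dependence of the numerical phase velocity on propagation direction) is exactly what a more isotropic lattice such as the FCC aims to reduce.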
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pogorelov, A. A.; Suslov, I. M.
2008-06-15
New estimates of the critical exponents have been obtained from the field-theoretical renormalization group using a new method for summing divergent series. The results almost coincide with the central values obtained by Le Guillou and Zinn-Justin (the so-called standard values), but have lower uncertainty. It has been shown that usual field-theoretical estimates implicitly imply the smoothness of the coefficient functions. The last assumption is open for discussion in view of the existence of the oscillating contribution to the coefficient functions. The appropriate interpretation of the last contribution is necessary both for the estimation of the systematic errors of the standard values and for a further increase in accuracy.
Study Methods to Standardize Thermography NDE
NASA Technical Reports Server (NTRS)
Walker, James L.; Workman, Gary L.
1998-01-01
The purpose of this work is to develop thermographic inspection methods and standards for use in evaluating structural composites and aerospace hardware. Qualification techniques and calibration methods are investigated to standardize the thermographic method for use in the field. Along with the inspections of test standards and structural hardware, support hardware is designed and fabricated to aid in the thermographic process. Also, a standard operating procedure is developed for performing inspections with the Bales Thermal Image Processor (TIP). Inspections are performed on a broad range of structural composites. These materials include various graphite/epoxies, graphite/cyanide-ester, graphite/silicon-carbide, graphite phenolic and Kevlar/epoxy. Also metal honeycomb (titanium and aluminum faceplates over an aluminum honeycomb core) structures are investigated. Various structural shapes are investigated, and the thickness of the structures varies from as few as 3 plies to as many as 80 plies. Special emphasis is placed on characterizing defects in attachment holes and bondlines, in addition to those resulting from impact damage and the inclusion of foreign matter. Image processing through statistical analysis and digital filtering is investigated to enhance the quality of the NDE thermal images and to quantify them when necessary.
NASA Astrophysics Data System (ADS)
Pacheco-Sanchez, Anibal; Claus, Martin; Mothes, Sven; Schröter, Michael
2016-11-01
Three different methods for the extraction of the contact resistance, based on the well-known transfer length method (TLM) and two variants of the Y-function method, have been applied to simulation and experimental data of short- and long-channel CNTFETs. While TLM requires special CNT test structures, standard electrical device characteristics are sufficient for the Y-function methods. The methods have been applied to CNTFETs with low and high channel resistance. It turned out that the standard Y-function method fails to deliver the correct contact resistance when the channel resistance is relatively high compared to the contact resistances. A physics-based validation is also given for the application of these methods, based on applying traditional Si MOSFET theory to quasi-ballistic CNTFETs.
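A sketch of the standard Y-function extraction, following the traditional Si MOSFET theory the abstract invokes, is given below. The transfer curve is synthetic and all device values are assumed; the final step takes the intrinsic mobility attenuation as negligible, a common simplification.

```python
import numpy as np

# Hypothetical transfer curve I_D(V_GS) at fixed low V_DS, with a series
# (contact) resistance folded in via the usual mobility-attenuation form.
beta_t, vt_t, rsd_t, vds = 2e-3, 0.45, 500.0, 0.05   # assumed device values
vgs = np.linspace(0.55, 1.2, 14)
vgt = vgs - vt_t
i_d = beta_t * vds * vgt / (1.0 + beta_t * rsd_t * vgt)

gm = np.gradient(i_d, vgs)                # transconductance dI_D/dV_GS
y = i_d / np.sqrt(gm)                     # Y-function: 1st-order immune to R_SD
slope, intercept = np.polyfit(vgs, y, 1)  # Y = sqrt(beta*V_DS)*(V_GS - V_T)
beta, vt = slope**2 / vds, -intercept / slope

# Attenuation factor theta_eff = theta_0 + beta*R_SD; with theta_0 ~ 0 the
# contact resistance follows directly.
theta = (beta * vds * (vgs - vt) / i_d - 1.0) / (vgs - vt)
print(f"beta={beta:.2e} A/V^2  V_T={vt:.3f} V  R_SD~{theta.mean()/beta:.0f} ohm")
```

The failure mode the abstract reports corresponds to the case where the channel term dominates the measured resistance, so the extracted theta is no longer controlled by the contact contribution.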
Processing of Nanostructured Devices Using Microfabrication Techniques
NASA Technical Reports Server (NTRS)
Xu, Jennifer C (Inventor); Kulis, Michael H (Inventor); Berger, Gordon M (Inventor); Hunter, Gary W (Inventor); Vander Wal, Randall L (Inventor); Evans, Laura J (Inventor)
2014-01-01
Systems and methods that incorporate nanostructures into microdevices are discussed herein. These systems and methods can allow for standard microfabrication techniques to be extended to the field of nanotechnology. Sensors incorporating nanostructures can be fabricated as described herein, and can be used to reliably detect a range of gases with high response.
Monitoring colony-level effects of sublethal pesticide exposure on honey bees
USDA-ARS?s Scientific Manuscript database
The effects of sublethal pesticide exposure to honey bee colonies may be significant but difficult to detect in the field using standard visual assessment methods. Here we describe methods to measure the quantities of adult bees, brood and food resources by weighing hives and hive parts, by photogra...
Martin, Jeffrey D.; Norman, Julia E.; Sandstrom, Mark W.; Rose, Claire E.
2017-09-06
U.S. Geological Survey monitoring programs extensively used two analytical methods, gas chromatography/mass spectrometry and liquid chromatography/mass spectrometry, to measure pesticides in filtered water samples during 1992–2012. In October 2012, the monitoring programs began using direct aqueous-injection liquid chromatography tandem mass spectrometry as a new analytical method for pesticides. The change in analytical methods, however, has the potential to inadvertently introduce bias in analysis of datasets that span the change. A field study was designed to document performance of the new method in a variety of stream-water matrices and to quantify any potential changes in measurement bias or variability that could be attributed to changes in analytical methods. The goals of the field study were to (1) summarize performance (bias and variability of pesticide recovery) of the new method in a variety of stream-water matrices; (2) compare performance of the new method in laboratory blank water (laboratory reagent spikes) to that in a variety of stream-water matrices; (3) compare performance (analytical recovery) of the new method to that of the old methods in a variety of stream-water matrices; (4) compare pesticide detections and concentrations measured by the new method to those of the old methods in a variety of stream-water matrices; (5) compare contamination measured by field blank water samples in old and new methods; (6) summarize the variability of pesticide detections and concentrations measured by the new method in field duplicate water samples; and (7) identify matrix characteristics of environmental water samples that adversely influence the performance of the new method. Stream-water samples and a variety of field quality-control samples were collected at 48 sites in the U.S. Geological Survey monitoring networks during June–September 2012. Stream sites were located across the United States and included sites in agricultural and urban land-use settings, as well as sites on major rivers. The results of the field study identified several challenges for the analysis and interpretation of data analyzed by both old and new methods, particularly when data span the change in methods and are combined for analysis of temporal trends in water quality. The main challenges identified are large (greater than 30 percent), statistically significant differences in analytical recovery, detection capability, and (or) measured concentrations for selected pesticides. These challenges are documented and discussed, but specific guidance or statistical methods to resolve these differences in methods are beyond the scope of the report. The results of the field study indicate that the implications of the change in analytical methods must be assessed individually for each pesticide and method. Understanding the possible causes of the systematic differences in concentrations between methods that remain after recovery adjustment might be necessary to determine how to account for the differences in data analysis. Because recoveries for each method are independently determined from separate reference standards and spiking solutions, the differences might be due to an error in one of the reference standards or solutions or some other basic aspect of standard procedure in the analytical process. Further investigation of the possible causes is needed, which will lead to specific decisions on how to compensate for these differences in concentrations in data analysis.
In the event that further investigations do not provide insight into the causes of systematic differences in concentrations between methods, the authors recommend continuing to collect and analyze paired environmental water samples by both old and new methods. This effort should be targeted to seasons, sites, and expected concentrations to supplement those concentrations already assessed and to compare the ongoing analytical recovery of old and new methods to those observed in the summer and fall of 2012.
NASA Technical Reports Server (NTRS)
McFarland, Shane M.
2010-01-01
Field of view has always been a design feature paramount to helmet design, and in particular spacesuit design, where the helmet must provide an adequate field of view for a large range of activities, environments, and body positions. Historically, suited field of view has been evaluated either qualitatively in parallel with design or quantitatively using various test methods and protocols. As such, oftentimes legacy suit field of view information is either ambiguous for lack of supporting data or contradictory to other field of view tests performed with different subjects and test methods. This paper serves to document a new field of view testing method that is more reliable and repeatable than its predecessors. It borrows heavily from standard ophthalmologic field of vision tests such as the Goldmann kinetic perimetry test, but is designed specifically for evaluating field of view of a spacesuit helmet. In this test, four suits utilizing three different helmet designs were tested for field of view. Not only do these tests provide more reliable field of view data for legacy and prototype helmet designs, they also provide insight into how helmet design impacts field of view and what this means for the Constellation Project spacesuit helmet, which must meet stringent field of view requirements that are more generous to the crewmember than legacy designs.
Standard model effective field theory: Integrating out neutralinos and charginos in the MSSM
NASA Astrophysics Data System (ADS)
Han, Huayong; Huo, Ran; Jiang, Minyuan; Shu, Jing
2018-05-01
We apply the covariant derivative expansion method to integrate out the neutralinos and charginos in the minimal supersymmetric Standard Model. The results are presented as a set of pure bosonic dimension-six operators in the Standard Model effective field theory. Nontrivial chirality dependence in the fermionic covariant derivative expansion is discussed carefully. The results are checked by computing the hγγ effective coupling and the electroweak oblique parameters, both with the Standard Model effective field theory using our effective operators and by direct loop calculation. In the global fitting of the projected constraints from proposed lepton colliders, special phenomenological emphasis is placed on the gaugino mass unification scenario (M2≃2 M1) and the anomaly mediation scenario (M1≃3.3 M2). These results show that precision measurement experiments at future lepton colliders will play a very useful complementary role in probing the electroweakino sector, in particular filling the gap of the soft lepton plus missing ET channel search left by traditional colliders, where the neutralino as the lightest supersymmetric particle is nearly degenerate with the next-to-lightest chargino/neutralino.
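Schematically, and with coefficients and the operator basis omitted (they depend on the spin and statistics of the heavy fields; the notation here is assumed, not quoted from the paper), the one-loop matching performed by such an expansion takes the form:

```latex
% X collectively denotes the heavy electroweakino block with mass scale M_X,
% and U(x) the field-dependent mass terms it acquires from the light fields:
\begin{equation}
  \Delta S_{\mathrm{eff}} \;\propto\;
  \mathrm{Tr}\,\ln\!\big(-D^2 + M_X^2 + U(x)\big)
  \;\xrightarrow{\ \mathrm{CDE}\ }\;
  \sum_i \frac{c_i}{M_X^2}\,\mathcal{O}_i^{(6)} + \cdots
\end{equation}
```

Expanding the functional trace in covariant derivatives and in U(x) organizes the result directly into gauge-invariant dimension-six operators with calculable Wilson coefficients c_i.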
2012-01-01
Background: The traditional Korean medical diagnoses employ pattern identification (PI), a diagnostic system that entails the comprehensive analysis of symptoms and signs. The PI needs to be standardized due to its ambiguity. Therefore, this study was performed to establish standard indicators of the PI for stroke through the traditional Korean medical literature, expert consensus and a clinical field test. Methods: We sorted out stroke patterns with an expert committee organized by the Korean Institute of Oriental Medicine. The expert committee composed a document for a standardized pattern of identification for stroke based on the traditional Korean medical literature, and we evaluated the clinical significance of the document through a field test. Results: We established five stroke patterns from the traditional Korean medical literature and extracted 117 indicators required for diagnosis. The indicators were evaluated by a field test and verified by the expert committee. Conclusions: This study sought to develop indicators of PI based on the traditional Korean medical literature. This process contributed to the standardization of traditional Korean medical diagnoses. PMID:22410195
Superstatistics model for T₂ distribution in NMR experiments on porous media.
Correia, M D; Souza, A M; Sinnecker, J P; Sarthour, R S; Santos, B C C; Trevizan, W; Oliveira, I S
2014-07-01
We propose analytical functions for the T2 distribution to describe transverse relaxation in high- and low-field NMR experiments on porous media. The method is based on a superstatistics theory and allows one to find the mean and standard deviation of T2 directly from measurements. It is an alternative to multiexponential models for inversion of data decay in NMR experiments. We exemplify the method with q-exponential functions and χ²-distributions to describe, respectively, the data decay and the T2 distribution in high-field experiments on fully water-saturated glass-microsphere bed packs and outcrop sedimentary rocks, and in noisy low-field experiments on rocks. The method is general and can also be applied to biological systems. Copyright © 2014 Elsevier Inc. All rights reserved.
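A minimal sketch of fitting a q-exponential decay to echo data follows; the signal is synthetic and all parameter values are assumed, so it illustrates the functional form rather than the paper's datasets.

```python
import numpy as np
from scipy.optimize import curve_fit

def q_exponential(t, m0, t2, q):
    """q-exponential decay; reduces to m0*exp(-t/t2) as q -> 1."""
    base = 1.0 - (1.0 - q) * t / t2
    return m0 * np.power(np.clip(base, 1e-12, None), 1.0 / (1.0 - q))

# Hypothetical CPMG echo decay (arbitrary units) with additive noise.
t = np.linspace(0, 400e-3, 200)
rng = np.random.default_rng(1)
data = q_exponential(t, 1.0, 80e-3, 1.2) + rng.normal(0, 0.01, t.size)

# Bounds keep q > 1, the regime where the decay is a power-law-like mixture
# of exponentials (i.e., a nontrivial T2 distribution).
popt, _ = curve_fit(q_exponential, t, data, p0=[1.0, 50e-3, 1.1],
                    bounds=([0.1, 1e-3, 1.001], [10.0, 1.0, 2.0]))
print("M0, T2, q =", popt)
```

In the superstatistics picture, the fitted q encodes the spread of relaxation rates across the pore ensemble, which is how the mean and standard deviation of T2 become available directly from the fit.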
Comparison of direct measurement methods for headset noise exposure in the workplace
Nassrallah, Flora G.; Giguère, Christian; Dajani, Hilmi R.; Ellaham, Nicolas N.
2016-01-01
The measurement of noise exposure from communication headsets poses a methodological challenge. Although several standards describe methods for general noise measurements in occupational settings, these are not directly applicable to noise assessments under communication headsets. For measurements under occluded ears, specialized methods have been specified by the International Standards Organization (ISO 11904) such as the microphone in a real ear and manikin techniques. Simpler methods have also been proposed in some national standards such as the use of general purpose artificial ears and simulators in conjunction with single number corrections to convert measurements to the equivalent diffuse field. However, little is known about the measurement agreement between these various methods and the acoustic manikin technique. Twelve experts positioned circum-aural, supra-aural and insert communication headsets on four different measurement setups (Type 1, Type 2, Type 3.3 artificial ears, and acoustic manikin). Fit-refit measurements of four audio communication signals were taken under quiet laboratory conditions. Data were transformed into equivalent diffuse-field sound levels using third-octave procedures. Results indicate that the Type 1 artificial ear is not suited for the measurement of sound exposure under communication headsets, while Type 2 and Type 3.3 artificial ears are in good agreement with the acoustic manikin technique. Single number corrections were found to introduce a large measurement uncertainty, making the use of the third-octave transformation preferable. PMID:26960783
Variational optical flow computation in real time.
Bruhn, Andrés; Weickert, Joachim; Feddern, Christian; Kohlberger, Timo; Schnörr, Christoph
2005-05-01
This paper investigates the usefulness of bidirectional multigrid methods for variational optical flow computations. Although these numerical schemes are among the fastest methods for solving equation systems, they are rarely applied in the field of computer vision. We demonstrate how to employ these numerical methods for the treatment of variational optical flow formulations and show that the efficiency of this approach even allows for real-time performance on standard PCs. As a representative of variational optical flow methods, we consider the recently introduced combined local-global method. It can be considered as a noise-robust generalization of the Horn and Schunck technique. We present a decoupled, as well as a coupled, version of the classical Gauss-Seidel solver, and we develop several multigrid implementations based on a discretization coarse grid approximation. In contrast to standard bidirectional multigrid algorithms, we take advantage of intergrid transfer operators that allow for nondyadic grid hierarchies. As a consequence, no restrictions concerning the image size or the number of traversed levels have to be imposed. In the experimental section, we juxtapose the developed multigrid schemes and demonstrate their superior performance when compared to unidirectional multigrid methods and nonhierarchical solvers. For the well-known 316 x 252 Yosemite sequence, we succeeded in computing the complete set of dense flow fields in three quarters of a second on a 3.06-GHz Pentium4 PC. This corresponds to a frame rate of 18 flow fields per second, which outperforms the widely used Gauss-Seidel method by almost three orders of magnitude.
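For orientation, the nonhierarchical baseline that such multigrid schemes accelerate is the classical Horn-Schunck fixed-point iteration; the compact sketch below uses a simplified discretization and illustrative parameters, not the paper's combined local-global data term.

```python
import numpy as np

def horn_schunck(I1, I2, alpha=15.0, n_iter=200):
    """Baseline Horn-Schunck optical flow via fixed-point iterations.

    I1, I2: consecutive grayscale frames (2-D arrays). alpha is the
    regularization weight. Returns the dense flow components (u, v).
    """
    I1, I2 = I1.astype(float), I2.astype(float)
    Ix, Iy = np.gradient(I1)          # spatial derivatives (np.gradient axis order)
    It = I2 - I1                      # temporal derivative
    u = np.zeros_like(Ix)
    v = np.zeros_like(Iy)
    avg = lambda f: (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                     np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 4.0
    for _ in range(n_iter):
        ubar, vbar = avg(u), avg(v)
        num = Ix * ubar + Iy * vbar + It
        den = alpha**2 + Ix**2 + Iy**2
        u = ubar - Ix * num / den     # classical Horn-Schunck update
        v = vbar - Iy * num / den
    return u, v
```

A multigrid solver replaces the slow global propagation of these pointwise updates with corrections computed on coarser grids, which is where the orders-of-magnitude speedup reported above comes from.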
Jets and Metastability in Quantum Mechanics and Quantum Field Theory
NASA Astrophysics Data System (ADS)
Farhi, David
I give a high-level overview of the state of particle physics in the introduction, accessible without any background in the field. I discuss improvements of theoretical and statistical methods used for collider physics. These include telescoping jets, a statistical method which was claimed to allow jet searches to increase their sensitivity by considering several interpretations of each event. We find that indeed multiple interpretations extend the power of searches, for both simple counting experiments and powerful multivariate fitting experiments, at least for h → bb̄ at the LHC. Then I propose a method for automation of background calculations using SCET by appropriating the technology of Monte Carlo generators such as MadGraph. In the third chapter I change gears and discuss the future of the universe. It has long been known that our pocket of the standard model is unstable; there is a lower-energy configuration in a remote part of the configuration space, to which our universe will, eventually, decay. While the timescales involved are on the order of 10^400 years (depending on how exactly one counts) and thus of no immediate worry, I discuss the shortcomings of the standard methods and propose a more physically motivated derivation for the decay rate. I then make various observations about the structure of decays in quantum field theory.
NASA Astrophysics Data System (ADS)
Wang, Tao; Zhou, Guoqing; Wang, Jianzhou; Zhou, Lei
2018-03-01
The artificial ground freezing method (AGF) is widely used in civil and mining engineering, and the thermal regime of frozen soil around the freezing pipe affects the safety of design and construction. The thermal parameters can be truly random due to the heterogeneity of soil properties, which leads to randomness in the thermal regime of frozen soil around the freezing pipe. The purpose of this paper is to study the one-dimensional (1D) random thermal regime problem on the basis of a stochastic analysis model and the Monte Carlo (MC) method. Considering the uncertain thermal parameters of frozen soil as random variables, stochastic processes and random fields, the corresponding stochastic thermal regimes of frozen soil around a single freezing pipe are obtained and analyzed. Taking the variability of each stochastic parameter into account individually, the influence of each stochastic thermal parameter on the stochastic thermal regime is investigated. The results show that the mean temperatures of frozen soil around the single freezing pipe obtained with the three analogy methods are the same, while the standard deviations differ. The distributions of the standard deviation differ greatly at different radial coordinates, and the larger standard deviations occur mainly in the phase change area. The computed data with the random variable and stochastic process methods differ considerably from the measured data, whereas the computed data with the random field method agree well with the measured data. Each uncertain thermal parameter has a different effect on the standard deviation of the frozen soil temperature around the single freezing pipe. These results can provide a theoretical basis for the design and construction of AGF.
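A minimal Monte Carlo sketch of the random-field case illustrates how parameter randomness propagates into temperature statistics. It uses 1D explicit finite differences with no phase change, a correlated log-normal diffusivity field, and placeholder values throughout; it is not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(42)
nx, nt, n_mc = 50, 1000, 200
L, t_end = 1.0, 0.25                      # domain (m) and time (s), illustrative
dx, dt = L / (nx - 1), t_end / nt
x = np.linspace(0, L, nx)

# Correlated log-normal random field for thermal diffusivity: exponential
# covariance with correlation length lc, sampled via Cholesky factorization.
lc, sigma = 0.2, 0.3
cov = np.exp(-np.abs(x[:, None] - x[None, :]) / lc)
chol = np.linalg.cholesky(cov + 1e-10 * np.eye(nx))

temps = np.empty((n_mc, nx))
for m in range(n_mc):
    a = 0.1 * np.exp(sigma * (chol @ rng.standard_normal(nx)))  # diffusivity field
    T = np.full(nx, 10.0)                 # initial ground temperature (deg C)
    for _ in range(nt):
        T[0] = -30.0                      # freezing-pipe wall (deg C)
        T[-1] = 10.0                      # far-field boundary
        T[1:-1] += a[1:-1] * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    temps[m] = T

print("mean T near pipe:", temps.mean(axis=0)[:5])
print("std  T near pipe:", temps.std(axis=0)[:5])
```

Replacing the random field by a single random variable (one draw applied uniformly in space) or an uncorrelated process changes the spatial pattern of the standard deviation, which is the comparison the paper carries out.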
Receptoral and Neural Aliasing.
1993-01-30
standard psychophysical methods. Stereoscopic capability makes VisionWorks ideal for investigating and simulating strabismus and amblyopia, or developing... amblyopia. Electrophysiological and psychophysical response to spatio-temporal and novel stimuli for investigation of visual field deficits
ERIC Educational Resources Information Center
Trief, Ellen; Cascella, Paul W.; Bruce, Susan M.
2013-01-01
Introduction: The study reported in this article tracked the learning rate of 43 children with multiple disabilities and visual impairments who had limited to no verbal language across seven months of classroom-based intervention using a standardized set of tangible symbols. Methods: The participants were introduced to tangible symbols on a daily…
Mark Hitchcock; Alan Ager
1992-01-01
National Forests in the Pacific Northwest Region have incorporated elk habitat standards into Forest plans to ensure that elk habitat objectives are met on multiple use land allocations. Many Forests have employed versions of the habitat effectiveness index (HEI) as a standard method to evaluate habitat. Field application of the HEI model unfortunately is a formidable...
ERIC Educational Resources Information Center
Laurson, Kelly R.; Welk, Gregory J.; Marton, Orsolya; Kaj, Mónika; Csányi, Tamás
2015-01-01
Purpose: This study examined agreement between all 3 standards (as well as relative diagnostic associations with metabolic syndrome) using a representative sample of youth from the Hungarian National Youth Fitness Study. Method: Body mass index (BMI) was assessed in a field sample of 2,352 adolescents (ages 10-18.5 years) and metabolic syndrome…
Identification of the Key Fields and Their Key Technical Points of Oncology by Patent Analysis
Zhang, Ting; Chen, Juan; Jia, Xiaofeng
2015-01-01
Background: This paper aims to identify the key fields and their key technical points of oncology by patent analysis. Methodology/Principal Findings: Patents of oncology applied from 2006 to 2012 were searched in the Thomson Innovation database. The key fields and their key technical points were determined by analyzing the Derwent Classification (DC) and the International Patent Classification (IPC), respectively. Patent applications in the top ten DC occupied 80% of all the patent applications of oncology, which were the ten fields of oncology to be analyzed. The number of patent applications in these ten fields of oncology was standardized based on patent applications of oncology from 2006 to 2012. For each field, standardization was conducted separately for each of the seven years (2006–2012) and the mean of the seven standardized values was calculated to reflect the relative amount of patent applications in that field; meanwhile, regression analysis using time (year) and the standardized values of patent applications in seven years (2006–2012) was conducted so as to evaluate the trend of patent applications in each field. Two-dimensional quadrant analysis, together with the professional knowledge of oncology, was taken into consideration in determining the key fields of oncology. The fields located in the quadrant with high relative amount or increasing trend of patent applications are identified as key ones. By using the same method, the key technical points in each key field were identified. Altogether 116,820 patents of oncology applied from 2006 to 2012 were retrieved, and four key fields with twenty-nine key technical points were identified, including "natural products and polymers" with nine key technical points, "fermentation industry" with twelve, "electrical medical equipment" with four, and "diagnosis, surgery" with four. Conclusions/Significance: The results of this study could provide guidance on the development direction of oncology, and also help researchers broaden innovative ideas and discover new technological opportunities. PMID:26599967
Verbal autopsy: current practices and challenges.
Soleman, Nadia; Chandramohan, Daniel; Shibuya, Kenji
2006-01-01
Cause-of-death data derived from verbal autopsy (VA) are increasingly used for health planning, priority setting, monitoring and evaluation in countries with incomplete or no vital registration systems. In some regions of the world it is the only method available to obtain estimates on the distribution of causes of death. Currently, the VA method is routinely used at over 35 sites, mainly in Africa and Asia. In this paper, we present an overview of the VA process and the results of a review of VA tools and operating procedures used at demographic surveillance sites and sample vital registration systems. We asked for information from 36 field sites about field-operating procedures and reviewed 18 verbal autopsy questionnaires and 10 cause-of-death lists used in 13 countries. The format and content of VA questionnaires, field-operating procedures, cause-of-death lists and the procedures to derive causes of death from VA process varied substantially among sites. We discuss the consequences of using varied methods and conclude that the VA tools and procedures must be standardized and reliable in order to make accurate national and international comparisons of VA data. We also highlight further steps needed in the development of a standard VA process. PMID:16583084
Jeffrey, N.; Abdalla, F. B.; Lahav, O.; ...
2018-05-15
Mapping the underlying density field, including non-visible dark matter, using weak gravitational lensing measurements is now a standard tool in cosmology. Due to its importance to the science results of current and upcoming surveys, the quality of the convergence reconstruction methods should be well understood. We compare three different mass map reconstruction methods: Kaiser-Squires (KS), Wiener filter, and GLIMPSE. KS is a direct inversion method, taking no account of survey masks or noise. The Wiener filter is well motivated for Gaussian density fields in a Bayesian framework. The GLIMPSE method uses sparsity, with the aim of reconstructing non-linearities in the density field. We compare these methods with a series of tests on the public Dark Energy Survey (DES) Science Verification (SV) data and on realistic DES simulations. The Wiener filter and GLIMPSE methods offer substantial improvement on the standard smoothed KS with a range of metrics. For both the Wiener filter and GLIMPSE convergence reconstructions we present a 12% improvement in Pearson correlation with the underlying truth from simulations. To compare the mapping methods' abilities to find mass peaks, we measure the difference between peak counts from simulated ΛCDM shear catalogues and catalogues with no mass fluctuations. This is a standard data vector when inferring cosmology from peak statistics. The maximum signal-to-noise value of these peak statistic data vectors was increased by a factor of 3.5 for the Wiener filter and by a factor of 9 using GLIMPSE. With simulations we measure the reconstruction of the harmonic phases, showing that the concentration of the phase residuals is improved 17% by GLIMPSE and 18% by the Wiener filter. We show that the correlation between the reconstructions from data and the foreground redMaPPer clusters is increased 18% by the Wiener filter and 32% by GLIMPSE. [Abridged]
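For reference, the direct KS inversion is only a few lines on a flat-sky grid; this generic sketch (not the DES pipeline) also makes clear why it ignores masks and noise.

```python
import numpy as np

def kaiser_squires(gamma1, gamma2):
    """Direct KS inversion: shear maps -> convergence (E-mode) map.

    Flat-sky FFT estimator. It applies a fixed Fourier-space kernel to the
    shear with no treatment of masks or noise, which is exactly where the
    Wiener filter and sparsity-based methods can improve on it.
    """
    ny, nx = gamma1.shape
    k1 = np.fft.fftfreq(nx)[None, :]
    k2 = np.fft.fftfreq(ny)[:, None]
    k_sq = k1**2 + k2**2
    k_sq[0, 0] = 1.0                      # avoid division by zero at k = 0
    g1, g2 = np.fft.fft2(gamma1), np.fft.fft2(gamma2)
    kappa_hat = ((k1**2 - k2**2) * g1 + 2 * k1 * k2 * g2) / k_sq
    kappa_hat[0, 0] = 0.0                 # mean convergence is unconstrained
    # Real part is the E-mode convergence; the imaginary part (B mode)
    # is a useful null test and is discarded here.
    return np.real(np.fft.ifft2(kappa_hat))
```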
NASA Astrophysics Data System (ADS)
Klees, R.; Slobbe, D. C.; Farahani, H. H.
2018-03-01
The posed question arises for instance in regional gravity field modelling using weighted least-squares techniques if the gravity field functionals are synthesised from the spherical harmonic coefficients of a satellite-only global gravity model (GGM), and are used as one of the noisy datasets. The associated noise covariance matrix appeared to be extremely ill-conditioned, with a singular value spectrum that decayed gradually to zero without any noticeable gap. We analysed three methods to deal with the ill-conditioned noise covariance matrix: Tikhonov regularisation of the noise covariance matrix in combination with the standard formula for the weighted least-squares estimator, a formula for the weighted least-squares estimator which does not involve the inverse noise covariance matrix, and an estimator based on Rao's unified theory of least-squares. Our analysis was based on a numerical experiment involving a set of height anomalies synthesised from the GGM GOCO05s, which is provided with a full noise covariance matrix. We showed that the three estimators perform similarly, provided that the two regularisation parameters involved in each method were chosen properly. As standard regularisation parameter choice rules do not apply here, we suggested a new parameter choice rule and demonstrated its performance. Using this rule, we found that the differences between the three least-squares estimates were within noise. For the standard formulation of the weighted least-squares estimator with a regularised noise covariance matrix, this required an exceptionally strong regularisation, much larger than one would expect from the condition number of the noise covariance matrix. The preferred method is the inversion-free formulation of the weighted least-squares estimator, because of its simplicity with respect to the choice of the two regularisation parameters.
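A minimal numerical sketch of the first of the three estimators, Tikhonov regularisation of the covariance followed by standard weighted least squares, is given below. The data are synthetic and the regularisation parameter alpha is illustrative; the inversion-free variant discussed above is not shown.

```python
import numpy as np

def wls_regularised_cov(A, y, C, alpha):
    """Weighted least squares with a Tikhonov-regularised noise covariance.

    C is ill-conditioned, so it is replaced by C + alpha*I before the usual
    estimator x = (A' C^-1 A)^-1 A' C^-1 y is evaluated (via solves, not
    explicit inverses).
    """
    C_reg = C + alpha * np.eye(C.shape[0])
    Ci_A = np.linalg.solve(C_reg, A)
    Ci_y = np.linalg.solve(C_reg, y)
    return np.linalg.solve(A.T @ Ci_A, A.T @ Ci_y)

# Tiny synthetic example with a gradually decaying singular value spectrum.
rng = np.random.default_rng(3)
A = rng.normal(size=(30, 4))
x_true = np.array([1.0, -2.0, 0.5, 3.0])
U, _ = np.linalg.qr(rng.normal(size=(30, 30)))
vals = np.logspace(0, -14, 30)                 # condition number ~ 1e14
C = U @ np.diag(vals) @ U.T
noise = U @ (np.sqrt(vals) * rng.standard_normal(30))
print(wls_regularised_cov(A, A @ x_true + noise, C, alpha=1e-8))
```

The abstract's observation is that for this formulation alpha must be chosen far larger than the conditioning alone would suggest, which is what motivates the inversion-free alternative.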
Martin, Jeffrey D.
2002-01-01
Correlation analysis indicates that for most pesticides and concentrations, pooled estimates of relative standard deviation rather than pooled estimates of standard deviation should be used to estimate variability, because pooled estimates of relative standard deviation are less affected by heteroscedasticity. The median pooled relative standard deviation was calculated for all pesticides to summarize the typical variability of pesticide data collected for the NAWQA Program. The median pooled relative standard deviation was 15 percent at concentrations less than 0.01 micrograms per liter (µg/L), 13 percent at concentrations near 0.01 µg/L, 12 percent at concentrations near 0.1 µg/L, 7.9 percent at concentrations near 1 µg/L, and 2.7 percent at concentrations greater than 5 µg/L. Pooled estimates of standard deviation or relative standard deviation presented in this report are larger than estimates based on averages, medians, smooths, or regression of the individual measurements of standard deviation or relative standard deviation from field replicates. Pooled estimates, however, are the preferred method for characterizing variability because they provide unbiased estimates of the variability of the population. Assessments of variability based on standard deviation (rather than variance) underestimate the true variability of the population. Because pooled estimates of variability are larger than estimates based on other approaches, users of estimates of variability must be cognizant of the approach used to obtain the estimate and must use caution in the comparison of estimates based on different approaches.
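For duplicate (pair) field replicates, a pooled relative standard deviation can be computed as below. The sketch and its sample concentrations are illustrative, not NAWQA data; pooling averages the squared RSDs, so it estimates population variability rather than averaging individual RSDs.

```python
import numpy as np

def pooled_rsd(replicate_pairs):
    """Pooled relative standard deviation (percent) from replicate pairs.

    Each pair (c1, c2) yields one standard-deviation estimate with one
    degree of freedom: s = |c1 - c2| / sqrt(2). Pooling takes the square
    root of the mean squared RSD over all pairs in a concentration range.
    """
    rsd_sq = []
    for c1, c2 in replicate_pairs:
        mean = (c1 + c2) / 2.0
        if mean <= 0:
            continue                       # skip non-detect pairs
        s = abs(c1 - c2) / np.sqrt(2.0)    # std dev of a duplicate pair
        rsd_sq.append((s / mean) ** 2)
    return 100.0 * np.sqrt(np.mean(rsd_sq))

pairs = [(0.095, 0.105), (0.11, 0.09), (0.012, 0.010)]  # ug/L, hypothetical
print(f"pooled RSD = {pooled_rsd(pairs):.1f} %")
```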
NASA Astrophysics Data System (ADS)
Abe, M.; Prasannaa, V. S.; Das, B. P.
2018-03-01
Heavy polar diatomic molecules are currently among the most promising probes of fundamental physics. Constraining the electric dipole moment of the electron (eEDM), in order to explore physics beyond the standard model, requires a synergy of molecular experiment and theory. Recent advances in experiment in this field have motivated us to implement a finite-field coupled-cluster (FFCC) approach. This work has distinct advantages over the theoretical methods that we had used earlier in the analysis of eEDM searches. We used relativistic FFCC to calculate molecular properties of interest to eEDM experiments, that is, the effective electric field (Eeff) and the permanent electric dipole moment (PDM). We theoretically determine these quantities for the alkaline-earth monofluorides (AEMs), the mercury monohalides (HgX), and PbF. The latter two systems, as well as BaF from the AEMs, are of interest to eEDM searches. We also report the calculation of the properties using a relativistic finite-field coupled-cluster approach with single, double, and partial triple excitations, which is considered to be the gold standard of electronic structure calculations. We also present a detailed error estimate, including errors that stem from our choice of basis sets and higher-order correlation effects.
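The finite-field idea itself reduces to a numerical energy derivative: the property of interest is the first derivative of the total energy with respect to the strength of a perturbation added to the Hamiltonian. The toy sketch below uses a purely illustrative quadratic energy stand-in, not an actual coupled-cluster energy.

```python
def finite_field_derivative(energy_fn, lam=1e-5):
    """First-order property as a central-difference energy derivative.

    In a finite-field coupled-cluster calculation, energy_fn(lam) would be
    the CC energy computed with the perturbing operator (e.g. the eEDM
    operator or the dipole operator) added with strength lam.
    """
    return (energy_fn(+lam) - energy_fn(-lam)) / (2.0 * lam)

# Toy stand-in: E(lam) = E0 + p*lam + q*lam^2, so the property is p = 2.5.
toy_energy = lambda lam: -100.0 + 2.5 * lam + 7.0 * lam**2
print(finite_field_derivative(toy_energy))   # ~2.5
```

The central difference cancels the quadratic (second-order) term, so the field strength lam only needs to be small enough to suppress cubic contributions without amplifying numerical noise.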
Standards for reporting fish toxicity tests
Cope, O.B.
1961-01-01
The growing impetus of studies on fish and pesticides focuses attention on the need for standardized reporting procedures. Good methods have been developed for laboratory and field procedures in testing programs and in statistical features of assay experiments; and improvements are being made on methods of collecting and preserving fish, invertebrates, and other materials exposed to economic poisons. On the other hand, the reporting of toxicity data in a complete manner has lagged behind, and today's literature is little improved over yesterday's with regard to completeness and susceptibility to interpretation.
Alles, E. J.; Zhu, Y.; van Dongen, K. W. A.; McGough, R. J.
2013-01-01
The fast nearfield method, when combined with time-space decomposition, is a rapid and accurate approach for calculating transient nearfield pressures generated by ultrasound transducers. However, the standard time-space decomposition approach is only applicable to certain analytical representations of the temporal transducer surface velocity that, when applied to the fast nearfield method, are expressed as a finite sum of products of separate temporal and spatial terms. To extend time-space decomposition such that accelerated transient field simulations are enabled in the nearfield for an arbitrary transducer surface velocity, a new transient simulation method, frequency domain time-space decomposition (FDTSD), is derived. With this method, the temporal transducer surface velocity is transformed into the frequency domain, and then each complex-valued term is processed separately. Further improvements are achieved by spectral clipping, which reduces the number of terms and the computation time. Trade-offs between speed and accuracy are established for FDTSD calculations, and pressure fields obtained with the FDTSD method for a circular transducer are compared to those obtained with Field II and the impulse response method. The FDTSD approach, when combined with the fast nearfield method and spectral clipping, consistently achieves smaller errors in less time and requires less memory than Field II or the impulse response method. PMID:23160476
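The spectral-clipping step can be sketched generically: transform the surface velocity to the frequency domain, keep only components above a fraction of the spectral peak, and superpose the corresponding single-frequency field solutions. The tone burst and threshold below are hypothetical, not values from the paper.

```python
import numpy as np

def clipped_spectrum(v, dt, threshold=1e-3):
    """Frequency-domain decomposition of a transducer surface velocity.

    Returns the retained (frequency, complex amplitude) pairs after
    spectral clipping: components below threshold * max are discarded,
    reducing the number of single-frequency field evaluations needed.
    """
    V = np.fft.rfft(v)
    f = np.fft.rfftfreq(len(v), dt)
    keep = np.abs(V) >= threshold * np.abs(V).max()
    return f[keep], V[keep]

# Hypothetical 3-cycle Hann-windowed tone burst at 1 MHz, sampled at 100 MHz.
fs, f0, cycles = 100e6, 1e6, 3
t = np.arange(int(cycles * fs / f0)) / fs
burst = np.sin(2 * np.pi * f0 * t) * np.hanning(t.size)
f_kept, V_kept = clipped_spectrum(burst, 1 / fs)
print(f"retained {f_kept.size} of {np.fft.rfftfreq(t.size, 1/fs).size} terms")
```

Because each retained term is a single-frequency problem, the total pressure is the superposition of a handful of per-frequency fast nearfield evaluations, which is where the speed-accuracy trade-off enters.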
NASA Astrophysics Data System (ADS)
Hikage, Chiaki; Koyama, Kazuya; Heavens, Alan
2017-08-01
We compute the power spectrum at one-loop order in standard perturbation theory for the matter density field to which a standard Lagrangian baryonic acoustic oscillation (BAO) reconstruction technique is applied. The BAO reconstruction method corrects the bulk motion associated with the gravitational evolution using the inverse Zel'dovich approximation (ZA) for the smoothed density field. We find that the overall amplitude of one-loop contributions in the matter power spectrum substantially decreases after reconstruction. The reconstructed power spectrum thereby approaches the initial linear spectrum when the smoothed density field is close enough to linear, i.e., the smoothing scale Rs≳10 h-1 Mpc . On smaller Rs, however, the deviation from the linear spectrum becomes significant on large scales (k ≲Rs-1 ) due to the nonlinearity in the smoothed density field, and the reconstruction is inaccurate. Compared with N-body simulations, we show that the reconstructed power spectrum at one-loop order agrees with simulations better than the unreconstructed power spectrum. We also calculate the tree-level bispectrum in standard perturbation theory to investigate non-Gaussianity in the reconstructed matter density field. We show that the amplitude of the bispectrum significantly decreases for small k after reconstruction and that the tree-level bispectrum agrees well with N-body results in the weakly nonlinear regime.
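A generic sketch of the reconstruction displacement on a gridded density field in a periodic box follows; the smoothing scale and sign conventions are assumed here, and redshift-space and bias terms are omitted.

```python
import numpy as np

def zeldovich_displacement(delta, box_size, r_s=10.0):
    """Estimate the Zel'dovich displacement from a gridded density contrast.

    Smooths delta with a Gaussian S(k) = exp(-k^2 r_s^2 / 2), then applies
    the inverse ZA: psi(k) = (i k / k^2) S(k) delta(k). In standard
    reconstruction, tracers are shifted by -psi to undo bulk motions.
    """
    n = delta.shape[0]                      # assumes a cubic grid
    k = 2 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k_sq = kx**2 + ky**2 + kz**2
    k_sq[0, 0, 0] = 1.0                     # avoid division by zero at k = 0
    d_k = np.fft.fftn(delta) * np.exp(-0.5 * k_sq * r_s**2)
    return [np.real(np.fft.ifftn(1j * ki / k_sq * d_k)) for ki in (kx, ky, kz)]

# Example on a random field (placeholder for an N-body density grid).
delta = 0.1 * np.random.default_rng(0).normal(size=(64, 64, 64))
psi_x, psi_y, psi_z = zeldovich_displacement(delta, box_size=500.0)
```

The smoothing scale r_s here plays the role of the Rs discussed above: too small a value makes the smoothed field nonlinear and degrades the reconstruction on large scales.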
Precise SAR measurements in the near-field of RF antenna systems
NASA Astrophysics Data System (ADS)
Hakim, Bandar M.
Wireless devices must meet specific radiation safety limits, and in order to assess the health effects of such devices, standard procedures are used in which standard phantoms, tissue-equivalent liquids, and miniature electric field probes are employed. The accuracy of such measurements depends on the precision in measuring the dielectric properties of the tissue-equivalent liquids and the associated calibrations of the electric-field probes. This thesis describes work on the theoretical modeling and experimental measurement of the complex permittivity of tissue-equivalent liquids, and the associated calibration of miniature electric-field probes. The measurement method is based on measurements of the field attenuation factor and power reflection coefficient of a tissue-equivalent sample. A novel method, to the best of the author's knowledge, for determining the dielectric properties and probe calibration factors is described and validated. The measurement system is validated using saline at different concentrations, and measurements of complex permittivity and calibration factors have been made on tissue-equivalent liquids at 900 MHz and 1800 MHz. Uncertainty analyses have been conducted to study the measurement system sensitivity. Using the same waveguide to measure tissue-equivalent permittivity and to calibrate e-field probes eliminates a source of uncertainty associated with using two different measurement systems. The measurement system was used to test GSM cell-phones at 900 MHz and 1800 MHz for Specific Absorption Rate (SAR) compliance using a Specific Anthropomorphic Mannequin (SAM) phantom.
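The quantity underlying such probe measurements has a standard definition, which connects the measured internal field directly to the liquid's dielectric properties:

```latex
% SAR in terms of the internal (rms) electric field E measured by the probe:
\begin{equation}
  \mathrm{SAR} \;=\; \frac{\sigma\,|E|^{2}}{\rho}
  \;=\; \frac{\omega\,\varepsilon_0\,\varepsilon''\,|E|^{2}}{\rho},
\end{equation}
% where sigma is the conductivity of the tissue-equivalent liquid,
% epsilon'' the imaginary part of its relative permittivity, and
% rho its mass density.
```

This is why errors in the measured complex permittivity of the liquid propagate directly into the reported SAR, motivating the single-waveguide approach described above.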
NASA Astrophysics Data System (ADS)
Hardiyanto, M.; Ermawaty, I. R.
2018-01-01
We present an experimental investigation of muon-hadron tunneling chains with new methods for the ThxDUO2 nanostructure, based on Josephson's tunneling and the Abrikosov-Balseiro-Russel (ABR) formulation, with a quantum quadrupole interacting with a strongly localized, high gyro-magnetic optical field as encountered in high-resolution near-field optical microscopy for a 1.2 nm lambda-function. The strong gradients of these localized gyro-magnetic fields suggest that higher-order multipolar interactions will affect the standard magnetic quadrupole transition rates, at 1.8 × 10³ Ci/mm fuel energy in the nuclear moderator pool, and the selection rules for the quantum dot. For muon-hadron absorption in Josephson's tunnelling quantum quadrupole in the strong confinement limit, we calculated the interband gyro-magnetic quadrupole absorption rate and the associated selection rules. We found that the magnetic quadrupole absorption rate is comparable with the absorption rate calculated in the gyro-magnetic dipole approximation of the ThxDUO2 nanomaterial structure. This implies that near-field optical techniques can extend the range of spectroscopic measurements, from 545 MHz at the quantum gyro-magnetic field up to a 561 MHz deployment quantum field at B around 455-485 tesla, beyond the standard dipole approximation. However, we also show that spatial resolution could be improved by the selective excitation of ABR-formulation quantum quadrupole transitions.
Johnston, Jennifer M.
2014-01-01
The majority of biological processes mediated by G Protein-Coupled Receptors (GPCRs) take place on timescales that are not conveniently accessible to standard molecular dynamics (MD) approaches, notwithstanding the current availability of specialized parallel computer architectures, and efficient simulation algorithms. Enhanced MD-based methods have started to assume an important role in the study of the rugged energy landscape of GPCRs by providing mechanistic details of complex receptor processes such as ligand recognition, activation, and oligomerization. We provide here an overview of these methods in their most recent application to the field. PMID:24158803
NASA Astrophysics Data System (ADS)
Yilmaz, Hasan
2016-03-01
Structured illumination enables high-resolution fluorescence imaging of nanostructures [1]. We demonstrate a new high-resolution fluorescence imaging method that uses a scattering layer with a high-index substrate as a solid immersion lens [2]. Random scattering of coherent light creates a speckle pattern with a very fine structure that illuminates the fluorescent nanospheres on the back surface of the high-index substrate. The speckle pattern is raster-scanned over the fluorescent nanospheres using a speckle correlation effect known as the optical memory effect. A series of standard-resolution fluorescence images, one for each speckle-pattern displacement, is recorded by an electron-multiplying CCD camera using a commercial microscope objective. We have developed a new phase-retrieval algorithm to reconstruct a high-resolution, wide-field image from several standard-resolution wide-field images. We have introduced the phase information of the Fourier components of the standard-resolution images as a new constraint in our algorithm, which discards ambiguities and therefore ensures convergence to a unique solution. We demonstrate two-dimensional fluorescence images of a collection of nanospheres with a deconvolved Abbe resolution of 116 nm and a field of view of 10 µm × 10 µm. Our method is robust against optical aberrations and stage drifts, and is therefore excellent for imaging nanostructures under ambient conditions. [1] M. G. L. Gustafsson, J. Microsc. 198, 82-87 (2000). [2] H. Yilmaz, E. G. van Putten, J. Bertolotti, A. Lagendijk, W. L. Vos, and A. P. Mosk, Optica 2, 424-429 (2015).
Informatics and Standards for Nanomedicine Technology
Thomas, Dennis G.; Klaessig, Fred; Harper, Stacey L.; Fritts, Martin; Hoover, Mark D.; Gaheen, Sharon; Stokes, Todd H.; Reznik-Zellen, Rebecca; Freund, Elaine T.; Klemm, Juli D.; Paik, David S.; Baker, Nathan A.
2011-01-01
There are several issues to be addressed concerning the management and effective use of information (or data) generated from nanotechnology studies in biomedical research and medicine. These data are large in volume, diverse in content, and are beset with gaps and ambiguities in the description and characterization of nanomaterials. In this work, we have reviewed three areas of nanomedicine informatics: information resources; taxonomies, controlled vocabularies, and ontologies; and information standards. Informatics methods and standards in each of these areas are critical for enabling collaboration, data sharing, unambiguous representation and interpretation of data, semantic (meaningful) search and integration of data, and for ensuring data quality, reliability, and reproducibility. In particular, we have considered four types of information standards in this review: standard characterization protocols, common terminology standards, minimum information standards, and standard data communication (exchange) formats. Currently, due to gaps and ambiguities in the data, it is also difficult to apply computational methods and machine learning techniques to analyze, interpret and recognize patterns in data that are high dimensional in nature, and to relate variations in nanomaterial properties to variations in their chemical composition, synthesis, characterization protocols, etc. Progress towards resolving the issues of information management in nanomedicine using the informatics methods and standards discussed in this review will be essential to the rapidly growing field of nanomedicine informatics. PMID:21721140
Assessment of Three “WHO” Patient Safety Solutions: Where Do We Stand and What Can We Do?
Banihashemi, Sheida; Hatam, Nahid; Zand, Farid; Kharazmi, Erfan; Nasimi, Soheila; Askarian, Mehrdad
2015-01-01
Background: Most medical errors are preventable. The aim of this study was to compare the current execution of the 3 patient safety solutions with WHO suggested actions and standards. Methods: Data collection forms and direct observation were used to determine the status of implementation of existing protocols, resources, and tools. Results: In the field of patient hand-over, there was no standardized approach. In the field of performing the correct procedure at the correct body site, there were no safety checklists, guidelines, or educational content for informing patients and their families about the procedure. In the field of hand hygiene (HH), although the availability of necessary resources was acceptable, the availability of promotional HH posters and reminders was substandard. Conclusions: There are some limitations of resources, protocols, and standard checklists in all three areas. We designed tools that will help both wards to improve patient safety through the implementation of adapted WHO suggested actions. PMID:26900434
Student Diversity Requires Different Approaches to College Teaching, Even in Math and Science.
ERIC Educational Resources Information Center
Nelson, Craig E.
1996-01-01
Asserts that traditional teaching methods are unintentionally biased towards the elite and against many non-traditional students. Outlines several easily accessible changes in teaching methods that have fostered dramatic changes in student performance with no change in standards. These approaches have proven effective even in the fields of…
NASA Astrophysics Data System (ADS)
He, Yang; Sun, Yajuan; Zhang, Ruili; Wang, Yulei; Liu, Jian; Qin, Hong
2016-09-01
We construct high-order symmetric volume-preserving methods for the relativistic dynamics of a charged particle by the splitting technique with processing. By expanding the phase space to include the time t, we give a more general construction of volume-preserving methods that can be applied to systems with time-dependent electromagnetic fields. The newly derived methods provide numerical solutions with good accuracy and conservation properties over long simulation times. Furthermore, because of the use of an accuracy-enhancing processing technique, the explicit methods attain high-order accuracy and are more efficient than methods derived from standard compositions. The results are verified by numerical experiments. Linear stability analysis of the methods shows that the high-order processed method allows a larger time step size in numerical integrations.
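As a point of reference for the simplest scheme in this family, below is a sketch of a second-order, volume-preserving push for a relativistic charged particle in given E and B fields: the classic Boris-type splitting, not the authors' high-order processed method. Units with c = 1 and u = γv are assumed, and boris_push and q_m (= q/m) are names chosen here.

import numpy as np

def boris_push(x, u, E, B, q_m, dt):
    # One step for dx/dt = u/gamma, du/dt = q_m*(E + (u/gamma) x B), with c = 1.
    u_minus = u + 0.5 * dt * q_m * E                  # first half electric kick
    gamma = np.sqrt(1.0 + np.dot(u_minus, u_minus))
    t = 0.5 * dt * q_m * B / gamma                    # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    u_prime = u_minus + np.cross(u_minus, t)
    u_plus = u_minus + np.cross(u_prime, s)           # norm-preserving magnetic rotation
    u_new = u_plus + 0.5 * dt * q_m * E               # second half electric kick
    x_new = x + dt * u_new / np.sqrt(1.0 + np.dot(u_new, u_new))
    return x_new, u_new

Each sub-step is a shear or a rotation in phase space, so the composition preserves phase-space volume; the processed high-order methods of the abstract build on compositions of pushes of this kind.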
A spectral k-means approach to bright-field cell image segmentation.
Bradbury, Laura; Wan, Justin W L
2010-01-01
Automatic segmentation of bright-field cell images is important to cell biologists but difficult to accomplish due to the complex nature of cells in bright-field images (poor contrast, broken halo, missing boundaries). Standard approaches such as level set segmentation and active contours work well for fluorescent images, where cells appear round, but become less effective when optical artifacts such as halos exist in bright-field images. In this paper, we present a robust segmentation method which combines spectral and k-means clustering techniques to locate cells in bright-field images. This approach models an image as a matrix graph and segments different regions of the image by computing the appropriate eigenvectors of the matrix graph and applying the k-means algorithm. We illustrate the effectiveness of the method with segmentation results for C2C12 (muscle) cells in bright-field images.
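A compact sketch of the general spectral-clustering-plus-k-means recipe the abstract describes, for a grayscale image with values in [0, 1]. This is illustrative only: the paper's affinity construction may differ, and sigma is a hypothetical smoothing parameter chosen here.

import numpy as np
from scipy import sparse
from scipy.sparse.linalg import eigsh
from sklearn.cluster import KMeans

def spectral_kmeans_segment(img, k=2, sigma=0.1):
    # Affinities between 4-neighbour pixels from intensity differences.
    h, w = img.shape
    idx = np.arange(h * w).reshape(h, w)
    flat = img.ravel().astype(float)
    rows, cols, vals = [], [], []
    for di, dj in ((0, 1), (1, 0)):                   # right and down neighbours
        a = idx[:h - di, :w - dj].ravel()
        b = idx[di:, dj:].ravel()
        wgt = np.exp(-(flat[a] - flat[b]) ** 2 / sigma ** 2)
        rows += [a, b]; cols += [b, a]; vals += [wgt, wgt]
    W = sparse.csr_matrix((np.concatenate(vals),
                           (np.concatenate(rows), np.concatenate(cols))),
                          shape=(h * w, h * w))
    L = sparse.diags(np.asarray(W.sum(axis=1)).ravel()) - W   # graph Laplacian
    _, vecs = eigsh(L, k=k, which='SM')               # k smallest eigenvectors
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(vecs)
    return labels.reshape(h, w)

Pixels that are strongly connected in the affinity graph land close together in the eigenvector embedding, where k-means can separate them even when raw intensity contrast is poor.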
Upcoming new international measurement standards in the field of building acoustics
NASA Astrophysics Data System (ADS)
Goydke, Hans
2002-11-01
The largely completed revision of most of the ISO measurement standards in building acoustics, initiated mainly by the European Commission's demand for harmonized standards, reinforced the insight that the main goal of avoiding trade barriers between countries can only be reached when the standards cover the field sufficiently and comprehensively, when they reflect the actual state of the art, and when they are sufficiently related to practice. In modern architecture one can observe rapid change in the use of building materials, for instance regarding the use of glass. Lightweight constructions as well as heavyweight building elements with additional linings are increasingly in common use, and unquestionably there are consequences to be considered regarding the ascertainment of sound insulation properties. Among other areas, international standardization is unsatisfactory regarding the assessment of noise in buildings from waste water installations, assessment in the low-frequency range and, in general, the expression of measurement uncertainty. Intensity measurements in building acoustics, rainfall noise assessment, estimation of sound insulation, impulse response measurement methods, and assessment of sound scattering are examples of upcoming standards.
Visualizing the deep end of sound: plotting multi-parameter results from infrasound data analysis
NASA Astrophysics Data System (ADS)
Perttu, A. B.; Taisne, B.
2016-12-01
Infrasound is sound below the threshold of human hearing: approximately 20 Hz. The field of infrasound research, like other waveform-based fields, relies on several standard processing methods and data visualizations, including waveform plots and spectrograms. The installation of the International Monitoring System (IMS) global network of infrasound arrays contributed to the resurgence of infrasound research. Array processing is an important method used in infrasound research; however, it produces data sets with a large number of parameters and requires innovative plotting techniques. The goal in designing new figures is to present easily comprehensible, information-rich plots through careful selection of data density and plotting methods.
1976-06-01
…the wastewater. Data obtained by the NWC-developed method of analysis and field equipment compare favorably with data obtained by a vapor… A microaliquot of standard PIM solution is then added to the cell solution and the procedure is repeated. This is known as the standard
Time-domain near-field/near-field transform with PWS operations
NASA Astrophysics Data System (ADS)
Ravelo, B.; Liu, Y.; Slama, J. Ben Hadj
2011-03-01
This article deals with the development of a computational method dedicated to extracting the transient EM near field at a given distance from given 2D data, for baseband applications up to the GHz range. As described in the methodological analysis, it is based on the use of the FFT combined with the plane wave spectrum (PWS) operation. In order to verify the efficiency of the introduced method, a radiating source formed by a combination of electric dipoles excited by a short-duration transient pulse current with a spectral bandwidth of about 5 GHz is considered. It is shown that, compared to the direct calculation, one obtains the same behavior of the magnetic near-field components Hx, Hy and Hz with the presented extraction method in planes placed at {3 mm, 8 mm, 13 mm} from the initial reference plane. To confirm the relevance of the proposed transform, validation against a standard commercial tool was performed. In the future, we envisage exploiting the proposed computation method to predict transient electromagnetic (EM) field emissions, notably in microwave electronic devices for EMC applications.
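The core PWS step is a frequency-domain propagation; a monochromatic sketch is below. For transient fields one would apply it per temporal-frequency bin after an FFT in time, as the abstract implies. pws_propagate and its arguments are names chosen here.

import numpy as np

def pws_propagate(field, dx, wavelength, dz):
    # Angular-spectrum (plane wave spectrum) propagation of a sampled
    # 2-D field map by a distance dz, on a uniform grid with spacing dx.
    k = 2.0 * np.pi / wavelength
    kx = 2.0 * np.pi * np.fft.fftfreq(field.shape[0], d=dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(field.shape[1], d=dx)
    KX, KY = np.meshgrid(kx, ky, indexing='ij')
    kz = np.sqrt((k ** 2 - KX ** 2 - KY ** 2).astype(complex))  # imaginary part -> evanescent decay
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dz))

Each plane-wave component simply acquires a phase exp(i kz dz); components beyond the propagation circle (kx² + ky² > k²) get an imaginary kz and decay exponentially, which is what limits how far a near-field map can be extrapolated.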
Magnetic field shift due to mechanical vibration in functional magnetic resonance imaging.
Foerster, Bernd U; Tomasi, Dardo; Caparelli, Elisabeth C
2005-11-01
Mechanical vibrations of the gradient coil system during readout in echo-planar imaging (EPI) can increase the temperature of the gradient system and alter the magnetic field distribution during functional magnetic resonance imaging (fMRI). This effect is enhanced by resonant vibration modes and results in apparent motion along the phase-encoding direction in fMRI studies. The magnetic field drift was quantified during EPI by monitoring the resonance frequency interleaved with the EPI acquisition, and a novel method is proposed to correct the apparent motion. Knowledge of the frequency drift over time was used to correct the phase of the k-space EPI dataset. Since the resonance frequency changes very slowly over time, two measurements of the resonance frequency, immediately before and after the EPI acquisition, are sufficient to remove the field drift effects from fMRI time series. The frequency drift correction method was tested in vivo and compared to the standard image realignment method. The proposed method efficiently corrects spurious motion due to magnetic field drifts during fMRI. (c) 2005 Wiley-Liss, Inc.
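A minimal sketch of this kind of correction, under assumptions the abstract does not spell out: a linear drift between the two frequency measurements and one k-space row per phase-encode line. correct_frequency_drift and its arguments are names chosen here, and the phase sign depends on the acquisition convention.

import numpy as np

def correct_frequency_drift(kspace, f_before, f_after, line_times):
    # kspace: (n_lines, n_samples) complex array, one row per phase-encode line.
    # f_before/f_after: resonance offsets (Hz) measured just before/after the scan.
    t = np.asarray(line_times, float)
    drift = f_before + (f_after - f_before) * (t - t[0]) / (t[-1] - t[0])
    phase = 2.0 * np.pi * np.cumsum(drift * np.gradient(t))   # integrated phase (rad)
    return kspace * np.exp(-1j * phase)[:, None]              # remove accrued phase per line

Because a linear phase ramp across phase-encode lines shifts the image along the phase-encoding direction, removing the accrued phase removes the apparent motion.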
Measurement of Antenna Bore-Sight Gain
NASA Technical Reports Server (NTRS)
Fortinberry, Jarrod; Shumpert, Thomas
2016-01-01
The absolute or free-field gain of a simple antenna can be approximated using standard antenna theory formulae; for a more accurate prediction, numerical methods may be employed to solve for antenna parameters, including gain. Both of these methods result in reasonable estimates, but in practice antenna gain is usually verified and documented via measurements and calibration. In this paper, a relatively simple, low-cost, yet effective means of determining the bore-sight free-field gain of a VHF/UHF antenna is proposed, using the Brewster angle relationship.
Divergence Free High Order Filter Methods for Multiscale Non-ideal MHD Flows
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sjoegreen, Bjoern
2003-01-01
Low-dissipative high-order filter finite difference methods for long-time wave propagation of shock/turbulence/combustion compressible viscous MHD flows have been constructed. Several variants of the filter approach that cater to different flow types are proposed. These filters provide a natural and efficient way to minimize the divergence of the magnetic field (∇ · B) numerical error, in the sense that no standard divergence cleaning is required. For certain 2-D MHD test problems, divergence-free preservation of the magnetic fields by these filter schemes has been achieved.
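The quantity being minimized is easy to monitor. Below is a sketch of the standard central-difference diagnostic for a 2-D field on a uniform grid; this is only the monitoring step, not the filter schemes themselves.

import numpy as np

def div_B(Bx, By, dx, dy):
    # Central-difference divergence of a 2-D magnetic field on a uniform grid.
    return np.gradient(Bx, dx, axis=0) + np.gradient(By, dy, axis=1)

In a divergence-preserving scheme the maximum of |div_B| should stay at the level of the initial discretization error over the whole run, rather than growing until a cleaning step is required.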
High Order Filter Methods for the Non-ideal Compressible MHD Equations
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sjoegreen, Bjoern
2003-01-01
The generalization of a class of low-dissipative high order filter finite difference methods for long time wave propagation of shock/turbulence/combustion compressible viscous gas dynamic flows to compressible MHD equations for structured curvilinear grids has been achieved. The new scheme is shown to provide a natural and efficient way for the minimization of the divergence of the magnetic field numerical error. Standard divergence cleaning is not required by the present filter approach. For certain non-ideal MHD test cases, divergence free preservation of the magnetic fields has been achieved.
Divergence Free High Order Filter Methods for the Compressible MHD Equations
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sjoegreen, Bjoern
2003-01-01
The generalization of a class of low-dissipative high order filter finite difference methods for long time wave propagation of shock/turbulence/combustion compressible viscous gas dynamic flows to compressible MHD equations for structured curvilinear grids has been achieved. The new scheme is shown to provide a natural and efficient way for the minimization of the divergence of the magnetic field numerical error. Standard divergence cleaning is not required by the present filter approach. For certain MHD test cases, divergence free preservation of the magnetic fields has been achieved.
The reduced basis method for the electric field integral equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fares, M., E-mail: fares@cerfacs.f; Hesthaven, J.S., E-mail: Jan_Hesthaven@Brown.ed; Maday, Y., E-mail: maday@ann.jussieu.f
We introduce the reduced basis method (RBM) as an efficient tool for parametrized scattering problems in computational electromagnetics, for problems where field solutions are computed using a standard Boundary Element Method (BEM) for the parametrized electric field integral equation (EFIE). This combination enables an algorithmic cooperation which results in a two-step procedure. The first step consists of a computationally intense assembling of the reduced basis, which needs to be effected only once. In the second step, we compute output functionals of the solution, such as the Radar Cross Section (RCS), independently of the dimension of the discretization space, for many different parameter values in a many-query context at very little cost. Parameters include the wavenumber, the angle of the incident plane wave and its polarization.
Hello World! - Experiencing Usability Methods without Usability Expertise
NASA Astrophysics Data System (ADS)
Eriksson, Elina; Cajander, Åsa; Gulliksen, Jan
How do you do usability work when no usability expertise is available? What happens in an organization when system developers, with no previous HCI knowledge, start applying usability methods, and particularly field studies, after a 3-day course? In order to answer these questions, qualitative data were gathered through participatory observations, a feedback survey, field study documentation and interviews with 47 system developers from a public authority. Our results suggest that field studies enhance the developers' understanding of the user perspective and provide a more holistic overview of the use situation, but that some developers were unable to interpret their observations and see solutions to the users' problems. The field study method was much appreciated and has now become standard operating procedure within the organization. However, although field studies may be useful, they do not replace the need for usability professionals, as their knowledge is essential for more complex observations and analysis and for keeping the focus on usability.
NASA Astrophysics Data System (ADS)
Medjoubi, K.; Dawiec, A.
2017-12-01
A simple method is proposed in this work for quantitative evaluation of the quality of the threshold adjustment and the flat-field correction of Hybrid Photon Counting (HPC) pixel detectors. The approach is based on the Photon Transfer Curve (PTC), corresponding to the measurement of the standard deviation of the signal in flat-field images. Fixed pattern noise (FPN), easily identifiable in the curve, is linked to residual threshold dispersion, sensor inhomogeneity and remnant errors in flat-fielding techniques. The analytical expression of the signal-to-noise-ratio curve is developed for HPC detectors and successfully used as a fit function applied to experimental data obtained with the XPAD detector. The quantitative evaluation of the FPN, described by the photon response non-uniformity (PRNU), is measured for different configurations (threshold adjustment method and flat-fielding technique) and is shown to be usable for selecting the settings that give the best image quality from a commercial or R&D detector.
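A sketch of how one PTC point and a PRNU estimate can be computed from a pair of flat-field frames, using the usual mean-variance recipe; ptc_point is a name chosen here, and the exact estimator in the paper may differ.

import numpy as np

def ptc_point(flat1, flat2, dark=0.0):
    # Mean signal and noise decomposition from a pair of identical flat fields.
    s = 0.5 * (flat1.mean() + flat2.mean()) - dark
    temporal_var = np.var(flat1.astype(float) - flat2) / 2.0  # FPN cancels in the difference
    total_var = np.var(flat1.astype(float))                   # single-frame variance includes FPN
    fpn_var = max(total_var - temporal_var, 0.0)
    prnu = np.sqrt(fpn_var) / s                               # fixed-pattern (PRNU) fraction of signal
    return s, temporal_var, prnu

Sweeping the illumination level and plotting noise versus mean signal traces out the PTC; the point where the FPN term overtakes the photon-shot-noise term is the feature the paper uses to grade threshold adjustment and flat-fielding quality.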
Li, Yanqiu; Liu, Shi; Inaki, Schlaberg H.
2017-01-01
Accuracy and speed of algorithms play an important role in the reconstruction of temperature field measurements by acoustic tomography. Existing algorithms are based on static models which only consider the measurement information. A dynamic model of three-dimensional temperature reconstruction by acoustic tomography is established in this paper. A dynamic algorithm is proposed considering both acoustic measurement information and the dynamic evolution information of the temperature field. An objective function is built which fuses measurement information and the space constraint of the temperature field with its dynamic evolution information. Robust estimation is used to extend the objective function. The method combines a tunneling algorithm and a local minimization technique to solve the objective function. Numerical simulations show that the image quality and noise immunity of the dynamic reconstruction algorithm are better when compared with static algorithms such as least square method, algebraic reconstruction technique and standard Tikhonov regularization algorithms. An effective method is provided for temperature field reconstruction by acoustic tomography. PMID:28895930
NASA Astrophysics Data System (ADS)
Schießl, Stefan P.; Rother, Marcel; Lüttgens, Jan; Zaumseil, Jana
2017-11-01
The field-effect mobility is an important figure of merit for semiconductors such as random networks of single-walled carbon nanotubes (SWNTs). However, owing to their network properties and quantum capacitance, the standard models for field-effect transistors cannot be applied without modifications. Several different methods are used to determine the mobility, often with very different results. We fabricated and characterized field-effect transistors with different polymer-sorted, semiconducting SWNT network densities ranging from low (≈6 μm⁻¹) to densely packed quasi-monolayers (≈26 μm⁻¹), with a maximum on-conductance of 0.24 μS μm⁻¹, and compared four different techniques for evaluating the field-effect mobility. We demonstrate the limits and requirements of each method with regard to device layout and carrier accumulation. We find that techniques that take into account the measured capacitance of the active device give the most reliable mobility values. Finally, we compare our experimental results to a random-resistor-network model.
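For reference, capacitance-based extractions of the linear-regime mobility typically start from the standard transconductance relation, a textbook formula rather than one quoted from this abstract; L and W are channel length and width, and C' is the gate capacitance per unit area:

\mu_{\mathrm{lin}} = \frac{L}{W\,C'\,V_{\mathrm{DS}}}\,\frac{\partial I_{\mathrm{D}}}{\partial V_{\mathrm{G}}}

The abstract's point is that C' should come from a measurement on the active device itself, since the sparse network geometry and the quantum capacitance make the effective value smaller than a parallel-plate estimate would suggest, which otherwise inflates the extracted mobility.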
The joint use of the tangential electric field and surface Laplacian in EEG classification.
Carvalhaes, C G; de Barros, J Acacio; Perreau-Guimaraes, M; Suppes, P
2014-01-01
We investigate the joint use of the tangential electric field (EF) and the surface Laplacian (SL) derivation as a method to improve the classification of EEG signals. We considered five classification tasks to test the validity of such approach. In all five tasks, the joint use of the components of the EF and the SL outperformed the scalar potential. The smallest effect occurred in the classification of a mental task, wherein the average classification rate was improved by 0.5 standard deviations. The largest effect was obtained in the classification of visual stimuli and corresponded to an improvement of 2.1 standard deviations.
Development of wheelchair caster testing equipment and preliminary testing of caster models
Mhatre, Anand; Ott, Joseph
2017-01-01
Background: Because of the adverse environmental conditions present in less-resourced environments (LREs), the World Health Organization (WHO) has recommended that specialised wheelchair test methods may need to be developed to support product quality standards in these environments. A group of experts identified caster test methods as a high priority because of their common failure in LREs and the insufficiency of the existing test methods described in the International Organization for Standardization (ISO) Wheelchair Testing Standards (ISO 7176). Objectives: To develop and demonstrate the feasibility of a caster system test method. Method: Background literature and expert opinions were collected to identify existing caster test methods, caster failures common in LREs and environmental conditions present in LREs. Several conceptual designs for the caster testing method were developed, and through an iterative process using expert feedback, a final concept and design were developed and a prototype was fabricated. Feasibility tests were conducted by testing a series of caster systems from wheelchairs used in LREs, and failure modes were recorded and compared to anecdotal reports of field failures. Results: The new caster testing system was developed; it provides the flexibility to expose caster systems to typical conditions in LREs. Caster failures such as stem bolt fractures, fork fractures, bearing failures and tire cracking occurred during testing trials and are consistent with field failures. Conclusion: The new caster test system has the capability to incorporate the test factors that degrade caster quality in LREs. Future work includes developing and validating a testing protocol that reproduces the failure modes common during wheelchair use in LREs. PMID:29062762
Assessment of Proficiency and Competency in Laboratory Animal Biomethodologies
Clifford, Paula; Melfi, Natasha; Bogdanske, John; Johnson, Elizabeth J; Kehler, James; Baran, Szczepan W
2013-01-01
Personnel working with laboratory animals are required by laws and guidelines to be trained and qualified to perform biomethodologic procedures. The assessment of competency and proficiency is a vital component of a laboratory animal training program, because this process confirms that trainees have met the learning objectives for a particular procedure. The approach toward qualification assessment differs between organizations because laws and guidelines do not outline how the assessment should be performed or which methods and tools should be used. Assessment of clinical and surgical medicine has received considerable attention over the last few decades and has progressed from simple subjective methods to well-defined and objective methods of assessing competency. Although biomethodology competency and proficiency assessment is discussed in the literature, a standard and objective assessment method has not yet been developed. The development and implementation of an objective and standardized biomethodologic assessment program can serve as a tool to improve standards, ensure consistent training, and decrease research variability while ensuring animal welfare. Here we review the definition and goals of training and assessment, review assessment methods, and propose a method for developing a standard and objective assessment program for the laboratory animal science field, particularly training departments and IACUCs. PMID:24351758
Visual field defects after temporal lobe resection for epilepsy.
Steensberg, Alvilda T; Olsen, Ane Sophie; Litman, Minna; Jespersen, Bo; Kolko, Miriam; Pinborg, Lars H
2018-01-01
To determine visual field defects (VFDs) using methods of varying complexity and to compare results with subjective symptoms in a population of newly operated temporal lobe epilepsy patients. Forty patients were included in the study; two patients failed to perform VFD testing. Humphrey Field Analyzer (HFA) perimetry was used as the gold-standard test to detect VFDs. All patients performed a web-based visual field test called Damato Multifixation Campimetry Online (DMCO). A bedside confrontation visual field examination ad modum Donders was extracted from the medical records in 27/38 patients. All participants had a consultation with an ophthalmologist. A questionnaire described the subjective complaints. A VFD in the upper quadrant was demonstrated with HFA in 29 (76%) of the 38 patients after surgery. In the 27 patients tested ad modum Donders, the sensitivity for detecting a VFD was 13%. Eight patients (21%) had a severe VFD similar to a quadrantanopia, thus calling into question their permission to drive a car. In this group of patients, a VFD was demonstrated in one of five (sensitivity = 20%) ad modum Donders and in seven of eight (sensitivity = 88%) with DMCO. Subjective symptoms were reported by only 28% of the patients with a VFD and by two of eight (sensitivity = 25%) with a severe VFD. Most patients (86%) considered VFD information mandatory. VFDs continue to be a frequent adverse event after epilepsy surgery in the medial temporal lobe and may affect the permission to drive a car in at least one in five patients. Subjective symptoms and bedside visual field testing ad modum Donders are not sensitive enough to detect even a severe VFD. Newly developed web-based visual field test methods appear sensitive to a severe VFD, but perimetry remains the gold standard for determining whether the visual standards for driving are fulfilled. Patients consider VFD information mandatory. Copyright © 2017. Published by Elsevier Ltd.
Error analysis regarding the calculation of nonlinear force-free field
NASA Astrophysics Data System (ADS)
Liu, S.; Zhang, H. Q.; Su, J. T.
2012-02-01
Magnetic field extrapolation is an alternative method for studying chromospheric and coronal magnetic fields. In this paper, two semi-analytical solutions of force-free fields (Low and Lou in Astrophys. J. 352:343, 1990) have been used to study the errors of nonlinear force-free (NLFF) fields based on the force-free factor α. Three NLFF fields are extrapolated by the approximate vertical integration method (AVI; Song et al., Astrophys. J. 649:1084, 2006), the boundary integral equation method (BIE; Yan and Sakurai, Sol. Phys. 195:89, 2000) and the optimization method (Opt.; Wiegelmann, Sol. Phys. 219:87, 2004). Compared with the first semi-analytical field, it is found that the mean values of the absolute relative standard deviations (RSD) of α along field lines are about 0.96-1.19, 0.63-1.07 and 0.43-0.72 for the AVI, BIE and Opt. fields, respectively, while for the second semi-analytical field they are about 0.80-1.02, 0.67-1.34 and 0.33-0.55, respectively. For the analytical field, the calculation error of <|RSD|> is about 0.1-0.2. It is also found that RSD does not depend appreciably on the length of the field line. These results provide a basic estimate of the deviation of the extrapolated fields obtained by the proposed methods from the real force-free field.
Thermal x-ray diffraction and near-field phase contrast imaging
NASA Astrophysics Data System (ADS)
Li, Zheng; Classen, Anton; Peng, Tao; Medvedev, Nikita; Wang, Fenglin; Chapman, Henry N.; Shih, Yanhua
2017-10-01
Using higher-order coherence of thermal light sources, the resolution power of standard x-ray imaging techniques can be enhanced. In this work, we applied the higher-order measurement to far-field x-ray diffraction and near-field phase contrast imaging (PCI), in order to achieve superresolution in x-ray diffraction and obtain enhanced intensity contrast in PCI. The cost of implementing such schemes is minimal compared to the methods that achieve similar effects by using entangled x-ray photon pairs.
Thermal x-ray diffraction and near-field phase contrast imaging
Li, Zheng; Classen, Anton; Peng, Tao; ...
2017-12-27
Using higher-order coherence of thermal light sources, the resolution power of standard x-ray imaging techniques can be enhanced. In this work, we applied the higher-order measurement to far-field x-ray diffraction and near-field phase contrast imaging (PCI), in order to achieve superresolution in x-ray diffraction and obtain enhanced intensity contrast in PCI. The cost of implementing such schemes is minimal compared to the methods that achieve similar effects by using entangled x-ray photon pairs.
Enhancing Food Processing by Pulsed and High Voltage Electric Fields: Principles and Applications.
Wang, Qijun; Li, Yifei; Sun, Da-Wen; Zhu, Zhiwei
2018-02-02
Improvements in living standards result in a growing demand for food with high quality attributes, including freshness, nutrition and safety. However, current industrial processing relies on traditional thermal and chemical methods, such as sterilization and solvent extraction, which can negatively affect food quality and safety. Electric field (EF) technologies, comprising pulsed electric fields (PEFs) and high voltage electric fields (HVEFs), have been studied and developed for assisting and enhancing various food processes. In this review, the principles and applications of pulsed and high voltage electric fields are described in detail for a range of food processes, including microbial inactivation, component extraction, winemaking, thawing and drying, and freezing and enzymatic inactivation. Moreover, the advantages and limitations of electric field related technologies are discussed to foresee future developments in the food industry. This review demonstrates that electric field technology has great potential to enhance food processing by supplementing or replacing the conventional methods employed in different food manufacturing processes. Successful industrial applications of electric field treatments have been achieved in some areas, such as microbial inactivation and extraction. However, investigations of HVEFs are still at an early stage, and translating the technology into industrial applications needs further research effort.
Design and Control of Chemical Grouting : Volume 1 - Construction Control
DOT National Transportation Integrated Search
1983-04-01
This report presents the results of a laboratory and field research program investigating innovative methods for the design and control of chemical grouting in soils. Chemical grouting practice is reviewed and standard evaluation and measurement technique...
EPA Field Manual for Coral Reef Assessments
The Water Quality Research Program (WQRP) supports development of coral reef biological criteria. Research is focused on developing methods and tools to support implementation of legally defensible biological standards for maintaining biological integrity, which is protected by ...
Quantifying the heterogeneity of the tectonic stress field using borehole data
Schoenball, Martin; Davatzes, Nicholas C.
2017-01-01
The heterogeneity of the tectonic stress field is a fundamental property which influences earthquake dynamics and subsurface engineering. Self-similar scaling of stress heterogeneities is frequently assumed to explain characteristics of earthquakes such as the magnitude-frequency relation. However, observational evidence for such scaling of stress field heterogeneity is scarce. We analyze the local stress orientations using image logs of two closely spaced boreholes in the Coso Geothermal Field with sub-vertical and deviated trajectories, respectively, each spanning about 2 km in depth. Both the mean and the standard deviation of stress orientation indicators (borehole breakouts, drilling-induced fractures and petal-centerline fractures) determined from each borehole agree to the limit of the resolution of our method, although measurements at specific depths may not. We find that the standard deviation in these boreholes strongly depends on the interval length analyzed, generally increasing up to a wellbore log length of about 600 m and remaining constant for longer intervals. We find the same behavior in global data from the World Stress Map. This suggests that the standard deviation of stress indicators characterizes the heterogeneity of the tectonic stress field rather than the quality of the stress measurement. A large standard deviation of a stress measurement might be an expression of strong crustal heterogeneity rather than of an unreliable stress determination. Robust characterization of stress heterogeneity requires logs that sample stress indicators along a representative sample volume of at least 1 km.
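Because stress azimuths are axial quantities, defined modulo 180 degrees, the mean and standard deviation referred to above are normally computed on doubled angles; a sketch of that standard circular-statistics recipe (axial_mean_and_std is a name chosen here):

import numpy as np

def axial_mean_and_std(azimuths_deg):
    # Axial data: double the angles, average the unit vectors,
    # then halve the resulting mean direction and circular std.
    theta = np.deg2rad(2.0 * np.asarray(azimuths_deg, float))
    C, S = np.cos(theta).mean(), np.sin(theta).mean()
    R = np.hypot(C, S)                                # mean resultant length, in (0, 1]
    mean = 0.5 * np.rad2deg(np.arctan2(S, C)) % 180.0
    std = 0.5 * np.rad2deg(np.sqrt(-2.0 * np.log(R))) # circular standard deviation
    return mean, std

Computing this statistic over sliding windows of increasing length is one way to reproduce the interval-length dependence the abstract describes.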
Method for quick thermal tolerancing of optical systems
NASA Astrophysics Data System (ADS)
Werschnik, J.; Uhlendorf, K.
2016-09-01
Optical systems for lithography (projection lenses), inspection (micro-objectives) or laser material processing usually have tight specifications regarding focus and wave-front stability. The same is true for field-dependent properties: projection lenses in particular have tight specifications on field curvature, magnification and distortion. Unwanted heating from internal or external sources leads to undesired changes in the above properties. In this work we show an elegant and fast method to analyze the thermal sensitivity using ZEMAX. The key point of this method is using the thermal changes of the lens data from the multi-configuration editor as the starting point for a (standard) tolerance analysis. Knowing the sensitivity, we can either define requirements on the environment or use it to systematically improve the thermal behavior of the lens. We demonstrate this method for a typical projection lens for which we minimized the thermal field curvature.
Calculation of transonic flows using an extended integral equation method
NASA Technical Reports Server (NTRS)
Nixon, D.
1976-01-01
An extended integral equation method for transonic flows is developed. In the extended integral equation method velocities in the flow field are calculated in addition to values on the aerofoil surface, in contrast with the less accurate 'standard' integral equation method in which only surface velocities are calculated. The results obtained for aerofoils in subcritical flow and in supercritical flow when shock waves are present compare satisfactorily with the results of recent finite difference methods.
A simple web-based tool to compare freshwater fish data collected using AFS standard methods
Bonar, Scott A.; Mercado-Silva, Norman; Rahr, Matt; Torrey, Yuta T.; Cate, Averill
2016-01-01
The American Fisheries Society (AFS) recently published Standard Methods for Sampling North American Freshwater Fishes. Enlisting the expertise of 284 scientists from 107 organizations throughout Canada, Mexico, and the United States, this text was developed to facilitate comparisons of fish data across regions or time. Here we describe a user-friendly web tool that automates among-sample comparisons in individual fish condition, population length-frequency distributions, and catch per unit effort (CPUE) data collected using AFS standard methods. Currently, the web tool (1) provides instantaneous summaries of almost 4,000 data sets of condition, length frequency, and CPUE of common freshwater fishes collected using standard gears in 43 states and provinces; (2) is easily appended with new standardized field data to update subsequent queries and summaries; (3) compares fish data from a particular water body with continent, ecoregion, and state data summaries; and (4) provides additional information about AFS standard fish sampling including benefits, ongoing validation studies, and opportunities to comment on specific methods. The web tool—programmed in a PHP-based Drupal framework—was supported by several AFS Sections, agencies, and universities and is freely available from the AFS website and fisheriesstandardsampling.org. With widespread use, the online tool could become an important resource for fisheries biologists.
Electrostatic risk to reticles in the nanolithography era
NASA Astrophysics Data System (ADS)
Rider, Gavin C.
2016-04-01
Reticles can be damaged by electric field as well as by the conductive transfer of charge. As device feature sizes have moved from the micro- into the nano-regime, reticle sensitivity to electric field has been increasing owing to the physics of field induction. Hence, the predominant risk to production reticles today is from exposure to electric field. Measurements of electric field that illustrate the extreme risk faced by today's production reticles are presented. It is shown that some of the standard methods used for prevention of electrostatic discharge in semiconductor manufacturing, being based on controlling static charge and voltage, do not offer reticles adequate protection against electric field. In some cases, they actually increase the risk of reticle damage. Methodology developed specifically to protect reticles against electric field is required, which is described in SEMI Standard E163. Measurements are also presented showing that static dissipative plastic is not an ideal material to use for the construction of reticle pods as it both generates and transmits transient electric field. An appropriate combination of insulating material and metallic shielding is shown to provide the best electrostatic protection for reticles, with fail-safe protection only being possible if the reticle is fully shielded within a metal Faraday cage.
NASA Astrophysics Data System (ADS)
de Schryver, C.; Weithoffer, S.; Wasenmüller, U.; Wehn, N.
2012-09-01
Channel coding is a standard technique in all wireless communication systems. In addition to typically employed methods like convolutional coding, turbo coding or low-density parity-check (LDPC) coding, algebraic codes are used in many cases. For example, outer BCH coding is applied in the DVB-S2 standard for satellite TV broadcasting. A key operation for BCH and the related Reed-Solomon codes is multiplication in finite fields (Galois fields), where extension fields of prime fields are used. Many architectures for multiplication in finite fields have been published over the last decades. This paper examines in detail four different multiplier architectures that offer the potential for very high throughputs. We investigate the implementation performance of these multipliers on FPGA technology in the context of channel coding. We study the efficiency of the multipliers with respect to area, frequency and throughput, as well as configurability and scalability. The implementation data of the fully verified circuits are provided for a Xilinx Virtex-4 device after place and route.
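The core operation the surveyed architectures implement can be stated in a few lines. Below is a bit-serial shift-and-add multiplier for GF(2^8), illustrative only: DVB-S2's outer BCH code actually works in a larger extension field, and 0x11D is just one common irreducible polynomial.

def gf256_mul(a, b, poly=0x11D):
    # Shift-and-add multiplication in GF(2^8), reduced by an irreducible
    # polynomial (0x11D = x^8 + x^4 + x^3 + x^2 + 1); mirrors a bit-serial
    # hardware multiplier, one loop iteration per clock cycle.
    result = 0
    while b:
        if b & 1:
            result ^= a          # addition in GF(2) is XOR
        b >>= 1
        a <<= 1
        if a & 0x100:            # degree-8 overflow: reduce modulo poly
            a ^= poly
    return result

The fully parallel, digit-serial and composite-field designs compared in such papers are different unrollings and factorings of exactly this loop, trading FPGA area against clock frequency and throughput.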
NASA Technical Reports Server (NTRS)
Nguyen, Truong X.; Ely, Jay J.; Koppen, Sandra V.
2001-01-01
This paper describes the implementation of the mode-stirred method for susceptibility testing according to the current DO-160D standard. Test results on an Engine Data Processor using the implemented procedure, and comparisons with standard anechoic test results, are presented. The comparison experimentally shows that the susceptibility thresholds found with the mode-stirred method are consistently higher than those found in anechoic testing. This is consistent with the recent statistical finding by NIST that the current calibration procedure overstates field strength by a fixed amount. Once the test results are adjusted for this value, the agreement with the anechoic results is excellent. The results also show that the test method has excellent chamber-to-chamber repeatability. Several areas for improvement to the current procedure are also identified and implemented.
Duct Leakage Repeatability Testing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walker, Iain; Sherman, Max
2014-08-01
The purpose of this report is to evaluate the repeatability of the three most significant measurement techniques for duct leakage using data from the literature and recently obtained field data. We will also briefly discuss the first two factors. The main question to be answered by this study is whether differences in the repeatability of these test methods are sufficient to indicate that any of the methods is so poor that it should be excluded from consideration as an allowed procedure in codes and standards. The three duct leak measurement methods assessed in this report are the two duct pressurization methods that are commonly used by many practitioners and the DeltaQ technique; these are methods B, C and A, respectively, of the ASTM E1554 standard. Although it would be useful to evaluate other duct leak test methods, this study focused on those test methods that are commonly used and are required in various test standards, such as BPI (2010), RESNET (2014), ASHRAE 62.2 (2013), California Title 24 (CEC 2012), DOE Weatherization and many other energy efficiency programs.
Theory of wide-angle photometry from standard stars
NASA Technical Reports Server (NTRS)
Usher, Peter D.
1989-01-01
Wide angle celestial structures, such as bright comet tails and nearby galaxies and clusters of galaxies, rely on photographic methods for quantified morphology and photometry, primarily because electronic devices with comparable resolution and sky coverage are beyond current technological capability. The problem of the photometry of extended structures, and of how this problem may be overcome through calibration by photometric standard stars, is examined. The perfect properties of the ideal field of view are stated in the guise of a radiometric paraxial approximation, in the hope that fields of view of actual telescopes will conform. Fundamental radiometric concepts are worked through before the issue of atmospheric attenuation is addressed. The independence of observed atmospheric extinction and surface brightness leads off the quest for formal solutions to the problem of surface photometry. Methods and problems of solution are discussed. The spectre is confronted in the spirit of standard stars and shown to be chimerical in that light, provided certain rituals are adopted. After a brief discussion of Baker-Sampson polynomials and the vexing issue of saturation, a pursuit is made of actual numbers to be expected in real cases. While the numbers crunched are gathered ex nihilo, they demonstrate the feasibility of Newton's method in the solution of this overdetermined, nonlinear, least-squares, multiparametric, photometric problem.
Negeri, Zelalem F; Shaikh, Mateen; Beyene, Joseph
2018-05-11
Diagnostic or screening tests are widely used in medical fields to classify patients according to their disease status. Several statistical models for meta-analysis of diagnostic test accuracy studies have been developed to synthesize test sensitivity and specificity of a diagnostic test of interest. Because of the correlation between test sensitivity and specificity, modeling the two measures using a bivariate model is recommended. In this paper, we extend the current standard bivariate linear mixed model (LMM) by proposing two variance-stabilizing transformations: the arcsine square root and the Freeman-Tukey double arcsine transformation. We compared the performance of the proposed methods with the standard method through simulations using several performance measures. The simulation results showed that our proposed methods performed better than the standard LMM in terms of bias, root mean square error, and coverage probability in most of the scenarios, even when data were generated assuming the standard LMM. We also illustrated the methods using two real data sets. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
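For concreteness, the two variance-stabilizing transformations, applied to x_i events (e.g., true positives) out of n_i subjects, take the following standard forms (the Freeman-Tukey version is shown in one common parameterization):

y_i = \arcsin\sqrt{x_i/n_i},
\qquad
y_i^{\mathrm{FT}} = \tfrac{1}{2}\left[\arcsin\sqrt{\frac{x_i}{n_i+1}} + \arcsin\sqrt{\frac{x_i+1}{n_i+1}}\right]

Both map proportions to a scale on which the variance is approximately independent of the underlying probability, which is what lets the transformed sensitivities and specificities be modeled with a bivariate linear mixed model.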
Current status of antifungal susceptibility testing methods.
Arikan, Sevtap
2007-11-01
Antifungal susceptibility testing is a very dynamic field of medical mycology. Standardization of in vitro susceptibility tests by the Clinical and Laboratory Standards Institute (CLSI) and the European Committee on Antimicrobial Susceptibility Testing (EUCAST), and the current availability of reference methods, constitute the major milestones in the field. Based on the established minimum inhibitory concentration (MIC) breakpoints, it is now possible to determine the susceptibilities of Candida strains to fluconazole, itraconazole, voriconazole, and flucytosine. Moreover, the utility of fluconazole susceptibility tests as an adjunct in optimizing treatment of candidiasis has now been validated. While the MIC breakpoints and clinical significance of susceptibility testing for the remaining fungi and antifungal drugs remain unclear, modifications of the available methods as well as other methodologies are being intensively studied to overcome the present drawbacks and limitations. Among the other methods under investigation are Etest, colorimetric microdilution, agar dilution, determination of fungicidal activity, flow cytometry, and ergosterol quantitation. Etest offers the advantages of practical application and agreement rates with the reference methods that are frequently above acceptable limits. However, MIC breakpoints for Etest remain to be evaluated and established. The development of commercially available, standardized colorimetric panels based on CLSI method parameters has added more to the antifungal susceptibility testing armamentarium. Flow cytometry, on the other hand, appears to offer rapid susceptibility testing but requires specialized equipment and further evaluation for reproducibility and standardization. Ergosterol quantitation is another novel approach, which appears potentially beneficial particularly for discriminating azole-resistant isolates from heavy trailers. The method is still investigational and requires further study. Developments in the methodology and applications of antifungal susceptibility testing will hopefully provide enhanced utility in clinical guidance of antifungal therapy. However, particularly in the immunosuppressed host, in vitro susceptibility is and will remain only one of several factors that influence clinical outcome.
NASA Astrophysics Data System (ADS)
Anderson, K.; Dungan, J. L.
2008-12-01
One of the biggest challenges in the use of proximal remote sensing methods continues to be the accurate, reproducible characterisation of natural surface reflectance properties measured in the solar radiation environment. Complexities in such measurements arise from differences in instrument type, field of view, atmospheric conditions, solar illumination and measurement angles, and uncertainty in the calibration of the reference sources used. Three GER 1500 spectroradiometers were used to measure the reflectance of a short-sward grass canopy. A full laboratory assessment was first carried out to characterise instrument uncertainty. Wavelength-dependent patterns in noise-equivalent delta radiance (NEdL) were similar for all three instruments (less than 1 W m⁻² sr⁻¹ μm⁻¹ in the range 400-1000 nm). The spectroradiometers were then used in the field, each in a single-beam configuration and two in a dual-field-of-view configuration, to compare their field reproducibility with the laboratory measurements. Hemispherical-conical reflectance factors (HCRF) of a grass canopy (Pennisetum clandestinum) were collected under clear-sky conditions. Two measurement dates were used on which skies were clean (diffuse-to-global (DG) irradiance ratios < 0.13) and stable (standard deviation in DG < 0.001). Spectra were collected at nadir during the two-hour period spanning solar noon, with a 2° range in solar zenith angles. A reproducible method was used which enabled positioning of the instruments to within 1° precision in the azimuthal direction and with no movement in zenithal position. Ten measurements were taken with each sensor head from a calibrated optical-grade Spectralon panel (99% reflecting), the grass target, and a control surface (a grey, 75% reflecting Spectralon panel). After each sequence the sensor heads were changed. On each date, the sequence of measurements was repeated. Field results showed standard uncertainties (u) of less than 0.01 (SD in HCRF) for the grey panel and less than 0.015 for vegetation. The grey panel data showed a wavelength-dependent pattern similar to the laboratory NEdL trend, but subsequent propagation of the laboratory-derived NEdL through to a reflectance factor showed that the laboratory characterisation was unable to account for all of the uncertainty measured in the field. The estimate of u gained from field data therefore more closely represents the reproducibility of measurements where atmospheric, solar zenith and instrument-related uncertainties are combined. Results for vegetation u showed a stronger wavelength dependency, with higher standard uncertainties beyond the vegetation red edge than in visible wavelengths (maximum = 0.015 at 800 nm, and 0.004 at 550 nm). The results demonstrate that standard uncertainties of field reflectance data have a spectral dependence and exceed laboratory-derived estimates of instrument "noise". Uncertainty of this type must be taken into account when statistically testing for differences in field spectra. Improved reporting of standard uncertainties from field experiments will foster progress in remote sensing science.
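A sketch of the panel-based HCRF computation and the replicate-based standard uncertainty described above, assuming the usual ratio-of-readings estimator; the function and argument names are chosen here.

import numpy as np

def hcrf_and_uncertainty(target_dn, panel_dn, panel_reflectance=0.99):
    # target_dn, panel_dn: (n_repeats, n_bands) arrays of raw instrument readings
    # for the target and the calibrated reference panel, respectively.
    target_dn = np.asarray(target_dn, float)
    panel_dn = np.asarray(panel_dn, float)
    hcrf = panel_reflectance * target_dn / panel_dn   # per-replicate reflectance factor
    return hcrf.mean(axis=0), hcrf.std(axis=0, ddof=1)

The per-band sample standard deviation across replicates is the field estimate of u; comparing it band by band against the laboratory NEdL propagated to reflectance reveals the atmospheric and geometric contributions the laboratory characterisation cannot capture.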
The purpose of this SOP is to define the appropriate method for completing scannable forms generated by Teleform. The instructions describe methods of form completion and how to indicate that a response is not valid. Scannable Forms are used in the field and laboratory portion o...
Proof of factorization using background field method of QCD
NASA Astrophysics Data System (ADS)
Nayak, Gouranga C.
2010-02-01
The factorization theorem plays a central role at high-energy colliders in studies of standard model and beyond-standard-model physics. The proof of the factorization theorem was given by Collins, Soper and Sterman to all orders in perturbation theory using a diagrammatic approach. One might wonder whether the proof of the factorization theorem can be obtained through symmetry considerations at the lagrangian level. In this paper we provide such a proof.
Proof of factorization using background field method of QCD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nayak, Gouranga C.
The factorization theorem plays a central role at high-energy colliders in studies of standard model and beyond-standard-model physics. The proof of the factorization theorem was given by Collins, Soper and Sterman to all orders in perturbation theory using a diagrammatic approach. One might wonder whether the proof of the factorization theorem can be obtained through symmetry considerations at the lagrangian level. In this paper we provide such a proof.
Computer analysis of multicircuit shells of revolution by the field method
NASA Technical Reports Server (NTRS)
Cohen, G. A.
1975-01-01
The field method, presented previously for the solution of even-order linear boundary value problems defined on one-dimensional open branch domains, is extended to boundary value problems defined on one-dimensional domains containing circuits. This method converts the boundary value problem into two successive numerically stable initial value problems, which may be solved by standard forward integration techniques. In addition, a new method for the treatment of singular boundary conditions is presented. This method, which amounts to a partial interchange of the roles of force and displacement variables, is problem independent with respect to both accuracy and speed of execution. This method was implemented in a computer program to calculate the static response of ring stiffened orthotropic multicircuit shells of revolution to asymmetric loads. Solutions are presented for sample problems which illustrate the accuracy and efficiency of the method.
Field Testing of Compartmentalization Methods for Multifamily Construction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ueno, K.; Lstiburek, J. W.
2015-03-01
The 2012 International Energy Conservation Code (IECC) has an airtightness requirement of 3 air changes per hour at 50 Pascals test pressure (3 ACH50) for single-family and multifamily construction (in climate zones 3–8). The Leadership in Energy & Environmental Design certification program and ASHRAE Standard 189 have comparable compartmentalization requirements. ASHRAE Standard 62.2 will soon be responsible for all multifamily ventilation requirements (low rise and high rise); it has an exceptionally stringent compartmentalization requirement. These code and program requirements are driving the need for easier and more effective methods of compartmentalization in multifamily buildings.
NASA Technical Reports Server (NTRS)
Hughes, Vernon W.
1959-01-01
The use of a rotational state transition, as observed by the molecular beam electric resonance method, is discussed as a possible frequency standard, particularly in the millimeter wavelength range. As a promising example, the 100 kMc (100 GHz) transition between the J = 0 and J = 1 rotational states of Li⁶F¹⁹ is considered. The relative insensitivity of the transition frequency to external electric and magnetic fields and the low microwave power requirements appear favorable; the small fraction of the molecular beam that is in a single rotational state is a limiting factor.
Method matters: Understanding diagnostic reliability in DSM-IV and DSM-5.
Chmielewski, Michael; Clark, Lee Anna; Bagby, R Michael; Watson, David
2015-08-01
Diagnostic reliability is essential for the science and practice of psychology, in part because reliability is necessary for validity. Recently, the DSM-5 field trials documented lower diagnostic reliability than past field trials and the general research literature, resulting in substantial criticism of the DSM-5 diagnostic criteria. Rather than indicating specific problems with DSM-5, however, the field trials may have revealed long-standing diagnostic issues that have been hidden due to a reliance on audio/video recordings for estimating reliability. We estimated the reliability of DSM-IV diagnoses using both the standard audio-recording method and the test-retest method used in the DSM-5 field trials, in which different clinicians conduct separate interviews. Psychiatric patients (N = 339) were diagnosed using the SCID-I/P; 218 were diagnosed a second time by an independent interviewer. Diagnostic reliability using the audio-recording method (N = 49) was "good" to "excellent" (M κ = .80) and comparable to the DSM-IV field trials estimates. Reliability using the test-retest method (N = 218) was "poor" to "fair" (M κ = .47) and similar to the DSM-5 field trials' estimates. Despite low test-retest diagnostic reliability, self-reported symptoms were highly stable. Moreover, there was no association between change in self-report and change in diagnostic status. These results demonstrate the influence of method on estimates of diagnostic reliability. (c) 2015 APA, all rights reserved.
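For reference, the statistic reported throughout (e.g., M κ = .80) is Cohen's kappa, which corrects raw agreement for agreement expected by chance:

\kappa = \frac{p_o - p_e}{1 - p_e}

where p_o is the observed proportion of agreement between the two diagnostic ratings and p_e is the proportion of agreement expected by chance from the raters' marginal diagnosis rates. This chance correction is why test-retest designs, where each clinician's marginal rates can differ, tend to yield lower kappas than re-rating a single recorded interview.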
Laboratory and field based evaluation of chromatography ...
The Monitor for AeRosols and GAses in ambient air (MARGA) is an on-line ion-chromatography-based instrument designed for speciation of the inorganic gas and aerosol ammonium-nitrate-sulfate system. Previous work to characterize the performance of the MARGA has been based primarily on field comparison with other measurement methods to evaluate accuracy. While such studies are useful, the underlying reasons for disagreement among methods are not always clear. This study examines aspects of MARGA accuracy and precision specifically related to automated chromatography analysis. Using laboratory standards, analytical accuracy, precision, and method detection limits derived from the MARGA chromatography software are compared to an alternative software package (Chromeleon, Thermo Scientific Dionex). Field measurements are used to further evaluate instrument performance, including the MARGA's use of an internal LiBr standard to control accuracy. Using gas/aerosol ratios and aerosol neutralization state as a case study, the impact of chromatography on measurement error is assessed. The new generation of on-line chromatography-based gas and particle measurement systems has many advantages, including simultaneous analysis of multiple pollutants. The Monitor for Aerosols and Gases in Ambient Air (MARGA) is such an instrument and is used in North America, Europe, and Asia for atmospheric process studies as well as routine monitoring. While the instrument has been evaluat
Multistage morphological segmentation of bright-field and fluorescent microscopy images
NASA Astrophysics Data System (ADS)
Korzyńska, A.; Iwanowski, M.
2012-06-01
This paper describes the multistage morphological segmentation method (MSMA) for microscopic cell images. The proposed method enables the study of cell behaviour using a sequence of two types of microscopic images: bright field images and/or fluorescent images. It is based on two types of information: cell texture, coming from the bright field images, and the intensity of light emission produced by fluorescent markers. The method is dedicated to the segmentation of image sequences and is based on mathematical morphology supported by other image processing techniques. It allows cells to be detected in an image independently of their degree of flattening and of the structures that produce the texture, using synergistic information from the fluorescent light emission image as support. The MSMA method has been applied to images acquired during experiments on neural stem cells as well as to artificial images. To validate the method, two types of errors were considered: the error of cell area detection and the error of cell position, using artificial images as the "gold standard".
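As a rough illustration of the two-channel idea (texture from bright field, emission from fluorescence), the sketch below runs a standard morphological watershed from scikit-image on synthetic images. It is not the MSMA algorithm itself; all data and parameters are invented.

```python
# Minimal two-channel segmentation sketch: edge/texture map from the bright
# field image, seed markers from the fluorescence image, combined by a
# morphological watershed. Synthetic 64x64 images stand in for real data.
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import sobel, threshold_otsu
from skimage.segmentation import watershed

def segment_cells(bright_field, fluorescence):
    """Watershed segmentation: edges from bright field, seeds from fluorescence."""
    elevation = sobel(bright_field)                      # texture/edge map
    seeds = fluorescence > threshold_otsu(fluorescence)  # emission support info
    markers, _ = ndi.label(seeds)                        # one seed per blob
    return watershed(elevation, markers, mask=fluorescence > 0)

rng = np.random.default_rng(0)
yy, xx = np.mgrid[:64, :64]
cell = np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / 200.0)  # one synthetic cell
bf = cell + 0.05 * rng.standard_normal((64, 64))           # noisy bright field
fl = (cell > 0.4).astype(float)                            # fluorescent marker
print(np.unique(segment_cells(bf, fl)))
```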
Characterization of YBa2Cu3O7, including critical current density Jc, by trapped magnetic field
NASA Technical Reports Server (NTRS)
Chen, In-Gann; Liu, Jianxiong; Weinstein, Roy; Lau, Kwong
1992-01-01
Spatial distributions of persistent magnetic field trapped by sintered and melt-textured ceramic-type high-temperature superconductor (HTS) samples have been studied. The trapped field can be reproduced by a model of the current consisting of two components: (1) a surface current Js and (2) a uniform volume current Jv. This Js + Jv model gives a satisfactory account of the spatial distribution of the magnetic field trapped by different types of HTS samples. The magnetic moment can be calculated from the Js + Jv model, and the result agrees well with that measured by a standard vibrating sample magnetometer (VSM). As a consequence, Jc predicted by VSM methods agrees with Jc predicted from the Js + Jv model. The field mapping method described is also useful for revealing the granular structure of large HTS samples and regions of weak links.
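The Js + Jv picture can be illustrated numerically: the sketch below superposes the on-axis Biot-Savart field of loops representing a surface current and a uniform volume current in a disc-shaped sample. Geometry and current densities are invented toy values, not the paper's.

```python
# Toy illustration of the Js + Jv model: the on-axis field of a disc-shaped
# sample is built from loops at the outer radius (surface current Js, A/m)
# plus loops spread over all radii (uniform volume current Jv, A/m^2).
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def loop_bz(I, R, z):
    """On-axis field of a circular current loop (Biot-Savart result)."""
    return MU0 * I * R**2 / (2.0 * (R**2 + z**2) ** 1.5)

def sample_bz(z, radius=0.01, height=0.005, Js=1e4, Jv=1e7, n=200):
    """Field at axial distance z from the disc centre (all values are toys)."""
    zs = np.linspace(-height / 2, height / 2, n)   # axial slice positions
    dz = height / n
    b_surface = sum(loop_bz(Js * dz, radius, z - zi) for zi in zs)
    rs = np.linspace(radius / n, radius, n)        # radial shells
    dr = radius / n
    b_volume = sum(loop_bz(Jv * dz * dr, r, z - zi) for zi in zs for r in rs)
    return b_surface + b_volume

print(f"Bz at 2 mm above the sample surface: {sample_bz(0.002 + 0.0025):.4f} T")
```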
NASA Technical Reports Server (NTRS)
Gregurick, Susan K.; Chaban, Galina M.; Gerber, R. Benny; Kwak, Dochou (Technical Monitor)
2001-01-01
The second-order Møller-Plesset ab initio electronic structure method is used to compute points for the anharmonic mode-coupled potential energy surface of N-methylacetamide (NMA) in the trans-ct configuration, including all degrees of freedom. The vibrational states and the spectroscopy are directly computed from this potential surface using the Correlation Corrected Vibrational Self-Consistent Field (CC-VSCF) method. The results are compared with CC-VSCF calculations using both the standard and improved empirical Amber-like force fields and available low temperature experimental matrix data. Analysis of our calculated spectroscopic results shows that: (1) The excellent agreement between the ab initio CC-VSCF calculated frequencies and the experimental data suggests that the computed anharmonic potentials for N-methylacetamide are of a very high quality; (2) For most transitions, the vibrational frequencies obtained from the ab initio CC-VSCF method are superior to those obtained using the empirical CC-VSCF methods, when compared with experimental data. However, the improved empirical force field yields better agreement with the experimental frequencies than a standard AMBER-type force field; (3) The empirical force field in particular overestimates anharmonic couplings for the amide-2 mode, the methyl asymmetric bending modes, the out-of-plane methyl bending modes, and the methyl distortions; (4) Disagreement between the ab initio and empirical anharmonic couplings is greater than the disagreement between the frequencies, and thus the anharmonic part of the empirical potential seems to be less accurate than the harmonic contribution; and (5) Both the empirical and ab initio CC-VSCF calculations predict a negligible anharmonic coupling between the amide-1 and other internal modes. The implication of this is that the intramolecular energy flow between the amide-1 and the other internal modes may be smaller than anticipated. These results may have important implications for the anharmonic force fields of peptides, for which N-methylacetamide is a model.
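Anharmonic frequency shifts of the kind discussed here can be illustrated in one dimension by diagonalizing a finite-difference Hamiltonian for a Morse potential and comparing the 0-to-1 spacing with the harmonic approximation. This toy calculation is not the CC-VSCF method; units and parameters are arbitrary.

```python
# Toy 1-D anharmonic oscillator: diagonalise a finite-difference Hamiltonian
# for a Morse potential and compare the 0->1 spacing with the harmonic value.
# hbar = mass = 1 and all parameters are arbitrary.
import numpy as np

hbar, mass = 1.0, 1.0
D, a = 10.0, 0.7                     # Morse well depth and width parameter

x = np.linspace(-2.0, 8.0, 1200)
dx = x[1] - x[0]
V = D * (1.0 - np.exp(-a * x)) ** 2  # potential minimum at x = 0

# Kinetic energy via second-order central differences
n = len(x)
T = (hbar**2 / (2 * mass * dx**2)) * (
    2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
)
E0, E1 = np.linalg.eigvalsh(T + np.diag(V))[:2]

omega = a * np.sqrt(2 * D / mass)    # harmonic frequency from the curvature
print(f"anharmonic 0->1 spacing: {E1 - E0:.4f}")
print(f"harmonic spacing:        {hbar * omega:.4f}")
```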
Inter-laboratory Comparison of Three Earplug Fit-test Systems
Byrne, David C.; Murphy, William J.; Krieg, Edward F.; Ghent, Robert M.; Michael, Kevin L.; Stefanson, Earl W.; Ahroon, William A.
2017-01-01
The National Institute for Occupational Safety and Health (NIOSH) sponsored tests of three earplug fit-test systems (NIOSH HPD Well-Fit™, Michael & Associates FitCheck, and Honeywell Safety Products VeriPRO®). Each system was compared to laboratory-based real-ear attenuation at threshold (REAT) measurements in a sound field according to ANSI/ASA S12.6-2008 at the NIOSH, Honeywell Safety Products, and Michael & Associates testing laboratories. An identical study was conducted independently at the U.S. Army Aeromedical Research Laboratory (USAARL), which provided its data for inclusion in this report. The Howard Leight Airsoft premolded earplug was tested with twenty subjects at each of the four participating laboratories. The occluded fit of the earplug was maintained during testing with a soundfield-based laboratory REAT system as well as all three headphone-based fit-test systems. The Michael & Associates lab had the highest average A-weighted attenuations and the smallest standard deviations. The NIOSH lab had the lowest average attenuations and the largest standard deviations. Differences in octave-band attenuations between each fit-test system and the American National Standards Institute (ANSI) sound field method were calculated (Atten_fit-test - Atten_ANSI). A-weighted attenuations measured with the FitCheck and HPD Well-Fit systems demonstrated approximately ±2 dB agreement with the ANSI sound field method, but A-weighted attenuations measured with the VeriPRO system underestimated the ANSI laboratory attenuations. For each of the fit-test systems, the average A-weighted attenuation across the four laboratories was not significantly greater than the average of the ANSI sound field method. Standard deviations for residual attenuation differences were about ±2 dB for FitCheck and HPD Well-Fit compared to ±4 dB for VeriPRO. Individual labs exhibited a range of agreement from less than 1 dB to as much as 9.4 dB difference between fit-test and ANSI REAT estimates. Factors such as the experience of study participants and test administrators, and the fit-test psychometric tasks, are suggested as possible contributors to the observed results. PMID:27786602
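The agreement analysis described above reduces to per-subject differences between fit-test and ANSI REAT attenuations, summarized by mean bias and standard deviation; a short sketch with invented numbers follows.

```python
# Per-subject agreement between a fit-test system and the ANSI sound-field
# REAT reference: Atten_fit-test - Atten_ANSI, summarised by mean bias and
# standard deviation. All values are invented toy data.
import numpy as np

rng = np.random.default_rng(1)
atten_ansi = rng.normal(30, 4, size=20)              # lab REAT, dB (toy)
atten_fit_test = atten_ansi + rng.normal(0, 2, 20)   # fit-test estimates (toy)

diff = atten_fit_test - atten_ansi
print(f"mean bias: {diff.mean():+.1f} dB")
print(f"SD of residual differences: {diff.std(ddof=1):.1f} dB")
```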
2012-01-01
Background: Malaria diagnosis has received renewed interest in recent years, associated with the increasing accessibility of accurate diagnosis through the introduction of rapid diagnostic tests and new World Health Organization guidelines recommending parasite-based diagnosis prior to anti-malarial therapy. However, light microscopy, established over 100 years ago and frequently considered the reference standard for clinical diagnosis, has been neglected in control programmes and in the malaria literature, and evidence suggests field standards are commonly poor. Microscopy remains the most accessible method for parasite quantitation, for drug efficacy monitoring, and as a reference for assessing other diagnostic tools. This mismatch between quality and need highlights the importance of establishing reliable standards and procedures for assessing and assuring quality. This paper describes the development, function and impact of a multi-country microscopy external quality assurance network set up for this purpose in Asia. Methods: Surveys of key informants and past participants were used to obtain feedback on the quality assurance programme. Competency scores from the 14 participating countries were compiled and analysed using paired-sample t-tests. In-depth interviews were conducted with key informants, including the programme facilitators and national-level microscopists. Results: External assessments and limited retraining through a formalized programme based on a reference slide bank have demonstrated an increase in the competence of senior microscopists over a relatively short period of time, at a potentially sustainable cost. The network involved in the programme now exceeds 14 countries in the Asia-Pacific, and the methods are being extended to other regions. Conclusions: While the impact on national programmes varies, it has translated in some instances into a strengthening of national microscopy standards, and it offers a possibility both for supporting the revival of national microscopy programmes and for the development of globally recognized standards of competency, needed both for patient management and field research. PMID:23095668
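A minimal sketch of the paired-sample analysis mentioned in the Methods (competency scores per country before and after the programme) follows; the scores are invented.

```python
# Paired-sample t-test on per-country competency scores before and after the
# external quality assurance programme. Scores below are invented toy data.
from scipy import stats

before = [62, 70, 55, 68, 74, 59, 66, 71, 63, 58, 69, 72, 61, 65]  # 14 countries
after  = [71, 78, 64, 72, 80, 70, 73, 77, 70, 66, 75, 79, 68, 73]

t, p = stats.ttest_rel(after, before)
print(f"paired t = {t:.2f}, p = {p:.4f}")
```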
Perception of Science Standards' Effectiveness and Their Implementation by Science Teachers
NASA Astrophysics Data System (ADS)
Klieger, Aviva; Yakobovitch, Anat
2011-06-01
The introduction of standards into the education system poses numerous challenges and difficulties. As with any change, plans should be made for teachers to understand and implement the standards. This study examined science teachers' perceptions of the effectiveness of the standards for teaching and learning, and the extent and ease/difficulty of implementing science standards in different grades. The research used a mixed methods approach, combining qualitative and quantitative research methods. The research tools were questionnaires administered to elementary school science teachers. The majority of the teachers perceived the standards in science as effective for teaching and learning, and only a small minority viewed them as restricting their pedagogical autonomy. Differences were found in the extent of implementation of the different standards and between different grades. The teachers perceived a different degree of difficulty in the implementation of the different standards. The standards experienced as easiest to implement were in the fields of biology and materials, whereas the standards in earth sciences and the universe and in technology were the most difficult to implement, and are also those the teachers evaluated as being implemented to the least extent. Exposing teachers' perceptions of the effectiveness and implementation of the standards may aid policymakers in future planning of teachers' professional development for the implementation of standards.
2004-05-01
...following digestion using method 3005A. Copper concentrations were verified using atomic absorption spectroscopy/graphite furnace. Each chamber... 1995. Ammonia Variation in Sediments: Spatial, Temporal and Method-Related Effects. Environ. Toxicol. Chem. 14:1499-1506. Savage, W.K., F.W. ... Regulator-Approved Methods and Protocols for Conducting Marine and Terrestrial Risk Assessments. 1.III.01.k - Improved Field Analytical Sensors
Methods for Maintaining Insect Cell Cultures
Lynn, Dwight E.
2002-01-01
Insect cell cultures are now commonly used in insect physiology, developmental biology, pathology, and molecular biology. As the field has advanced from methods development to a standard procedure, so has the diversity of scientists using the technique. This paper describes methods that are effective for maintaining various insect cell lines. The procedures are differentiated between loosely or non-attached cell strains, attached cell strains, and strongly adherent cell strains. PMID:15455043
Recommendations for evaluation of computational methods
NASA Astrophysics Data System (ADS)
Jain, Ajay N.; Nicholls, Anthony
2008-03-01
The field of computational chemistry, particularly as applied to drug design, has become increasingly important in terms of the practical application of predictive modeling to pharmaceutical research and development. Tools for exploiting protein structures or sets of ligands known to bind particular targets can be used for binding-mode prediction, virtual screening, and prediction of activity. A serious weakness within the field is a lack of standards with respect to quantitative evaluation of methods, data set preparation, and data set sharing. Our goal should be to report new methods or comparative evaluations of methods in a manner that supports decision making for practical applications. Here we propose a modest beginning, with recommendations for requirements on statistical reporting, requirements for data sharing, and best practices for benchmark preparation and usage.
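As one concrete instance of the statistical-reporting recommendation, a virtual-screening result can be reported as a ROC AUC with a bootstrap confidence interval rather than a bare point estimate; the sketch below uses invented scores and labels.

```python
# Report ROC AUC with a bootstrap confidence interval instead of a bare point
# estimate. Labels (1 = active, 0 = decoy) and scores are invented toy data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
labels = rng.integers(0, 2, size=200)
scores = labels * 0.8 + rng.standard_normal(200)        # toy docking scores

point = roc_auc_score(labels, scores)
boot = []
for _ in range(2000):
    idx = rng.integers(0, len(labels), len(labels))     # resample with replacement
    if len(np.unique(labels[idx])) == 2:                # need both classes present
        boot.append(roc_auc_score(labels[idx], scores[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"AUC = {point:.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```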
NASA Astrophysics Data System (ADS)
Wijesinghe, Ruchire Eranga; Lee, Seung-Yeol; Kim, Pilun; Jung, Hee-Young; Jeon, Mansik; Kim, Jeehyun
2017-09-01
Seed germination rate differs with chemical treatment, and nondestructive measurement of germination rate has become an essential requirement in the field of agriculture. Seed scientists and other biologists are interested in optical sensing technologies for biological discovery because of their nondestructive detection capability. Optical coherence tomography (OCT) has recently emerged as a powerful method for studying biological and plant materials. We report an extended application of OCT: monitoring the acceleration of germination in chemically primed seeds. To validate the versatility of the method, Capsicum annuum seeds were primed using three chemical media: sterile distilled water (SDW), butanediol, and 1-hexadecene. Monitoring was performed using a 1310-nm swept-source OCT system. The results confirmed more rapid morphological variations in the seeds treated with the 1-hexadecene medium than in those treated with SDW and butanediol over 8 consecutive days. In addition, fresh weight measurements (the gold standard) of the seeds were taken for 15 days, and the results correlated with the OCT results. Such a method can be used in various agricultural fields, and OCT shows potential as a rigorous sensing method for rapidly selecting optimal plant growth-promoting chemical compounds, compared with the gold standard methods.
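The correlation between a nondestructive OCT-derived metric and the fresh-weight gold standard can be quantified as sketched below; the daily series are invented toy numbers.

```python
# Correlate a daily OCT-derived germination metric with fresh-weight
# measurements (the gold standard). Both series are invented toy data.
from scipy import stats

oct_metric   = [0.10, 0.14, 0.21, 0.30, 0.42, 0.55, 0.67, 0.78]          # 8 days
fresh_weight = [0.031, 0.033, 0.038, 0.045, 0.055, 0.064, 0.075, 0.083]  # grams

r, p = stats.pearsonr(oct_metric, fresh_weight)
print(f"Pearson r = {r:.3f} (p = {p:.3g})")
```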
NASA Technical Reports Server (NTRS)
Das, A.
1984-01-01
A unified method is presented for deriving the influence functions of moving singularities which determine the field quantities in aerodynamics and aeroacoustics. The moving singularities comprise volume and surface distributions having arbitrary orientations in space and relative to the trajectory. The result is a single, generally valid formula for the influence functions, which reveals some universal relationships and remarkable properties of the disturbance fields. The derivations are completely consistent with the physical processes in the propagation field, such that the treatment renders new descriptions of some standard concepts. The treatment is uniformly valid for subsonic and supersonic Mach numbers.
2008-07-01
EPA emission standards, the EPA has also specified the measurement methods. According to EPA, the most accurate and precise method of determining ...function of particle size and refractive index. If particle size distributions and refractive indices in diesel exhaust strongly depend on the... to correct the bias of the raw SFTM data and align the data with the values determined by the federal reference method. Thus, to use these methods a
RECOLA2: REcursive Computation of One-Loop Amplitudes 2
NASA Astrophysics Data System (ADS)
Denner, Ansgar; Lang, Jean-Nicolas; Uccirati, Sandro
2018-03-01
We present the Fortran95 program RECOLA2 for the perturbative computation of next-to-leading-order transition amplitudes in the Standard Model of particle physics and extended Higgs sectors. New theories are implemented via model files in the 't Hooft-Feynman gauge in the conventional formulation of quantum field theory and in the Background-Field method. The present version includes model files for the Two-Higgs-Doublet Model and the Higgs-Singlet Extension of the Standard Model. We support standard renormalization schemes for the Standard Model as well as many commonly used renormalization schemes in extended Higgs sectors. Within these models the computation of next-to-leading-order polarized amplitudes and squared amplitudes, optionally summed over spin and colour, is fully automated for any process. RECOLA2 allows the computation of colour- and spin-correlated leading-order squared amplitudes that are needed in the dipole subtraction formalism. RECOLA2 is publicly available for download at http://recola.hepforge.org.
Hubble Space Telescope: Wide field and planetary camera instrument handbook. Version 2.1
NASA Technical Reports Server (NTRS)
Griffiths, Richard (Editor)
1990-01-01
An overview is presented of the development and construction of the Wide Field and Planetary Camera (WF/PC). The WF/PC is a dual two-dimensional spectrophotometer with rudimentary polarimetric and transmission-grating capabilities. The instrument operates from 1150 to 11000 A with a resolution of 0.1 arcsec per pixel or 0.043 arcsec per pixel. Data products and standard calibration methods are briefly summarized.
Novel Texture-based Visualization Methods for High-dimensional Multi-field Data Sets
2013-07-06
...visualisation [18]. Novel image acquisition and simulation techniques have made it possible to record a large number of co-located data fields... function, structure, anatomical changes, metabolic activity, blood perfusion, and cellular remodelling. In this paper we investigate texture-based
One-loop topological expansion for spin glasses in the large connectivity limit
NASA Astrophysics Data System (ADS)
Chiara Angelini, Maria; Parisi, Giorgio; Ricci-Tersenghi, Federico
2018-01-01
We apply, for the first time, a new one-loop topological expansion around the Bethe solution to the spin-glass model with a field in the high-connectivity limit, following the methodological scheme proposed in a recent work. The results are completely equivalent to the well-known ones found by standard field-theoretical expansion around the fully connected model (Bray and Roberts 1980, and following works). However, this method has the advantage that the starting point is the original Hamiltonian of the model, with no need to define an associated field theory nor to know the initial values of the couplings, and the computations have a clear and simple physical meaning. Moreover, this new method can also be applied at zero temperature, where the Bethe model has a transition in field, contrary to the fully connected model, which is always in the spin-glass phase. Sharing the finite-connectivity properties of finite-dimensional models, the Bethe lattice is clearly a better starting point for an expansion than the fully connected model. The present work is a first step towards the generalization of this new expansion to more difficult and interesting cases, such as the zero-temperature limit, where the expansion could lead to results different from the standard ones.
Calibration of GPS based high accuracy speed meter for vehicles
NASA Astrophysics Data System (ADS)
Bai, Yin; Sun, Qiao; Du, Lei; Yu, Mei; Bai, Jie
2015-02-01
The GPS based high accuracy speed meter for vehicles is a special type of GPS speed meter which uses Doppler demodulation of GPS signals to calculate the speed of a moving target. It is increasingly used as reference equipment in the field of traffic speed measurement, but acknowledged standard calibration methods are still lacking. To solve this problem, this paper presents the set-ups of simulated calibration, field-test signal replay calibration, and an in-field comparison with an optical-sensor-based non-contact speed meter. All the experiments were carried out at particular speed values in the range of (40-180) km/h with the same GPS speed meter. The speed measurement errors of simulated calibration fall within +/-0.1 km/h or +/-0.1%, with uncertainties smaller than 0.02% (k=2). The errors of replay calibration fall within +/-0.1%, with uncertainties smaller than 0.10% (k=2). The calibration results justify the effectiveness of the two methods. The relative deviations of the GPS speed meter from the optical-sensor-based non-contact speed meter fall within +/-0.3%, which validates the use of GPS speed meters as reference instruments. The results of this research can provide a technical basis for the establishment of internationally standardized calibration methods for GPS speed meters, and thus ensure the legal status of GPS speed meters as reference equipment in the field of traffic speed metrology.
Giżyńska, Marta K.; Kukołowicz, Paweł F.; Kordowski, Paweł
2014-01-01
Aim: The aim of this work is to present a method of beam weight and wedge angle optimization for patients with prostate cancer. Background: 3D-CRT is usually realized with forward planning based on a trial-and-error method. Several authors have published methods of beam weight optimization applicable to 3D-CRT; still, none of these methods is in common use. Materials and methods: Optimization is based on the assumption that the best plan is achieved if the dose gradient at the ICRU point is equal to zero. Our optimization algorithm requires the beam quality index, depth of maximum dose, profiles of wedged fields, and the maximum dose to the femoral heads. The method was tested on 10 patients with prostate cancer treated with the 3-field technique. Optimized plans were compared with plans prepared by 12 experienced planners. Dose standard deviation in the target volume and minimum and maximum doses were analyzed. Results: The quality of plans obtained with the proposed optimization algorithm was comparable to that of plans prepared by experienced planners. The mean difference in target dose standard deviation was 0.1% in favor of the plans prepared by planners when optimizing beam weights and wedge angles. Introducing a correction factor for the patient body outline in the dose gradient at the ICRU point improved dose distribution homogeneity; on average, a 0.1% lower standard deviation was achieved with the optimization algorithm. No significant difference in the mean dose-volume histogram for the rectum was observed. Conclusions: Optimization greatly shortens planning time. The average planning time was 5 min for forward planning and less than a minute for computer optimization. PMID:25337411
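The zero-gradient condition described above can be posed as a small linear system: choose beam weights so the summed dose gradient at the ICRU point vanishes while the weights add to one. The per-beam gradient vectors below are invented, so this is only a sketch of the idea, not the paper's algorithm.

```python
# Solve for beam weights that null the total dose gradient at the ICRU point
# subject to the weights summing to one. Gradient vectors are toy values.
import numpy as np

# Rows: dose-gradient vector (per unit weight) of each beam at the ICRU point
g = np.array([[0.8, 0.1, 0.0],     # anterior beam
              [-0.5, 0.6, 0.0],    # right lateral wedged beam
              [-0.5, -0.6, 0.0]])  # left lateral wedged beam

# Stack the three gradient components (target 0) and the normalisation (target 1)
A = np.vstack([g.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
w, *_ = np.linalg.lstsq(A, b, rcond=None)
print("beam weights:", np.round(w, 3))
print("residual gradient:", np.round(g.T @ w, 4))
```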
Adaptive Low Dissipative High Order Filter Methods for Multiscale MHD Flows
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sjoegreen, Bjoern
2004-01-01
Adaptive low-dissipative high order filter finite difference methods for long-time wave propagation of shock/turbulence/combustion compressible viscous MHD flows have been constructed. Several variants of the filter approach that cater to different flow types are proposed. These filters provide a natural and efficient way to minimize the numerical error in the divergence of the magnetic field (div B), in the sense that no standard divergence cleaning is required. For certain 2-D MHD test problems, divergence-free preservation of the magnetic fields by these filter schemes has been achieved.
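The filter step itself can be illustrated in one dimension: after a solution update, a high-order dissipative filter damps grid-scale oscillations while leaving resolved waves nearly untouched. This generic sketch is not the adaptive scheme of the paper.

```python
# Generic high-order low-pass filter step: damping factor 1 - sigma*sin^(2p)(kh/2),
# i.e. ~1 for well-resolved wavenumbers and 1 - sigma at the grid (Nyquist) scale.
import numpy as np

def second_diff(u):
    """Periodic second difference."""
    return np.roll(u, -1) - 2 * u + np.roll(u, 1)

def filter_step(u, sigma=0.5, p=4):
    """Apply a 2p-order dissipative filter of strength sigma."""
    v = u.copy()
    for _ in range(p):
        v = second_diff(v)
    return u - sigma * ((-1) ** p) * v / 4.0 ** p

x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
u = np.sin(x) + 0.2 * (-1) ** np.arange(128)   # smooth wave + grid-scale noise
u_f = filter_step(u)
print("noise amplitude before/after:",
      round(np.abs(u - np.sin(x)).max(), 3),
      round(np.abs(u_f - np.sin(x)).max(), 3))
```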
SU-F-T-423: Automating Treatment Planning for Cervical Cancer in Low- and Middle- Income Countries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kisling, K; Zhang, L; Yang, J
Purpose: To develop and test two independent algorithms that automatically create the photon treatment fields for a four-field box beam arrangement, a common treatment technique for cervical cancer in low- and middle-income countries. Methods: Two algorithms were developed and integrated into Eclipse using its Advanced Programming Interface. 3D Method: We automatically segment bony anatomy on CT using an in-house multi-atlas contouring tool and project the structures into the beam's-eye view. We identify anatomical landmarks on the projections to define the field apertures. 2D Method: We generate DRRs for all four beams. An atlas of DRRs for six standard patients with corresponding field apertures is deformably registered to the test patient DRRs. The set of deformed atlas apertures is fitted to an expected shape to define the final apertures. Both algorithms were tested on 39 patient CTs, and the resulting treatment fields were scored by a radiation oncologist. We also investigated the feasibility of using one algorithm as an independent check of the other. Results: 96% of the 3D-Method-generated fields and 79% of the 2D-Method-generated fields were scored acceptable for treatment (“Per Protocol” or “Acceptable Variation”). The 3D Method generated more fields scored “Per Protocol” than the 2D Method (62% versus 17%). The 4% of the 3D-Method-generated fields that were scored “Unacceptable Deviation” were all due to an improper L5 vertebra contour resulting in an unacceptable superior jaw position. When these same patients were planned with the 2D Method, the superior jaw was acceptable, suggesting that the 2D Method can be used to independently check the 3D Method. Conclusion: Our results show that our 3D Method is feasible for automatically generating cervical treatment fields. Furthermore, the 2D Method can serve as an automatic, independent check of the automatically generated treatment fields. These algorithms will be implemented for fully automated cervical treatment planning.
The fast multipole method and point dipole moment polarizable force fields.
Coles, Jonathan P; Masella, Michel
2015-01-14
We present an implementation of the fast multipole method for computing Coulombic electrostatic and polarization forces from polarizable force fields based on induced point dipole moments. We demonstrate the expected O(N) scaling of this approach by performing single-point energy calculations on hexamer protein subunits of the mature HIV-1 capsid. We also show long-time energy conservation in molecular dynamics at the nanosecond scale by performing simulations of a protein complex embedded in a coarse-grained solvent, using both a standard integrator and a multiple-time-step integrator. Our tests show the applicability of the fast multipole method combined with state-of-the-art chemical models in molecular dynamics simulations.
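The pairwise sums that the fast multipole method accelerates come from the induced point-dipole model, shown below in its simplest direct O(N^2) fixed-point form. Positions and polarizabilities are toy values, and Gaussian-style units are assumed for brevity.

```python
# Self-consistent induced point dipoles: mu_i = alpha_i * (E_ext + sum_j T_ij mu_j),
# iterated to convergence with the direct O(N^2) pairwise sum that FMM speeds up.
import numpy as np

def dipole_field(mu, r):
    """Field at displacement r from a point dipole mu: (3(mu.rhat)rhat - mu)/|r|^3."""
    d = np.linalg.norm(r)
    rhat = r / d
    return (3.0 * np.dot(mu, rhat) * rhat - mu) / d**3

def induced_dipoles(pos, alpha, e_ext, n_iter=100, tol=1e-10):
    n = len(pos)
    mu = alpha[:, None] * e_ext                 # zeroth-order guess
    for _ in range(n_iter):
        mu_new = np.empty_like(mu)
        for i in range(n):
            e_i = e_ext[i].copy()
            for j in range(n):
                if j != i:                      # add fields of all other dipoles
                    e_i += dipole_field(mu[j], pos[i] - pos[j])
            mu_new[i] = alpha[i] * e_i
        if np.max(np.abs(mu_new - mu)) < tol:   # converged
            return mu_new
        mu = mu_new
    return mu

pos = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0], [0.0, 3.0, 0.0]])  # toy sites
alpha = np.ones(3)                                                    # toy alphas
e_ext = np.tile([0.0, 0.0, 1.0], (3, 1))                              # uniform field
print(np.round(induced_dipoles(pos, alpha, e_ext), 4))
```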
NASA Astrophysics Data System (ADS)
Mandic, M.; Stöbener, N.; Smajgl, D.
2017-12-01
For many decades, different instrumental methods involving generations of isotope ratio mass spectrometers with different peripheral units for sample preparation have provided the high precision and high sample throughput required for various applications, from geological and hydrological to food and forensic. With this work we introduce automated measurement of δ13C and δ18O from solid carbonate samples, DIC, and δ18O of water. We have demonstrated the use of a Thermo Scientific™ Delta Ray™ IRIS with URI Connect on certified reference materials and confirmed the high achievable accuracy and a precision better than 0.1‰ for both δ13C and δ18O, in the laboratory or in the field, with the same precision and sample throughput. With the equilibration method for the determination of δ18O in water samples presented in this work, the achieved repeatability and accuracy are 0.12‰ and 0.68‰ respectively, which fulfills the requirements of regulatory methods. The preparation of samples for carbonate and DIC analysis on the Delta Ray IRIS with URI Connect is similar to the previously mentioned Gas Bench II methods: samples are put into vials and phosphoric acid is added; the resulting sample-acid reaction releases CO2 gas, which is then introduced into the Delta Ray IRIS via the Variable Volume. Three international standards of carbonate materials (NBS-18, NBS-19 and IAEA-CO-1) were analyzed: NBS-18 and NBS-19 were used as standards for calibration, and IAEA-CO-1 was treated as an unknown. For water sample analysis, the equilibration method with 1% CO2 in dry air was used. Test measurements and confirmation of the precision and accuracy of the method for determining δ18O in water samples were done with three laboratory standards, namely ANST, OCEAN 2 and HBW. All laboratory standards were previously calibrated against the international reference materials VSMOW2 and SLAP2 to assure the accuracy of the isotopic values. The Principle of Identical Treatment was applied in sample and standard preparation, in the measurement procedure, and in the evaluation of the results.
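The two-point calibration described (NBS-18 and NBS-19 as anchors, IAEA-CO-1 as an unknown) amounts to a line mapping measured onto certified delta values. The measured values below are invented; the certified δ13C values quoted in the comments are from memory of the IAEA listings and should be checked.

```python
# Two-point delta calibration: fix a line true = m*measured + c using two
# carbonate standards, then put the unknown on the international scale.
# Certified d13C values (per mil, VPDB) are quoted from memory: NBS-19 defines
# the scale at +1.95; NBS-18 is close to -5.01. Measured values are invented.
certified = {"NBS-18": -5.01, "NBS-19": 1.95}
measured  = {"NBS-18": -5.20, "NBS-19": 1.80, "IAEA-CO-1": 2.30}  # toy raw values

m = (certified["NBS-19"] - certified["NBS-18"]) / (
    measured["NBS-19"] - measured["NBS-18"]
)
c = certified["NBS-18"] - m * measured["NBS-18"]
print(f"calibrated d13C of IAEA-CO-1: {m * measured['IAEA-CO-1'] + c:.2f} per mil")
```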
Quarks, Symmetries and Strings - a Symposium in Honor of Bunji Sakita's 60th Birthday
NASA Astrophysics Data System (ADS)
Kaku, M.; Jevicki, A.; Kikkawa, K.
1991-04-01
The Table of Contents for the full book PDF is as follows: * Preface * Evening Banquet Speech * I. Quarks and Phenomenology * From the SU(6) Model to Uniqueness in the Standard Model * A Model for Higgs Mechanism in the Standard Model * Quark Mass Generation in QCD * Neutrino Masses in the Standard Model * Solar Neutrino Puzzle, Horizontal Symmetry of Electroweak Interactions and Fermion Mass Hierarchies * State of Chiral Symmetry Breaking at High Temperatures * Approximate |ΔI| = 1/2 Rule from a Perspective of Light-Cone Frame Physics * Positronium (and Some Other Systems) in a Strong Magnetic Field * Bosonic Technicolor and the Flavor Problem * II. Strings * Supersymmetry in String Theory * Collective Field Theory and Schwinger-Dyson Equations in Matrix Models * Non-Perturbative String Theory * The Structure of Non-Perturbative Quantum Gravity in One and Two Dimensions * Noncritical Virasoro Algebra of d < 1 Matrix Model and Quantized String Field * Chaos in Matrix Models? * On the Non-Commutative Symmetry of Quantum Gravity in Two Dimensions * Matrix Model Formulation of String Field Theory in One Dimension * Geometry of the N = 2 String Theory * Modular Invariance from Gauge Invariance in the Non-Polynomial String Field Theory * Stringy Symmetry and Off-Shell Ward Identities * q-Virasoro Algebra and q-Strings * Self-Tuning Fields and Resonant Correlations in 2d-Gravity * III. Field Theory Methods * Linear Momentum and Angular Momentum in Quaternionic Quantum Mechanics * Some Comments on Real Clifford Algebras * On the Quantum Group p-adics Connection * Gravitational Instantons Revisited * A Generalized BBGKY Hierarchy from the Classical Path-Integral * A Quantum Generated Symmetry: Group-Level Duality in Conformal and Topological Field Theory * Gauge Symmetries in Extended Objects * Hidden BRST Symmetry and Collective Coordinates * Towards Stochastically Quantizing Topological Actions * IV. Statistical Methods * A Brief Summary of the s-Channel Theory of Superconductivity * Neural Networks and Models for the Brain * Relativistic One-Body Equations for Planar Particles with Arbitrary Spin * Chiral Property of Quarks and Hadron Spectrum in Lattice QCD * Scalar Lattice QCD * Semi-Superconductivity of a Charged Anyon Gas * Two-Fermion Theory of Strongly Correlated Electrons and Charge-Spin Separation * Statistical Mechanics and Error-Correcting Codes * Quantum Statistics
Perez, Christina R.; Bonar, Scott A.; Amberg, Jon J.; Ladell, Bridget; Rees, Christopher B.; Stewart, William T.; Gill, Curtis J.; Cantrell, Chris; Robinson, Anthony
2017-01-01
Recently, methods involving examination of environmental DNA (eDNA) have shown promise for characterizing fish species presence and distribution in waterbodies. We evaluated the use of eDNA for standard fish monitoring surveys in a large reservoir. Specifically, we compared the presence, relative abundance, biomass, and relative percent composition of Largemouth Bass Micropterus salmoides and Gizzard Shad Dorosoma cepedianum measured through eDNA methods and established American Fisheries Society standard sampling methods for Theodore Roosevelt Lake, Arizona. Catches at electrofishing and gillnetting sites were compared with eDNA water samples at sites, within spatial strata, and over the entire reservoir. Gizzard Shad were detected at a higher percentage of sites with eDNA methods than with boat electrofishing in both spring and fall. In contrast, spring and fall gillnetting detected Gizzard Shad at more sites than eDNA. Boat electrofishing and gillnetting detected Largemouth Bass at more sites than eDNA; the exception was fall gillnetting, for which the number of sites of Largemouth Bass detection was equal to that for eDNA. We observed no relationship between relative abundance and biomass of Largemouth Bass and Gizzard Shad measured by established methods and eDNA copies at individual sites or lake sections. Reservoirwide catch composition for Largemouth Bass and Gizzard Shad (numbers and total weight [g] of fish) as determined through a combination of gear types (boat electrofishing plus gillnetting) was similar to the proportion of total eDNA copies from each species in spring and fall field sampling. However, no similarity existed between proportions of fish caught via spring and fall boat electrofishing and the proportion of total eDNA copies from each species. Our study suggests that eDNA field sampling protocols, filtration, DNA extraction, primer design, and DNA sequencing methods need further refinement and testing before incorporation into standard fish sampling surveys.
New Cost-Effective Method for Long-Term Groundwater Monitoring Programs
2013-05-01
with a small-volume, gas-tight syringe (< 1 mL) and injected directly into the field-portable GC. Alternatively, the well headspace sample can be... according to manufacturers' protocols. Isobutylene was used as the calibration standard for the PID. The standard gas mixtures were used for 3-point... monitoring wells are being evaluated: 1) direct headspace sampling, 2) sampling tube with gas-permeable membrane, and 3) gas-filled passive vapor
Optical Methods for Identifying Hard Clay Core Samples During Petrophysical Studies
NASA Astrophysics Data System (ADS)
Morev, A. V.; Solovyeva, A. V.; Morev, V. A.
2018-01-01
X-ray phase analysis of the general mineralogical composition of core samples from one of the West Siberian fields was performed. Electronic absorption spectra of the clay core samples with an added indicator were studied. The speed and availability of applying the two methods in petrophysical laboratories during sample preparation for standard and special studies were estimated.
Patricia L. Faulkner; Michele M. Schoeneberger; Kim H. Ludovici
1993-01-01
Foliar tissue was collected from a field study designed to test impacts of atmospheric pollutants on loblolly pine (Pinus taeda L.) seedlings. Standard enzymatic (ENZ) and high performance liquid chromatography (HPLC) methods were used to analyze the tissue for soluble sugars. A comparison of the methods revealed no significant differences in accuracy...
Solution Deposition Methods for Carbon Nanotube Field-Effect Transistors
2009-06-01
...processed into FETs using standard microelectronics processing techniques. The resulting devices were characterized using a semiconductor parameter... method will help to determine which conditions are useful for producing CNT devices for chemical sensing and electronic applications.
The purpose of this SOP is to define the appropriate method for completing scannable forms generated by Teleform. The instructions describe methods of form completion and how to indicate that a response is not valid. Scannable Forms are used in the field and laboratory portion o...
The Second NWRA Flare-Forecasting Comparison Workshop: Methods Compared and Methodology
NASA Astrophysics Data System (ADS)
Leka, K. D.; Barnes, G.; the Flare Forecasting Comparison Group
2013-07-01
The Second NWRA Workshop to compare methods of solar flare forecasting was held 2-4 April 2013 in Boulder, CO. This is a follow-on to the First NWRA Workshop on Flare Forecasting Comparison, also known as the "All-Clear Forecasting Workshop", held in 2009 jointly with NASA/SRAG and NOAA/SWPC. For this most recent workshop, many researchers who are active in the field participated, and diverse methods were represented in terms of both the characterization of the Sun and the statistical approaches used to create a forecast. A standard dataset was created for this investigation, using data from the Solar Dynamics Observatory/Helioseismic and Magnetic Imager (SDO/HMI) vector magnetic field HARP series. For each HARP on each day, 6 hours of data were used, allowing nominal time-series analysis to be included in the forecasts. We present here a summary of the forecasting methods that participated and the standardized dataset that was used. Funding for the workshop and the data analysis was provided by NASA/Living with a Star contract NNH09CE72C and NASA/Guest Investigator contract NNH12CG10C.
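Forecast comparisons of this kind are scored with standard verification metrics; the sketch below computes a Brier score and the True Skill Statistic for a toy set of probabilistic forecasts and observed flare outcomes.

```python
# Two standard verification scores for probabilistic flare forecasts:
# the Brier score and, after thresholding, the True Skill Statistic (TSS).
# Forecast probabilities and observed outcomes below are invented.
import numpy as np

prob = np.array([0.1, 0.7, 0.3, 0.9, 0.2, 0.6, 0.05, 0.8])  # forecast P(flare)
event = np.array([0, 1, 0, 1, 0, 0, 0, 1])                  # 1 = flare occurred

brier = np.mean((prob - event) ** 2)

yes = prob >= 0.5                                   # dichotomised forecast
hits = np.sum(yes & (event == 1))
misses = np.sum(~yes & (event == 1))
false_alarms = np.sum(yes & (event == 0))
correct_neg = np.sum(~yes & (event == 0))
tss = hits / (hits + misses) - false_alarms / (false_alarms + correct_neg)
print(f"Brier = {brier:.3f}, TSS = {tss:.2f}")
```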
Fisher, Michael B.; Mann, Benjamin H.; Cronk, Ryan D.; Shields, Katherine F.; Klug, Tori L.; Ramaswamy, Rohit
2016-01-01
Information and communications technologies (ICTs) such as mobile survey tools (MSTs) can facilitate field-level data collection to drive improvements in national and international development programs. MSTs allow users to gather and transmit field data in real time, standardize data storage and management, automate routine analyses, and visualize data. Dozens of diverse MST options are available, and users may struggle to select suitable options. We developed a systematic MST Evaluation Framework (EF), based on International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) software quality modeling standards, to objectively assess MSTs and assist program implementers in identifying suitable MST options. The EF is applicable to MSTs for a broad variety of applications. We also conducted an MST user survey to elucidate needs and priorities of current MST users. Finally, the EF was used to assess seven MSTs currently used for water and sanitation monitoring, as a validation exercise. The results suggest that the EF is a promising method for evaluating MSTs. PMID:27563916
NASA Astrophysics Data System (ADS)
Schultz, A.; Bonner, L. R., IV
2016-12-01
Existing methods to predict Geomagnetically Induced Currents (GICs) in power grids, such as the North American Electric Reliability Corporation standard adopted by the power industry, require explicit knowledge of the electrical resistivity structure of the crust and mantle to solve for ground-level electric fields along transmission lines. The current standard is to apply regional 1-D resistivity models to this problem, which facilitates rapid solution of the governing equations. The systematic mapping of continental resistivity structure by projects such as EarthScope reveals several orders of magnitude of lateral variation in resistivity on local, regional, and continental scales, resulting in electric field intensifications, relative to existing 1-D solutions, that can affect GICs at first order. The computational burden of coupled 3-D solutions of the ground resistivity/GIC problem inhibits the prediction of GICs in a timeframe useful for protecting power grids. In this work we reduce the problem to applying a set of filters, recognizing that magnetotelluric impedance tensors implicitly contain all known information about the resistivity structure beneath a given site and thus provide the required relationship between electric and magnetic fields at each site. We project real-time magnetic field data from distant magnetic observatories through a robustly calculated multivariate transfer function to locations where magnetotelluric impedance tensors have previously been obtained. This provides a real-time prediction of the magnetic field at each of those points. We then project the predicted magnetic fields through the impedance tensors to obtain predictions of the electric fields induced at ground level. Thus, electric field predictions can be generated in real time for an entire array from real-time observatory data, then interpolated onto points representing a power transmission line contained within the array to produce the combined electric field prediction necessary for GIC prediction along that line. This method produces more accurate predictions of ground electric fields in conductively heterogeneous areas, is not limited by distance from the nearest observatory, and retains computational speed comparable to existing methods.
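The filter step described here is, at its core, E(ω) = Z(ω)H(ω) applied per frequency bin; a minimal FFT-based sketch follows, with a flat toy impedance standing in for a real frequency-dependent tensor estimated at a site.

```python
# Predict ground electric fields from magnetic field time series through a
# magnetotelluric impedance tensor: E(w) = Z(w) H(w) per FFT bin.
# The constant toy impedance below stands in for a measured Z(w); units are
# arbitrary for this illustration.
import numpy as np

def e_from_h(hx, hy, Z):
    """Apply a per-bin 2x2 impedance tensor Z[f] = [[Zxx,Zxy],[Zyx,Zyy]]."""
    Hx, Hy = np.fft.rfft(hx), np.fft.rfft(hy)
    Ex = Z[:, 0, 0] * Hx + Z[:, 0, 1] * Hy
    Ey = Z[:, 1, 0] * Hx + Z[:, 1, 1] * Hy
    return np.fft.irfft(Ex, len(hx)), np.fft.irfft(Ey, len(hx))

n, dt = 4096, 1.0                        # 4096 one-second samples (toy)
t = np.arange(n) * dt
hx = np.sin(2 * np.pi * t / 600.0)       # toy 10-minute geomagnetic oscillation
hy = 0.3 * np.sin(2 * np.pi * t / 300.0)

nf = n // 2 + 1
Z = np.tile(np.array([[0.0, 1.0], [-1.0, 0.0]], dtype=complex), (nf, 1, 1))
ex, ey = e_from_h(hx, hy, Z)
print("first E-field samples:", np.round(ex[:3], 4), np.round(ey[:3], 4))
```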
NASA Astrophysics Data System (ADS)
Lafont, F.; Ribeiro-Palau, R.; Kazazis, D.; Michon, A.; Couturaud, O.; Consejo, C.; Chassagne, T.; Zielinski, M.; Portail, M.; Jouault, B.; Schopfer, F.; Poirier, W.
2015-04-01
Replacing GaAs by graphene to realize more practical quantum Hall resistance standards (QHRS), accurate to within 10^-9 in relative value but operating at magnetic fields lower than 10 T, is an ongoing goal in metrology. To date, the required accuracy has been reported only a few times, in graphene grown on SiC by Si sublimation, under higher magnetic fields. Here, we report on a graphene device grown by chemical vapour deposition on SiC which demonstrates such accuracies of the Hall resistance from 10 T up to 19 T at 1.4 K. This is explained by a quantum Hall effect with low dissipation, resulting from strongly localized bulk states at the magnetic length scale over a wide magnetic field range. Our results show that graphene-based QHRS can replace their GaAs counterparts by operating in equally convenient cryomagnetic conditions but over an extended magnetic field range. They rely on a promising hybrid and scalable growth method and a fabrication process achieving low-electron-density devices.
Parallel heat transport in reversed shear magnetic field configurations
NASA Astrophysics Data System (ADS)
Blazevski, D.; Del-Castillo-Negrete, D.
2012-03-01
Transport in magnetized plasmas is a key problem in controlled fusion, space plasmas, and astrophysics. Three issues make this problem particularly challenging: (i) the extreme anisotropy between the parallel (i.e., along the magnetic field), χ∥, and the perpendicular, χ⊥, conductivities (χ∥/χ⊥ may exceed 10^10 in fusion plasmas); (ii) magnetic field line chaos; and (iii) nonlocal parallel transport. We have recently developed a Lagrangian Green's function (LG) method to solve the local and non-local parallel (χ∥/χ⊥ → ∞) transport equation, applicable to integrable and chaotic magnetic fields [D. del-Castillo-Negrete, L. Chacón, PRL 106, 195004 (2011); D. del-Castillo-Negrete, L. Chacón, Phys. Plasmas, APS invited paper, submitted (2011)]. The proposed method overcomes many of the difficulties faced by standard finite difference methods related to the three issues mentioned above. Here we apply the LG method to study transport in reversed shear configurations. We focus on the following problems: (i) separatrix reconnection of magnetic islands and transport; (ii) robustness of shearless, q'=0, transport barriers; (iii) leaky barriers and shearless Cantori.
Method for improving performance of highly stressed electrical insulating structures
Wilson, Michael J.; Goerz, David A.
2002-01-01
Removing the electrical field from the internal volume of high-voltage structures, e.g., bushings, connectors, capacitors, and cables. The electrical field is removed from inherently weak regions of the interconnect, such as between the center conductor and the solid dielectric, and placed in the primary insulation. This is accomplished by providing a conductive surface on the inside surface of the principal solid dielectric insulator surrounding the center conductor and connecting the center conductor to this conductive surface. Removing the electric field from the weaker dielectric region to a stronger area improves reliability, increases component life and operating levels, reduces noise and losses, and allows for a smaller, more compact design. This electric field control approach is currently possible on many existing products at a modest cost. Several techniques are available to provide the level of electric field control needed; choosing the optimum technique depends on material, size, and surface accessibility. The simplest deposition method uses a standard electroless plating technique, but other metalization techniques include vapor and energetic deposition, plasma spraying, conductive painting, and other controlled coating methods.
Parra-Robles, Juan; Cross, Albert R; Santyr, Giles E
2005-05-01
Hyperpolarized noble gases (HNGs) provide exciting possibilities for MR imaging at ultra-low magnetic field strengths (<0.15 T) due to the extremely high polarizations available from optical pumping. The fringe field of many superconductive magnets used in clinical MR imaging can provide a stable magnetic field for this purpose. In addition to offering the benefit of HNG MR imaging alongside conventional high field proton MRI, this approach offers the other useful advantage of providing different field strengths at different distances from the magnet. However, the extremely strong field gradients associated with the fringe field present a major challenge for imaging since impractically high active shim currents would be required to achieve the necessary homogeneity. In this work, a simple passive shimming method based on the placement of a small number of ferromagnetic pieces is proposed to reduce the fringe field inhomogeneities to a level that can be corrected using standard active shims. The method explicitly takes into account the strong variations of the field over the volume of the ferromagnetic pieces used to shim. The method is used to obtain spectra in the fringe field of a high-field (1.89 T) superconducting magnet from hyperpolarized 129Xe gas samples at two different ultra-low field strengths (8.5 and 17 mT). The linewidths of spectra measured from imaging phantoms (30 Hz) indicate a homogeneity sufficient for MRI of the rat lung.
2013-01-01
The accelerated molecular dynamics (aMD) method has recently been shown to enhance the sampling of biomolecules in molecular dynamics (MD) simulations, often by several orders of magnitude. Here, we describe an implementation of the aMD method for the OpenMM application layer that takes full advantage of graphics processing unit (GPU) computing. The aMD method is shown to work in combination with the AMOEBA polarizable force field (AMOEBA-aMD), allowing the simulation of long-time-scale events with a polarizable force field. Benchmarks are provided to show that the AMOEBA-aMD method is efficiently implemented and produces accurate results in its standard parametrization. For the BPTI protein, we demonstrate that the protein structure described with AMOEBA remains stable even on the extended time scales accessed at high levels of acceleration. For the DNA repair metalloenzyme endonuclease IV, we show that the use of the AMOEBA force field is a significant improvement over fixed-charge models for describing the enzyme active site. The new AMOEBA-aMD method is publicly available (http://wiki.simtk.org/openmm/VirtualRepository) and promises to be interesting for studying complex systems that can benefit from both the use of a polarizable force field and enhanced sampling. PMID:24634618
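The boost potential that defines aMD (Hamelberg, Mongan, and McCammon, 2004) raises the potential whenever V falls below a threshold E; a short implementation of that published formula follows, with arbitrary numbers.

```python
# aMD boost potential: when V < E the dynamics run on V + dV with
# dV = (E - V)^2 / (alpha + E - V); above the threshold the boost is zero.
# The energies below are arbitrary toy values (think kcal/mol).
import numpy as np

def amd_boost(V, E, alpha):
    """Return the aMD boost energy dV (zero when V >= E)."""
    V = np.asarray(V, dtype=float)
    return np.where(V < E, (E - V) ** 2 / (alpha + E - V), 0.0)

V = np.linspace(-120, -60, 7)    # toy potential energies
print(np.round(amd_boost(V, E=-80.0, alpha=20.0), 3))
```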
Data Field Modeling and Spectral-Spatial Feature Fusion for Hyperspectral Data Classification.
Liu, Da; Li, Jianxun
2016-12-16
Classification is a significant subject in hyperspectral remote sensing image processing. This study proposes a spectral-spatial feature fusion algorithm for the classification of hyperspectral images (HSI). Unlike existing spectral-spatial classification methods, it takes into consideration the influences and interactions of the surroundings on each measured pixel. Data field theory was employed as the mathematical realization of the field-theory concept in physics, and both the spectral and spatial domains of the HSI were treated as data fields, so that the inherent dependency of interacting pixels was modeled. Using data field modeling, spatial and spectral features were transformed into a unified radiation form and further fused into a new feature using a linear model. In contrast to current spectral-spatial classification methods, which usually simply stack spectral and spatial features together, the proposed method builds the inner connection between the spectral and spatial features and explores hidden information that contributes to classification; therefore, new information is included for classification. The final classification result was obtained using a random forest (RF) classifier. The proposed method was tested on the University of Pavia and Indian Pines datasets, two well-known standard hyperspectral datasets. The experimental results demonstrate that the proposed method achieves higher classification accuracies than traditional approaches.
Direct folding simulation of helical proteins using an effective polarizable bond force field.
Duan, Lili; Zhu, Tong; Ji, Changge; Zhang, Qinggang; Zhang, John Z H
2017-06-14
We report a direct folding study of seven helical proteins, including Trpcage, C34, and N36, ranging from 17 to 53 amino acids, through standard molecular dynamics simulations using a recently developed polarizable force field, the Effective Polarizable Bond (EPB) method. The backbone RMSDs, radii of gyration, native contacts, and native helix content are in good agreement with the experimental results. Cluster analysis has also verified that the most populated folded structures are in good agreement with the corresponding native structures of these proteins. In addition, the free energy landscapes of the seven proteins in the two-dimensional space of RMSD and radius of gyration show that the folded structures are indeed the lowest-energy conformations. However, when the corresponding simulations were performed using the standard (nonpolarizable) AMBER force fields, no stable folded structures were observed for these proteins. Comparison of the simulation results based on the polarizable EPB force field and a nonpolarizable AMBER force field clearly demonstrates the importance of polarization in the folding of stable helical structures.
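Backbone RMSD to the native structure, the folding metric used above, is computed after optimal rigid superposition; a compact Kabsch-based sketch follows with toy coordinates.

```python
# RMSD after optimal rigid superposition (Kabsch algorithm). The coordinate
# arrays below are invented stand-ins for a trajectory frame and the native
# backbone of a small helical protein.
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD of two Nx3 coordinate sets after optimal rotation of P onto Q."""
    P = P - P.mean(axis=0)                     # remove translations
    Q = Q - Q.mean(axis=0)
    U, S, Vt = np.linalg.svd(P.T @ Q)          # covariance SVD
    d = np.sign(np.linalg.det(U @ Vt))         # guard against reflections
    R = U @ np.diag([1.0, 1.0, d]) @ Vt        # optimal proper rotation
    return np.sqrt(np.mean(np.sum((P @ R - Q) ** 2, axis=1)))

rng = np.random.default_rng(7)
native = rng.standard_normal((17, 3))          # toy 17-residue backbone
frame = native + 0.1 * rng.standard_normal((17, 3))
print(f"RMSD = {kabsch_rmsd(frame, native):.3f}")
```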
Liu, Yiqiao; Zhou, Bo; Qutaish, Mohammed; Wilson, David L
2016-01-01
We created a metastasis imaging and analysis platform consisting of software and a multi-spectral cryo-imaging system suitable for evaluating emerging imaging agents targeting micro-metastatic tumors. We analyzed CREKA-Gd in MRI, followed by cryo-imaging which repeatedly sectioned and tiled microscope images of the tissue block face, providing anatomical bright field and molecular fluorescence, enabling 3D microscopic imaging of the entire mouse with single metastatic cell sensitivity. To register MRI volumes to the cryo bright field reference, we used our standard mutual-information, non-rigid registration, which proceeded: preprocess → affine → B-spline non-rigid 3D registration. In this report, we created two modified approaches: mask, where we registered locally over a smaller rectangular solid, and sliding organ. Briefly, in sliding organ, we segmented the organ, registered the organ and body volumes separately, and combined the results. Though sliding organ required manual annotation, it provided the best result as a standard against which to measure other registration methods. Regularization parameters for the standard and mask methods were optimized in a grid search. Evaluations consisted of DICE and visual scoring of a checkerboard display. Standard had accuracy of 2 voxels in all regions except near the kidney, where there were 5 voxels of sliding. After mask and sliding organ correction, kidney sliding was within 2 voxels, and Dice overlap increased 4%-10% in mask compared to standard. Mask generated results comparable with sliding organ and allowed a semi-automatic process.
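The preprocess → affine → B-spline pipeline named above can be sketched against SimpleITK, which exposes this family of mutual-information registration components. This is an illustration of the pipeline shape, not the authors' implementation; file names are placeholders.

```python
# Two-stage mutual-information registration sketch with SimpleITK:
# affine alignment followed by B-spline non-rigid refinement.
# Input file names are placeholders, not files from the study.
import SimpleITK as sitk

fixed = sitk.ReadImage("cryo_brightfield.nii", sitk.sitkFloat32)   # reference
moving = sitk.ReadImage("mri_volume.nii", sitk.sitkFloat32)

def register(fixed, moving, transform):
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsRegularStepGradientDescent(
        learningRate=2.0, minStep=1e-4, numberOfIterations=200)
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetInitialTransform(transform, inPlace=False)
    return reg.Execute(fixed, moving)

# Stage 1: affine alignment
affine0 = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.AffineTransform(3),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)
affine = register(fixed, moving, affine0)
moving_affine = sitk.Resample(moving, fixed, affine, sitk.sitkLinear, 0.0)

# Stage 2: B-spline non-rigid refinement on the affine-aligned volume
bspline0 = sitk.BSplineTransformInitializer(fixed, transformDomainMeshSize=[8, 8, 8])
bspline = register(fixed, moving_affine, bspline0)
result = sitk.Resample(moving_affine, fixed, bspline, sitk.sitkLinear, 0.0)
```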
D'Costa, Susan; Blouin, Veronique; Broucque, Frederic; Penaud-Budloo, Magalie; François, Achille; Perez, Irene C; Le Bec, Christine; Moullier, Philippe; Snyder, Richard O; Ayuso, Eduard
2016-01-01
Clinical trials using recombinant adeno-associated virus (rAAV) vectors have demonstrated efficacy and a good safety profile. Although the field is advancing quickly, vector analytics and harmonization of dosage units are still a limitation for commercialization. AAV reference standard materials (RSMs) can help ensure product safety by controlling the consistency of assays used to characterize rAAV stocks. The most widely utilized unit of vector dosing is based on the encapsidated vector genome. Quantitative polymerase chain reaction (qPCR) is now the most common method to titer vector genomes (vg); however, significant inter- and intralaboratory variation has been documented using this technique. Here, RSMs and rAAV stocks were titered using an inverted terminal repeat (ITR) sequence-specific qPCR, and we found an artificial increase in vg titers with a widely utilized approach. The PCR error was introduced by using single-cut linearized plasmid as the standard curve. This bias was eliminated using plasmid standards linearized just outside the ITR region on each end, to facilitate the melting of the palindromic ITR sequences during PCR. This new "Free-ITR" qPCR delivers vg titers that are consistent with titers obtained with transgene-specific qPCR and could be used to normalize in-house product-specific AAV vector standards and controls to the rAAV RSMs. The Free-ITR method, including well-characterized controls, will help to calibrate doses to compare preclinical and clinical data in the field.
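The vg titers discussed above come from a qPCR standard curve: Ct is fit against log10(copies) for the plasmid standards, and unknown Cts are inverted through the fit. The Ct values below are invented; a slope near -3.32 corresponds to roughly 100% amplification efficiency.

```python
# qPCR standard-curve titration: fit Ct = slope*log10(copies) + intercept on
# a plasmid dilution series, then invert the fit for unknown samples.
# All Ct values are invented toy data.
import numpy as np

std_copies = np.array([1e7, 1e6, 1e5, 1e4, 1e3])     # dilution series inputs
std_ct = np.array([14.1, 17.5, 20.8, 24.2, 27.6])    # measured Ct (toy)

slope, intercept = np.polyfit(np.log10(std_copies), std_ct, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0              # ~1.0 means 100% per cycle

def copies_from_ct(ct):
    return 10 ** ((ct - intercept) / slope)

print(f"slope = {slope:.2f}, efficiency = {efficiency:.1%}")
print(f"unknown sample at Ct 19.0: {copies_from_ct(19.0):.3g} copies")
```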
Methods for determining time of death.
Madea, Burkhard
2016-12-01
Medicolegal death time estimation must determine the time since death reliably. Reliability can only be established empirically, by statistical analysis of errors in field studies. Determining the time since death requires the calculation of measurable data along a time-dependent curve back to the starting point. Various methods are used to estimate the time since death. The current gold standard for death time estimation is a previously established nomogram method based on the two-exponential model of body cooling. Great experimental and practical achievements have been realized using this nomogram method. To reduce the margin of error of the nomogram method, a compound method was developed based on electrical and mechanical excitability of skeletal muscle, pharmacological excitability of the iris, rigor mortis, and postmortem lividity. Further increasing the accuracy of death time estimation involves the development of conditional probability distributions for death time estimation based on the compound method. Although many studies have evaluated chemical methods of death time estimation, such methods play a marginal role in daily forensic practice. However, increased precision of death time estimation has recently been achieved by considering various influencing factors (i.e., preexisting diseases, duration of the terminal episode, and ambient temperature). Putrefactive changes may be used for death time estimation in water-immersed bodies. Furthermore, recently developed technologies, such as 1H magnetic resonance spectroscopy, can be used to quantitatively study decompositional changes. This review addresses the gold standard method of death time estimation in forensic practice and promising technological and scientific developments in the field.
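The two-exponential cooling model underlying the nomogram is commonly written, in Henssge's parameterization, as Q(t) = (T_rectal - T_ambient)/(37.2 - T_ambient) = 1.25·exp(B·t) - 0.25·exp(5·B·t), with B a body-weight-dependent constant. The sketch below inverts that relation numerically; the constants follow the widely cited form for ambient temperatures up to about 23 °C, and the input temperatures are hypothetical, so this is an illustration of the model, not forensic-grade software.

```python
import math
from scipy.optimize import brentq

def henssge_Q(t_h, mass_kg):
    """Normalized rectal temperature after t_h hours (two-exponential model).
    B follows the commonly cited parameterization for ambient <= ~23 C."""
    B = -1.2815 * mass_kg ** -0.625 + 0.0284
    return 1.25 * math.exp(B * t_h) - 0.25 * math.exp(5.0 * B * t_h)

def time_since_death(t_rectal, t_ambient, mass_kg):
    q_measured = (t_rectal - t_ambient) / (37.2 - t_ambient)
    # Q decreases monotonically from 1 toward 0, so bracket and root-find.
    return brentq(lambda t: henssge_Q(t, mass_kg) - q_measured, 1e-3, 200.0)

# Hypothetical case: rectal 30.0 C, ambient 18.0 C, 75 kg body mass.
print(f"estimated time since death ~ {time_since_death(30.0, 18.0, 75.0):.1f} h")
```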
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harding, Samuel F.; Romero-Gomez, Pedro D. J.; Richmond, Marshall C.
Standards provide recommendations for best practices in the installation of current meters for measuring fluid flow in closed conduits. These include PTC-18 and IEC-41. Both of these standards refer to the requirements of ISO Standard 3354 for cases where the velocity distribution is assumed to be regular and the flow steady. Due to the nature of the short converging intakes of Kaplan hydroturbines, these assumptions may be invalid if current meters are intended to be used to characterize turbine flows. In this study, we examine a combination of measurement guidelines from both standards by means of virtual current meters (VCM) set up over a simulated hydroturbine flow field. To this purpose, a computational fluid dynamics (CFD) model was developed to model the velocity field of a short converging intake of the Ice Harbor Dam on the Snake River, in the State of Washington. The detailed geometry and resulting wake of the submersible traveling screen (STS) at the first gate slot was of particular interest in the development of the CFD model using a detached eddy simulation (DES) turbulence solution. An array of virtual point velocity measurements was extracted from the resulting velocity field to simulate VCM at two virtual measurement (VM) locations at different distances downstream of the STS. The discharge through each bay was calculated from the VM using the graphical integration solution to the velocity-area method. This method of representing practical velocimetry techniques in a numerical flow field has been successfully used in a range of marine and conventional hydropower applications. A sensitivity analysis was performed to observe the effect of the VCM array resolution on the discharge error. The downstream VM section required 11–33% fewer VCM in the array than the upstream VM location to achieve a given discharge error. In general, more instruments were required to quantify the discharge at high levels of accuracy when the STS was introduced, because of the increased spatial variability of the flow velocity.
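The velocity-area method referenced above amounts to integrating the normal velocity component over the measurement cross-section, Q = ∬ v dA. A minimal numerical version over a rectangular grid of point (or virtual) current-meter readings might look like the following; the grid dimensions and velocities are made up for illustration, and the authors' graphical-integration variant is not reproduced.

```python
import numpy as np

# Hypothetical grid of velocity samples over one intake bay cross-section.
y = np.linspace(0.0, 6.0, 7)    # lateral positions (m)
z = np.linspace(0.0, 8.0, 9)    # vertical positions (m)
rng = np.random.default_rng(0)
v = 2.0 + 0.3 * rng.standard_normal((z.size, y.size))  # normal velocities (m/s)

# Discharge by trapezoidal velocity-area integration over y, then z.
q = np.trapz(np.trapz(v, x=y, axis=1), x=z)
print(f"discharge ~ {q:.1f} m^3/s")
```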
Lee, Eun Gyung; Nelson, John H.; Kashon, Michael L.; Harper, Martin
2015-01-01
A Japanese round-robin study revealed that analysts who used a dark-medium (DM) objective lens reported higher fiber counts from American Industrial Hygiene Association (AIHA) Proficiency Analytical Testing (PAT) chrysotile samples than those using a standard objective lens, but the cause of this difference was not investigated at that time. The purpose of this study is to determine any major source of this difference by performing two sets of round-robin studies. For the first round-robin study, 15 AIHA PAT samples (five each of chrysotile and amosite generated by the water-suspension method, and five chrysotile generated by the aerosolization method) were prepared with relocatable cover slips and examined by nine laboratories. A second round-robin study was then performed with six chrysotile field sample slides by six of the nine laboratories that participated in the first round-robin study. In addition, two phase-shift test slides to check analysts' visibility and an eight-form diatom test plate to compare resolution between the two objectives were examined. For the AIHA PAT chrysotile reference slides, use of the DM objective resulted in consistently higher fiber counts (1.45 times for all data) than the standard objective (P-value < 0.05), regardless of the filter generation (water-suspension or aerosol) method. For the AIHA PAT amosite reference and chrysotile field sample slides, the fiber counts between the two objectives were not significantly different. No statistically significant differences were observed in the visibility of blocks of the test slides between the two objectives. Also, the DM and standard objectives showed no pattern of differences in viewing the fine lines and/or dots of each species' images on the eight-form diatom test plate. Among various potential factors that might affect the analysts' performance of fiber counts, this study supports the greater contrast caused by the different phase plate absorptions as the main cause of high counts for the AIHA PAT chrysotile slides using the DM objective. The comparison of fiber count ratios (DM/standard) between the AIHA PAT chrysotile samples and chrysotile field samples indicates that there is a fraction of fibers in the PAT samples approaching the theoretical limit of visibility of the phase-contrast microscope with 3-degree phase shift. These fibers become more clearly visible through the greater contrast from the phase plate absorption of the DM objective. However, as such fibers are not present in field samples, no difference in counts between the two objectives was observed in this study. The DM objective, therefore, could be allowed for routine fiber counting as it will maintain continuity with risk assessments based on earlier phase-contrast microscopy fiber counts from field samples. Published standard methods would need to be modified to allow a higher aperture specification for the objective. PMID:25737333
2014-01-01
Background Since the global standards for postgraduate medical education (PGME) were published in January 2003, they have gained worldwide attention. The current state of residency training programs in medical-school-affiliated hospitals throughout China was assessed in this study. Methods Based on the internationally recognized global standards for PGME, residents undergoing residency training at that time and the relevant residency training instructors and management personnel from 15 medical-school-affiliated hospitals throughout China were recruited and surveyed regarding the current state of residency training programs. A total of 938 questionnaire surveys were distributed between June 30, 2006 and July 30, 2006; of 892 surveys collected, 841 were valid. Results For six items, the total proportions of “basically meets standards” and “completely meets standards” were <70% for the basic standards. These items were identified in the fields of “training settings and educational resources”, “evaluation of training process”, and “trainees”. In all fields other than “continuous updates”, the average scores of the western regions were significantly lower than those of the eastern regions for both the basic and target standards. Specifically, the average scores for the basic standards on as many as 25 of the 38 items in the nine fields were significantly lower in the western regions. There were significant differences in the basic standards scores on 13 of the 38 items among trainees, instructors, and managers. Conclusions The residency training programs have achieved satisfactory outcomes in the hospitals affiliated with various medical schools in China. However, overall, the programs remain inadequate in certain areas. For the governments, organizations, and institutions responsible for PGME, such global standards for PGME are a very useful self-assessment tool and can help identify problems, promote reform, and ultimately standardize PGME. PMID:24885865
Mousa, Mohammad F.; Cubbidge, Robert P.; Al-Mansouri, Fatima
2014-01-01
Purpose Multifocal visual evoked potential (mfVEP) is a newly introduced method for objective visual field assessment. Several analysis protocols have been tested to identify early visual field losses in glaucoma patients using the mfVEP technique; some were successful in detecting field defects comparable to standard automated perimetry (SAP) visual field assessment, while others were less informative and required further adjustment and research. In this study we implemented a novel analysis approach and evaluated its validity and whether it could be used effectively for early detection of visual field defects in glaucoma. Methods Three groups were tested in this study: normal controls (38 eyes), glaucoma patients (36 eyes), and glaucoma suspect patients (38 eyes). All subjects underwent two standard Humphrey field analyzer (HFA) 24-2 tests and a single mfVEP test in one session. Analysis of the mfVEP results was done using the new analysis protocol, the hemifield sector analysis (HSA) protocol. Analysis of the HFA was done using the standard grading system. Results Analysis of mfVEP results showed a statistically significant difference between the three groups in the mean signal-to-noise ratio (ANOVA, p < 0.001 with a 95% confidence interval). The differences between superior and inferior hemispheres were statistically significant in the glaucoma patient group in all 11 sectors (t-test, p < 0.001), partially significant in 5/11 sectors (t-test, p < 0.01), and not statistically different in most sectors of the normal group (1/11 sectors significant, t-test, p < 0.9). Sensitivity and specificity of the HSA protocol in detecting glaucoma were 97% and 86%, respectively; for glaucoma suspect patients the values were 89% and 79%, respectively. Conclusions The new HSA protocol used in mfVEP testing can be applied to detect glaucomatous visual field defects in both glaucoma and glaucoma suspect patients. Using this protocol can provide information about focal visual field differences across the horizontal midline, which can be utilized to differentiate between glaucoma and normal subjects. Sensitivity and specificity of the mfVEP test showed very promising results and correlated with other anatomical changes in glaucomatous field loss. PMID:24511212
Laleian, Artin; Valocchi, Albert J.; Werth, Charles J.
2015-11-24
Two-dimensional (2D) pore-scale models have successfully simulated microfluidic experiments of aqueous-phase flow with mixing-controlled reactions in devices with small aperture. A standard 2D model is not generally appropriate when the presence of mineral precipitate or biomass creates complex and irregular three-dimensional (3D) pore geometries. We modify the 2D lattice Boltzmann method (LBM) to incorporate viscous drag from the top and bottom microfluidic device (micromodel) surfaces, typically excluded in a 2D model. Viscous drag from these surfaces can be approximated by uniformly scaling a steady-state 2D velocity field at low Reynolds number. We demonstrate increased accuracy by approximating the viscous drag with an analytically-derived body force which assumes a local parabolic velocity profile across the micromodel depth. Accuracy of the generated 2D velocity field and simulation permeability have not been evaluated in geometries with variable aperture. We obtain permeabilities within approximately 10% error and accurate streamlines from the proposed 2D method relative to results obtained from 3D simulations. Additionally, the proposed method requires a CPU run time approximately 40 times less than a standard 3D method, representing a significant computational benefit for permeability calculations.
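Assuming a locally parabolic (Poiseuille) profile across a gap of depth h, the depth-averaged wall drag enters the 2D momentum balance as a Brinkman-type body force f = -12·μ·u/h². The sketch below simply evaluates that term on a 2D velocity field with a spatially variable aperture; it is a generic illustration of the scaling, not the authors' LBM forcing scheme.

```python
import numpy as np

def hele_shaw_drag(u, v, h, mu=1.0e-3):
    """Depth-averaged wall-drag body force per unit volume, f = -12*mu*u/h^2,
    from an assumed parabolic velocity profile across a gap of local depth h."""
    factor = -12.0 * mu / h ** 2
    return factor * u, factor * v

# Toy 2D velocity field (m/s) and variable aperture map (m).
ny, nx = 64, 64
u = np.full((ny, nx), 1.0e-3)
v = np.zeros((ny, nx))
h = np.full((ny, nx), 50e-6)
h[20:40, 20:40] = 20e-6          # e.g., aperture narrowed by precipitate

fx, fy = hele_shaw_drag(u, v, h)
print(f"drag force range: {fx.min():.3e} to {fx.max():.3e} N/m^3")
```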
In Search of Easy-to-Use Methods for Calibrating ADCP's for Velocity and Discharge Measurements
Oberg, K.; ,
2002-01-01
A cost-effective procedure for calibrating acoustic Doppler current profilers (ADCPs) in the field was presented. The advantages and disadvantages of various methods used for calibrating ADCPs were discussed. The proposed method requires the use of a differential global positioning system (DGPS) with sub-meter accuracy and standard software for collecting ADCP data. The method involves traversing a long (400-800 meter) course at a constant compass heading and speed, while collecting simultaneous DGPS and ADCP data.
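One common way to use such a course is to compare the course-averaged boat velocity from DGPS fixes with the ADCP's bottom-track velocity and derive a scale correction. The sketch below illustrates that comparison with hypothetical numbers; it is not the specific procedure detailed in the paper.

```python
import numpy as np

# Hypothetical simultaneous records along a calibration course.
t = np.arange(0.0, 300.0, 1.0)                       # time (s)
rng = np.random.default_rng(1)
easting  = 2.0 * t + 0.3 * rng.standard_normal(t.size)  # DGPS fixes (m), sub-meter scatter
northing = 0.5 * t
bt_speed = np.full(t.size, 2.02)                     # ADCP bottom-track speed (m/s)

# Course-averaged boat speed from the endpoint DGPS fixes (robust to scatter).
dgps_speed = np.hypot(easting[-1] - easting[0],
                      northing[-1] - northing[0]) / (t[-1] - t[0])
correction = dgps_speed / bt_speed.mean()
print(f"DGPS {dgps_speed:.3f} m/s, ADCP {bt_speed.mean():.3f} m/s, "
      f"correction factor {correction:.4f}")
```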
Equivalent circuit simulation of HPEM-induced transient responses at nonlinear loads
NASA Astrophysics Data System (ADS)
Kotzev, Miroslav; Bi, Xiaotang; Kreitlow, Matthias; Gronwald, Frank
2017-09-01
In this paper the equivalent circuit modeling of a nonlinearly loaded loop antenna and its transient responses to HPEM field excitations are investigated. For the circuit modeling, the general strategy of characterizing the nonlinearly loaded antenna by a linear and a nonlinear circuit part is pursued. The linear circuit part can be determined by standard methods of antenna theory and numerical field computation. The modeling of the nonlinear circuit part requires realistic circuit models of the nonlinear loads, which are given by Schottky diodes. Combining both parts, appropriate circuit models are obtained and analyzed by means of a standard SPICE circuit simulator. The main result is that full-wave simulation results can be reproduced in this way. Furthermore, it is clearly seen that the equivalent circuit modeling offers considerable advantages with respect to computation speed and also leads to improved physical insight regarding the coupling between the HPEM field excitation and the nonlinearly loaded loop antenna.
Coulomb-free and Coulomb-distorted recolliding quantum orbits in photoelectron holography
NASA Astrophysics Data System (ADS)
Maxwell, A. S.; Figueira de Morisson Faria, C.
2018-06-01
We perform a detailed analysis of the different types of orbits in the Coulomb quantum orbit strong-field approximation (CQSFA), ranging from direct to those undergoing hard collisions. We show that some of them exhibit clear counterparts in the standard formulations of the strong-field approximation for direct and rescattered above-threshold ionization, and show that the standard orbit classification commonly used in Coulomb-corrected models is over-simplified. We identify several types of rescattered orbits, such as those responsible for the low-energy structures reported in the literature, and determine the momentum regions in which they occur. We also find formerly overlooked interference patterns caused by backscattered Coulomb-corrected orbits and assess their effect on photoelectron angular distributions. These orbits improve the agreement of photoelectron angular distributions computed with the CQSFA with the outcome of ab initio methods for high-energy photoelectrons perpendicular to the field polarization axis.
NASA Astrophysics Data System (ADS)
Bolduc, A.; Gauthier, P.-A.; Berry, A.
2017-12-01
While perceptual evaluation and sound quality testing with juries are now recognized as essential parts of acoustical product development, they are rarely implemented with spatial sound field reproduction. Instead, monophonic, stereophonic, or binaural presentations are used. This paper investigates the workability and value of a method that uses complete vibroacoustic engineering models for auralization based on 2.5D Wave Field Synthesis (WFS). This method is proposed so that spatial characteristics such as directivity patterns and direction-of-arrival are part of the reproduced sound field while preserving the model's complete formulation, which coherently combines frequency and spatial responses. Modifications to the standard 2.5D WFS operators are proposed for extended primary sources, affecting the reference line definition and compensating for out-of-plane elementary primary sources. Reported simulations and experiments on reproductions of two physically-accurate vibroacoustic models of thin plates show that the proposed method allows for effective reproduction in the horizontal plane: spatial- and frequency-domain features are recreated. Application of the method to the sound rendering of a virtual transmission loss measurement setup shows its potential for use in virtual acoustical prototyping for jury testing.
This compilation of field collection standard operating procedures (SOPs) was assembled for the U.S. Environmental Protection Agency’s (EPA) Pilot Study add-on to the Green Housing Study (GHS). A detailed description of this add-on study can be found in the peer reviewed research...
A simple calculation method for determination of equivalent square field.
Shafiei, Seyed Ali; Hasanzadeh, Hadi; Shafiei, Seyed Ahmad
2012-04-01
Determination of the equivalent square field for rectangular and shielded fields is of great importance in radiotherapy centers and treatment planning software. This is accomplished using standard tables and empirical formulas. The goal of this paper is to present a formula, based on analysis of the reduction in scatter according to the inverse square law, for obtaining the equivalent field. Tables published by different agencies, such as the ICRU (International Commission on Radiation Units and Measurements), are based on experimental data; but there also exist mathematical formulas that yield the equivalent square field of an irregular rectangular field and are used extensively in computational techniques for dose determination. These processes lead to some complicated and time-consuming formulas, which motivated the current study. In this work, considering the portion of scattered radiation in the absorbed dose at a point of measurement, a numerical formula was obtained, from which a simple formula was developed to calculate the equivalent square field. Using polar coordinates and the inverse square law leads to a simple formula for calculation of the equivalent field. The presented method is an analytical approach with which one can estimate the equivalent square field of a rectangular field, and it may be used for a shielded field or an off-axis point. Moreover, one can calculate the equivalent field of a rectangular field, to a good approximation, from the reduction of scattered radiation with the inverse square law. This method may be useful in computing the Percentage Depth Dose and Tissue-Phantom Ratio, which are extensively used in treatment planning.
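For context, the rule of thumb most often used in practice (and reflected in the published tables) is the Sterling area-to-perimeter approximation, s = 4A/P = 2ab/(a + b) for an a × b field. The snippet below implements that standard rule, not the inverse-square-law formula derived in this paper.

```python
def equivalent_square(a_cm: float, b_cm: float) -> float:
    """Sterling approximation: side of the square field with the same
    area-to-perimeter ratio as an a x b rectangular field."""
    return 2.0 * a_cm * b_cm / (a_cm + b_cm)

# A 10 cm x 20 cm field behaves approximately like a 13.3 cm square field.
print(f"{equivalent_square(10.0, 20.0):.1f} cm")
```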
Blekhman, Ran; Tang, Karen; Archie, Elizabeth A; Barreiro, Luis B; Johnson, Zachary P; Wilson, Mark E; Kohn, Jordan; Yuan, Michael L; Gesquiere, Laurence; Grieneisen, Laura E; Tung, Jenny
2016-08-16
Field studies of wild vertebrates are frequently associated with extensive collections of banked fecal samples: unique resources for understanding ecological, behavioral, and phylogenetic effects on the gut microbiome. However, we do not understand whether sample storage methods confound the ability to investigate interindividual variation in gut microbiome profiles. Here, we extend previous work on storage methods for gut microbiome samples by comparing immediate freezing, the gold standard of preservation, to three methods commonly used in vertebrate field studies: lyophilization, storage in ethanol, and storage in RNAlater. We found that the signature of individual identity consistently outweighed storage effects: alpha diversity and beta diversity measures were significantly correlated across methods, and while samples often clustered by donor, they never clustered by storage method. Provided that all analyzed samples are stored the same way, banked fecal samples therefore appear highly suitable for investigating variation in gut microbiota. Our results open the door to a much-expanded perspective on variation in the gut microbiome across species and ecological contexts.
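The alpha-diversity comparison can be pictured with a toy computation: Shannon diversity per sample under two storage methods, then a rank correlation across donors. Everything below (counts, method names) is simulated for illustration and does not use the study's data.

```python
import numpy as np
from scipy.stats import spearmanr

def shannon(counts):
    """Shannon alpha diversity from a vector of taxon counts."""
    p = counts / counts.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

# Hypothetical taxon-count tables: rows = donors, columns = taxa,
# one matrix per storage method (e.g., frozen vs. ethanol).
rng = np.random.default_rng(42)
base = rng.gamma(2.0, 50.0, size=(10, 40))
frozen  = rng.poisson(base)
ethanol = rng.poisson(base * rng.uniform(0.8, 1.2, size=base.shape))

alpha_frozen  = np.array([shannon(row) for row in frozen])
alpha_ethanol = np.array([shannon(row) for row in ethanol])
rho, p = spearmanr(alpha_frozen, alpha_ethanol)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```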
Field-programmable lab-on-a-chip based on microelectrode dot array architecture.
Wang, Gary; Teng, Daniel; Lai, Yi-Tse; Lu, Yi-Wen; Ho, Yingchieh; Lee, Chen-Yi
2014-09-01
The fundamentals of electrowetting-on-dielectric (EWOD) digital microfluidics are very strong: advantageous capability in the manipulation of fluids, small test volumes, precise dynamic control and detection, and microscale systems. These advantages are very important for future biochip developments, but the development of EWOD microfluidics has been hindered by the absence of: integrated detector technology, standard commercial components, on-chip sample preparation, standard manufacturing technology and end-to-end system integration. A field-programmable lab-on-a-chip (FPLOC) system based on microelectrode dot array (MEDA) architecture is presented in this research. The MEDA architecture proposes a standard EWOD microfluidic component called 'microelectrode cell', which can be dynamically configured into microfluidic components to perform microfluidic operations of the biochip. A proof-of-concept prototype FPLOC, containing a 30 × 30 MEDA, was developed by using generic integrated circuits computer aided design tools, and it was manufactured with standard low-voltage complementary metal-oxide-semiconductor technology, which allows smooth on-chip integration of microfluidics and microelectronics. By integrating 900 droplet detection circuits into microelectrode cells, the FPLOC has achieved large-scale integration of microfluidics and microelectronics. Compared to the full-custom and bottom-up design methods, the FPLOC provides hierarchical top-down design approach, field-programmability and dynamic manipulations of droplets for advanced microfluidic operations.
Grimes, D.J.; Marranzino, A.P.
1968-01-01
Two spectrographic methods are used in mobile field laboratories of the U. S. Geological Survey. In the direct-current arc method, the ground sample is mixed with graphite powder, packed into an electrode crater, and burned to completion. Thirty elements are determined. In the spark method, the sample, ground to pass a 150-mesh screen, is digested in hydrofluoric acid followed by evaporation to dryness and dissolution in aqua regia. The solution is fed into the spark gap by means of a rotating-disk electrode arrangement and is excited with an alternating-current spark discharge. Fourteen elements are determined. In both techniques, light is recorded on Spectrum Analysis No. 1, 35-millimeter film, and the spectra are compared visually with those of standard films.
Rapid assessment of rice seed availability for wildlife in harvested fields
Halstead, B.J.; Miller, M.R.; Casazza, Michael L.; Coates, P.S.; Farinha, M.A.; Benjamin, Gustafson K.; Yee, J.L.; Fleskes, J.P.
2011-01-01
Rice seed remaining in commercial fields after harvest (waste rice) is a critical food resource for wintering waterfowl in rice-growing regions of North America. Accurate and precise estimates of the seed mass density of waste rice are essential for planning waterfowl wintering habitat extents and management. In the Sacramento Valley of California, USA, the existing method for obtaining estimates of availability of waste rice in harvested fields produces relatively precise estimates, but the labor-, time-, and machinery-intensive process is not practical for routine assessments needed to examine long-term trends in waste rice availability. We tested several experimental methods designed to rapidly derive estimates that would not be burdened with the disadvantages of the existing method. We first conducted a simulation study of the efficiency of each method and then conducted field tests. For each approach, methods did not vary in root mean squared error, although some methods did exhibit bias for both simulations and field tests. Methods also varied substantially in the time to conduct each sample and in the number of samples required to detect a standard trend. Overall, modified line-intercept methods performed well for estimating the density of rice seeds. Waste rice in the straw, although not measured directly, can be accounted for by a positive relationship with density of rice on the ground. Rapid assessment of food availability is a useful tool to help waterfowl managers establish and implement wetland restoration and agricultural habitat-enhancement goals for wintering waterfowl. © 2011 The Wildlife Society.
NASA Astrophysics Data System (ADS)
Sadowski, T.; Kneć, M.
2016-04-01
Fatigue tests have been conducted for more than two hundred years. Despite this long history, because fatigue phenomena are very complex, assessment of the fatigue response of standard materials or composites still requires a long time. A quite precise way to estimate fatigue parameters is to test at least 30 standardized specimens of the analysed material, with further statistical post-processing. In the case of structural elements like hybrid joints (Figure 1), the situation is much more complex, as more factors influence the fatigue load capacity due to the much more complicated structure of the joint in comparison to standard material specimens, i.e. the occurrence of: welded hot spots or rivets, adhesive layers, local notches creating stress concentrations, etc. In order to shorten testing time, some rapid methods are known: Locati's method [1] - step-by-step load increments up to failure; Prot's method [2] - constant increase of the load amplitude up to failure; Lehr's method [2] - seeking the point during regular fatigue loading at which an increase of temperature or strains becomes non-linear. The present article proposes a new method of fatigue response assessment - a combination of the Locati and Lehr methods.
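Locati-type step tests are usually evaluated with Miner's linear damage rule against an assumed S-N (Basquin) curve: D = Σ n_i/N_i with N(S) = (S/A)^(1/b), and failure predicted when D reaches 1. The sketch below accumulates damage over hypothetical load steps; the Basquin constants and step schedule are illustrative assumptions, not values from the article.

```python
import numpy as np

# Assumed Basquin curve S = A * N**b  =>  N(S) = (S / A)**(1 / b).
A, b = 900.0, -0.1          # hypothetical material constants (MPa, dimensionless)

def cycles_to_failure(stress_mpa):
    return (stress_mpa / A) ** (1.0 / b)

# Locati-style staircase: 1e5 cycles at each increasing stress amplitude.
steps_mpa = np.arange(200.0, 401.0, 25.0)
n_per_step = 1.0e5

damage = np.cumsum(n_per_step / cycles_to_failure(steps_mpa))  # Miner's rule
fail_idx = np.argmax(damage >= 1.0)   # first step where cumulative damage reaches 1
print(f"Miner damage after each step: {np.round(damage, 3)}")
print(f"predicted failure during the step at {steps_mpa[fail_idx]:.0f} MPa")
```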
Low-derivative operators of the Standard Model effective field theory via Hilbert series methods
NASA Astrophysics Data System (ADS)
Lehman, Landon; Martin, Adam
2016-02-01
In this work, we explore an extension of Hilbert series techniques to count operators that include derivatives. For sufficiently low-derivative operators, we conjecture an algorithm that gives the number of invariant operators, properly accounting for redundancies due to the equations of motion and integration by parts. Specifically, the conjectured technique can be applied whenever there is only one Lorentz invariant for a given partitioning of derivatives among the fields. At higher numbers of derivatives, equation-of-motion redundancies can be removed, but the increased number of Lorentz contractions spoils the subtraction of integration-by-parts redundancies. While restricted, this technique is sufficient to automatically recreate the complete set of invariant operators of the Standard Model effective field theory for dimensions 6 and 7 (for arbitrary numbers of flavors). At dimension 8, the algorithm does not automatically generate the complete operator set; however, it suffices for all but five classes of operators. For these remaining classes, there is a well-defined procedure to manually determine the number of invariants. Assuming our method is correct, we derive a set of 535 dimension-8 operators for N_f = 1.
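The counting idea can be seen in a toy case: for a U(1) theory with fields of charge +1 and -1, the Hilbert series is H(t, q) = 1/((1 - tq)(1 - t/q)), and the number of invariants at each order in t is the q^0 coefficient (the Molien-Weyl projection). A minimal sympy sketch, ignoring derivatives and Lorentz structure entirely, so it illustrates only the projection step, not the paper's algorithm:

```python
import sympy as sp

t, q = sp.symbols('t q')

# Expand each geometric-series factor of H(t, q) = 1/((1 - t*q)(1 - t/q))
# to finite order; gauge invariants at order t^n are the q^0 coefficients.
order = 8
expansion = sp.expand(sum((t * q) ** a for a in range(order)) *
                      sum((t / q) ** b for b in range(order)))

for n in range(order):
    c_n = expansion.coeff(t, n)          # Laurent polynomial in q
    print(f"degree {n}: {c_n.coeff(q, 0)} invariant(s)")
# Expected: one invariant at every even degree, powers of (phi * phibar).
```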
Antón, Alfonso; Pazos, Marta; Martín, Belén; Navero, José Manuel; Ayala, Miriam Eleonora; Castany, Marta; Martínez, Patricia; Bardavío, Javier
2013-01-01
To assess sensitivity, specificity, and agreement among automated event analysis, automated trend analysis, and expert evaluation in detecting glaucoma progression. This was a prospective study that included 37 eyes with a follow-up of 36 months. All had glaucomatous disks and fields and underwent reliable visual field testing every 6 months. Each series of fields was assessed with 3 different methods: subjective assessment by 2 independent teams of glaucoma experts, glaucoma/guided progression analysis (GPA) event analysis, and GPA (visual field index-based) trend analysis. The kappa agreement coefficient between methods, and the sensitivity and specificity of each method using expert opinion as the gold standard, were calculated. The incidence of glaucoma progression was 16% to 18% over 3 years, but only 3 cases showed progression with all 3 methods. The kappa agreement coefficient was high (k=0.82) between subjective expert assessment and GPA event analysis, and only moderate between these two and GPA trend analysis (k=0.57). Sensitivity and specificity for GPA event and GPA trend analysis were 71% and 96%, and 57% and 93%, respectively. The 3 methods detected similar numbers of progressing cases. The GPA event analysis and expert subjective assessment showed high agreement between them and moderate agreement with GPA trend analysis. Over a period of 3 years, both methods of GPA analysis offered high specificity; event analysis showed 83% sensitivity, and trend analysis had a 66% sensitivity.
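Cohen's kappa, used here to quantify inter-method agreement, is κ = (p_o - p_e)/(1 - p_e), where p_o is the observed agreement and p_e the agreement expected by chance. A minimal implementation, with made-up labels rather than the study's data:

```python
import numpy as np

def cohen_kappa(a, b):
    """Cohen's kappa for two raters' categorical labels."""
    a, b = np.asarray(a), np.asarray(b)
    labels = np.union1d(a, b)
    po = np.mean(a == b)                                         # observed agreement
    pe = sum(np.mean(a == L) * np.mean(b == L) for L in labels)  # chance agreement
    return (po - pe) / (1.0 - pe)

# Hypothetical progression calls (1 = progressing) by two methods on 37 eyes.
rng = np.random.default_rng(3)
expert = rng.integers(0, 2, 37)
gpa    = np.where(rng.random(37) < 0.85, expert, 1 - expert)  # ~85% raw agreement
print(f"kappa = {cohen_kappa(expert, gpa):.2f}")
```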
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herchko, S; Ding, G
2016-06-15
Purpose: To develop an accurate, straightforward, and user-independent method for performing light versus radiation field coincidence quality assurance utilizing EPID images, a simple phantom made of readily-accessible materials, and a free software program. Methods: A simple phantom consisting of a blocking tray, graph paper, and high-density wire was constructed. The phantom was used to accurately set the size of a desired light field and imaged on the electronic portal imaging device (EPID). A macro written for use in ImageJ, a free image processing software, was then used to determine the radiation field size, utilizing the high-density wires on the phantom for a pixel-to-distance calibration. The macro also performs an analysis on the measured radiation field utilizing the tolerances recommended in AAPM Task Group #142. To verify the accuracy of this method, radiochromic film was used to qualitatively demonstrate agreement between the film and EPID results, and an additional ImageJ macro was used to quantitatively compare the radiation field sizes measured with both the EPID and film images. Results: The results of this technique were benchmarked against film measurements, which have been the gold standard for testing light versus radiation field coincidence. The agreement between this method and film measurements was within 0.5 mm. Conclusion: Due to the operator dependency associated with tracing light fields and measuring radiation fields by hand when using film, this method allows for a more accurate comparison between the light and radiation fields with minimal operator dependency. Removing the need for radiographic or radiochromic film also eliminates a recurring cost and increases procedural efficiency.
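Radiation field size is conventionally taken between the 50%-of-maximum points of a beam profile; with wires of known spacing in the image, pixels convert to millimetres. A rough Python equivalent of that measurement, with a synthetic profile and hypothetical calibration, not the authors' ImageJ macro:

```python
import numpy as np
from scipy.special import expit

def field_width_mm(profile, mm_per_px):
    """Distance between the 50%-of-max crossings of a 1D beam profile,
    with linear interpolation at each edge."""
    half = 0.5 * profile.max()
    above = np.where(profile >= half)[0]
    lo, hi = above[0], above[-1]
    # Interpolate the sub-pixel crossing on each side.
    f_lo = lo - (profile[lo] - half) / (profile[lo] - profile[lo - 1])
    f_hi = hi + (profile[hi] - half) / (profile[hi] - profile[hi + 1])
    return (f_hi - f_lo) * mm_per_px

# Synthetic EPID profile: a 100 mm field imaged at 0.5 mm/pixel.
x = np.arange(400)
profile = expit(x - 100.0) - expit(x - 300.0)
print(f"measured width = {field_width_mm(profile, mm_per_px=0.5):.1f} mm")
```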
Conventionalism and Methodological Standards in Contending with Skepticism about Uncertainty
NASA Astrophysics Data System (ADS)
Brumble, K. C.
2012-12-01
What it means to measure and interpret confidence and uncertainty in a result is often particular to a specific scientific community and its methodology of verification. Additionally, methodology in the sciences varies greatly across disciplines and scientific communities. Understanding the accuracy of predictions of a particular science thus depends largely upon having an intimate working knowledge of the methods, standards, and conventions utilized and underpinning discoveries in that scientific field. Thus, valid criticism of scientific predictions and discoveries must be conducted by those who are literate in the field in question: they must have intimate working knowledge of the methods of the particular community and of the particular research under question. The interpretation and acceptance of uncertainty is one such shared, community-based convention. In the philosophy of science, this methodological and community-based way of understanding scientific work is referred to as conventionalism. By applying the conventionalism of historian and philosopher of science Thomas Kuhn to recent attacks upon methods of multi-proxy mean temperature reconstructions, I hope to illuminate how climate skeptics and their adherents fail to appreciate the need for community-based fluency in the methodological standards for understanding uncertainty shared by the wider climate science community. Further, I will flesh out a picture of climate science community standards of evidence and statistical argument following the work of philosopher of science Helen Longino. I will describe how failure to appreciate the conventions of professionalism and standards of evidence accepted in the climate science community results in the application of naïve falsification criteria. Appeal to naïve falsification in turn has allowed scientists outside the standards and conventions of the mainstream climate science community to consider themselves and to be judged by climate skeptics as valid critics of particular statistical reconstructions with naïve and misapplied methodological criticism. Examples will include the skeptical responses to multi-proxy mean temperature reconstructions and congressional hearings criticizing the work of Michael Mann et al.'s Hockey Stick.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolthaus, J; Asselen, B van; Woodings, S
2016-06-15
Purpose: With an MR-linac, radiation is delivered in the presence of a magnetic field. Modifications in the codes of practice (CoPs) for reference dosimetry are required to incorporate the effect of the magnetic field. Methods: In most CoPs the absorbed dose is determined using the well-known kQ formalism, as the product of the calibration coefficient, the corrected electrometer reading, and kQ, which accounts for the difference in beam quality. To keep a similar formalism, a single correction factor is introduced which replaces kQ and corrects for both beam quality and B-field, kQ,B. In this study we propose a method to determine kQ,B under reference conditions in the MR-linac without using a primary standard, as the product of: (i) the ratio between detector readings without and with B-field (kB); (ii) the ratio between doses at the point of measurement with and without B-field (rho); and (iii) kQ in the absence of the B-field in the MR-linac beam (kQmrl0,Q0). The ratio of the readings, which covers the change in detector reading due to the different electron trajectories in the detector, was measured with a waterproof ionization chamber (IBA-FC65g) in a water phantom in the MR-linac without and with B-field. The change in dose-to-water at the point of measurement due to the B-field was determined with a Monte Carlo based TPS. Results: For the presented approach, the measured ratio of readings is 0.956, and the calculated ratio of doses at the point of measurement is 0.995. Based on TPR20,10 measurements, kQ was calculated as 0.989 using NCS-18. This yields a value of 0.9408 for kQ,B. Conclusion: The presented approach to determine kQ,B agrees with a method based on primary standards within 0.4%, with an uncertainty of 1% (1 standard uncertainty). It differs by 1.3% from a similar approach using a PMMA phantom and an NE2571 chamber.
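The factorization can be checked directly from the reported numbers, kQ,B = kB · ρ · kQ; a one-liner reproduces the quoted value:

```python
# kQ,B = kB (reading ratio) * rho (dose ratio) * kQ (beam quality, no B-field)
k_B, rho, k_Q = 0.956, 0.995, 0.989
k_QB = k_B * rho * k_Q
print(f"kQ,B = {k_QB:.4f}")   # ~0.9408, matching the reported value
```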
Visual Field Defects and Retinal Ganglion Cell Losses in Human Glaucoma Patients
Harwerth, Ronald S.; Quigley, Harry A.
2007-01-01
Objective The depth of visual field defects is correlated with retinal ganglion cell densities in experimental glaucoma. This study sought to determine whether a similar structure-function relationship holds for human glaucoma. Methods The study was based on retinal ganglion cell densities and visual thresholds of patients with documented glaucoma (Kerrigan-Baumrind et al.). The data were analyzed by a model that predicted ganglion cell densities from standard clinical perimetry, which were then compared to histologic cell counts. Results The model, without free parameters, produced accurate and relatively precise quantification of ganglion cell densities associated with visual field defects. For 437 sets of data, the unity correlation for predicted vs. measured cell densities had a coefficient of determination of 0.39. The mean absolute deviation of the predicted vs. measured values was 2.59 dB; the mean and SD of the distribution of residual errors of prediction were -0.26 ± 3.22 dB. Conclusions Visual field defects measured by standard clinical perimetry are proportional to the neural losses caused by glaucoma. Clinical Relevance The evidence for quantitative structure-function relationships provides a scientific basis for interpreting glaucomatous neuropathy from visual thresholds and supports the application of standard perimetry to establish the stage of the disease. PMID:16769839
Fight the power: the limits of empiricism and the costs of positivistic rigor.
Indick, William
2002-01-01
A summary of the influence of positivistic philosophy and empiricism on the field of psychology is followed by a critique of the empirical method. The dialectic process is advocated as an alternative method of inquiry. The main advantage of the dialectic method is that it is open to any logical argument, including empirical hypotheses, but unlike empiricism, it does not automatically reject arguments that are not based on observable data. Evolutionary and moral psychology are discussed as examples of important fields of study that could benefit from types of arguments that frequently do not conform to the empirical standards of systematic observation and falsifiability of hypotheses. A dialectic method is shown to be a suitable perspective for those fields of research, because it allows for logical arguments that are not empirical and because it fosters a functionalist perspective, which is indispensable for both evolutionary and moral theories. It is suggested that all psychologists may gain from adopting a dialectic approach, rather than restricting themselves to empirical arguments alone.
Pancreatic cancer study based on full field OCT and dynamic full field OCT (Conference Presentation)
NASA Astrophysics Data System (ADS)
Apelian, Clement; Camus, Marine; Prat, Frederic; Boccara, A. Claude
2017-02-01
Pancreatic cancer is one of the most feared cancer types due to high death rates and the difficulty of performing surgery. Outcomes for this cancer could benefit from recent technological developments for diagnosis. We used a combination of standard Full Field OCT and Dynamic Full Field OCT to capture both morphological features and metabolic function of rodent pancreas in normal and cancerous conditions, with and without chemotherapy. Results were compared to histology to evaluate the performance and specificity of the method. The comparison highlighted the importance of a number of endogenous markers, such as immune cells, fibrous development, architecture, and more.
Design of sparse Halbach magnet arrays for portable MRI using a genetic algorithm.
Cooley, Clarissa Zimmerman; Haskell, Melissa W; Cauley, Stephen F; Sappo, Charlotte; Lapierre, Cristen D; Ha, Christopher G; Stockmann, Jason P; Wald, Lawrence L
2018-01-01
Permanent magnet arrays offer several attributes attractive for the development of a low-cost portable MRI scanner for brain imaging. They offer the potential for a relatively lightweight, low- to mid-field system with no cryogenics, a small fringe field, and no electrical power requirements or heat dissipation needs. The cylindrical Halbach array, however, requires external shimming or mechanical adjustments to produce B0 fields with standard MRI homogeneity levels (e.g., 0.1 ppm over FOV), particularly when constrained or truncated geometries are needed, such as a head-only magnet where the magnet length is constrained by the shoulders. For portable scanners using rotation of the magnet for spatial encoding with generalized projections, the spatial pattern of the field is important since it acts as the encoding field. In either a static or rotating magnet, it will be important to be able to optimize the field pattern of cylindrical Halbach arrays in a way that retains construction simplicity. To achieve this, we present a method for designing an optimized cylindrical Halbach magnet using a genetic algorithm to achieve either homogeneity (for standard MRI applications) or a favorable spatial encoding field pattern (for rotational spatial encoding applications). We compare the chosen designs against a standard, fully populated sparse Halbach design, and evaluate optimized spatial encoding fields using point-spread-function and image simulations. We validate the calculations by comparing to the measured field of a constructed magnet. The experimentally implemented design produced fields in good agreement with the predicted fields, and the genetic algorithm was successful in improving the chosen metrics. For the uniform target field, an order-of-magnitude homogeneity improvement was achieved compared to the un-optimized, fully populated design. For the rotational encoding design, the resolution uniformity is improved by 95% compared to a uniformly populated design.
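A genetic-algorithm search over sparse magnet placements can be sketched generically: a binary occupancy genome, a field-homogeneity fitness, and selection, crossover, and mutation. In the sketch below the per-slot field maps are random placeholders standing in for a real dipole model, so it illustrates only the optimization loop, not the paper's magnet physics or its actual encoding-field objective.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 48 candidate magnet slots; each slot's contribution to
# the field at 200 sample points in the FOV is precomputed (random stand-ins).
n_slots, n_points = 48, 200
slot_fields = rng.normal(1.0, 0.3, size=(n_slots, n_points))

def fitness(occupancy):
    """Homogeneity objective: minimize std/mean of the summed field."""
    if not occupancy.any():
        return -np.inf                      # reject empty designs
    field = occupancy @ slot_fields
    return -np.std(field) / np.mean(field)

# Minimal genetic algorithm over binary occupancy vectors.
pop = rng.integers(0, 2, size=(60, n_slots))
for gen in range(200):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][:20]]      # truncation selection
    cuts = rng.integers(1, n_slots, size=40)
    kids = np.array([np.concatenate([parents[rng.integers(20)][:c],
                                     parents[rng.integers(20)][c:]])
                     for c in cuts])                  # one-point crossover
    kids[rng.random(kids.shape) < 0.02] ^= 1          # bit-flip mutation
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print(f"best inhomogeneity (std/mean): {-fitness(best):.4f}")
```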
Recommended methods for monitoring change in bird populations by counting and capture of migrants
David J. T. Hussell; C. John Ralph
2005-01-01
Counts and banding captures of spring or fall migrants can generate useful information on the status and trends of the source populations. To do so, the counts and captures must be taken and recorded in a standardized and consistent manner. We present recommendations for field methods for counting and capturing migrants at intensively operated sites, such as bird...
NASA Technical Reports Server (NTRS)
Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); Johnson, B. Carol; Yoon, Howard W.; Bruce, Sally S.; Shaw, Ping-Shine; Thompson, Ambler; Hooker, Stanford B.; Barnes, Robert A.; Eplee, Robert E., Jr.;
1999-01-01
This report documents the fifth Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Intercalibration Round-Robin Experiment (SIRREX-5), which was held at the National Institute of Standards and Technology (NIST) on 23-30 July 1996. The agenda for SIRREX-5 was established based on recommendations made during SIRREX-4. For the first time in a SIRREX activity, instrument intercomparisons were performed at field sites, which were near NIST. The goals of SIRREX-5 were to continue the emphasis on training and the implementation of standard measurement practices, investigate the calibration methods and measurement chains in use by the oceanographic community, provide opportunities for discussion, and intercompare selected instruments. As at SIRREX-4, the day was divided between morning lectures and afternoon laboratory exercises. A set of core laboratory sessions were performed: 1) in-water radiant flux measurements; 2) in-air radiant flux measurements; 3) spectral radiance responsivity measurements using the plaque method; 4) device calibration or stability monitoring with portable field sources; and 5) various ancillary exercises designed to illustrate radiometric concepts. Before, during, and after SIRREX-5, NIST calibrated the SIRREX-5 participating radiometers for radiance and irradiance responsivity. The Facility for Automated Spectroradiometric Calibrations (FASCAL) was scheduled for spectral irradiance calibrations for standard lamps during SIRREX-5. Three lamps from the SeaWiFS community were submitted and two were calibrated.
Self-Consistent Chaotic Transport in a High-Dimensional Mean-Field Hamiltonian Map Model
Martínez-del-Río, D.; del-Castillo-Negrete, D.; Olvera, A.; ...
2015-10-30
We studied the self-consistent chaotic transport in a Hamiltonian mean-field model. This model provides a simplified description of transport in marginally stable systems including vorticity mixing in strong shear flows and electron dynamics in plasmas. Self-consistency is incorporated through a mean-field that couples all the degrees-of-freedom. The model is formulated as a large set of N coupled standard-like area-preserving twist maps in which the amplitude and phase of the perturbation, rather than being constant like in the standard map, are dynamical variables. Of particular interest is the study of the impact of periodic orbits on the chaotic transport and coherent structures. Furthermore, numerical simulations show that self-consistency leads to the formation of a coherent macro-particle trapped around the elliptic fixed point of the system that appears together with an asymptotic periodic behavior of the mean field. To model this asymptotic state, we introduced a non-autonomous map that allows a detailed study of the onset of global transport. A turnstile-type transport mechanism that allows transport across instantaneous KAM invariant circles in non-autonomous systems is discussed. As a first step to understand transport, we study a special type of orbits referred to as sequential periodic orbits. Using symmetry properties we show that, through replication, high-dimensional sequential periodic orbits can be generated starting from low-dimensional periodic orbits. We show that sequential periodic orbits in the self-consistent map can be continued from trivial (uncoupled) periodic orbits of standard-like maps using numerical and asymptotic methods. Normal forms are used to describe these orbits and to find the values of the map parameters that guarantee their existence. Numerical simulations are used to verify the prediction from the asymptotic methods.
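A toy version of such a model: N standard-like twist maps in which the kick amplitude and phase are taken from the instantaneous mean field M = (1/N) Σ_j exp(iθ_j) rather than being constants. This is an illustrative self-consistent coupling for intuition only, not the specific map formulated in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
N, steps, K = 5000, 1000, 1.5
theta = rng.uniform(0, 2 * np.pi, N)
p = rng.normal(0, 0.1, N)

mean_field = np.empty(steps)
for t in range(steps):
    M = np.exp(1j * theta).mean()            # self-consistent mean field
    amp, phase = np.abs(M), np.angle(M)
    p = p + K * amp * np.sin(theta - phase)  # kick amplitude/phase are dynamical
    theta = (theta + p) % (2 * np.pi)        # twist step (area-preserving)
    mean_field[t] = amp

print(f"|M| averaged over the last 100 steps: {mean_field[-100:].mean():.3f}")
```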
Evaluation of Visual Field Progression in Glaucoma: Quasar Regression Program and Event Analysis.
Díaz-Alemán, Valentín T; González-Hernández, Marta; Perera-Sanz, Daniel; Armas-Domínguez, Karintia
2016-01-01
To determine the sensitivity, specificity, and agreement between the Quasar program, glaucoma progression analysis (GPA II) event analysis, and expert opinion in the detection of glaucomatous progression. The Quasar program is based on linear regression analysis of both the mean defect (MD) and the pattern standard deviation (PSD). Each series of visual fields was evaluated by three methods: Quasar, GPA II, and four experts. The sensitivity, specificity, and agreement (kappa) for each method were calculated, using expert opinion as the reference standard. The study included 439 SITA Standard visual fields of 56 eyes of 42 patients, with a mean of 7.8 ± 0.8 visual fields per eye. When suspected cases of progression were considered stable, the sensitivity and specificity of Quasar, GPA II, and the experts were 86.6% and 70.7%, 26.6% and 95.1%, and 86.6% and 92.6%, respectively. When suspected cases of progression were considered as progressing, the sensitivity and specificity of Quasar, GPA II, and the experts were 79.1% and 81.2%, 45.8% and 90.6%, and 85.4% and 90.6%, respectively. The agreement between Quasar and GPA II when suspected cases were considered stable or progressing was 0.03 and 0.28, respectively. The degree of agreement between Quasar and the experts when suspected cases were considered stable or progressing was 0.472 and 0.507. The degree of agreement between GPA II and the experts when suspected cases were considered stable or progressing was 0.262 and 0.342. The combination of MD and PSD regression analysis in the Quasar program showed better agreement with the experts and higher sensitivity than GPA II.
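Trend-style progression flags typically come from ordinary least-squares regression of a field index against time, with a significantly negative slope marking progression. A minimal sketch with hypothetical MD values; the Quasar program's actual criteria combine MD and PSD and are not reproduced here.

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical series: MD (dB) from visual fields taken every 6 months.
years = np.arange(0, 4.0, 0.5)
md_db = np.array([-2.1, -2.3, -2.2, -2.8, -3.0, -3.1, -3.4, -3.6])

fit = linregress(years, md_db)
progressing = fit.slope < 0 and fit.pvalue < 0.05   # illustrative criterion
print(f"slope = {fit.slope:.2f} dB/year (p = {fit.pvalue:.4f}) "
      f"-> {'progression' if progressing else 'stable'}")
```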
CTEPP STANDARD OPERATING PROCEDURE FOR CONDUCTING STAFF AND PARTICIPANT TRAINING (SOP-2.27)
This SOP describes the method to train project staff and participants to collect various field samples and questionnaire data for the study. The training plan consists of two separate components: project staff training and participant training. Before project activities begin,...
An "Environmental Issues in Agronomy" Course.
ERIC Educational Resources Information Center
Barbarick, K. A.
1992-01-01
Describes and evaluates the format and grading procedure of an Environmental Agronomy course offered at Colorado State University. Teaching methods include videotape use, field trips, and lectures addressing topics such as integrated pest management, land application of sewage sludge, pesticide degradation, and organic farming. Standard course…
STANDARDIZED AUTOMATED AND MANUAL METHODS TO SPECIATE MERCURY: FIELD AND LABORATORY STUDIES
The urban atmosphere contains a large number of air pollutants, including mercury. Atmospheric mercury is predominantly present in the elemental form (Hg0). However, emissions from industrial activities (e.g., incinerators, fossil fuel combustion sources, and others) emit other f...
But Is It Nutritious? Computer Analysis Creates Healthier Meals.
ERIC Educational Resources Information Center
Corrigan, Kathleen A.; Aumann, Margaret B.
1993-01-01
A computerized menu-planning method, "Nutrient Standard Menu Planning" (NSMP), uses today's technology to create healthier menus. Field tested in 20 California school districts, the advantages of NSMP are cost effectiveness, increased flexibility, greater productivity, improved public relations, improved finances, and improved student…
Retrieving Storm Electric Fields from Aircraft Field Mill Data. Part 1; Theory
NASA Technical Reports Server (NTRS)
Koshak, W. J.
2006-01-01
It is shown that the problem of retrieving storm electric fields from an aircraft instrumented with several electric field mill sensors can be expressed in terms of a standard Lagrange multiplier optimization problem. The method naturally removes aircraft charge from the retrieval process without having to use a high voltage stinger and linearly combined mill data values. It allows a variety of user-supplied physical constraints (the so-called side constraints in the theory of Lagrange multipliers) and also helps improve absolute calibration. Additionally, this paper introduces an alternate way of performing the absolute calibration of an aircraft that has some benefits over conventional analyses. It is accomplished by using the time derivatives of mill and pitch data for a pitch down maneuver performed at high (greater than 1 km) altitude. In Part II of this study, the above methods are tested and then applied to complete a full calibration of a Citation aircraft.
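The Lagrange-multiplier formulation reduces to a linear KKT system when the objective is quadratic: minimizing ||Ax - b||² subject to Cx = d gives [[2AᵀA, Cᵀ], [C, 0]]·[x; λ] = [2Aᵀb; d]. A generic sketch of that machinery with toy matrices, not the actual mill-calibration equations or side constraints of the paper:

```python
import numpy as np

def constrained_lstsq(A, b, C, d):
    """Minimize ||A x - b||^2 subject to C x = d via the KKT system
    [[2 A^T A, C^T], [C, 0]] [x; lam] = [2 A^T b; d]."""
    n, m = A.shape[1], C.shape[0]
    kkt = np.block([[2.0 * A.T @ A, C.T],
                    [C, np.zeros((m, m))]])
    rhs = np.concatenate([2.0 * A.T @ b, d])
    sol = np.linalg.solve(kkt, rhs)
    return sol[:n], sol[n:]          # estimate and Lagrange multipliers

# Toy example: recover 3 field components from 6 mill readings, with one
# linear side constraint (components sum to zero, purely illustrative).
rng = np.random.default_rng(7)
A = rng.normal(size=(6, 3))          # hypothetical mill response matrix
x_true = np.array([1.0, -2.0, 1.0])
b = A @ x_true + 0.01 * rng.normal(size=6)
C, d = np.ones((1, 3)), np.zeros(1)

x_hat, lam = constrained_lstsq(A, b, C, d)
print(np.round(x_hat, 3))
```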
NASA Astrophysics Data System (ADS)
Carney, G. D.; Adler-Golden, S. M.; Lesseski, D. C.
1986-04-01
This paper reports (1) improved values for low-lying vibration intervals of H3(+), H2D(+), D2H(+), and D3(+) calculated using the variational method and Simons-Parr-Finlan (1973) representations of the Carney-Porter (1976) and Dykstra-Swope (1979) ab initio H3(+) potential energy surfaces, (2) quartic normal coordinate force fields for isotopic H3(+) molecules, (3) comparisons of variational and second-order perturbation theory, and (4) convergence properties of the Lai-Hagstrom internal coordinate vibrational Hamiltonian. Standard deviations between experimental and ab initio fundamental vibration intervals of H3(+), H2D(+), D2H(+), and D3(+) for these potential surfaces are 6.9 (Carney-Porter) and 1.2/cm (Dykstra-Swope). The standard deviations between perturbation theory and exact variational fundamentals are 5 and 10/cm for the respective surfaces. The internal coordinate Hamiltonian is found to be less efficient than the previously employed 't' coordinate Hamiltonian for these molecules, except in the case of H2D(+).
Review: visual analytics of climate networks
NASA Astrophysics Data System (ADS)
Nocke, T.; Buschmann, S.; Donges, J. F.; Marwan, N.; Schulz, H.-J.; Tominski, C.
2015-09-01
Network analysis has become an important approach in studying complex spatiotemporal behaviour within geophysical observation and simulation data. This new field produces increasing numbers of large geo-referenced networks to be analysed. Particular focus lies currently on the network analysis of the complex statistical interrelationship structure within climatological fields. The standard procedure for such network analyses is the extraction of network measures in combination with static standard visualisation methods. Existing interactive visualisation methods and tools for geo-referenced network exploration are often either not known to the analyst or their potential is not fully exploited. To fill this gap, we illustrate how interactive visual analytics methods in combination with geovisualisation can be tailored for visual climate network investigation. Therefore, the paper provides a problem analysis relating the multiple visualisation challenges to a survey undertaken with network analysts from the research fields of climate and complex systems science. Then, as an overview for the interested practitioner, we review the state-of-the-art in climate network visualisation and provide an overview of existing tools. As a further contribution, we introduce the visual network analytics tools CGV and GTX, providing tailored solutions for climate network analysis, including alternative geographic projections, edge bundling, and 3-D network support. Using these tools, the paper illustrates the application potentials of visual analytics for climate networks based on several use cases including examples from global, regional, and multi-layered climate networks.
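A correlation-based climate network is typically built by correlating every pair of grid-point time series and thresholding: an edge wherever |r| exceeds a cutoff, followed by standard network measures such as degree. A compact sketch with synthetic data; the threshold and field are illustrative, not from the tools reviewed here.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic climatological field: 100 grid points x 500 time steps,
# with a shared signal to induce spatial correlations.
signal = rng.standard_normal(500)
data = 0.6 * signal + rng.standard_normal((100, 500))

# Correlation matrix -> adjacency by thresholding |r|.
r = np.corrcoef(data)
np.fill_diagonal(r, 0.0)
adjacency = (np.abs(r) > 0.25).astype(int)

degree = adjacency.sum(axis=1)       # a standard network measure per grid point
print(f"mean degree: {degree.mean():.1f}, max degree: {degree.max()}")
```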
Farwell, Lawrence A; Richardson, Drew C; Richardson, Graham M
2013-08-01
Brain fingerprinting detects concealed information stored in the brain by measuring brainwave responses. We compared P300 and P300-MERMER event-related brain potentials for error rate/accuracy and statistical confidence in four field/real-life studies. 76 tests detected the presence or absence of information regarding (1) real-life events including felony crimes; (2) real crimes with substantial consequences (either a judicial outcome, i.e., evidence admitted in court, or a $100,000 reward for beating the test); (3) knowledge unique to FBI agents; and (4) knowledge unique to explosives (EOD/IED) experts. With both P300 and P300-MERMER, the error rate was 0%: determinations were 100% accurate, with no false negatives, false positives, or indeterminates. Countermeasures had no effect. Median statistical confidence for determinations was 99.9% with P300-MERMER and 99.6% with P300. Brain fingerprinting methods and scientific standards for laboratory and field applications are discussed, and major differences in methods that produce different results are identified. Markedly different methods in other studies have produced error rates over 10 times higher, and statistical confidences markedly lower, than those of the present studies, our previous studies, and independent replications. The data support the hypothesis that accuracy, reliability, and validity depend on following the brain fingerprinting scientific standards outlined herein.
NASA Astrophysics Data System (ADS)
Rolla, L. Barrera; Rice, H. J.
2006-09-01
In this paper a "forward-advancing" field discretization method suitable for solving the Helmholtz equation in large-scale problems is proposed. The forward wave expansion method (FWEM) is derived from a highly efficient discretization procedure based on interpolation of wave functions known as the wave expansion method (WEM). The FWEM computes the propagated sound field by means of an exclusively forward-advancing solution, neglecting the backscattered field. It is thus analogous to methods such as the (one-way) parabolic equation method (PEM), usually discretized using standard finite difference or finite element methods. These techniques do not require the inversion of large system matrices and thus enable the solution of large-scale acoustic problems where backscatter is not of interest. Calculations using the FWEM are presented for two propagation problems and compared with analytical and theoretical solutions; the comparisons show this forward approximation to be highly accurate. Examples of sound propagation over a screen in upwind and downwind refracting atmospheric conditions at low nodal spacings (0.2 per wavelength in the propagation direction) are also included to demonstrate the flexibility and efficiency of the method.
NASA Astrophysics Data System (ADS)
Leuenberger, Daiana; Balslev-Harder, David; Braban, Christine F.; Ebert, Volker; Ferracci, Valerio; Gieseking, Bjoern; Hieta, Tuomas; Martin, Nicholas A.; Pascale, Céline; Pogány, Andrea; Tiebe, Carlo; Twigg, Marsailidh M.; Vaittinen, Olavi; van Wijk, Janneke; Wirtz, Klaus; Niederhauser, Bernhard
2016-04-01
Measuring ammonia in ambient air is a sensitive and priority issue due to its harmful effects on human health and ecosystems. In addition to its acidifying effect on natural waters and soils and the additional nitrogen input to ecosystems, ammonia is an important precursor for secondary aerosol formation in the atmosphere. The European Directive 2001/81/EC on "National Emission Ceilings for Certain Atmospheric Pollutants (NEC)" regulates ammonia emissions in the member states. However, there is a lack of regulation regarding certified reference material (CRM), applicable analytical methods, measurement uncertainty, and quality assurance and quality control (QA/QC) procedures, as well as in the infrastructure to attain metrological traceability. As shown in a key comparison in 2007, there are even discrepancies between reference materials provided by European National Metrology Institutes (NMIs) at amount fraction levels up to three orders of magnitude higher than ambient air levels. MetNH3 (Metrology for ammonia in ambient air), a three-year project that started in June 2014 in the framework of the European Metrology Research Programme (EMRP), aims to reduce the gap between the requirements set by the European emission regulations and the state of the art of analytical methods and reference materials. The overarching objective of the JRP is to achieve metrological traceability for ammonia measurements in ambient air, from primary certified reference materials and instrumental standards to the field level. This requires the successful completion of three main goals, which have been assigned to three technical work packages: (1) to develop improved reference gas mixtures by static and dynamic gravimetric generation methods: realisation and characterisation of traceable preparative calibration standards (in pressurised cylinders as well as mobile generators) of ammonia amount fractions similar to those in ambient air, based on existing methods for other reactive analytes; the target uncertainty is < 1 % for static mixtures at the 10 to 100 μmol/mol level and < 3 % for portable dynamic generators in the 0 to 500 nmol/mol amount fraction range, with special emphasis on the minimisation of adsorption losses; (2) to develop and characterise laser-based optical spectrometric standards: evaluation and characterisation of the applicability of a newly developed open-path technique, as well as of existing extractive measurement techniques, as optical transfer standards according to metrological standards; (3) to establish the transfer from high-accuracy standards to field-applicable methods: employment of characterised exposure chambers as well as field sites for validation and comparison experiments to test and evaluate the performance of different instruments and measurement methods at ambient-air ammonia amount fractions. Active exchange in workshops and inter-comparisons, publications in technical journals, and presentations at relevant conferences and standardisation bodies will transfer the knowledge to stakeholders and end-users. The work has been carried out in the framework of the EMRP. The EMRP is jointly funded by the EMRP participating countries within EURAMET and the European Union.
Receptive Field Inference with Localized Priors
Park, Mijung; Pillow, Jonathan W.
2011-01-01
The linear receptive field describes a mapping from sensory stimuli to a one-dimensional variable governing a neuron's spike response. However, traditional receptive field estimators such as the spike-triggered average converge slowly and often require large amounts of data. Bayesian methods seek to overcome this problem by biasing estimates towards solutions that are more likely a priori, typically those with small, smooth, or sparse coefficients. Here we introduce a novel Bayesian receptive field estimator designed to incorporate locality, a powerful form of prior information about receptive field structure. The key to our approach is a hierarchical receptive field model that flexibly adapts to localized structure in both spacetime and spatiotemporal frequency, using an inference method known as empirical Bayes. We refer to our method as automatic locality determination (ALD), and show that it can accurately recover various types of smooth, sparse, and localized receptive fields. We apply ALD to neural data from retinal ganglion cells and V1 simple cells, and find it achieves error rates several times lower than standard estimators. Thus, estimates of comparable accuracy can be achieved with substantially less data. Finally, we introduce a computationally efficient Markov Chain Monte Carlo (MCMC) algorithm for fully Bayesian inference under the ALD prior, yielding accurate Bayesian confidence intervals for small or noisy datasets. PMID:22046110
Peng, Zhihang; Bao, Changjun; Zhao, Yang; Yi, Honggang; Xia, Letian; Yu, Hao; Shen, Hongbing; Chen, Feng
2010-01-01
This paper first applies the sequential cluster method to establish a classification standard for infectious disease incidence states, reflecting the many uncertainties in the course of incidence. The paper then presents a weighted Markov chain, a method used to predict the future incidence state. Because infectious disease incidence is an autocorrelated stochastic variable, the method takes the standardized self-correlation coefficients as weights. It also analyzes the characteristics of infectious disease incidence via the Markov chain Monte Carlo method to optimize the long-term benefit of decisions. Our method is successfully validated using existing incidence data for infectious diseases in Jiangsu Province. In summary, this paper proposes ways to improve the accuracy of the weighted Markov chain, specifically in the field of infectious disease epidemiology. PMID:23554632
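A weighted Markov chain of this kind is commonly assembled by combining k-step transition matrices, with weights derived from the autocorrelations of the state series. The sketch below is one generic reading of that scheme, not the authors' implementation; the state sequence and the choice of three lags are hypothetical.

    import numpy as np

    def weighted_markov_predict(states, n_states, max_order=3):
        """Probability distribution of the next state as a weighted sum of
        k-step Markov predictions, k = 1..max_order; weights are the
        normalized absolute autocorrelations of the state series."""
        s = np.asarray(states)
        x = s - s.mean()
        ac = np.array([np.corrcoef(x[:-k], x[k:])[0, 1]
                       for k in range(1, max_order + 1)])
        w = np.abs(ac) / np.abs(ac).sum()
        prob = np.zeros(n_states)
        for k in range(1, max_order + 1):
            T = np.zeros((n_states, n_states))
            for a, b in zip(s[:-k], s[k:]):      # k-step transition counts
                T[a, b] += 1
            T /= np.maximum(T.sum(axis=1, keepdims=True), 1)
            prob += w[k - 1] * T[s[-k]]          # condition on state k steps back
        return prob

    incidence_classes = [0, 1, 1, 2, 1, 0, 1, 2, 2, 1, 0, 1]  # hypothetical
    print(weighted_markov_predict(incidence_classes, n_states=3))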
A functional equation for the specular reflection of rays.
Le Bot, A
2002-10-01
This paper aims to generalize the "radiosity method" when applied to specular reflection. Within the field of thermics, the radiosity method is also called the "standard procedure." The integral equation for incident energy, which is usually derived for diffuse reflection, is replaced by a more appropriate functional equation. The latter is used to solve some specific problems and it is shown that all the classical features of specular reflection, for example, the existence of image sources, are embodied within this equation. This equation can be solved with the ray-tracing technique, despite the implemented mathematics being quite different. Several interesting features of the energy field are presented.
Flotemersch, Joseph E; North, Sheila; Blocksom, Karen A
2014-02-01
Benthic macroinvertebrates are sampled in streams and rivers as one of the assessment elements of the US Environmental Protection Agency's National Rivers and Streams Assessment. A 2006 report recommended that different yet comparable methods be evaluated for different types of streams (e.g., low gradient vs. high gradient). Consequently, a research element was added to the 2008-2009 National Rivers and Streams Assessment to conduct a side-by-side comparison of the standard macroinvertebrate sampling method with an alternate method designed specifically for low-gradient wadeable streams and rivers, focusing more on stream-edge habitat. Samples were collected using each method at 525 sites in five of nine aggregate ecoregions in the conterminous USA. Methods were compared using the benthic macroinvertebrate multimetric index developed for the 2006 Wadeable Streams Assessment. Statistical analysis did not reveal any trends suggesting that the overall assessment of low-gradient streams on a regional or national scale would change if the alternate method were used rather than the standard sampling method, regardless of the gradient cutoff used to define low-gradient streams. Based on these results, the National Rivers and Streams Survey should continue to use the standard field method for sampling all streams.
Jabs, Douglas A; Nussenblatt, Robert B; Rosenbaum, James T
2005-09-01
To begin a process of standardizing the methods for reporting clinical data in the field of uveitis. Consensus workshop. Members of an international working group were surveyed about diagnostic terminology, inflammation grading schema, and outcome measures, and the results were used to develop a series of proposals to better standardize the use of these entities. Small groups employed nominal group techniques to achieve consensus on several of these issues. The group affirmed that an anatomic classification of uveitis should be used as a framework for subsequent work on diagnostic criteria for specific uveitic syndromes, and that the classification of uveitis entities should be based on the location of the inflammation and not on the presence of structural complications. Issues regarding the use of the terms "intermediate uveitis," "pars planitis," "panuveitis," and descriptors of the onset and course of the uveitis were addressed. The following were adopted: standardized grading schemes for anterior chamber cells, anterior chamber flare, and vitreous haze; standardized methods of recording structural complications of uveitis; standardized definitions of outcomes, including "inactive" inflammation, "improvement" and "worsening" of the inflammation, and "corticosteroid sparing"; and standardized guidelines for reporting visual acuity outcomes. A process of standardizing the approach to reporting clinical data in uveitis research has begun, and several terms have been standardized.
49 CFR 192.713 - Transmission lines: Permanent field repair of imperfections and damages.
Code of Federal Regulations, 2012 CFR
2012-10-01
... (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS...; or (2) Repaired by a method that reliable engineering tests and analyses show can permanently restore...
To assess the ambient concentration levels of the six criteria air pollutants regulated by National Ambient Air Quality Standards (NAAQS), the U.S. Environmental Protection Agency (EPA) developed a systematic framework of: (a) field measurements of ambient air pollutant levels ...
Mapping Pesticide Partition Coefficients By Electromagnetic Induction
USDA-ARS?s Scientific Manuscript database
A potential method for reducing pesticide leaching is to base application rates on the leaching potential of a specific chemical and soil combination. However, leaching is determined in part by the partitioning of the chemical between the soil and soil solution, which varies across a field. Standard...
CTEPP STANDARD OPERATING PROCEDURE FOR PROCESSING COMPLETED DATA FORMS (SOP-4.10)
This SOP describes the methods for processing completed data forms. Key components of the SOP include (1) field editing, (2) data form Chain-of-Custody, (3) data processing verification, (4) coding, (5) data entry, (6) programming checks, (7) preparation of data dictionaries, cod...
Fatemeh, Dehghan; Reza, Zolfaghari Mohammad; Mohammad, Arjomandzadegan; Salomeh, Kalantari; Reza, Ahmari Gholam; Hossein, Sarmadian; Maryam, Sadrnia; Azam, Ahmadi; Mana, Shojapoor; Negin, Najarian; Reza, Kasravi Alii; Saeed, Falahat
2014-01-01
Objective: To analyse molecular detection of coliforms and shorten the PCR time. Methods: Rapid detection of coliforms by amplification of the lacZ and uidA genes in a multiplex PCR reaction was designed and performed in comparison with the most probable number (MPN) method for 16 artificial and 101 field samples. The molecular method was also applied to coliforms isolated from positive MPN samples; to a standard sample (certified reference material) for verification of the microbial method; to strains isolated from the certified reference material; and to standard bacteria. The PCR and electrophoresis parameters were adjusted to reduce the operation time. Results: PCR results for the lacZ and uidA genes were similar in all standard, operational, and artificial samples, showing the 876 bp and 147 bp bands of the lacZ and uidA genes by multiplex PCR. PCR results were confirmed by the MPN culture method with a sensitivity of 86% (95% CI: 0.71-0.93). The total execution time, with a successful change of factors, was reduced to less than two and a half hours. Conclusions: The multiplex PCR method with shortened operation time was used for the simultaneous detection of total coliforms and Escherichia coli in the distribution system of Arak city. It is recommended at least as an initial screening test, after which positive samples could be randomly confirmed by MPN. PMID:25182727
Ferrand, Guillaume; Luong, Michel; Cloos, Martijn A; Amadon, Alexis; Wackernagel, Hans
2014-08-01
Transmit arrays have been developed to mitigate the RF field inhomogeneity commonly observed in high-field magnetic resonance imaging (MRI), typically above 3 T. To this end, knowledge of the complex-valued RF B1 transmit sensitivities of each independent radiating element has become essential. This paper details a method to speed up a currently available B1-calibration method. The principle relies on slice undersampling, slice and channel interleaving, and kriging, an interpolation method developed in geostatistics and applicable in many domains. It has been demonstrated that, under certain conditions, kriging gives the best estimator of a field in a region of interest. The resulting accelerated sequence allows a complete set of eight volumetric field maps of the human head to be acquired in about 1 min. For validation, the accuracy of kriging is first evaluated against a well-known interpolation technique based on the Fourier transform, as well as against a B1-map interpolation method presented in the literature. This analysis is carried out on simulated and decimated experimental B1 maps. Finally, the accelerated sequence is compared to the standard sequence on a phantom and a volunteer. The new sequence provides B1 maps three times faster with a loss of accuracy limited to about 5%.
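Kriging itself reduces to solving a small linear system built from a covariance model of the field. The sketch below is a minimal, generic illustration (simple kriging with zero mean and an assumed Gaussian covariance with a hypothetical length scale), not the accelerated sequence's actual estimator.

    import numpy as np

    def kriging_interpolate(x_obs, y_obs, x_new, length=5.0):
        """Simple kriging (zero mean) with a Gaussian covariance model:
        weights = K^-1 k, estimate = weights^T y."""
        x_obs, y_obs, x_new = map(np.asarray, (x_obs, y_obs, x_new))
        cov = lambda a, b: np.exp(-(a[:, None] - b[None, :])**2 / (2 * length**2))
        K = cov(x_obs, x_obs) + 1e-10 * np.eye(len(x_obs))  # jitter for stability
        weights = np.linalg.solve(K, cov(x_obs, x_new))
        return weights.T @ y_obs

    # Hypothetical decimated B1 samples along one axis, interpolated to new points
    x = np.arange(0, 40, 8.0)
    b1 = np.cos(x / 15.0)
    print(kriging_interpolate(x, b1, np.linspace(0, 32, 5)))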
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sleiman, Mohamad; Chen, Sharon; Gilbert, Haley E.
A laboratory method to simulate natural exposure of roofing materials has been reported in a companion article. Here, in the current article, we describe the results of an international, nine-participant interlaboratory study (ILS) conducted in accordance with ASTM Standard E691-09 to establish the precision and reproducibility of this protocol. The accelerated soiling and weathering method was applied four times by each laboratory to replicate coupons of 12 products representing a wide variety of roofing categories (single-ply membrane, factory-applied coating (on metal), bare metal, field-applied coating, asphalt shingle, modified-bitumen cap sheet, clay tile, and concrete tile). Participants reported initial and laboratory-aged values of solar reflectance and thermal emittance. Measured solar reflectances were consistent within and across eight of the nine participating laboratories. Measured thermal emittances reported by six participants exhibited comparable consistency. For solar reflectance, the accelerated aging method is both repeatable and reproducible within an acceptable range of standard deviations: the repeatability standard deviation sr ranged from 0.008 to 0.015 (relative standard deviation of 1.2–2.1%) and the reproducibility standard deviation sR ranged from 0.022 to 0.036 (relative standard deviation of 3.2–5.8%). The ILS confirmed that the accelerated aging method can be reproduced by multiple independent laboratories with acceptable precision. In conclusion, this study supports the adoption of the accelerated aging practice to speed the evaluation and performance rating of new cool roofing materials.
NASA Astrophysics Data System (ADS)
Morales-Delgado, V. F.; Gómez-Aguilar, J. F.; Taneco-Hernandez, M. A.
2017-12-01
In this work we propose fractional differential equations for the motion of a charged particle in electric, magnetic and electromagnetic fields. Exact solutions are obtained for the fractional differential equations by employing the Laplace transform method. The temporal fractional differential equations are considered in the Caputo-Fabrizio-Caputo and Atangana-Baleanu-Caputo sense. Application examples consider constant, ramp and harmonic fields. In addition, we present numerical results for different values of the fractional order. In all cases, when α = 1, we recover the standard electrodynamics.
NASA Technical Reports Server (NTRS)
Schmitt, Jeff G.; Stahnke, Brian
2017-01-01
This report describes test results from an assessment of the acoustically treated 9x15 Foot Low Speed Wind Tunnel at the NASA Glenn Research Center in Cleveland, Ohio, in July 2016. The tests were conducted in accordance with the recently adopted international standard ISO 26101-2012 on qualification of free-field test environments. The method involves moving a microphone relative to a source and comparing the measured sound pressure level versus distance with theoretical inverse-square-law spreading.
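In such a qualification, each microphone traverse is compared with the ideal 6 dB-per-doubling free-field decay. The snippet below shows that comparison in its simplest generic form; the traverse positions and levels are hypothetical, and the tolerance limits of ISO 26101 are not reproduced here.

    import numpy as np

    def free_field_deviation(r, spl):
        """Deviation (dB) of measured SPL-vs-distance data from ideal
        inverse-square-law spreading, referenced to the first position."""
        r, spl = np.asarray(r, float), np.asarray(spl, float)
        ideal = spl[0] - 20 * np.log10(r / r[0])  # -6 dB per doubling of distance
        return spl - ideal

    r = np.array([1.0, 2.0, 4.0, 8.0])            # hypothetical traverse (m)
    spl = np.array([94.0, 88.2, 81.9, 76.3])      # hypothetical levels (dB)
    print(free_field_deviation(r, spl))           # compare to tolerance limits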
Evaluation of Acoustic Doppler Current Profiler measurements of river discharge
Morlock, S.E.
1996-01-01
The standard deviations of the ADCP measurements ranged from approximately 1 to 6 percent and were generally higher than the measurement errors predicted by error-propagation analysis of ADCP instrument performance. These error-prediction methods assume that the largest component of ADCP discharge measurement error is instrument related. The larger standard deviations indicate that substantial portions of measurement error may be attributable to sources unrelated to ADCP electronics or signal processing and are functions of the field environment.
NASA Astrophysics Data System (ADS)
Kowalski, John B.; Herring, Craig; Baryschpolec, Lisa; Reger, John; Patel, Jay; Feeney, Mary; Tallentire, Alan
2002-08-01
The International and European standards for radiation sterilization require evidence of the effectiveness of a minimum sterilization dose of 25 kGy but do not provide detailed guidance on how this evidence can be generated. An approach, designated VD max, has recently been described and computer evaluated to provide safe and unambiguous substantiation of a 25 kGy sterilization dose. The approach has been further developed into a practical method, which has been subjected to field evaluations at three manufacturing facilities which produce different types of medical devices. The three facilities each used a different overall evaluation strategy: Facility A used VD max for quarterly dose audits; Facility B compared VD max and Method 1 in side-by-side parallel experiments; and Facility C, a new facility at start-up, used VD max for initial substantiation of 25 kGy and subsequent quarterly dose audits. A common element at all three facilities was the use of 10 product units for irradiation in the verification dose experiment. The field evaluations of the VD max method were successful at all three facilities; they included many different types of medical devices/product families with a wide range of average bioburden and sample item portion values used in the verification dose experiments. Overall, around 500 verification dose experiments were performed and no failures were observed. In the side-by-side parallel experiments, the outcomes of the VD max experiments were consistent with the outcomes observed with Method 1. The VD max approach has been extended to sterilization doses >25 and <25 kGy; verification doses have been derived for sterilization doses of 15, 20, 30, and 35 kGy. Widespread application of the VD max method for doses other than 25 kGy must await controlled field evaluations and the development of appropriate specifications/standards.
General introduction for the “National Field Manual for the Collection of Water-Quality Data”
2018-02-28
Background: As part of its mission, the U.S. Geological Survey (USGS) collects data to assess the quality of our Nation's water resources. A high degree of reliability and standardization of these data is paramount to fulfilling this mission. Documentation of nationally accepted methods used by USGS personnel serves to maintain consistency and technical quality in data-collection activities. "The National Field Manual for the Collection of Water-Quality Data" (NFM) provides documented guidelines and protocols for USGS field personnel who collect water-quality data. The NFM provides detailed, comprehensive, and citable procedures for monitoring the quality of surface water and groundwater. Topics in the NFM include (1) methods and protocols for sampling water resources, (2) methods for processing samples for analysis of water quality, (3) methods for measuring field parameters, and (4) specialized procedures, such as sampling water for low levels of mercury and organic wastewater chemicals, measuring biological indicators, and sampling bottom sediment for chemistry. Personnel who collect water-quality data for national USGS programs and projects, including projects supported by USGS cooperative programs, are mandated to use protocols provided in the NFM per USGS Office of Water Quality Technical Memorandum 2002.13. Formal training, for example, as provided in the USGS class "Field Water-Quality Methods for Groundwater and Surface Water," and field apprenticeships supplement the guidance provided in the NFM and ensure that the data collected are high quality, accurate, and scientifically defensible.
Template‐based field map prediction for rapid whole brain B0 shimming
Shi, Yuhang; Vannesjo, S. Johanna; Miller, Karla L.
2017-01-01
Purpose: In typical MRI protocols, time is spent acquiring a field map to calculate the shim settings for best image quality. We propose a fast template-based field map prediction method that yields near-optimal shims without measuring the field. Methods: The template-based prediction method uses prior knowledge of the B0 distribution in the human brain, based on a large database of field maps acquired from different subjects, together with subject-specific structural information from a quick localizer scan. The shimming performance of the template-based prediction is evaluated in comparison with a range of potential fast shimming methods. Results: Static B0 shimming based on predicted field maps performed almost as well as shimming based on individually measured field maps. In experimental evaluations at 7 T, the proposed approach yielded a residual field standard deviation in the brain of on average 59 Hz, compared with 50 Hz using measured field maps and 176 Hz using no subject-specific shim. Conclusions: This work demonstrates that shimming based on predicted field maps is feasible. The field map prediction accuracy could potentially be further improved by generating the template from a subset of subjects, based on parameters such as head rotation and body mass index. Magn Reson Med 80:171–180, 2018. PMID:29193340
LI, FENFANG; WILKENS, LYNNE R.; NOVOTNY, RACHEL; FIALKOWSKI, MARIE K.; PAULINO, YVETTE C.; NELSON, RANDALL; BERSAMIN, ANDREA; MARTIN, URSULA; DEENIK, JONATHAN; BOUSHEY, CAROL J.
2016-01-01
Objectives: Anthropometric standardization is essential to obtain reliable and comparable data from different geographical regions. The purpose of this study is to describe anthropometric standardization procedures and findings from the Children's Healthy Living (CHL) Program, a study on childhood obesity in 11 jurisdictions in the US-Affiliated Pacific Region, including Alaska and Hawai‘i. Methods: Zerfas criteria were used to compare the measurement components (height, waist, and weight) between each trainee and a single expert anthropometrist. In addition, intra- and inter-rater technical error of measurement (TEM), coefficient of reliability, and average bias relative to the expert were computed. Results: From September 2012 to December 2014, 79 trainees participated in at least 1 of 29 standardization sessions. A total of 49 trainees passed either standard or alternate Zerfas criteria and were qualified to assess all three measurements in the field. Standard Zerfas criteria were difficult to achieve: only 2 of 79 trainees passed at their first training session. Intra-rater TEM estimates for the 49 trainees compared well with the expert anthropometrist. Average biases were within acceptable limits of deviation from the expert. Coefficient of reliability was above 99% for all three anthropometric components. Conclusions: Standardization based on comparison with a single expert ensured the comparability of measurements from the 49 trainees who passed the criteria. The anthropometric standardization process and protocols followed by CHL resulted in 49 standardized field anthropometrists and have helped build capacity in the health workforce in the Pacific Region. PMID:26457888
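The technical error of measurement used here has a standard closed form: the root of the summed squared differences between paired trials over 2n, often reported alongside its relative (%TEM) and reliability forms. A minimal sketch of those formulas, with hypothetical duplicate height measurements, follows.

    import numpy as np

    def tem_statistics(m1, m2):
        """Technical error of measurement for paired trials:
        TEM = sqrt(sum(d^2) / 2n), plus %TEM and the coefficient of
        reliability R = 1 - TEM^2 / total variance."""
        m1, m2 = np.asarray(m1, float), np.asarray(m2, float)
        d = m1 - m2
        tem = np.sqrt(np.sum(d**2) / (2 * len(d)))
        pooled = np.concatenate([m1, m2])
        rel_tem = 100 * tem / pooled.mean()
        R = 1 - tem**2 / pooled.var(ddof=1)
        return tem, rel_tem, R

    trainee = [142.1, 150.3, 137.8, 160.2]   # hypothetical heights, cm
    expert  = [142.4, 150.1, 138.0, 160.6]
    print(tem_statistics(trainee, expert))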
The Use of Terrestrial Laser Scanning for Determining the Driver’s Field of Vision
Zemánek, Tomáš; Cibulka, Miloš; Skoupil, Jaromír
2017-01-01
Terrestrial laser scanning (TLS) is currently one of the most rapidly developing methods for obtaining information about objects and phenomena. This paper assesses the potential of TLS for determining the driver's field of vision when operating agricultural and forest machines with movable and immovable components, in comparison with the method of using two point light sources to create shadow images according to ISO (International Organization for Standardization) 5721-1. Using the TLS method represents a minimum time saving of 55% or more, depending on project complexity. The shading values ascertained with the point-light-source shadow-casting method are generally overestimated, and the distortion is greater for small cabin structural components. The disadvantage of the TLS method is the scanner's sensitivity to a soiled or scratched cabin windscreen and to glass transparency impaired by heavy tinting. PMID:28902177
Full-Field Strain Methods for Investigating Failure Mechanisms in Triaxial Braided Composites
NASA Technical Reports Server (NTRS)
Littell, Justin D.; Binienda, Wieslaw K.; Goldberg, Robert K.; Roberts, Gary D.
2008-01-01
Recent advancements in braiding technology have led to commercially viable manufacturing approaches for making large structures with complex shape out of triaxial braided composite materials. In some cases, the static load capability of structures made using these materials has been higher than expected based on material strength properties measured using standard coupon tests. A more detailed investigation of deformation and failure processes in large-unit-cell-size triaxial braid composites is needed to evaluate the applicability of standard test methods for these materials and to develop alternative testing approaches. This report presents some new techniques that have been developed to investigate local deformation and failure using digital image correlation techniques. The methods were used to measure both local and global strains during standard straight-sided coupon tensile tests on composite materials made with 12- and 24-k yarns and a 0°/+60°/−60° triaxial braid architecture. Local deformation and failure within fiber bundles were observed and correlations were made between these local failures and global composite deformation and strength.
NASA Astrophysics Data System (ADS)
Gosses, Moritz; Nowak, Wolfgang; Wöhling, Thomas
2018-05-01
In recent years, proper orthogonal decomposition (POD) has become a popular model reduction method in the field of groundwater modeling. It is used to mitigate the problem of long run times that are often associated with physically based modeling of natural systems, especially for parameter estimation and uncertainty analysis. POD-based techniques reproduce groundwater head fields sufficiently accurately for a variety of applications. However, no study has investigated how POD techniques affect the accuracy of different boundary conditions found in groundwater models. We show that the current treatment of boundary conditions in POD causes inaccuracies for these boundaries in the reduced models. We provide an improved method that splits the POD projection space into a subspace orthogonal to the boundary conditions and a separate subspace that enforces the boundary conditions. To test the method for Dirichlet, Neumann, and Cauchy boundary conditions, four simple transient 1D groundwater models, as well as a more complex 3D model, are set up and reduced both by standard POD and by POD with the new extension. We show that, in contrast to standard POD, the new method satisfies both Dirichlet and Neumann boundary conditions. It can also be applied to Cauchy boundaries, where the flux error of standard POD is reduced by its head-independent contribution. The extension essentially shifts the focus of the projection towards the boundary conditions. Therefore, we see a slight trade-off between errors at model boundaries and the overall accuracy of the reduced model. The proposed POD extension is recommended where exact treatment of boundary conditions is required.
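For readers unfamiliar with POD, the reduced basis is usually obtained from the SVD of a snapshot matrix and the model is projected onto the leading modes; the extension above then splits this projection space into a boundary-enforcing subspace and its orthogonal complement. The sketch below shows only the standard snapshot-POD step on synthetic data; the model size and the 99.9% energy cutoff are hypothetical.

    import numpy as np

    # Snapshot matrix: each column is a head field at one time step
    rng = np.random.default_rng(1)
    snapshots = rng.standard_normal((1000, 60))   # hypothetical 1000-node model

    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 0.999)) + 1   # modes keeping 99.9% energy
    basis = U[:, :r]                              # POD projection basis

    # Reduced coordinates and reconstruction of one snapshot
    a = basis.T @ snapshots[:, 0]
    h_approx = basis @ a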
A simple calculation method for determination of equivalent square field
Shafiei, Seyed Ali; Hasanzadeh, Hadi; Shafiei, Seyed Ahmad
2012-01-01
Determination of the equivalent square fields for rectangular and shielded fields is of great importance in radiotherapy centers and treatment planning software; it is commonly accomplished using standard tables and empirical formulas. The goal of this paper is to present a formula for obtaining the equivalent field, based on an analysis of scatter reduction according to the inverse square law. Tables published by agencies such as the ICRU (International Commission on Radiation Units and Measurements) are based on experimental data, but mathematical formulas that yield the equivalent square of an irregular rectangular field are used extensively in computational dose determination. These processes lead to complicated and time-consuming formulas, which motivated the current study. In this work, considering the contribution of scattered radiation to the absorbed dose at the measurement point, a numerical formula was obtained, from which a simple formula for the equivalent square field was developed. Using polar coordinates and the inverse square law leads to a simple formula for calculating the equivalent field. The presented method is an analytical approach by which one can estimate the equivalent square of a rectangular field, and it may be applied to shielded fields or off-axis points. Moreover, one can calculate the equivalent field of a rectangular field with the concept of scatter reduction under the inverse square law to a good approximation. This method may be useful in computing Percentage Depth Dose and Tissue-Phantom Ratio, which are extensively used in treatment planning. PMID:22557801
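For comparison, the most common rule of thumb in this area (not the authors' scatter-based formula) equates fields by the area-to-perimeter ratio, s = 4A/P, which for an open rectangle reduces to s = 2ab/(a+b). A one-line sketch:

    def equivalent_square_side(a, b):
        """4A/P rule for the equivalent square of an open a x b field (cm)."""
        return 4 * (a * b) / (2 * (a + b))   # = 2ab/(a+b)

    print(equivalent_square_side(10, 20))    # 13.3 cm, vs 14.1 cm for equal area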
NASA Astrophysics Data System (ADS)
Roger-Estrade, Jean; Boizard, Hubert; Peigné, Josephine; Sasal, Maria Carolina; Guimaraes, Rachel; Piron, Denis; Tomis, Vincent; Vian, Jean-François; Cadoux, Stephane; Ralisch, Ricardo; Filho, Tavares; Heddadj, Djilali; de Battista, Juan; Duparque, Annie
2016-04-01
In France, agronomists have studied the effects of cropping systems on soil structure using a field method based on a visual description of soil structure. The "profil cultural" method (Manichon and Gautronneau, 1987) was designed to provide a field diagnostic of the effects of tillage and compaction on soil structure dynamics. This method is of great use to agronomists improving crop management for better preservation of soil structure. However, it was developed and mainly used in conventional tillage systems with ploughing. As several forms of reduced, minimum, and no-tillage systems are expanding in many parts of the world, it is necessary to re-evaluate the ability of this method to describe and interpret soil macrostructure in unploughed situations. In unploughed fields, the structure dynamics of untilled layers is mainly driven by compaction and by regeneration through natural agents (climatic conditions, root growth, and macrofauna), and it is essential to evaluate the contribution of these natural processes to soil structure regeneration. These concerns led us to adapt the standard method and to propose amendments based on a series of field observations and experimental work across different cropping systems, soil types, and climatic conditions. We improved the description of crack type and introduced an index of biological activity based on the visual examination of clods. To test the improved method, a comparison with the reference method was carried out, and the ability of the "profil cultural" method to support a diagnosis was tested in five experiments in France, Brazil, and Argentina. Using the improved method, the impact of cropping systems on soil functioning was better assessed when natural processes were integrated into the description.
Incorporating Geoscience, Field Data Collection Workflows into Software Developed for Mobile Devices
NASA Astrophysics Data System (ADS)
Vieira, D. A.; Mookerjee, M.; Matsa, S.
2014-12-01
Modern geological sciences depend heavily on investigating the natural world in situ, i.e., within "the field," as well as on managing data collections in light of evolving advances in technology and cyberinfrastructure. To accelerate the rate of scientific discovery, we need to expedite data collection and management in a way that does not interfere with the typical geoscience field workflow. To this end, we suggest replacing traditional analog methods of data collection, such as the standard field notebook and compass, with primary digital data collection applications. While some field data collection apps exist for both the iOS and Android operating systems, they do not communicate with each other in an organized data collection effort. We propose the development of a mobile app that coordinates the collection of GPS, photographic, and orientation data, along with field observations. Additionally, this application should be able to pair with other devices in order to incorporate other sensor data. In this way, the app can generate a single file that includes all field data elements and can be synced to the appropriate database with ease and efficiency. We present here a prototype application that attempts to illustrate how digital collection can be integrated into a "typical" geoscience field workflow. The purpose of our app is to get field scientists to think about specific requirements for the development of a unified field data collection application. One fundamental step in the development of such an app is the community-based, decision-making process of adopting certain data/metadata standards and conventions. In August 2014, on a four-day field trip to Yosemite National Park and Owens Valley, we engaged a group of field-based geologists and computer/cognitive scientists to start building a community consensus on these cyberinfrastructure-related issues. Discussing the problems of field data recording, conventions, storage, representation, standardization, documentation, and management while in the field creates a unique opportunity to address critical issues for advancing cyberinfrastructure in the field-based geosciences, while facilitating the combination of our datasets with those of other geoscience subdisciplines.
Variational symplectic algorithm for guiding center dynamics in the inner magnetosphere
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li Jinxing; Pu Zuyin; Xie Lun
Charged-particle dynamics in the magnetosphere is multiscale in both time and space; therefore, numerical accuracy over long integration times is required. A variational symplectic integrator (VSI) [H. Qin and X. Guan, Phys. Rev. Lett. 100, 035006 (2008) and H. Qin, X. Guan, and W. M. Tang, Phys. Plasmas 16, 042510 (2009)] for the guiding-center motion of charged particles in a general magnetic field is applied to study the dynamics of charged particles in the magnetosphere. Instead of discretizing the differential equations of the guiding-center motion, the action of the guiding-center motion is discretized and minimized to obtain the iteration rules for advancing the dynamics. The VSI conserves exactly a discrete Lagrangian symplectic structure and has better numerical properties over long integration times than standard integrators, such as the standard and adaptive fourth-order Runge-Kutta (RK4) methods. Applying the VSI method to guiding-center dynamics in the inner magnetosphere, we can accurately calculate the particles' orbits for arbitrarily long simulation times with good conservation properties. When a time-independent convection and corotation electric field is considered, the VSI method gives the accurate single-particle orbit, while the RK4 method gives an incorrect orbit due to its intrinsic error accumulation over a long integration time.
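The qualitative difference between symplectic and Runge-Kutta integration is easy to reproduce on a toy problem. The sketch below (a plain harmonic oscillator rather than guiding-center motion, with an arbitrary step size and duration) shows the bounded energy error of a symplectic method against the secular drift of RK4.

    import numpy as np

    def symplectic_euler(q, p, dt, steps, w=1.0):
        """Semi-implicit Euler: preserves a discrete symplectic structure,
        so the energy error stays bounded for arbitrarily long runs."""
        for _ in range(steps):
            p -= dt * w**2 * q   # kick
            q += dt * p          # drift
        return q, p

    def rk4(q, p, dt, steps, w=1.0):
        """Classical RK4: smaller local error, but energy drifts secularly."""
        f = lambda y: np.array([y[1], -w**2 * y[0]])
        y = np.array([q, p], float)
        for _ in range(steps):
            k1 = f(y); k2 = f(y + dt/2*k1); k3 = f(y + dt/2*k2); k4 = f(y + dt*k3)
            y += dt/6 * (k1 + 2*k2 + 2*k3 + k4)
        return y[0], y[1]

    E = lambda q, p: 0.5 * (p**2 + q**2)                 # exact energy is 0.5
    print(E(*symplectic_euler(1.0, 0.0, 0.3, 100_000)))  # bounded near 0.5
    print(E(*rk4(1.0, 0.0, 0.3, 100_000)))               # noticeably decayed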
Aeroacoustic directivity via wave-packet analysis of mean or base flows
NASA Astrophysics Data System (ADS)
Edstrand, Adam; Schmid, Peter; Cattafesta, Louis
2017-11-01
Noise pollution is an ever-increasing problem in society, and knowledge of the directivity patterns of the sound radiation is required for prediction and control. Directivity is frequently determined through costly numerical simulations of the flow field combined with an acoustic analogy. We introduce a new computationally efficient method of finding directivity for a given mean or base flow field using wave-packet analysis (Trefethen, PRSA 2005). Wave-packet analysis approximates the eigenvalue spectrum with spectral accuracy by modeling the eigenfunctions as wave packets. With the wave packets determined, we then follow the method of Obrist (JFM, 2009), which uses Lighthill's acoustic analogy to determine the far-field sound radiation and directivity of wave-packet modes. We apply this method to a canonical jet flow (Gudmundsson and Colonius, JFM 2011) and determine the directivity of potentially unstable wave packets. Furthermore, we generalize the method to consider a three-dimensional flow field of a trailing vortex wake. In summary, we approximate the disturbances as wave packets and extract the directivity from the wave-packet approximation in a fraction of the time of standard aeroacoustic solvers. ONR Grant N00014-15-1-2403.
Bekiroglu, Somer; Myrberg, Olle; Ostman, Kristina; Ek, Marianne; Arvidsson, Torbjörn; Rundlöf, Torgny; Hakkarainen, Birgit
2008-08-05
A 1H-nuclear magnetic resonance (NMR) spectroscopy method for the quantitative determination of benzethonium chloride (BTC) as a constituent of grapefruit seed extract was developed. The method was validated by assessing its specificity, linearity, range, and precision, as well as accuracy, limit of quantification, and robustness. The method includes quantification using an internal reference standard, 1,3,5-trimethoxybenzene, and is regarded as simple, rapid, and easy to implement. A commercial grapefruit seed extract was studied, and the experiments were performed on spectrometers operating at two different fields, 300 and 600 MHz proton frequencies, the former with a broadband (BB) probe and the latter equipped with both a BB probe and a CryoProbe. The average concentration for the product sample was 78.0, 77.8, and 78.4 mg/ml using the 300 MHz BB probe, the 600 MHz BB probe, and the CryoProbe, respectively. The standard deviations and relative standard deviations (R.S.D., in parentheses) for the average concentrations were 0.2 mg/ml (0.3%), 0.3 mg/ml (0.4%), and 0.3 mg/ml (0.4%), respectively.
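Internal-standard qNMR of this kind rests on one ratio equation: the analyte mass follows from the integral ratio scaled by proton counts and molar masses. The sketch below encodes that standard relation; the integrals, standard mass, purity, and volume are hypothetical, and the molar masses are nominal values for BTC and 1,3,5-trimethoxybenzene.

    def qnmr_mg_per_ml(I_a, I_s, N_a, N_s, M_a, M_s, m_s_mg, purity_s, V_ml):
        """Analyte concentration from the internal-standard qNMR relation:
        m_a = m_s * P_s * (I_a/I_s) * (N_s/N_a) * (M_a/M_s)."""
        m_a = m_s_mg * purity_s * (I_a / I_s) * (N_s / N_a) * (M_a / M_s)
        return m_a / V_ml

    # Hypothetical integrals and sample preparation; nominal molar masses
    print(qnmr_mg_per_ml(I_a=1.20, I_s=1.00, N_a=2, N_s=3,
                         M_a=448.1, M_s=168.2, m_s_mg=250.0,
                         purity_s=0.999, V_ml=10.0))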
NASA Astrophysics Data System (ADS)
Wurdiyanto, G.; Candra, H.
2016-03-01
The standardization of radioactive sources (125I, 131I, 99mTc, and 18F) to calibrate nuclear medicine equipment has been carried out at PTKMR-BATAN. This is necessary because the radioactive sources used in nuclear medicine have very short half-lives, so special treatment is required to obtain quality measurement results; moreover, the use of nuclear medicine techniques in Indonesia is developing rapidly. All the radioactive sources were prepared by gravimetric methods. Standardization of 125I was carried out by photon-photon coincidence methods, while the others were standardized by gamma spectrometry methods. The standard sources were used to calibrate a Capintec CRC-7BT radionuclide calibrator. The results show that the calibration factors for the Capintec CRC-7BT dose calibrator are 1.03, 1.02, 1.06, and 1.04 for 125I, 131I, 99mTc, and 18F, respectively, with expanded uncertainties of about 5 to 6%.
Near-infrared fluorescence image quality test methods for standardized performance evaluation
NASA Astrophysics Data System (ADS)
Kanniyappan, Udayakumar; Wang, Bohan; Yang, Charles; Ghassemi, Pejhman; Wang, Quanzeng; Chen, Yu; Pfefer, Joshua
2017-03-01
Near-infrared fluorescence (NIRF) imaging has gained much attention as a clinical method for enhancing visualization of cancers, perfusion and biological structures in surgical applications where a fluorescent dye is monitored by an imaging system. In order to address the emerging need for standardization of this innovative technology, it is necessary to develop and validate test methods suitable for objective, quantitative assessment of device performance. Towards this goal, we develop target-based test methods and investigate best practices for key NIRF imaging system performance characteristics including spatial resolution, depth of field and sensitivity. Characterization of fluorescence properties was performed by generating excitation-emission matrix properties of indocyanine green and quantum dots in biological solutions and matrix materials. A turbid, fluorophore-doped target was used, along with a resolution target for assessing image sharpness. Multi-well plates filled with either liquid or solid targets were generated to explore best practices for evaluating detection sensitivity. Overall, our results demonstrate the utility of objective, quantitative, target-based testing approaches as well as the need to consider a wide range of factors in establishing standardized approaches for NIRF imaging system performance.
Ideal flux field dielectric concentrators.
García-Botella, Angel
2011-10-01
The concept of the vector flux field was first introduced as a photometric theory and later developed in the field of nonimaging optics; it has provided new perspectives in the design of concentrators, going beyond standard ray-tracing techniques. The flux field method has shown that reflective concentrators with the geometry of the field lines achieve the theoretical limit of concentration. In this paper we study the role of surfaces orthogonal to the field vector J. For rotationally symmetric systems, J is orthogonal to its curl, and so a family of surfaces orthogonal to the lines of J exists, which can be called the family of surfaces of constant pseudopotential. Using the concept of the flux tube, it is possible to demonstrate that refractive concentrators with the shape of these pseudopotential surfaces achieve the theoretical limit of concentration.
NASA Astrophysics Data System (ADS)
Bauer, Sebastian; Mathias, Gerald; Tavan, Paul
2014-03-01
We present a reaction field (RF) method which accurately solves the Poisson equation for proteins embedded in dielectric solvent continua at a computational effort comparable to that of an electrostatics calculation with polarizable molecular mechanics (MM) force fields. The method combines an approach originally suggested by Egwolf and Tavan [J. Chem. Phys. 118, 2039 (2003)] with concepts generalizing the Born solution [Z. Phys. 1, 45 (1920)] for a solvated ion. First, we derive an exact representation according to which the sources of the RF potential and energy are inducible atomic anti-polarization densities and atomic shielding charge distributions. Modeling these atomic densities by Gaussians leads to an approximate representation. Here, the strengths of the Gaussian shielding charge distributions are directly given in terms of the static partial charges as defined, e.g., by standard MM force fields for the various atom types, whereas the strengths of the Gaussian anti-polarization densities are calculated by a self-consistency iteration. The atomic volumes are also described by Gaussians. To account for covalently overlapping atoms, their effective volumes are calculated by another self-consistency procedure, which guarantees that the dielectric function ɛ(r) is close to one everywhere inside the protein. The Gaussian widths σi of the atoms i are parameters of the RF approximation. The remarkable accuracy of the method is demonstrated by comparison with Kirkwood's analytical solution for a spherical protein [J. Chem. Phys. 2, 351 (1934)] and with computationally expensive grid-based numerical solutions for simple model systems in dielectric continua including a di-peptide (Ac-Ala-NHMe) as modeled by a standard MM force field. The latter example shows how weakly the RF conformational free energy landscape depends on the parameters σi. A summarizing discussion highlights the achievements of the new theory and of its approximate solution particularly by comparison with so-called generalized Born methods. A follow-up paper describes how the method enables Hamiltonian, efficient, and accurate MM molecular dynamics simulations of proteins in dielectric solvent continua.
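The Born solution that anchors this construction has a compact closed form, ΔG = −(q²/8πε₀R)(1 − 1/ε), which is easy to evaluate directly. The sketch below does only that textbook calculation; the ionic radius is a hypothetical cavity value, and this is not the paper's Gaussian reaction field scheme.

    import numpy as np

    EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
    N_A = 6.02214076e23       # Avogadro constant, 1/mol
    E_CH = 1.602176634e-19    # elementary charge, C

    def born_energy_kj_mol(z, radius_nm, eps=80.0):
        """Born solvation free energy of an ion of charge z*e in a cavity of
        radius R inside a continuum of relative permittivity eps."""
        R = radius_nm * 1e-9
        dG = -(z * E_CH)**2 / (8 * np.pi * EPS0 * R) * (1 - 1 / eps)
        return dG * N_A / 1000.0   # J -> kJ/mol

    print(born_energy_kj_mol(1, 0.2))   # roughly -340 kJ/mol for a 0.2 nm cavity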
Cloud Computing with Context Cameras
NASA Astrophysics Data System (ADS)
Pickles, A. J.; Rosing, W. E.
2016-05-01
We summarize methods and plans to monitor and calibrate photometric observations with our autonomous, robotic network of 2m, 1m and 40cm telescopes. These are sited globally to optimize our ability to observe time-variable sources. Wide-field "context" cameras are aligned with our network telescopes and cycle every ~2 minutes through BVr'i'z' filters, spanning our optical range. We measure instantaneous zero-point offsets and transparency (throughput) against calibrators in the 5-12m range from the all-sky Tycho2 catalog, and periodically against primary standards. Similar measurements are made for all our science images, with typical fields of view of ~0.5 degrees. These are matched against Landolt, Stetson and Sloan standards, and against calibrators in the 10-17m range from the all-sky APASS catalog. Such measurements provide good instantaneous flux calibration, often to better than 5%, even in cloudy conditions. Zero-point and transparency measurements can be used to characterize, monitor and inter-compare sites and equipment. When accurate calibrations of Target against Standard fields are required, monitoring measurements can be used to select truly photometric periods when accurate calibrations can be automatically scheduled and performed.
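A per-image zero point and a cloud-loss (transparency) estimate follow directly from matched catalog stars. The sketch below shows one generic way to compute both; the catalog magnitudes, instrumental fluxes, and exposure time are hypothetical, and this is not the network's actual pipeline.

    import numpy as np

    def zero_point(instr_flux, exptime, catalog_mag):
        """Median zero point from matched calibrators:
        ZP = catalog magnitude - instrumental magnitude."""
        instr_mag = -2.5 * np.log10(np.asarray(instr_flux) / exptime)
        return np.median(np.asarray(catalog_mag) - instr_mag)

    def transparency(zp_now, zp_best):
        """Throughput relative to the best (most transparent) image of the field."""
        return 10 ** (-0.4 * (zp_best - zp_now))

    flux = [84000.0, 21500.0, 5300.0]     # hypothetical counts for three stars
    mags = [11.0, 12.5, 14.0]             # their catalog magnitudes
    zp = zero_point(flux, exptime=60.0, catalog_mag=mags)
    print(zp, transparency(zp, zp_best=zp + 0.3))   # ~24% light lost to cloud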
Extension of the general thermal field equation for nanosized emitters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kyritsakis, A., E-mail: akyritsos1@gmail.com; Xanthakis, J. P.
2016-01-28
During the previous decade, Jensen et al. developed a general analytical model that successfully describes electron emission from metals in both the field and thermionic regimes, as well as in the transition region. In that development, the standard image-corrected triangular potential barrier was used. This barrier model is valid only for planar surfaces and therefore cannot be used in general for modern nanometric emitters. In a recent publication, the authors showed that the standard Fowler-Nordheim theory can be generalized for highly curved emitters if a quadratic term is included in the potential model. In this paper, we extend this generalization to high temperatures and include both the thermal and intermediate regimes. This is achieved by applying the general method developed by Jensen to the quadratic barrier model of our previous publication. We obtain results that are in good agreement with fully numerical calculations for radii R > 4 nm, while our calculated current density differs by a factor of up to 27 from the one predicted by Jensen's standard General-Thermal-Field (GTF) equation. Our extended GTF equation has application to modern sharp electron sources, beam simulation models, and vacuum breakdown theory.
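For reference, the elementary planar Fowler-Nordheim expression that the curvature-corrected theory generalizes can be sketched as follows; this deliberately omits the image-correction special functions and the quadratic barrier term that are the subject of the paper:

```python
import math

def fowler_nordheim_J(F, phi):
    """Elementary (uncorrected) Fowler-Nordheim current density.

    F: surface field in V/m; phi: work function in eV. Returns J in A/m^2.
    Image-charge and curvature corrections are deliberately omitted."""
    a = 1.541434e-6   # A eV V^-2
    b = 6.830890e9    # eV^-3/2 V m^-1
    return (a * F ** 2 / phi) * math.exp(-b * phi ** 1.5 / F)

# Example: tungsten-like work function (4.5 eV), 5 V/nm applied field
print(f"J = {fowler_nordheim_J(5e9, 4.5):.2e} A/m^2")
```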
Peterson, Sean M.; Streby, Henry M.; Lehman, Justin A.; Kramer, Gunnar R.; Fish, Alexander C.; Andersen, David E.
2015-01-01
We compared the efficacy of standard nest-searching methods with finding nests via radio-tagged birds to assess how search technique influenced our determination of nest-site characteristics and nest success for Golden-winged Warblers (Vermivora chrysoptera). We also evaluated the cost-effectiveness of using radio-tagged birds to find nests. Using standard nest-searching techniques for 3 populations, we found 111 nests in locations with habitat characteristics similar to those described in previous studies: edges between forest and relatively open areas of early successional vegetation or shrubby wetlands, with 43% within 5 m of forest edge. The 83 nests found using telemetry were about half as likely (23%) to be within 5 m of forest edge. We spent little time searching >25 m into forest because published reports state that Golden-winged Warblers do not nest there. However, 14 nests found using telemetry (18%) were >25 m into forest. We modeled nest success using nest-searching method, nest age, and distance to forest edge as explanatory variables. Nest-searching method explained nest success better than nest age alone; we estimated that nests found using telemetry were 10% more likely to fledge young than nests found using standard nest-searching methods. Although radio-telemetry was more expensive than standard nest searching, the cost-effectiveness of both methods differed depending on searcher experience, amount of equipment owned, and bird population density. Our results demonstrate that telemetry can be an effective method for reducing bias in Golden-winged Warbler nest samples, can be cost competitive with standard nest-searching methods in some situations, and is likely to be a useful approach for finding nests of other forest-nesting songbirds.
Method for determining the hardness of strain hardening articles of tungsten-nickel-iron alloy
Wallace, Steven A.
1984-01-01
The present invention is directed to a rapid nondestructive method for determining the extent of strain hardening in an article of tungsten-nickel-iron alloy. The method comprises saturating the article with a magnetic field from a permanent magnet, measuring the magnetic flux emanating from the article, and comparing the measurements with measured magnetic fluxes from similarly shaped standards of the alloy with known amounts of strain hardening to determine the hardness.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, J; Liu, X
2016-06-15
Purpose: To perform a quantitative study to verify that the mechanical field center coincides with the radiation field center when both are off from the isocenter during the single-isocenter technique in linear accelerator-based SRS/SBRT procedures to treat multiple lesions. Methods: We developed an innovative method to measure this accuracy, called the off-isocenter Winston-Lutz test, and here we provide a practical clinical guideline to implement this technique. We used ImagePro V.6 to analyze images of a Winston-Lutz phantom obtained using a Varian 21EX linear accelerator with an electronic portal imaging device, set up as for single-isocenter SRS/SBRT for multiple lesions. We investigated asymmetry field centers that were 3 cm and 5 cm away from the isocenter, as well as performing the standard Winston-Lutz test. We used a special beam configuration to acquire images while avoiding collision, and we investigated both jaw and multileaf collimation. Results: For the jaw collimator setting, at 3 cm off-isocenter, the mechanical field deviated from the radiation field by about 2.5 mm; at 5 cm, the deviation was above 3 mm, up to 4.27 mm. For the multileaf collimator setting, at 3 cm off-isocenter, the deviation was below 1 mm; at 5 cm, the deviation was above 1 mm, up to 1.72 mm, which is 72% higher than the tolerance threshold. Conclusion: These results indicated that the further the asymmetry field center is from the machine isocenter, the larger the deviation of the mechanical field from the radiation field; in our clinic, the distance between the center of the asymmetry field and the isocenter may not exceed 3 cm. We recommend that every clinic that uses linear accelerator, multileaf collimator-based SRS/SBRT perform the off-isocenter Winston-Lutz test in addition to the standard Winston-Lutz test and use its own deviation data to design the treatment plan.
SU-F-T-472: Validation of Absolute Dose Measurements for MR-IGRT With and Without Magnetic Field
DOE Office of Scientific and Technical Information (OSTI.GOV)
Green, O; Li, H; Goddu, S
Purpose: To validate absolute dose measurements for a MR-IGRT system without the presence of the magnetic field. Methods: The standard method (AAPM's TG-51) of absolute dose measurement with ionization chambers was tested with and without the presence of the magnetic field for a clinical 0.32-T Co-60 MR-IGRT system. Two ionization chambers were used: the Standard Imaging (Madison, WI) A18 (0.123 cc) and a PTW (Freiburg, Germany) Farmer-type chamber. A previously reported Monte Carlo simulation suggested a difference on the order of 0.5% for dose measured with and without the presence of the magnetic field, but testing this was not possible until an engineering solution allowing the radiation system to be used without the nominal magnetic field was found. A previously identified effect of orientation in the magnetic field was also tested by placing the chamber either parallel or perpendicular to the field and irradiating from two opposing angles (90 and 270 degrees). Finally, the Imaging and Radiation Oncology Core (IROC) provided OSLD detectors for five irradiations each with and without the field: with two heads at both 0 and 90 degrees, and one head at 90 degrees only, as it does not reach 0 (IEC convention). Results: For the TG-51 comparison, expected dose was obtained by decaying values measured at the time of source installation. The average measured difference was 0.4%±0.12% for the A18 and 0.06%±0.15% for the Farmer chamber. There was minimal (0.3%) orientation dependence without the magnetic field for the A18 chamber, while previous measurements with the magnetic field had shown a deviation of 3.2% with the chamber perpendicular to the magnetic field. Results reported by IROC for the OSLDs with and without the field had a maximum difference of 2%. Conclusion: Accurate absolute dosimetry was verified by measurement under the same conditions with and without the magnetic field for both ionization chambers and independently verifiable OSLDs.
Adverse drug event reporting systems: a systematic review
Peddie, David; Wickham, Maeve E.; Badke, Katherin; Small, Serena S.; Doyle‐Waters, Mary M.; Balka, Ellen; Hohl, Corinne M.
2016-01-01
Aim Adverse drug events (ADEs) are harmful and unintended consequences of medications. Their reporting is essential for drug safety monitoring and research, but it has not been standardized internationally. Our aim was to synthesize information about the type and variety of data collected within ADE reporting systems. Methods We developed a systematic search strategy, applied it to four electronic databases, and completed an electronic grey literature search. Two authors reviewed titles and abstracts, and all eligible full‐texts. We extracted data using a standardized form, and discussed disagreements until reaching consensus. We synthesized data by collapsing data elements, eliminating duplicate fields and identifying relationships between reporting concepts and data fields using visual analysis software. Results We identified 108 ADE reporting systems containing 1782 unique data fields. We mapped them to 33 reporting concepts describing patient information, the ADE, concomitant and suspect drugs, and the reporter. While reporting concepts were fairly consistent, we found variability in data fields and corresponding response options. Few systems clarified the terminology used, and many used multiple drug and disease dictionaries such as the Medical Dictionary for Regulatory Activities (MedDRA). Conclusion We found substantial variability in the data fields used to report ADEs, limiting the comparability of ADE data collected using different reporting systems, and undermining efforts to aggregate data across cohorts. The development of a common standardized data set that can be evaluated with regard to data quality, comparability and reporting rates is likely to optimize ADE data and drug safety surveillance. PMID:27016266
Integrative Bioengineering Institute
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eddington, David; Magin,L,Richard; Hetling, John
2009-01-09
Microfabrication enables many exciting experimental possibilities for medicine and biology that are not attainable through traditional methods. However, in order for microfabricated devices to have an impact they must not only provide a robust solution to a current unmet need, but also be simple enough to seamlessly integrate into standard protocols. Broad dissemination of bioMEMS has been stymied by the common aim of replacing established and well accepted protocols with equally or more complex devices, methods, or materials. The marriage of a complex, difficult to fabricate bioMEMS device with a highly variable biological system is rarely successful. Instead, the design philosophy of my lab aims to leverage a beneficial microscale phenomenon (e.g. fast diffusion at the microscale) within a bioMEMS device, adapt to established methods (e.g. multiwell plate cell culture), and demonstrate a new paradigm for the field (adapt instead of replace). In order for the field of bioMEMS to mature beyond novel proof-of-concept demonstrations, researchers must focus on developing systems leveraging these phenomena and integrating into standard labs, which have largely been ignored. Towards this aim, the Integrative Bioengineering Institute has been established.
Hame, Yrjo; Angelini, Elsa D; Hoffman, Eric A; Barr, R Graham; Laine, Andrew F
2014-07-01
The extent of pulmonary emphysema is commonly estimated from CT scans by computing the proportional area of voxels below a predefined attenuation threshold. However, the reliability of this approach is limited by several factors that affect the CT intensity distributions in the lung. This work presents a novel method for emphysema quantification, based on parametric modeling of intensity distributions and a hidden Markov measure field model to segment emphysematous regions. The framework adapts to the characteristics of an image to ensure a robust quantification of emphysema under varying CT imaging protocols and differences in parenchymal intensity distributions due to factors such as inspiration level. Compared to standard approaches, the presented model involves a larger number of parameters, most of which can be estimated from data, to handle the variability encountered in lung CT scans. The method was applied to a longitudinal data set with 87 subjects and a total of 365 scans acquired with varying imaging protocols. The resulting emphysema estimates had very high intra-subject correlation values. By reducing sensitivity to changes in imaging protocol, the method provides a more robust estimate than standard approaches. The generated emphysema delineations promise advantages for regional analysis of emphysema extent and progression.
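For context, the standard threshold-based index that the hidden Markov measure field model improves upon is straightforward to state; a minimal sketch with synthetic data (the -950 HU threshold is a common convention, not necessarily the one used in the paper):

```python
import numpy as np

def percent_emphysema(hu_volume, lung_mask, threshold=-950):
    """Standard densitometric emphysema index: proportional volume of lung
    voxels below an attenuation threshold (e.g., -950 HU). This is the
    baseline approach the measure-field segmentation replaces."""
    lung_voxels = hu_volume[lung_mask]
    return 100.0 * np.mean(lung_voxels < threshold)

# Toy example: synthetic parenchymal intensities in HU
rng = np.random.default_rng(0)
ct = rng.normal(-870, 60, size=(64, 64, 64))
mask = np.ones(ct.shape, dtype=bool)
print(f"%LAA-950: {percent_emphysema(ct, mask):.1f}%")
```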
Efficiency of snake sampling methods in the Brazilian semiarid region.
Mesquita, Paula C M D; Passos, Daniel C; Cechin, Sonia Z
2013-09-01
The choice of sampling methods is a crucial step in every field survey in herpetology. In countries where time and financial support are limited, the choice of methods is critical. The methods used to sample snakes often lack objective criteria, and tradition has apparently been more important in making the choice. Consequently, studies using non-standardized methods are frequently found in the literature. We compared four commonly used methods for sampling snake assemblages in a semiarid area in Brazil. We compared the efficacy of each method based on the cost-benefit regarding the number of individuals and species captured, time, and financial investment. We found that pitfall traps were the least effective method in all aspects that were evaluated, and they were not complementary to the other methods in terms of abundance of species and assemblage structure. We conclude that methods can only be considered complementary if they are standardized to the objectives of the study. The use of pitfall traps in short-term surveys of the snake fauna in areas with shrubby vegetation and stony soil is not recommended.
Time delayed Ensemble Nudging Method
NASA Astrophysics Data System (ADS)
An, Zhe; Abarbanel, Henry
Optimal nudging methods based on time-delayed embedding theory have shown potential for analysis and data assimilation in the previous literature. To extend their application and promote practical implementation, a new nudging assimilation method based on the time-delayed embedding space is presented, and its connection with other standard assimilation methods is studied. Results show that incorporating information from the time series of data can reduce the number of observations needed to preserve the quality of numerical prediction, making the method a potential alternative in the field of data assimilation for large geophysical models.
The Next-Generation PCR-Based Quantification Method for Ambient Waters: Digital PCR.
Cao, Yiping; Griffith, John F; Weisberg, Stephen B
2016-01-01
Real-time quantitative PCR (qPCR) is increasingly being used for ambient water monitoring, but development of digital polymerase chain reaction (digital PCR) has the potential to further advance the use of molecular techniques in such applications. Digital PCR refines qPCR by partitioning the sample into thousands to millions of miniature reactions that are examined individually for binary endpoint results, with DNA density calculated from the fraction of positives using Poisson statistics. This direct quantification removes the need for standard curves, eliminating the labor and materials associated with creating and running standards with each batch, and removing biases associated with standard variability and mismatching amplification efficiency between standards and samples. Confining reactions and binary endpoint measurements to small partitions also leads to other performance advantages, including reduced susceptibility to inhibition, increased repeatability and reproducibility, and increased capacity to measure multiple targets in one analysis. As such, digital PCR is well suited for ambient water monitoring applications and is particularly advantageous as molecular methods move toward autonomous field application.
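The Poisson step described above is compact enough to show directly; a minimal sketch with hypothetical partition counts and volume:

```python
import math

def dpcr_concentration(n_positive, n_total, partition_volume_ul):
    """Digital PCR absolute quantification via Poisson statistics.

    From the fraction of positive partitions p, the mean copies per
    partition is lambda = -ln(1 - p); dividing by partition volume gives
    copies per microliter, with no standard curve required."""
    p = n_positive / n_total
    lam = -math.log(1.0 - p)
    return lam / partition_volume_ul

# Example: 4,500 of 20,000 partitions positive, 0.85 nL partitions
print(f"{dpcr_concentration(4500, 20000, 0.85e-3):.0f} copies/uL")
```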
Kothari, Ruchi; Bokariya, Pradeep; Singh, Ramji; Singh, Smita; Narang, Purvasha
2014-01-01
To evaluate whether glaucomatous visual field defects, particularly the pattern standard deviation (PSD) of the Humphrey visual field, are associated with visual evoked potential (VEP) parameters in patients with primary open angle glaucoma (POAG). Visual fields by Humphrey perimetry and simultaneous recordings of pattern reversal visual evoked potentials (PRVEP) were assessed in 100 patients with POAG. The stimulus configuration for the VEP recordings consisted of the transient pattern reversal method, in which a black and white checkerboard pattern was generated (full field) and displayed on a VEP monitor (colour, 14″) by an electronic pattern regenerator built into an evoked potential recorder (RMS EMG EP MARK II). The results of our study indicate a highly significant (P<0.001) negative correlation of P100 amplitude, and statistically significant (P<0.05) positive correlations of N70 latency, P100 latency and N155 latency, with the PSD of the Humphrey visual field in subjects with POAG across various age groups, as evaluated by Student's t-test. Prolongation of VEP latencies was mirrored by a corresponding increase in PSD values. Conversely, as PSD increased, the magnitude of the VEP excursions diminished.
Lafont, F.; Ribeiro-Palau, R.; Kazazis, D.; Michon, A.; Couturaud, O.; Consejo, C.; Chassagne, T.; Zielinski, M.; Portail, M.; Jouault, B.; Schopfer, F.; Poirier, W.
2015-01-01
Replacing GaAs by graphene to realize more practical quantum Hall resistance standards (QHRS), accurate to within 10^-9 in relative value but operating at magnetic fields lower than 10 T, is an ongoing goal in metrology. To date, the required accuracy has been reported only a few times, in graphene grown on SiC by Si sublimation, under higher magnetic fields. Here, we report on a graphene device grown by chemical vapour deposition on SiC, which demonstrates such accuracies of the Hall resistance from 10 T up to 19 T at 1.4 K. This is explained by a quantum Hall effect with low dissipation, resulting from strongly localized bulk states at the magnetic length scale over a wide magnetic field range. Our results show that graphene-based QHRS can replace their GaAs counterparts by operating in as-convenient cryomagnetic conditions, but over an extended magnetic field range. They rely on a promising hybrid and scalable growth method and a fabrication process achieving low-electron-density devices. PMID:25891533
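For orientation, the quantized resistance values underlying such standards follow directly from the Planck constant and elementary charge; a short sketch (the ν = 2 plateau is the one conventionally used in resistance metrology):

```python
# Quantized Hall resistance underlying graphene and GaAs standards:
# R_H = h / (nu * e^2), with the nu = 2 plateau used in practice.
H_PLANCK = 6.62607015e-34   # J s (exact in the 2019 SI)
E_CHARGE = 1.602176634e-19  # C (exact in the 2019 SI)

def hall_plateau_resistance(nu):
    return H_PLANCK / (nu * E_CHARGE ** 2)

print(f"R_K   = {hall_plateau_resistance(1):.4f} ohm")  # ~25812.8074
print(f"R_K/2 = {hall_plateau_resistance(2):.4f} ohm")  # ~12906.4037
```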
Iowa Commercial Pesticide Applicator Manual, Category 1C: Agricultural Crop Disease Control. CS-11.
ERIC Educational Resources Information Center
Nyvall, Robert F.; Ryan, Stephen O.
This manual provides information needed to meet specific standards for certification as a pesticide applicator. It summarizes the economically important diseases of field and forage crops such as corn, soybeans and alfalfa. Special attention is given to pesticide application methods and safety. (CS)
Modelling rollover behaviour of excavator-based forest machines
M.W. Veal; S.E. Taylor; Robert B. Rummer
2003-01-01
This poster presentation provides results from analytical and computer simulation models of rollover behaviour of hydraulic excavators. These results are being used as input to the operator protective structure standards development process. Results from rigid body mechanics and computer simulation methods agree well with field rollover test data. These results show...
MACRO- MICRO-PURGE SOIL GAS SAMPLING METHODS FOR THE COLLECTION OF CONTAMINANT VAPORS
Purging influence on soil gas concentrations for volatile organic compounds (VOCs), as affected by sampling tube inner diameter and sampling depth (i.e., dead-space purge volume), was evaluated at different field sites. A macro-purge sampling system consisted of a standard hollo...
CTEPP STANDARD OPERATING PROCEDURE FOR SHIPPING AND STORING DATA COLLECTION FORMS (SOP-3.12)
This SOP describes the method for shipping and storing data collection forms. All data collection forms will be processed in the Battelle North Carolina (NC) office. The Ohio field staff will ship the completed data collection forms to the Battelle NC office.
From Knowledge to Practice: A Gifted Educator's Journey
ERIC Educational Resources Information Center
Reinhard, Jessica J.
2016-01-01
This qualitative case study of a third-year teacher of intermediate students in a self-contained gifted education classroom uncovers the relationship between knowledge of pedagogical practices from national gifted education standards and their transfer to classroom practice. Ethnographic methods of interviews, field observations, lesson documents,…
Update on the Activities of The NELAC Institute (TNI)
2012-03-28
Presentation topics include: IEC 17025:2005; How to Manage an Effective Quality Management System; TNI Cooperative Agreements with EPA, Former (2006-2010): $400,000 for...; Correct Use of Standard Methods; Accreditation Demonstrates Competency for Field Activities; Getting Ready for NEFAP; A Practical Foundation in ISO
7 CFR 205.206 - Crop pest, weed, and disease management practice standard.
Code of Federal Regulations, 2010 CFR
2010-01-01
... problems may be controlled through mechanical or physical methods including but not limited to: (1... problems may be controlled through: (1) Mulching with fully biodegradable materials; (2) Mowing; (3...) Plastic or other synthetic mulches: Provided, That, they are removed from the field at the end of the...
7 CFR 205.206 - Crop pest, weed, and disease management practice standard.
Code of Federal Regulations, 2011 CFR
2011-01-01
... problems may be controlled through mechanical or physical methods including but not limited to: (1... problems may be controlled through: (1) Mulching with fully biodegradable materials; (2) Mowing; (3...) Plastic or other synthetic mulches: Provided, That, they are removed from the field at the end of the...
Magneto-hydrodynamical model for plasma
NASA Astrophysics Data System (ADS)
Liu, Ruikuan; Yang, Jiayan
2017-10-01
Based on Newton's second law and the Maxwell equations for the electromagnetic field, we establish a new 3-D incompressible magneto-hydrodynamics model for the motion of plasma under the standard Coulomb gauge. Using the Galerkin method, we prove the existence of a global weak solution for this new 3-D model.
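For orientation, a commonly used form of the incompressible viscous, resistive MHD system is shown below; the paper's own model, derived under the Coulomb gauge, may differ in its exact terms and normalization:

```latex
\begin{aligned}
\partial_t \mathbf{u} + (\mathbf{u}\cdot\nabla)\mathbf{u}
  &= -\nabla p + \nu\,\Delta\mathbf{u}
     + (\nabla\times\mathbf{B})\times\mathbf{B}, \\
\partial_t \mathbf{B} &= \nabla\times(\mathbf{u}\times\mathbf{B})
     + \eta\,\Delta\mathbf{B}, \\
\nabla\cdot\mathbf{u} &= 0, \qquad \nabla\cdot\mathbf{B} = 0,
\end{aligned}
```

where u is the fluid velocity, B the magnetic field, p the pressure, ν the viscosity, and η the magnetic diffusivity.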
Moridis, George J.; Oldenburg, Curtis M.
2001-01-01
Disclosed are processes for monitoring and control of underground contamination, which involve the application of ferrofluids. Two broad uses of ferrofluids are described: (1) to control liquid movement by the application of strong external magnetic fields; and (2) to image liquids by standard geophysical methods.
Reflective Field Experiences for Success in Teaching Elementary Mathematics
ERIC Educational Resources Information Center
Robards, Shirley N.
2009-01-01
In this paper, the author discusses the major components of a junior level pedagogy course for elementary education majors learning to teach mathematics. The course reviews content and knowledge of the teacher candidates and introduces methods and materials for teaching elementary mathematics using the Standards or benchmarks from the National…
NASA Astrophysics Data System (ADS)
Takano, Yukinori; Hirata, Akimasa; Fujiwara, Osamu
Human exposure to electric and/or magnetic fields at low frequencies may cause direct effects such as nerve stimulation and excitation. Therefore, basic restrictions are specified in terms of induced current density in the ICNIRP guidelines and of in-situ electric field in the IEEE standard. An external electric or magnetic field which does not produce induced quantities exceeding the basic restriction is used as a reference level. The relationship between the basic restriction and the reference level for low-frequency electric and magnetic fields has been investigated using European anatomic models, but only to a limited extent for Japanese models, especially for electric field exposure. In addition, that relationship has not been discussed thoroughly. In the present study, we calculated the induced quantities in anatomic Japanese male and female models exposed to electric and magnetic fields at the reference level. A quasi-static finite-difference time-domain (FDTD) method was applied to analyze this problem. As a result, the spatially averaged induced current density was found to be more sensitive to the averaging algorithm than the in-situ electric field. For electric and magnetic field exposure at the ICNIRP reference level, the maximum values of the induced current density for different averaging algorithms were smaller than the basic restriction in most cases. For exposures at the reference level in the IEEE standard, the maximum electric fields in the brain were larger than the basic restriction for the brain, while they were smaller for the spinal cord and heart.
Ploc, Ondrej; Kubancak, Jan; Sihver, Lembit; Uchihori, Yukio; Jakubek, Jan; Ambrozova, Iva; Molokanov, Alexander; Pinsky, Lawrence
2014-01-01
The objective of our research was to explore the capabilities of Timepix for use as a single dosemeter and LET spectrometer in mixed radiation fields created by heavy ions. We exposed it to radiation fields (i) at heavy ion beams at HIMAC, Chiba, Japan; (ii) in CERN's high-energy reference field (CERF) facility at Geneva, France/Switzerland; (iii) in the exposure room of the proton therapy laboratory at JINR, Dubna, Russia; and (iv) onboard aircraft. We compared the absolute values of dosimetric quantities obtained with Timepix with those from other dosemeters and spectrometers such as the tissue-equivalent proportional counter (TEPC) Hawk, the silicon detector Liulin, and track-etched detectors (TEDs).
Comparability between various field and laboratory wood-stove emission-measurement methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCrillis, R.C.; Jaasma, D.R.
1991-01-01
The paper compares various field and laboratory woodstove emission measurement methods. In 1988, the U.S. EPA promulgated performance standards for residential wood heaters (woodstoves). Over the past several years, a number of field studies have been undertaken to determine the actual level of emission reduction achieved by new technology woodstoves in everyday use. The studies have required the development and use of particulate and gaseous emission sampling equipment compatible with operation in private homes. Since woodstoves are tested for certification in the laboratory using EPA Methods 5G and 5H, it is of interest to determine the correlation between these regulatory methods and the in-house equipment. Two in-house sampling systems have been used most widely: one is an intermittent, pump-driven particulate sampler that collects particulate and condensible organics on a filter and organic adsorbent resin; the other uses an evacuated cylinder as the motive force, with particulate and condensible organics collected in a condenser and dual filter. Both samplers can operate unattended for 1-week periods. A large number of tests have been run comparing Methods 5G and 5H to both samplers. The paper presents these comparison data and determines the relationships between the regulatory and field samplers.
Schalken, Naomi; Rietbergen, Charlotte
2017-01-01
Objective: The goal of this systematic review was to examine the reporting quality of the method section of quantitative systematic reviews and meta-analyses from 2009 to 2016 in the field of industrial and organizational psychology with the help of the Meta-Analysis Reporting Standards (MARS), and to update previous research, such as the study of Aytug et al. (2012) and Dieckmann et al. (2009). Methods: A systematic search for quantitative systematic reviews and meta-analyses was conducted in the top 10 journals in the field of industrial and organizational psychology between January 2009 and April 2016. Data were extracted on study characteristics and items of the method section of MARS. A cross-classified multilevel model was analyzed, to test whether publication year and journal impact factor (JIF) were associated with the reporting quality scores of articles. Results: Compliance with MARS in the method section was generally inadequate in the random sample of 120 articles. Variation existed in the reporting of items. There were no significant effects of publication year and journal impact factor (JIF) on the reporting quality scores of articles. Conclusions: The reporting quality in the method section of systematic reviews and meta-analyses was still insufficient, therefore we recommend researchers to improve the reporting in their articles by using reporting standards like MARS. PMID:28878704
Analgesic, antibacterial and central nervous system depressant activities of Albizia procera leaves.
Khatoon, Mst Mahfuza; Khatun, Mst Hajera; Islam, Md Ekramul; Parvin, Mst Shahnaj
2014-04-01
To ascertain the analgesic, antibacterial and central nervous system (CNS) depressant activities of the ethyl acetate, dichloromethane and carbon tetrachloride fractions of the methanol extract of Albizia procera (A. procera) leaves. Leaf extracts of A. procera were tested for analgesic activity by the acetic acid-induced and formalin test methods in mice. The in vitro antibacterial activity was assessed by the agar well diffusion method. CNS depressant activity was evaluated by hole cross and open field tests. All the extracts at 200 mg/kg exhibited significant (P<0.01) analgesic activity in the acetic acid-induced and formalin test methods in mice. The analgesic activity of the ethyl acetate fraction was almost the same as that of the standard drug indomethacin in the acetic acid-induced method. The CNS depressant activity of the extracts at 500 mg/kg was comparable to the positive control diazepam, as determined by the hole cross and open field test methods. The extracts exhibited moderate antimicrobial activity against all the tested microorganisms (Staphylococcus aureus, Bacillus cereus, Pseudomonas aeruginosa, Escherichia coli, Shigella sonnei, Shigella boydii) at a concentration of 0.8 mg/disc. The measured diameters of the zones of inhibition for the extracts were within the range of 7 to 12 mm, less than those of the standard kanamycin (16-24 mm). It is concluded that all the extracts possess potential analgesic and CNS depressant activity. This study also showed that different fractions of the methanol extract could be potential sources of new antimicrobial agents.
Lee, Sangyeol; Reinhardt, Joseph M; Cattin, Philippe C; Abràmoff, Michael D
2010-08-01
Fundus camera imaging of the retina is widely used to diagnose and manage ophthalmologic disorders including diabetic retinopathy, glaucoma, and age-related macular degeneration. Retinal images typically have a limited field of view, and multiple images can be joined together using an image registration technique to form a montage with a larger field of view. A variety of methods for retinal image registration have been proposed, but evaluating such methods objectively is difficult due to the lack of a reference standard for the true alignment of the individual images that make up the montage. A method of generating simulated retinal images by modeling the geometric distortions due to the eye geometry and the image acquisition process is described in this paper. We also present a validation process that can be used for any retinal image registration method by tracing through the distortion path and assessing the geometric misalignment in the coordinate system of the reference standard. The proposed method can be used to perform an accuracy evaluation over the whole image, so that distortion in the non-overlapping regions of the montage components can be easily assessed. We demonstrate the technique by generating test image sets with a variety of overlap conditions and compare the accuracy of several retinal image registration models.
A Field-Based Aquatic Life Benchmark for Conductivity in ...
This report adapts the standard U.S. EPA methodology for deriving ambient water quality criteria. Rather than use toxicity test results, the adaptation uses field data to determine the loss of 5% of genera from streams. The method is applied to derive effect benchmarks for dissolved salts as measured by conductivity in Central Appalachian streams using data from West Virginia and Kentucky. This report provides scientific evidence for a conductivity benchmark in a specific region rather than for the entire United States.
2016-05-01
...A9 CPU and 15 W for the i7 CPU. A method of accelerating this computation is by using a customized hardware unit called a field-programmable gate array (FPGA), allowing the implementation of custom logic to accelerate computational workloads. This FPGA fabric, in addition to the standard programmable logic, contains 220...
METHOD OF TESTING THERMAL NEUTRON FISSIONABLE MATERIAL FOR PURITY
Fermi, E.; Anderson, H.L.
1961-01-24
A process is given for determining the neutronic purity of fissionable material by the so-called shotgun test. The effect of a standard neutron absorber of known characteristics and amount on a neutronic field, also of known characteristics, is measured and compared with the effect which the impurities derived from a known quantity of fissionable material have on the same neutronic field. The two readings are then made the basis of a calculation from which the amount of impurities can be computed.
Magnetic field mapping of the UCNTau magneto-gravitational trap: design study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Libersky, Matthew Murray
2014-09-04
The beta decay lifetime of the free neutron is an important input to the Standard Model of particle physics, but values measured using different methods have exhibited substantial disagreement. The UCNτ experiment in development at Los Alamos National Laboratory (LANL) plans to explore better methods of measuring the neutron lifetime using ultracold neutrons (UCNs). In this experiment, UCNs are confined in a magneto-gravitational trap formed by a curved, asymmetric Halbach array placed inside a vacuum vessel and surrounded by holding field coils. If any defects present in the Halbach array are sufficient to reduce the local field near the surface below that needed to repel UCNs of the desired energy level, loss by material interaction can occur at a rate similar to the loss by beta decay. A map of the magnetic field near the surface of the array is necessary to identify any such defects, but the array's curved geometry and placement in a vacuum vessel make conventional field-mapping methods difficult. A system consisting of computer vision-based tracking and a rover holding a Hall probe has been designed to map the field near the surface of the array, and construction of an initial prototype has begun at LANL. The design of the system and initial results are described here.
NASA Astrophysics Data System (ADS)
Yarnykh, V.; Korostyshevskaya, A.
2017-08-01
Macromolecular proton fraction (MPF) is a biophysical parameter describing the amount of macromolecular protons involved into magnetization exchange with water protons in tissues. MPF represents a significant interest as a magnetic resonance imaging (MRI) biomarker of myelin for clinical applications. A recent fast MPF mapping method enabled clinical translation of MPF measurements due to time-efficient acquisition based on the single-point constrained fit algorithm. However, previous MPF mapping applications utilized only 3 Tesla MRI scanners and modified pulse sequences, which are not commonly available. This study aimed to test the feasibility of MPF mapping implementation on a 1.5 Tesla clinical scanner using standard manufacturer’s sequences and compare the performance of this method between 1.5 and 3 Tesla scanners. MPF mapping was implemented on 1.5 and 3 Tesla MRI units of one manufacturer with either optimized custom-written or standard product pulse sequences. Whole-brain three-dimensional MPF maps obtained from a single volunteer were compared between field strengths and implementation options. MPF maps demonstrated similar quality at both field strengths. MPF values in segmented brain tissues and specific anatomic regions appeared in close agreement. This experiment demonstrates the feasibility of fast MPF mapping using standard sequences on 1.5 T and 3 T clinical scanners.
Numerical analysis of whole-body cryotherapy chamber design improvement
NASA Astrophysics Data System (ADS)
Yerezhep, D.; Tukmakova, A. S.; Fomin, V. E.; Masalimov, A.; Asach, A. V.; Novotelnova, A. V.; Baranov, A. Yu
2018-05-01
Whole-body cryotherapy (WBC) is a state-of-the-art method that uses cold for the treatment and prevention of diseases. The process involves exposing the human body to cryogenic gas inside a special cryochamber. The temperature field in the chamber is of great importance, since local over-cooling of the integument may occur. A numerical simulation of WBC has been carried out, and a modification of the chamber design has been proposed in order to increase the uniformity of the internal temperature field. The results have been compared with those obtained for a standard chamber design. With a curved wall of suitable height, the temperature gradient formed in the chamber was reduced by almost a factor of two compared with the standard design. The proposed modification may increase both the safety and the comfort of cryotherapy.
NASA Technical Reports Server (NTRS)
Polzin, Kurt A.; Hill, Carrie S.
2013-01-01
Inductive magnetic field probes (also known as B-dot probes and sometimes as B-probes or magnetic probes) are useful for performing measurements in electric space thrusters and various plasma accelerator applications where a time-varying magnetic field is present. Magnetic field probes have proven to be a mainstay in diagnosing plasma thrusters where changes occur rapidly with respect to time, providing the means to measure the magnetic fields produced by time-varying currents and even to obtain an indirect measure of the plasma current density through the application of Ampère's law. Examples of applications where this measurement technique has been employed include pulsed plasma thrusters and quasi-steady magnetoplasmadynamic thrusters. The Electric Propulsion Technical Committee (EPTC) of the American Institute of Aeronautics and Astronautics (AIAA) was asked to assemble a Committee on Standards (CoS) for Electric Propulsion Testing. The assembled CoS was tasked with developing Standards and Recommended Practices for various diagnostic techniques used in the evaluation of plasma thrusters. These include measurements that can yield either global information related to a thruster and its performance or detailed, local data related to the specific physical processes occurring in the plasma. This paper presents a summary of the standard, describing the preferred methods for fabrication, calibration, and usage of inductive magnetic field probes for use in diagnosing plasma thrusters. Inductive magnetic field probes (also called B-dot probes throughout this document) are commonly used in electric propulsion (EP) research and testing to measure unsteady magnetic fields produced by time-varying currents. The B-dot probe is relatively simple in construction and requires minimal cost, making it readily accessible to most researchers. While relatively simple, the design of a B-dot probe is not trivial, and there are many opportunities for errors in probe construction, calibration, usage, and in the post-processing of the data the probe produces. There are typically several ways in which each of these steps can be approached, and different applications may require more or less rigorous attention to various issues.
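The core data-reduction step for any B-dot probe is numerical integration of the probe voltage using a calibrated effective area-turns product; a minimal sketch (not taken from the standard itself, with simplifications noted in the comments):

```python
import numpy as np

def bdot_to_field(voltage, dt, na_eff):
    """Recover B(t) from a B-dot probe trace.

    The probe obeys V(t) = -NA_eff * dB/dt, so B(t) is the negated time
    integral of the voltage divided by the calibrated effective
    area-turns product NA_eff (m^2). In practice a DC offset is
    subtracted before integrating to suppress drift; omitted here."""
    return -np.cumsum(np.asarray(voltage)) * dt / na_eff

# Example: a 1 MHz, 10 mT sinusoidal field seen by a 10-turn, 3 mm loop
t = np.arange(0.0, 5e-6, 1e-9)
na = 10 * np.pi * (3e-3) ** 2
b_true = 0.01 * np.sin(2 * np.pi * 1e6 * t)
v = -na * np.gradient(b_true, t)     # ideal probe output
b_rec = bdot_to_field(v, 1e-9, na)   # ~b_true, up to integration error
```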
Zuckerwar, Allan J; Herring, G C; Elbing, Brian R
2006-01-01
A free-field (FF) substitution method for calibrating the pressure sensitivity of microphones at frequencies up to 80 kHz is demonstrated with both grazing and normal-incidence geometries. The substitution-based method, as opposed to a simultaneous method, avoids problems associated with the nonuniformity of the sound field and, as applied here, uses a 1/4-in. air-condenser pressure microphone as a known reference. Best results were obtained with a centrifugal fan, which is used as a random, broadband sound source. A broadband source minimizes reflection-related interferences that can plague FF measurements. Calibrations were performed on 1/4-in. FF air-condenser, electret, and microelectromechanical systems (MEMS) microphones in an anechoic chamber. The uncertainty of this FF method is estimated by comparing the pressure sensitivity of an air-condenser FF microphone, as derived from the FF measurement, with that of an electrostatic actuator calibration. The root-mean-square difference is found to be +/- 0.3 dB over the range 1-80 kHz, and the combined standard uncertainty of the FF method, including other significant contributions, is +/- 0.41 dB.
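The substitution principle itself reduces to a one-line relation between the reference and test microphone outputs; a minimal sketch with hypothetical values:

```python
import math

def substitution_sensitivity_db(m_ref_db, v_dut_rms, v_ref_rms):
    """Free-field substitution: the reference and test microphones are
    placed successively at the same point in the same sound field, so the
    unknown sensitivity follows from the ratio of their outputs.
    m_ref_db: reference microphone sensitivity level (dB re 1 V/Pa)."""
    return m_ref_db + 20.0 * math.log10(v_dut_rms / v_ref_rms)

# Example: test microphone outputs 1.6x the reference voltage
print(substitution_sensitivity_db(-38.0, 1.6, 1.0))  # ~ -33.9 dB re 1 V/Pa
```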
Cost-effectiveness of the stream-gaging program in Missouri
Waite, L.A.
1987-01-01
This report documents the results of an evaluation of the cost effectiveness of the 1986 stream-gaging program in Missouri. Alternative methods of developing streamflow information and cost-effective resource allocation were used to evaluate the Missouri program. Alternative methods were considered statewide, but the cost-effective resource allocation study was restricted to the area covered by the Rolla field headquarters. The average standard error of estimate for records of instantaneous discharge was 17 percent; assuming the 1986 budget and operating schedule, it was shown that this overall degree of accuracy could be improved to 16 percent by altering the 1986 schedule of station visitations. A minimum budget of $203,870, with a corresponding average standard error of estimate of 17 percent, is required to operate the 1986 program for the Rolla field headquarters; a budget of less than this would not permit proper service and maintenance of the stations or adequate definition of stage-discharge relations. The maximum budget analyzed was $418,870, which resulted in an average standard error of estimate of 14 percent. Improved instrumentation can have a positive effect on streamflow uncertainties by decreasing lost records. An earlier study of data uses found that data uses were sufficient to justify continued operation of all stations. One of the stations investigated, Current River at Doniphan (07068000), was suitable for the application of alternative methods for simulating discharge records. However, the station was continued because of data use requirements. (Author's abstract)
Calculation of far-field scattering from nonspherical particles using a geometrical optics approach
NASA Technical Reports Server (NTRS)
Hovenac, Edward A.
1991-01-01
A numerical method was developed using geometrical optics to predict far-field optical scattering from particles that are symmetric about the optic axis. The diffractive component of scattering is calculated and combined with the reflective and refractive components to give the total scattering pattern. The phase terms of the scattered light are calculated as well. Verification of the method was achieved by assuming a spherical particle and comparing the results to Mie scattering theory. Agreement with the Mie theory was excellent in the forward-scattering direction. However, small-amplitude oscillations near the rainbow regions were not observed using the numerical method. Numerical data from spheroidal particles and hemispherical particles are also presented. The use of hemispherical particles as a calibration standard for intensity-type optical particle-sizing instruments is discussed.
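The diffractive component described above is, for a large particle, the familiar Airy pattern of a circular obstacle; a minimal sketch (the normalization is chosen so the forward amplitude matches S(0) = x²/2; this is a textbook form, not code from the paper):

```python
import numpy as np
from scipy.special import j1

def airy_diffraction(theta, diameter, wavelength):
    """Diffractive scattering component of a large particle, modeled as
    Fraunhofer diffraction by a circular disc of the same diameter.
    Returns |S_d(theta)|^2 with S_d(0) = x^2/2, x = pi*d/lambda."""
    x = np.pi * diameter / wavelength
    z = x * np.sin(np.asarray(theta, dtype=float))
    safe = np.where(z == 0.0, 1.0, z)                  # avoid 0/0 at theta = 0
    core = np.where(z == 0.0, 1.0, 2.0 * j1(safe) / safe)
    return (x ** 2 / 2.0) ** 2 * core ** 2

# 50 um sphere at the He-Ne wavelength: pattern confined to a few degrees
theta = np.linspace(0.0, np.radians(5.0), 500)
pattern = airy_diffraction(theta, 50e-6, 0.6328e-6)
```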
NASA Astrophysics Data System (ADS)
Han, Xu; Xie, Guangping; Laflen, Brandon; Jia, Ming; Song, Guiju; Harding, Kevin G.
2015-05-01
In the real application environment of field engineering, a large variety of metrology tools are required by the technician to inspect part profile features. However, some of these tools are burdensome and address only a single application or measurement. In other cases, standard tools lack the capability of accessing irregular profile features. Customers of field engineering want the next generation of metrology devices to have the ability to replace the many current tools with one single device. This paper describes a method based on the ring optical gage concept for the measurement of numerous kinds of profile features useful to the field technician. The ring optical system is composed of a collimated laser, a conical mirror and a CCD camera. To be useful for a wide range of applications, the ring optical system requires profile feature extraction algorithms and data manipulation directed toward real-world applications in field operation. The paper discusses such practical applications as measuring non-ideal round holes with off-centered or oblique axes. The algorithms needed to analyze other features, such as measuring the width of gaps, the radius of transition fillets, the fall of step surfaces, and surface parallelism, are also discussed. With the assistance of image processing and geometric algorithms, these features can be extracted with reasonable performance. Tailoring the feature extraction analysis to this specific gage offers the potential for a wider application base beyond simple inner diameter measurements. The paper presents experimental results that are compared with standard gages to demonstrate the performance and feasibility of the analysis in real-world field engineering. Potential accuracy improvement methods, a new dual ring design, and future work are discussed at the end of this paper.
Spectral Radiance of a Large-Area Integrating Sphere Source
Walker, James H.; Thompson, Ambler
1995-01-01
The radiance and irradiance calibration of large field-of-view scanning and imaging radiometers for remote sensing and surveillance applications has resulted in the development of novel calibration techniques. One of these techniques is the employment of large-area integrating sphere sources as radiance or irradiance secondary standards. To assist the National Aeronautics and Space Administration's space-based ozone measurement program, the spectral radiance of a commercially available large-area internally illuminated integrating sphere source was characterized in the wavelength region from 230 nm to 400 nm at the National Institute of Standards and Technology. Spectral radiance determinations and spatial mappings of the source indicate that carefully designed large-area integrating sphere sources can be measured with a 1 % to 2 % expanded uncertainty (two standard deviation estimate) in the near ultraviolet, with spatial nonuniformities of 0.6 % or smaller across a 20 cm diameter exit aperture. A method is proposed for the calculation of the final radiance uncertainties of the source which includes the field of view of the instrument being calibrated. PMID:29151725
[Research progress on mechanical performance evaluation of artificial intervertebral disc].
Li, Rui; Wang, Song; Liao, Zhenhua; Liu, Weiqiang
2018-03-01
The mechanical properties of artificial intervertebral discs (AID) are related to the long-term reliability of the prosthesis. Three testing approaches, based on different tools, are involved in the mechanical performance evaluation of AID: testing with a mechanical simulator, in vitro specimen testing, and finite element analysis. In this study, the testing standards, testing equipment and materials for AID are first introduced. The review then focuses on the present status of AID static mechanical property tests (static axial compression, static axial compression-shear), dynamic mechanical property tests (dynamic axial compression, dynamic axial compression-shear), creep and stress relaxation tests, device pushout tests, core pushout tests, subsidence tests, etc. The experimental techniques of the in vitro specimen testing method and the testing results for available artificial discs are summarized, as are the experimental methods and research status of finite element analysis. Finally, research trends in AID mechanical performance evaluation are forecast: the simulator, load, dynamic cycle, motion mode, specimen and test standard will be important research fields in the future.
NASA Astrophysics Data System (ADS)
Morales, Juan; Goguitchaichvili, Avto; Alva-Valdivia, Luis M.; Urrutia-Fucugauchi, Jaime
2006-06-01
Twenty years after Tanaka and Kono's pioneering contribution (Tanaka and Kono, 1984), we give some new details on the effect of applied field strength during Thellier paleointensity experiments. Special attention is paid to the relation between the magnitude of the laboratory field and Coe's quality factors (Coe et al., 1978). Full thermoremanent magnetizations were imparted on natural samples containing low-Ti titanomagnetites of pseudo-single-domain structure in a 40-μT magnetic field from 600 °C to room temperature. The samples were subjected to the routine Thellier procedure using a wide range of applied laboratory fields. Results indicate that values of the laboratory fields may be accurately reproduced within 2% standard error. The quality factors, however, decrease when the magnitude of the 'ancient' field does not match the applied laboratory field. To cite this article: J. Morales et al., C. R. Geoscience 338 (2006).
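The paleointensity estimate at the heart of the Thellier method is a slope on the Arai diagram; a minimal sketch with idealized data (real experiments add pTRM checks and the Coe quality factors discussed above):

```python
import numpy as np

def thellier_paleointensity(nrm_lost, ptrm_gained, lab_field_uT):
    """Classic Thellier estimate: the ancient field equals the magnitude of
    the slope of the NRM-lost vs. pTRM-gained (Arai) diagram times the
    laboratory field. A least-squares line is fit through the points."""
    slope, _ = np.polyfit(ptrm_gained, nrm_lost, 1)
    return abs(slope) * lab_field_uT

# Toy Arai data for a sample whose TRM was acquired in ~40 uT
ptrm = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
nrm = 1.0 - ptrm                                  # ideal unit slope
print(thellier_paleointensity(nrm, ptrm, 40.0))   # -> 40.0
```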
Xu, Tielong; Zhong, Daibin; Tang, Linhua; Chang, Xuelian; Fu, Fengyang; Yan, Guiyun; Zheng, Bin
2014-01-28
Insecticide resistance monitoring in malaria mosquitoes is essential for guiding the rational use of insecticides in vector control programs. Resistance bioassay is the first step for insecticide monitoring and it lays an important foundation for molecular examination of resistance mechanisms. In the literature, various mosquito sample collection and preparation methods have been used, but how mosquito sample collection and preparation methods affect insecticide susceptibility bioassay results is largely unknown. The objectives of this study were to determine whether mosquito sample collection and preparation methods affected bioassay results, which may cause incorrect classification of mosquito resistance status. The study was conducted in Anopheles sinensis mosquitoes in two study sites in central China. Three mosquito sample collection and preparation methods were compared for insecticide susceptibility, kdr frequencies and metabolic enzyme activities: 1) adult mosquitoes collected from the field; 2) F1 adults from field collected, blood-fed mosquitoes; and 3) adult mosquitoes reared from field collected larvae. Mosquito sample collection and preparation methods significantly affected mortality rates in the standard WHO tube resistance bioassay. Mortality rate of field-collected female adults was 10-15% higher than in mosquitoes reared from field-collected larvae and F1 adults from field collected blood-fed females. This pattern was consistent in mosquitoes from the two study sites. High kdr mutation frequency (85-95%) with L1014F allele as the predominant mutation was found in our study populations. Field-collected female adults consistently exhibited the highest monooxygenase and GST activities. The higher mortality rate observed in the field-collected female mosquitoes may have been caused by a mixture of mosquitoes of different ages, as older mosquitoes were more susceptible to deltamethrin than younger mosquitoes. Female adults reared from field-collected larvae in resistance bioassays are recommended to minimize the effect of confounding factors such as mosquito age and blood feeding status so that more reliable and reproducible mortality may be obtained.
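As background to the WHO tube bioassay used above, control mortality is conventionally handled with Abbott's formula; a minimal sketch of that convention (thresholds follow common WHO practice, not data from this study):

```python
def abbott_corrected_mortality(test_pct, control_pct):
    """WHO tube-test convention: if control mortality is between 5% and
    20%, correct the observed mortality with Abbott's formula; above 20%
    the test is discarded and repeated."""
    if control_pct > 20:
        raise ValueError("control mortality too high; repeat the test")
    if control_pct < 5:
        return test_pct
    return 100.0 * (test_pct - control_pct) / (100.0 - control_pct)

# Example: 85% observed, 8% control mortality
print(abbott_corrected_mortality(85.0, 8.0))  # ~83.7%; below the 90% WHO
                                              # cutoff, indicating resistance
```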
Horn, Folkert K; Kaltwasser, Christoph; Jünemann, Anselm G; Kremers, Jan; Tornow, Ralf P
2012-04-01
There is evidence that multifocal visual evoked potentials (VEPs) can be used as an objective tool to detect visual field loss. The aim of this study was to correlate multifocal VEP amplitudes with standard perimetry data and retinal nerve fibre layer (RNFL) thickness. Multifocal VEP recordings were performed with a four-channel electrode array using 58 stimulus fields (pattern reversal dartboard). For each field, the recording from the channel with maximal signal-to-noise ratio (SNR) was retained, resulting in an SNR optimised virtual recording. Correlation with RNFL thickness, measured with spectral domain optical coherence tomography and with standard perimetry, was performed for nerve fibre bundle related areas. The mean amplitudes in nerve fibre related areas were smaller in glaucoma patients than in normal subjects. The differences between both groups were most significant in mid-peripheral areas. Amplitudes in these areas were significantly correlated with corresponding RNFL thickness (Spearman R=0.76) and with standard perimetry (R=0.71). The multifocal VEP amplitude was correlated with perimetric visual field data and the RNFL thickness of the corresponding regions. This method of SNR optimisation is useful for extracting data from recordings and may be appropriate for objective assessment of visual function at different locations. This study has been registered at http://www.clinicaltrials.gov (NCT00494923).
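The SNR-optimized "virtual recording" described above amounts to a per-field argmax over channels; a minimal sketch (the array shapes and SNR definition are ours, not the authors'):

```python
import numpy as np

def snr_optimized_recording(recordings, signal_win, noise_win):
    """Build a 'virtual' multifocal VEP recording: for each stimulus field
    keep the channel with the best signal-to-noise ratio.

    recordings: array of shape (n_fields, n_channels, n_samples)
    signal_win, noise_win: sample slices for response and baseline noise."""
    sig = recordings[:, :, signal_win].std(axis=2)
    noise = recordings[:, :, noise_win].std(axis=2)
    best = np.argmax(sig / noise, axis=1)             # best channel per field
    return recordings[np.arange(len(best)), best, :]  # (n_fields, n_samples)
```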
Hrovatin, Karin; Kunej, Tanja
2018-01-01
Erstwhile, sex was determined by observation, which is not always feasible. Nowadays, genetic methods are prevailing due to their accuracy, simplicity, low costs, and time-efficiency. However, there is no comprehensive review enabling overview and development of the field. The studies are heterogeneous, lacking a standardized reporting strategy. Therefore, our aim was to collect genetic sexing assays for mammals and assemble them in a catalogue with unified terminology. Publications were extracted from online databases using key words such as sexing and molecular. The collected data were supplemented with species and gene IDs and the type of sex-specific sequence variant (SSSV). We developed a catalogue and graphic presentation of diagnostic tests for molecular sex determination of mammals, based on 58 papers published from 2/1991 to 10/2016. The catalogue consists of five categories: species, genes, SSSVs, methods, and references. Based on the analysis of published literature, we propose minimal requirements for reporting, consisting of: species scientific name and ID, genetic sequence with name and ID, SSSV, methodology, genomic coordinates (e.g., restriction sites, SSSVs), amplification system, and description of detected amplicon and controls. The present study summarizes vast knowledge that has up to now been scattered across databases, representing the first step toward standardization regarding molecular sexing, enabling a better overview of existing tests and facilitating planned designs of novel tests. The project is ongoing; collecting additional publications, optimizing field development, and standardizing data presentation are needed.
NASA Astrophysics Data System (ADS)
Kazantseva, L.
2011-09-01
The collection of photographic images of Kiev University Observatory covers a period of almost a hundred years and is interesting from both a scientific and a historical point of view. The study of the techniques of such observations, the processing of negatives, the creation of copies, and the photometric standards used with various photographic emulsions and materials, together with the preserved photographic equipment and astronomical instruments (from telescopes and a unique home-made photometer to cassettes), reflects the century-long history of photographic astronomy. First, the celestial objects, astronomical events and star fields recorded over such a long time interval carry valuable information. Second, complete restoration of that information presents many difficulties: even where the emulsion is well preserved after a hundred years, the standards for describing photographs changed repeatedly; not all observation logbooks are preserved; and sometimes it is not possible to establish which instrument was used. The stage of systematizing and cataloguing the collection is therefore very important and quite difficult. Observations made under expedition conditions with various instruments require a comparative assessment of their accuracy. This work was performed on a series of collections: photographs were identified, certain standards were selected, the images of each series were scanned, and the results were compared with catalogue information by the standard method. In the future, such work will enable quick search and use of images by parameters beyond object coordinates, such as date, method of observation, and astrometric and photometric accuracy.
Xiao, Xiang; Wang, Tianping; Ye, Hongzhuan; Qiang, Guangxiang; Wei, Haiming; Tian, Zhigang
2005-01-01
OBJECTIVE: To determine the validity of a recently developed rapid test--a colloidal dye immunofiltration assay (CDIFA)--used by health workers in field settings to identify villagers infected with Schistosoma japonicum. METHODS: Health workers in the field used CDIFA to test samples from 1553 villagers in two areas of low endemicity and an area where S. japonicum was not endemic in Anhui, China. All the samples were then tested in the laboratory by laboratory staff using a standard parasitological method (Kato-Katz), an indirect haemagglutination assay (IHA), and CDIFA. The results of CDIFA performed by health workers were compared with those obtained by Kato-Katz and IHA. FINDINGS: Concordance between the results of CDIFA performed in field settings and in the laboratory was high (kappa index, 0.95; 95% confidence interval, 0.93-0.97). When Kato-Katz was used as the reference test, the overall sensitivity and specificity of CDIFA were 98.5% and 83.6%, respectively in the two villages in areas of low endemicity, while the specificity was 99.8% in the nonendemic village. Compared with IHA, the overall specificity and sensitivity of CDIFA were greater than 99% and 96%, respectively. With the combination of Kato-Katz and IHA as the reference standard, CDIFA had a sensitivity of 95.8% and a specificity of 99.5%, and an accuracy of 98.6% in the two areas of low endemicity. CONCLUSION: CDIFA is a specific, sensitive, and reliable test that can be used for rapid screening for schistosomiasis by health workers in field settings. PMID:16175827
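For readers re-deriving the agreement statistics above, the sketch below shows how sensitivity, specificity, accuracy, and the kappa index are computed from a 2x2 confusion table. The counts are illustrative placeholders, not the study's data.

```python
# Hedged sketch: diagnostic agreement statistics from a 2x2 table,
# as used to compare CDIFA against Kato-Katz/IHA reference results.
def diagnostic_stats(tp, fp, fn, tn):
    """Return sensitivity, specificity, accuracy, and Cohen's kappa."""
    n = tp + fp + fn + tn
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / n
    # Cohen's kappa: observed agreement corrected for chance agreement
    po = accuracy
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (po - pe) / (1 - pe)
    return sensitivity, specificity, accuracy, kappa

# illustrative counts only (not taken from the paper)
sens, spec, acc, kappa = diagnostic_stats(tp=138, fp=7, fn=6, tn=1402)
print(f"sensitivity={sens:.3f} specificity={spec:.3f} "
      f"accuracy={acc:.3f} kappa={kappa:.2f}")
```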
NASA Astrophysics Data System (ADS)
Eppeldauer, G. P.; Podobedov, V. B.; Cooksey, C. C.
2017-05-01
Calibration of the emitted radiation from UV sources peaking at 365 nm is necessary to achieve the ASTM-required 1 mW/cm2 minimum irradiance in certain tests of military material (ships, airplanes, etc.). These UV "black lights" are used for crack recognition in fluorescent liquid penetrant inspection. At present, these nondestructive tests are performed using Hg lamps. The lack of a proper standard and the differing spectral responsivities of the available UV meters cause significant measurement errors even when the same UV-365 source is measured. A pyroelectric radiometer standard with spectrally flat (constant) response in the UV-VIS range has been developed to solve the problem. The response curve of this standard, determined from spectral reflectance measurements, is converted into spectral irradiance responsivity with <0.5% (k=2) uncertainty as a result of using an absolute tie point from a Si-trap detector traceable to the primary standard cryogenic radiometer. The flat pyroelectric radiometer standard can be used to perform uniform integrated irradiance measurements of all kinds of UV sources (with different peaks and distributions) without using any source standard. With this broadband calibration method, yearly spectral calibrations of the reference UV (LED) sources and irradiance meters are not needed. Field UV sources and meters can be calibrated against the pyroelectric radiometer standard for broadband (integrated) irradiance and integrated responsivity. Using the broadband measurement procedure, UV measurements give uniform results with significantly decreased uncertainties.
Applications of numerical methods to simulate the movement of contaminants in groundwater.
Sun, N Z
1989-01-01
This paper reviews mathematical models and numerical methods that have been extensively used to simulate the movement of contaminants through the subsurface. The major emphasis is placed on numerical methods for advection-dominated transport problems and inverse problems. Several mathematical models that are commonly used in field problems are listed. A variety of numerical solutions for three-dimensional models is introduced, including the multiple cell balance method, which can be considered a variation of the finite element method. The multiple cell balance method is easy to understand and convenient for solving field problems. When advective transport dominates dispersive transport, two kinds of numerical difficulties, overshoot and numerical dispersion, arise in standard finite difference and finite element methods. To overcome these difficulties, various numerical techniques have been developed, such as upstream weighting methods and moving point methods. A complete review of these methods is given, and we also discuss the problems of parameter identification, reliability analysis, and optimal experiment design that are absolutely necessary for constructing a practical model. PMID:2695327
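As a concrete illustration of the upstream weighting idea mentioned above, the following sketch solves a 1D advection-dispersion problem with a first-order upwind difference for the advective term. All parameters are illustrative assumptions; this is a generic textbook scheme, not the paper's multiple cell balance method.

```python
# Hedged sketch: first-order upstream (upwind) weighting for
# advection-dominated transport; upwinding keeps the solution monotone
# (no overshoot), at the cost of some numerical dispersion.
import numpy as np

nx, L = 200, 100.0          # grid cells, domain length [m]
dx = L / nx
v, D = 1.0, 0.01            # velocity [m/d], dispersion [m^2/d]
dt = 0.4 * dx / v           # time step satisfying the CFL condition
c = np.zeros(nx)
c[:20] = 1.0                # initial contaminant slug

for _ in range(150):
    # upstream weighting: for v > 0, take the advective gradient from
    # the upwind (backward) side; np.roll implies periodic boundaries,
    # but the slug stays away from them over this run
    adv = -v * (c - np.roll(c, 1)) / dx
    disp = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2
    c = c + dt * (adv + disp)

print(f"max={c.max():.3f} min={c.min():.3f}")  # stays within [0, 1]
```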
Strain Rate Tensor Estimation in Cine Cardiac MRI Based on Elastic Image Registration
NASA Astrophysics Data System (ADS)
Sánchez-Ferrero, Gonzalo Vegas; Vega, Antonio Tristán; Grande, Lucilio Cordero; de La Higuera, Pablo Casaseca; Fernández, Santiago Aja; Fernández, Marcos Martín; López, Carlos Alberola
In this work we propose an alternative method to estimate and visualize the Strain Rate Tensor (SRT) in Magnetic Resonance Images (MRI) when Phase Contrast MRI (PCMRI) and Tagged MRI (TMRI) are not available. This alternative is based on image processing techniques; concretely, image registration algorithms are used to estimate the motion of the myocardium at each point. Additionally, a consistency checking method is presented to validate the accuracy of the estimates when no gold standard is available. Results prove that the consistency checking method provides an upper bound of the mean squared error of the estimate. Our experiments with real data show that the registration algorithm provides a useful deformation field for estimating the SRT fields. A classification between regional normal and dysfunctional contraction patterns, as compared with expert diagnosis, shows that the parameters extracted from the estimated SRT can represent these patterns. Additionally, a scheme for visualizing and analyzing the local behavior of the SRT field is presented.
Effective field renormalization group approach for Ising lattice spin systems
NASA Astrophysics Data System (ADS)
Fittipaldi, Ivon P.
1994-03-01
A new, readily applicable real-space renormalization group framework (EFRG) for computing the critical properties of Ising lattice spin systems is presented. The method, which follows the same strategy as the mean-field renormalization group scheme (MFRG), is based on rigorous Ising spin identities and utilizes a convenient differential operator expansion technique. Within this scheme, in contrast with the usual mean-field type of equation of state, all the relevant self-spin correlations are taken into account exactly. The results for the critical coupling and the critical exponent ν of the correlation length are very satisfactory, and it is shown that this technique leads to rather accurate results which represent a remarkable improvement on those obtained from the standard MFRG method. In particular, it is shown that the present EFRG approach correctly distinguishes the geometry of the lattice structure even when employing its simplest cluster-size version. Owing to its simplicity, we also comment on the wide applicability of the present method to problems in crystalline and disordered Ising spin systems.
A study of radar cross section measurement techniques
NASA Technical Reports Server (NTRS)
Mcdonald, Malcolm W.
1986-01-01
Past, present, and proposed future technologies for the measurement of radar cross section were studied. The purpose was to determine which method(s) could most advantageously be implemented in the large microwave anechoic chamber facility which is operated at the antenna test range site. The progression toward performing radar cross section measurements of space vehicles with which the Orbital Maneuvering Vehicle will be called upon to rendezvous and dock is a natural outgrowth of previous work conducted in recent years of developing a high accuracy range and velocity sensing radar system. The radar system was designed to support the rendezvous and docking of the Orbital Maneuvering Vehicle with various other space vehicles. The measurement of radar cross sections of space vehicles will be necessary in order to plan properly for Orbital Maneuvering Vehicle rendezvous and docking assignments. The methods which were studied include: standard far-field measurements; reflector-type compact range measurements; lens-type compact range measurement; near field/far field transformations; and computer predictive modeling. The feasibility of each approach is examined.
Construction of 144, 565 keV and 5.0 MeV monoenergetic neutron calibration fields at JAERI.
Tanimura, Y; Yoshizawa, M; Saegusa, J; Fujii, K; Shimizu, S; Yoshida, M; Shibata, Y; Uritani, A; Kudo, K
2004-01-01
Monoenergetic neutron calibration fields of 144 keV, 565 keV, and 5.0 MeV have been developed at the Facility of Radiation Standards of JAERI using a 4 MV Pelletron accelerator. The 7Li(p,n)7Be and 2H(d,n)3He reactions are employed for neutron production. The neutron energy was measured by the time-of-flight method with a liquid scintillation detector and calculated with the MCNP-ANT code. A long counter is employed as a neutron monitor because of its flat response. The monitor is set up where the influence of neutrons in-scattered from devices and their supporting materials at the calibration point is as small as possible. The calibration coefficients relating the monitor counts to the neutron fluence at the calibration point were obtained from the reference fluence measured with the transfer instrument of the primary standard laboratory (AIST), a Bonner sphere counter of 24.13 cm diameter. The traceability of the fields to AIST was thereby established through this calibration.
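In the non-relativistic limit, the time-of-flight energy determination mentioned above reduces to E = (1/2) m (L/t)^2. The sketch below evaluates this for an assumed 3 m flight path; the actual JAERI geometry is not given in the abstract.

```python
# Hedged sketch: non-relativistic time-of-flight neutron energy.
# Flight path and timing values are illustrative assumptions.
M_N = 1.674927e-27   # neutron mass [kg]
EV = 1.602177e-19    # joules per electronvolt

def tof_energy_kev(path_m, tof_s):
    """Neutron kinetic energy [keV] from flight path and flight time."""
    v = path_m / tof_s
    return 0.5 * M_N * v**2 / EV / 1e3

# a 144 keV neutron (v ~ 5.25e6 m/s) crosses a 3 m path in ~571 ns
print(f"{tof_energy_kev(3.0, 571e-9):.1f} keV")
```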
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walker, Iain S.; Wray, Craig P.; Guillot, Cyril
2003-08-01
In this report, we discuss the accuracy of flow hoods for residential applications, based on laboratory tests and field studies. The results indicate that commercially available hoods are often inadequate to measure flows in residential systems, and that there can be a wide range of performance between different flow hoods. The errors are due to poor calibrations, sensitivity of existing hoods to grille flow non-uniformities, and flow changes from added flow resistance. We also evaluated several simple techniques for measuring register airflows that could be adopted by the HVAC industry and homeowners as simple diagnostics that are often as accurate as commercially available devices. Our test results also show that current calibration procedures for flow hoods do not account for field application problems. As a result, organizations such as ASHRAE or ASTM need to develop a new standard for flow hood calibration, along with a new measurement standard to address field use of flow hoods.
Flegar-Meštrić, Zlata; Perkov, Sonja; Radeljak, Andrea
2016-03-26
Considering the fact that the results of laboratory tests provide useful information about the state of health of patients, determination of reference value is considered an intrinsic part in the development of laboratory medicine. There are still huge differences in the analytical methods used as well as in the associated reference intervals which could consequently significantly affect the proper assessment of patient health. In a constant effort to increase the quality of patients' care, there are numerous international initiatives for standardization and/or harmonization of laboratory diagnostics in order to achieve maximum comparability of laboratory test results and improve patient safety. Through the standardization and harmonization processes of analytical methods the ability to create unique reference intervals is achieved. Such reference intervals could be applied globally in all laboratories using methods traceable to the same reference measuring system and analysing the biological samples from the populations with similar socio-demographic and ethnic characteristics. In this review we outlined the results of the harmonization processes in Croatia in the field of population based reference intervals for clinically relevant blood and serum constituents which are in accordance with ongoing activity for worldwide standardization and harmonization based on traceability in laboratory medicine.
Future Direction of IMIA Standardization
Kimura, M.; Ogishima, S.; Shabo, A.; Kim, I. K.; Parisot, C.; de Faria Leao, B.
2014-01-01
Summary Objectives Standardization in the field of health informatics has increased in importance as a global alliance for establishing interoperability and compatibility internationally. Standardization is organized by standard development organizations (SDOs) such as ISO (International Organization for Standardization), CEN (European Committee for Standardization), IHE (Integrating the Healthcare Enterprise), and HL7 (Health Level 7). This paper reports the status of these SDOs' activities. Methods In this workshop, we reviewed the past activities and the current situation of standardization in health care informatics with standard development organizations such as ISO, CEN, IHE, and HL7. We then discussed the future direction of standardization in health informatics toward "future medicine" based on standardized technologies. Results We shared the status of each SDO through an exchange of opinions in the workshop. Some WHO members joined our discussion to support this constructive activity. Conclusion At this meeting, the workshop speakers were appointed as new members of the IMIA working group on Standards in Health Care Informatics (WG16). We concluded that we will collaborate on international standardization in health informatics toward "future medicine". PMID:25123729
Block Copolymers as Templates for Arrays of Carbon Nanotubes
NASA Technical Reports Server (NTRS)
Bronikowski, Michael; Hunt, Brian
2003-01-01
A method of manufacturing regular arrays of precisely sized, shaped, positioned, and oriented carbon nanotubes has been proposed. Arrays of carbon nanotubes could prove useful in such diverse applications as communications (especially for filtering of signals), biotechnology (for sequencing of DNA and separation of chemicals), and micro- and nanoelectronics (as field emitters and as signal transducers and processors). The method is expected to be suitable for implementation in standard semiconductor-device fabrication facilities.
Analgesic, antibacterial and central nervous system depressant activities of Albizia procera leaves
Khatoon, Mst. Mahfuza; Khatun, Mst. Hajera; Islam, Md. Ekramul; Parvin, Mst. Shahnaj
2014-01-01
Objective To ascertain the analgesic, antibacterial and central nervous system (CNS) depressant activities of the ethyl acetate, dichloromethane and carbon tetrachloride fractions of the methanol extract of Albizia procera (A. procera) leaves. Methods Leaf extracts of A. procera were tested for analgesic activity by the acetic acid-induced and formalin test methods in mice. The in vitro antibacterial activity was assessed by the agar well diffusion method. CNS depressant activity was evaluated by hole cross and open field tests. Results All the extracts at 200 mg/kg exhibited significant (P<0.01) analgesic activity in the acetic acid-induced and formalin test methods in mice. The analgesic activity of the ethyl acetate fraction was almost the same as that of the standard drug indomethacin in the acetic acid-induced method. The CNS depressant activity of the extracts at 500 mg/kg was comparable to the positive control diazepam as determined by the hole cross and open field test methods. The extracts exhibited moderate antimicrobial activity against all the tested microorganisms (Staphylococcus aureus, Bacillus cereus, Pseudomonas aeruginosa, Escherichia coli, Shigella sonnei, Shigella boydii) at a concentration of 0.8 mg/disc. The measured diameters of the zones of inhibition for the extracts were within the range of 7 to 12 mm, less than those of the standard kanamycin (16-24 mm). Conclusions It is concluded that all the extracts possess potential analgesic and CNS depressant activity. This study also showed that different fractions of the methanol extract could be potential sources of new antimicrobial agents. PMID:25182551
Large-scale inverse model analyses employing fast randomized data reduction
NASA Astrophysics Data System (ADS)
Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan
2017-08-01
When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10^7 or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
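A minimal sketch of the sketching idea behind RGA: a k x m random matrix compresses m observations into k sketched observations before the fit. This toy uses a dense Gaussian sketch and ordinary least squares; the actual RGA is implemented in Julia within MADS and embeds the sketch in the PCGA machinery.

```python
# Hedged sketch: randomized "sketching" compresses a tall observation
# system before inversion; the sketched solution stays close to the
# full least-squares solution while the system is k x n instead of m x n.
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 10_000, 50, 400           # observations, parameters, sketch size
A = rng.normal(size=(m, n))         # toy forward operator
x_true = rng.normal(size=n)
b = A @ x_true + 0.01 * rng.normal(size=m)

S = rng.normal(size=(k, m)) / np.sqrt(k)       # Gaussian sketching matrix
x_full, *_ = np.linalg.lstsq(A, b, rcond=None)
x_sk, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)

# relative difference between sketched and full solutions (small)
print(np.linalg.norm(x_sk - x_full) / np.linalg.norm(x_full))
```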
About non standard Lagrangians in cosmology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dimitrijevic, Dragoljub D.; Milosevic, Milan
A review of non-standard Lagrangians present in modern cosmological models is given. A well-known example of a non-standard Lagrangian is the Dirac-Born-Infeld (DBI) type Lagrangian for the tachyon field. Another type of non-standard Lagrangian under consideration contains a scalar field which describes the open p-adic string tachyon and is called the p-adic string theory Lagrangian. We investigate homogeneous cases of both DBI and p-adic fields and obtain Lagrangians of the standard type which have the same equations of motion as the aforementioned non-standard ones.
Passive field reflectance measurements
NASA Astrophysics Data System (ADS)
Weber, Christian; Schinca, Daniel C.; Tocho, Jorge O.; Videla, Fabian
2008-10-01
The results of reflectance measurements performed with a three-band passive radiometer with independent channels for solar irradiance reference are presented. Comparative operation between the traditional method, which uses downward-looking field and reference white panel measurements, and the new approach involving duplicated downward- and upward-looking spectral channels (each of the latter with its own diffuser) is analyzed. The results indicate that the latter method agrees very well with the standard method and is more suitable for passive sensors under rapidly changing atmospheric conditions (such as clouds, dust, mist, smog and other scatterers), since a more reliable synchronous recording of reference and incident light is achieved. Moreover, having separate channels for the reference and the signal allows a better balancing of gains in the amplifiers for each spectral channel. We show the results obtained in the determination of the normalized difference vegetation index (NDVI) for the 2004-2007 field experiments concerning weed detection in soybean stubble and fertilizer level assessment in wheat. The method may be used to refine sensor-based nitrogen fertilizer rate recommendations and to determine suitable zones for herbicide application.
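For reference, the NDVI reported in these experiments is the standard normalized band ratio of near-infrared and red reflectances (a textbook definition, not specific to this paper):

```latex
% Standard NDVI definition from the two reflectance bands
\mathrm{NDVI} = \frac{\rho_{\mathrm{NIR}} - \rho_{\mathrm{red}}}
                     {\rho_{\mathrm{NIR}} + \rho_{\mathrm{red}}}
```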
Simonella, Lucio E; Gaiero, Diego M; Palomeque, Miriam E
2014-10-01
Iron is an essential micronutrient for phytoplankton growth and is supplied to remote areas of the ocean mainly through atmospheric dust/ash. The amount of soluble Fe in dust/ash is a major source of uncertainty in modeling Fe dissolution and deposition to the surface ocean. Currently in the literature there exist almost as many different methods to estimate fractional solubility as researchers in the field, making it difficult to compare results between research groups. A further important constraint on evaluating Fe solubility in atmospheric dust is the limited mass of sample, usually only available in microgram to milligram amounts. A continuous flow (CF) method that can be run with a low mass of sediment (<10 mg) was tested against a standard method which requires about 1 g of sediment (BCR of the European Union). For validation of the CF experiment, we ran both methods using South American surface sediment and deposited volcanic ash. Both materials tested are easily eroded by wind and are representative of atmospheric dust/ash exported from this region. The uncertainty of the CF method was obtained from seven replicates of one surface sediment sample and shows very good reproducibility. The replication was conducted on different days over a span of two years, and the uncertainty ranged between 8 and 22% (the uncertainty for the standard method was 6-19%). Compared to other standardized methods, the CF method allows studies of the dissolution kinetics of metals and consumes less reagent and time (<3 h). The method validated here is suggested for use as a standardized method for Fe solubility studies on dust/ash.
Ashraf, Sania; Kao, Angie; Hugo, Cecilia; Christophel, Eva M; Fatunmbi, Bayo; Luchavez, Jennifer; Lilley, Ken; Bell, David
2012-10-24
Malaria diagnosis has received renewed interest in recent years, associated with the increasing accessibility of accurate diagnosis through the introduction of rapid diagnostic tests and new World Health Organization guidelines recommending parasite-based diagnosis prior to anti-malarial therapy. However, light microscopy, established over 100 years ago and frequently considered the reference standard for clinical diagnosis, has been neglected in control programmes and in the malaria literature, and evidence suggests field standards are commonly poor. Microscopy remains the most accessible method for parasite quantitation, for drug efficacy monitoring, and as a reference for assessing other diagnostic tools. This mismatch between quality and need highlights the importance of establishing reliable standards and procedures for assessing and assuring quality. This paper describes the development, function and impact of a multi-country microscopy external quality assurance network set up for this purpose in Asia. Surveys of key informants and past participants were used to gather feedback on the quality assurance programme. Competency scores from the 14 participating countries were compiled and analysed using paired-sample t-tests. In-depth interviews were conducted with key informants, including the programme facilitators and national-level microscopists. External assessments and limited retraining through a formalized programme based on a reference slide bank have demonstrated an increase in the competence of senior microscopists over a relatively short period of time, at a potentially sustainable cost. The network involved in the programme now exceeds 14 countries in the Asia-Pacific, and the methods are being extended to other regions. While the impact on national programmes varies, it has translated in some instances into a strengthening of national microscopy standards, and it offers a possibility both for supporting the revival of national microscopy programmes and for the development of globally recognized standards of competency needed for patient management and field research.
On a more rigorous gravity field processing for future LL-SST type gravity satellite missions
NASA Astrophysics Data System (ADS)
Daras, I.; Pail, R.; Murböck, M.
2013-12-01
In order to meet the growing demands of the user community concerning the accuracy of temporal gravity field models, future gravity missions of low-low satellite-to-satellite tracking (LL-SST) type are planned to carry more precise sensors than their predecessors. A breakthrough is planned with the improved LL-SST measurement link, where the traditional K-band microwave instrument of 1 μm accuracy will be complemented by an inter-satellite ranging instrument of several nm accuracy. This study focuses on investigations concerning the potential performance of the new sensors and their impact on gravity field solutions. The processing methods for gravity field recovery have to meet the new sensor standards and be able to take full advantage of the new accuracies that they provide. We use full-scale simulations in a realistic environment to investigate whether standard processing techniques suffice to fully exploit the new sensor standards. We achieve this by performing full numerical closed-loop simulations based on the Integral Equation approach. In our simulation scheme, we simulate dynamic orbits in a conventional tracking analysis to compute pseudo inter-satellite ranges or range-rates that serve as observables. Each part of the processing is validated separately, with special emphasis on numerical errors and their impact on gravity field solutions. We demonstrate that processing with standard precision may be a limiting factor in taking full advantage of the new generation of sensors that future satellite missions will carry. We have therefore created versions of our simulator with enhanced processing precision, with the primary aim of minimizing round-off system errors. Results using the enhanced precision show a large reduction of the system errors that were present in standard-precision processing even for the error-free scenario, and reveal the improvements the new sensors will bring to the gravity field solutions. As a next step, we analyze the contribution of individual error sources to the system's error budget. More specifically, we analyze sensor noise from the laser interferometer and the accelerometers, errors in the kinematic orbits and the background fields, as well as temporal and spatial aliasing errors. We take special care in assessing error sources with stochastic behavior, such as the laser interferometer and the accelerometers, and in their consistent stochastic modeling within the adjustment process.
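The round-off accumulation targeted by the enhanced-precision processing can be illustrated with compensated summation. This is a generic numerical-analysis sketch, not the authors' simulator, which may instead use extended-precision arithmetic.

```python
# Illustration of round-off accumulation in long sums, the kind of
# numerical system error that enhanced-precision processing addresses.
# Kahan's compensated summation is one standard remedy (an assumption
# here; the paper does not specify its precision-enhancement technique).
def kahan_sum(values):
    s, comp = 0.0, 0.0
    for v in values:
        y = v - comp          # apply the stored correction
        t = s + y
        comp = (t - s) - y    # low-order bits lost in the addition
        s = t
    return s

data = [0.1] * 10_000_000
print(f"naive:       {sum(data):.10f}")        # drifts away from 1,000,000
print(f"compensated: {kahan_sum(data):.10f}")  # correct to machine precision
```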
NASA Astrophysics Data System (ADS)
Jin, Huang; Ling, Lin; Jun, Guo; Jianguo, Li; Yongzhong, Wang
2017-11-01
Facing the increasingly severe air pollution situation, China is now actively promoting the evaluation of high-efficiency air pollution control equipment and research on the related national standards. This paper presents the significance and effect of formulating the technical requirements of high-efficiency precipitator equipment for assessment in national standards for the power industry, as well as the research approach and principles behind these standards. It introduces the qualitative and quantitative evaluation requirements for high-efficiency precipitators used in the power industry and the core technical content, such as testing, calculation, and evaluation methods. A series of national standards is being implemented in order to lead and promote the production and application of high-efficiency precipitator equipment in the field of air pollution prevention in the national power industry.
The International Standard for Anti-Brucella abortus Serum
Stableforth, A. W.
1954-01-01
In field trials on the eradication of brucellosis from dairy herds in Great Britain, which began in 1933, a serum standard of reference was used for the examination of agglutinating suspensions prepared in different laboratories. In 1937, the Office International des Epizooties (OIE) adopted this standard and made recommendations for its use internationally. These recommendations were revised by OIE in 1948, by the Third Inter-American Congress on Brucellosis and by the Joint FAO/WHO Expert Panel on Brucellosis in 1950, and again by the latter body in 1952. A new batch equivalent in potency to the original standard was established by the WHO Expert Committee on Biological Standardization in 1952 as the International Standard for Anti-Brucella abortus Serum. The International Standard, or a national standard of equivalent potency, ensures comparability of the titres obtained in different countries by different methods, and the results of such comparisons can be expressed in a simple manner by describing the titres in terms of International Units of Brucella antibody. PMID:13199656
Liu, Haiyi; Sun, Jianfei; Wang, Haoyao; Wang, Peng; Song, Lina; Li, Yang; Chen, Bo; Zhang, Yu; Gu, Ning
2015-06-08
A kinetics-based method is proposed to quantitatively characterize the collective magnetization of colloidal magnetic nanoparticles. The method is based on the relationship between the magnetic force on a colloidal droplet and the movement of the droplet under a gradient magnetic field. Through computational analysis of the kinetic parameters, such as displacement, velocity, and acceleration, the magnetization of colloidal magnetic nanoparticles can be calculated. In our experiments, the values measured using our method exhibited a better linear correlation with magnetothermal heating than those obtained using a vibrating sample magnetometer and a magnetic balance. This finding indicates that the method may be more suitable than the commonly used methods for evaluating the collective magnetism of colloidal magnetic nanoparticles under low magnetic fields. Accurate evaluation of the magnetic properties of colloidal nanoparticles is of great importance for the standardization of magnetic nanomaterials and for their practical application in biomedicine.
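A minimal sketch of the kinetics-based estimate under stated assumptions: droplet displacement is differentiated to velocity and acceleration, a Stokes-drag force balance yields the magnetic force, and dividing by droplet volume and field gradient gives the magnetization. The drag model, all numbers, and the force law F = M V ∇B are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch: infer the magnetic force on a colloidal droplet from
# its tracked motion, then convert force to magnetization.
import numpy as np

# droplet trajectory sampled from video tracking (illustrative data)
t = np.linspace(0.0, 2.0, 201)             # time [s]
z = 1e-3 * t**2                            # displacement [m]

v = np.gradient(z, t)                      # velocity
a = np.gradient(v, t)                      # acceleration

rho = 1.1e3                                # droplet density [kg/m^3]
R = 0.5e-3                                 # droplet radius [m]
V = 4 / 3 * np.pi * R**3                   # droplet volume [m^3]
eta = 1.0e-3                               # medium viscosity [Pa s]
m = rho * V

# Newton: m a = F_mag - 6*pi*eta*R*v  (Stokes drag, buoyancy neglected)
F_mag = m * a + 6 * np.pi * eta * R * v

grad_B = 10.0                              # field gradient [T/m], assumed
M = F_mag / (V * grad_B)                   # magnetization [A/m]
print(f"M ~ {M[-1]:.1f} A/m")
```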
Early detection of glaucoma by means of a novel 3D computer‐automated visual field test
Nazemi, Paul P; Fink, Wolfgang; Sadun, Alfredo A; Francis, Brian; Minckler, Donald
2007-01-01
Purpose A recently devised 3D computer‐automated threshold Amsler grid test was used to identify early and distinctive defects in people with suspected glaucoma. Further, the location, shape and depth of these field defects were characterised. Finally, the visual fields were compared with those obtained by standard automated perimetry. Patients and methods Glaucoma suspects were defined as those having elevated intraocular pressure (>21 mm Hg) or cup‐to‐disc ratio of >0.5. 33 patients and 66 eyes with risk factors for glaucoma were examined. 15 patients and 23 eyes with no risk factors were tested as controls. The recently developed 3D computer‐automated threshold Amsler grid test was used. The test exhibits a grid on a computer screen at a preselected greyscale and angular resolution, and allows patients to trace those areas on the grid that are missing in their visual field using a touch screen. The 5‐minute test required that the patients repeatedly outline scotomas on a touch screen with varied displays of contrast while maintaining their gaze on a central fixation marker. A 3D depiction of the visual field defects was then obtained that was further characterised by the location, shape and depth of the scotomas. The exam was repeated three times per eye. The results were compared to Humphrey visual field tests (ie, achromatic standard or SITA standard 30‐2 or 24‐2). Results In this pilot study 79% of the eyes tested in the glaucoma‐suspect group repeatedly demonstrated visual field loss with the 3D perimetry. The 3D depictions of visual field loss associated with these risk factors were all characteristic of or compatible with glaucoma. 71% of the eyes demonstrated arcuate defects or a nasal step. Constricted visual fields were shown in 29% of the eyes. No visual field changes were detected in the control group. Conclusions The 3D computer‐automated threshold Amsler grid test may demonstrate visual field abnormalities characteristic of glaucoma in glaucoma suspects with normal achromatic Humphrey visual field testing. This test may be used as a screening tool for the early detection of glaucoma. PMID:17504855
2014-01-01
Background The main challenge in the context of health care reforms and priority setting is the establishment and/or maintenance of fairness and standard of care. For the political process and interdisciplinary discussion, the subjective perception of the health care system might even be as important as potential objective criteria. Of special interest are the perceptions of academic disciplines, whose representatives act as decision makers in the health care sector. The aim of this study is to explore and compare the subjective perception of fairness and standard of care in the German health care system among students of medicine, law, economics, philosophy, and religion. Methods Between October 2011 and January 2012, we asked freshmen and advanced students of the fields mentioned above to participate in a paper and pencil survey. Prior to this, we formulated hypotheses. The data were analysed by micro econometric regression techniques. Results Data from 1,088 students were included in the study. Medical students, freshmen, and advanced students perceive the standard of care significantly as being better than non-medical students. Differences in the perception of fairness are not significant between the freshmen of the academic disciplines; however, they increase with the number of study terms. Besides the field of study, further variables such as gender and health status have a significant impact on perceptions. Conclusions Our results show that there are differences in the perception of fairness and standard of care between academic disciplines, which might influence the interdisciplinary discussion on health care reforms and priority setting. PMID:24725356
Bashir, Adil; Gropler, Robert; Ackerman, Joseph
2015-01-01
Purpose Absolute concentrations of high-energy phosphorus (31P) metabolites in the liver provide deeper insight into the physiologic status of liver disease than resonance integral ratios. A simple method for measuring absolute concentrations of 31P metabolites in human liver is described. The approach uses a surface-spoiling inhomogeneous magnetic field gradient to select signal from liver tissue. The technique avoids issues caused by respiratory motion, chemical shift dispersion associated with linear magnetic field gradients, and increased tissue heat deposition due to radiofrequency absorption, especially at high field strength. Methods A method to localize signal from the liver was demonstrated using superficial and highly non-uniform magnetic field gradients, which eliminate signal(s) from surface tissue(s) located between the liver and the RF coil. A double-standard method was implemented to determine absolute 31P metabolite concentrations in vivo. Eight healthy individuals were examined in a 3 T MR scanner. Results Concentrations of metabolites measured in the eight healthy individuals are: γ-adenosine triphosphate (ATP) = 2.44 ± 0.21 (mean ± sd) mmol/l of wet tissue volume, α-ATP = 3.2 ± 0.63 mmol/l, β-ATP = 2.98 ± 0.45 mmol/l, inorganic phosphate (Pi) = 1.87 ± 0.25 mmol/l, phosphodiesters (PDE) = 10.62 ± 2.20 mmol/l and phosphomonoesters (PME) = 2.12 ± 0.51 mmol/l. All are in good agreement with literature values. Conclusions The technique offers a robust and fast means to localize signal from liver tissue, allows absolute metabolite concentration determination, and avoids problems associated with constant field gradient (linear field variation) localization methods. PMID:26633549
Finding the Hook: Computer Science Education in Elementary Contexts
ERIC Educational Resources Information Center
Ozturk, Zehra; Dooley, Caitlin McMunn; Welch, Meghan
2018-01-01
The purpose of this study was to investigate how elementary teachers with little knowledge of computer science (CS) and project-based learning (PBL) experienced integrating CS through PBL as a part of a standards-based elementary curriculum in Grades 3-5. The researchers used qualitative constant comparison methods on field notes and reflections…
CTEPP STANDARD OPERATING PROCEDURE FOR HANDLING MISSING SAMPLES AND DATA (SOP-2.24)
This SOP describes the method for handling missing samples or data. Missing samples or data will be identified as soon as possible during field sampling. It provides guidance to collect the missing sample or data and document the reason for the missing sample or data.
USDA-ARS?s Scientific Manuscript database
Research is needed over a wide geographic range of soil and weather scenarios to evaluate methods and tools for corn N fertilizer applications. The objectives of this research were to conduct standardized corn N rate response field studies to evaluate the performance of multiple public-domain N deci...
USDA-ARS?s Scientific Manuscript database
A scalable and modular LED illumination dome for microscopic scientific photography is described and illustrated, and methods for constructing such a dome are detailed. Dome illumination for insect specimens has become standard practice across the field of insect systematics, but many dome designs ...
A Standardized Mean Difference Effect Size for Multiple Baseline Designs across Individuals
ERIC Educational Resources Information Center
Hedges, Larry V.; Pustejovsky, James E.; Shadish, William R.
2013-01-01
Single-case designs are a class of research methods for evaluating treatment effects by measuring outcomes repeatedly over time while systematically introducing different conditions (e.g., treatment and control) to the same individual. The designs are used across fields such as behavior analysis, clinical psychology, special education, and…
Fast large scale structure perturbation theory using one-dimensional fast Fourier transforms
Schmittfull, Marcel; Vlah, Zvonimir; McDonald, Patrick
2016-05-01
The usual fluid equations describing the large-scale evolution of mass density in the universe can be written as local in the density, velocity divergence, and velocity potential fields. As a result, the perturbative expansion in small density fluctuations, usually written in terms of convolutions in Fourier space, can be written as a series of products of these fields evaluated at the same location in configuration space. Based on this, we establish a new method to numerically evaluate the 1-loop power spectrum (i.e., Fourier transform of the 2-point correlation function) with one-dimensional fast Fourier transforms. This is exact and a few orders of magnitude faster than previously used numerical approaches. Numerical results of the new method are in excellent agreement with the standard quadrature integration method. This fast model evaluation can in principle be extended to higher loop order where existing codes become painfully slow. Our approach follows by writing higher order corrections to the 2-point correlation function as, e.g., the correlation between two second-order fields or the correlation between a linear and a third-order field. These are then decomposed into products of correlations of linear fields and derivatives of linear fields. In conclusion, the method can also be viewed as evaluating three-dimensional Fourier space convolutions using products in configuration space, which may also be useful in other contexts where similar integrals appear.
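The central identity, that a convolution of spectra in Fourier space equals (up to a factor) a pointwise product in configuration space followed by one FFT, can be checked numerically. This toy uses numpy's DFT conventions and is not the authors' 1-loop code.

```python
# Numerical check of the identity behind the method: a circular
# convolution in Fourier space equals n times the FFT of the
# configuration-space product (numpy DFT conventions).
import numpy as np

rng = np.random.default_rng(1)
n = 64
f, g = rng.normal(size=n), rng.normal(size=n)
ft, gt = np.fft.fft(f), np.fft.fft(g)

# direct Fourier-space convolution: sum_q ft[q] * gt[(k-q) mod n]
conv = np.array([np.sum(ft * gt[(k - np.arange(n)) % n])
                 for k in range(n)])

# same result from a single product in configuration space
via_product = n * np.fft.fft(f * g)

print(np.allclose(conv, via_product))  # True
```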
Electric-magnetic dualities in non-abelian and non-commutative gauge theories
NASA Astrophysics Data System (ADS)
Ho, Jun-Kai; Ma, Chen-Te
2016-08-01
Electric-magnetic dualities are equivalence between strong and weak coupling constants. A standard example is the exchange of electric and magnetic fields in an abelian gauge theory. We show three methods to perform electric-magnetic dualities in the case of the non-commutative U (1) gauge theory. The first method is to use covariant field strengths to be the electric and magnetic fields. We find an invariant form of an equation of motion after performing the electric-magnetic duality. The second method is to use the Seiberg-Witten map to rewrite the non-commutative U (1) gauge theory in terms of abelian field strength. The third method is to use the large Neveu Schwarz-Neveu Schwarz (NS-NS) background limit (non-commutativity parameter only has one degree of freedom) to consider the non-commutative U (1) gauge theory or D3-brane. In this limit, we introduce or dualize a new one-form gauge potential to get a D3-brane in a large Ramond-Ramond (R-R) background via field redefinition. We also use perturbation to study the equivalence between two D3-brane theories. Comparison of these methods in the non-commutative U (1) gauge theory gives different physical implications. The comparison reflects the differences between the non-abelian and non-commutative gauge theories in the electric-magnetic dualities. For a complete study, we also extend our studies to the simplest abelian and non-abelian p-form gauge theories, and a non-commutative theory with the non-abelian structure.
NASA Astrophysics Data System (ADS)
Gladkov, Svyatoslav; Kochmann, Julian; Reese, Stefanie; Hütter, Markus; Svendsen, Bob
2016-04-01
The purpose of the current work is the comparison of thermodynamic model formulations for chemically and structurally inhomogeneous solids at finite deformation based on "standard" non-equilibrium thermodynamics [SNET: e. g. S. de Groot and P. Mazur, Non-equilibrium Thermodynamics, North Holland, 1962] and the general equation for non-equilibrium reversible-irreversible coupling (GENERIC) [H. C. Öttinger, Beyond Equilibrium Thermodynamics, Wiley Interscience, 2005]. In the process, non-isothermal generalizations of standard isothermal conservative [e. g. J. W. Cahn and J. E. Hilliard, Free energy of a non-uniform system. I. Interfacial energy. J. Chem. Phys. 28 (1958), 258-267] and non-conservative [e. g. S. M. Allen and J. W. Cahn, A macroscopic theory for antiphase boundary motion and its application to antiphase domain coarsening. Acta Metall. 27 (1979), 1085-1095; A. G. Khachaturyan, Theory of Structural Transformations in Solids, Wiley, New York, 1983] diffuse interface or "phase-field" models [e. g. P. C. Hohenberg and B. I. Halperin, Theory of dynamic critical phenomena, Rev. Modern Phys. 49 (1977), 435-479; N. Provatas and K. Elder, Phase Field Methods in Material Science and Engineering, Wiley-VCH, 2010.] for solids are obtained. The current treatment is consistent with, and includes, previous works [e. g. O. Penrose and P. C. Fife, Thermodynamically consistent models of phase-field type for the kinetics of phase transitions, Phys. D 43 (1990), 44-62; O. Penrose and P. C. Fife, On the relation between the standard phase-field model and a "thermodynamically consistent" phase-field model. Phys. D 69 (1993), 107-113] on non-isothermal systems as a special case. In the context of no-flux boundary conditions, the SNET- and GENERIC-based approaches are shown to be completely consistent with each other and result in equivalent temperature evolution relations.
Lee, Won Hee; Lisanby, Sarah H.; Laine, Andrew F.; Peterchev, Angel V.
2017-01-01
Background This study examines the strength and spatial distribution of the electric field induced in the brain by electroconvulsive therapy (ECT) and magnetic seizure therapy (MST). Methods The electric field induced by standard (bilateral, right unilateral, and bifrontal) and experimental (focal electrically administered seizure therapy and frontomedial) ECT electrode configurations as well as a circular MST coil configuration was simulated in an anatomically realistic finite element model of the human head. Maps of the electric field strength relative to an estimated neural activation threshold were used to evaluate the stimulation strength and focality in specific brain regions of interest for these ECT and MST paradigms and various stimulus current amplitudes. Results The standard ECT configurations and current amplitude of 800–900 mA produced the strongest overall stimulation with median of 1.8–2.9 times neural activation threshold and more than 94% of the brain volume stimulated at suprathreshold level. All standard ECT electrode placements exposed the hippocampi to suprathreshold electric field, although there were differences across modalities with bilateral and right unilateral producing respectively the strongest and weakest hippocampal stimulation. MST stimulation is up to 9 times weaker compared to conventional ECT, resulting in direct activation of only 21% of the brain. Reducing the stimulus current amplitude can make ECT as focal as MST. Conclusions The relative differences in electric field strength may be a contributing factor for the cognitive sparing observed with right unilateral compared to bilateral ECT, and MST compared to right unilateral ECT. These simulations could help understand the mechanisms of seizure therapies and develop interventions with superior risk/benefit ratio. PMID:27318858
Lattice field theory applications in high energy physics
NASA Astrophysics Data System (ADS)
Gottlieb, Steven
2016-10-01
Lattice gauge theory was formulated by Kenneth Wilson in 1974. In the ensuing decades, improvements in actions, algorithms, and computers have enabled tremendous progress in QCD, to the point where lattice calculations can yield sub-percent level precision for some quantities. Beyond QCD, lattice methods are being used to explore possible beyond the standard model (BSM) theories of dynamical symmetry breaking and supersymmetry. We survey progress in extracting information about the parameters of the standard model by confronting lattice calculations with experimental results and searching for evidence of BSM effects.
Heinrich, Andreas; Teichgräber, Ulf K; Güttler, Felix V
2015-12-01
The standard ASTM F2119 describes a test method for measuring the size of a susceptibility artifact, using a passive implant as an example. A pixel in an image is considered to be part of an image artifact if its intensity is changed by at least 30% in the presence of a test object, compared to a reference image in which the test object is absent (reference value). The aim of this paper is to simplify and accelerate the test method using a histogram-based reference value. Four test objects were scanned parallel and perpendicular to the main magnetic field, and the largest susceptibility artifacts were measured using two methods of reference value determination (reference image-based and histogram-based reference value). The results of the two methods were compared using the Mann-Whitney U-test. The difference between the two reference values was 42.35 ± 23.66. The difference in artifact size was 0.64 ± 0.69 mm. The artifact sizes of the two methods did not show significant differences; the p-value of the Mann-Whitney U-test was between 0.521 and 0.710. A standard-conform method for a rapid, objective, and reproducible evaluation of susceptibility artifacts could thus be implemented. The result of the histogram-based method does not differ significantly from that of the ASTM-conform method.
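A minimal sketch of the 30% artifact criterion with a histogram-based scalar reference; the image data and threshold usage details are illustrative, as the abstract does not specify the pipeline.

```python
# Hedged sketch: ASTM F2119-style artifact mask. A pixel is flagged if
# its intensity changes by at least 30% relative to the reference, which
# may be a full reference image (ASTM-conform method) or a single scalar
# derived from the image histogram (the paper's simplified method).
import numpy as np

def artifact_mask(img, reference):
    ref = np.asarray(reference, dtype=float)   # image or scalar
    return np.abs(img - ref) >= 0.3 * ref

rng = np.random.default_rng(0)
img = rng.normal(1000.0, 20.0, size=(256, 256))
img[100:120, 100:140] = 200.0                  # simulated signal void

mask = artifact_mask(img, reference=1000.0)    # scalar from the histogram
print(mask.sum(), "artifact pixels")           # the 20 x 40 void region
```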
Du, Baoqiang; Dong, Shaofeng; Wang, Yanfeng; Guo, Shuting; Cao, Lingzhi; Zhou, Wei; Zuo, Yandi; Liu, Dan
2013-11-01
A wide-range, high-resolution frequency measurement method based on the quantized phase step law is presented in this paper. Utilizing the variation law of the phase differences, direct different-frequency phase processing, and the phase group synchronization phenomenon, and combining an A/D converter with the adaptive phase-shifting principle, a counter gate is established at the phase coincidences occurring at one-group intervals, which eliminates the ±1 count error of the traditional frequency measurement method. More importantly, direct phase comparison, measurement, and control between arbitrary periodic signals are realized without frequency normalization. Experimental results show that sub-picosecond resolution can easily be obtained in frequency measurement, frequency standard comparison, and phase-locked control based on the phase quantization processing technique. The method may be widely used in navigation and positioning, space techniques, communication, radar, astronomy, atomic frequency standards, and other high-tech fields.
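The ±1 count error eliminated by the phase-coincidence gate can be seen in a plain gated counter, whose resolution is limited to one cycle per gate time. A toy illustration with assumed numbers:

```python
# Hedged illustration of the +/-1 cycle ambiguity in conventional gated
# counting, the error that phase-coincidence gating removes.
import math

f_signal = 10_000_123.4      # Hz, signal to be measured (assumed)
gate = 0.1                   # s, counter gate time (assumed)

# a conventional counter reports an integer number of cycles per gate,
# so the estimate is quantized to 1/gate = 10 Hz steps
counted = math.floor(f_signal * gate)          # +/-1 count uncertainty
estimate = counted / gate
print(f"estimate = {estimate:.1f} Hz, error = {estimate - f_signal:+.1f} Hz")
```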
NASA Astrophysics Data System (ADS)
Khoudeir, A.; Montemayor, R.; Urrutia, Luis F.
2008-09-01
Using the parent Lagrangian method together with a dimensional reduction from $D$ to $(D-1)$ dimensions, we construct dual theories for massive spin two fields in arbitrary dimensions in terms of a mixed symmetry tensor $T_{A[A_1 A_2 \ldots A_{D-2}]}$. Our starting point is the well-studied massless parent action in dimension $D$. The resulting massive Stueckelberg-like parent actions in $(D-1)$ dimensions inherit all the gauge symmetries of the original massless action and can be gauge fixed in two alternative ways, yielding the possibility of having a parent action with either a symmetric or a nonsymmetric Fierz-Pauli field $e_{AB}$. Even though the dual sector in terms of the standard spin two field includes only the symmetric part $e_{\{AB\}}$ in both cases, these two possibilities yield different results in terms of the alternative dual field $T_{A[A_1 A_2 \ldots A_{D-2}]}$. In particular, the nonsymmetric case reproduces the Freund-Curtright action as the dual to the massive spin two field action in four dimensions.
Apparatus for improving performance of electrical insulating structures
Wilson, Michael J.; Goerz, David A.
2004-08-31
The invention removes the electric field from the internal volume of high-voltage structures, e.g., bushings, connectors, capacitors, and cables. The electric field is removed from inherently weak regions of the interconnect, such as between the center conductor and the solid dielectric, and placed in the primary insulation. This is accomplished by providing a conductive surface on the inside surface of the principal solid dielectric insulator surrounding the center conductor and connecting the center conductor to this conductive surface. Moving the electric field from the weaker dielectric region to a stronger area improves reliability, increases component life and operating levels, reduces noise and losses, and allows for a smaller, more compact design. This electric field control approach is currently possible on many existing products at a modest cost. Several techniques are available to provide the level of electric field control needed; choosing the optimum technique depends on material, size, and surface accessibility. The simplest deposition method uses a standard electroless plating technique, but other metalization techniques include vapor and energetic deposition, plasma spraying, conductive painting, and other controlled coating methods.
Measurement of eddy-current distribution in the vacuum vessel of the Sino-UNIted Spherical Tokamak.
Li, G; Tan, Y; Liu, Y Q
2015-08-01
Eddy currents have an important effect on tokamak plasma equilibrium and on the control of magnetohydrodynamic activity. The vacuum vessel of the Sino-UNIted Spherical Tokamak is separated into two hemispherical sections by a toroidal insulating barrier. Consequently, the characteristics of the eddy currents are more complex than those found in a standard tokamak, and it is necessary to measure and analyze the eddy-current distribution. In this study, we propose an experimental method for measuring the eddy-current distribution in a vacuum vessel. By placing a flexible printed circuit board with magnetic probes onto the external surface of the vacuum vessel to measure the magnetic field parallel to the surface and then subtracting the magnetic field generated by the vertical-field coils, the magnetic field due to the eddy current can be obtained, and its distribution can be determined. We successfully applied this method to the Sino-UNIted Spherical Tokamak and obtained the eddy-current distribution despite the presence of the magnetic field generated by the external coils.
Inversion of potential field data using the finite element method on parallel computers
NASA Astrophysics Data System (ADS)
Gross, L.; Altinay, C.; Shaw, S.
2015-11-01
In this paper we present a formulation of the joint inversion of potential field anomaly data as an optimization problem with partial differential equation (PDE) constraints. The problem is solved using the iterative Broyden-Fletcher-Goldfarb-Shanno (BFGS) method with the Hessian operator of the regularization and cross-gradient component of the cost function as preconditioner. We show that each iterative step requires the solution of several PDEs, namely for the potential fields, for the adjoint defects, and for the application of the preconditioner. As an extension of the traditional discrete formulation, the BFGS method is applied to continuous descriptions of the unknown physical properties in combination with an appropriate integral form of the dot product. The PDEs can easily be solved using standard conforming finite element methods (FEMs) with potentially different resolutions. For two examples we demonstrate that the number of PDE solutions required to reach a given tolerance in the BFGS iteration is controlled by the weighting of the regularization and cross-gradient terms but is independent of the resolution of the PDE discretization, and that, as a consequence, the method is weakly scalable with the number of cells on parallel computers. We also show a comparison with the UBC-GIF GRAV3D code.
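The abstract above describes a PDE-constrained inversion solved with BFGS; the authors' FEM and PDE machinery cannot be reproduced in a few lines, but a minimal discrete analogue conveys the structure of the cost, gradient, and quasi-Newton loop. Everything here (the linear operator G, the noise level, the weight alpha) is illustrative, not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Toy analogue of a regularized potential-field inversion: recover a model m
# from linear data d = G m + noise by minimizing a Tikhonov-regularized misfit.
rng = np.random.default_rng(0)
G = rng.standard_normal((50, 200))          # stand-in forward operator
m_true = np.zeros(200)
m_true[80:120] = 1.0                        # block anomaly
d = G @ m_true + 0.01 * rng.standard_normal(50)
alpha = 1e-2                                # regularization weight

def cost(m):
    r = G @ m - d
    return 0.5 * (r @ r) + 0.5 * alpha * (m @ m)

def grad(m):
    return G.T @ (G @ m - d) + alpha * m

res = minimize(cost, np.zeros(200), jac=grad, method="L-BFGS-B")
print(res.nit, "iterations; final cost", cost(res.x))
```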
Field operating experience in locating and re-recovering landslide-damaged oil wells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, R.J.
1974-01-01
Landslides have damaged 65 oil wells on Getty Oil Co.'s leases in the Ventura Avenue field. During a landslide, some wells may remain connected to the surface while other wells may be buried. Well damage ranges from slight bending to complete severing of all casing strings, and the depth of damage varies from 15 to 120 ft. Two major problems have been encountered when repair work is planned for a landslide-damaged well. The first problem is to locate the undamaged well casing below the landslide. The second problem is to recover the well and replace the damaged casing. Some methods used by Getty Oil Co. to locate the undamaged portion of a well are conventional surveys, dip-needle surveys, magnetometer surveys, kink-meter surveys, and test holes. Three methods have been used to recover landslide-damaged wells in the Ventura Avenue field. The simplest method is an open excavation made with standard earthmoving equipment. This method is limited to shallow depths and locations where the landslide would not be reactivated. To reach greater depths, special methods such as hand-dug or machine-dug shafts must be used. All three methods have been used successfully by Getty Oil Co.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Konstantinidis, Anastasios C.; Olivo, Alessandro; Speller, Robert D.
2011-12-15
Purpose: The x-ray performance evaluation of digital x-ray detectors is based on the calculation of the modulation transfer function (MTF), the noise power spectrum (NPS), and the resultant detective quantum efficiency (DQE). The flat images used for the extraction of the NPS should not contain any fixed pattern noise (FPN) to avoid contamination from nonstochastic processes. The "gold standard" method used for the reduction of the FPN (i.e., the different gain between pixels) in linear x-ray detectors is based on normalization with an average reference flat-field. However, the noise in the corrected image depends on the number of flat frames used for the average flat image. The aim of this study is to modify the standard gain correction algorithm to make it independent of the number of reference flat frames used. Methods: Many publications suggest the use of 10-16 reference flat frames, while other studies use higher numbers (e.g., 48 frames) to reduce the propagated noise from the average flat image. This study quantifies experimentally the effect of the number of reference flat frames on the NPS and DQE values and appropriately modifies the gain correction algorithm to compensate for this effect. Results: It is shown that using the suggested gain correction algorithm a minimum number of reference flat frames (i.e., down to one frame) can be used to eliminate the FPN from the raw flat image. This saves computer memory and time during the x-ray performance evaluation. Conclusions: The authors show that the method presented in the study (a) leads to the maximum DQE value that one would obtain using the conventional method with a very large number of frames and (b) has been compared to an independent gain correction method based on the subtraction of flat-field images, leading to identical DQE values. They believe this provides robust validation of the proposed method.
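For readers unfamiliar with the baseline being modified, a minimal sketch of the standard flat-field gain correction follows; the study's actual modification, which removes the dependence on the number of reference frames, is given only in the paper. Function and variable names are illustrative.

```python
import numpy as np

def gain_correct(raw, flats):
    """Standard flat-field gain correction: divide a raw image by the
    per-pixel gain map estimated from N reference flat frames."""
    flat_avg = np.mean(flats, axis=0)       # average reference flat
    gain_map = flat_avg / flat_avg.mean()   # per-pixel relative gain
    # Note: noise in flat_avg scales as 1/sqrt(N) and propagates into the
    # corrected image; this is the dependence the modified algorithm removes.
    return raw / gain_map
```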
Consistent use of the standard model effective potential.
Andreassen, Anders; Frost, William; Schwartz, Matthew D
2014-12-12
The stability of the standard model is determined by the true minimum of the effective Higgs potential. We show that the potential at its minimum, when computed by the traditional method, is strongly dependent on the gauge parameter. It moreover depends on the scale at which the potential is calculated. We provide a consistent method for determining absolute stability independent of both gauge and calculation scale, order by order in perturbation theory. This leads to revised stability bounds m_h^pole > (129.4 ± 2.3) GeV and m_t^pole < (171.2 ± 0.3) GeV. We also show how to evaluate the effect of new physics on the stability bound without resorting to unphysical field values.
Best practice in forensic entomology--standards and guidelines.
Amendt, Jens; Campobasso, Carlo P; Gaudry, Emmanuel; Reiter, Christian; LeBlanc, Hélène N; Hall, Martin J R
2007-03-01
Forensic entomology, the use of insects and other arthropods in forensic investigations, is becoming increasingly important. To ensure its optimal use by a diverse group of professionals including pathologists, entomologists and police officers, a common frame of guidelines and standards is essential. Therefore, the European Association for Forensic Entomology has developed a protocol document for best practice in forensic entomology, which includes an overview of equipment used for collection of entomological evidence and a detailed description of the methods applied. Together with definitions of key terms and a short introduction to the most important methods for the estimation of the minimum postmortem interval, the present paper aims to encourage a high level of competency in the field of forensic entomology.
Steinbaum, Lauren; Kwong, Laura H; Ercumen, Ayse; Negash, Makeda S; Lovely, Amira J; Njenga, Sammy M; Boehm, Alexandria B; Pickering, Amy J; Nelson, Kara L
2017-04-01
Globally, about 1.5 billion people are infected with at least one species of soil-transmitted helminth (STH). Soil is a critical environmental reservoir of STH, yet there is no standard method for detecting STH eggs in soil. We developed a field method for enumerating STH eggs in soil and tested the method in Bangladesh and Kenya. The US Environmental Protection Agency (EPA) method for enumerating Ascaris eggs in biosolids was modified through a series of recovery efficiency experiments; we seeded soil samples with a known number of Ascaris suum eggs and assessed the effect of protocol modifications on egg recovery. We found the use of 1% 7X as a surfactant compared to 0.1% Tween 80 significantly improved recovery efficiency (two-sided t-test, t = 5.03, p = 0.007) while other protocol modifications-including different agitation and flotation methods-did not have a significant impact. Soil texture affected the egg recovery efficiency; sandy samples resulted in higher recovery compared to loamy samples processed using the same method (two-sided t-test, t = 2.56, p = 0.083). We documented a recovery efficiency of 73% for the final improved method using loamy soil in the lab. To field test the improved method, we processed soil samples from 100 households in Bangladesh and 100 households in Kenya from June to November 2015. The prevalence of any STH (Ascaris, Trichuris or hookworm) egg in soil was 78% in Bangladesh and 37% in Kenya. The median concentration of STH eggs in soil in positive samples was 0.59 eggs/g dry soil in Bangladesh and 0.15 eggs/g dry soil in Kenya. The prevalence of STH eggs in soil was significantly higher in Bangladesh than Kenya (chi-square, χ2 = 34.39, p < 0.001) as was the concentration (Mann-Whitney, z = 7.10, p < 0.001). This new method allows for detecting STH eggs in soil in low-resource settings and could be used for standardizing soil STH detection globally.
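The concentration and recovery figures quoted above follow from simple normalizations; a short sketch shows the arithmetic. The helper names are hypothetical, with the 73% recovery efficiency taken from the abstract.

```python
def recovery_efficiency(eggs_recovered, eggs_seeded):
    # fraction of seeded Ascaris suum eggs recovered in a spiking experiment
    return eggs_recovered / eggs_seeded

def eggs_per_gram_dry(egg_count, wet_mass_g, dry_fraction, efficiency=0.73):
    # normalize a raw microscopy count to eggs per gram of dry soil,
    # optionally correcting for the method's measured recovery efficiency
    return egg_count / (wet_mass_g * dry_fraction) / efficiency

# e.g., 5 eggs counted in 10 g of wet soil that is 85% solids by mass:
print(round(eggs_per_gram_dry(5, 10.0, 0.85), 2), "eggs/g dry soil")
```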
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borland, M.; Lindberg, R.
2017-06-01
The proposed upgrade of the Advanced Photon Source (APS) to a multibend-achromat lattice requires shorter and much stronger quadrupole magnets than are present in the existing ring. This results in longitudinal gradient profiles that differ significantly from a hard-edge model. Additionally, the lattice assumes the use of five-segment longitudinal gradient dipoles. Under these circumstances, the effects of fringe fields and detailed field distributions are of interest. We evaluated the effect of soft-edge fringe fields on the linear optics and chromaticity, finding that compensation for these effects is readily accomplished. In addition, we evaluated the reliability of standard methods of simulating hard-edge nonlinear fringe effects in quadrupoles.
Masonry building envelope analysis
NASA Astrophysics Data System (ADS)
McMullan, Phillip C.
1993-04-01
Over the past five years, infrared thermography has proven an effective tool to assist in required inspections on new masonry construction. However, with more thermographers providing this inspection service, establishing a standard for conducting these inspections is imperative. To standardize these inspections, it is important to understand the nature of the inspection as well as the context in which the inspection is typically conducted. The inspection focuses on evaluating masonry construction for compliance with the design specifications with regard to structural components and thermal performance of the building envelope. The thermal performance of the building includes both the thermal resistance of the material and its infiltration/exfiltration characteristics. Given that the inspections occur in the 'field' rather than the controlled environment of a laboratory, there are numerous variables to be considered when undertaking this type of inspection. Both weather and site conditions at the time of the inspection can vary greatly. In this paper we look at the variables encountered during recent inspections and present the standard employed in collecting this field data. This method is being incorporated into a new standard to be included in the revised version of 'Guidelines for Specifying and Performing Infrared Inspections' developed by the Infraspection Institute.
Heal, Katherine R; Carlson, Laura Truxal; Devol, Allan H; Armbrust, E Virginia; Moffett, James W; Stahl, David A; Ingalls, Anitra E
2014-11-30
Vitamin B(12) is an essential nutrient for more than half of surveyed marine algae species, but methods for directly measuring this important cofactor in seawater are limited. Current mass spectrometry methods do not quantify all forms of B(12), potentially missing a significant portion of the B(12) pool. We present a method to measure vitamins B(1), B(2), B(6), B(7) and four forms of B(12) dissolved in seawater. The method entails solid-phase extraction, separation by ultra-performance liquid chromatography, and detection by triple-quadrupole tandem mass spectrometry using stable-isotope-labeled internal standards. We demonstrated the use of this method in the environment by analyzing B(12) concentrations at different depths in the Hood Canal, part of the Puget Sound estuarine system in Washington State. Recovery of vitamin B(12) forms during the preconcentration steps was >71% and the limits of detection were <0.275 pM in seawater. Standard addition calibration curves in three different seawater matrices were used to determine analytical response and to quantify samples from the environment. Hydroxocobalamin was the main form of B(12) in seawater at our field site. We developed a method for quantifying four forms of B(12) in seawater by liquid chromatography/mass spectrometry with the option of simultaneous analysis of vitamins B(1), B(2), B(6), and B(7). We validated the method and demonstrated its application in the field.
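The standard-addition calibration mentioned above quantifies an analyte from the x-intercept of a response-versus-spike line; a minimal sketch follows, ignoring the dilution corrections a real workup would include. Names and the example numbers are illustrative.

```python
import numpy as np

def standard_addition_conc(added_pM, response):
    """Fit instrument response vs. spiked concentration; the unspiked
    concentration is the magnitude of the x-intercept of the fit."""
    slope, intercept = np.polyfit(added_pM, response, 1)
    return intercept / slope   # = |x-intercept|

# e.g., spikes of 0, 1, 2, 4 pM giving linearly increasing responses
print(standard_addition_conc([0, 1, 2, 4], [0.52, 1.01, 1.55, 2.53]))
```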
Tensions in the field: teaching standards of practice in optometry case presentations.
Spafford, Marlee M; Lingard, Lorelei; Schryer, Catherine F; Hrynchak, Patricia K
2004-10-01
Professional identity formation and its relationship to case presentations were studied in an optometry school's onsite clinic. Eight optometry students and six faculty optometrists were audio-recorded during 31 oral case presentations and the teaching exchanges related to them. Using convenience sampling, interviews were audio-recorded with four of the students and four of the optometrists from the field observations. After transcribing these audio-recordings, the research team members applied a grounded theory method to identify, test, and revise emergent themes. The theme reported herein pertains to communicating standards of practice. Faculty optometrists demonstrated three ways of communicating standards of practice to optometry students during case presentations: Official Way, Our Way, and My Way. Although there were differences between these standards, the rationale for the disparities was rarely explicitly articulated by the instructors to the students. Without this information, the incongruity among the standards was left to the students to interpret on their own. The risk created by faculty not articulating the rationale underlying standards of practice was that students misinterpreted the optometrists' ways as idiosyncratic. Thus, opportunities were missed in the educational setting to assist students in making responsible decisions, locating their position in practice, and shaping their professional identity. Competing responsibilities of patient care and student education left instructors with little time to articulate the rationale for standards of practice. Therefore, educators must reflect on innovative ways to bring into relief the logic behind their actions when working with novices.
Lapham, W.W.; Wilde, F.D.; Koterba, M.T.
1997-01-01
This is the first of a two-part report to document guidelines and standard procedures of the U.S. Geological Survey for the acquisition of data in ground-water-quality studies. This report provides guidelines and procedures for the selection and installation of wells for water-quality studies, and the required or recommended supporting documentation of these activities. Topics include (1) documentation needed for well files, field folders, and electronic files; (2) criteria and information needed for the selection of water-supply and observation wells, including site inventory and data collection during field reconnaissance; and (3) criteria and preparation for installation of monitoring wells, including the effects of equipment and materials on the chemistry of ground-water samples, a summary of drilling and coring methods, and information concerning well completion, development, and disposition.
Alternative to the Palatini method: A new variational principle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goenner, Hubert
2010-06-15
A variational principle is suggested within Riemannian geometry, in which an auxiliary metric and the Levi-Civita connection are varied independently. The auxiliary metric plays the role of a Lagrange multiplier and introduces nonminimal coupling of matter to the curvature scalar. The field equations are second-order PDEs and are easier to handle than those following from the so-called Palatini method. Moreover, in contrast to the latter method, no gradients of the matter variables appear. In cosmological modeling, the physics resulting from the alternative variational principle will differ from modeling using the standard Palatini method.
Evaluating Silent Reading Performance with an Eye Tracking System in Patients with Glaucoma
Murata, Noriaki; Fukuchi, Takeo
2017-01-01
Objective To investigate the relationship between silent reading performance and visual field defects in patients with glaucoma using an eye tracking system. Methods Fifty glaucoma patients (Group G; mean age, 52.2 years, standard deviation: 11.4 years) and 20 normal controls (Group N; mean age, 46.9 years; standard deviation: 17.2 years) were included in the study. All participants in Group G had early to advanced glaucomatous visual field defects but better than 20/20 visual acuity in both eyes. Participants silently read Japanese articles written horizontally while the eye tracking system monitored and calculated reading duration per 100 characters, number of fixations per 100 characters, and mean fixation duration, which were compared with mean deviation and visual field index values from Humphrey visual field testing (24–2 and 10–2 Swedish interactive threshold algorithm standard) of the right versus left eye and the better versus worse eye. Results There was a statistically significant difference between Groups G and N in mean fixation duration (G, 233.4 msec; N, 215.7 msec; P = 0.010). Within Group G, significant correlations were observed between reading duration and 24–2 right mean deviation (rs = -0.280, P = 0.049), 24–2 right visual field index (rs = -0.306, P = 0.030), 24–2 worse visual field index (rs = -0.304, P = 0.032), and 10–2 worse mean deviation (rs = -0.326, P = 0.025). Significant correlations were observed between mean fixation duration and 10–2 left mean deviation (rs = -0.294, P = 0.045) and 10–2 worse mean deviation (rs = -0.306, P = 0.037), respectively. Conclusions The severity of visual field defects may influence some aspects of reading performance. At least concerning silent reading, the visual field of the worse eye is an essential element of smoothness of reading. PMID:28095478
Entanglement-assisted quantum feedback control
NASA Astrophysics Data System (ADS)
Yamamoto, Naoki; Mikami, Tomoaki
2017-07-01
The main advantage of quantum metrology lies in the effective use of entanglement, which indeed allows us to achieve strictly better estimation performance than the standard quantum limit. In this paper, we propose an analogous method utilizing entanglement for the purpose of feedback control. The system considered is a general linear dynamical quantum system, where the control goal can be systematically formulated as a linear quadratic Gaussian control problem based on the quantum Kalman filtering method; in this setting, an entangled input probe field is effectively used to reduce the estimation error and accordingly the control cost function. In particular, we show that, in the problem of cooling an opto-mechanical oscillator, the entanglement-assisted feedback control can lower the stationary occupation number of the oscillator below the limit attainable by a controller with a coherent probe field, and furthermore beats the controller with an optimized squeezed probe field.
Reinersman, Phillip N; Carder, Kendall L
2004-05-01
A hybrid method is presented by which Monte Carlo (MC) techniques are combined with an iterative relaxation algorithm to solve the radiative transfer equation in arbitrary one-, two-, or three-dimensional optical environments. The optical environments are first divided into contiguous subregions, or elements. MC techniques are employed to determine the optical response function of each type of element. The elements are combined, and relaxation techniques are used to determine simultaneously the radiance field on the boundary and throughout the interior of the modeled environment. One-dimensional results compare well with a standard radiative transfer model. The light field beneath and adjacent to a long barge is modeled in two dimensions and displayed. Ramifications for underwater video imaging are discussed. The hybrid model is currently capable of providing estimates of the underwater light field needed to expedite inspection of ship hulls and port facilities.
NASA Astrophysics Data System (ADS)
Vainshtein, Igor; Baruch, Shlomi; Regev, Itai; Segal, Victor; Filis, Avishai; Riabzev, Sergey
2018-05-01
The growing demand for EO applications that operate around the clock, 24 hours a day, 7 days a week, such as border surveillance systems, emphasizes the need for a highly reliable cryocooler with increased operational availability and optimized Integrated Logistic Support (ILS). To meet this need, RICOR developed linear and rotary cryocoolers which successfully achieved this goal. Cryocooler MTTF was analyzed by theoretical reliability evaluation methods, demonstrated by normal and accelerated life tests at the cryocooler level, and finally verified by analysis of field data from cryocoolers operating at the system level. The following paper reviews theoretical reliability analysis methods together with reliability test results derived from standard and accelerated life demonstration tests performed at RICOR's advanced reliability laboratory. Finally, reliability verification data from fielded systems are presented as feedback on the work process.
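The abstract does not state which reliability model RICOR used; as a generic illustration of how accelerated life-test hours are translated to use conditions, a sketch with an Arrhenius acceleration factor and an exponential (constant-failure-rate) MTTF estimate follows. All parameter values are hypothetical.

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

def arrhenius_af(ea_eV, t_use_K, t_stress_K):
    """Acceleration factor between a stress temperature and a use
    temperature under the Arrhenius model."""
    return math.exp(ea_eV / BOLTZMANN_EV * (1.0 / t_use_K - 1.0 / t_stress_K))

def mttf_estimate(total_unit_hours, n_failures):
    # exponential-model point estimate from pooled (accelerated) test hours
    return total_unit_hours / max(n_failures, 1)

# e.g., 20 units x 2000 h at 350 K stress, 2 failures, 0.5 eV activation, 300 K use
af = arrhenius_af(0.5, 300.0, 350.0)
print(round(mttf_estimate(20 * 2000 * af, 2)), "h equivalent use-condition MTTF")
```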
Compact representations of partially coherent undulator radiation suitable for wave propagation
Lindberg, Ryan R.; Kim, Kwang -Je
2015-09-28
Undulator radiation is partially coherent in the transverse plane, with the degree of coherence depending on the ratio of the electron beam phase space area (emittance) to the characteristic radiation wavelength λ. Numerical codes used to predict x-ray beam line performance can typically only propagate coherent fields from the source to the image plane. We investigate methods for representing partially coherent undulator radiation using a suitably chosen set of coherent fields that can be used in standard wave propagation codes, and discuss such "coherent mode expansions" for arbitrary degrees of coherence. In the limit when the electron beam emittance along at least one direction is much larger than λ, the coherent modes are orthogonal and therefore compact; when the emittance approaches λ in both planes we discuss an economical method of defining the relevant coherent fields that samples the electron beam phase space using low-discrepancy sequences.
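A coherent mode expansion of the kind discussed above is, numerically, an eigendecomposition of the sampled cross-spectral density. A minimal 1-D sketch using a Gaussian Schell-model source follows; the beam size and coherence length are toy values, not the paper's undulator fields.

```python
import numpy as np

# Coherent-mode (Mercer) expansion of a sampled 1-D cross-spectral density:
#   J(x1, x2) = sum_k lam_k * phi_k(x1) * conj(phi_k(x2))
x = np.linspace(-1.0, 1.0, 256)
sigma, xi = 0.5, 0.1                          # beam size, coherence length (toy)
X1, X2 = np.meshgrid(x, x, indexing="ij")
J = np.exp(-(X1**2 + X2**2) / (4 * sigma**2)) \
  * np.exp(-((X1 - X2) ** 2) / (2 * xi**2))   # Gaussian Schell-model CSD

lam, phi = np.linalg.eigh(J)                  # Hermitian eigendecomposition
lam, phi = lam[::-1], phi[:, ::-1]            # sort by decreasing mode weight
frac = np.cumsum(lam) / lam.sum()
print(np.searchsorted(frac, 0.99) + 1, "modes carry 99% of the power")
```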
NASA Astrophysics Data System (ADS)
Various papers on electromagnetic compatibility are presented. Some of the topics considered include: field-to-wire coupling from 1 to 18 GHz, an SHF/EHF field-to-wire coupling model, a numerical method for the analysis of coupling to thin wire structures, a spread-spectrum system with an adaptive array for combating interference, a technique to select the optimum modulation indices for suppression of undesired signals for simultaneous range and data operations, development of a MHz RF leak detector technique for aircraft harness surveillance, and the performance of standard aperture shielding techniques at microwave frequencies. Also discussed are: the spectrum efficiency of spread-spectrum systems, control of power supply ripple produced sidebands in microwave transistor amplifiers, an intership SATCOM versus radar electromagnetic interference prediction model, considerations in the design of a broadband E-field sensing system, unique bonding methods for spacecraft, and a review of EMC practice for launch vehicle systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kesner, A; Poli, G; Beykan, S
Purpose: As the field of Nuclear Medicine moves forward with efforts to integrate radiation dosimetry into clinical practice, we can identify the challenge posed by the lack of standardized dose calculation methods and protocols. All personalized internal dosimetry is derived by projecting biodistribution measurements into dosimetry calculations. In an effort to standardize the organization of data and its reporting, we have developed, as a sequel to the EANM recommendation on "Good Dosimetry Reporting", a freely available biodistribution template, which can be used to create a common point of reference for dosimetry data. It can be disseminated, interpreted, and used for method development widely across the field. Methods: A generalized biodistribution template was built in a comma-delimited format (.csv) to be completed by users performing biodistribution measurements. The template is available for free download. The download site includes instructions and other usage details for the template. Results: This is a new resource developed for the community. It is our hope that users will consider integrating it into their dosimetry operations. Having biodistribution data available and easily accessible for all patients processed is a strategy for organizing large amounts of information. It may enable users to create their own databases that can be analyzed for multiple aspects of dosimetry operations. Furthermore, it enables population data to be easily reprocessed using different dosimetry methodologies. With respect to dosimetry-related research and publications, the biodistribution template can be included as supplementary material, and will allow others in the community to better compare calculations and results achieved. Conclusion: As dosimetry in nuclear medicine becomes more routinely applied in clinical applications, we, as a field, need to develop the infrastructure for handling large amounts of data. Our organ-level biodistribution template can be used as a standard format for data collection and organization, as well as for dosimetry research and software development.
Some results on numerical methods for hyperbolic conservation laws
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang Huanan.
1989-01-01
This dissertation contains some results on the numerical solutions of hyperbolic conservation laws. (1) The author introduces an artificial compression method as a correction to the basic ENO schemes. The method successfully prevents contact discontinuities from being smeared. This is achieved by increasing the slopes of the ENO reconstructions in such a way that the essentially non-oscillatory property of the schemes is kept. He analyzes the non-oscillatory property of the new artificial compression method by applying it to the UNO scheme, a second-order accurate ENO scheme, and proves that the resulting scheme is indeed non-oscillatory. Extensive 1-D numerical results and some preliminary 2-D ones are provided to show the strong performance of the method. (2) He combines the ENO schemes and the centered difference schemes into self-adjusting hybrid schemes which he calls localized ENO schemes. At or near jumps, he uses the ENO schemes with field-by-field decompositions; otherwise he simply uses the centered difference schemes without the field-by-field decompositions. The method involves a new interpolation analysis. In numerical experiments on several standard test problems, the quality of the numerical results of this method is close to that of the pure ENO results. The localized ENO schemes can be equipped with the above artificial compression method; in this way, he dramatically improves the resolution of the contact discontinuities at very little additional cost. (3) He introduces a space-time mesh refinement method for time-dependent problems.
Assessment of soil compaction properties based on surface wave techniques
NASA Astrophysics Data System (ADS)
Jihan Syamimi Jafri, Nur; Rahim, Mohd Asri Ab; Zahid, Mohd Zulham Affandi Mohd; Faizah Bawadi, Nor; Munsif Ahmad, Muhammad; Faizal Mansor, Ahmad; Omar, Wan Mohd Sabki Wan
2018-03-01
Soil compaction plays an important role in all construction activities to reduce the risk of damage. Traditional methods of assessing compaction, such as field tests and invasive penetration tests of compacted areas, have great limitations and make evaluating large areas time-consuming. Thus, this study explores the possibility of using a non-invasive surface wave method, Multi-channel Analysis of Surface Waves (MASW), as a useful tool for assessing soil compaction. The aim of this study was to determine the shear wave velocity profiles and field density of compacted soils under varying compaction efforts using the MASW method. Pre- and post-compaction MASW surveys were conducted at Pauh Campus, UniMAP, after applying rolling compaction with varying numbers of passes (2, 6, and 10). Each seismic record was acquired with a GEODE seismograph. A sand replacement test was conducted for each survey line to obtain the field density data. All seismic data were processed using the SeisImager/SW software. The results show that the shear wave velocity profiles increase with the number of passes from 0 to 6, but decrease after 10 passes. This method could attract the interest of the geotechnical community, as it can be an alternative to the standard test for assessing soil compaction in field operations.
Standardization of terminology in field of ionizing radiations and their measurements
NASA Astrophysics Data System (ADS)
Yudin, M. F.; Karaveyev, F. M.
1984-03-01
A new standard terminology was introduced on 1 January 1982 by the Scientific-Technical Commission on All-Union State Standards to cover ionizing radiations and their measurements. It is based on earlier standards such as GOST 15484-74/81, 18445-70/73, 19849-74, and 22490-77, as well as the latest recommendations by international committees. It contains 186 terms and definitions in 14 paragraphs. Fundamental concepts, sources and forms of ionizing radiations, characteristics and parameters of ionizing radiations, and methods of measuring these characteristics and parameters are covered. New terms have been added to existing ones. The equivalent English, French, and German terms are also given. The terms "measurement of ionizing radiation" and "transfer of ionizing particles" (equivalent to particle fluence or energy fluence) are still under discussion.
NASA Astrophysics Data System (ADS)
Ye, Shiwei; Takahashi, Satoru; Michihata, Masaki; Takamasu, Kiyoshi
2018-05-01
The quality control of microgrooves is extremely crucial to ensure the performance and stability of microstructures and to improve their fabrication efficiency. This paper introduces a novel optical inspection method and a modified Linnik microscopic interferometer measurement system to detect the depth of microgrooves with a width less than the diffraction limit. Using this optical method, the depth of diffraction-limited microgrooves can be related to the near-field optical phase difference, which cannot be practically observed but can be computed from practical far-field observations. Thus, a modified Linnik microscopic interferometer system based on three identical objective lenses and an optical path reversibility principle was developed. In addition, experiments on standard grating microgrooves on a silicon surface were carried out to demonstrate the feasibility and repeatability of the proposed method and the developed measurement system.
[Welding arc temperature field measurements based on Boltzmann spectrometry].
Si, Hong; Hua, Xue-Ming; Zhang, Wang; Li, Fang; Xiao, Xiao
2012-09-01
Arc plasma is a non-uniform plasma with complicated internal energy and mass transport processes, so plasma temperature measurement is of great significance. Compared with the absolute spectral line intensity method and the standard temperature method, the Boltzmann plot method is more accurate and convenient. Based on Boltzmann theory, the present paper calculates the temperature distribution of the plasma and analyzes the principle of line selection, using real-time scanning measurements of the TIG arc space.
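The Boltzmann plot method referenced above infers temperature from the slope of ln(Iλ/gA) against upper-level energy; a minimal sketch follows, with illustrative argument names and intensities in arbitrary units.

```python
import numpy as np

K_B_EV = 8.617e-5  # Boltzmann constant, eV/K

def boltzmann_plot_temperature(intensity, wavelength_nm, g, A, e_upper_eV):
    """Excitation temperature from several emission lines of one species:
        ln(I * lambda / (g * A)) = -E_upper / (k_B T) + const,
    so T follows from the slope of a straight-line fit."""
    I, lam, g, A, E = map(np.asarray, (intensity, wavelength_nm, g, A, e_upper_eV))
    y = np.log(I * lam / (g * A))
    slope, _ = np.polyfit(E, y, 1)
    return -1.0 / (K_B_EV * slope)
```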
Alsaggaf, Rotana; O'Hara, Lyndsay M; Stafford, Kristen A; Leekha, Surbhi; Harris, Anthony D
2018-02-01
OBJECTIVE A systematic review of quasi-experimental studies in the field of infectious diseases was published in 2005. The aim of this study was to assess improvements in the design and reporting of quasi-experiments 10 years after the initial review. We also aimed to report the statistical methods used to analyze quasi-experimental data. DESIGN Systematic review of articles published from January 1, 2013, to December 31, 2014, in 4 major infectious disease journals. METHODS Quasi-experimental studies focused on infection control and antibiotic resistance were identified and classified based on 4 criteria: (1) type of quasi-experimental design used, (2) justification of the use of the design, (3) use of correct nomenclature to describe the design, and (4) statistical methods used. RESULTS Of 2,600 articles, 173 (7%) featured a quasi-experimental design, compared to 73 of 2,320 articles (3%) in the previous review (P<.01). Moreover, 21 articles (12%) utilized a study design with a control group; 6 (3.5%) justified the use of a quasi-experimental design; and 68 (39%) identified their design using the correct nomenclature. In addition, 2-group statistical tests were used in 75 studies (43%); 58 studies (34%) used standard regression analysis; 18 (10%) used segmented regression analysis; 7 (4%) used standard time-series analysis; 5 (3%) used segmented time-series analysis; and 10 (6%) did not utilize statistical methods for comparisons. CONCLUSIONS While some progress occurred over the decade, it is crucial to continue improving the design and reporting of quasi-experimental studies in the fields of infection control and antibiotic resistance to better evaluate the effectiveness of important interventions. Infect Control Hosp Epidemiol 2018;39:170-176.
NASA Astrophysics Data System (ADS)
Jeffrey, N.; Abdalla, F. B.; Lahav, O.; Lanusse, F.; Starck, J.-L.; Leonard, A.; Kirk, D.; Chang, C.; Baxter, E.; Kacprzak, T.; Seitz, S.; Vikram, V.; Whiteway, L.; Abbott, T. M. C.; Allam, S.; Avila, S.; Bertin, E.; Brooks, D.; Rosell, A. Carnero; Kind, M. Carrasco; Carretero, J.; Castander, F. J.; Crocce, M.; Cunha, C. E.; D'Andrea, C. B.; da Costa, L. N.; Davis, C.; De Vicente, J.; Desai, S.; Doel, P.; Eifler, T. F.; Evrard, A. E.; Flaugher, B.; Fosalba, P.; Frieman, J.; García-Bellido, J.; Gerdes, D. W.; Gruen, D.; Gruendl, R. A.; Gschwend, J.; Gutierrez, G.; Hartley, W. G.; Honscheid, K.; Hoyle, B.; James, D. J.; Jarvis, M.; Kuehn, K.; Lima, M.; Lin, H.; March, M.; Melchior, P.; Menanteau, F.; Miquel, R.; Plazas, A. A.; Reil, K.; Roodman, A.; Sanchez, E.; Scarpine, V.; Schubnell, M.; Sevilla-Noarbe, I.; Smith, M.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Thomas, D.; Walker, A. R.
2018-05-01
Mapping the underlying density field, including non-visible dark matter, using weak gravitational lensing measurements is now a standard tool in cosmology. Due to its importance to the science results of current and upcoming surveys, the quality of the convergence reconstruction methods should be well understood. We compare three methods: Kaiser-Squires (KS), Wiener filter, and GLIMPSE. KS is a direct inversion, not accounting for survey masks or noise. The Wiener filter is well-motivated for Gaussian density fields in a Bayesian framework. GLIMPSE uses sparsity, aiming to reconstruct non-linearities in the density field. We compare these methods with several tests using public Dark Energy Survey (DES) Science Verification (SV) data and realistic DES simulations. The Wiener filter and GLIMPSE offer substantial improvements over smoothed KS with a range of metrics. Both the Wiener filter and GLIMPSE convergence reconstructions show a 12% improvement in Pearson correlation with the underlying truth from simulations. To compare the mapping methods' abilities to find mass peaks, we measure the difference between peak counts from simulated ΛCDM shear catalogues and catalogues with no mass fluctuations (a standard data vector when inferring cosmology from peak statistics); the maximum signal-to-noise of these peak statistics is increased by a factor of 3.5 for the Wiener filter and 9 for GLIMPSE. With simulations we measure the reconstruction of the harmonic phases; the phase residuals' concentration is improved 17% by GLIMPSE and 18% by the Wiener filter. The correlation between reconstructions from data and foreground redMaPPer clusters is increased 18% by the Wiener filter and 32% by GLIMPSE.
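Of the three mapping methods compared above, only Kaiser-Squires is compact enough to sketch: it is a direct Fourier-space inversion of shear into convergence, with no treatment of masks or noise, which is exactly the limitation the paper discusses. The sketch below assumes a periodic grid.

```python
import numpy as np

def kaiser_squires(g1, g2):
    """Direct KS inversion: shear maps (g1, g2) -> convergence kappa."""
    ny, nx = g1.shape
    l1 = np.fft.fftfreq(nx)[np.newaxis, :]
    l2 = np.fft.fftfreq(ny)[:, np.newaxis]
    l_sq = l1**2 + l2**2
    l_sq[0, 0] = 1.0                              # avoid 0/0 at the DC mode
    g1_hat, g2_hat = np.fft.fft2(g1), np.fft.fft2(g2)
    kappa_hat = ((l1**2 - l2**2) * g1_hat + 2.0 * l1 * l2 * g2_hat) / l_sq
    kappa_hat[0, 0] = 0.0                         # mean kappa is unconstrained
    return np.real(np.fft.ifft2(kappa_hat))
```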
Depth assisted compression of full parallax light fields
NASA Astrophysics Data System (ADS)
Graziosi, Danillo B.; Alpaslan, Zahir Y.; El-Ghoroury, Hussein S.
2015-03-01
Full parallax light field displays require high pixel density and huge amounts of data. Compression is a necessary tool used by 3D display systems to cope with the high bandwidth requirements. One of the formats adopted by MPEG for 3D video coding standards is the use of multiple views with associated depth maps. Depth maps enable the coding of a reduced number of views, and are used by compression and synthesis software to reconstruct the light field. However, most of the developed coding and synthesis tools target linearly arranged cameras with small baselines. Here we propose to use the 3D video coding format for full parallax light field coding. We introduce a view selection method inspired by plenoptic sampling followed by transform-based view coding and view synthesis prediction to code residual views. We determine the minimal requirements for view sub-sampling and present the rate-distortion performance of our proposal. We also compare our method with established video compression techniques, such as H.264/AVC, H.264/MVC, and the new 3D video coding algorithm, 3DV-ATM. Our results show that our method not only has an improved rate-distortion performance, it also preserves the structure of the perceived light fields better.
CAFE: A New Relativistic MHD Code
NASA Astrophysics Data System (ADS)
Lora-Clavijo, F. D.; Cruz-Osorio, A.; Guzmán, F. S.
2015-06-01
We introduce CAFE, a new independent code designed to solve the equations of relativistic ideal magnetohydrodynamics (RMHD) in three dimensions. We present the standard tests for an RMHD code and for the relativistic hydrodynamics regime because we have not reported them before. The tests include the one-dimensional Riemann problems related to blast waves, head-on collisions of streams, and states with transverse velocities, with and without magnetic field, which is aligned or transverse, constant or discontinuous across the initial discontinuity. Among the two-dimensional (2D) and 3D tests without magnetic field, we include the 2D Riemann problem, a one-dimensional shock tube along a diagonal, the high-speed Emery wind tunnel, the Kelvin-Helmholtz (KH) instability, a set of jets, and a 3D spherical blast wave, whereas in the presence of a magnetic field we show the magnetic rotor, the cylindrical explosion, a case of Kelvin-Helmholtz instability, and a 3D magnetic field advection loop. The code uses high-resolution shock-capturing methods, and we present the error analysis for a combination that uses the Harten, Lax, van Leer, and Einfeldt (HLLE) flux formula combined with a linear, piecewise parabolic method and fifth-order weighted essentially nonoscillatory reconstructors. We use the flux-constrained transport and the divergence cleaning methods to control the divergence-free magnetic field constraint.
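The HLLE flux named in the abstract is compact enough to quote; the sketch below is a generic illustration of the two-wave approximate Riemann flux, not CAFE's actual implementation. Arguments are the left/right conserved states, the corresponding physical fluxes, and bounding signal speeds.

```python
def hlle_flux(u_left, u_right, f_left, f_right, s_left, s_right):
    """Harten-Lax-van Leer-Einfeldt two-wave flux at a cell interface:
        F = (s+ F_L - s- F_R + s+ s- (U_R - U_L)) / (s+ - s-),
    with s- = min(s_left, 0) and s+ = max(s_right, 0); assumes the generic
    case s- < s+ (a degenerate interface is handled upstream)."""
    s_minus = min(s_left, 0.0)
    s_plus = max(s_right, 0.0)
    return (s_plus * f_left - s_minus * f_right
            + s_plus * s_minus * (u_right - u_left)) / (s_plus - s_minus)
```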
Updated methodology for nuclear magnetic resonance characterization of shales
NASA Astrophysics Data System (ADS)
Washburn, Kathryn E.; Birdwell, Justin E.
2013-08-01
Unconventional petroleum resources, particularly in shales, are expected to play an increasingly important role in the world's energy portfolio in the coming years. Nuclear magnetic resonance (NMR), particularly at low-field, provides important information in the evaluation of shale resources. Most of the low-field NMR analyses performed on shale samples rely heavily on standard T1 and T2 measurements. We present a new approach using solid echoes in the measurement of T1 and T1-T2 correlations that addresses some of the challenges encountered when making NMR measurements on shale samples compared to conventional reservoir rocks. Combining these techniques with standard T1 and T2 measurements provides a more complete assessment of the hydrogen-bearing constituents (e.g., bitumen, kerogen, clay-bound water) in shale samples. These methods are applied to immature and pyrolyzed oil shale samples to examine the solid and highly viscous organic phases present during the petroleum generation process. The solid echo measurements produce additional signal in the oil shale samples compared to the standard methodologies, indicating the presence of components undergoing homonuclear dipolar coupling. The results presented here include the first low-field NMR measurements performed on kerogen as well as detailed NMR analysis of highly viscous thermally generated bitumen present in pyrolyzed oil shale.
Characterization of Triaxial Braided Composite Material Properties for Impact Simulation
NASA Technical Reports Server (NTRS)
Roberts, Gary D.; Goldberg, Robert K.; Biniendak, Wieslaw K.; Arnold, William A.; Littell, Justin D.; Kohlman, Lee W.
2009-01-01
The reliability of impact simulations for aircraft components made with triaxial braided carbon fiber composites is currently limited by inadequate material property data and the lack of validated material models for analysis. Improvements to standard quasi-static test methods are needed to account for the large unit cell size and localized damage within the unit cell. The deformation and damage of a triaxial braided composite material was examined using standard quasi-static in-plane tension, compression, and shear tests. Some modifications to standard test specimen geometries are suggested, and methods for measuring the local strain at the onset of failure within the braid unit cell are presented. Deformation and damage at higher strain rates is examined using ballistic impact tests on 61- by 61-cm by 3.2-mm (24- by 24- by 0.125-in.) composite panels. Digital image correlation techniques were used to examine full-field deformation and damage during both quasi-static and impact tests. An impact analysis method is presented that utilizes both local and global deformation and failure information from the quasi-static tests as input for impact simulations. Improvements that are needed in test and analysis methods for better predictive capability are examined.
Evaluation of a standard test method for screening fuels in soils
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sorini, S.S.; Schabron, J.F.
1996-12-31
A new screening method for fuel contamination in soils was recently developed as American Society for Testing and Materials (ASTM) Method D-5831-95, Standard Test Method for Screening Fuels in Soils. This method uses low-toxicity chemicals and can be used to screen organic-rich soils, and it is fast, easy, and inexpensive to perform. Fuels containing aromatic compounds, such as diesel fuel and gasoline, as well as other aromatic-containing hydrocarbon materials, such as motor oil, crude oil, and coal oil, can be determined. The screening method for fuels in soils was evaluated by conducting a collaborative study. In the collaborative study, a sand and an organic soil spiked with various concentrations of diesel fuel were tested. Data from the collaborative study were used to determine the reproducibility (between participants) and repeatability (within participants) precision of the method for screening the test materials. The collaborative study data also provide information on the performance of portable field equipment (patent pending) versus laboratory equipment for performing the screening method, and a comparison of diesel concentration values determined using the screening method versus a laboratory method.
Briers, G E; Lindner, J R; Shinn, G C; Wingenbach, G W; Baker, M T
2010-01-01
Agricultural and extension education--or some derivative name--is a field of study leading to the doctoral degree in universities around the world. Is there a body of knowledge or a taxonomy of the knowledge--e.g., a knowledge domain--that one should possess with a doctorate in agricultural and extension education? The purpose of this paper was to synthesize the work of researchers who attempted to define the field of study, with a taxonomy comprising the knowledge domains (standards) and knowledge objects--structured interrelated sets of data, knowledge, and wisdom--of the field of study. Doctoral study in agricultural and extension education needs a document that provides rules and guidelines--rules and guidelines that in turn provide for common and repeated use--all leading to achievement of an optimum degree of order in the context of academic, scholarly, and professional practice in agricultural and extension education. Thus, one would know in broad categories the knowledge, skills, and abilities possessed by one who holds a doctoral degree in agricultural and extension education. That is, there would exist a standard for doctoral degrees in agricultural and extension education. A content analysis of three previous attempts to categorize knowledge in agricultural and extension education served as the primary technique to create a new taxonomy--or to confirm an existing taxonomy--for doctoral study in agricultural and extension education. The following coalesced as nine essential knowledge domains for a doctorate in agricultural and extension education: (1) history, philosophy, ethics, and policy; (2) agricultural/rural development; (3) organizational development and change management; (4) planning, needs assessment, and evaluation; (5) learning theory; (6) curriculum development and instructional design; (7) teaching methods and delivery strategies; (8) research methods and tools; and (9) scholarship and communications.
Criado-García, Laura; Garrido-Delgado, Rocío; Arce, Lourdes; Valcárcel, Miguel
2013-07-15
A UV-Ion Mobility Spectrometer is a simple, rapid, inexpensive instrument widely used in environmental analysis, among other fields. The advantageous features of its underlying technology can be of great help towards developing reliable, economical methods for determining gaseous compounds from gaseous, liquid and solid samples. Developing an effective method using UV-Ion Mobility Spectrometry (UV-IMS) to determine volatile analytes entails using appropriate gaseous standards for calibrating the spectrometer. In this work, two home-made sample introduction systems (SISs) and a commercial gas generator were used to obtain such gaseous standards. The first home-made SIS was a static head-space for measuring compounds present in liquid samples, and the other was an exponential dilution set-up for measuring compounds present in gaseous samples. Gaseous compounds generated by each method were determined on-line by UV-IMS. The target analytes chosen for this comparative study were ethanol, acetone, benzene, toluene, ethylbenzene and xylene isomers. The different alternatives were acceptable in terms of sensitivity, precision and selectivity.
A standardized mean difference effect size for multiple baseline designs across individuals.
Hedges, Larry V; Pustejovsky, James E; Shadish, William R
2013-12-01
Single-case designs are a class of research methods for evaluating treatment effects by measuring outcomes repeatedly over time while systematically introducing different conditions (e.g., treatment and control) to the same individual. The designs are used across fields such as behavior analysis, clinical psychology, special education, and medicine. Emerging standards for single-case designs have focused attention on methods for summarizing and meta-analyzing findings and on the need for effect size indices that are comparable to those used in between-subjects designs. In previous work, we discussed how to define and estimate an effect size that is directly comparable to the standardized mean difference often used in between-subjects research, based on data from a particular type of single-case design, the treatment reversal or (AB)^k design. This paper extends the effect size measure to another type of single-case study, the multiple baseline design. We propose estimation methods for the effect size and its variance, study the estimators using simulation, and demonstrate the approach in two applications.
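For context, the between-subjects effect size that the paper's measure is designed to be comparable to is the pooled-SD standardized mean difference; a minimal sketch follows. The multiple-baseline estimator itself involves variance components and is given in the paper, not here.

```python
import numpy as np

def standardized_mean_difference(treatment, control):
    """Cohen's d with a pooled standard deviation (between-subjects form)."""
    t, c = np.asarray(treatment, float), np.asarray(control, float)
    n_t, n_c = len(t), len(c)
    pooled_var = ((n_t - 1) * t.var(ddof=1) + (n_c - 1) * c.var(ddof=1)) \
                 / (n_t + n_c - 2)
    return (t.mean() - c.mean()) / np.sqrt(pooled_var)
```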
Gryz, Krzysztof; Karpowicz, Jolanta
2006-01-01
Occupational exposure to electromagnetic fields from electrosurgery devices was investigated (according to the requirements of Polish Standard PN-T-06580:2002). The exposure was evaluated following the criteria established by occupational safety and health regulations. Measurements and evaluation of the currents flowing through the exposed worker's body were also conducted, following the method and criteria published in the IEEE standard and European Directive 2004/40/EC. It was found that in the vicinity of electrosurgical devices, electromagnetic fields at levels to which only workers operating the field source should be exposed can extend up to 70 cm from the active electrode and its supply cables. When the cables are placed directly on the surgeon's body, or when daily exposure is of long duration, workers can be overexposed (with reference to Polish regulations). The current flowing through the arm of a surgeon holding the electrode, where the electric field is strongest (approximately 1000 V/m or higher), can exceed the permissible value of 40 mA established by Directive 2004/40/EC for contact current. The surgeon's exposure can be reduced by proper positioning of the cables supplying a monopolar electrode or by the use of a bipolar electrode.
Tourab, Wafa; Babouri, Abdesselam
2016-06-01
This work presents an experimental and modeling study of the electromagnetic environment in the vicinity of a high-voltage substation located in Annaba, a city in eastern Algeria with a very high population density. The effects of electromagnetic fields emanating from coupled multi-line high-voltage (MLHV) power systems on the health of workers and of people living in proximity to substations are analyzed. Experimental measurements for the multi-line power system were conducted in free space under the high-voltage lines. Field intensities were measured using a calibrated PMM8053B electromagnetic field meter at heights of 0 m, 1 m, 1.5 m, and 1.8 m, which correspond to sensitive parts of the human body (feet, pelvis, heart, and head). The measurement results were validated by numerical simulation using the finite element method and compared with the limit values of the international standards. We aim to establish our own national standards for exposure to electromagnetic fields and to build a regional database that will be at the disposal of the partners concerned, to ensure the safety of people, and mainly of workers, inside high-voltage electrical substations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Calderon, E; Siergiej, D
2014-06-01
Purpose: Output factor determination for small fields (less than 20 mm) presents significant challenges due to ion chamber volume averaging and diode over-response. Measured output factor values between detectors are known to have large deviations as field sizes are decreased. No set standard exists to resolve this difference in measurement. We observed differences between measured output factors of up to 14% using two different detectors. Published Monte Carlo derived correction factors were used to address this challenge and decrease the output factor deviation between detectors. Methods: Output factors for Elekta's linac-based stereotactic cone system were measured using the EDGE detector (Sun Nuclear) and the A16 ion chamber (Standard Imaging). Measurement conditions were 100 cm SSD (source-to-surface distance) and 1.5 cm depth. Output factors were first normalized to a 10.4 cm × 10.4 cm field size using a daisy-chaining technique to minimize the dependence of detector response on field size. An equation expressing the relation between the published Monte Carlo correction factors and field size for each detector was derived. The measured output factors were then multiplied by the calculated correction factors. EBT3 gafchromic film dosimetry was used to independently validate the corrected output factors. Results: Without correction, the deviation in output factors between the EDGE and A16 detectors ranged from 1.3 to 14.8%, depending on cone size. After applying the calculated correction factors, this deviation fell to 0 to 3.4%. Output factors determined with film agree within 3.5% of the corrected output factors. Conclusion: We present a practical approach to applying published Monte Carlo derived correction factors to measured small-field output factors for the EDGE and A16 detectors. Using this method, we were able to decrease the deviation between the two detectors from 14.8% to 3.4%.
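The correction and normalization steps described above are simple arithmetic once the Monte Carlo factors are in hand; a sketch with illustrative names follows. The factor values themselves come from the published literature, not from this code.

```python
def corrected_output_factor(of_measured, k_mc):
    # apply a published Monte Carlo detector correction factor
    return of_measured * k_mc

def daisy_chained_of(reading_cone, reading_intermediate, of_intermediate):
    """Tie a small-cone reading to the 10.4 cm x 10.4 cm reference through
    an intermediate field size where both detectors agree."""
    return (reading_cone / reading_intermediate) * of_intermediate
```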
Entanglement entropy of electromagnetic edge modes.
Donnelly, William; Wall, Aron C
2015-03-20
The vacuum entanglement entropy of Maxwell theory, when evaluated by standard methods, contains an unexpected term with no known statistical interpretation. We resolve this two-decades old puzzle by showing that this term is the entanglement entropy of edge modes: classical solutions determined by the electric field normal to the entangling surface. We explain how the heat kernel regularization applied to this term leads to the negative divergent expression found by Kabat. This calculation also resolves a recent puzzle concerning the logarithmic divergences of gauge fields in 3+1 dimensions.
NASA Astrophysics Data System (ADS)
Adshead, Peter; Giblin, John T.; Weiner, Zachary J.
2017-12-01
We study preheating in models where a scalar inflaton is directly coupled to a non-Abelian SU(2) gauge field. In particular, we examine m²ϕ² inflation with a conformal, dilatonlike coupling to the non-Abelian sector. We describe a numerical scheme that combines lattice gauge theory with standard finite difference methods applied to the scalar field. We show that a significant tachyonic instability allows for efficient preheating, which is parametrically suppressed by increasing the non-Abelian self-coupling. Additionally, we comment on the technical implementation of the evolution scheme and setting initial conditions.
The receptive field is dead. Long live the receptive field?
Fairhall, Adrienne
2014-01-01
Advances in experimental techniques, including behavioral paradigms using rich stimuli under closed loop conditions and the interfacing of neural systems with external inputs and outputs, reveal complex dynamics in the neural code and require a revisiting of standard concepts of representation. High-throughput recording and imaging methods along with the ability to observe and control neuronal subpopulations allow increasingly detailed access to the neural circuitry that subserves these representations and the computations they support. How do we harness theory to build biologically grounded models of complex neural function? PMID:24618227
Explosion localization via infrasound.
Szuberla, Curt A L; Olson, John V; Arnoult, Kenneth M
2009-11-01
Two acoustic source localization techniques were applied to infrasonic data and their relative performance was assessed. The standard approach for low-frequency localization uses an ensemble of small arrays to separately estimate far-field source bearings, with the source location then obtained from the various back azimuths. This method was compared to one developed by the authors that treats the smaller subarrays as a single meta-array. In numerical simulation and a field experiment, the latter technique was found to provide improved localization precision everywhere in the vicinity of a 3-km-aperture meta-array, often by an order of magnitude.
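The standard far-field approach described above reduces to intersecting back-azimuth lines from the individual arrays; a least-squares sketch follows, with coordinates in a local flat-earth frame and bearings measured clockwise from north. The authors' meta-array technique is not reproduced here.

```python
import numpy as np

def locate_from_bearings(array_xy, bearings_deg):
    """Least-squares intersection of back-azimuth lines. A bearing theta
    from position p constrains the source s to n . (s - p) = 0, where n
    is the unit normal to the pointing direction."""
    rows, rhs = [], []
    for (x0, y0), theta in zip(array_xy, np.radians(bearings_deg)):
        d = np.array([np.sin(theta), np.cos(theta)])  # east, north components
        n = np.array([-d[1], d[0]])                   # normal to the bearing
        rows.append(n)
        rhs.append(n @ np.array([x0, y0]))
    src, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return src  # estimated (x, y) of the source
```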