An equivalent domain integral method in the two-dimensional analysis of mixed mode crack problems
NASA Technical Reports Server (NTRS)
Raju, I. S.; Shivakumar, K. N.
1990-01-01
An equivalent domain integral (EDI) method for calculating J-integrals for two-dimensional cracked elastic bodies is presented. The details of the method and its implementation are presented for isoparametric elements. The EDI method gave accurate values of the J-integrals for two mode I and two mixed mode problems. Numerical studies showed that domains consisting of one layer of elements are sufficient to obtain accurate J-integral values. Two procedures for separating the individual modes from the domain integrals are presented.
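As a brief aside on the form of the domain integral (a standard statement of the EDI consistent with this line of work; the weight function is denoted q here), the crack-tip contour integral is recast as an integral over a finite annular domain A surrounding the tip:

$$ J = \int_{A} \left( \sigma_{ij}\,\frac{\partial u_j}{\partial x_1} - W\,\delta_{1i} \right) \frac{\partial q}{\partial x_i}\, \mathrm{d}A, $$

where W is the strain-energy density, x_1 is the local crack direction, and q falls smoothly from 1 on the inner boundary of the domain to 0 on its outer boundary; with a one-element-layer domain, q varies entirely within that single ring of isoparametric elements.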
Nuclear measurement of subgrade moisture.
DOT National Transportation Integrated Search
1973-01-01
The basic consideration in evaluating subgrade moisture conditions under pavements is the selection of a method of determining moisture contents that is sufficiently accurate and can be used with minimal effort, interference with traffic, and recalib...
Gasometric Determination of CO₂ Released from Carbonate Materials
ERIC Educational Resources Information Center
Fagerlund, Johan; Zevenhoven, Ron; Huldén, Stig-Göran; Södergård, Berndt
2010-01-01
To determine the carbonation degree of materials used in mineral carbonation experiments, a fast, simple, and sufficiently accurate method is required. For this purpose, a method based on the reaction between carbonates and hydrochloric acid was developed. It was noted that this method could also be used to teach undergraduate students some basic…
29 CFR 1926.60 - Methylenedianiline.
Code of Federal Regulations, 2013 CFR
2013-07-01
... the following: (i) Construction, alteration, repair, maintenance, or renovation of structures... based are scientifically sound and were collected using methods that are sufficiently accurate and... are substantially similar. The data must be scientifically sound, the characteristics of the MDA...
29 CFR 1926.60 - Methylenedianiline.
Code of Federal Regulations, 2014 CFR
2014-07-01
... the following: (i) Construction, alteration, repair, maintenance, or renovation of structures... based are scientifically sound and were collected using methods that are sufficiently accurate and... are substantially similar. The data must be scientifically sound, the characteristics of the MDA...
29 CFR 1926.60 - Methylenedianiline.
Code of Federal Regulations, 2012 CFR
2012-07-01
... the following: (i) Construction, alteration, repair, maintenance, or renovation of structures... based are scientifically sound and were collected using methods that are sufficiently accurate and... are substantially similar. The data must be scientifically sound, the characteristics of the MDA...
Characterizing dispersal patterns in a threatened seabird with limited genetic structure
Laurie A. Hall; Per J. Palsboll; Steven R. Beissinger; James T. Harvey; Martine Berube; Martin G. Raphael; Kim Nelson; Richard T. Golightly; Laura McFarlane-Tranquilla; Scott H. Newman; M. Zachariah Peery
2009-01-01
Genetic assignment methods provide an appealing approach for characterizing dispersal patterns on ecological time scales, but require sufficient genetic differentiation to accurately identify migrants and a large enough sample size of migrants to, for example, compare dispersal between sexes or age classes. We demonstrate that assignment methods can be rigorously used...
A fast cross-validation method for alignment of electron tomography images based on Beer-Lambert law
Yan, Rui; Edwards, Thomas J.; Pankratz, Logan M.; Kuhn, Richard J.; Lanman, Jason K.; Liu, Jun; Jiang, Wen
2015-01-01
In electron tomography, accurate alignment of tilt series is an essential step in attaining high-resolution 3D reconstructions. Nevertheless, quantitative assessment of alignment quality has remained a challenging issue, even though many alignment methods have been reported. Here, we report a fast and accurate method, tomoAlignEval, based on the Beer-Lambert law, for the evaluation of alignment quality. Our method is able to globally estimate the alignment accuracy by measuring the goodness of the log-linear relationship of the beam intensity attenuations at different tilt angles. Extensive tests with experimental data demonstrated its robust performance with stained and cryo samples. Our method is not only significantly faster but also more sensitive than measurements of tomogram resolution using the Fourier shell correlation method (FSCe/o). From these tests, we also conclude that while current alignment methods are sufficiently accurate for stained samples, inaccurate alignments remain a major limitation for high-resolution cryo-electron tomography. PMID:26455556
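Below is a minimal Python sketch of the underlying scoring idea, assuming a slab-like sample of constant thickness and one mean beam intensity per tilt image; tomoAlignEval's actual per-region evaluation is not reproduced here. Beer-Lambert gives ln I(θ) ≈ a − b/cos θ, so the quality of a straight-line fit of ln I against 1/cos θ scores the alignment:

```python
import numpy as np

def loglinearity_score(intensities, tilt_deg):
    """R^2 of the Beer-Lambert log-linear fit: ln I ~ a - b / cos(theta)."""
    x = 1.0 / np.cos(np.deg2rad(np.asarray(tilt_deg)))
    y = np.log(np.asarray(intensities, dtype=float))
    slope, intercept = np.polyfit(x, y, 1)      # least-squares straight line
    resid = y - (slope * x + intercept)
    return 1.0 - resid.var() / y.var()          # ~1: consistent, well-aligned series
```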
A new flux-conserving numerical scheme for the steady, incompressible Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Scott, James R.
1994-01-01
This paper is concerned with the continued development of a new numerical method, the space-time solution element (STS) method, for solving conservation laws. The present work focuses on the two-dimensional, steady, incompressible Navier-Stokes equations. Using first an integral approach, and then a differential approach, the discrete flux conservation equations presented in a recent paper are rederived. Here a simpler method for determining the flux expressions at cell interfaces is given; a systematic and rigorous derivation of the conditions used to simulate the differential form of the governing conservation law(s) is provided; necessary and sufficient conditions for a discrete approximation to satisfy a conservation law in E2 are derived; and an estimate of the local truncation error is given. A specific scheme is then constructed for the solution of the thin airfoil boundary layer problem. Numerical results are presented which demonstrate the ability of the scheme to accurately resolve the developing boundary layer and wake regions using grids which are much coarser than those employed by other numerical methods. It is shown that ten cells in the cross-stream direction are sufficient to accurately resolve the developing airfoil boundary layer.
Integrals for IBS and beam cooling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burov, A. (Fermilab)
Simulation of beam cooling usually requires performing certain integral transformations every time step or so, which is a significant burden on the CPU. Examples are the dispersion integrals (Hilbert transforms) in the stochastic cooling, wake fields and IBS integrals. An original method is suggested for fast and sufficiently accurate computation of the integrals. This method is applied for the dispersion integral. Some methodical aspects of the IBS analysis are discussed.
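The abstract does not spell out the fast method itself; as a hedged illustration, the standard O(N log N) route to a sampled Hilbert transform is the FFT-based analytic signal, which replaces direct O(N²) quadrature of the principal-value dispersion integral at every time step:

```python
import numpy as np
from scipy.signal import hilbert  # analytic signal f + i*H[f], computed via FFT

x = np.linspace(-10.0, 10.0, 4096)
f = np.exp(-x**2)                 # e.g. a Gaussian line density (illustrative)
Hf = np.imag(hilbert(f))          # sampled Hilbert transform of f on the same grid
```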
Stability of rigid rotors supported by air foil bearings: Comparison of two fundamental approaches
NASA Astrophysics Data System (ADS)
Larsen, Jon S.; Santos, Ilmar F.; von Osmanski, Sebastian
2016-10-01
High-speed direct-drive motors enable the use of air foil bearings (AFBs) in a wide range of applications due to the elimination of gear forces. Unfortunately, AFB-supported rotors are lightly damped, and an accurate prediction of their onset speed of instability (OSI) is therefore important. This paper compares two fundamental methods for predicting the OSI. One is based on a nonlinear time-domain simulation; the other is based on a linearised frequency-domain method and a perturbation of the Reynolds equation. Both methods are based on equivalent models and should predict similar results. Significant discrepancies are observed, leading to the question: is the classical frequency-domain method sufficiently accurate? The discrepancies and possible explanations are discussed in detail.
An equivalent domain integral for analysis of two-dimensional mixed mode problems
NASA Technical Reports Server (NTRS)
Raju, I. S.; Shivakumar, K. N.
1989-01-01
An equivalent domain integral (EDI) method for calculating J-integrals for two-dimensional cracked elastic bodies subjected to mixed mode loading is presented. The total and product integrals consist of the sum of an area or domain integral and line integrals on the crack faces. The EDI method gave accurate values of the J-integrals for two mode I and two mixed mode problems. Numerical studies showed that domains consisting of one layer of elements are sufficient to obtain accurate J-integral values. Two procedures for separating the individual modes from the domain integrals are presented. The procedure that uses the symmetric and antisymmetric components of the stress and displacement fields to calculate the individual modes gave accurate values of the integrals for all the problems analyzed.
An object tracking method based on guided filter for night fusion image
NASA Astrophysics Data System (ADS)
Qian, Xiaoyan; Wang, Yuedong; Han, Lei
2016-01-01
Online object tracking is a challenging problem, as it entails learning an effective model to account for appearance changes caused by intrinsic and extrinsic factors. In this paper, we propose a novel online object tracker with a guided image filter for accurate and robust night fusion image tracking. First, frame differencing is applied to produce the coarse target, which helps to generate the observation models. Under the restriction of these models and the local source image, the guided filter generates a sufficient and accurate foreground target. Accurate boundaries of the target can then be extracted from the detection results. Finally, timely updating of the observation models helps to avoid tracking drift. Both qualitative and quantitative evaluations on challenging image sequences demonstrate that the proposed tracking algorithm performs favorably against several state-of-the-art methods.
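A hedged Python sketch of the frame-difference-plus-guided-filter step, using OpenCV's ximgproc contrib module; the threshold, radius, and eps values are illustrative placeholders, not the paper's settings:

```python
import cv2
import numpy as np

def refine_foreground(prev_gray, cur_gray, radius=8, eps=1e-2):
    """Coarse target from frame differencing, refined by a guided filter."""
    diff = cv2.absdiff(cur_gray, prev_gray)                 # coarse motion cue
    _, coarse = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    coarse = coarse.astype(np.float32) / 255.0
    guide = cur_gray.astype(np.float32) / 255.0             # source image guides the filter
    refined = cv2.ximgproc.guidedFilter(guide, coarse, radius, eps)
    return (refined > 0.5).astype(np.uint8)                 # mask snapped to intensity edges
```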
Is School Funding Fair? A National Report Card
ERIC Educational Resources Information Center
Baker, Bruce D.; Sciarra, David G.; Farrie, Danielle
2010-01-01
Building a more accurate, reliable and consistent method of analyzing how states fund public education starts with a critical question: What is fair school funding? In this report, "fair" school funding is defined as a state finance system that ensures equal educational opportunity by providing a sufficient level of funding distributed…
Radiographic evaluation of BFX acetabular component position in dogs.
Renwick, Alasdair; Gemmill, Toby; Pink, Jonathan; Brodbelt, David; McKee, Malcolm
2011-07-01
Objective: To assess the reliability of radiographic measurement of angle of lateral opening (ALO) and angle of version of BFX acetabular cups. Study design: In vitro radiographic study. Sample population: BFX cups (24, 28, and 32 mm). Methods: Total hip replacement constructs (cups, 17 mm femoral head, and a #7 CFX stem) were mounted on an inclinometer. Ventrodorsal radiographs were obtained with ALO varying between 21° and 70° and inclination set at 0°, 10°, 20°, and 30°. Radiographs were randomized using a random sequence generator. Three observers blinded to the radiograph order assessed ALO using 3 methods: (1) an ellipse method based on trigonometry; (2) a measurement from the center of the femoral head to the truncated surface of the cup; and (3) visual estimation using a reference chart. Version was measured by assessing the ventral edge of the truncated surface. Results: ALO methods 2 and 3 were accurate and precise to within 10° and were significantly more accurate and precise than method 1 (P < .001). All methods were significantly less accurate with increasing inclination. Version measurement was accurate and precise to within 7° with 0-20° of inclination, but significantly less accurate with 30° of inclination. Conclusions: Methods 2 and 3, but not method 1, were sufficiently accurate and precise to be clinically useful. Version measurement was clinically useful when inclination was ≤ 20°. © Copyright 2011 by The American College of Veterinary Surgeons.
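On the trigonometry behind ellipse-type measurements of this kind (a hedged aside; the abstract does not spell out the paper's construction): the circular rim of the cup projects radiographically as an ellipse, and a circle of diameter $a$ viewed at a tilt $\varphi$ out of the image plane projects with minor axis $b = a\cos\varphi$, so

$$ \varphi = \arccos\!\left(\frac{b}{a}\right), $$

with ALO then obtained from $\varphi$ by the chosen anatomical convention.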
Can the electronegativity equalization method predict spectroscopic properties?
Verstraelen, T; Bultinck, P
2015-02-05
The electronegativity equalization method is classically used as a method allowing the fast generation of atomic charges using a set of calibrated parameters and provided knowledge of the molecular structure. Recently, it has started being used for the calculation of other reactivity descriptors and for the development of polarizable and reactive force fields. For such applications, it is of interest to know whether the method, through the inclusion of the molecular geometry in the Taylor expansion of the energy, would also allow sufficiently accurate predictions of spectroscopic data. In this work, relevant quantities for IR spectroscopy are considered, namely the dipole derivatives and the Cartesian Hessian. Despite careful calibration of parameters for this specific task, it is shown that the current models yield insufficiently accurate results. Copyright © 2013 Elsevier B.V. All rights reserved.
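For context, the classical EEM charge calculation itself reduces to a single linear solve: each atom's effective electronegativity χ_i* + 2η_i* q_i + Σ_{j≠i} q_j/R_ij is set equal to a common molecular value, subject to a fixed total charge. A minimal Python sketch in atomic units; the per-element χ*, η* parameters are calibrated quantities, so any values passed in here are placeholders:

```python
import numpy as np

def eem_charges(chi, eta, coords, total_charge=0.0):
    """Solve the EEM equations A q = b for atomic charges.
    chi, eta : per-atom electronegativity / hardness parameters (calibrated)
    coords   : (N, 3) nuclear positions in bohr"""
    n = len(chi)
    r = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    off = np.divide(1.0, r, out=np.zeros_like(r), where=r > 0)   # Coulomb terms 1/R_ij
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = off
    A[np.arange(n), np.arange(n)] = 2.0 * np.asarray(eta)        # hardness on the diagonal
    A[:n, n] = -1.0    # common equalized electronegativity (extra unknown)
    A[n, :n] = 1.0     # total-charge constraint: sum(q) = Q
    b = np.concatenate([-np.asarray(chi), [total_charge]])
    sol = np.linalg.solve(A, b)
    return sol[:n], sol[n]   # charges q_i and the equalized electronegativity
```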
Design and control of a macro-micro robot for precise force applications
NASA Technical Reports Server (NTRS)
Wang, Yulun; Mangaser, Amante; Laby, Keith; Jordan, Steve; Wilson, Jeff
1993-01-01
Creating a robot which can delicately interact with its environment has been the goal of much research. Primarily two difficulties have made this goal hard to attain. Control strategies that enable precise force manipulation are difficult to execute in real time because such algorithms have been too computationally complex for available controllers. Also, a robot mechanism which can quickly and precisely execute a force command is difficult to design. Actuation joints must be sufficiently stiff, frictionless, and lightweight so that desired torques can be accurately applied. This paper describes a robotic system which is capable of delicate manipulations. A modular high-performance multiprocessor control system was designed to provide sufficient compute power for executing advanced control methods. An 8 degree-of-freedom macro-micro mechanism was constructed to enable accurate tip forces. Control algorithms based on the impedance control method were derived, coded, and load balanced for maximum execution speed on the multiprocessor system. Delicate force tasks, such as polishing, finishing, cleaning, and deburring, are the target applications of the robot.
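For reference, the target behavior imposed by impedance control, in its standard textbook form (hedged: the paper's exact control law, gains, and macro-micro partition are not given in this abstract), makes the tip respond to contact forces like a programmable mass-spring-damper:

$$ M\,\ddot{\tilde{x}} + B\,\dot{\tilde{x}} + K\,\tilde{x} = F_{\mathrm{ext}}, \qquad \tilde{x} = x - x_d, $$

so the choice of M, B, and K trades stiffness against compliance for delicate force tasks.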
40 CFR Appendix B to Part 61 - Test Methods
Code of Federal Regulations, 2010 CFR
2010-07-01
... below 28 liters/min (1.0 cfm). 8.2.2Perform test runs such that samples are obtained over a period or... cyclic operations, run sufficient tests for the accurate determination of the emissions that occur over... indicated by reddening (liberation of free iodine) in the first impinger. In these cases, the sample run may...
40 CFR Appendix B to Part 61 - Test Methods
Code of Federal Regulations, 2013 CFR
2013-07-01
... below 28 liters/min (1.0 cfm). 8.2.2Perform test runs such that samples are obtained over a period or... cyclic operations, run sufficient tests for the accurate determination of the emissions that occur over... indicated by reddening (liberation of free iodine) in the first impinger. In these cases, the sample run may...
40 CFR Appendix B to Part 61 - Test Methods
Code of Federal Regulations, 2012 CFR
2012-07-01
... below 28 liters/min (1.0 cfm). 8.2.2Perform test runs such that samples are obtained over a period or... cyclic operations, run sufficient tests for the accurate determination of the emissions that occur over... indicated by reddening (liberation of free iodine) in the first impinger. In these cases, the sample run may...
40 CFR Appendix B to Part 61 - Test Methods
Code of Federal Regulations, 2011 CFR
2011-07-01
... below 28 liters/min (1.0 cfm). 8.2.2Perform test runs such that samples are obtained over a period or... cyclic operations, run sufficient tests for the accurate determination of the emissions that occur over... indicated by reddening (liberation of free iodine) in the first impinger. In these cases, the sample run may...
40 CFR Appendix B to Part 61 - Test Methods
Code of Federal Regulations, 2014 CFR
2014-07-01
... below 28 liters/min (1.0 cfm). 8.2.2Perform test runs such that samples are obtained over a period or... cyclic operations, run sufficient tests for the accurate determination of the emissions that occur over... indicated by reddening (liberation of free iodine) in the first impinger. In these cases, the sample run may...
Van Duren, B H; Pandit, H; Beard, D J; Murray, D W; Gill, H S
2009-04-01
The recent development in Oxford lateral unicompartmental knee arthroplasty (UKA) design requires a valid method of assessing its kinematics, in particular the use of single-plane fluoroscopy to reconstruct the 3D kinematics of the implanted knee. The method has been used previously to investigate the kinematics of UKA, but mostly it has been used in conjunction with total knee arthroplasty (TKA). However, no accuracy assessment of the method when used for UKA has previously been reported. In this study we performed computer simulation tests to investigate the effect that the different geometry of the unicompartmental implant has on the accuracy of the method in comparison to total knee implants. A phantom was built to perform in vitro tests to determine the accuracy of the method for UKA. The computer simulations suggested that the use of the method for UKA would prove less accurate than for TKAs. The rotational degrees of freedom for the femur showed the greatest disparity between the UKA and TKA. The phantom tests showed that the in-plane translations were accurate to <0.5 mm RMS, while the out-of-plane translations were less accurate, at 4.1 mm RMS. The rotational accuracies were between 0.6° and 2.3°, which are less accurate than those reported in the literature for TKA; however, the method is sufficient for studying overall knee kinematics.
All-atom 3D structure prediction of transmembrane β-barrel proteins from sequences.
Hayat, Sikander; Sander, Chris; Marks, Debora S; Elofsson, Arne
2015-04-28
Transmembrane β-barrels (TMBs) carry out major functions in substrate transport and protein biogenesis but experimental determination of their 3D structure is challenging. Encouraged by successful de novo 3D structure prediction of globular and α-helical membrane proteins from sequence alignments alone, we developed an approach to predict the 3D structure of TMBs. The approach combines the maximum-entropy evolutionary coupling method for predicting residue contacts (EVfold) with a machine-learning approach (boctopus2) for predicting β-strands in the barrel. In a blinded test for 19 TMB proteins of known structure that have a sufficient number of diverse homologous sequences available, this combined method (EVfold_bb) predicts hydrogen-bonded residue pairs between adjacent β-strands at an accuracy of ∼70%. This accuracy is sufficient for the generation of all-atom 3D models. In the transmembrane barrel region, the average 3D structure accuracy [template-modeling (TM) score] of top-ranked models is 0.54 (ranging from 0.36 to 0.85), with a higher (44%) number of residue pairs in correct strand-strand registration than in earlier methods (18%). Although the nonbarrel regions are predicted less accurately overall, the evolutionary couplings identify some highly constrained loop residues and, for FecA protein, the barrel including the structure of a plug domain can be accurately modeled (TM score = 0.68). Lower prediction accuracy tends to be associated with insufficient sequence information and we therefore expect increasing numbers of β-barrel families to become accessible to accurate 3D structure prediction as the number of available sequences increases.
Ota, Hiroyuki; Lim, Tae-Kyu; Tanaka, Tsuyoshi; Yoshino, Tomoko; Harada, Manabu; Matsunaga, Tadashi
2006-09-18
A novel, automated system, PNE-1080, equipped with eight automated pestle units and a spectrophotometer, was developed for genomic DNA extraction from maize using aminosilane-modified bacterial magnetic particles (BMPs). The use of aminosilane-modified BMPs allowed highly accurate DNA recovery. The (A260−A320):(A280−A320) ratio of the extracted DNA was 1.9 ± 0.1, sufficiently pure for PCR analysis. The PNE-1080 offered rapid assay completion (30 min) with high accuracy. Furthermore, the results of real-time PCR confirmed that our proposed method permitted the accurate determination of genetically modified DNA composition and correlated well with results obtained by conventional cetyltrimethylammonium bromide (CTAB)-based methods.
NLT and extrapolated DLT:3-D cinematography alternatives for enlarging the volume of calibration.
Hinrichs, R N; McLean, S P
1995-10-01
This study investigated the accuracy of the direct linear transformation (DLT) and non-linear transformation (NLT) methods of 3-D cinematography/videography. A comparison of standard DLT, extrapolated DLT, and NLT calibrations showed the standard (non-extrapolated) DLT to be the most accurate, especially when a large number of control points (40-60) were used. The NLT was more accurate than the extrapolated DLT when the level of extrapolation exceeded 100%. The results indicated that when possible one should use the DLT with a control object, sufficiently large as to encompass the entire activity being studied. However, in situations where the activity volume exceeds the size of one's DLT control object, the NLT method should be considered.
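A compact sketch of the DLT reconstruction step itself, in its standard 11-parameter form (variable names are illustrative). Each camera's parameters L1..L11 give u = (L1·X + L2·Y + L3·Z + L4)/(L9·X + L10·Y + L11·Z + 1), and similarly for v; cross-multiplying makes each view linear in (X, Y, Z), so two or more views yield an overdetermined least-squares system:

```python
import numpy as np

def dlt_reconstruct(Ls, uvs):
    """3-D point from >= 2 calibrated views.
    Ls  : list of length-11 DLT parameter vectors (L1..L11, zero-indexed)
    uvs : matching digitized (u, v) image coordinates, one per camera"""
    A, b = [], []
    for L, (u, v) in zip(Ls, uvs):
        A.append([L[0] - u * L[8], L[1] - u * L[9], L[2] - u * L[10]])
        A.append([L[4] - v * L[8], L[5] - v * L[9], L[6] - v * L[10]])
        b += [u - L[3], v - L[7]]
    xyz, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return xyz
```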
Regge calculus and observations. II. Further applications.
NASA Astrophysics Data System (ADS)
Williams, Ruth M.; Ellis, G. F. R.
1984-11-01
The method, developed in an earlier paper, for tracing geodesics of particles and light rays through Regge calculus space-times is applied to a number of problems in the Schwarzschild geometry. It is possible to obtain accurate predictions of light bending by taking sufficiently small Regge blocks. Calculations of perihelion precession, Thomas precession, and the distortion of a ball of fluid moving on a geodesic can also show good agreement with the analytic solution. However, difficulties arise in obtaining accurate predictions for general orbits in these space-times. Applications to other problems in general relativity are discussed briefly.
Using the Nudge and Shove Methods to Adjust Item Difficulty Values.
Royal, Kenneth D
2015-01-01
In any examination, it is important that a sufficient mix of items with varying degrees of difficulty be present to produce desirable psychometric properties and increase instructors' ability to make appropriate and accurate inferences about what a student knows and/or can do. The purpose of this "teaching tip" is to demonstrate how examination items can be affected by the quality of distractors, and to present a simple method for adjusting items to meet difficulty specifications.
Characterization of in-flight performance of ion propulsion systems
NASA Astrophysics Data System (ADS)
Sovey, James S.; Rawlin, Vincent K.
1993-06-01
In-flight measurements of ion propulsion performance, ground test calibrations, and diagnostic performance measurements were reviewed. It was found that accelerometers provided the most accurate in-flight thrust measurements compared with four other methods that were surveyed. An experiment has also demonstrated that pre-flight alignment of the thrust vector was sufficiently accurate so that gimbal adjustments and use of attitude control thrusters were not required to counter disturbance torques caused by thrust vector misalignment. The effects of facility background pressure, facility enhanced charge-exchange reactions, and contamination on ground-based performance measurements are also discussed. Vacuum facility pressures for inert-gas ion thruster life tests and flight qualification tests will have to be less than 2 mPa to ensure accurate performance measurements.
Determination of molybdenum in soils and rocks: A geochemical semimicro field method
Ward, F.N.
1951-01-01
Reconnaissance work in geochemical prospecting requires a simple, rapid, and moderately accurate method for the determination of small amounts of molybdenum in soils and rocks. The useful range of the suggested procedure is from 1 to 32 p.p.m. of molybdenum, but the upper limit can be extended. Duplicate determinations on eight soil samples containing less than 10 p.p.m. of molybdenum agree within 1 p.p.m., and a comparison of field results with those obtained by a conventional laboratory procedure shows that the method is sufficiently accurate for use in geochemical prospecting. The time required for analysis and the quantities of reagents needed have been decreased to provide essentially a "test tube" method for the determination of molybdenum in soils and rocks. With a minimum amount of skill, one analyst can make 30 molybdenum determinations in an 8-hour day.
Lisa J. Bate; Michael J. Wisdom; Barbara C. Wales
2007-01-01
A key element of forest management is the maintenance of sufficient densities of snags (standing dead trees) to support associated wildlife. Management factors that influence snag densities, however, are numerous and complex. Consequently, accurate methods to estimate and model snag densities are needed. Using data collected in 2002 and Current Vegetation Survey (CVS)...
NASA Technical Reports Server (NTRS)
Morduchow, Morris
1955-01-01
A survey of integral methods in laminar-boundary-layer analysis is first given. A simple method, sufficiently accurate for practical purposes, for calculating the properties (including stability) of the laminar compressible boundary layer in an axial pressure gradient with heat transfer at the wall is presented. For flow over a flat plate, the method is applicable for an arbitrarily prescribed distribution of temperature along the surface and for any given constant Prandtl number close to unity. For flow in a pressure gradient, the method is based on a Prandtl number of unity and a uniform wall temperature. A simple and accurate method of determining the separation point in a compressible flow with an adverse pressure gradient over a surface at a given uniform wall temperature is developed. The analysis is based on an extension of the Karman-Pohlhausen method to the momentum and the thermal-energy equations in conjunction with fourth- and especially higher-degree velocity and stagnation-enthalpy profiles.
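For reference, the incompressible prototype of the momentum-integral relation that the Karman-Pohlhausen approach closes with an assumed velocity profile is

$$ \frac{d\theta}{dx} + (2 + H)\,\frac{\theta}{U_e}\,\frac{dU_e}{dx} = \frac{C_f}{2}, $$

with θ the momentum thickness, H = δ*/θ the shape factor, U_e the boundary-layer edge velocity, and C_f the skin-friction coefficient; the report extends this machinery to the compressible momentum and thermal-energy equations with higher-degree profiles.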
A 3-D enlarged cell technique (ECT) for elastic wave modelling of a curved free surface
NASA Astrophysics Data System (ADS)
Wei, Songlin; Zhou, Jianyang; Zhuang, Mingwei; Liu, Qing Huo
2016-09-01
The conventional finite-difference time-domain (FDTD) method for elastic waves suffers from staircasing error when applied to model a curved free surface because of its structured grid. In this work, an improved, stable and accurate 3-D FDTD method for elastic wave modelling on a curved free surface is developed based on the finite volume method and the enlarged cell technique (ECT). To achieve a sufficiently accurate implementation, a finite volume scheme is applied at the curved free surface to remove the staircasing error; meanwhile, to achieve the same stability as the FDTD method without reducing the time step, the ECT is introduced to preserve solution stability by enlarging small irregular cells into adjacent cells under the condition of conservation of force. This method is verified by several 3-D numerical examples. Results show that the method is stable at the Courant stability limit for a regular FDTD grid and has much higher accuracy than the conventional FDTD method.
Swobodnik, W; Klüppelberg, U; Wechsler, J G; Volz, M; Normandin, G; Ditschuneit, H
1985-05-03
This paper introduces a new method to detect the taurine and glycine conjugates of five different bile acids (cholic acid, deoxycholic acid, chenodeoxycholic acid, ursodeoxycholic acid and lithocholic acid) in human bile. Advantages of this method are sufficient separation of compounds within a short period of time and a high rate of reproducibility. Using a mobile phase gradient of acetonitrile and water, modified with tetrabutylammonium hydrogen sulphate (0.0075 mol/l), we were able to maximize the differentiation between ursodeoxycholic acid and lithocholic acid, which is of primary interest during conservative gallstone dissolution therapy. Use of this gradient reduced analysis time to less than 0.5 h. Recovery rates for this modified method ranged from 94% to 100%, and reproducibility was 98%, sufficient for routine clinical applications.
Regional measurement of body nitrogen
NASA Technical Reports Server (NTRS)
Palmer, H. E.
1976-01-01
Studies of methods for determining changes in the muscle mass of arms and legs are described. N-13 measurements were made in phantom and cadaver parts after neutron irradiation. The reproducibility in these measurements was found to be excellent and the radiation dose required to provide sufficient activation was determined. Potassium-40 measurements were made on persons who lost muscle mass due to leg injuries. It appears that K-40 measurements may provide the most accurate and convenient method for determining muscle mass changes.
ERIC Educational Resources Information Center
Spencer, Bryden
2016-01-01
Value-added models are a class of growth models used in education to assign responsibility for student growth to teachers or schools. For value-added models to be used fairly, sufficient statistical precision is necessary for accurate teacher classification. Previous research indicated precision below practical limits. An alternative approach has…
A general method for motion compensation in x-ray computed tomography
NASA Astrophysics Data System (ADS)
Biguri, Ander; Dosanjh, Manjit; Hancock, Steven; Soleimani, Manuchehr
2017-08-01
Motion during data acquisition is a known source of error in medical tomography, resulting in blur artefacts in the regions that move. It is critical to reduce these artefacts in applications such as image-guided radiation therapy as a clearer image translates into a more accurate treatment and the sparing of healthy tissue close to a tumour site. Most research in 4D x-ray tomography involving the thorax relies on respiratory phase binning of the acquired data and reconstructing each of a set of images using the limited subset of data per phase. In this work, we demonstrate a motion-compensation method to reconstruct images from the complete dataset taken during breathing without recourse to phase-binning or breath-hold techniques. As long as the motion is sufficiently well known, the new method can accurately reconstruct an image at any time during the acquisition time span. It can be applied to any iterative reconstruction algorithm.
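A hedged sketch of how known motion enters an iterative reconstruction; the names, the SIRT-style additive update, and the 2-D parallel-beam geometry are illustrative simplifications, not the authors' algorithm:

```python
import numpy as np
from skimage.transform import radon, iradon, warp

def motion_compensated_sirt(sino, angles_deg, inv_maps, fwd_maps,
                            n_iter=20, step=0.1):
    """sino: (n_detectors, n_views). inv_maps[k] / fwd_maps[k] are coordinate
    maps (as accepted by skimage.transform.warp) deforming the reference image
    to its known state at the acquisition time of view k, and back."""
    n = sino.shape[0]
    img = np.zeros((n, n))
    for _ in range(n_iter):
        for k, ang in enumerate(angles_deg):
            moved = warp(img, inv_maps[k])                        # state at time of view k
            resid = sino[:, k] - radon(moved, theta=[ang])[:, 0]  # data mismatch
            corr = iradon(resid[:, None], theta=[ang], filter_name=None)
            img += step * warp(corr, fwd_maps[k])                 # pull back to reference frame
    return img
```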
Low-speed airspeed calibration data for a single-engine research-support aircraft
NASA Technical Reports Server (NTRS)
Holmes, B. J.
1980-01-01
A standard service airspeed system on a single-engine research support airplane was calibrated by the trailing anemometer method. The effects of flaps, power, sideslip, and lag were evaluated. The factory-supplied airspeed calibrations were not sufficiently accurate for high-accuracy flight research applications. The trailing anemometer airspeed calibration was conducted to provide the capability to use the research support airplane to perform pace aircraft airspeed calibrations.
Waste Characterization Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vigil-Holterman, Luciana R.; Naranjo, Felicia Danielle
2016-02-02
This report discusses ways to classify waste as outlined by LANL. Waste Generators must make a waste determination and characterize regulated waste by appropriate analytical testing or use of acceptable knowledge (AK). Use of AK for characterization requires several source documents. Waste characterization documentation must be accurate, sufficient, and current (i.e., updated); relevant and traceable to the waste stream’s generation, characterization, and management; and not merely a list of information sources.
A macro-micro robot for precise force applications
NASA Technical Reports Server (NTRS)
Marzwell, Neville I.; Wang, Yulun
1993-01-01
This paper describes an 8 degree-of-freedom macro-micro robot capable of performing tasks which require accurate force control. Applications such as polishing, finishing, grinding, deburring, and cleaning are a few examples of tasks which need this capability. Currently these tasks are either performed manually or with dedicated machinery because of the lack of a flexible and cost effective tool, such as a programmable force-controlled robot. The basic design and control of the macro-micro robot is described in this paper. A modular high-performance multiprocessor control system was designed to provide sufficient compute power for executing advanced control methods. An 8 degree of freedom macro-micro mechanism was constructed to enable accurate tip forces. Control algorithms based on the impedance control method were derived, coded, and load balanced for maximum execution speed on the multiprocessor system.
Elementary solutions of coupled model equations in the kinetic theory of gases
NASA Technical Reports Server (NTRS)
Kriese, J. T.; Siewert, C. E.; Chang, T. S.
1974-01-01
The method of elementary solutions is employed to solve two coupled integrodifferential equations sufficient for determining temperature-density effects in a linearized BGK model in the kinetic theory of gases. Full-range completeness and orthogonality theorems are proved for the developed normal modes and the infinite-medium Green's function is constructed as an illustration of the full-range formalism. The appropriate homogeneous matrix Riemann problem is discussed, and half-range completeness and orthogonality theorems are proved for a certain subset of the normal modes. The required existence and uniqueness theorems relevant to the H matrix, basic to the half-range analysis, are proved, and an accurate and efficient computational method is discussed. The half-space temperature-slip problem is solved analytically, and a highly accurate value of the temperature-slip coefficient is reported.
NASA Technical Reports Server (NTRS)
Stroosnijder, L.; Lascano, R. J.; Newton, R. W.; Vanbavel, C. H. M.
1984-01-01
A general method is proposed that uses a time series of L-band emissivities as input to a hydrological model for continuously monitoring the net rainfall and evaporation, as well as the water content over the entire soil profile. The approach requires a sufficiently accurate and general relation between soil emissivity and surface moisture content. A model was developed that requires the soil hydraulic properties as an additional input but does not need any weather data. The method is shown to be numerically consistent.
Fast and accurate focusing analysis of large photon sieve using pinhole ring diffraction model.
Liu, Tao; Zhang, Xin; Wang, Lingjie; Wu, Yanxiong; Zhang, Jizhen; Qu, Hemeng
2015-06-10
In this paper, we developed a pinhole ring diffraction model for the focusing analysis of a large photon sieve. Instead of analyzing individual pinholes, we discuss the focusing of all of the pinholes in a single ring. An explicit equation for the diffracted field of an individual pinhole ring is proposed. We investigated the validity range of this generalized model and analytically describe the sufficient conditions for its validity. A practical example and investigation reveal the high accuracy of the pinhole ring diffraction model. This simulation method can be used for fast and accurate focusing analysis of a large photon sieve.
Digital simulation of an arbitrary stationary stochastic process by spectral representation.
Yura, Harold T; Hanson, Steen G
2011-04-01
In this paper we present a straightforward, efficient, and computationally fast method for creating a large number of discrete samples with an arbitrary given probability density function and a specified spectral content. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In contrast to previous work, where the analyses were limited to autoregressive and/or iterative techniques to obtain satisfactory results, we find that a single application of the inverse transform method yields satisfactory results for a wide class of arbitrary probability distributions. Although a single application of the inverse transform technique does not conserve the power spectra exactly, it yields highly accurate numerical results for a wide range of probability distributions and target power spectra that are sufficient for system simulation purposes and can thus be regarded as an accurate engineering approximation, suitable for a wide range of practical applications. A sufficiency condition is presented regarding the range of parameter values where a single application of the inverse transform method yields satisfactory agreement between the simulated and target power spectra, and a series of examples relevant to the optics community are presented and discussed. Outside this parameter range the agreement gracefully degrades but does not distort in shape. Although we demonstrate the method here for stationary random processes, we see no reason why it could not be extended to simulate non-stationary random processes. © 2011 Optical Society of America
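A minimal end-to-end Python sketch of the described procedure; the Lorentzian target spectrum and gamma target distribution below are arbitrary illustrations, not choices from the paper:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 1 << 16
white = rng.standard_normal(n)

# Step 1: color the Gaussian noise, imposing the target power spectrum by FFT filtering.
f = np.fft.rfftfreq(n, d=1.0)
psd = 1.0 / (1.0 + (f / 0.01) ** 2)          # target spectrum (Lorentzian, illustrative)
colored = np.fft.irfft(np.fft.rfft(white) * np.sqrt(psd), n)
colored /= colored.std()                     # unit-variance colored Gaussian

# Step 2: single inverse-transform step, Gaussian CDF to uniform, then target quantile.
u = stats.norm.cdf(colored)
samples = stats.gamma.ppf(u, a=2.0)          # samples with the target marginal pdf
```

As the abstract notes, the memoryless point transform in step 2 perturbs the spectrum somewhat, which is why the result is an engineering approximation rather than exact.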
Unsupervised Topic Discovery by Anomaly Detection
2013-09-01
... There is a strong interest in the analysis of these opinions and comments, as they provide useful information about the sentiments ... them as topics. The difficulty in this approach is finding a good set of keywords that accurately represents the documents. The method used to ...
A novel endoscopic fluorescent band ligation method for tumor localization.
Hyun, Jong Hee; Kim, Seok-Ki; Kim, Kwang Gi; Kim, Hong Rae; Lee, Hyun Min; Park, Sunup; Kim, Sung Chun; Choi, Yongdoo; Sohn, Dae Kyung
2016-10-01
Accurate tumor localization is essential for minimally invasive surgery. This study describes the development of a novel endoscopic fluorescent band ligation method for the rapid and accurate identification of tumor sites during surgery. The method utilized a fluorescent rubber band, made of indocyanine green (ICG) and a liquid rubber solution mixture, as well as a near-infrared fluorescence laparoscopic system with a dual light source using a high-powered light-emitting diode (LED) and a 785-nm laser diode. The fluorescent rubber bands were endoscopically placed on the mucosae of porcine stomachs and colons. During subsequent conventional laparoscopic stomach and colon surgery, the fluorescent bands were assayed using the near-infrared fluorescence laparoscopy system. The locations of the fluorescent clips were clearly identified on the fluorescence images in real time. The system was able to distinguish the two or three bands marked on the mucosal surfaces of the stomach and colon. Resection margins around the fluorescent bands were sufficient in the resected specimens obtained during stomach and colon surgery. These novel endoscopic fluorescent bands could be rapidly and accurately localized during stomach and colon surgery. Use of these bands may make possible the excision of exact target sites during minimally invasive gastrointestinal surgery.
Identifying X-consumers using causal recipes: "whales" and "jumbo shrimps" casino gamblers.
Woodside, Arch G; Zhang, Mann
2012-03-01
X-consumers are the extremely frequent (top 2-3%) users who typically consume 25% of a product category. This article shows how to use fuzzy-set qualitative comparative analysis (QCA) to provide "causal recipes" sufficient for profiling X-consumers accurately. The study extends Dik Twedt's "heavy-half" product users for building theory and strategies to nurture or control X-behavior. The study here applies QCA to offer configurations that are sufficient in identifying "whales" and "jumbo shrimps" among X-casino gamblers. The findings support the principle that not all X-consumers are alike. The theory and method are applicable for identifying the degree of consistency and coverage of alternative X-consumers among users of all product-service category and brands.
Use of historical control data for assessing treatment effects in clinical trials.
Viele, Kert; Berry, Scott; Neuenschwander, Beat; Amzal, Billy; Chen, Fang; Enas, Nathan; Hobbs, Brian; Ibrahim, Joseph G; Kinnersley, Nelson; Lindborg, Stacy; Micallef, Sandrine; Roychoudhury, Satrajit; Thompson, Laura
2014-01-01
Clinical trials rarely, if ever, occur in a vacuum. Generally, large amounts of clinical data are available prior to the start of a study, particularly on the current study's control arm. There is obvious appeal in using (i.e., 'borrowing') this information. With historical data providing information on the control arm, more trial resources can be devoted to the novel treatment while retaining accurate estimates of the current control arm parameters. This can result in more accurate point estimates, increased power, and reduced type I error in clinical trials, provided the historical information is sufficiently similar to the current control data. If this assumption of similarity is not satisfied, however, one can acquire increased mean square error of point estimates due to bias and either reduced power or increased type I error depending on the direction of the bias. In this manuscript, we review several methods for historical borrowing, illustrating how key parameters in each method affect borrowing behavior, and then, we compare these methods on the basis of mean square error, power and type I error. We emphasize two main themes. First, we discuss the idea of 'dynamic' (versus 'static') borrowing. Second, we emphasize the decision process involved in determining whether or not to include historical borrowing in terms of the perceived likelihood that the current control arm is sufficiently similar to the historical data. Our goal is to provide a clear review of the key issues involved in historical borrowing and provide a comparison of several methods useful for practitioners. Copyright © 2013 John Wiley & Sons, Ltd.
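As one concrete example of a static borrowing method of the kind reviewed (the manuscript covers several; this one is the power prior), the historical likelihood is discounted by a fixed weight a0 ∈ [0, 1]:

$$ \pi(\theta \mid D_0, a_0) \;\propto\; L(\theta \mid D_0)^{a_0}\, \pi_0(\theta), $$

so a0 = 0 ignores the historical controls, a0 = 1 pools them fully with the current data, and dynamic methods in effect let the observed agreement between the two datasets set the discount.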
Improvements to robotics-inspired conformational sampling in rosetta.
Stein, Amelie; Kortemme, Tanja
2013-01-01
To accurately predict protein conformations in atomic detail, a computational method must be capable of sampling models sufficiently close to the native structure. All-atom sampling is difficult because of the vast number of possible conformations and extremely rugged energy landscapes. Here, we test three sampling strategies to address these difficulties: conformational diversification, intensification of torsion and omega-angle sampling and parameter annealing. We evaluate these strategies in the context of the robotics-based kinematic closure (KIC) method for local conformational sampling in Rosetta on an established benchmark set of 45 12-residue protein segments without regular secondary structure. We quantify performance as the fraction of sub-Angstrom models generated. While improvements with individual strategies are only modest, the combination of intensification and annealing strategies into a new "next-generation KIC" method yields a four-fold increase over standard KIC in the median percentage of sub-Angstrom models across the dataset. Such improvements enable progress on more difficult problems, as demonstrated on longer segments, several of which could not be accurately remodeled with previous methods. Given its improved sampling capability, next-generation KIC should allow advances in other applications such as local conformational remodeling of multiple segments simultaneously, flexible backbone sequence design, and development of more accurate energy functions.
Han, Mira V; Thomas, Gregg W C; Lugo-Martinez, Jose; Hahn, Matthew W
2013-08-01
Current sequencing methods produce large amounts of data, but genome assemblies constructed from these data are often fragmented and incomplete. Incomplete and error-filled assemblies result in many annotation errors, especially in the number of genes present in a genome. This means that methods attempting to estimate rates of gene duplication and loss often will be misled by such errors and that rates of gene family evolution will be consistently overestimated. Here, we present a method that takes these errors into account, allowing one to accurately infer rates of gene gain and loss among genomes even with low assembly and annotation quality. The method is implemented in the newest version of the software package CAFE, along with several other novel features. We demonstrate the accuracy of the method with extensive simulations and reanalyze several previously published data sets. Our results show that errors in genome annotation do lead to higher inferred rates of gene gain and loss but that CAFE 3 sufficiently accounts for these errors to provide accurate estimates of important evolutionary parameters.
Efficient kinetic method for fluid simulation beyond the Navier-Stokes equation.
Zhang, Raoyang; Shan, Xiaowen; Chen, Hudong
2006-10-01
We present a further theoretical extension to the kinetic-theory-based formulation of the lattice Boltzmann method of Shan [J. Fluid Mech. 550, 413 (2006)]. In addition to the higher-order projection of the equilibrium distribution function and a sufficiently accurate Gauss-Hermite quadrature in the original formulation, a regularization procedure is introduced in this paper. This procedure ensures a consistent order of accuracy control over the nonequilibrium contributions in the Galerkin sense. Using this formulation, we construct a specific lattice Boltzmann model that accurately incorporates up to third-order hydrodynamic moments. Numerical evidence demonstrates that the extended model overcomes some major defects existing in conventionally known lattice Boltzmann models, so that fluid flows at finite Knudsen number Kn can be more quantitatively simulated. Results from force-driven Poiseuille flow simulations predict the Knudsen's minimum and the asymptotic behavior of flow flux at large Kn.
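A hedged sketch of the regularization step in its commonly used second-order form (the paper's construction generalizes this within the Hermite framework to higher orders): before collision, the nonequilibrium populations are replaced by their projection onto the supported Hermite modes,

$$ f_i^{\mathrm{neq}} \;\rightarrow\; \frac{w_i}{2 c_s^4}\, \mathbf{Q}_i : \Pi^{\mathrm{neq}}, \qquad \mathbf{Q}_i = \mathbf{c}_i \mathbf{c}_i - c_s^2 \mathbf{I}, \qquad \Pi^{\mathrm{neq}} = \sum_j \mathbf{c}_j \mathbf{c}_j\, f_j^{\mathrm{neq}}, $$

which discards nonequilibrium content the lattice cannot represent and thereby controls the order of accuracy of the nonequilibrium contribution.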
Mobayen, Saleh
2018-06-01
This paper proposes a combination of composite nonlinear feedback and integral sliding mode techniques for fast and accurate chaos synchronization of uncertain chaotic systems with Lipschitz nonlinear functions, time-varying delays and disturbances. The composite nonlinear feedback method allows accurate following of the master chaotic system, and the integral sliding mode control provides an invariance property which rejects the perturbations and preserves the stability of the closed-loop system. Based on the Lyapunov-Krasovskii stability theory and linear matrix inequalities, a novel sufficient condition is offered for the chaos synchronization of uncertain chaotic systems. This method not only guarantees robustness against perturbations and time-delays but also eliminates the reaching phase and avoids the chattering problem. Simulation results demonstrate that the suggested procedure leads to excellent control performance. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Computation of records of streamflow at control structures
Collins, Dannie L.
1977-01-01
Traditional methods of computing streamflow records on large, low-gradient streams require a continuous record of water-surface slope over a natural channel reach. This slope must be of sufficient magnitude to be accurately measured with available stage-measuring devices. On highly regulated streams, this slope approaches zero during periods of low flow, and accurate measurement is difficult. Methods are described to calibrate multipurpose regulating control structures to more accurately compute streamflow records on highly regulated streams. Hydraulic theory, assuming steady, uniform flow during a computational interval, is described for five different types of flow control: Tainter gates, hydraulic turbines, fixed spillways, navigation locks, and crest gates. Detailed calibration procedures are described for the five different controls as well as for several flow regimes for some of the controls. The instrumentation package and computer programs necessary to collect and process the field data are discussed. Two typical calibration procedures and measurement data are presented to illustrate the accuracy of the methods.
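As one concrete instance of the steady-flow relations involved (hedged: the report's fitted forms and coefficients are its own), flow through a partially opened Tainter gate is commonly treated as orifice flow,

$$ Q = C_d\, B\, h_g \sqrt{2 g\, \Delta h}, $$

with C_d a calibrated discharge coefficient, B the gate width, h_g the gate opening, and Δh the head across the structure.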
Optimization of cutting parameters for machining time in turning process
NASA Astrophysics Data System (ADS)
Mavliutov, A. R.; Zlotnikov, E. G.
2018-03-01
This paper describes the most effective methods for nonlinear constrained optimization of cutting parameters in the turning process. Among them are the Linearization Programming Method with the Dual-Simplex algorithm, the Interior Point method, and the Augmented Lagrangian Genetic Algorithm (ALGA). Each of them is tested on an actual example: the minimization of machining time in the turning process. The computation was conducted in the MATLAB environment. The comparative results obtained from the application of these methods show that the optimal values of the linearized objective and the original function are the same. ALGA gives sufficiently accurate values; moreover, when the algorithm uses the Hybrid function with the Interior Point algorithm, the resulting values have the maximal accuracy.
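A hedged Python analogue of such a constrained run (the paper works in MATLAB; all numbers and the quality-style constraint below are placeholders, not the paper's data):

```python
import numpy as np
from scipy.optimize import minimize

D, L_cut = 50.0, 200.0                        # workpiece diameter and pass length, mm

def machining_time(p):
    v, f = p                                  # cutting speed (m/min), feed (mm/rev)
    n_rpm = 1000.0 * v / (np.pi * D)          # spindle speed, rev/min
    return L_cut / (n_rpm * f)                # single-pass machining time, min

constraints = [{"type": "ineq",               # g(p) >= 0 : illustrative roughness-type limit
                "fun": lambda p: 0.9 - 1e-3 * p[0] * p[1] ** 0.8}]
res = minimize(machining_time, x0=[100.0, 0.2],
               bounds=[(60.0, 300.0), (0.05, 0.5)],
               constraints=constraints, method="SLSQP")
print(res.x, res.fun)                         # optimal (v, f) and minimum time
```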
Saive, Anne-Lise; Royet, Jean-Pierre; Garcia, Samuel; Thévenet, Marc; Plailly, Jane
2015-01-01
Episodic memory is defined as the conscious retrieval of specific past events. Whether accurate episodic retrieval requires a recollective experience or if a feeling of knowing is sufficient remains unresolved. We recently devised an ecological approach to investigate the controlled cued-retrieval of episodes composed of unnamable odors (What) located spatially (Where) within a visual context (Which context). By combining the Remember/Know procedure with our laboratory-ecological approach in an original way, the present study demonstrated that the accurate odor-evoked retrieval of complex and multimodal episodes overwhelmingly required conscious recollection. A feeling of knowing, even when associated with a high level of confidence, was not sufficient to generate accurate episodic retrieval. Interestingly, we demonstrated that the recollection of accurate episodic memories was promoted by odor retrieval-cue familiarity and describability. In conclusion, our study suggested that semantic knowledge about retrieval-cues increased the recollection which is the state of awareness required for the accurate retrieval of complex episodic memories. PMID:26630170
Ultrasonic technique for characterizing skin burns
Goans, Ronald E.; Cantrell, Jr., John H.; Meyers, F. Bradford; Stambaugh, Harry D.
1978-01-01
This invention, a method for ultrasonically determining the depth of a skin burn, is based on the finding that the acoustical impedance of burned tissue differs sufficiently from that of live tissue to permit ultrasonic detection of the interface between the burn and the underlying unburned tissue. The method is simple, rapid, and accurate. As compared with conventional practice, it provides the important advantage of permitting much earlier determination of whether a burn is of the first, second, or third degree. In the case of severe burns, the usual two- to three-week delay before surgery may be reduced to about 3 days or less.
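The depth estimate itself is a pulse-echo time-of-flight calculation; a minimal sketch (the tissue sound speed is a typical assumed value, not one taken from the patent):

```python
def burn_depth_mm(echo_delay_us, v_tissue=1.54):
    """Depth of the burn/viable-tissue interface from the pulse-echo
    delay in microseconds; v_tissue ~1.54 mm/us is a typical soft-tissue
    sound speed (assumed). The factor of 2 accounts for the round trip."""
    return v_tissue * echo_delay_us / 2.0

print(burn_depth_mm(1.3))   # a 1.3 us echo delay -> roughly 1.0 mm deep burn
```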
Improving Acoustic Models by Watching Television
NASA Technical Reports Server (NTRS)
Witbrock, Michael J.; Hauptmann, Alexander G.
1998-01-01
Obtaining sufficient labelled training data is a persistent difficulty for speech recognition research. Although well-transcribed data is expensive to produce, there is a constant stream of challenging speech data with rough transcriptions broadcast as closed-captioned television. We describe a reliable unsupervised method for identifying accurately transcribed sections of these broadcasts, and show how these segments can be used to train a recognition system. Starting from acoustic models trained on the Wall Street Journal database, a single iteration of our training method reduced the word error rate on an independent broadcast television news test set from 62.2% to 59.5%.
The accuracy of ultrasound for measurement of mobile- bearing motion.
Aigner, Christian; Radl, Roman; Pechmann, Michael; Rehak, Peter; Stacher, Rudolf; Windhager, Reinhard
2004-04-01
After anterior cruciate ligament-sacrificing total knee replacement, mobile bearings sometimes have paradoxic movement, but the implications of such movement on function, wear, and implant survival are not known. To study this potential problem, accurate, reliable, widely available, and inexpensive tools for in vivo mobile-bearing motion analyses are needed. We developed a method using 8-MHz ultrasound to analyze mobile-bearing motion and ascertained its accuracy, precision, and reliability compared with plain and standard digital radiographs. The anterior rim of the mobile bearing was the target for all methods. The radiographs were taken in a horizontal plane at neutral rotation and at incremental external and internal rotations. Five investigators examined four positions of the mobile bearing with all three methods. The accuracy and precision were: ultrasound, 0.7 mm and 0.2 mm; digital radiographs, 0.4 mm and 0.2 mm; and plain radiographs, 0.7 mm and 0.3 mm. The interrater and intrarater reliability ranged from 0.3 to 0.4 mm and from 0.1 to 0.2 mm, respectively. The difference between the methods was not significant at neutral rotation, but ultrasound was significantly more accurate at any rotation of one degree or greater. Ultrasound at 8 MHz provides accuracy and reliability suitable for evaluation of in vivo meniscal bearing motion. Whether this method or others are sufficiently accurate to detect motion leading to abnormal wear is not known.
Monitoring Marine Weather Systems Using Quikscat and TRMM Data
NASA Technical Reports Server (NTRS)
Liu, W.; Tang, W.; Datta, A.; Hsu, C.
1999-01-01
We neither understand nor are able to predict marine storms, particularly tropical cyclones, sufficiently well, because ground-based measurements are sparse and operational numerical weather prediction models have neither sufficient spatial resolution nor accurate parameterization of the physics.
NASA Technical Reports Server (NTRS)
Raju, I. S.; Shivakumar, K. N.
1989-01-01
An equivalent domain integral (EDI) method for calculating J-integrals for two-dimensional cracked elastic bodies is presented. The details of the method and its implementation are presented for isoparametric elements. The total and product integrals consist of the sum of an area (domain) integral and line integrals on the crack faces. The line integrals vanish only when the crack faces are traction free and the loading is either pure mode I or pure mode II or a combination of both with only the square-root singular term in the stress field. The EDI method gave accurate values of the J-integrals for two mode I and two mixed mode problems. Numerical studies showed that domains consisting of one layer of elements are sufficient to obtain accurate J-integral values. Two procedures for separating the individual modes from the domain integrals are presented. The procedure that uses the symmetric and antisymmetric components of the stress and displacement fields to calculate the individual modes gave accurate values of the integrals for all problems analyzed. The EDI method, when applied to a problem of an interface crack in two different materials, showed that the mode I and mode II components are domain dependent while the total integral is not. This behavior is caused by the presence of the oscillatory part of the singularity in bimaterial crack problems. The EDI method thus shows behavior similar to the virtual crack closure method for bimaterial problems.
Tian, Kai; Chen, Xiaowei; Luan, Binquan; Singh, Prashant; Yang, Zhiyu; Gates, Kent S; Lin, Mengshi; Mustapha, Azlin; Gu, Li-Qun
2018-05-22
Accurate and rapid detection of single-nucleotide polymorphisms (SNPs) in pathogenic mutants is crucial for many fields such as food safety regulation and disease diagnostics. Current detection methods involve laborious sample preparations and expensive characterizations. Here, we investigated a single locked nucleic acid (LNA) approach, facilitated by a nanopore single-molecule sensor, to accurately determine SNPs for detection of Shiga toxin producing Escherichia coli (STEC) serotype O157:H7, and cancer-derived EGFR L858R and KRAS G12D driver mutations. Current LNA applications require the incorporation and optimization of multiple LNA nucleotides, but we found that in the nanopore system a single LNA introduced into the probe is sufficient to enhance the SNP discrimination capability by over 10-fold, allowing accurate detection of the pathogenic mutant DNA mixed with a large amount of the wild-type DNA. Importantly, the molecular mechanistic study suggests that such a significant improvement is due to the effect of the single LNA, which both stabilizes the fully matched base pair and destabilizes the mismatched base pair. This sensitive method, with a simplified, low-cost, easy-to-operate LNA design, could be generalized for various applications that need rapid and accurate identification of single-nucleotide variations.
EEG source localization: Sensor density and head surface coverage.
Song, Jasmine; Davey, Colin; Poulsen, Catherine; Luu, Phan; Turovets, Sergei; Anderson, Erik; Li, Kai; Tucker, Don
2015-12-30
The accuracy of EEG source localization depends on a sufficient sampling of the surface potential field, an accurate conducting volume estimation (head model), and a suitable and well-understood inverse technique. The goal of the present study is to examine the effect of sampling density and coverage on the ability to accurately localize sources, using common linear inverse weight techniques, at different depths. Several inverse methods are examined, using commonly adopted head conductivity values. Simulation studies were employed to examine the effect of spatial sampling of the potential field at the head surface, in terms of sensor density and coverage of the inferior and superior head regions. In addition, the effects of sensor density and coverage are investigated in the source localization of epileptiform EEG. Greater sensor density improves source localization accuracy. Moreover, across all sampling densities and inverse methods, adding samples on the inferior surface improves the accuracy of source estimates at all depths. More accurate source localization of EEG data can be achieved with high spatial sampling of the head surface electrodes. The most accurate source localization is obtained when the voltage surface is densely sampled over both the superior and inferior surfaces. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
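Linear inverse weights of the kind examined here have a compact closed form; a minimum-norm sketch in which a random matrix stands in for a real lead field from a head model (sensor and source counts are illustrative):

```python
import numpy as np

def minimum_norm(L, v, lam=1e-2):
    """Regularized minimum-norm source estimate.

    L: (n_sensors, n_sources) lead-field matrix from the head model,
    v: (n_sensors,) measured potentials. Returns the source vector j
    minimizing ||L j - v||^2 + lam ||j||^2."""
    n = L.shape[0]
    return L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n), v)

# toy example: 64 sensors, 500 candidate sources, one active source
rng = np.random.default_rng(0)
L = rng.standard_normal((64, 500))
j_true = np.zeros(500); j_true[42] = 1.0
v = L @ j_true + 0.01 * rng.standard_normal(64)
print(int(np.abs(minimum_norm(L, v)).argmax()))  # peaks near the true source (42)
```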
Nariai, N; Kim, S; Imoto, S; Miyano, S
2004-01-01
We propose a statistical method to estimate gene networks from DNA microarray data and protein-protein interactions. Because physical interactions between proteins or multiprotein complexes are likely to regulate biological processes, using only mRNA expression data is not sufficient for estimating a gene network accurately. Our method adds knowledge about protein-protein interactions to the estimation method of gene networks under a Bayesian statistical framework. In the estimated gene network, a protein complex is modeled as a virtual node based on principal component analysis. We show the effectiveness of the proposed method through the analysis of Saccharomyces cerevisiae cell cycle data. The proposed method improves the accuracy of the estimated gene networks, and successfully identifies some biological facts.
Forecasting VaR and ES of stock index portfolio: A Vine copula method
NASA Astrophysics Data System (ADS)
Zhang, Bangzheng; Wei, Yu; Yu, Jiang; Lai, Xiaodong; Peng, Zhenfeng
2014-12-01
Risk measurement has both theoretical and practical significance in risk management. Using a daily sample of 10 international stock indices, this paper first models the internal structures among different stock markets with C-Vine, D-Vine and R-Vine copula models. Secondly, the Value-at-Risk (VaR) and Expected Shortfall (ES) of the international stock market portfolio are forecasted using a Monte Carlo method based on the estimated dependence of the different Vine copulas. Finally, the accuracy of the VaR and ES measurements obtained from the different statistical models is evaluated by UC, IND, CC and Posterior analysis. The empirical results show that the VaR forecasts at the quantile levels of 0.9, 0.95, 0.975 and 0.99 with the three kinds of Vine copula models are sufficiently accurate. Several traditional methods, such as historical simulation, mean-variance and DCC-GARCH models, fail to pass the CC backtesting. The Vine copula methods can accurately forecast the ES of the portfolio on the basis of the VaR measurement, and the D-Vine copula model is superior to the other Vine copulas.
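Once dependent returns have been simulated from the fitted copula, the VaR and ES estimates themselves are one-liners; a hedged sketch in which a plain Gaussian sample stands in for the vine-copula draws:

```python
import numpy as np

def var_es(sim_returns, q=0.99):
    """One-day VaR and ES at confidence level q from Monte Carlo
    portfolio returns (losses are the negated returns)."""
    losses = -np.asarray(sim_returns)
    var = np.quantile(losses, q)
    es = losses[losses >= var].mean()   # average loss beyond the VaR
    return var, es

rng = np.random.default_rng(1)
print(var_es(rng.normal(0.0003, 0.012, 100_000), q=0.99))
```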
Performance of quantum Monte Carlo for calculating molecular bond lengths
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cleland, Deidre M., E-mail: deidre.cleland@csiro.au; Per, Manolo C., E-mail: manolo.per@csiro.au
2016-03-28
This work investigates the accuracy of real-space quantum Monte Carlo (QMC) methods for calculating molecular geometries. We present the equilibrium bond lengths of a test set of 30 diatomic molecules calculated using variational Monte Carlo (VMC) and diffusion Monte Carlo (DMC) methods. The effect of different trial wavefunctions is investigated using single determinants constructed from Hartree-Fock (HF) and Density Functional Theory (DFT) orbitals with LDA, PBE, and B3LYP functionals, as well as small multi-configurational self-consistent field (MCSCF) multi-determinant expansions. When compared to experimental geometries, all DMC methods exhibit smaller mean-absolute deviations (MADs) than those given by HF, DFT, and MCSCF. The most accurate MAD of (3 ± 2) × 10⁻³ Å is achieved using DMC with a small multi-determinant expansion. However, the more computationally efficient multi-determinant VMC method has a similar MAD of only (4.0 ± 0.9) × 10⁻³ Å, suggesting that QMC forces calculated from the relatively simple VMC algorithm may often be sufficient for accurate molecular geometries.
A Visual Servoing-Based Method for ProCam Systems Calibration
Berry, Francois; Aider, Omar Ait; Mosnier, Jeremie
2013-01-01
Projector-camera systems are currently used in a wide field of applications, such as 3D reconstruction and augmented reality, and can provide accurate measurements, depending on the configuration and calibration. Frequently, the calibration task is divided into two steps: camera calibration followed by projector calibration. The latter still poses certain problems that are not easy to solve, such as the difficulty in obtaining a set of 2D–3D points to compute the projection matrix between the projector and the world. Existing methods are either not sufficiently accurate or not flexible. We propose an easy and automatic method to calibrate such systems that consists in projecting a calibration pattern and superimposing it automatically on a known printed pattern. The projected pattern is provided by a virtual camera observing a virtual pattern in an OpenGL model. The projector displays what the virtual camera visualizes. Thus, the projected pattern can be controlled and superimposed on the printed one with the aid of visual servoing. Our experimental results compare favorably with those of other methods considering both usability and accuracy. PMID:24084121
Muñoz, Mario A; Smith-Miles, Kate A
2017-01-01
This article presents a method for the objective assessment of an algorithm's strengths and weaknesses. Instead of examining the performance of only one or more algorithms on a benchmark set, or generating custom problems that maximize the performance difference between two algorithms, our method quantifies both the nature of the test instances and the algorithm performance. Our aim is to gather information about possible phase transitions in performance, that is, the points in which a small change in problem structure produces algorithm failure. The method is based on the accurate estimation and characterization of the algorithm footprints, that is, the regions of instance space in which good or exceptional performance is expected from an algorithm. A footprint can be estimated for each algorithm and for the overall portfolio. Therefore, we select a set of features to generate a common instance space, which we validate by constructing a sufficiently accurate prediction model. We characterize the footprints by their area and density. Our method identifies complementary performance between algorithms, quantifies the common features of hard problems, and locates regions where a phase transition may lie.
Measurement of bedload transport in sand-bed rivers: a look at two indirect sampling methods
Holmes, Robert R.; Gray, John R.; Laronne, Jonathan B.; Marr, Jeffrey D.G.
2010-01-01
Sand-bed rivers present unique challenges to accurate measurement of the bedload transport rate using the traditional direct sampling methods of direct traps (for example the Helley-Smith bedload sampler). The two major issues are: (1) oversampling of sand transport caused by “mining” of sand due to the flow disturbance induced by the presence of the sampler, and (2) clogging of the mesh bag with sand particles, reducing the hydraulic efficiency of the sampler. Indirect measurement methods hold promise in that, unlike direct methods, no transport-altering flow disturbance near the bed occurs. The bedform velocimetry method utilizes a measure of the bedform geometry and the speed of bedform translation to estimate the bedload transport through mass balance. The bedform velocimetry method is readily applied for the estimation of bedload transport in large sand-bed rivers so long as prominent bedforms are present and the streamflow discharge is steady for long enough to provide sufficient bedform translation between the successive bathymetric data sets. Bedform velocimetry in small sand-bed rivers is often problematic due to rapid variation within the hydrograph. The bottom-track bias feature of the acoustic Doppler current profiler (ADCP) has been utilized to accurately estimate the virtual velocities of sand-bed rivers. Coupling measurement of the virtual velocity with an accurate determination of the active depth of the streambed sediment movement is another method to measure bedload transport, which will be termed the “virtual velocity” method. Much research remains to develop methods and determine the accuracy of the virtual velocity method in small sand-bed rivers.
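The mass balance behind bedform velocimetry reduces to a simple product; a sketch with illustrative values (the shape factor and porosity are typical assumptions, not values from the paper):

```python
def bedload_rate(bedform_height, bedform_celerity, porosity=0.4, shape=0.5):
    """Unit bedload transport rate q_b (m^2/s) from bedform geometry and
    translation speed between successive bathymetric surveys:
    q_b = (1 - p) * beta * H * V_c, with shape factor beta ~ 0.5 for
    roughly triangular dunes and sediment porosity p."""
    return (1.0 - porosity) * shape * bedform_height * bedform_celerity

print(bedload_rate(0.6, 2.0e-4))   # 0.6 m dunes translating at 0.2 mm/s
```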
NASA Astrophysics Data System (ADS)
Sadeghifar, Hamidreza
2015-10-01
Developing general methods that rely on column data for the efficiency estimation of operating (existing) distillation columns has been overlooked in the literature. Most of the available methods are based on empirical mass transfer and hydraulic relations correlated to laboratory data. Therefore, these methods may not be sufficiently accurate when applied to industrial columns. In this paper, an applicable and accurate method was developed for the efficiency estimation of distillation columns filled with trays. This method can calculate efficiency as well as mass and heat transfer coefficients without using any empirical mass transfer or hydraulic correlations and without the need to estimate operational or hydraulic parameters of the column. For example, the method does not need to estimate the tray interfacial area, which can be its most important advantage over all the available methods. The method can be used for the efficiency prediction of any tray in distillation columns. For the efficiency calculation, the method employs the column data and uses the true rates of the mass and heat transfers occurring inside the operating column. It must be emphasized that estimating the efficiency of an operating column is to be distinguished from estimating that of a column being designed.
How little data is enough? Phase-diagram analysis of sparsity-regularized X-ray computed tomography
Jørgensen, J. S.; Sidky, E. Y.
2015-01-01
We introduce phase-diagram analysis, a standard tool in compressed sensing (CS), to the X-ray computed tomography (CT) community as a systematic method for determining how few projections suffice for accurate sparsity-regularized reconstruction. In CS, a phase diagram is a convenient way to study and express certain theoretical relations between sparsity and sufficient sampling. We adapt phase-diagram analysis for empirical use in X-ray CT for which the same theoretical results do not hold. We demonstrate in three case studies the potential of phase-diagram analysis for providing quantitative answers to questions of undersampling. First, we demonstrate that there are cases where X-ray CT empirically performs comparably with a near-optimal CS strategy, namely taking measurements with Gaussian sensing matrices. Second, we show that, in contrast to what might have been anticipated, taking randomized CT measurements does not lead to improved performance compared with standard structured sampling patterns. Finally, we show preliminary results of how well phase-diagram analysis can predict the sufficient number of projections for accurately reconstructing a large-scale image of a given sparsity by means of total-variation regularization. PMID:25939620
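One point of such an empirical phase diagram can be computed by solving the l1-minimization (basis pursuit) problem as a linear program and recording the recovery rate; a small sketch with Gaussian sensing matrices (problem sizes are illustrative):

```python
import numpy as np
from scipy.optimize import linprog

def l1_recover(A, b):
    """Basis pursuit min ||x||_1 s.t. Ax = b, posed as an LP in (x+, x-)."""
    m, n = A.shape
    res = linprog(c=np.ones(2 * n), A_eq=np.hstack([A, -A]), b_eq=b,
                  bounds=[(0, None)] * (2 * n), method='highs')
    return res.x[:n] - res.x[n:]

def success_fraction(n=60, m=30, k=5, trials=20, tol=1e-4, seed=0):
    """Empirical recovery rate at one (delta, rho) = (m/n, k/m) point of
    the phase diagram."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        A = rng.standard_normal((m, n))
        x = np.zeros(n)
        x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        hits += np.linalg.norm(l1_recover(A, A @ x) - x) < tol
    return hits / trials

print(success_fraction())   # near 1.0 well below the phase transition
```

Sweeping (m/n, k/m) over a grid, and repeating with CT projection matrices in place of the Gaussian ones, yields empirical diagrams of the kind the paper compares.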
Comparison of DNA extraction methods for human gut microbial community profiling.
Lim, Mi Young; Song, Eun-Ji; Kim, Sang Ho; Lee, Jangwon; Nam, Young-Do
2018-03-01
The human gut harbors a vast range of microbes that have significant impact on health and disease. Therefore, gut microbiome profiling holds promise for use in early diagnosis and precision medicine development. Accurate profiling of the highly complex gut microbiome requires DNA extraction methods that provide sufficient coverage of the original community as well as adequate quality and quantity. We tested nine different DNA extraction methods using three commercial kits (TianLong Stool DNA/RNA Extraction Kit (TS), QIAamp DNA Stool Mini Kit (QS), and QIAamp PowerFecal DNA Kit (QP)), with or without an additional bead-beating step, using manual or automated methods, and compared them in terms of DNA extraction ability from a human fecal sample. All methods produced DNA of sufficient concentration and quality for use in sequencing, and the samples clustered according to the DNA extraction method. Inclusion of a bead-beating step in particular resulted in higher degrees of microbial diversity and had the greatest effect on gut microbiome composition. Among the samples subjected to the bead-beating method, TS kit samples were more similar to QP kit samples than to QS kit samples. Our results emphasize the importance of the mechanical disruption step for a more comprehensive profiling of the human gut microbiome. Copyright © 2017 The Authors. Published by Elsevier GmbH. All rights reserved.
Operative colonoscopic endoscopy.
Van Gossum, A; Bourgeois, F; Gay, F; Lievens, P; Adler, M; Cremer, M
1992-01-01
There are several conditions where operative colonoscopy is useful. Acute colonic pseudo-obstruction, or Ogilvie's syndrome, is characterized by an acute distension of the colon. Although medical management may be sufficient in many cases, endoscopic decompression must be performed when colonic distension is greater than 12 cm. Insertion of a decompression tube to avoid rapid recurrence seems to be adequate. In cases of massive lower intestinal hemorrhage, colonoscopy seems to be more accurate than mesenteric angiography. Such endoscopic examination requires an experienced endoscopist. Colonoscopic polypectomy has become the standard method for removal of colonic polyps. Factors influencing the rate of complications have been studied. While the number of complications was very low, we have observed that all the major hemorrhages were immediate when the blended current was used, but delayed when the pure coagulation current was applied. Endoscopic laser photocavitation is a valuable palliative method for treating rectal adenocarcinoma in well-selected patients. Indeed, if the patients survive sufficiently long after initial therapy, it becomes increasingly difficult to achieve persistent palliation with laser therapy.
Constructing a Watts-Strogatz network from a small-world network with symmetric degree distribution.
Menezes, Mozart B C; Kim, Seokjin; Huang, Rongbing
2017-01-01
Though the small-world phenomenon is widespread in many real networks, it is still challenging to replicate a large network at full scale for further study of its structure and dynamics when sufficient data are not readily available. We propose a method to construct a Watts-Strogatz network using a sample from a small-world network with symmetric degree distribution. Our method yields an estimated degree distribution which fits closely with that of a Watts-Strogatz network and leads to accurate estimates of network metrics such as clustering coefficient and degree of separation. We observe that the accuracy of our method increases as network size increases.
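A hedged sketch of one way such a construction can work, using the Barrat-Weigt clustering approximation C(p) ≈ C_lattice (1 − p)^3 to back out the rewiring probability from an observed sample (an illustrative estimator, not necessarily the paper's):

```python
import networkx as nx

def estimate_ws_params(G):
    """Estimate Watts-Strogatz parameters (n, k, p) from an observed
    small-world graph with symmetric degree distribution."""
    n = G.number_of_nodes()
    k = round(2 * G.number_of_edges() / n / 2) * 2      # nearest even mean degree
    c_lattice = 3 * (k - 2) / (4 * (k - 1))             # clustering at p = 0
    c_obs = nx.average_clustering(G)
    p = 1.0 - min(c_obs / c_lattice, 1.0) ** (1.0 / 3.0)
    return n, k, p

sample = nx.watts_strogatz_graph(2000, 10, 0.05, seed=1)
n, k, p = estimate_ws_params(sample)
print(n, k, round(p, 3))                      # p recovered near 0.05
replica = nx.watts_strogatz_graph(n, k, p)    # full-scale replica for study
```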
An X-ray diffraction method for semiquantitative mineralogical analysis of Chilean nitrate ore
Jackson, J.C.; Ericksen, G.E.
1997-01-01
Computer analysis of X-ray diffraction (XRD) data provides a simple method for determining the semiquantitative mineralogical composition of naturally occurring mixtures of saline minerals. The method herein described was adapted from a computer program for the study of mixtures of naturally occurring clay minerals. The program evaluates the relative intensities of selected diagnostic peaks for the minerals in a given mixture, and then calculates the relative concentrations of these minerals. The method requires precise calibration of XRD data for the minerals to be studied and selection of diffraction peaks that minimize inter-compound interferences. The calculated relative abundances are sufficiently accurate for direct comparison with bulk chemical analyses of naturally occurring saline mineral assemblages.
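The normalization at the heart of such semiquantitative analysis is straightforward; a sketch (mineral names, intensities, and calibration constants below are illustrative only):

```python
def weight_fractions(peak_intensities, calib_constants):
    """Semiquantitative phase abundances (%) from diagnostic XRD peaks:
    each intensity is scaled by the mineral's calibration constant
    (relative diffracting power, from pure-phase standards), then the
    scaled values are normalized to 100%."""
    scaled = {m: i / calib_constants[m] for m, i in peak_intensities.items()}
    total = sum(scaled.values())
    return {m: 100.0 * v / total for m, v in scaled.items()}

print(weight_fractions({'halite': 820, 'nitratine': 310, 'gypsum': 140},
                       {'halite': 1.9, 'nitratine': 1.0, 'gypsum': 0.8}))
```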
von Guggenberg, Elisabeth; Penz, Barbara; Kemmler, Georg; Virgolini, Irene; Decristoforo, Clemens
2006-02-01
[99mTc-EDDA-HYNIC-D-Phe1,Tyr3]-octreotide (99mTc-EDDA-HYNIC-TOC) is an alternative radioligand for somatostatin receptor (SSTR) scintigraphy of neuroendocrine tumours. In order to allow rapid and accurate determination of quality in the clinical routine, the aim of this study was to evaluate different methods of radiochemical purity (RCP) testing. Three different methods of RCP testing were compared: high-performance liquid chromatography (HPLC), thin layer chromatography (TLC) and minicolumn purification (Sep-Pak = SPE). HPLC was shown to be the most effective method for quality control. The use of TLC and SPE is only recommended after sufficient practical labelling experience.
Modeling and simulation of high-speed wake flows
NASA Astrophysics Data System (ADS)
Barnhardt, Michael Daniel
High-speed, unsteady flows represent a unique challenge in computational hypersonics research. They are found in nearly all applications of interest, including the wakes of reentry vehicles, RCS jet interactions, and scramjet combustors. In each of these examples, accurate modeling of the flow dynamics plays a critical role in design performance. Nevertheless, literature surveys reveal that very little modern research effort has been made toward understanding these problems. The objective of this work is to synthesize current computational methods for high-speed flows with ideas commonly used to model low-speed, turbulent flows in order to create a framework by which we may reliably predict unsteady, hypersonic flows. In particular, we wish to validate the new methodology for the case of a turbulent wake flow at reentry conditions. Currently, heat shield designs incur significant mass penalties due to the large margins applied to vehicle afterbodies in lieu of a thorough understanding of the wake aerothermodynamics. Comprehensive validation studies are required to accurately quantify these modeling uncertainties. To this end, we select three candidate experiments against which we evaluate the accuracy of our methodology. The first set of experiments concerns the Mars Science Laboratory (MSL) parachute system and serves to demonstrate that our implementation produces results consistent with prior studies at supersonic conditions. Second, we use the Reentry-F flight test to expand the application envelope to realistic flight conditions. Finally, in the last set of experiments, we examine a spherical capsule wind tunnel configuration in order to perform a more detailed analysis of a realistic flight geometry. In each case, we find that current first-order (time), second-order (space) upwind numerical methods are sufficiently accurate to predict statistical measurements: mean, RMS, standard deviation, and so forth. Further potential gains in numerical accuracy are demonstrated using a new class of flux evaluation schemes in combination with second-order dual-time stepping. For cases with transitional or turbulent Reynolds numbers, we show that the detached eddy simulation (DES) method holds a clear advantage over heritage RANS methods. From this, we conclude that the current methodology is sufficient to predict heating of external, reentry-type applications within experimental uncertainty.
Lung vessel segmentation in CT images using graph-cuts
NASA Astrophysics Data System (ADS)
Zhai, Zhiwei; Staring, Marius; Stoel, Berend C.
2016-03-01
Accurate lung vessel segmentation is an important operation for lung CT analysis. Filters that are based on analyzing the eigenvalues of the Hessian matrix are popular for pulmonary vessel enhancement. However, due to their low response at vessel bifurcations and vessel boundaries, extracting lung vessels by thresholding the vesselness is not sufficiently accurate. Some methods turn to graph-cuts for more accurate segmentation, as it incorporates neighbourhood information. In this work, we propose a new graph-cuts cost function combining appearance and shape, where CT intensity represents appearance and vesselness from a Hessian-based filter represents shape. Due to the number of voxels in high-resolution CT scans, the memory requirement and time consumption for building a graph structure are very high. In order to make the graph representation computationally tractable, voxels that are considered clearly background are removed from the graph nodes, using a threshold on the vesselness map. The graph structure is then established based on the remaining voxel nodes, source/sink nodes and the neighbourhood relationship of the remaining voxels. Vessels are segmented by minimizing the energy cost function with the graph-cuts optimization framework. We optimized the parameters used in the graph-cuts cost function and evaluated the proposed method with two manually labeled sub-volumes. For independent evaluation, we used 20 CT scans of the VESSEL12 challenge. The evaluation results of the sub-volume data show that the proposed method produced a more accurate vessel segmentation compared to the previous methods, with F1 scores of 0.76 and 0.69. On the VESSEL12 data set, our method obtained competitive performance with an area under the ROC curve of 0.975, especially among the binary submissions.
Efficient statistically accurate algorithms for the Fokker-Planck equation in large dimensions
NASA Astrophysics Data System (ADS)
Chen, Nan; Majda, Andrew J.
2018-02-01
Solving the Fokker-Planck equation for high-dimensional complex turbulent dynamical systems is an important and practical issue. However, most traditional methods suffer from the curse of dimensionality and have difficulties in capturing the fat-tailed, highly intermittent probability density functions (PDFs) of complex systems in turbulence, neuroscience and excitable media. In this article, efficient statistically accurate algorithms are developed for solving both the transient and the equilibrium solutions of Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures. The algorithms involve a hybrid strategy that requires only a small number of ensembles. Here, a conditional Gaussian mixture in a high-dimensional subspace via an extremely efficient parametric method is combined with a judicious non-parametric Gaussian kernel density estimation in the remaining low-dimensional subspace. In particular, the parametric method provides closed analytical formulae for determining the conditional Gaussian distributions in the high-dimensional subspace and is therefore computationally efficient and accurate. The full non-Gaussian PDF of the system is then given by a Gaussian mixture. Different from traditional particle methods, each conditional Gaussian distribution here covers a significant portion of the high-dimensional PDF. Therefore a small number of ensembles is sufficient to recover the full PDF, which overcomes the curse of dimensionality. Notably, the mixture distribution has significant skill in capturing the transient behavior with fat tails of the high-dimensional non-Gaussian PDFs, and this facilitates the algorithms in accurately describing the intermittency and extreme events in complex turbulent systems. It is shown in a stringent set of test problems that the method requires only O(100) ensembles to successfully recover the highly non-Gaussian transient PDFs in up to 6 dimensions with only small errors.
A 3D modeling approach to complex faults with multi-source data
NASA Astrophysics Data System (ADS)
Wu, Qiang; Xu, Hua; Zou, Xukai; Lei, Hongzhuan
2015-04-01
Fault modeling is a very important step in making an accurate and reliable 3D geological model. Typical existing methods demand enough fault data to be able to construct complex fault models; however, it is well known that the available fault data are generally sparse and undersampled. In this paper, we propose a fault-modeling workflow that can integrate multi-source data to construct fault models. For the faults that are not modeled with these data, especially small-scale faults or faults approximately parallel with the sections, we propose the fault deduction method to infer the hanging wall and footwall lines after displacement calculation. Moreover, using the fault cutting algorithm can supplement the available fault points at the locations where faults cut each other. Adding fault points in poorly sampled areas not only enables efficient construction of fault models, but also reduces manual intervention. By using a fault-based interpolation and remeshing the horizons, an accurate 3D geological model can be constructed. The method can naturally simulate geological structures no matter whether the available geological data are sufficient or not. A concrete example of using the method in Tangshan, China, shows that the method can be applied to broad and complex geological areas.
Automated Transition State Theory Calculations for High-Throughput Kinetics.
Bhoorasingh, Pierre L; Slakman, Belinda L; Seyedzadeh Khanshan, Fariba; Cain, Jason Y; West, Richard H
2017-09-21
A scarcity of known chemical kinetic parameters leads to the use of many reaction rate estimates, which are not always sufficiently accurate, in the construction of detailed kinetic models. To reduce the reliance on these estimates and improve the accuracy of predictive kinetic models, we have developed a high-throughput, fully automated reaction rate calculation method, AutoTST. The algorithm integrates automated saddle-point geometry search methods and a canonical transition state theory kinetics calculator. The automatically calculated reaction rates compare favorably to existing estimated rates. Comparison against high-level theoretical calculations shows that the new automated method performs better than rate estimates when the estimate is made by a poor analogy. The method will improve by accounting for internal rotor contributions and by improving methods to determine molecular symmetry.
Li, Liqi; Cui, Xiang; Yu, Sanjiu; Zhang, Yuan; Luo, Zhong; Yang, Hua; Zhou, Yue; Zheng, Xiaoqi
2014-01-01
Protein structure prediction is critical to functional annotation of the massively accumulated biological sequences, which prompts an imperative need for the development of high-throughput technologies. As a first and key step in protein structure prediction, protein structural class prediction becomes an increasingly challenging task. Amongst most homology-based approaches, the accuracies of protein structural class prediction are sufficiently high for high-similarity datasets, but still far from being satisfactory for low-similarity datasets, i.e., below 40% in pairwise sequence similarity. Therefore, we present a novel method for accurate and reliable protein structural class prediction for both high- and low-similarity datasets. This method is based on Support Vector Machine (SVM) in conjunction with integrated features from the position-specific score matrix (PSSM), PROFEAT and Gene Ontology (GO). A feature selection approach, SVM-RFE, is also used to rank the integrated feature vectors through recursively removing the feature with the lowest ranking score. The definitive top features selected by SVM-RFE are input into the SVM engines to predict the structural class of a query protein. To validate our method, jackknife tests were applied to seven widely used benchmark datasets, reaching overall accuracies between 84.61% and 99.79%, which are significantly higher than those achieved by state-of-the-art tools. These results suggest that our method could serve as an accurate and cost-effective alternative to existing methods in protein structural classification, especially for low-similarity datasets.
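The SVM-RFE step maps directly onto standard library calls; a hedged sketch in which a synthetic matrix stands in for the integrated PSSM/PROFEAT/GO features (sizes and hyperparameters are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

# stand-in for the integrated feature matrix and class labels (assumed sizes)
X, y = make_classification(n_samples=300, n_features=500,
                           n_informative=40, random_state=0)

# SVM-RFE: recursively drop the features with the smallest |w| ranking
# scores of a linear SVM, then train the final classifier on the survivors
selector = RFE(SVC(kernel='linear', C=1.0), n_features_to_select=50, step=0.1)
selector.fit(X, y)
clf = SVC(kernel='linear', C=1.0).fit(X[:, selector.support_], y)
print(selector.support_.sum(), clf.score(X[:, selector.support_], y))
```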
Wearable computer for mobile augmented-reality-based controlling of an intelligent robot
NASA Astrophysics Data System (ADS)
Turunen, Tuukka; Roening, Juha; Ahola, Sami; Pyssysalo, Tino
2000-10-01
An intelligent robot can be utilized to perform tasks that are either hazardous or unpleasant for humans. Such tasks include working in disaster areas or in conditions that are, for example, too hot. An intelligent robot can work on its own to some extent, but in some cases the aid of humans will be needed. This requires means for controlling the robot from somewhere else, i.e., teleoperation. Mobile augmented reality can be utilized as a user interface to the environment, as it enhances the user's perception of the situation compared to other interfacing methods and allows the user to perform other tasks while controlling the intelligent robot. Augmented reality is a method that combines virtual objects with the user's perception of the real world. As computer technology evolves, it is possible to build very small devices that have sufficient capabilities for augmented reality applications. We have evaluated the existing wearable computers and mobile augmented reality systems to build a prototype of a future mobile terminal, the CyPhone. A wearable computer with sufficient system resources for applications, wireless communication media with sufficient throughput, and enough interfaces for peripherals has been built at the University of Oulu. It is self-sustained in energy, with enough operating time for the applications to be useful, and uses accurate positioning systems.
Entropy from State Probabilities: Hydration Entropy of Cations
2013-01-01
Entropy is an important energetic quantity determining the progression of chemical processes. We propose a new approach to obtain hydration entropy directly from probability density functions in state space. We demonstrate the validity of our approach for a series of cations in aqueous solution. Extensive validation of simulation results was performed. Our approach does not make prior assumptions about the shape of the potential energy landscape and is capable of calculating accurate hydration entropy values. Sampling times in the low nanosecond range are sufficient for the investigated ionic systems. Although the presented strategy is at the moment limited to systems for which a scalar order parameter can be derived, this is not a principal limitation of the method. The strategy presented is applicable to any chemical system where sufficient sampling of conformational space is accessible, for example, by computer simulations. PMID:23651109
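The core of such an approach is the discrete Gibbs-Shannon formula applied to sampled state probabilities; a minimal sketch for a scalar order parameter (the histogram binning and the stand-in trajectory are illustrative assumptions, not the paper's protocol):

```python
import numpy as np

def entropy_from_states(order_param, bins=100, k_B=0.0019872):  # kcal/(mol K)
    """S = -k_B * sum_i p_i ln p_i from the sampled probability density
    of a scalar order parameter (e.g. from an MD trajectory); no
    assumption is made about the shape of the energy landscape."""
    counts, _ = np.histogram(order_param, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -k_B * np.sum(p * np.log(p))

traj = np.random.default_rng(2).normal(size=200_000)  # stand-in trajectory
print(entropy_from_states(traj))
```

Note that the numerical value depends on the chosen discretization, so in practice entropies are compared across systems with a consistent binning.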
NASA Astrophysics Data System (ADS)
Beijen, Michiel A.; Voorhoeve, Robbert; Heertjes, Marcel F.; Oomen, Tom
2018-07-01
Vibration isolation is essential for industrial high-precision systems to suppress external disturbances. The aim of this paper is to develop a general identification approach to estimate the frequency response function (FRF) of the transmissibility matrix, which is a key performance indicator for vibration isolation systems. The major challenge lies in obtaining a good signal-to-noise ratio in view of a large system weight. A non-parametric system identification method is proposed that combines floor and shaker excitations. Furthermore, a method is presented to analyze the input power spectrum of the floor excitations, both in terms of magnitude and direction. In turn, the input design of the shaker excitation signals is investigated to obtain sufficient excitation power in all directions with minimum experiment cost. The proposed methods are shown to provide an accurate FRF of the transmissibility matrix in three relevant directions on an industrial active vibration isolation system over a large frequency range. This demonstrates that, despite their heavy weight, industrial vibration isolation systems can be accurately identified using this approach.
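A nonparametric FRF of one transmissibility entry reduces to Welch-averaged spectra; a sketch with synthetic signals (the resonance, filter, and record length are assumptions for illustration, not the paper's system):

```python
import numpy as np
from scipy import signal

def transmissibility_frf(floor_acc, payload_acc, fs, nperseg=4096):
    """H1 estimate of a transmissibility FRF entry: cross-spectrum of
    floor (input) and payload (output) acceleration divided by the
    floor auto-spectrum, averaged over segments to suppress noise."""
    f, S_uu = signal.welch(floor_acc, fs, nperseg=nperseg)
    _, S_uy = signal.csd(floor_acc, payload_acc, fs, nperseg=nperseg)
    return f, S_uy / S_uu

fs = 2000.0
t = np.arange(0, 60.0, 1.0 / fs)
u = np.random.default_rng(3).standard_normal(t.size)      # floor excitation
b, a = signal.butter(2, [4.0, 6.0], btype='bandpass', fs=fs)
y = signal.lfilter(b, a, u)        # crude stand-in payload response
f, H = transmissibility_frf(u, y, fs)
```

Combining records from floor and shaker excitations, as the paper proposes, then amounts to forming these spectral averages over both experiment types.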
Accurate color synthesis of three-dimensional objects in an image
NASA Astrophysics Data System (ADS)
Xin, John H.; Shen, Hui-Liang
2004-05-01
Our study deals with color synthesis of a three-dimensional object in an image; i.e., given a single image, a target color can be accurately mapped onto the object such that the color appearance of the synthesized object closely resembles that of the actual one. As it is almost impossible to acquire the complete geometric description of the surfaces of an object in an image, this study attempted to recover the implicit description of geometry for the color synthesis. The description was obtained from either a series of spectral reflectances or the RGB signals at different surface positions on the basis of the dichromatic reflection model. The experimental results showed that this implicit image-based representation is related to the object geometry and is sufficient for accurate color synthesis of three-dimensional objects in an image. The method established is applicable to the color synthesis of both rigid and deformable objects and should contribute to color fidelity in virtual design, manufacturing, and retailing.
Arienzo, Alyexandra; Sobze, Martin Sanou; Wadoum, Raoul Emeric Guetiya; Losito, Francesca; Colizzi, Vittorio; Antonini, Giovanni
2015-01-01
According to the World Health Organization (WHO) guidelines, “safe drinking-water must not represent any significant risk to health over a lifetime of consumption, including different sensitivities that may occur between life stages”. Traditional methods of water analysis are usually complex, time consuming and require an appropriately equipped laboratory, specialized personnel and expensive instrumentation. The aim of this work was to apply an alternative method, the Micro Biological Survey (MBS), to analyse for contaminants in drinking water. Preliminary experiments were carried out to demonstrate the linearity and accuracy of the MBS method and to verify the possibility of using the evaluation of total coliforms in 1 mL of water as a sufficient parameter to roughly though accurately determine water microbiological quality. The MBS method was then tested “on field” to assess the microbiological quality of water sources in the city of Douala (Cameroon, Central Africa). Analyses were performed on both dug and drilled wells in different periods of the year. Results confirm that the MBS method appears to be a valid and accurate method to evaluate the microbiological quality of many water sources and it can be of valuable aid in developing countries. PMID:26308038
Estimating Bias Error Distributions
NASA Technical Reports Server (NTRS)
Liu, Tian-Shu; Finley, Tom D.
2001-01-01
This paper formulates the general methodology for estimating the bias error distribution of a device in a measuring domain from less accurate measurements when a minimal number of standard values (typically two values) are available. A new perspective is that the bias error distribution can be found as a solution of an intrinsic functional equation in a domain. Based on this theory, the scaling- and translation-based methods for determining the bias error distribution are developed. These methods are virtually applicable to any device as long as the bias error distribution of the device can be sufficiently described by a power series (a polynomial) or a Fourier series in a domain. These methods have been validated through computational simulations and laboratory calibration experiments for a number of different devices.
Digital Signal Processing Methods for Ultrasonic Echoes.
Sinding, Kyle; Drapaca, Corina; Tittmann, Bernhard
2016-04-28
Digital signal processing has become an important component of data analysis needed in industrial applications. In particular, for ultrasonic thickness measurements the signal-to-noise ratio plays a major role in the accurate calculation of the arrival time. For this application a band-pass filter is not sufficient, since the noise level cannot be decreased enough for a reliable thickness measurement. This paper demonstrates the abilities of two regularization methods, total variation and Tikhonov, to filter acoustic and ultrasonic signals. Both of these methods are compared to frequency-based filtering for digitally produced signals as well as signals produced by ultrasonic transducers. This paper demonstrates the ability of the total variation and Tikhonov filters to recover signals from noisy acoustic signals more accurately than a band-pass filter. Furthermore, the total variation filter has been shown to reduce the noise of a signal significantly for signals with clear ultrasonic echoes. Signal-to-noise ratios have been increased by over 400% using a simple parameter optimization. While frequency-based filtering is efficient for specific applications, this paper shows that the reduction of noise in ultrasonic systems can be much more efficient with regularization methods.
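Both filters admit compact reference implementations; the sketch below is illustrative only (the exact sparse solver for Tikhonov, a smoothed gradient descent standing in for a proper TV algorithm such as Chambolle's, and all parameter values are assumptions):

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def tikhonov_denoise(y, lam=50.0):
    """Minimize ||x - y||^2 + lam ||Dx||^2 with D the first-difference
    operator; solved exactly via a sparse linear system."""
    n = y.size
    D = sparse.diags([-np.ones(n - 1), np.ones(n - 1)], [0, 1], shape=(n - 1, n))
    return spsolve(sparse.eye(n) + lam * (D.T @ D), y)

def tv_denoise(y, lam=1.0, n_iter=500, step=0.1, eps=1e-6):
    """Gradient descent on the smoothed TV functional
    ||x - y||^2 / 2 + lam * sum sqrt((x_{i+1} - x_i)^2 + eps)."""
    x = y.copy()
    for _ in range(n_iter):
        d = np.diff(x)
        s = d / np.sqrt(d * d + eps)      # smoothed sign of the jumps
        x -= step * (x - y - lam * np.diff(s, prepend=0.0, append=0.0))
    return x

t = np.linspace(0.0, 1.0, 800)
clean = np.sin(2 * np.pi * 5 * t) * np.exp(-((t - 0.5) / 0.05) ** 2)  # toy echo
noisy = clean + 0.3 * np.random.default_rng(4).standard_normal(t.size)
print(np.std(tikhonov_denoise(noisy) - clean), np.std(noisy - clean))
```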
Children's Use of Information Quality to Establish Speaker Preferences
ERIC Educational Resources Information Center
Gillis, Randall L.; Nilsen, Elizabeth S.
2013-01-01
Knowledge transfer is most effective when speakers provide good quality (in addition to accurate) information. Two studies investigated whether preschool- (4-5 years old) and school-age (6-7 years old) children prefer speakers who provide sufficient information over those who provide insufficient (yet accurate) information. Children were provided…
Foresight begins with FMEA. Delivering accurate risk assessments.
Passey, R D
1999-03-01
If sufficient factors are taken into account and two- or three-stage analysis is employed, failure mode and effect analysis represents an excellent technique for delivering accurate risk assessments for products and processes, and for relating them to legal liability. This article describes a format that facilitates easy interpretation.
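The quantitative core of FMEA is the Risk Priority Number; a minimal sketch of the ranking idea (the failure modes and scores below are illustrative):

```python
def rpn(severity, occurrence, detection):
    """Risk Priority Number for one failure mode: the product of the
    severity, occurrence, and detection ratings (each typically 1-10);
    modes are then ranked and treated from the highest RPN down."""
    return severity * occurrence * detection

failure_modes = {'seal leak': (8, 4, 3), 'connector fatigue': (6, 5, 7)}
for mode, scores in sorted(failure_modes.items(), key=lambda kv: -rpn(*kv[1])):
    print(mode, rpn(*scores))
```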
Ultrasonic, needle, and carcass measurements for predicting chemical composition of lamb carcasses.
Ramsey, C B; Kirton, A H; Hogg, B; Dobbie, J L
1991-09-01
Three groups (n = 147) of New Zealand mixed-breed lambs averaging 170 d of age and 31.7 kg in weight were killed after a diet of pasture to determine whether the total depth of soft tissues over the 12th rib 11 cm from the dorsal midline (GR) could be measured in live lambs with sufficient accuracy to warrant its use as a selection tool for breeding flock replacements. Relationships among live and carcass measurements and carcass chemical composition also were determined. An ultrasonic measurement of GR in the live lambs was a more accurate predictor of carcass GR (r = .87) and percentage carcass fat (r = .80) than was a measurement of GR made with a needle (r = .80 and .67, respectively). Both measurements were sufficiently accurate to permit culling of over-fat lambs from breeding flock replacement prospects. The best single indicator of percentage carcass fat (r = .87) was a shoulder fat measurement, followed closely by carcass GR (r = .85). Both were superior to USDA yield grade for estimating carcass chemical composition in these young, lightweight lambs. These two measurements also were most highly related to percentage carcass protein (r = -.78 and r = -.77, respectively). These results indicate possibilities for improving the method of evaluating the composition of U.S. lamb carcasses.
Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics
NASA Technical Reports Server (NTRS)
Goodrich, John W.; Dyson, Rodger W.
1999-01-01
The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time-dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems are being done to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that have resulted from this work. A review of computational aeroacoustics has recently been given by Lele.
Achieving the Heisenberg limit in quantum metrology using quantum error correction.
Zhou, Sisi; Zhang, Mengzhen; Preskill, John; Jiang, Liang
2018-01-08
Quantum metrology has many important applications in science and technology, ranging from frequency spectroscopy to gravitational wave detection. Quantum mechanics imposes a fundamental limit on measurement precision, called the Heisenberg limit, which can be achieved for noiseless quantum systems, but is not achievable in general for systems subject to noise. Here we study how measurement precision can be enhanced through quantum error correction, a general method for protecting a quantum system from the damaging effects of noise. We find a necessary and sufficient condition for achieving the Heisenberg limit using quantum probes subject to Markovian noise, assuming that noiseless ancilla systems are available, and that fast, accurate quantum processing can be performed. When the sufficient condition is satisfied, a quantum error-correcting code can be constructed that suppresses the noise without obscuring the signal; the optimal code, achieving the best possible precision, can be found by solving a semidefinite program.
NASA Astrophysics Data System (ADS)
Huang, Rongrong; Pomin, Vitor H.; Sharp, Joshua S.
2011-09-01
Improved methods for structural analyses of glycosaminoglycans (GAGs) are required to understand their functional roles in various biological processes. Major challenges in structural characterization of complex GAG oligosaccharides using liquid chromatography-mass spectrometry (LC-MS) include the accurate determination of the patterns of sulfation due to gas-phase losses of the sulfate groups upon collisional activation and inefficient on-line separation of positional sulfation isomers prior to MS/MS analyses. Here, a sequential chemical derivatization procedure including permethylation, desulfation, and acetylation was demonstrated to enable both on-line LC separation of isomeric mixtures of chondroitin sulfate (CS) oligosaccharides and accurate determination of sites of sulfation by MSn. The derivatized oligosaccharides have sulfate groups replaced with acetyl groups, which are sufficiently stable to survive MSn fragmentation and reflect the original sulfation patterns. A standard reversed-phase LC-MS system with a capillary C18 column was used for separation, and MSn experiments using collision-induced dissociation (CID) were performed. Our results indicate that the combination of this derivatization strategy and MSn methodology enables accurate identification of the sulfation isomers of CS hexasaccharides with either saturated or unsaturated nonreducing ends. Moreover, derivatized CS hexasaccharide isomer mixtures become separable by the LC-MS method due to the different positions of the acetyl modifications.
Machine Learning of Parameters for Accurate Semiempirical Quantum Chemical Calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dral, Pavlo O.; von Lilienfeld, O. Anatole; Thiel, Walter
2015-05-12
We investigate possible improvements in the accuracy of semiempirical quantum chemistry (SQC) methods through the use of machine learning (ML) models for the parameters. For a given class of compounds, ML techniques require sufficiently large training sets to develop ML models that can be used for adapting SQC parameters to reflect changes in molecular composition and geometry. The ML-SQC approach allows the automatic tuning of SQC parameters for individual molecules, thereby improving the accuracy without deteriorating transferability to molecules with molecular descriptors very different from those in the training set. The performance of this approach is demonstrated for the semiempirical OM2 method using a set of 6095 constitutional isomers C7H10O2, for which accurate ab initio atomization enthalpies are available. The ML-OM2 results show improved average accuracy and a much reduced error range compared with those of standard OM2 results, with mean absolute errors in atomization enthalpies dropping from 6.3 to 1.7 kcal/mol. They are also found to be superior to the results from specific OM2 reparameterizations (rOM2) for the same set of isomers. The ML-SQC approach thus holds promise for fast and reasonably accurate high-throughput screening of materials and molecules.
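The abstract above describes learning molecule-specific adjustments on top of a cheap semiempirical baseline. The sketch below illustrates the closely related delta-learning idea with a kernel ridge regressor and synthetic data; the descriptors, energies, and model are stand-ins, and the paper's actual approach of tuning OM2 parameters per molecule is not reproduced here.

    import numpy as np
    from sklearn.kernel_ridge import KernelRidge

    rng = np.random.default_rng(1)
    # Hypothetical descriptors; the learning target is the difference
    # between an accurate reference and a cheap baseline method.
    X = rng.standard_normal((500, 10))           # descriptor vectors
    e_baseline = X @ rng.standard_normal(10)     # stand-in for SQC energies
    e_reference = e_baseline + np.sin(X[:, 0])   # stand-in for ab initio data

    model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=0.1)
    model.fit(X[:400], (e_reference - e_baseline)[:400])

    # Corrected prediction = cheap baseline + learned correction.
    e_corr = e_baseline[400:] + model.predict(X[400:])
    print("MAE before:", np.mean(np.abs(e_reference[400:] - e_baseline[400:])))
    print("MAE after: ", np.mean(np.abs(e_reference[400:] - e_corr)))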
Huang, Rongrong; Pomin, Vitor H.; Sharp, Joshua S.
2011-01-01
Improved methods for structural analyses of glycosaminoglycans (GAGs) are required to understand their functional roles in various biological processes. Major challenges in structural characterization of complex GAG oligosaccharides using liquid chromatography-mass spectrometry (LC-MS) include the accurate determination of the patterns of sulfation due to gas-phase losses of the sulfate groups upon collisional activation and inefficient on-line separation of positional sulfation isomers prior to MS/MS analyses. Here, a sequential chemical derivatization procedure including permethylation, desulfation, and acetylation was demonstrated to enable both on-line LC separation of isomeric mixtures of chondroitin sulfate (CS) oligosaccharides and accurate determination of sites of sulfation by MSn. The derivatized oligosaccharides have sulfate groups replaced with acetyl groups, which are sufficiently stable to survive MSn fragmentation and reflect the original sulfation patterns. A standard reversed-phase LC-MS system with a capillary C18 column was used for separation, and MSn experiments using collision-induced dissociation (CID) were performed. Our results indicate that the combination of this derivatization strategy and MSn methodology enables accurate identification of the sulfation isomers of CS hexasaccharides with either saturated or unsaturated nonreducing ends. Moreover, derivatized CS hexasaccharide isomer mixtures become separable by LC-MS method due to different positions of acetyl modifications. PMID:21953261
Machine learning of parameters for accurate semiempirical quantum chemical calculations
Dral, Pavlo O.; von Lilienfeld, O. Anatole; Thiel, Walter
2015-04-14
We investigate possible improvements in the accuracy of semiempirical quantum chemistry (SQC) methods through the use of machine learning (ML) models for the parameters. For a given class of compounds, ML techniques require sufficiently large training sets to develop ML models that can be used for adapting SQC parameters to reflect changes in molecular composition and geometry. The ML-SQC approach allows the automatic tuning of SQC parameters for individual molecules, thereby improving the accuracy without deteriorating transferability to molecules with molecular descriptors very different from those in the training set. The performance of this approach is demonstrated for the semiempirical OM2 method using a set of 6095 constitutional isomers C7H10O2, for which accurate ab initio atomization enthalpies are available. The ML-OM2 results show improved average accuracy and a much reduced error range compared with those of standard OM2 results, with mean absolute errors in atomization enthalpies dropping from 6.3 to 1.7 kcal/mol. They are also found to be superior to the results from specific OM2 reparameterizations (rOM2) for the same set of isomers. The ML-SQC approach thus holds promise for fast and reasonably accurate high-throughput screening of materials and molecules.
Stress-free automatic sleep deprivation using air puffs
Gross, Brooks A.; Vanderheyden, William M.; Urpa, Lea M.; Davis, Devon E.; Fitzpatrick, Christopher J.; Prabhu, Kaustubh; Poe, Gina R.
2015-01-01
Background: Sleep deprivation via gentle handling is time-consuming and personnel-intensive. New Method: We present here an automated sleep deprivation system via air puffs. Implanted EMG and EEG electrodes were used to assess sleep/waking states in six male Sprague-Dawley rats. Blood samples were collected from an implanted intravenous catheter every 4 hours during the 12-hour light cycle on baseline, 8 hours of sleep deprivation via air puffs, and 8 hours of sleep deprivation by gentle handling days. Results: The automated system was capable of scoring sleep and waking states as accurately as our offline version (~90% for sleep) and with sufficient speed to trigger a feedback response within an acceptable amount of time (1.76 s). Manual state scoring confirmed normal sleep on the baseline day and sleep deprivation on the two manipulation days (68% decrease in non-REM, 63% decrease in REM, and 74% increase in waking). No significant differences in levels of ACTH and corticosterone (stress hormones indicative of HPA axis activity) were found at any time point between baseline sleep and sleep deprivation via air puffs. Comparison with Existing Method: There were no significant differences in ACTH or corticosterone concentrations between sleep deprivation by air puffs and gentle handling over the 8-hour period. Conclusions: Our system accurately detects sleep and delivers air puffs to acutely deprive rats of sleep with sufficient temporal resolution during the critical 4-5 h post-learning sleep-dependent memory consolidation period. The system is stress-free and a viable alternative to existing sleep deprivation techniques. PMID:26014662
Meise, Kristine; Mueller, Birte; Zein, Beate; Trillmich, Fritz
2014-01-01
Morphological features correlate with many life history traits and are therefore of high interest to behavioral and evolutionary biologists. Photogrammetry provides a useful tool to collect morphological data from species for which measurements are otherwise difficult to obtain. This method reduces disturbance and avoids capture stress. Using the Galapagos sea lion (Zalophus wollebaeki) as a model system, we tested the applicability of single-camera photogrammetry in combination with laser distance measurement to estimate morphological traits which may vary with an animal's body position. We assessed whether linear morphological traits estimated by photogrammetry can be used to estimate body length and mass. We show that accurate estimates of body length (males: ±2.0%, females: ±2.6%) and reliable estimates of body mass are possible (males: ±6.8%, females: ±14.5%). Furthermore, we developed correction factors that allow the use of animal photos that diverge somewhat from a flat-out position. The product of estimated body length and girth produced sufficiently reliable estimates of mass to categorize individuals into 10-kg classes of body mass. Data of individuals repeatedly photographed within one season suggested relatively low measurement errors (body length: 2.9%, body mass: 8.1%). In order to develop accurate sex- and age-specific correction factors, a sufficient number of individuals from both sexes and from all desired age classes have to be captured for baseline measurements. Given proper validation, this method provides an excellent opportunity to collect morphological data for large numbers of individuals with minimal disturbance.
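As a sketch of the final classification step described above, the function below turns photogrammetric length and girth estimates into a 10-kg body-mass class; the power-law form and the coefficients are hypothetical placeholders for a calibration that would be fitted to capture data.

    def mass_class(length_m, girth_m, a=30.0, b=1.0, width_kg=10):
        """Assign a 10-kg body-mass class from photogrammetric estimates.

        a and b are placeholder coefficients for an assumed model
        mass = a * (length * girth)**b; the paper's fitted relation
        is not reproduced here.
        """
        mass = a * (length_m * girth_m) ** b
        lower = int(mass // width_kg) * width_kg
        return mass, (lower, lower + width_kg)

    est, kg_class = mass_class(1.8, 1.2)
    print(f"estimated mass {est:.1f} kg -> class {kg_class[0]}-{kg_class[1]} kg")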
Meise, Kristine; Mueller, Birte; Zein, Beate; Trillmich, Fritz
2014-01-01
Morphological features correlate with many life history traits and are therefore of high interest to behavioral and evolutionary biologists. Photogrammetry provides a useful tool to collect morphological data from species for which measurements are otherwise difficult to obtain. This method reduces disturbance and avoids capture stress. Using the Galapagos sea lion (Zalophus wollebaeki) as a model system, we tested the applicability of single-camera photogrammetry in combination with laser distance measurement to estimate morphological traits which may vary with an animal’s body position. We assessed whether linear morphological traits estimated by photogrammetry can be used to estimate body length and mass. We show that accurate estimates of body length (males: ±2.0%, females: ±2.6%) and reliable estimates of body mass are possible (males: ±6.8%, females: ±14.5%). Furthermore, we developed correction factors that allow the use of animal photos that diverge somewhat from a flat-out position. The product of estimated body length and girth produced sufficiently reliable estimates of mass to categorize individuals into 10-kg classes of body mass. Data of individuals repeatedly photographed within one season suggested relatively low measurement errors (body length: 2.9%, body mass: 8.1%). In order to develop accurate sex- and age-specific correction factors, a sufficient number of individuals from both sexes and from all desired age classes have to be captured for baseline measurements. Given proper validation, this method provides an excellent opportunity to collect morphological data for large numbers of individuals with minimal disturbance. PMID:24987983
Classification of HCV and HIV-1 Sequences with the Branching Index
Hraber, Peter; Kuiken, Carla; Waugh, Mark; Geer, Shaun; Bruno, William J.; Leitner, Thomas
2009-01-01
Classification of viral sequences should be fast, objective, accurate, and reproducible. Most methods that classify sequences use either pairwise distances or phylogenetic relations, but cannot discern when a sequence is unclassifiable. The branching index (BI) combines distance and phylogeny methods to compute a ratio that quantifies how closely a query sequence clusters with a subtype clade. In the hypothesis-testing framework of statistical inference, the BI is compared with a threshold to test whether sufficient evidence exists for the query sequence to be classified among known sequences. If above the threshold, the null hypothesis of no support for the subtype relation is rejected and the sequence is taken as belonging to the subtype clade with which it clusters on the tree. This study evaluates statistical properties of the branching index for subtype classification in HCV and HIV-1. Pairs of BI values with known positive and negative test results were computed from 10,000 random fragments of reference alignments. Sampled fragments were of sufficient length to contain phylogenetic signal that groups reference sequences together properly into subtype clades. For HCV, a threshold BI of 0.71 yields 95.1% agreement with reference subtypes, with equal false positive and false negative rates. For HIV-1, a threshold of 0.66 yields 93.5% agreement. Higher thresholds can be used where lower false positive rates are required. In synthetic recombinants, regions without breakpoints are recognized accurately; regions with breakpoints do not uniquely represent any known subtype. Web-based services for viral subtype classification with the branching index are available online. PMID:18753218
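The decision rule described above reduces to a threshold test on the branching index. A minimal sketch, assuming the BI values have already been computed from the tree (the tree construction and BI computation themselves are not shown); the 0.71 default is the HCV threshold quoted in the abstract, with 0.66 for HIV-1.

    def classify_by_bi(bi_by_subtype, threshold=0.71):
        """Return the best-supported subtype, or None if unclassifiable."""
        subtype, bi = max(bi_by_subtype.items(), key=lambda kv: kv[1])
        return subtype if bi >= threshold else None

    print(classify_by_bi({"1a": 0.83, "1b": 0.40}))  # -> '1a'
    print(classify_by_bi({"1a": 0.52, "1b": 0.49}))  # -> None (unclassifiable)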
Schroen, Anneke T; Petroni, Gina R; Wang, Hongkun; Gray, Robert; Wang, Xiaofei F; Cronin, Walter; Sargent, Daniel J; Benedetti, Jacqueline; Wickerham, Donald L; Djulbegovic, Benjamin; Slingluff, Craig L
2014-01-01
Background: A major challenge for randomized phase III oncology trials is the frequent low rates of patient enrollment, resulting in high rates of premature closure due to insufficient accrual. Purpose: We conducted a pilot study to determine the extent of trial closure due to poor accrual, feasibility of identifying trial factors associated with sufficient accrual, impact of redesign strategies on trial accrual, and accrual benchmarks designating high failure risk in the clinical trials cooperative group (CTCG) setting. Methods: A subset of phase III trials opened by five CTCGs between August 1991 and March 2004 was evaluated. Design elements, experimental agents, redesign strategies, and pretrial accrual assessment supporting accrual predictions were abstracted from CTCG documents. Percent actual/predicted accrual rate averaged per month was calculated. Trials were categorized as having sufficient or insufficient accrual based on reason for trial termination. Analyses included univariate and bivariate summaries to identify potential trial factors associated with accrual sufficiency. Results: Among 40 trials from one CTCG, 21 (52.5%) trials closed due to insufficient accrual. In 82 trials from five CTCGs, therapeutic trials accrued sufficiently more often than nontherapeutic trials (59% vs 27%, p = 0.05). Trials including pretrial accrual assessment more often achieved sufficient accrual than those without (67% vs 47%, p = 0.08). Fewer exclusion criteria, shorter consent forms, other CTCG participation, and trial design simplicity were not associated with achieving sufficient accrual. Trials accruing at a rate much lower than predicted (<35% actual/predicted accrual rate) were consistently closed due to insufficient accrual. Limitations: This trial subset under-represents certain experimental modalities. Data sources do not allow accounting for all factors potentially related to accrual success. Conclusion: Trial closure due to insufficient accrual is common. Certain trial design factors appear associated with attaining sufficient accrual. Defining accrual benchmarks for early trial termination or redesign is feasible, but better accrual prediction methods are critically needed. Future studies should focus on identifying trial factors that allow more accurate accrual predictions and strategies that can salvage open trials experiencing slow accrual. PMID:20595245
Wear, Keith A
2014-04-01
In through-transmission interrogation of cancellous bone, two longitudinal pulses ("fast" and "slow" waves) may be generated. Fast and slow wave properties convey information about material and micro-architectural characteristics of bone. However, these properties can be difficult to assess when fast and slow wave pulses overlap in time and frequency domains. In this paper, two methods are applied to decompose signals into fast and slow waves: bandlimited deconvolution and modified least-squares Prony's method with curve-fitting (MLSP + CF). The methods were tested in plastic and Zerdine(®) samples that provided fast and slow wave velocities commensurate with velocities for cancellous bone. Phase velocity estimates were accurate to within 6 m/s (0.4%) (slow wave with both methods and fast wave with MLSP + CF) and 26 m/s (1.2%) (fast wave with bandlimited deconvolution). Midband signal loss estimates were accurate to within 0.2 dB (1.7%) (fast wave with both methods), and 1.0 dB (3.7%) (slow wave with both methods). Similar accuracies were found for simulations based on fast and slow wave parameter values published for cancellous bone. These methods provide sufficient accuracy and precision for many applications in cancellous bone such that experimental error is likely to be a greater limiting factor than estimation error.
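A minimal sketch of the basic least-squares Prony decomposition in Python (the paper's modified version adds curve-fitting refinements not reproduced here): fit a linear-prediction model to the samples, take the roots of the prediction polynomial as damped-exponential poles, then recover the amplitudes with a second least-squares solve. The sample signal below is synthetic, standing in for overlapping fast and slow waves.

    import numpy as np

    def prony(x, p, dt):
        """Least-squares Prony fit of p damped exponentials to samples x."""
        N = len(x)
        # Linear prediction x[n] = -sum_k a[k] x[n-k]; solve for a.
        A = np.column_stack([x[p - k - 1:N - k - 1] for k in range(p)])
        a = np.linalg.lstsq(A, -x[p:], rcond=None)[0]
        poles = np.roots(np.concatenate(([1.0], a)))
        # Amplitudes from a Vandermonde least-squares solve.
        V = np.vander(poles, N, increasing=True).T
        amps = np.linalg.lstsq(V, x.astype(complex), rcond=None)[0]
        freqs = np.angle(poles) / (2 * np.pi * dt)
        damping = np.log(np.abs(poles)) / dt
        return amps, freqs, damping

    dt = 1e-7
    t = np.arange(200) * dt
    x = (np.exp(-2e4 * t) * np.cos(2 * np.pi * 5e5 * t)
         + 0.5 * np.exp(-1e4 * t) * np.cos(2 * np.pi * 3e5 * t))
    amps, freqs, damping = prony(x, p=4, dt=dt)
    print(np.sort(np.abs(freqs)))  # recovers the ~3e5 and ~5e5 Hz pole pairs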
Non-steady state modelling of wheel-rail contact problem
NASA Astrophysics Data System (ADS)
Guiral, A.; Alonso, A.; Baeza, L.; Giménez, J. G.
2013-01-01
Among all the algorithms to solve the wheel-rail contact problem, Kalker's FastSim has become the most useful computation tool since it combines a low computational cost with enough precision for most typical railway dynamics problems. However, some types of dynamic problems require the use of a non-steady state analysis. Alonso and Giménez developed a non-stationary method based on FastSim, which provides both sufficiently accurate results and a low computational cost. However, it presents some limitations: the method is developed for one time-dependent creepage, and its accuracy for varying normal forces has not been checked. This article presents the required changes in order to deal with both problems and compares its results with those given by Kalker's Variational Method for rolling contact.
Automatic detection of retinal anatomy to assist diabetic retinopathy screening.
Fleming, Alan D; Goatman, Keith A; Philip, Sam; Olson, John A; Sharp, Peter F
2007-01-21
Screening programmes for diabetic retinopathy are being introduced in the United Kingdom and elsewhere. These require large numbers of retinal images to be manually graded for the presence of disease. Automation of image grading would have a number of benefits. However, an important prerequisite for automation is the accurate location of the main anatomical features in the image, notably the optic disc and the fovea. The locations of these features are necessary so that lesion significance, image field of view and image clarity can be assessed. This paper describes methods for the robust location of the optic disc and fovea. The elliptical form of the major retinal blood vessels is used to obtain approximate locations, which are refined based on the circular edge of the optic disc and the local darkening at the fovea. The methods have been tested on 1056 sequential images from a retinal screening programme. Positional accuracy was better than 0.5 of a disc diameter in 98.4% of cases for optic disc location, and in 96.5% of cases for fovea location. The methods are sufficiently accurate to form an important and effective component of an automated image grading system for diabetic retinopathy screening.
Automatic detection of retinal anatomy to assist diabetic retinopathy screening
NASA Astrophysics Data System (ADS)
Fleming, Alan D.; Goatman, Keith A.; Philip, Sam; Olson, John A.; Sharp, Peter F.
2007-01-01
Screening programmes for diabetic retinopathy are being introduced in the United Kingdom and elsewhere. These require large numbers of retinal images to be manually graded for the presence of disease. Automation of image grading would have a number of benefits. However, an important prerequisite for automation is the accurate location of the main anatomical features in the image, notably the optic disc and the fovea. The locations of these features are necessary so that lesion significance, image field of view and image clarity can be assessed. This paper describes methods for the robust location of the optic disc and fovea. The elliptical form of the major retinal blood vessels is used to obtain approximate locations, which are refined based on the circular edge of the optic disc and the local darkening at the fovea. The methods have been tested on 1056 sequential images from a retinal screening programme. Positional accuracy was better than 0.5 of a disc diameter in 98.4% of cases for optic disc location, and in 96.5% of cases for fovea location. The methods are sufficiently accurate to form an important and effective component of an automated image grading system for diabetic retinopathy screening.
NASA Astrophysics Data System (ADS)
An, Zhe; Rey, Daniel; Ye, Jingxin; Abarbanel, Henry D. I.
2017-01-01
The problem of forecasting the behavior of a complex dynamical system through analysis of observational time-series data becomes difficult when the system expresses chaotic behavior and the measurements are sparse, in both space and/or time. Despite the fact that this situation is quite typical across many fields, including numerical weather prediction, the issue of whether the available observations are "sufficient" for generating successful forecasts is still not well understood. An analysis by Whartenby et al. (2013) found that in the context of the nonlinear shallow water equations on a β plane, standard nudging techniques require observing approximately 70 % of the full set of state variables. Here we examine the same system using a method introduced by Rey et al. (2014a), which generalizes standard nudging methods to utilize time delayed measurements. We show that in certain circumstances, it provides a sizable reduction in the number of observations required to construct accurate estimates and high-quality predictions. In particular, we find that this estimate of 70 % can be reduced to about 33 % using time delays, and even further if Lagrangian drifter locations are also used as measurements.
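For readers unfamiliar with nudging, the sketch below shows the standard (non-delayed) version on the Lorenz-63 system, a small stand-in for the shallow water model: the estimate is relaxed toward observations of a single state variable through a coupling term. The paper's generalization to time-delayed measurements is not reproduced here, and the gain and system are illustrative choices.

    import numpy as np

    def lorenz(u, s=10.0, r=28.0, b=8.0 / 3.0):
        x, y, z = u
        return np.array([s * (y - x), x * (r - z) - y, x * y - b * z])

    dt, steps, gain = 0.01, 5000, 20.0
    rng = np.random.default_rng(2)
    truth = np.array([1.0, 1.0, 1.0])
    est = truth + rng.standard_normal(3)      # wrong initial condition

    for _ in range(steps):
        truth = truth + dt * lorenz(truth)    # forward Euler, for brevity
        obs = truth[0]                        # only x is observed (sparse data)
        nudge = np.array([gain * (obs - est[0]), 0.0, 0.0])
        est = est + dt * (lorenz(est) + nudge)

    print("final state error:", np.linalg.norm(truth - est))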
Efficient Statistically Accurate Algorithms for the Fokker-Planck Equation in Large Dimensions
NASA Astrophysics Data System (ADS)
Chen, N.; Majda, A.
2017-12-01
Solving the Fokker-Planck equation for high-dimensional complex turbulent dynamical systems is an important and practical issue. However, most traditional methods suffer from the curse of dimensionality and have difficulties in capturing the fat-tailed, highly intermittent probability density functions (PDFs) of complex systems in turbulence, neuroscience and excitable media. In this article, efficient statistically accurate algorithms are developed for solving both the transient and the equilibrium solutions of Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures. The algorithms involve a hybrid strategy that requires only a small number of ensembles. Here, a conditional Gaussian mixture in a high-dimensional subspace via an extremely efficient parametric method is combined with a judicious non-parametric Gaussian kernel density estimation in the remaining low-dimensional subspace. In particular, the parametric method, which is based on an effective data assimilation framework, provides closed analytical formulae for determining the conditional Gaussian distributions in the high-dimensional subspace. Therefore, it is computationally efficient and accurate. The full non-Gaussian PDF of the system is then given by a Gaussian mixture. Different from traditional particle methods, each conditional Gaussian distribution here covers a significant portion of the high-dimensional PDF. Therefore a small number of ensembles is sufficient to recover the full PDF, which overcomes the curse of dimensionality. Notably, the mixture distribution has a significant skill in capturing the transient behavior with fat tails of the high-dimensional non-Gaussian PDFs, and this facilitates the algorithms in accurately describing the intermittency and extreme events in complex turbulent systems. It is shown in a stringent set of test problems that the method requires only O(100) ensembles to successfully recover the highly non-Gaussian transient PDFs in up to 6 dimensions with only small errors.
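A minimal sketch of the hybrid density estimate described above, under simplified, synthetic assumptions: each ensemble member contributes a kernel in the low-dimensional variables paired with a closed-form Gaussian in the high-dimensional variables, and the full PDF is the mixture average. The conditional means and covariance below are placeholders for the data-assimilation formulae in the paper.

    import numpy as np
    from scipy.stats import multivariate_normal

    rng = np.random.default_rng(3)
    N = 100                                # small ensemble suffices
    u1 = rng.standard_normal(N)            # samples of the low-dim variables
    # Placeholder conditional Gaussian statistics for the high-dim variables.
    means = np.column_stack([u1, -u1])     # (N, 2) conditional means
    cov = 0.3 * np.eye(2)                  # conditional covariance

    def density(x1, x2, h=0.3):
        """Hybrid PDF: kernel in the low-dim part times each member's Gaussian."""
        k = np.exp(-0.5 * ((x1 - u1) / h) ** 2) / (h * np.sqrt(2 * np.pi))
        g = np.array([multivariate_normal.pdf(x2, m, cov) for m in means])
        return np.mean(k * g)

    print(density(0.0, np.array([0.0, 0.0])))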
Komar, Debra
2003-07-01
In July 1995, the town of Srebrenica fell to Bosnian-Serb forces, leaving more than 7000 Muslim men missing and presumed dead. Anthropologists participating in the identification process were faced with a unique problem: the victims appeared identical. All were adult males of a single ethnic group. Decomposition as well as the absence of antemortem (AM) medical and dental records confounded identification. As of December 1999, only 63 men had been positively identified using DNA, personal effects, and identification papers. Are current anthropological methods of sex, age, and stature estimation and AM trauma assessment sufficiently accurate to differentiate the remaining victims and aid in their identification? Comparisons of AM information reported by relatives with postmortem examination records for 59 of the 63 identified individuals indicated that while all individuals were sexed correctly, only 42.4% were accurately aged and 29.4% had a stature estimate that included their reported height.
Research about the high precision temperature measurement
NASA Astrophysics Data System (ADS)
Lin, J.; Yu, J.; Zhu, X.; Zeng, Z.; Deng, Y.
2012-12-01
A high precision temperature control system is one of the most important supporting conditions for a tunable birefringent filter. As a first step, we investigated several high precision temperature measurement methods for it. Firstly, circuits with a 24-bit ADC as the sensor reader were carefully designed. Secondly, an ARM processor is used as the central processing unit; it provides sufficient reading and processing capability. Thirdly, three kinds of sensors were tested: a PT100, a Dale 01T1002-5 thermistor, and a Wheatstone bridge (constructed from pure copper and manganin). The measurement resolution with all three sensors is better than 0.001 °C, which is sufficient for temperature control with 0.01 °C stability. Comparatively, the Dale 01T1002-5 thermistor gave the most accurate temperature at the key point, and the Wheatstone bridge gave the most accurate mean temperature of the whole layer; both will be used in our future temperature control system.
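As an illustration of the PT100 branch described above, the sketch below converts a ratiometric 24-bit ADC reading to temperature using the standard IEC 60751 Callendar-Van Dusen coefficients for T >= 0 °C; the reference-resistor value and the ADC code are placeholder circuit assumptions, not the values from this system.

    import math

    A, B = 3.9083e-3, -5.775e-7   # IEC 60751 coefficients, valid for T >= 0 C
    R0 = 100.0                    # PT100 nominal resistance at 0 C, ohms

    def pt100_temperature(code, r_ref=400.0, full_scale=2**23):
        """Convert a ratiometric 24-bit ADC code to temperature in Celsius."""
        r = r_ref * code / full_scale               # sensor resistance, ohms
        # Invert R = R0 * (1 + A*T + B*T^2) with the quadratic formula.
        return (-A + math.sqrt(A * A - 4 * B * (1 - r / R0))) / (2 * B)

    print(pt100_temperature(2202000))  # ~105 ohm -> roughly 12.8 C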
Bai, Penggang; Du, Min; Ni, Xiaolei; Ke, Dongzhong; Tong, Tong
2017-01-01
The combination of external-beam radiotherapy (EBRT) and high-dose-rate brachytherapy (HDR-BT) is a standard form of treatment for patients with locally advanced uterine cervical cancer. Personalized radiotherapy in cervical cancer requires efficient and accurate dose planning and assessment across these types of treatment. To achieve radiation dose assessment, accurate mapping of the dose distribution from HDR-BT onto EBRT is extremely important. However, few systems can achieve robust dose fusion and determine the accumulated dose distribution during the entire course of treatment. We have therefore developed a toolbox (FZUImageReg), which is a user-friendly dose fusion system based on hybrid image registration for radiation dose assessment in cervical cancer radiotherapy. The main part of the software consists of a collection of medical image registration algorithms and a modular design with a user-friendly interface, which allows users to quickly configure, test, monitor, and compare different registration methods for a specific application. Owing to the large deformation, the direct application of conventional state-of-the-art image registration methods is not sufficient for the accurate alignment of EBRT and HDR-BT images. To solve this problem, a multi-phase non-rigid registration method using local landmark-based free-form deformation is proposed for the locally large deformation between EBRT and HDR-BT images, followed by intensity-based free-form deformation. With the resulting transformation, the software also provides a dose mapping function according to the deformation field. The total dose distribution during the entire course of treatment can then be presented. Experimental results clearly show that the proposed system can achieve accurate registration between EBRT and HDR-BT images and provide radiation dose warping and fusion for dose assessment in cervical cancer radiotherapy with high accuracy and efficiency. PMID:28388623
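The dose-mapping step described above amounts to resampling the HDR-BT dose grid through the deformation field produced by the registration. A minimal two-dimensional sketch, with a synthetic constant shift standing in for the toolbox's actual deformation field and toy dose grids in place of clinical data:

    import numpy as np
    from scipy.ndimage import map_coordinates

    dose_hdr = np.zeros((64, 64))
    dose_hdr[20:30, 20:30] = 2.0          # toy HDR-BT dose grid (Gy)

    # Deformation field mapping each EBRT grid point into the HDR-BT frame;
    # a uniform 3-pixel shift stands in for the registration output.
    yy, xx = np.mgrid[0:64, 0:64].astype(float)
    field_y, field_x = yy + 3.0, xx + 3.0

    # Warp the brachytherapy dose onto the EBRT grid, then accumulate.
    dose_warped = map_coordinates(dose_hdr, [field_y, field_x], order=1, cval=0.0)
    dose_ebrt = np.full((64, 64), 1.8)    # toy EBRT dose (Gy)
    total = dose_ebrt + dose_warped
    print(total.max())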
Neural Activity Reveals Preferences Without Choices
Smith, Alec; Bernheim, B. Douglas; Camerer, Colin
2014-01-01
We investigate the feasibility of inferring the choices people would make (if given the opportunity) based on their neural responses to the pertinent prospects when they are not engaged in actual decision making. The ability to make such inferences is of potential value when choice data are unavailable, or limited in ways that render standard methods of estimating choice mappings problematic. We formulate prediction models relating choices to “non-choice” neural responses and use them to predict out-of-sample choices for new items and for new groups of individuals. The predictions are sufficiently accurate to establish the feasibility of our approach. PMID:25729468
Chemistry by Way of Density Functional Theory
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Ricca, Alessandra; Partridge, Harry; Langhoff, Stephen R.; Arnold, James O. (Technical Monitor)
1996-01-01
In this work we demonstrate that density functional theory (DFT) methods make an important contribution to understanding chemical systems and are an important additional method for the computational chemist. We report calibration calculations obtained with different functionals for the 55 G2 molecules to justify our selection of the B3LYP functional. We show that accurate geometries and vibrational frequencies obtained at the B3LYP level can be combined with traditional methods to simplify the calculation of accurate heats of formation. We illustrate the application of the B3LYP approach to a variety of chemical problems from the vibrational frequencies of polycyclic aromatic hydrocarbons to transition metal systems. We show that the B3LYP method typically performs better than the MP2 method at a significantly lower computational cost. Thus the B3LYP method allows us to extend our studies to much larger systems while maintaining a high degree of accuracy. We show that for transition metal systems, the B3LYP bond energies are typically of sufficient accuracy that they can be used to explain experimental trends and even differentiate between different experimental values. We show that for boron clusters the B3LYP energetics are not as good as for many of the other systems presented, but even in this case the B3LYP approach is able to help understand the experimental trends.
Simplified method for detecting tritium contamination in plants and soil
Andraski, Brian J.; Sandstrom, M.W.; Michel, R.L.; Radyk, J.C.; Stonestrom, David A.; Johnson, M.J.; Mayers, C.J.
2003-01-01
Cost-effective methods are needed to identify the presence and distribution of tritium near radioactive waste disposal and other contaminated sites. The objectives of this study were to (i) develop a simplified sample preparation method for determining tritium contamination in plants and (ii) determine if plant data could be used as an indicator of soil contamination. The method entailed collection and solar distillation of plant water from foliage, followed by filtration and adsorption of scintillation-interfering constituents on a graphite-based solid phase extraction (SPE) column. The method was evaluated using samples of creosote bush [Larrea tridentata (Sessé & Moc. ex DC.) Coville], an evergreen shrub, near a radioactive disposal area in the Mojave Desert. Laboratory tests showed that a 2-g SPE column was necessary and sufficient for accurate determination of known tritium concentrations in plant water. Comparisons of tritium concentrations in plant water determined with the solar distillation–SPE method and the standard (and more laborious) toluene-extraction method showed no significant difference between methods. Tritium concentrations in plant water and in water vapor of root-zone soil also showed no significant difference between methods. Thus, the solar distillation–SPE method provides a simple and cost-effective way to identify plant and soil contamination. The method is of sufficient accuracy to facilitate collection of plume-scale data and optimize placement of more sophisticated (and costly) monitoring equipment at contaminated sites. Although work to date has focused on one desert plant, the approach may be transferable to other species and environments after site-specific experiments.
Hughes, Paul; Deng, Wenjie; Olson, Scott C; Coombs, Robert W; Chung, Michael H; Frenkel, Lisa M
2016-03-01
Accurate analysis of minor populations of drug-resistant HIV requires analysis of a sufficient number of viral templates. We assessed the effect of experimental conditions on the analysis of HIV pol 454 pyrosequences generated from plasma using (1) the "Insertion-deletion (indel) and Carry Forward Correction" (ICC) pipeline, which clusters sequence reads using a nonsubstitution approach and can correct for indels and carry forward errors, and (2) the "Primer Identification (ID)" method, which facilitates construction of a consensus sequence to correct for sequencing errors and allelic skewing. The Primer ID and ICC methods produced similar estimates of viral diversity, but differed in the number of sequence variants generated. Sequence preparation for ICC was comparably simple, but was limited by an inability to assess the number of templates analyzed and allelic skewing. The more costly Primer ID method corrected for allelic skewing and provided the number of viral templates analyzed, which revealed that amplifiable HIV templates varied across specimens and did not correlate with clinical viral load. This latter observation highlights the value of the Primer ID method, which by determining the number of templates amplified, enables more accurate assessment of minority species in the virus population, which may be relevant to prescribing effective antiretroviral therapy.
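A minimal sketch of the Primer ID idea referred to above: reads are grouped by their primer tag, a per-position majority vote collapses PCR resampling and sequencing errors into one consensus per original template, and the number of distinct tags gives the template count. Tag parsing, quality filtering, and alignment from the actual protocol are omitted.

    from collections import Counter, defaultdict

    def primer_id_consensus(reads):
        """reads: list of (primer_id, sequence) pairs, equal-length sequences."""
        groups = defaultdict(list)
        for tag, seq in reads:
            groups[tag].append(seq)
        consensus = {}
        for tag, seqs in groups.items():
            consensus[tag] = "".join(
                Counter(col).most_common(1)[0][0] for col in zip(*seqs))
        return consensus       # one consensus sequence per template

    reads = [("AAGT", "ACGTT"), ("AAGT", "ACGAT"), ("AAGT", "ACGTT"),
             ("CCTA", "ACGTC")]
    cons = primer_id_consensus(reads)
    print(len(cons), "templates:", cons)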
Accurate calibration and control of relative humidity close to 100% by X-raying a DOPC multilayer
Ma, Yicong; Ghosh, Sajal K.; Bera, Sambhunath; ...
2015-01-01
In this study, we have designed a compact sample chamber that can achieve accurate and continuous control of the relative humidity (RH) in the vicinity of 100%. A 1,2-dioleoyl-sn-glycero-3-phosphocholine (DOPC) multilayer can be used as a humidity sensor by measuring its inter-layer repeat distance (d-spacing) via X-ray diffraction. We convert from DOPC d-spacing to RH according to a theory given in the literature and previously measured data of DOPC multilamellar vesicles in polyvinylpyrrolidone (PVP) solutions. This curve can be used for calibration of RH close to 100%, a regime where conventional sensors do not have sufficient accuracy. We demonstrate that this control method can provide RH accuracies of 0.1 to 0.01%, which is a factor of 10–100 improvement compared to existing methods of humidity control. Our method provides fine tuning capability of RH continuously for a single sample, whereas the PVP solution method requires new samples to be made for each PVP concentration. The use of this cell also potentially removes the need for an X-ray or neutron beam to pass through bulk water if one wishes to work close to biologically relevant conditions of nearly 100% RH.
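In practice, the calibration described above reduces to a lookup from a measured d-spacing to RH. A sketch with numpy interpolation; the table values below are invented placeholders, not the published DOPC calibration curve.

    import numpy as np

    # Placeholder calibration: DOPC d-spacing (angstrom) vs relative humidity (%).
    d_tab = np.array([55.0, 57.0, 59.0, 61.0, 63.0])
    rh_tab = np.array([98.0, 99.0, 99.5, 99.9, 100.0])

    def humidity_from_spacing(d):
        """Interpolate RH from an X-ray-measured DOPC repeat distance."""
        return np.interp(d, d_tab, rh_tab)

    print(humidity_from_spacing(60.0))  # -> 99.7 on this placeholder curve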
al-Owais, A.; al-Suwaidi, K.; Amiri, N.; Carter, A. O.; Hossain, M. M.; Sheek-Hussein, M. M.
2000-01-01
INTRODUCTION: Hepatitis B is of major public health importance. Accurate information on its occurrence, with particular reference to the prevalence of immunity and chronic infection (marked by the presence of hepatitis B core antibody and surface antigen, respectively, in serum), is essential for planning public health programmes for the control of the disease. The generation of marker prevalence data through serological surveys is costly and time-consuming. The present study in Al Ain Medical District, United Arab Emirates, investigated the possibility of obtaining sufficiently accurate marker prevalence estimates from existing data to plan public health programmes. METHODS: Two antenatal screening databases, one student serological survey database, one immunization programme database and one pre-marriage screening database containing information on marker prevalence were identified. Epidemiological data were abstracted from these databases and analysed. RESULTS: The data showed that the prevalence of hepatitis B surface antigen and the prevalence of core antibody in young citizens in 1998 were approximately 2% and 14% respectively, that any immunization campaign aimed at citizens of the United Arab Emirates should target teenagers as they had the highest risk of acquiring the disease, and that pre-immunization screening of young adults would be wasteful. However, the data did not yield information on the prevalence of hepatitis B surface antigen and core antibody in other population subgroups of public health significance. DISCUSSION: While data generated by the study are sufficient to support a hepatitis B immunization programme targeted at teenaged citizens, more accurate data, generated by a well-designed serological survey, would be essential for optimal public health planning. PMID:11143192
Kozma, Bence; Hirsch, Edit; Gergely, Szilveszter; Párta, László; Pataki, Hajnalka; Salgó, András
2017-10-25
In this study, near-infrared (NIR) and Raman spectroscopy were compared in parallel to predict the glucose concentration of Chinese hamster ovary cell cultivations. A shake flask model system was used to quickly generate spectra similar to bioreactor cultivations, therefore accelerating the development of a working model prior to actual cultivations. Automated variable selection and several pre-processing methods were tested iteratively during model development using spectra from six shake flask cultivations. The target was to achieve the lowest error of prediction for the glucose concentration in two independent shake flasks. The best model was then used to test the scalability of the two techniques by predicting spectra of a 10 l and a 100 l scale bioreactor cultivation. The NIR spectroscopy based model could follow the trend of the glucose concentration, but it was not sufficiently accurate for bioreactor monitoring. On the other hand, the Raman spectroscopy based model predicted the concentration of glucose in both cultivation scales sufficiently accurately, with an error around 4 mM (0.72 g/l), which is satisfactory for the on-line bioreactor monitoring purposes of the biopharma industry. Therefore, the shake flask model system was proven to be suitable for scalable spectroscopic model development.
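A minimal sketch of this kind of chemometric calibration, assuming partial least squares regression (a common choice for NIR/Raman glucose models; the paper's actual pre-processing and variable selection are not reproduced) and entirely synthetic spectra:

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(4)
    n_wavenumbers = 600
    glucose = rng.uniform(5, 40, size=120)     # mM, synthetic reference values
    band = np.exp(-0.5 * ((np.arange(n_wavenumbers) - 300) / 15.0) ** 2)
    spectra = (glucose[:, None] * band         # analyte band scales with conc.
               + rng.standard_normal((120, n_wavenumbers)))  # noise/background

    pls = PLSRegression(n_components=5)
    pls.fit(spectra[:90], glucose[:90])
    pred = pls.predict(spectra[90:]).ravel()
    rmsep = np.sqrt(np.mean((pred - glucose[90:]) ** 2))
    print(f"RMSEP: {rmsep:.2f} mM")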
Fritscher, Karl; Schuler, Benedikt; Link, Thomas; Eckstein, Felix; Suhm, Norbert; Hänni, Markus; Hengg, Clemens; Schubert, Rainer
2008-01-01
Fractures of the proximal femur are one of the principal causes of mortality among elderly persons. Traditional methods for the determination of femoral fracture risk rely on measurements of bone mineral density (BMD). However, BMD alone is not sufficient to predict bone failure load for an individual patient, and additional parameters have to be determined for this purpose. In this work, an approach that uses statistical models of appearance to identify relevant regions and parameters for the prediction of biomechanical properties of the proximal femur is presented. By using Support Vector Regression, the proposed model-based approach is capable of predicting two different biomechanical parameters accurately and fully automatically in two different testing scenarios.
Rational approach for assumed stress finite elements
NASA Technical Reports Server (NTRS)
Pian, T. H. H.; Sumihara, K.
1984-01-01
A new method for the formulation of hybrid elements by the Hellinger-Reissner principle is established by expanding the essential terms of the assumed stresses as complete polynomials in the natural coordinates of the element. The equilibrium conditions are imposed in a variational sense through the internal displacements which are also expanded in the natural coordinates. The resulting element possesses all the ideal qualities, i.e. it is invariant, it is less sensitive to geometric distortion, it contains a minimum number of stress parameters and it provides accurate stress calculations. For the formulation of a 4-node plane stress element, a small perturbation method is used to determine the equilibrium constraint equations. The element has been proved to be always rank sufficient.
Method and apparatus for radiometer star sensing
NASA Technical Reports Server (NTRS)
Wilcox, Jack E. (Inventor)
1989-01-01
A method and apparatus for determining the orientation of the optical axis of radiometer instruments mounted on a satellite involves a star sensing technique. The technique makes use of a servo system to orient the scan mirror of the radiometer into the path of a sufficiently bright star such that motion of the satellite will cause the star's light to impinge on the scan mirror and then the visible light detectors of the radiometer. The light impinging on the detectors is converted to an electronic signal whereby, knowing the position of the star relative to appropriate earth coordinates and the time of transition of the star image through the detector array, the orientation of the optical axis of the instrument relative to earth coordinates can be accurately determined.
Grimaldi, F.S.
1957-01-01
This paper presents a selective iodate separation of thorium from a nitric acid medium containing d-tartaric acid and hydrogen peroxide. The catalytic decomposition of hydrogen peroxide is prevented by the use of 8-quinolinol. A few micrograms of thorium are separated sufficiently cleanly from 30 mg of such oxides as cerium, zirconium, titanium, niobium, tantalum, scandium, or iron with one iodate precipitation to allow an accurate determination of thorium with the thoron-mesotartaric acid spectrophotometric method. The method is successful for the determination of 0.001% or more of thorium dioxide in silicate rocks and for 0.01% or more in black sand, monazite, thorite, thorianite, eschynite, euxenite, and zircon.
Examination of products of conception terminated after prenatal investigation.
Knowles, S
1986-01-01
A large number of district general hospitals have access to diagnostic ultrasonography and other methods of prenatal diagnosis, resulting in an increased supply of freshly terminated malformed fetuses to general histopathology departments, and there is now more open discussion of malformation and greater concern over fetal wastage. General pathologists are therefore under greater pressure to produce complete and detailed descriptions of a wide range of often complex anomalies. The dismissal of specimens as "multiple congenital anomalies" is becoming increasingly unacceptable to couples who wish to embark on further pregnancies and to their medical attendants. As in other fields an understanding of the methods and terminology in clinical use and a consistent diagnostic approach should help pathologists to extract sufficient information for accurate counselling. PMID:3537013
Calibration of a monochromator using a lambdameter
NASA Astrophysics Data System (ADS)
Schwarzmaier, T.; Baumgartner, A.; Gege, P.; Lenhard, K.
2013-10-01
The standard procedure for wavelength calibration of monochromators in the visible and near-infrared wavelength range uses low-pressure gas discharge lamps with spectrally well-known emission lines as the primary wavelength standard. The calibration of a monochromator in the wavelength range of 350 to 2500 nm usually takes some days due to the huge number of single measurements required. Moreover, the usable emission lines are not, for all purposes, sufficiently dense or at the appropriate wavelengths. To get faster results for freely selectable wavelengths, a new method for monochromator characterization was tested. It is based on measurements with a lambdameter taken at equidistant angles distributed over the grating's entire angular range. This method provides a very accurate calibration and needs only about two hours of measuring time.
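The calibration fit implied above can be sketched as follows, assuming a simplified grating equation of the form wavelength = c * sin(theta + phi) relating grating angle to wavelength; the model form, the constants, and the synthetic lambdameter readings are illustrative assumptions, not the instrument's actual parameters.

    import numpy as np
    from scipy.optimize import curve_fit

    def model(theta, c, phi):
        """Simplified grating equation: wavelength (nm) vs grating angle (rad)."""
        return c * np.sin(theta + phi)

    # Equidistant grating angles with synthetic lambdameter readings.
    theta = np.deg2rad(np.linspace(10, 50, 21))
    lam = model(theta, 3200.0, 0.02)          # placeholder constants
    lam += np.random.default_rng(5).normal(0, 0.01, theta.size)

    (c, phi), _ = curve_fit(model, theta, lam, p0=(3000.0, 0.0))
    print(f"fitted c = {c:.2f} nm, phi = {np.rad2deg(phi):.4f} deg")
    # The fitted model then maps any requested wavelength to a grating angle.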
NASA Astrophysics Data System (ADS)
Brodyn, M. S.; Starkov, V. N.
2007-07-01
It is shown that in laser experiments performed with an 'imperfect' setup, where instrumental distortions are considerable, sufficiently accurate results can be obtained by the modern methods of computational physics. It is found for the first time that a new instrumental function, the 'cap' function (a close relative of the Gaussian curve), proves to be exactly what is needed in laser experiments. A new mathematical model of the measurement path and a carefully performed computational experiment show that a light beam transmitted through a mesoporous film actually has a narrower intensity distribution than the detected beam, and that the amplitude of the real intensity distribution is twice as large as that of the measured intensity distribution.
A new cation-exchange method for accurate field speciation of hexavalent chromium
Ball, J.W.; McCleskey, R. Blaine
2003-01-01
A new method for field speciation of Cr(VI) has been developed to meet present stringent regulatory standards and to overcome the limitations of existing methods. The method consists of passing a water sample through strong acid cation-exchange resin at the field site, where Cr(III) is retained while Cr(VI) passes into the effluent and is preserved for later determination. The method is simple, rapid, portable, and accurate, and makes use of readily available, inexpensive materials. Cr(VI) concentrations are determined later in the laboratory using any elemental analysis instrument sufficiently sensitive to measure the Cr(VI) concentrations of interest. The new method allows measurement of Cr(VI) concentrations as low as 0.05 µg l-1, storage of samples for at least several weeks prior to analysis, and use of readily available analytical instrumentation. Cr(VI) can be separated from Cr(III) between pH 2 and 11 at Cr(III)/Cr(VI) concentration ratios as high as 1000. The new method has demonstrated excellent comparability with two commonly used methods, the Hach Company direct colorimetric method and USEPA method 218.6. The new method is superior to the Hach direct colorimetric method owing to its relative sensitivity and simplicity. The new method is superior to USEPA method 218.6 in the presence of Fe(II) concentrations up to 1 mg l-1 and Fe(III) concentrations up to 10 mg l-1. Time stability of preserved samples is a significant advantage over the 24-h time constraint specified for USEPA method 218.6.
Analysis of the electromagnetic scattering from an inlet geometry with lossy walls
NASA Technical Reports Server (NTRS)
Myung, N. H.; Pathak, P. H.; Chunang, C. D.
1985-01-01
One of the primary goals is to develop an approximate but sufficiently accurate analysis for the problem of electromagnetic (EM) plane wave scattering by an open ended, perfectly-conducting, semi-infinite hollow circular waveguide (or duct) with a thin, uniform layer of lossy or absorbing material on its inner wall, and with a simple termination inside. The less difficult but useful problem of the EM scattering by a two-dimensional (2-D), semi-infinite parallel plate waveguide with an impedance boundary condition on the inner walls was chosen initially for analysis. The impedance boundary condition in this problem serves to model a thin layer of lossy dielectric/ferrite coating on the otherwise perfectly-conducting interior waveguide walls. An approximate but efficient and accurate ray solution was obtained recently. That solution is presently being extended to the case of a moderately thick dielectric/ferrite coating on the walls so as to be valid for situations where the impedance boundary condition may not remain sufficiently accurate.
Accurate approximation of in-ecliptic trajectories for E-sail with constant pitch angle
NASA Astrophysics Data System (ADS)
Huo, Mingying; Mengali, Giovanni; Quarta, Alessandro A.
2018-05-01
Propellantless continuous-thrust propulsion systems, such as electric solar wind sails, may be successfully used for new space missions, especially those requiring high-energy orbit transfers. When the mass-to-thrust ratio is sufficiently large, the spacecraft trajectory is characterized by long flight times with a number of revolutions around the Sun. The corresponding mission analysis, especially when addressed within an optimal context, requires a significant amount of simulation effort. Analytical trajectories are therefore useful aids in a preliminary phase of mission design, even though exact solutions are very difficult to obtain. The aim of this paper is to present an accurate analytical approximation of the spacecraft trajectory generated by an electric solar wind sail with a constant pitch angle, using the latest mathematical model of the thrust vector. Assuming a heliocentric circular parking orbit and a two-dimensional scenario, the simulation results show that the proposed equations are able to accurately describe the actual spacecraft trajectory for a long time interval when the propulsive acceleration magnitude is sufficiently small.
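A minimal sketch of the underlying two-dimensional dynamics, assuming the simple 1/r thrust-magnitude scaling often used for E-sails and a constant pitch angle between the thrust vector and the local radial direction; the paper's refined thrust model and its analytical approximation are not reproduced here, and the characteristic acceleration is a placeholder.

    import numpy as np
    from scipy.integrate import solve_ivp

    MU = 1.327e20            # Sun's gravitational parameter, m^3/s^2
    R0 = 1.496e11            # 1 au: radius of the circular parking orbit, m
    AC = 1e-4                # assumed characteristic acceleration at R0, m/s^2
    ALPHA = np.deg2rad(20)   # constant pitch angle

    def rhs(t, s):
        r, theta, vr, vt = s                 # polar state, vt = r * dtheta/dt
        a = AC * (R0 / r)                    # assumed 1/r thrust scaling
        ar, at = a * np.cos(ALPHA), a * np.sin(ALPHA)
        return [vr, vt / r,
                vt**2 / r - MU / r**2 + ar,
                -vr * vt / r + at]

    v0 = np.sqrt(MU / R0)                    # circular-orbit speed
    sol = solve_ivp(rhs, (0, 3.15e7), [R0, 0.0, 0.0, v0], rtol=1e-8)
    print(f"radius after one year: {sol.y[0, -1] / R0:.3f} au")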
Position Error Covariance Matrix Validation and Correction
NASA Technical Reports Server (NTRS)
Frisbee, Joe, Jr.
2016-01-01
In order to calculate operationally accurate collision probabilities, the position error covariance matrices predicted at times of closest approach must be sufficiently accurate representations of the position uncertainties. This presentation will discuss why the Gaussian distribution is a reasonable expectation for the position uncertainty and how this assumed distribution type is used in the validation and correction of position error covariance matrices.
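A sketch of the kind of consistency check implied above, assuming Gaussian errors: if the predicted covariance is realistic, the squared Mahalanobis distances of the actual position errors follow a chi-square distribution with 3 degrees of freedom, and the ratio of their mean to 3 suggests a scale correction. The data below are synthetic.

    import numpy as np

    rng = np.random.default_rng(6)
    P = np.diag([100.0, 400.0, 25.0])      # predicted position covariance, m^2
    true_cov = 1.5 * P                     # pretend the prediction is too small
    errors = rng.multivariate_normal(np.zeros(3), true_cov, size=200)

    Pinv = np.linalg.inv(P)
    m2 = np.einsum("ij,jk,ik->i", errors, Pinv, errors)  # Mahalanobis^2

    # For a consistent covariance, E[m2] = 3 (chi-square with 3 dof).
    scale = m2.mean() / 3.0
    print(f"estimated covariance scale factor: {scale:.2f}")  # close to 1.5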
Ab initio structure determination from prion nanocrystals at atomic resolution by MicroED
Sawaya, Michael R.; Rodriguez, Jose; Cascio, Duilio; Collazo, Michael J.; Shi, Dan; Reyes, Francis E.; Gonen, Tamir; Eisenberg, David S.
2016-01-01
Electrons, because of their strong interaction with matter, produce high-resolution diffraction patterns from tiny 3D crystals only a few hundred nanometers thick in a frozen-hydrated state. This discovery offers the prospect of facile structure determination of complex biological macromolecules, which cannot be coaxed to form crystals large enough for conventional crystallography or cannot easily be produced in sufficient quantities. Two potential obstacles stand in the way. The first is a phenomenon known as dynamical scattering, in which multiple scattering events scramble the recorded electron diffraction intensities so that they are no longer informative of the crystallized molecule. The second obstacle is the lack of a proven means of de novo phase determination, as is required if the molecule crystallized is insufficiently similar to one that has been previously determined. We show with four structures of the amyloid core of the Sup35 prion protein that, if the diffraction resolution is high enough, sufficiently accurate phases can be obtained by direct methods with the cryo-EM method microelectron diffraction (MicroED), just as in X-ray diffraction. The success of these four experiments dispels the concern that dynamical scattering is an obstacle to ab initio phasing by MicroED and suggests that structures of novel macromolecules can also be determined by direct methods. PMID:27647903
A single camera roentgen stereophotogrammetry method for static displacement analysis.
Gussekloo, S W; Janssen, B A; George Vosselman, M; Bout, R G
2000-06-01
A new method to quantify motion or deformation of bony structures has been developed, since quantification is often difficult due to overlaying tissue, and the currently used roentgen stereophotogrammetry method requires significant investment. In our method, a single stationary roentgen source is used, as opposed to the usual two, which, in combination with a fixed radiogram cassette holder, forms a camera with constant interior orientation. By rotating the experimental object, it is possible to achieve a sufficient angle between the various viewing directions, enabling photogrammetric calculations. The photogrammetric procedure was performed on digitised radiograms and involved template matching to increase accuracy. Co-ordinates of spherical markers in the head of a bird (Rhea americana) were calculated with an accuracy of 0.12 mm. When these co-ordinates were used in a deformation analysis, relocations of about 0.5 mm could be accurately determined.
NASA Technical Reports Server (NTRS)
Nielsen, Jack N; Kaattari, George E; Drake, William C
1952-01-01
A simple method is presented for estimating lift, pitching-moment, and hinge-moment characteristics of all-movable wings in the presence of a body as well as the characteristics of wing-body combinations employing such wings. In general, good agreement between the method and experiment was obtained for the lift and pitching moment of the entire wing-body combination and for the lift of the wing in the presence of the body. The method is valid for moderate angles of attack, wing deflection angles, and width of gap between wing and body. The method of estimating hinge moment was not considered sufficiently accurate for triangular all-movable wings. An alternate procedure is proposed based on the experimental moment characteristics of the wing alone. Further theoretical and experimental work is required to substantiate fully the proposed procedure.
NASA Astrophysics Data System (ADS)
Nakashima, Hiroshi; Takatsu, Yuzuru
The goal of this study is to develop a practical and fast simulation tool for soil-tire interaction analysis, in which the finite element method (FEM) and the discrete element method (DEM) are coupled together, and which can be realized on a desktop PC. We have extended our formerly proposed dynamic FE-DE method (FE-DEM) to include practical soil-tire system interaction, where not only the vertical sinkage of a tire but also the travel of a driven tire was considered. Numerical simulation by FE-DEM is stable, and the relationships between variables, such as load-sinkage and sinkage-travel distance, and the gross tractive effort and running resistance characteristics, are obtained. Moreover, the simulation result is accurate enough to predict the maximum drawbar pull for a given tire, once the appropriate parameter values are provided. Therefore, the developed FE-DEM program can be applied with sufficient accuracy to interaction problems in soil-tire systems.
Automated detection of irradiated food with the comet assay.
Verbeek, F; Koppen, G; Schaeken, B; Verschaeve, L
2008-01-01
Food irradiation is the process of exposing food to ionising radiation in order to disinfect, sanitise, sterilise and preserve food or to provide insect disinfestation. Irradiated food should be adequately labelled according to international and national guidelines. In many countries, there are furthermore restrictions to the product-specific maximal dose that can be administered. Therefore, there is a need for methods that allow detection of irradiated food, as well as for methods that provide a reliable dose estimate. In recent years, the comet assay was proposed as a simple, rapid and inexpensive method to fulfil these goals, but further research is required to explore the full potential of this method. In this paper we describe the use of an automated image analysing system to measure DNA comets which allow the discrimination between irradiated and non-irradiated food as well as the set-up of standard dose-response curves, and hence a sufficiently accurate dose estimation.
Jalas, S.; Dornmair, I.; Lehe, R.; ...
2017-03-20
Particle in Cell (PIC) simulations are a widely used tool for the investigation of both laser- and beam-driven plasma acceleration. It is a known issue that the beam quality can be artificially degraded by numerical Cherenkov radiation (NCR) resulting primarily from an incorrectly modeled dispersion relation. Pseudo-spectral solvers featuring infinite order stencils can strongly reduce NCR - or even suppress it - and are therefore well suited to correctly model the beam properties. For efficient parallelization of the PIC algorithm, however, localized solvers are inevitable. Arbitrary order pseudo-spectral methods provide this needed locality. Yet, these methods can again be prone to NCR. Here we show that acceptably low solver orders are sufficient to correctly model the physics of interest, while allowing for parallel computation by domain decomposition.
Four-dimensional MRI using an internal respiratory surrogate derived by dimensionality reduction
NASA Astrophysics Data System (ADS)
Uh, Jinsoo; Ayaz Khan, M.; Hua, Chiaho
2016-11-01
This study aimed to develop a practical and accurate 4-dimensional (4D) magnetic resonance imaging (MRI) method using a non-navigator, image-based internal respiratory surrogate derived by dimensionality reduction (DR). The use of DR has been previously suggested but not implemented for reconstructing 4D MRI, despite its practical advantages. We compared multiple image-acquisition schemes and refined a retrospective-sorting process to optimally implement a DR-derived surrogate. The comparison included an unconventional scheme that acquires paired slices alternately to mitigate the internal surrogate’s dependency on a specific slice location. We introduced ‘target-oriented sorting’, as opposed to conventional binning, to quantify the coherence in retrospectively sorted images, thereby determining the minimal scan time needed for sufficient coherence. This study focused on evaluating the proposed method using digital phantoms which provided unequivocal gold standard. The evaluation indicated that the DR-based respiratory surrogate is highly accurate: the error in amplitude percentile of the surrogate signal was less than 5% with the optimal scheme. Acquiring alternating paired slices was superior to the conventional scheme of acquiring individual slices; the advantage of the unconventional scheme was more pronounced when a substantial phase shift occurred across slice locations. The analysis of coherence across sorted images confirmed the advantage of higher sampling efficiencies in non-navigator respiratory surrogates. We determined that a scan time of 20 s per imaging slice was sufficient to achieve a mean coherence error of less than 1% for the tested respiratory patterns. The clinical applicability of the proposed 4D MRI has been demonstrated with volunteers and patients. The diaphragm motion in 4D MRI was consistent with that in dynamic 2D imaging which was regarded as the gold standard (difference within 1.8 mm on average).
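A minimal sketch of a DR-derived surrogate, assuming PCA as the dimensionality-reduction step (other embeddings could be substituted): each repeatedly acquired 2-D slice is flattened, and the leading principal-component score over time serves as the respiratory surrogate used for retrospective sorting. The image series below is synthetic.

    import numpy as np

    rng = np.random.default_rng(7)
    T, H, W = 300, 32, 32
    phase = np.sin(2 * np.pi * 0.25 * np.arange(T))   # synthetic breathing trace
    frames = (phase[:, None, None]
              * np.linspace(0, 1, H)[None, :, None]   # moving intensity ramp
              + 0.05 * rng.standard_normal((T, H, W)))

    X = frames.reshape(T, -1)
    X = X - X.mean(axis=0)                            # center over time
    u, s, vt = np.linalg.svd(X, full_matrices=False)  # PCA via SVD
    surrogate = u[:, 0] * s[0]                        # leading component score

    corr = np.corrcoef(surrogate, phase)[0, 1]
    print(f"|correlation| with the true phase: {abs(corr):.3f}")  # near 1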
Computing camera heading: A study
NASA Astrophysics Data System (ADS)
Zhang, John Jiaxiang
2000-08-01
An accurate estimate of the motion of a camera is a crucial first step for the 3D reconstruction of sites, objects, and buildings from video. Solutions to the camera heading problem can be readily applied to many areas, such as robotic navigation, surgical operation, video special effects, multimedia, and lately even in internet commerce. From image sequences of a real world scene, the problem is to calculate the directions of the camera translations. The presence of rotations makes this problem very hard. This is because rotations and translations can have similar effects on the images, and are thus hard to tell apart. However, the visual angles between the projection rays of point pairs are unaffected by rotations, and their changes over time contain sufficient information to determine the direction of camera translation. We developed a new formulation of the visual angle disparity approach, first introduced by Tomasi, to the camera heading problem. Our new derivation makes theoretical analysis possible. Most notably, a theorem is obtained that locates all possible singularities of the residual function for the underlying optimization problem. This allows identifying all computation trouble spots beforehand, and to design reliable and accurate computational optimization methods. A bootstrap-jackknife resampling method simultaneously reduces complexity and tolerates outliers well. Experiments with image sequences show accurate results when compared with the true camera motion as measured with mechanical devices.
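The rotation invariance of visual angles that this formulation relies on is easy to verify numerically; a minimal sketch (not the authors' algorithm, variable names invented):

```python
import numpy as np

def visual_angle(p, q):
    """Angle between the projection rays through two image points.

    p, q are ray direction vectors in the camera frame; the angle they
    subtend is unchanged by any rotation of the camera.
    """
    return np.arccos(p @ q / (np.linalg.norm(p) * np.linalg.norm(q)))

rng = np.random.default_rng(0)
p, q = rng.normal(size=3), rng.normal(size=3)

# A random orthonormal matrix (QR decomposition of a random matrix).
R, _ = np.linalg.qr(rng.normal(size=(3, 3)))
assert np.isclose(visual_angle(p, q), visual_angle(R @ p, R @ q))
```

Because the angles are blind to rotation, their changes over time isolate the translational component of camera motion, which is the quantity being estimated.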
A Modeling Approach to Global Land Surface Monitoring with Low Resolution Satellite Imaging
NASA Technical Reports Server (NTRS)
Hlavka, Christine A.; Dungan, Jennifer; Livingston, Gerry P.; Gore, Warren J. (Technical Monitor)
1998-01-01
The effects of changing land use/land cover on global climate and ecosystems due to greenhouse gas emissions and changing energy and nutrient exchange rates are being addressed by federal programs such as NASA's Mission to Planet Earth (MTPE) and by international efforts such as the International Geosphere-Biosphere Program (IGBP). The quantification of these effects depends on accurate estimates of the global extent of critical land cover types such as fire scars in tropical savannas and ponds in Arctic tundra. To address the requirement for accurate areal estimates, methods for producing regional to global maps with satellite imagery are being developed. The only practical way to produce maps over large regions of the globe is with data of coarse spatial resolution, such as Advanced Very High Resolution Radiometer (AVHRR) weather satellite imagery at 1.1 km resolution or European Remote-Sensing Satellite (ERS) radar imagery at 100 m resolution. The accuracy of pixel counts as areal estimates is in doubt, especially for highly fragmented cover types such as fire scars and ponds. Efforts to improve areal estimates from coarse resolution maps have involved regression of apparent area from coarse data versus that from fine resolution in sample areas, but it has proven difficult to acquire sufficient fine scale data to develop the regression. A method for computing accurate estimates from coarse resolution maps using little or no fine data is therefore needed.
Phase unwrapping with a virtual Hartmann-Shack wavefront sensor.
Akondi, Vyas; Falldorf, Claas; Marcos, Susana; Vohnsen, Brian
2015-10-05
The use of a spatial light modulator for implementing a digital phase-shifting (PS) point diffraction interferometer (PDI) allows tunability in fringe spacing and in achieving PS without the need for mechanically moving parts. However, a small amount of detector or scatter noise could affect the accuracy of wavefront sensing. Here, a novel method of wavefront reconstruction incorporating a virtual Hartmann-Shack (HS) wavefront sensor is proposed that allows easy tuning of several wavefront sensor parameters. The proposed method was tested and compared with a Fourier unwrapping method implemented on a digital PS PDI. Rewrapping the Fourier-reconstructed wavefronts resulted in phase maps that matched the original wrapped phase well, and the performance was found to be more stable and accurate than that of conventional methods. Through simulation studies, the superiority of the proposed virtual HS phase unwrapping method is shown in comparison with the Fourier unwrapping method in the presence of noise. Further, combining the two methods could improve accuracy when the signal-to-noise ratio is sufficiently high.
Marzinek, Jan K.; Lakshminarayanan, Rajamani; Goh, Eunice; Huber, Roland G.; Panzade, Sadhana; Verma, Chandra; Bond, Peter J.
2016-01-01
Conformational changes in the envelope proteins of flaviviruses help to expose the highly conserved fusion peptide (FP), a region which is critical to membrane fusion and host cell infection, and which represents a significant target for antiviral drugs and antibodies. In principle, extended timescale atomic-resolution simulations may be used to characterize the dynamics of such peptides. However, the resultant accuracy is critically dependent upon both the underlying force field and sufficient conformational sampling. In the present study, we report a comprehensive comparison of three simulation methods and four force fields comprising a total of more than 40 μs of sampling. Additionally, we describe the conformational landscape of the FP fold across all flavivirus family members. All investigated methods sampled conformations close to available X-ray structures, but exhibited differently populated ensembles. The best force field / sampling combination was sufficiently accurate to predict that the solvated peptide fold is less ordered than in the crystallographic state, which was subsequently confirmed via circular dichroism and spectrofluorometric measurements. Finally, the conformational landscape of a mutant incapable of membrane fusion was significantly shallower than wild-type variants, suggesting that dynamics should be considered when therapeutically targeting FP epitopes. PMID:26785994
LANL* V1.0: a radiation belt drift shell model suitable for real-time and reanalysis applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koller, Josep; Reeves, Geoffrey D; Friedel, Reiner H W
2008-01-01
Space weather modeling, forecasts, and predictions, especially for the radiation belts in the inner magnetosphere, require detailed information about the Earth's magnetic field. Results depend on the magnetic field model and the L* (pron. L-star) values which are used to describe particle drift shells. Space weather models require integrating particle motions along trajectories that encircle the Earth. Numerical integration typically takes on the order of 10^5 calls to a magnetic field model, which makes the L* calculations very slow, in particular when using a dynamic and more accurate magnetic field model. Researchers currently tend to pick simplistic models over more accurate ones, risking large inaccuracies and even wrong conclusions. For example, magnetic field models affect the calculation of electron phase space density by applying adiabatic invariants including the drift shell value L*. We present here a new method using a surrogate model based on a neural network technique to replace the time consuming L* calculations made with modern magnetic field models. The advantage of surrogate models (or meta-models) is that they can compute the same output in a fraction of the time while adding only a marginal error. Our drift shell model LANL* (Los Alamos National Lab L-star) is based on L* calculation using the TSK03 model. The surrogate model has currently been tested and validated only for geosynchronous regions, but the method is generally applicable to any satellite orbit. Computations with the new model are several million times faster compared to the standard integration method while adding less than 1% error. Currently, real-time applications for forecasting and even nowcasting inner magnetospheric space weather are limited partly due to the long computing time of accurate L* values. Without them, real-time applications are limited in accuracy. Reanalysis applications of past conditions in the inner magnetosphere are used to understand physical processes and their effects. Without sufficiently accurate L* values, the interpretation of reanalysis results becomes difficult and uncertain. However, with a method that can calculate accurate L* values orders of magnitude faster, analyzing whole solar cycles' worth of data suddenly becomes feasible.
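As a rough illustration of the surrogate idea (synthetic stand-in data, not the LANL* architecture or its real magnetospheric inputs), a small neural network can replace an expensive function and then be evaluated at negligible cost:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Stand-ins for the real training set: rows of (position, geomagnetic
# indices, ...) against L* values precomputed with a field model.
X = rng.uniform(size=(20000, 6))
y = 4 + 2 * np.sin(X @ rng.normal(size=6))        # placeholder target

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, tol=1e-6)
net.fit(X[:16000], y[:16000])

# Once trained, evaluation is a handful of matrix products, orders of
# magnitude faster than integrating drift trajectories numerically.
rel_err = np.abs(net.predict(X[16000:]) - y[16000:]) / y[16000:]
print(f"median relative error: {np.median(rel_err):.4f}")
```

The expensive numerical integration is paid once, offline, to build the training set; every subsequent query is cheap, which is what makes real-time and whole-solar-cycle reanalysis use plausible.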
Lauriks, Steve; de Wit, Matty A S; Buster, Marcel C A; Fassaert, Thijs J L; van Wifferen, Ron; Klazinga, Niek S
2014-10-01
The current study set out to develop a decision support tool based on the Self-Sufficiency Matrix (Dutch version; SSM-D) for the clinical decision to allocate homeless people to the public mental health care system at the central access point of public mental health care in Amsterdam, The Netherlands. Logistic regression and receiver operating characteristic-curve analyses were used to model professional decisions and establish four decision categories based on SSM-D scores from half of the research population (Total n = 612). The model and decision categories were found to be accurate and reliable in predicting professional decisions in the second half of the population. Results indicate that the decision support tool based on the SSM-D is useful and feasible. The method to develop the SSM-D as a decision support tool could be applied to decision-making processes in other systems and services where the SSM-D has been implemented, to further increase the utility of the instrument.
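A minimal sketch of the modeling approach described (logistic regression of professional decisions on SSM-D scores, followed by ROC analysis on the held-out half); the data shapes, domain count, and scoring range are assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve

# Hypothetical data: rows are SSM-D domain scores, y is the professional
# allocation decision (1 = allocate to public mental health care).
rng = np.random.default_rng(2)
X = rng.integers(1, 6, size=(612, 11)).astype(float)  # 11 domains, scored 1-5
y = (X.mean(axis=1) + rng.normal(scale=0.5, size=612) < 3).astype(int)

train, test = slice(0, 306), slice(306, 612)  # split halves, as in the study
model = LogisticRegression().fit(X[train], y[train])

# ROC analysis on the held-out half; thresholds on the predicted
# probability can then be grouped into a small number of decision
# categories for use as a bedside decision support tool.
fpr, tpr, thresholds = roc_curve(y[test], model.predict_proba(X[test])[:, 1])
```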
On canonical cylinder sections for accurate determination of contact angle in microgravity
NASA Technical Reports Server (NTRS)
Concus, Paul; Finn, Robert; Zabihi, Farhad
1992-01-01
Large shifts of liquid arising from small changes in certain container shapes in zero gravity can be used as a basis for accurately determining contact angle. Canonical geometries for this purpose, recently developed mathematically, are investigated here computationally. It is found that the desired nearly-discontinuous behavior can be obtained and that the shifts of liquid have sufficient volume to be readily observed.
NASA Astrophysics Data System (ADS)
Vincenti, Henri; Vay, Jean-Luc
2018-07-01
The advent of massively parallel supercomputers, with their distributed-memory technology using many processing units, has favored the development of highly-scalable local low-order solvers at the expense of harder-to-scale global very high-order spectral methods. Indeed, FFT-based methods, which were very popular on shared memory computers, have been largely replaced by finite-difference (FD) methods for the solution of many problems, including plasmas simulations with electromagnetic Particle-In-Cell methods. For some problems, such as the modeling of so-called "plasma mirrors" for the generation of high-energy particles and ultra-short radiations, we have shown that the inaccuracies of standard FD-based PIC methods prevent the modeling on present supercomputers at sufficient accuracy. We demonstrate here that a new method, based on the use of local FFTs, enables ultrahigh-order accuracy with unprecedented scalability, and thus for the first time the accurate modeling of plasma mirrors in 3D.
Kim, Sung-Phil; Simeral, John D; Hochberg, Leigh R; Donoghue, John P; Black, Michael J
2010-01-01
Computer-mediated connections between human motor cortical neurons and assistive devices promise to improve or restore lost function in people with paralysis. Recently, a pilot clinical study of an intracortical neural interface system demonstrated that a tetraplegic human was able to obtain continuous two-dimensional control of a computer cursor using neural activity recorded from his motor cortex. This control, however, was not sufficiently accurate for reliable use in many common computer control tasks. Here, we studied several central design choices for such a system including the kinematic representation for cursor movement, the decoding method that translates neuronal ensemble spiking activity into a control signal and the cursor control task used during training for optimizing the parameters of the decoding method. In two tetraplegic participants, we found that controlling a cursor's velocity resulted in more accurate closed-loop control than controlling its position directly and that cursor velocity control was achieved more rapidly than position control. Control quality was further improved over conventional linear filters by using a probabilistic method, the Kalman filter, to decode human motor cortical activity. Performance assessment based on standard metrics used for the evaluation of a wide range of pointing devices demonstrated significantly improved cursor control with velocity rather than position decoding. PMID:19015583
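For reference, the Kalman filter recursion used for such velocity decoding has a compact standard form; a sketch in which the model matrices are assumed to have been fit beforehand to training data (e.g., by least squares on recorded kinematics and spike counts):

```python
import numpy as np

def kalman_decode(spikes, A, W, H, Q, x0, P0):
    """Decode cursor kinematics from binned spike counts.

    The state x holds cursor velocity (and optionally position); spike
    counts relate to the state through the observation model z = Hx + noise.
    A, W : state transition matrix and its noise covariance
    H, Q : observation matrix and its noise covariance
    """
    x, P = x0, P0
    out = []
    for z in spikes:                      # one bin of spike counts at a time
        x, P = A @ x, A @ P @ A.T + W     # predict
        S = H @ P @ H.T + Q
        K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
        x = x + K @ (z - H @ x)           # update with the innovation
        P = (np.eye(len(x)) - K @ H) @ P
        out.append(x.copy())
    return np.array(out)
```

Decoding velocity rather than position, as the study found preferable, means the integrated cursor trajectory stays smooth even when individual bins are noisy.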
An, Zhe; Rey, Daniel; Ye, Jingxin; ...
2017-01-16
The problem of forecasting the behavior of a complex dynamical system through analysis of observational time-series data becomes difficult when the system expresses chaotic behavior and the measurements are sparse, in both space and/or time. Despite the fact that this situation is quite typical across many fields, including numerical weather prediction, the issue of whether the available observations are "sufficient" for generating successful forecasts is still not well understood. An analysis by Whartenby et al. (2013) found that in the context of the nonlinear shallow water equations on a β plane, standard nudging techniques require observing approximately 70% of the full set of state variables. Here we examine the same system using a method introduced by Rey et al. (2014a), which generalizes standard nudging methods to utilize time delayed measurements. We show that in certain circumstances, it provides a sizable reduction in the number of observations required to construct accurate estimates and high-quality predictions. In particular, we find that this estimate of 70% can be reduced to about 33% using time delays, and even further if Lagrangian drifter locations are also used as measurements.
Computational fragment-based screening using RosettaLigand: the SAMPL3 challenge
NASA Astrophysics Data System (ADS)
Kumar, Ashutosh; Zhang, Kam Y. J.
2012-05-01
The SAMPL3 fragment-based virtual screening challenge provides a valuable opportunity for researchers to test their programs, methods and screening protocols in a blind testing environment. We participated in the SAMPL3 challenge and evaluated our virtual fragment screening protocol, which involves RosettaLigand as the core component, by screening a 500-fragment Maybridge library against bovine pancreatic trypsin. Our study reaffirmed that the real test for any virtual screening approach is in a blind testing environment. The analyses presented in this paper also showed that virtual screening performance can be improved if a set of known active compounds is available and parameters and methods that yield better enrichment are selected. Our study also highlighted that to achieve accurate orientation and conformation of ligands within a binding site, selecting an appropriate method to calculate partial charges is important. Another finding is that using multiple receptor ensembles in docking does not always yield better enrichment than individual receptors. On the basis of our results and retrospective analyses from the SAMPL3 fragment screening challenge, we anticipate that the chances of success in a fragment screening process could be increased significantly with careful selection of receptor structures, protein flexibility, sufficient conformational sampling within the binding pocket and accurate assignment of ligand and protein partial charges.
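Enrichment, the figure of merit discussed throughout, is simple to compute; a small sketch assuming lower docking scores rank better (as with Rosetta energies):

```python
def enrichment_factor(scores, is_active, fraction=0.01):
    """EF at a given screened fraction: how many more actives appear in
    the top-ranked subset than would be expected by random selection."""
    ranked = sorted(zip(scores, is_active))     # ascending: best score first
    n_top = max(1, int(len(ranked) * fraction))
    hits_top = sum(active for _, active in ranked[:n_top])
    return (hits_top / n_top) / (sum(is_active) / len(is_active))
```

An EF of 1.0 at any fraction means the protocol is doing no better than chance; comparing EF curves across charge models or receptor ensembles is how the retrospective analyses above are made quantitative.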
Fast CPU-based Monte Carlo simulation for radiotherapy dose calculation.
Ziegenhein, Peter; Pirner, Sven; Ph Kamerling, Cornelis; Oelfke, Uwe
2015-08-07
Monte Carlo (MC) simulations are considered to be the most accurate method for calculating dose distributions in radiotherapy. Their clinical application, however, is still limited by the long runtimes that conventional implementations of MC algorithms require to deliver sufficiently accurate results on high resolution imaging data. In order to overcome this obstacle we developed the software package PhiMC, which is capable of computing precise dose distributions in a sub-minute time frame by leveraging the potential of modern many- and multi-core CPU-based computers. PhiMC is based on the well verified dose planning method (DPM). We could demonstrate that PhiMC delivers dose distributions which are in excellent agreement with DPM. The multi-core implementation of PhiMC scales well between different computer architectures and achieves a speed-up of up to 37× compared to the original DPM code executed on a modern system. Furthermore, we could show that our CPU-based implementation on a modern workstation is between 1.25× and 1.95× faster than a well-known GPU implementation of the same simulation method on a NVIDIA Tesla C2050. Since CPUs work on several hundreds of GB of RAM, the typical GPU memory limitation does not apply to our implementation and high resolution clinical plans can be calculated.
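The scaling argument rests on the fact that independent particle histories parallelize trivially across CPU cores without GPU memory limits; a toy sketch of that batching pattern (a 1-D exponential-attenuation stand-in, not the DPM physics):

```python
import numpy as np
from multiprocessing import Pool

def simulate_batch(args):
    """Toy batch of photon histories: exponential free paths through a
    1-D slab, depositing energy in 64 voxels (a stand-in for a full MC
    dose engine)."""
    n_histories, seed = args
    rng = np.random.default_rng(seed)
    dose = np.zeros(64)
    depth = rng.exponential(scale=8.0, size=n_histories)  # depth, in voxels
    np.add.at(dose, np.clip(depth.astype(int), 0, 63), 1.0)
    return dose

if __name__ == "__main__":
    # Independent histories mean batches can run on separate cores and
    # simply be summed; each batch gets its own random seed.
    with Pool() as pool:
        batches = pool.map(simulate_batch, [(250_000, s) for s in range(8)])
    dose = np.sum(batches, axis=0)
```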
Application of Probabilistic Analysis to Aircraft Impact Dynamics
NASA Technical Reports Server (NTRS)
Lyle, Karen H.; Padula, Sharon L.; Stockwell, Alan E.
2003-01-01
Full-scale aircraft crash simulations performed with nonlinear, transient dynamic, finite element codes can incorporate structural complexities such as: geometrically accurate models; human occupant models; and advanced material models to include nonlinear stress-strain behaviors, laminated composites, and material failure. Validation of these crash simulations is difficult due to a lack of sufficient information to adequately determine the uncertainty in the experimental data and the appropriateness of modeling assumptions. This paper evaluates probabilistic approaches to quantify the uncertainty in the simulated responses. Several criteria are used to determine that a response surface method is the most appropriate probabilistic approach. The work is extended to compare optimization results with and without probabilistic constraints.
Electrical resistivity measurements on fragile organic single crystals in the diamond anvil cell
NASA Astrophysics Data System (ADS)
Adachi, T.; Tanaka, H.; Kobayashi, H.; Miyazaki, T.
2001-05-01
A method of sample assembly for four-probe resistivity measurements on fragile organic single crystals using a diamond anvil cell is presented. A procedure to keep insulation between the metal gasket and four leads of thin gold wires bonded to the sample crystal by gold paint is described in detail. The resistivity measurements performed on a single crystal of an organic semiconductor and that of neutral molecules up to 15 GPa and down to 4.2 K showed that this new procedure of four-probe diamond anvil resistivity measurements enables us to obtain sufficiently accurate resistivity data of organic crystals.
Dijkstra, Baukje; Zijlstra, Wiebren; Scherder, Erik; Kamsma, Yvo
2008-07-01
The aim of this study was to examine if walking periods and number of steps can be accurately detected by a single small body-fixed device in older adults and patients with Parkinson's disease (PD). Results of an accelerometry-based method (DynaPort MicroMod) and a pedometer (Yamax Digi-Walker SW-200) worn on each hip were evaluated against video observation. Twenty older adults and 32 PD patients walked straight-line trajectories at different speeds, of different lengths and while doing secondary tasks in an indoor hallway. Accuracy of the instruments was expressed as absolute percentage error (older adults versus PD patients). Based on the video observation, a total of 236.8 min of gait duration and 24,713 steps were assessed. The DynaPort method predominantly overestimated gait duration (10.7 versus 11.1%) and underestimated the number of steps (7.4 versus 6.9%). Accuracy decreased significantly as walking distance decreased. The number of steps was also mainly underestimated by the pedometers, the left Yamax (6.8 versus 11.1%) being more accurate than the right Yamax (11.1 versus 16.3%). Step counting of both pedometers was significantly less accurate for short trajectories (3 or 5 m) and as walking pace decreased. It is concluded that the Yamax pedometer can be reliably used for this study population when walking at sufficiently high gait speeds (>1.0 m/s). The accelerometry-based method is less speed-dependent and proved to be more appropriate in the PD patients for walking trajectories of 5 m or more.
Estimating ice particle scattering properties using a modified Rayleigh-Gans approximation
NASA Astrophysics Data System (ADS)
Lu, Yinghui; Clothiaux, Eugene E.; Aydin, Kültegin; Verlinde, Johannes
2014-09-01
A modification to the Rayleigh-Gans approximation is made that includes self-interactions between different parts of an ice crystal, which both improves the accuracy of the Rayleigh-Gans approximation and extends its applicability to polarization-dependent parameters. This modified Rayleigh-Gans approximation is both efficient and reasonably accurate for particles with at least one dimension much smaller than the wavelength (e.g., dendrites at millimeter or longer wavelengths) or particles with sparse structures (e.g., low-density aggregates). Relative to the Generalized Multiparticle Mie method, backscattering reflectivities at horizontal transmit and receive polarization (HH) (ZHH) computed with this modified Rayleigh-Gans approach are about 3 dB more accurate than with the traditional Rayleigh-Gans approximation. For realistic particle size distributions and pristine ice crystals the modified Rayleigh-Gans approach agrees with the Generalized Multiparticle Mie method to within 0.5 dB for ZHH whereas for the polarimetric radar observables differential reflectivity (ZDR) and specific differential phase (KDP) agreement is generally within 0.7 dB and 13%, respectively. Compared to the A-DDA code, the modified Rayleigh-Gans approximation is several to tens of times faster if scattering properties for different incident angles and particle orientations are calculated. These accuracies and computational efficiencies are sufficient to make this modified Rayleigh-Gans approach a viable alternative to the Rayleigh-Gans approximation in some applications such as millimeter to centimeter wavelength radars and to other methods that assume simpler, less accurate shapes for ice crystals. This method should not be used on materials with dielectric properties much different from ice and on compact particles much larger than the wavelength.
NASA Astrophysics Data System (ADS)
Marksteiner, Quinn R.; Treiman, Michael B.; Chen, Ching-Fong; Haynes, William B.; Reiten, M. T.; Dalmas, Dale; Pulliam, Elias
2017-06-01
A resonant cavity method is presented which can measure loss tangents and dielectric constants for materials with dielectric constant from 150 to 10 000 and above. This practical and accurate technique is demonstrated by measuring barium strontium zirconium titanate bulk ferroelectric ceramic blocks. Above the Curie temperature, in the paraelectric state, barium strontium zirconium titanate has a sufficiently low loss that a series of resonant modes are supported in the cavity. At each mode frequency, the dielectric constant and loss tangent are obtained. The results are consistent with low frequency measurements and computer simulations. A quick method of analyzing the raw data using the 2D static electromagnetic modeling code SuperFish and an estimate of uncertainties are presented.
Smeraglia, John; Silva, John-Paul; Jones, Kieran
2017-08-01
In order to evaluate placental transfer of certolizumab pegol (CZP), a more sensitive and selective bioanalytical assay was required to accurately measure low CZP concentrations in infant and umbilical cord blood. Results & methodology: A new electrochemiluminescence immunoassay was developed to measure CZP levels in human plasma. Validation experiments demonstrated improved selectivity (no matrix interference observed) and a detection range of 0.032-5.0 μg/ml. Accuracy and precision met acceptance criteria (mean total error ≤20.8%). Dilution linearity and sample stability were acceptable and sufficient to support the method. The electrochemiluminescence immunoassay was validated for measuring low CZP concentrations in human plasma. The method demonstrated a more than tenfold increase in sensitivity compared with previous assays, and improved selectivity for intact CZP.
Platelet Counts in Insoluble Platelet-Rich Fibrin Clots: A Direct Method for Accurate Determination
Kitamura, Yutaka; Watanabe, Taisuke; Nakamura, Masayuki; Isobe, Kazushige; Kawabata, Hideo; Uematsu, Kohya; Okuda, Kazuhiro; Nakata, Koh; Tanaka, Takaaki; Kawase, Tomoyuki
2018-01-01
Platelet-rich fibrin (PRF) clots have been used in regenerative dentistry most often, with the assumption that growth factor levels are concentrated in proportion to the platelet concentration. Platelet counts in PRF are generally determined indirectly by platelet counting in other liquid fractions. This study shows a method for direct estimation of platelet counts in PRF. To validate this method by determination of the recovery rate, whole-blood samples were obtained with an anticoagulant from healthy donors, and platelet-rich plasma (PRP) fractions were clotted with CaCl2 by centrifugation and digested with tissue-plasminogen activator. Platelet counts were estimated before clotting and after digestion using an automatic hemocytometer. The method was then tested on PRF clots. The quality of platelets was examined by scanning electron microscopy and flow cytometry. In PRP-derived fibrin matrices, the recovery rate of platelets and white blood cells was 91.6 and 74.6%, respectively, after 24 h of digestion. In PRF clots associated with small and large red thrombi, platelet counts were 92.6 and 67.2% of the respective total platelet counts. These findings suggest that our direct method is sufficient for estimating the number of platelets trapped in an insoluble fibrin matrix and for determining that platelets are distributed in PRF clots and red thrombi roughly in proportion to their individual volumes. Therefore, we propose this direct digestion method for more accurate estimation of platelet counts in most types of platelet-enriched fibrin matrix. PMID:29450197
A new method of measuring lens refractive index.
Buckley, John
2008-07-01
A new clinical method for determining the refractive index of a lens is described. By measuring lens power in air and then immersing the lens in a liquid of known refractive index (n), it is possible to calculate the refractive index of the lens material (μ) by using the formula: μ = (n·K_{v,1} − K_{v,n}) / (K_{v,1} − K_{v,n}), where K_{v,1} is the lens power determined in air and K_{v,n} is the lens power determined in the immersion liquid. The only materials required are a digital lensmeter and a wet cell for holding the lens in a liquid. The theoretical basis of the method is explained and a description given of its limitations. The optimal method of measuring different types of lenses is discussed. Sources of error include the thin lens theory behind the method, the use of a wet cell and the digital lensmeter. The theoretical accuracy of the results is given as 0.02, but 0.01 is usually achieved. In all cases, measuring the front vertex power (FVP) yields a more accurate estimate of the refractive index of a lens than measuring the back vertex power (BVP). The author found half the lenses measured attained values within 0.005 of the known material index. This method is usually sufficiently accurate to isolate which lens material has been used in manufacturing and to permit manufacturing spectacles that mimic the appearance of an earlier pair. Some suggestions for further refinement are given.
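A worked example with invented numbers: for a lens measuring K_{v,1} = +5.00 D in air and K_{v,n} = +1.67 D in water (n = 1.333),

\[
\mu = \frac{n K_{v,1} - K_{v,n}}{K_{v,1} - K_{v,n}}
    = \frac{1.333 \times 5.00 - 1.67}{5.00 - 1.67}
    = \frac{4.995}{3.33} \approx 1.50,
\]

which would point to a mid-index material. The formula follows from the thin-lens relations K_{v,1} = (μ − 1)C and K_{v,n} = (μ − n)C for the same surface-curvature term C.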
Flow dichroism as a reliable method to measure the hydrodynamic aspect ratio of gold nanoparticles.
Reddy, Naveen Krishna; Pérez-Juste, Jorge; Pastoriza-Santos, Isabel; Lang, Peter R; Dhont, Jan K G; Liz-Marzán, Luis M; Vermant, Jan
2011-06-28
Particle shape plays an important role in controlling the optical, magnetic, and mechanical properties of nanoparticle suspensions as well as nanocomposites. However, characterizing the size, shape, and the associated polydispersity of nanoparticles is not straightforward. Electron microscopy provides an accurate measurement of the geometric properties, but sample preparation can be laborious, and to obtain statistically relevant data many particles need to be analyzed separately. Moreover, when the particles are suspended in a fluid, it is important to measure their hydrodynamic properties, as they determine aspects such as diffusion and the rheological behavior of suspensions. Methods that evaluate the dynamics of nanoparticles such as light scattering and rheo-optical methods accurately provide these hydrodynamic properties, but do necessitate a sufficient optical response. In the present work, three different methods for characterizing nonspherical gold nanoparticles are critically compared, especially taking into account the complex optical response of these particles. The different methods are evaluated in terms of their versatility to asses size, shape, and polydispersity. Among these, the rheo-optical technique is shown to be the most reliable method to obtain hydrodynamic aspect ratio and polydispersity for nonspherical gold nanoparticles for two reasons. First, the use of the evolution of the orientation angle makes effects of polydispersity less important. Second, the use of an external flow field gives a mathematically more robust relation between particle motion and aspect ratio, especially for particles with relatively small aspect ratios.
The quantification of body fluid allostasis during exercise.
Tam, Nicholas; Noakes, Timothy D
2013-12-01
The prescription of an optimal fluid intake during exercise has been a controversial subject in sports science for at least the past decade. Only recently have guidelines evolved from 'blanket' prescriptions to more individualised recommendations. Currently the American College of Sports Medicine advises that sufficient fluid should be ingested to ensure that body mass (BM) loss during exercise does not exceed 2% of starting BM, so that exercise-associated medical complications will be avoided. Historically, BM changes have been used as a surrogate for fluid loss during exercise. It would be helpful to accurately determine fluid shifts in the body in order to provide physiologically appropriate fluid intake advice. The measurement of total body water via D2O is the most accurate measure to detect changes in body fluid content; other methods, including bioelectrical impedance, are less accurate. Thus, the aim of this review is to convey the current understanding of body fluid allostasis during exercise when drinking according to the dictates of thirst (ad libitum). This review examines the basis for fluid intake prescription with the use of BM, the concepts of 'voluntary and involuntary dehydration' and the major routes by which the body gains and loses fluid during exercise.
Choi, Jaesung P.; Foley, Matthew; Zhou, Zinan; Wong, Weng-Yew; Gokoolparsadh, Naveena; Arthur, J. Simon C.; Li, Dean Y.; Zheng, Xiangjian
2016-01-01
Mutations in the CCM1 (aka KRIT1), CCM2, or CCM3 (aka PDCD10) genes cause cerebral cavernous malformation in humans. Mouse models of CCM disease have been established by deleting Ccm genes in postnatal animals. These mouse models provide invaluable tools to investigate molecular mechanisms and therapeutic approaches for CCM disease. However, the full value of these animal models is limited by the lack of an accurate and quantitative method to assess lesion burden and progression. In the present study we have established a refined and detailed contrast-enhanced X-ray micro-CT method to measure CCM lesion burden in mouse brains. As this study utilized a voxel dimension of 9.5 μm (leading to a minimum feature size of approximately 25 μm), it is sufficient to measure CCM lesion volume and number globally and accurately, and to provide high-resolution 3-D mapping of CCM lesions in mouse brains. Using this method, we found loss of Ccm1 or Ccm2 in neonatal endothelium confers CCM lesions in the mouse hindbrain with similar total volume and number. This quantitative approach also demonstrated a rescue of CCM lesions with simultaneous deletion of one allele of Mekk3. This method would enhance the value of the established mouse models to study the molecular basis and potential therapies for CCM and other cerebrovascular diseases. PMID:27513872
27 CFR 41.234 - Corporate documents.
Code of Federal Regulations, 2010 CFR
2010-04-01
....231 a true copy of the corporate charter or a certificate of corporate existence or incorporation... currently complete and accurate, a written statement to that effect will be sufficient for the purpose of...
27 CFR 41.234 - Corporate documents.
Code of Federal Regulations, 2011 CFR
2011-04-01
....231 a true copy of the corporate charter or a certificate of corporate existence or incorporation... currently complete and accurate, a written statement to that effect will be sufficient for the purpose of...
Relationship of physiography and snow area to stream discharge. [Kings River Watershed, California]
NASA Technical Reports Server (NTRS)
Mccuen, R. H. (Principal Investigator)
1979-01-01
The author has identified the following significant results. A comparison of snowmelt runoff models shows that the accuracy of the Tangborn model and regression models is greater if the test data falls within the range of calibration than if the test data lies outside the range of calibration data. The regression models are significantly more accurate for forecasts of 60 days or more than for shorter prediction periods. The Tangborn model is more accurate for forecasts of 90 days or more than for shorter prediction periods. The Martinec model is more accurate for forecasts of one or two days than for periods of 3,5,10, or 15 days. Accuracy of the long-term models seems to be independent of forecast data. The sufficiency of the calibration data base is a function not only of the number of years of record but also of the accuracy with which the calibration years represent the total population of data years. Twelve years appears to be a sufficient length of record for each of the models considered, as long as the twelve years are representative of the population.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marzouk, Youssef; Fast P.; Kraus, M.
2006-01-01
Terrorist attacks using an aerosolized pathogen preparation have gained credibility as a national security concern after the anthrax attacks of 2001. The ability to characterize such attacks, i.e., to estimate the number of people infected, the time of infection, and the average dose received, is important when planning a medical response. We address this question of characterization by formulating a Bayesian inverse problem predicated on a short time-series of diagnosed patients exhibiting symptoms. To be of relevance to response planning, we limit ourselves to 3-5 days of data. In tests performed with anthrax as the pathogen, we find that these data are usually sufficient, especially if the model of the outbreak used in the inverse problem is an accurate one. In some cases the scarcity of data may initially support outbreak characterizations at odds with the true one, but with sufficient data the correct inferences are recovered; in other words, the inverse problem posed and its solution methodology are consistent. We also explore the effect of model error, i.e., situations for which the model used in the inverse problem is only a partially accurate representation of the outbreak; here, the model predictions and the observations differ by more than a random noise. We find that while there is a consistent discrepancy between the inferred and the true characterizations, they are also close enough to be of relevance when planning a response.
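A heavily simplified sketch of this kind of Bayesian inversion (a grid posterior over the number infected N and the release time t0, with an assumed log-normal incubation distribution; all numbers invented, and the paper's dose-dependent model is not reproduced):

```python
import numpy as np
from scipy import stats

# Hypothetical observable: patients presenting with symptoms on each of
# the first few days after an (unknown) release.
observed = np.array([2, 9, 23])                 # days 1..3, counts invented

# Coarse grid over the unknowns: infected population N and release day t0.
N_grid = np.arange(10, 500, 10)
t0_grid = np.linspace(-3.0, 0.0, 31)            # days relative to first case
incubation = stats.lognorm(s=0.5, scale=4.0)    # assumed incubation model

log_post = np.zeros((len(N_grid), len(t0_grid)))
days = np.arange(1, len(observed) + 1)
for i, N in enumerate(N_grid):
    for j, t0 in enumerate(t0_grid):
        # Probability an infected person first shows symptoms on each day.
        p = incubation.cdf(days - t0) - incubation.cdf(days - 1 - t0)
        log_post[i, j] = stats.binom.logpmf(observed, N, p).sum()

# Posterior mode as a point estimate of (N, t0); flat priors assumed.
i, j = np.unravel_index(np.argmax(log_post), log_post.shape)
print(f"N ~ {N_grid[i]}, t0 ~ {t0_grid[j]:.1f} days")
```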
Predicting speech intelligibility in noise for hearing-critical jobs
NASA Astrophysics Data System (ADS)
Soli, Sigfrid D.; Laroche, Chantal; Giguere, Christian
2003-10-01
Many jobs require auditory abilities such as speech communication, sound localization, and sound detection. An employee for whom these abilities are impaired may constitute a safety risk for himself or herself, for fellow workers, and possibly for the general public. A number of methods have been used to predict these abilities from diagnostic measures of hearing (e.g., the pure-tone audiogram); however, these methods have not proved to be sufficiently accurate for predicting performance in the noise environments where hearing-critical jobs are performed. We have taken an alternative and potentially more accurate approach. A direct measure of speech intelligibility in noise, the Hearing in Noise Test (HINT), is instead used to screen individuals. The screening criteria are validated by establishing the empirical relationship between the HINT score and the auditory abilities of the individual, as measured in laboratory recreations of real-world workplace noise environments. The psychometric properties of the HINT enable screening of individuals with an acceptable amount of error. In this presentation, we will describe the predictive model and report the results of field measurements and laboratory studies used to provide empirical validation of the model. [Work supported by Fisheries and Oceans Canada.]
Assessing the value of different data sets and modeling schemes for flow and transport simulations
NASA Astrophysics Data System (ADS)
Hyndman, D. W.; Dogan, M.; Van Dam, R. L.; Meerschaert, M. M.; Butler, J. J., Jr.; Benson, D. A.
2014-12-01
Accurate modeling of contaminant transport has been hampered by an inability to characterize subsurface flow and transport properties at a sufficiently high resolution. However mathematical extrapolation combined with different measurement methods can provide realistic three-dimensional fields of highly heterogeneous hydraulic conductivity (K). This study demonstrates an approach to evaluate the time, cost, and efficiency of subsurface K characterization. We quantify the value of different data sets at the highly heterogeneous Macro Dispersion Experiment (MADE) Site in Mississippi, which is a flagship test site that has been used for several macro- and small-scale tracer tests that revealed non-Gaussian tracer behavior. Tracer data collected at the site are compared to models that are based on different types and resolution of geophysical and hydrologic data. We present a cost-benefit analysis of several techniques including: 1) flowmeter K data, 2) direct-push K data, 3) ground penetrating radar, and 4) two stochastic methods to generate K fields. This research provides an initial assessment of the level of data necessary to accurately simulate solute transport with the traditional advection dispersion equation; it also provides a basis to design lower cost and more efficient remediation schemes at highly heterogeneous sites.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khrapak, Sergey A.; Chaudhuri, Manis
We put forward an approximate method to locate the fluid-solid (freezing) phase transition in systems of classical particles interacting via a wide range of Lennard-Jones-type potentials. This method is based on the constancy of the properly normalized second derivative of the interaction potential (freezing indicator) along the freezing curve. As demonstrated recently, it yields remarkably good agreement with previous numerical simulation studies of the conventional 12-6 Lennard-Jones (LJ) fluid [S. A. Khrapak, M. Chaudhuri, G. E. Morfill, Phys. Rev. B 134, 052101 (2010)]. In this paper, we test this approach using a wide range of LJ-type potentials, including LJ n-6 and exp-6 models, and find that it remains sufficiently accurate and reliable in reproducing the corresponding freezing curves, down to the triple-point temperatures. One possible application of the method, estimation of the freezing conditions in complex (dusty) plasmas with 'tunable' interactions, is briefly discussed.
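A sketch of the ingredients (the generalized LJ n-6 potential and a numerical second derivative); how that curvature is normalized and where it is evaluated along the freezing curve follows the cited work and is not reproduced here:

```python
import numpy as np

def lj_n6(r, n=12, eps=1.0, sigma=1.0):
    """Generalized Lennard-Jones n-6 pair potential; n = 12 recovers the
    conventional 4*eps*((sigma/r)**12 - (sigma/r)**6) form."""
    C = (n / (n - 6)) * (n / 6) ** (6 / (n - 6))  # keeps the well depth at eps
    return C * eps * ((sigma / r) ** n - (sigma / r) ** 6)

def d2u_dr2(u, r, h=1e-4, **kw):
    """Second derivative of a pair potential by central differences."""
    return (u(r + h, **kw) - 2 * u(r, **kw) + u(r - h, **kw)) / h**2

# Curvature of the 12-6 and 10-6 variants at a trial interparticle spacing.
print(d2u_dr2(lj_n6, 1.2, n=12), d2u_dr2(lj_n6, 1.2, n=10))
```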
Smart phone: a popular device supports amylase activity assay in fisheries research.
Thongprajukaew, Karun; Choodum, Aree; Sa-E, Barunee; Hayee, Ummah
2014-11-15
Colourimetric determinations of amylase activity were developed based on a standard dinitrosalicylic acid (DNS) staining method, using maltose as the analyte. Intensities and absorbances of red, green and blue (RGB) were obtained with iPhone imaging and Adobe Photoshop image analysis. The correlation between green intensity and analyte concentration was highly significant, and the developed method showed excellent analytical performance. The common iPhone has sufficient imaging ability for accurate quantification of maltose concentrations. Detection limits, sensitivity and linearity were comparable to a spectrophotometric method, but provided better inter-day precision. In quantifying amylase specific activity from a commercial source (P>0.02) and fish samples (P>0.05), differences compared with spectrophotometric measurements were not significant. We have demonstrated that iPhone imaging with image analysis in Adobe Photoshop has potential for field and laboratory studies of amylase. Copyright © 2014 Elsevier Ltd. All rights reserved.
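A minimal sketch of the calibration step (file names and concentrations invented; Pillow is used here as a generic stand-in for the iPhone/Photoshop workflow):

```python
import numpy as np
from PIL import Image

def green_intensity(path):
    """Mean green-channel intensity of a photographed assay well."""
    return np.asarray(Image.open(path).convert("RGB"))[:, :, 1].mean()

# Hypothetical calibration set: images of DNS-stained maltose standards.
conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])   # mg/mL, invented
green = np.array([green_intensity(f"std_{c}.jpg") for c in conc])

# Linear calibration of green intensity against concentration, then
# inversion to quantify an unknown sample from its photograph.
slope, intercept = np.polyfit(conc, green, 1)
unknown = (green_intensity("sample.jpg") - intercept) / slope
```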
Bardin, Jonathan C.; Fins, Joseph J.; Katz, Douglas I.; Hersh, Jennifer; Heier, Linda A.; Tabelow, Karsten; Dyke, Jonathan P.; Ballon, Douglas J.; Schiff, Nicholas D.
2011-01-01
Functional neuroimaging methods hold promise for the identification of cognitive function and communication capacity in some severely brain-injured patients who may not retain sufficient motor function to demonstrate their abilities. We studied seven severely brain-injured patients and a control group of 14 subjects using a novel hierarchical functional magnetic resonance imaging assessment utilizing mental imagery responses. Whereas the control group showed consistent and accurate (for communication) blood-oxygen-level-dependent responses without exception, the brain-injured subjects showed a wide variation in the correlation of blood-oxygen-level-dependent responses and overt behavioural responses. Specifically, the brain-injured subjects dissociated bedside and functional magnetic resonance imaging-based command following and communication capabilities. These observations reveal significant challenges in developing validated functional magnetic resonance imaging-based methods for clinical use and raise interesting questions about underlying brain function assayed using these methods in brain-injured subjects. PMID:21354974
Dense motion estimation using regularization constraints on local parametric models.
Patras, Ioannis; Worring, Marcel; van den Boomgaard, Rein
2004-11-01
This paper presents a method for dense optical flow estimation in which the motion field within patches that result from an initial intensity segmentation is parametrized with models of different order. We propose a novel formulation which introduces regularization constraints between the model parameters of neighboring patches. In this way, we provide the additional constraints for very small patches and for patches whose intensity variation cannot sufficiently constrain the estimation of their motion parameters. In order to preserve motion discontinuities, we use robust functions as a regularization mean. We adopt a three-frame approach and control the balance between the backward and forward constraints by a real-valued direction field on which regularization constraints are applied. An iterative deterministic relaxation method is employed in order to solve the corresponding optimization problem. Experimental results show that the proposed method deals successfully with motions large in magnitude, motion discontinuities, and produces accurate piecewise-smooth motion fields.
Component model reduction via the projection and assembly method
NASA Technical Reports Server (NTRS)
Bernard, Douglas E.
1989-01-01
The problem of acquiring a simple but sufficiently accurate model of a dynamic system is made more difficult when the dynamic system of interest is a multibody system comprised of several components. A low order system model may be created by reducing the order of the component models and making use of various available multibody dynamics programs to assemble them into a system model. The difficulty is in choosing the reduced order component models to meet system level requirements. The projection and assembly method, proposed originally by Eke, solves this difficulty by forming the full order system model, performing model reduction at the system level using system level requirements, and then projecting the desired modes onto the components for component level model reduction. The projection and assembly method is analyzed to show the conditions under which the desired modes are captured exactly, to the numerical precision of the algorithm.
Wang, Liansheng; Li, Shusheng; Chen, Rongzhen; Liu, Sze-Yu; Chen, Jyh-Cheng
2017-04-01
Accurate classification of different anatomical structures of teeth from medical images provides crucial information for the stress analysis in dentistry. Usually, the anatomical structures of teeth are manually labeled by experienced clinical doctors, which is time consuming. However, automatic segmentation and classification is a challenging task because the anatomical structures and surroundings of the tooth in medical images are rather complex. Therefore, in this paper, we propose an effective framework which is designed to segment the tooth with a Selective Binary and Gaussian Filtering Regularized Level Set (GFRLS) method improved by fully utilizing 3 dimensional (3D) information, and classify the tooth by employing unsupervised learning i.e., k-means++ method. In order to evaluate the proposed method, the experiments are conducted on the sufficient and extensive datasets of mandibular molars. The experimental results show that our method can achieve higher accuracy and robustness compared to other three clustering methods. Copyright © 2016 Elsevier Ltd. All rights reserved.
Landsgesell, Jonas; Holm, Christian; Smiatek, Jens
2017-02-14
We present a novel method for the study of weak polyelectrolytes and general acid-base reactions in molecular dynamics and Monte Carlo simulations. The approach combines the advantages of the reaction ensemble and the Wang-Landau sampling method. Deprotonation and protonation reactions are simulated explicitly with the help of the reaction ensemble method, while the accurate sampling of the corresponding phase space is achieved by the Wang-Landau approach. The combination of both techniques provides a sufficient statistical accuracy such that meaningful estimates for the density of states and the partition sum can be obtained. With regard to these estimates, several thermodynamic observables like the heat capacity or reaction free energies can be calculated. We demonstrate that the computation times for the calculation of titration curves with a high statistical accuracy can be significantly decreased when compared to the original reaction ensemble method. The applicability of our approach is validated by the study of weak polyelectrolytes and their thermodynamic properties.
A method for measuring total thiaminase activity in fish tissues
Zajicek, James L.; Tillitt, Donald E.; Honeyfield, Dale C.; Brown, Scott B.; Fitzsimons, John D.
2005-01-01
An accurate, quantitative, and rapid method for the measurement of thiaminase activity in fish samples is required to provide sufficient information to characterize the role of dietary thiaminase in the onset of thiamine deficiency in Great Lakes salmonines. A radiometric method that uses 14C-thiamine was optimized for substrate and co-substrate (nicotinic acid) concentrations, incubation time, and sample dilution. Total thiaminase activity was successfully determined in extracts of selected Great Lakes fishes and invertebrates. Samples included whole-body and selected tissues of forage fishes. Positive control material prepared from frozen alewives Alosa pseudoharengus collected in Lake Michigan enhanced the development and application of the method. The method allowed improved discrimination of thiaminolytic activity among forage fish species and their tissues. The temperature dependence of the thiaminase activity observed in crude extracts of Lake Michigan alewives followed a Q10 = 2 relationship for the 1-37 °C temperature range, which is consistent with the bacterial-derived thiaminase I protein. © Copyright by the American Fisheries Society 2005.
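For reference, the Q10 coefficient cited here follows the standard relation

\[
Q_{10} = \left( \frac{k_{T_2}}{k_{T_1}} \right)^{10/(T_2 - T_1)},
\]

so Q10 = 2 means the measured activity roughly doubles for every 10 °C increase across the 1-37 °C range.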
Terry, Leann; Kelley, Ken
2012-11-01
Composite measures play an important role in psychology and related disciplines. Composite measures almost always have error. Correspondingly, it is important to understand the reliability of the scores from any particular composite measure. However, the point estimates of the reliability of composite measures are fallible and thus all such point estimates should be accompanied by a confidence interval. When confidence intervals are wide, there is much uncertainty in the population value of the reliability coefficient. Given the importance of reporting confidence intervals for estimates of reliability, coupled with the undesirability of wide confidence intervals, we develop methods that allow researchers to plan sample size in order to obtain narrow confidence intervals for population reliability coefficients. We first discuss composite reliability coefficients and then provide a discussion on confidence interval formation for the corresponding population value. Using the accuracy in parameter estimation approach, we develop two methods to obtain accurate estimates of reliability by planning sample size. The first method provides a way to plan sample size so that the expected confidence interval width for the population reliability coefficient is sufficiently narrow. The second method ensures that the confidence interval width will be sufficiently narrow with some desired degree of assurance (e.g., 99% assurance that the 95% confidence interval for the population reliability coefficient will be less than W units wide). The effectiveness of our methods was verified with Monte Carlo simulation studies. We demonstrate how to easily implement the methods with easy-to-use and freely available software. ©2011 The British Psychological Society.
NASA Astrophysics Data System (ADS)
Varady, Mark; Mantooth, Brent; Pearl, Thomas; Willis, Matthew
2014-03-01
A continuum model of reactive decontamination in absorbing polymeric thin film substrates exposed to the chemical warfare agent O-ethyl S-[2-(diisopropylamino)ethyl] methylphosphonothioate (known as VX) was developed to assess the performance of various decontaminants. Experiments were performed in conjunction with an inverse analysis method to obtain the necessary model parameters. The experiments involved contaminating a substrate with a fixed VX exposure, applying a decontaminant, followed by a time-resolved, liquid phase extraction of the absorbing substrate to measure the residual contaminant by chromatography. Decontamination model parameters were uniquely determined using the Levenberg-Marquardt nonlinear least squares fitting technique to best fit the experimental time evolution of extracted mass. The model was implemented numerically in both a 2D axisymmetric finite element program and a 1D finite difference code, and it was found that the more computationally efficient 1D implementation was sufficiently accurate. The resulting decontamination model provides an accurate quantification of contaminant concentration profile in the material, which is necessary to assess exposure hazards.
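A toy version of the inverse-analysis step (a Levenberg-Marquardt fit of forward-model parameters to time-resolved extraction data; the model form and all numbers are invented and stand in for the paper's 1D/2D transport model):

```python
import numpy as np
from scipy.optimize import least_squares

def extracted_mass(t, m0, k):
    """Toy forward model: cumulative contaminant mass extracted from the
    film approaches the residual amount m0 at a first-order rate k."""
    return m0 * (1.0 - np.exp(-k * t))

t_obs = np.array([1.0, 5.0, 15.0, 30.0, 60.0])   # minutes, invented
m_obs = np.array([1.9, 6.1, 8.4, 9.0, 9.1])      # extracted mass, invented

# Levenberg-Marquardt least squares on the residuals, as in the paper's
# parameter-identification procedure (much simpler model here).
fit = least_squares(lambda p: extracted_mass(t_obs, *p) - m_obs,
                    x0=[10.0, 0.1], method="lm")
m0, k = fit.x
```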
Nonintrusive 3D reconstruction of human bone models to simulate their bio-mechanical response
NASA Astrophysics Data System (ADS)
Alexander, Tsouknidas; Antonis, Lontos; Savvas, Savvakis; Nikolaos, Michailidis
2012-06-01
3D finite element models representing functional parts of the human skeletal system have been repeatedly introduced over the last years to simulate the biomechanical response of anatomical characteristics or investigate surgical treatment. The reconstruction of geometrically accurate FEM models poses a significant challenge for engineers and physicians, as recent advances in tissue engineering dictate highly customized implants, while facilitating the production of alloplast materials that are employed to restore, replace or supplement the function of human tissue. The premise of every accurate reconstruction method is to capture the precise geometrical characteristics of the examined tissue, and thus the selection of a sufficient imaging technique is of the utmost importance. This paper reviews existing and potential applications related to the current state-of-the-art of medical imaging and simulation techniques. The procedures are examined by introducing their concepts, strengths and limitations, while the authors also present part of their recent activities in these areas.
Evaluation of indirect impedance for measuring microbial growth in complex food matrices.
Johnson, N; Chang, Z; Bravo Almeida, C; Michel, M; Iversen, C; Callanan, M
2014-09-01
The suitability of indirect impedance to accurately measure microbial growth in real food matrices was investigated. A variety of semi-solid and liquid food products were inoculated with Bacillus cereus, Listeria monocytogenes, Staphylococcus aureus, Lactobacillus plantarum, Pseudomonas aeruginosa, Escherichia coli, Salmonella enteritidis, Candida tropicalis or Zygosaccharomyces rouxii, and CO2 production was monitored using a conductimetric (Don Whitley R.A.B.I.T.) system. The majority (80%) of food and microbe combinations produced a detectable growth signal. The linearity of conductance responses in selected food products was investigated and a good correlation (R(2) ≥ 0.84) was observed between inoculum levels and times to detection. Specific growth rate estimations from the data were sufficiently accurate for predictive modeling in some cases. This initial evaluation of the suitability of indirect impedance to generate microbial growth data in complex food matrices indicates significant potential for the technology as an alternative to plating methods. Copyright © 2014 Elsevier Ltd. All rights reserved.
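The linearity check described above amounts to a simple regression. A minimal sketch, with invented numbers, of regressing time to detection against log inoculum level and reporting R²:

```python
# Sketch of the linearity analysis: time to detection (TTD) vs. log10
# inoculum. All values are illustrative, not from the study.
import numpy as np
from scipy.stats import linregress

log_inoculum = np.array([2, 3, 4, 5, 6])            # log10 CFU/mL (synthetic)
ttd_hours = np.array([14.1, 11.8, 9.7, 7.9, 5.8])   # detection times (synthetic)

res = linregress(log_inoculum, ttd_hours)
print(f"slope = {res.slope:.2f} h per log10, R^2 = {res.rvalue**2:.3f}")
```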
Image Geometric Corrections for a New EMCCD-based Dual Modular X-ray Imager
Qu, Bin; Huang, Ying; Wang, Weiyuan; Cartwright, Alexander N.; Titus, Albert H.; Bednarek, Daniel R.; Rudin, Stephen
2012-01-01
An EMCCD-based dual modular x-ray imager was recently designed and developed from the component level, providing a high dynamic range of 53 dB and an effective pixel size of 26 μm for angiography and fluoroscopy. The unique 2×1 array design efficiently increased the clinical field of view, and also can be readily expanded to an M×N array implementation. Due to the alignment mismatches between the EMCCD sensors and the fiber optic tapers in each module, the output images or video sequences result in a misaligned 2048×1024 digital display if uncorrected. In this paper, we present a method for correcting display registration using a custom-designed two-layer printed circuit board. This board was designed with grid lines to serve as the calibration pattern, and provides an accurate reference and sufficient contrast to enable proper display registration. Results show an accurate and fine stitching of the two outputs from the two modules. PMID:22254882
Fang, Wanping; Meinhardt, Lyndel W; Mischke, Sue; Bellato, Cláudia M; Motilal, Lambert; Zhang, Dapeng
2014-01-15
Cacao (Theobroma cacao L.), the source of cocoa, is an economically important tropical crop. One problem with the premium cacao market is contamination with off-types adulterating raw premium material. Accurate determination of the genetic identity of single cacao beans is essential for ensuring cocoa authentication. Using nanofluidic single nucleotide polymorphism (SNP) genotyping with 48 SNP markers, we generated SNP fingerprints for small quantities of DNA extracted from the seed coat of single cacao beans. On the basis of the SNP profiles, we identified an assumed adulterant variety, which was unambiguously distinguished from the authentic beans by multilocus matching. Assignment tests based on both Bayesian clustering analysis and allele frequency clearly separated all 30 authentic samples from the non-authentic samples. Distance-based principal coordinate analysis further supported these results. The nanofluidic SNP protocol, together with forensic statistical tools, is sufficiently robust to establish authentication and to verify gourmet cacao varieties. This method shows significant potential for practical application.
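The allele-frequency assignment test has a compact core: score a multilocus genotype against each candidate population's allele frequencies under Hardy-Weinberg and pick the highest log-likelihood. A minimal sketch with hypothetical frequencies (not the study's 48-marker panel):

```python
# Frequency-based assignment sketch: log-likelihood of a SNP genotype in
# each candidate population, assuming Hardy-Weinberg proportions.
import numpy as np

def genotype_loglik(genotype, freqs, eps=1e-6):
    """genotype: 0/1/2 reference-allele counts per SNP;
    freqs: reference-allele frequency per SNP in one population."""
    p = np.clip(freqs, eps, 1 - eps)
    ll = np.where(genotype == 2, 2 * np.log(p),
         np.where(genotype == 0, 2 * np.log(1 - p),
                  np.log(2 * p * (1 - p))))
    return ll.sum()

genotype = np.array([2, 1, 0, 2, 1])                       # one bean (synthetic)
populations = {"authentic": np.array([0.9, 0.6, 0.1, 0.8, 0.5]),
               "off-type":  np.array([0.3, 0.2, 0.7, 0.4, 0.9])}
scores = {name: genotype_loglik(genotype, f) for name, f in populations.items()}
print(max(scores, key=scores.get), scores)
```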
VizieR Online Data Catalog: Effective collision strengths of Si VII (Sossah+, 2014)
NASA Astrophysics Data System (ADS)
Sossah, A. M.; Tayal, S. S.
2017-08-01
The purpose of the present work is to calculate more accurate data for Si VII by using highly accurate target descriptions and by including a sufficient number of target states in the close-coupling expansion. We also included fine-structure effects in the close-coupling expansions to account for relativistic effects. We used the B-spline Breit-Pauli R-matrix (BSR) codes (Zatsarinny 2006CoPhC.174..273Z) in our scattering calculations. The present method utilizes term-dependent non-orthogonal orbital sets for the description of the target wave functions and scattering functions. The collisional and radiative parameters have been calculated for all forbidden and allowed transitions between the lowest 92 LSJ levels of 2s22p4, 2s2p5, 2p6, 2s22p33s, 2s22p33p, 2s22p33d, and 2s2p43s configurations of Si VII. (3 data files).
J-Refocused Coherence Transfer Spectroscopic Imaging at 7 T in Human Brain
Pan, J.W.; Avdievich, N.; Hetherington, H.P.
2013-01-01
Short echo spectroscopy is commonly used to minimize signal modulation due to J-evolution of the cerebral amino acids. However, short echo acquisitions suffer from high sensitivity to macromolecules, which makes accurate baseline determination difficult. In this report, we describe the implementation at 7 T of a double echo J-refocused coherence transfer sequence at an echo time (TE) of 34 msec to minimize J-modulation of amino acids while also decreasing interfering macromolecule signals. Simulation of the pulse sequence at 7 T shows excellent resolution of glutamate, glutamine, and N-acetyl aspartate. B1 sufficiency at 7 T for the double echo acquisition is achieved using a transceiver array with radiofrequency (RF) shimming. Using an alternate RF distribution to minimize receiver phase cancellation in the transceiver, accurate phase determination for the coherence transfer is achieved with rapid single-scan calibration. This method is demonstrated in spectroscopic imaging mode with n = 5 healthy volunteers, resulting in metabolite values consistent with the literature, and in a patient with epilepsy. PMID:20648684
Fulford, Janice M.
2003-01-01
A numerical computer model, Transient Inundation Model for Rivers -- 2 Dimensional (TrimR2D), that solves the two-dimensional depth-averaged flow equations is documented and discussed. The model uses a semi-implicit, semi-Lagrangian finite-difference method. It is a variant of the Trim model and has been used successfully in estuarine environments such as San Francisco Bay. The abilities of the model are documented for three scenarios: uniform depth flows, laboratory dam-break flows, and large-scale riverine flows. The model can start computations from a "dry" bed and converge to accurate solutions. Inflows are expressed as source terms, which limits the use of the model to sufficiently long reaches where the flow reaches equilibrium with the channel. The data sets used by the investigation demonstrate that the model accurately propagates flood waves through long river reaches and simulates dam breaks with abrupt water-surface changes.
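The semi-Lagrangian transport step at the heart of such schemes is easy to illustrate in 1D: trace each grid node back along the velocity field and interpolate the field at the departure point. The sketch below is a generic 1D example with linear interpolation, not TrimR2D's 2D semi-implicit solver.

```python
# One semi-Lagrangian advection step in 1D (linear interpolation).
# Unlike explicit Eulerian schemes, the step remains stable for CFL > 1;
# departure points are clamped at the domain edges for simplicity.
import numpy as np

def semi_lagrangian_step(q, u, dx, dt):
    """Advect field q by constant velocity u over one step dt."""
    n = q.size
    x = np.arange(n) * dx
    x_dep = x - u * dt                           # departure points
    i = np.clip(np.floor(x_dep / dx).astype(int), 0, n - 2)
    w = x_dep / dx - i                           # linear-interpolation weight
    return (1 - w) * q[i] + w * q[i + 1]

x = np.arange(200) * 0.1
q = np.exp(-0.5 * ((x - 5.0) / 0.5) ** 2)        # Gaussian hump
q_new = semi_lagrangian_step(q, u=1.0, dx=0.1, dt=0.05)
```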
Discrete ordinates solutions of nongray radiative transfer with diffusely reflecting walls
NASA Technical Reports Server (NTRS)
Menart, J. A.; Lee, Haeok S.; Kim, Tae-Kuk
1993-01-01
Nongray gas radiation in a plane parallel slab bounded by gray, diffusely reflecting walls is studied using the discrete ordinates method. The spectral equation of transfer is averaged over a narrow wavenumber interval preserving the spectral correlation effect. The governing equations are derived by considering the history of multiple reflections between the two reflecting walls. A closure approximation is applied so that only a finite number of reflections have to be explicitly included. The closure solutions express the physics of the problem to a very high degree and show relatively little error. Numerical solutions are obtained by applying a statistical narrow-band model for gas properties and a discrete ordinates code. The net radiative wall heat fluxes and the radiative source distributions are obtained for different temperature profiles. A zeroth-degree formulation, where no wall reflection is handled explicitly, is sufficient to predict the radiative transfer accurately for most cases considered, when compared with increasingly accurate solutions based on explicitly tracing a larger number of wall reflections without any closure approximation applied.
A hybrid model of laser energy deposition for multi-dimensional simulations of plasmas and metals
NASA Astrophysics Data System (ADS)
Basko, Mikhail M.; Tsygvintsev, Ilia P.
2017-05-01
The hybrid model of laser energy deposition is a combination of the geometrical-optics ray-tracing method with the one-dimensional (1D) solution of the Helmholtz wave equation in regions where the geometrical optics becomes inapplicable. We propose an improved version of this model, where a new physically consistent criterion for transition to the 1D wave optics is derived, and a special rescaling procedure of the wave-optics deposition profile is introduced. The model is intended for applications in large-scale two- and three-dimensional hydrodynamic codes. Comparison with exact 1D solutions demonstrates that it can fairly accurately reproduce the absorption fraction in both the s- and p-polarizations on arbitrarily steep density gradients, provided that a sufficiently accurate algorithm for gradient evaluation is used. The accuracy of the model becomes questionable for long laser pulses simulated on too fine grids, where the hydrodynamic self-focusing instability strongly manifests itself.
Influence of cross section variations on the structural behaviour of composite rotor blades
NASA Astrophysics Data System (ADS)
Rapp, Helmut; Woerndle, Rudolf
1991-09-01
A highly sophisticated structural analysis is required for helicopter rotor blades with nonhomogeneous cross sections made from nonisotropic material. Combinations of suitable analytical techniques with FEM-based techniques permit a cost effective and sufficiently accurate analysis of these complicated structures. It is determined that in general the 1D engineering theory of bending combined with 2D theories for determining the cross section properties is sufficient to describe the structural blade behavior.
Ab initio structure determination from prion nanocrystals at atomic resolution by MicroED
Sawaya, Michael R.; Rodriguez, Jose; Cascio, Duilio; ...
2016-09-19
Electrons, because of their strong interaction with matter, produce high-resolution diffraction patterns from tiny 3D crystals only a few hundred nanometers thick in a frozen-hydrated state. This discovery offers the prospect of facile structure determination of complex biological macromolecules, which cannot be coaxed to form crystals large enough for conventional crystallography or cannot easily be produced in sufficient quantities. Two potential obstacles stand in the way. The first is a phenomenon known as dynamical scattering, in which multiple scattering events scramble the recorded electron diffraction intensities so that they are no longer informative of the crystallized molecule. The second obstacle is the lack of a proven means of de novo phase determination, as is required if the molecule crystallized is insufficiently similar to one that has been previously determined. We show with four structures of the amyloid core of the Sup35 prion protein that, if the diffraction resolution is high enough, sufficiently accurate phases can be obtained by direct methods with the cryo-EM method microelectron diffraction (MicroED), just as in X-ray diffraction. The success of these four experiments dispels the concern that dynamical scattering is an obstacle to ab initio phasing by MicroED and suggests that structures of novel macromolecules can also be determined by direct methods.
Lucano, Elena; Liberti, Micaela; Mendoza, Gonzalo G.; Lloyd, Tom; Iacono, Maria Ida; Apollonio, Francesca; Wedan, Steve; Kainz, Wolfgang; Angelone, Leonardo M.
2016-01-01
Goal: This study aims at a systematic assessment of five computational models of a birdcage coil for magnetic resonance imaging (MRI) with respect to accuracy and computational cost. Methods: The models were implemented using the same geometrical model and numerical algorithm, but different driving methods (i.e., coil “defeaturing”). The defeatured models were labeled as: specific (S2), generic (G32, G16), and hybrid (H16, H16fr-forced). The accuracy of the models was evaluated using the “Symmetric Mean Absolute Percentage Error” (“SMAPE”), by comparison with measurements in terms of frequency response, as well as electric (||E⃗||) and magnetic (||B⃗||) field magnitude. Results: All the models computed the ||B⃗|| within 35% of the measurements; only the S2, G32, and H16 were able to accurately model the ||E⃗|| inside the phantom with a maximum SMAPE of 16%. Outside the phantom, only the S2 showed a SMAPE lower than 11%. Conclusions: Results showed that assessing the accuracy of ||B⃗|| based only on comparison along the central longitudinal line of the coil can be misleading. Generic or hybrid coils – when properly modeling the currents along the rings/rungs – were sufficient to accurately reproduce the fields inside a phantom, while a specific model was needed to accurately model ||E⃗|| in the space between coil and phantom. Significance: Computational modeling of birdcage body coils is extensively used in the evaluation of RF-induced heating during MRI. Experimental validation of numerical models is needed to determine if a model is an accurate representation of a physical coil. PMID:26685220
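The error metric is worth pinning down. One common SMAPE definition is sketched below; the paper may normalize slightly differently, and the sample values are placeholders.

```python
# Symmetric mean absolute percentage error (SMAPE), one common definition.
import numpy as np

def smape(measured, simulated):
    """SMAPE in percent: mean of |diff| over the average magnitude."""
    num = np.abs(simulated - measured)
    den = (np.abs(simulated) + np.abs(measured)) / 2.0
    return 100.0 * np.mean(num / den)

b_meas = np.array([1.00, 0.95, 0.88])   # ||B|| samples (arbitrary units)
b_sim = np.array([1.04, 0.90, 0.93])    # model prediction (placeholder)
print(f"SMAPE = {smape(b_meas, b_sim):.1f} %")
```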
Theory of twisted nonuniformly heated bars
NASA Technical Reports Server (NTRS)
Shorr, B. F.
1980-01-01
Nonlinearly distributed stresses in twisted nonuniformly heated bars of arbitrary cross section are calculated taking into account various elasticity parameters. The approximate theory is shown to be sufficiently general and accurate by comparison with experimental data.
TEACHING PHYSICS: Atwood's machine: experiments in an accelerating frame
NASA Astrophysics Data System (ADS)
Teck Chee, Chia; Hong, Chia Yee
1999-03-01
Experiments in an accelerating frame are often difficult to perform, but simple computer software allows sufficiently rapid and accurate measurements to be made on an arrangement of weights and pulleys known as Atwood's machine.
Atwood's Machine: Experiments in an Accelerating Frame.
ERIC Educational Resources Information Center
Chee, Chia Teck; Hong, Chia Yee
1999-01-01
Experiments in an accelerating frame are hard to perform. Illustrates how simple computer software allows sufficiently rapid and accurate measurements to be made on an arrangement of weights and pulleys known as Atwood's machine. (Author/CCM)
Visualization of scoliotic spine using ultrasound-accessible skeletal landmarks
NASA Astrophysics Data System (ADS)
Church, Ben; Lasso, Andras; Schlenger, Christopher; Borschneck, Daniel P.; Mousavi, Parvin; Fichtinger, Gabor; Ungi, Tamas
2017-03-01
PURPOSE: Ultrasound imaging is an attractive alternative to X-ray for scoliosis diagnosis and monitoring due to its safety and inexpensiveness. The transverse processes as skeletal landmarks are accessible by means of ultrasound and are sufficient for quantifying scoliosis, but do not provide an informative visualization of the spine. METHODS: We created a method for visualization of the scoliotic spine using a 3D transform field, resulting from thin-plate spline interpolation of a landmark-based registration between the transverse processes that we localized in both the patient's ultrasound and an average healthy spine model. Additional anchor points were computationally generated to control the thin-plate spline interpolation, in order to obtain a transform field that accurately represents the deformation of the patient's spine. The transform field is applied to the average spine model, resulting in a 3D surface model depicting the patient's spine. We used ground-truth CT from pediatric scoliosis patients, in which we reconstructed the bone surface and localized the transverse processes. We warped the average spine model and analyzed the match between the patient's bone surface and the warped spine. RESULTS: Visual inspection revealed accurate rendering of the scoliotic spine. Notable misalignments occurred mainly in the anterior-posterior direction and at the first and last vertebrae, which is immaterial for scoliosis quantification. The average Hausdorff distance computed for 4 patients was 2.6 mm. CONCLUSIONS: We achieved qualitatively accurate and intuitive visualization to depict the 3D deformation of the patient's spine when compared to ground-truth CT.
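The landmark-driven warp can be sketched with scipy's radial-basis interpolator, whose thin-plate-spline kernel corresponds to the interpolation named above. The landmark arrays below are placeholders, not the study's data; the anchor-point generation step is omitted.

```python
# Thin-plate-spline warp sketch: fit a 3D->3D transform from model
# landmarks to patient landmarks, then apply it to all model vertices.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
model_landmarks = rng.random((24, 3))                    # average-model transverse processes (placeholder)
patient_landmarks = model_landmarks + 0.01 * rng.standard_normal((24, 3))

# Transform field defined by the landmark correspondences
tps = RBFInterpolator(model_landmarks, patient_landmarks,
                      kernel="thin_plate_spline")

model_vertices = rng.random((5000, 3))                   # average spine surface points
warped_vertices = tps(model_vertices)                    # patient-specific surface
```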
Summary of Data from the First AIAA CFD Drag Prediction Workshop
NASA Technical Reports Server (NTRS)
Levy, David W.; Zickuhr, Tom; Vassberg, John; Agrawal, Shreekant; Wahls, Richard A.; Pirzadeh, Shahyar; Hemsch, Michael J.
2002-01-01
The results from the first AIAA CFD Drag Prediction Workshop are summarized. The workshop was designed specifically to assess the state-of-the-art of computational fluid dynamics methods for force and moment prediction. An impartial forum was provided to evaluate the effectiveness of existing computer codes and modeling techniques, and to identify areas needing additional research and development. The subject of the study was the DLR-F4 wing-body configuration, which is representative of transport aircraft designed for transonic flight. Specific test cases were required so that valid comparisons could be made. Optional test cases included constant-C_L drag-rise predictions typically used in airplane design by industry. Results are compared to experimental data from three wind tunnel tests. A total of 18 international participants using 14 different codes submitted data to the workshop. No particular grid type or turbulence model was more accurate, when compared to each other, or to wind tunnel data. Most of the results overpredicted C_Lo and C_Do, but induced drag (dC_D/dC_L^2) agreed fairly well. Drag rise at high Mach number was underpredicted, however, especially at high C_L. On average, the drag data were fairly accurate, but the scatter was greater than desired. The results show that well-validated Reynolds-Averaged Navier-Stokes CFD methods are sufficiently accurate to make design decisions based on predicted drag.
Signal propagation and logic gating in networks of integrate-and-fire neurons.
Vogels, Tim P; Abbott, L F
2005-11-16
Transmission of signals within the brain is essential for cognitive function, but it is not clear how neural circuits support reliable and accurate signal propagation over a sufficiently large dynamic range. Two modes of propagation have been studied: synfire chains, in which synchronous activity travels through feedforward layers of a neuronal network, and the propagation of fluctuations in firing rate across these layers. In both cases, a sufficient amount of noise, which was added to previous models from an external source, had to be included to support stable propagation. Sparse, randomly connected networks of spiking model neurons can generate chaotic patterns of activity. We investigate whether this activity, which is a more realistic noise source, is sufficient to allow for signal transmission. We find that, for rate-coded signals but not for synfire chains, such networks support robust and accurate signal reproduction through up to six layers if appropriate adjustments are made in synaptic strengths. We investigate the factors affecting transmission and show that multiple signals can propagate simultaneously along different pathways. Using this feature, we show how different types of logic gates can arise within the architecture of the random network through the strengthening of specific synapses.
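A toy version of the rate-propagation setup is sketched below: one feedforward layer of leaky integrate-and-fire neurons driven by a shared rate-coded signal plus Poisson background. In the study the background arises from the chaotic recurrent network itself; here it is a fixed external input, and all parameter values are illustrative.

```python
# One layer of LIF neurons receiving a rate-coded sinusoidal signal plus
# Poisson background noise; the smoothed population rate tracks the signal.
import numpy as np

rng = np.random.default_rng(1)
dt, T = 1e-4, 1.0                               # time step and duration (s)
steps = int(T / dt)
n = 200                                         # neurons in the layer
tau, v_rest, v_thr, v_reset = 20e-3, -70e-3, -54e-3, -60e-3  # s, V

signal_rate = 20 + 15 * np.sin(2 * np.pi * 2 * np.arange(steps) * dt)  # Hz
w_sig, w_bg, bg_rate = 1.5e-3, 1.0e-3, 500.0    # synaptic jumps (V), bg rate (Hz)

v = np.full(n, v_rest)
layer_rate = np.zeros(steps)
for t in range(steps):
    sig = rng.poisson(signal_rate[t] * dt, n)   # shared rate-coded input spikes
    bg = rng.poisson(bg_rate * dt, n)           # independent background spikes
    v += dt / tau * (v_rest - v) + w_sig * sig + w_bg * bg
    fired = v >= v_thr
    v[fired] = v_reset
    layer_rate[t] = fired.sum() / (n * dt)      # instantaneous population rate (Hz)

# Smoothed over 50 ms, layer_rate should follow the sinusoidal input rate
smoothed = np.convolve(layer_rate, np.ones(500) / 500, mode="same")
```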
Cancer Detection Using Neural Computing Methodology
NASA Technical Reports Server (NTRS)
Toomarian, Nikzad; Kohen, Hamid S.; Bearman, Gregory H.; Seligson, David B.
2001-01-01
This paper describes a novel learning methodology used to analyze bio-materials. The premise of this research is to help pathologists quickly identify anomalous cells in a cost-efficient manner. Skilled pathologists must manually analyze histopathologic materials methodically, efficiently and carefully for the presence, amount and degree of malignancy and/or other disease states. The prolonged attention required to accomplish this task induces fatigue that may result in a higher rate of diagnostic errors. In addition, automated image analysis systems to date lack a sufficiently intelligent means of identifying even the most general regions of interest in tissue-based studies, and this shortfall greatly limits their utility. An intelligent data understanding system that could quickly and accurately identify diseased tissues and/or choose regions of interest would be expected to increase the accuracy of diagnosis and usher in truly automated tissue-based image analysis.
Zhang, Z; Jewett, D L
1994-01-01
Due to model misspecification, currently-used Dipole Source Localization (DSL) methods may contain Multiple-Generator Errors (MulGenErrs) when fitting simultaneously-active dipoles. The size of the MulGenErr is a function of both the model used, and the dipole parameters, including the dipoles' waveforms (time-varying magnitudes). For a given fitting model, by examining the variation of the MulGenErrs (or the fit parameters) under different waveforms for the same generating-dipoles, the accuracy of the fitting model for this set of dipoles can be determined. This method of testing model misspecification can be applied to evoked potential maps even when the parameters of the generating-dipoles are unknown. The dipole parameters fitted in a model should only be accepted if the model can be shown to be sufficiently accurate.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fukuoka, T.
Many studies have been devoted to investigating how the maximum stress occurring in a bolted joint can be reduced. Patterson and Kenny suggest that a modified nut with a straight bevel at the bearing surface is effective. However, they dealt only with M30, and their evaluation of the nut geometry was not necessarily sufficient. In this study, an extensive finite element approach for solving the general multi-body contact problem is proposed by incorporating a regularization method into stiffness matrices with singularity involved; thus, numerical analyses are executed to accurately determine the optimal shape of the modified nut for various design factors. A modified nut with a curved bevel is also treated, and it is concluded that the modified nuts are significantly effective for bolts with larger nominal diameter and fine pitch, and are practically useful compared to pitch modification and tapered thread methods.
Direct analysis of [6,6-(2)H2]glucose and [U-(13)C6]glucose dry blood spot enrichments by LC-MS/MS.
Coelho, Margarida; Mendes, Vera M; Lima, Inês S; Martins, Fátima O; Fernandes, Ana B; Macedo, M Paula; Jones, John G; Manadas, Bruno
2016-06-01
A liquid chromatography tandem mass spectrometry (LC-MS/MS) using multiple reaction monitoring (MRM) in a triple-quadrupole scan mode was developed and comprehensively validated for the determination of [6,6-(2)H2]glucose and [U-(13)C6]glucose enrichments from dried blood spots (DBS) without prior derivatization. The method is demonstrated with dried blood spots obtained from rats administered with a primed-constant infusion of [U-(13)C6]glucose and an oral glucose load enriched with [6,6-(2)H2]glucose. The sensitivity is sufficient for analysis of the equivalent to <5μL of blood and the overall method was accurate and precise for the determination of DBS isotopic enrichments. Copyright © 2016 Elsevier B.V. All rights reserved.
Sibsonian and non-Sibsonian natural neighbour interpolation of the total electron content value
NASA Astrophysics Data System (ADS)
Kotulak, Kacper; Froń, Adam; Krankowski, Andrzej; Pulido, German Olivares; Hernandez-Pajares, Manuel
2017-03-01
In radioastronomy, interferometric measurement between radiotelescopes located relatively close to each other helps remove ionospheric effects. Unfortunately, in the case of networks such as the LOw Frequency ARray (LOFAR), due to long baselines (currently up to 1500 km), interferometric methods fail to provide sufficiently accurate ionospheric delay corrections. In practice, this means that systems such as LOFAR need external ionosphere information, coming from Global or Regional Ionospheric Maps (GIMs or RIMs, respectively). Thanks to technology based on Global Navigation Satellite Systems (GNSS), the scientific community is provided with ionosphere sounding virtually worldwide. In this paper we compare several interpolation methods for RIM computation based on scattered Vertical Total Electron Content measurements located on one thin ionospheric layer (Ionospheric Pierce Points, IPPs). The results of this work show that methods that take into account the topology of the data distribution (e.g., natural neighbour interpolation) perform better than those based on geometric computation only (e.g., distance-weighted methods).
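The contrast between the two interpolation families can be sketched on synthetic VTEC data. True Sibson natural-neighbour interpolation needs a dedicated implementation; scipy's Delaunay-based 'linear' mode is used below only as a stand-in for the topology-aware alternative to inverse-distance weighting.

```python
# Distance-weighted (IDW) vs. triangulation-based interpolation of
# scattered VTEC values at ionospheric pierce points (synthetic data).
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(2)
ipp = rng.uniform(0, 40, size=(300, 2))                  # IPP lon/lat, degrees
vtec = 10 + 0.3 * ipp[:, 0] + rng.normal(0, 0.5, 300)    # TECU (synthetic)

def idw(points, values, query, power=2, eps=1e-9):
    """Inverse-distance-weighted interpolation (purely geometric)."""
    d = np.linalg.norm(points[None, :, :] - query[:, None, :], axis=2)
    w = 1.0 / (d + eps) ** power
    return (w * values).sum(axis=1) / w.sum(axis=1)

grid = np.mgrid[0:40:81j, 0:40:81j].reshape(2, -1).T
rim_idw = idw(ipp, vtec, grid)
rim_tri = griddata(ipp, vtec, grid, method="linear")     # NaN outside the hull
```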
Identification of modal parameters including unmeasured forces and transient effects
NASA Astrophysics Data System (ADS)
Cauberghe, B.; Guillaume, P.; Verboven, P.; Parloo, E.
2003-08-01
In this paper, a frequency-domain method to estimate modal parameters from short data records with known (measured) input forces and unknown input forces is presented. The method can be used for an experimental modal analysis, an operational modal analysis (output-only data) and the combination of both. A traditional experimental and operational modal analysis in the frequency domain starts, respectively, from frequency response functions and spectral density functions. To estimate these functions accurately, sufficient data have to be available. The technique developed in this paper estimates the modal parameters directly from the Fourier spectra of the outputs and the known input. Instead of using Hanning windows on these short data records, the transient effects are estimated simultaneously with the modal parameters. The method is illustrated, tested and validated by Monte Carlo simulations and experiments. The presented method to process short data sequences leads to unbiased estimates with a small variance in comparison to the more traditional approaches.
Split Space-Marching Finite-Volume Method for Chemically Reacting Supersonic Flow
NASA Technical Reports Server (NTRS)
Rizzi, Arthur W.; Bailey, Harry E.
1976-01-01
A space-marching finite-volume method employing a nonorthogonal coordinate system and using a split differencing scheme for calculating steady supersonic flow over aerodynamic shapes is presented. It is a second-order-accurate mixed explicit-implicit procedure that solves the inviscid adiabatic and nondiffusive equations for chemically reacting flow in integral conservation-law form. The relationship between the finite-volume and differential forms of the equations is examined and the relative merits of each discussed. The method admits initial Cauchy data situated on any arbitrary surface and integrates them forward along a general curvilinear coordinate, distorting and deforming the surface as it advances. The chemical kinetics term is split from the convective terms which are themselves dimensionally split, thereby freeing the fluid operators from the restricted step size imposed by the chemical reactions and increasing the computational efficiency. The accuracy of this splitting technique is analyzed, a sufficient stability criterion is established, a representative flow computation is discussed, and some comparisons are made with another method.
Tashima, Hideaki; Takeda, Masafumi; Suzuki, Hiroyuki; Obi, Takashi; Yamaguchi, Masahiro; Ohyama, Nagaaki
2010-06-21
We have shown that the application of double random phase encoding (DRPE) to biometrics enables the use of biometrics as cipher keys for binary data encryption. However, DRPE is reported to be vulnerable to known-plaintext attacks (KPAs) using a phase recovery algorithm. In this study, we investigated the vulnerability of DRPE using fingerprints as cipher keys to the KPAs. By means of computational experiments, we estimated the encryption key and restored the fingerprint image using the estimated key. Further, we propose a method for avoiding the KPA on the DRPE that employs the phase retrieval algorithm. The proposed method makes the amplitude component of the encrypted image constant in order to prevent the amplitude component of the encrypted image from being used as a clue for phase retrieval. Computational experiments showed that the proposed method not only avoids revealing the cipher key and the fingerprint but also serves as a sufficiently accurate verification system.
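The DRPE primitive itself is compact. A minimal numpy sketch follows: the image is multiplied by one random phase mask in the spatial domain and a second in the Fourier domain, and decryption reverses the steps with conjugate keys. In the biometric scheme the key phases would be derived from fingerprint data rather than a random generator, and the proposed amplitude-flattening countermeasure is not shown.

```python
# Double random phase encoding (DRPE) encrypt/decrypt sketch.
import numpy as np

rng = np.random.default_rng(3)
img = rng.random((64, 64))                          # plaintext (placeholder image)
phi1 = np.exp(2j * np.pi * rng.random(img.shape))   # input-plane phase key
phi2 = np.exp(2j * np.pi * rng.random(img.shape))   # Fourier-plane phase key

cipher = np.fft.ifft2(np.fft.fft2(img * phi1) * phi2)

# Decryption: undo the Fourier-plane key, then the input-plane key
recovered = np.fft.ifft2(np.fft.fft2(cipher) * np.conj(phi2)) * np.conj(phi1)
assert np.allclose(recovered.real, img, atol=1e-10)
```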
A novel application of artificial neural network for wind speed estimation
NASA Astrophysics Data System (ADS)
Fang, Da; Wang, Jianzhou
2017-05-01
Providing accurate multi-step wind speed estimation models has increasing significance, because of the important technical and economic impacts of wind speed on power grid security and environmental benefits. In this study, combined strategies for wind speed forecasting are proposed based on an intelligent data processing system using artificial neural networks (ANN). A generalized regression neural network and an Elman neural network are employed to form two hybrid models. The approach employs one ANN to model the samples, achieving data denoising and assimilation, and applies the other to predict wind speed using the pre-processed samples. The proposed method is evaluated in terms of the predictive improvements of the hybrid models over a single ANN and a typical forecasting method. To give sufficient cases for the study, four observation sites with monthly average wind speed for four given years in Western China were used to test the models. Multiple evaluation methods demonstrated that the proposed method provides a promising alternative technique for monthly average wind speed estimation.
Ma, Xin; Guo, Jing; Sun, Xiao
2015-01-01
The prediction of RNA-binding proteins is one of the most challenging problems in computational biology. Although some studies have investigated this problem, the accuracy of prediction is still not sufficient. In this study, a highly accurate method was developed to predict RNA-binding proteins from amino acid sequences using random forests with the minimum redundancy maximum relevance (mRMR) method, followed by incremental feature selection (IFS). We incorporated conjoint triad features and three novel features: binding propensity (BP), nonbinding propensity (NBP), and evolutionary information combined with physicochemical properties (EIPP). The results showed that these novel features play important roles in improving the performance of the predictor. Using the mRMR-IFS method, our predictor achieved the best performance (86.62% accuracy and a Matthews correlation coefficient of 0.737). The high prediction accuracy suggests that our method can be a useful approach for identifying RNA-binding proteins from sequence information.
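The ranking-plus-IFS pipeline is straightforward to sketch with scikit-learn. Mutual information is used below as a stand-in for mRMR (which additionally penalizes redundancy), and the data are synthetic.

```python
# Feature ranking followed by incremental feature selection (IFS) with a
# random forest; mutual information approximates the mRMR relevance term.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=400, n_features=60, n_informative=10,
                           random_state=0)
order = np.argsort(mutual_info_classif(X, y, random_state=0))[::-1]

best_k, best_acc = 0, 0.0
for k in range(5, 61, 5):                    # grow the feature subset
    cols = order[:k]
    acc = cross_val_score(RandomForestClassifier(n_estimators=200,
                                                 random_state=0),
                          X[:, cols], y, cv=5).mean()
    if acc > best_acc:
        best_k, best_acc = k, acc
print(f"best subset: top {best_k} features, CV accuracy {best_acc:.3f}")
```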
Physical habitat in the national wadeable streams assessment
Effective environmental policy decisions require stream habitat information that is accurate, precise, and relevant. The recent National Wadeable Streams Assessment (NWSA) carried out by the U.S. EPA required physical habitat information sufficiently comprehensive to facilitate i...
Variable-pulse switching circuit accurately controls solenoid-valve actuations
NASA Technical Reports Server (NTRS)
Gillett, J. D.
1967-01-01
Solid state circuit generating adjustable square wave pulses of sufficient power operates a 28 volt dc solenoid valve at precise time intervals. This circuit is used for precise time control of fluid flow in combustion experiments.
Knowlton, Chris; Meliza, C Daniel; Margoliash, Daniel; Abarbanel, Henry D I
2014-06-01
Estimating the behavior of a network of neurons requires accurate models of the individual neurons along with accurate characterizations of the connections among them. Whereas for a single cell, measurements of the intracellular voltage are technically feasible and sufficient to characterize a useful model of its behavior, making sufficient numbers of simultaneous intracellular measurements to characterize even small networks is infeasible. This paper builds on prior work on single neurons to explore whether knowledge of the time of spiking of neurons in a network, once the nodes (neurons) have been characterized biophysically, can provide enough information to usefully constrain the functional architecture of the network: the existence of synaptic links among neurons and their strength. Using standardized voltage and synaptic gating variable waveforms associated with a spike, we demonstrate that the functional architecture of a small network of model neurons can be established.
Measuring and accounting for the intensity of nursing care: is it worthwhile?
Finkler, Steven A
2008-05-01
In June 2007, the Robert Wood Johnson Foundation sponsored a conference titled "The Economics of Nursing: Paying for Quality Nursing Care." The second topic at the conference was "the appropriateness and feasibility of measuring and accounting for the intensity of nursing care." Drs. Welton and Sermeus presented papers on that topic. This response to those papers focuses on why the hospital industry has not always accounted for and measured nursing intensity. Then it asks, "Why do we want more accurate information about nursing resources used by different patients?" It is not sufficient to say the data regarding nursing costs are not accurate. Nor is it sufficient to say that we now can improve the accuracy of the data. To move forward in this area, we need to develop compelling evidence and arguments that indicate that nursing-cost data of greater accuracy have a benefit that will exceed the costs of that data collection.
Shen, Xiaomeng; Hu, Qiang; Li, Jun; Wang, Jianmin; Qu, Jun
2015-10-02
Comprehensive and accurate evaluation of data quality and false-positive biomarker discovery is critical to direct the method development/optimization for quantitative proteomics, which nonetheless remains challenging largely due to the high complexity and unique features of proteomic data. Here we describe an experimental null (EN) method to address this need. Because the method experimentally measures the null distribution (either technical or biological replicates) using the same proteomic samples, the same procedures and the same batch as the case-versus-control experiment, it correctly reflects the collective effects of technical variability (e.g., variation/bias in sample preparation, LC-MS analysis, and data processing) and project-specific features (e.g., characteristics of the proteome and biological variation) on the performances of quantitative analysis. As a proof of concept, we employed the EN method to assess the quantitative accuracy and precision and the ability to quantify subtle ratio changes between groups using different experimental and data-processing approaches and in various cellular and tissue proteomes. It was found that choices of quantitative features, sample size, experimental design, data-processing strategies, and quality of chromatographic separation can profoundly affect quantitative precision and accuracy of label-free quantification. The EN method was also demonstrated as a practical tool to determine the optimal experimental parameters and rational ratio cutoff for reliable protein quantification in specific proteomic experiments, for example, to identify the necessary number of technical/biological replicates per group that affords sufficient power for discovery. Furthermore, we assessed the ability of the EN method to estimate levels of false-positives in the discovery of altered proteins, using two concocted sample sets mimicking proteomic profiling using technical and biological replicates, respectively, where the true-positives/negatives are known and span a wide concentration range. It was observed that the EN method correctly reflects the null distribution in a proteomic system and accurately measures the false altered-protein discovery rate (FADR). In summary, the EN method provides a straightforward, practical, and accurate alternative to statistics-based approaches for the development and evaluation of proteomic experiments and can be universally adapted to various types of quantitative techniques.
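The experimental-null idea reduces to a simple computation once the replicate-versus-replicate ratios are in hand. A crude sketch with synthetic log-ratios (the thresholds, sizes, and FADR estimator below are illustrative, not the paper's exact procedure):

```python
# Experimental-null sketch: set a ratio cutoff from the replicate-vs-
# replicate (null) distribution, then estimate the false altered-protein
# discovery rate (FADR) for the case-vs-control comparison.
import numpy as np

rng = np.random.default_rng(4)
null_log2 = rng.normal(0, 0.25, 3000)                  # replicate-vs-replicate
case_log2 = np.concatenate([rng.normal(0, 0.25, 2800), # unchanged proteins
                            rng.normal(1.2, 0.3, 200)])  # truly altered proteins

lo, hi = np.percentile(null_log2, [0.5, 99.5])         # 99% null interval
called = (case_log2 < lo) | (case_log2 > hi)
null_rate = np.mean((null_log2 < lo) | (null_log2 > hi))
expected_false = null_rate * case_log2.size            # nulls expected to exceed cutoff
fadr = expected_false / max(called.sum(), 1)
print(f"cutoffs {lo:.2f}/{hi:.2f} log2, called {called.sum()}, FADR ~ {fadr:.2%}")
```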
Theoretical study of the hyperfine parameters of OH
NASA Technical Reports Server (NTRS)
Chong, Delano P.; Langhoff, Stephen R.; Bauschlicher, Charles W., Jr.
1991-01-01
In the present study of the hyperfine parameters of O-17H as a function of the one- and n-particle spaces, all of the parameters except the oxygen spin density, b_F(O), are tractable enough to allow concentration on the computational requirements for an accurate determination of b_F(O). Full configuration-interaction (FCI) calculations in six Gaussian basis sets yield unambiguous results for (1) the effect of uncontracting the O s and p basis sets; (2) that of adding diffuse s and p functions; and (3) that of adding polarization functions to O. The size-extensive modified coupled-pair functional method yields b_F values which are in fair agreement with FCI results.
Ultrasonic and radiographic evaluation of advanced aerospace materials: Ceramic composites
NASA Technical Reports Server (NTRS)
Generazio, Edward R.
1990-01-01
Two conventional nondestructive evaluation techniques were used to evaluate advanced ceramic composite materials. It was shown that neither ultrasonic C-scan nor radiographic imaging can individually provide sufficient data for an accurate nondestructive evaluation. Both ultrasonic C-scan and conventional radiographic imaging are required for preliminary evaluation of these complex systems. The material variations that were identified by these two techniques are porosity, delaminations, bond quality between laminae, fiber alignment, fiber registration, fiber parallelism, and processing density flaws. The degree of bonding between fiber and matrix cannot be determined by either of these methods. An alternative ultrasonic technique, angular power spectrum scanning (APSS) is recommended for quantification of this interfacial bond.
Computer-assisted detection of epileptiform focuses on SPECT images
NASA Astrophysics Data System (ADS)
Grzegorczyk, Dawid; Dunin-Wąsowicz, Dorota; Mulawka, Jan J.
2010-09-01
Epilepsy is a common nervous system disease, often related to consciousness disturbances and muscular spasm, which affects about 1% of the human population. Despite major technological advances in medicine in recent years, progress toward overcoming it has been insufficient. Application of advanced statistical methods and computer image analysis offers the hope of accurate detection and later removal of the epileptiform focuses that are the cause of some types of epilepsy. The aim of this work was to create a computer system that would help to find and diagnose disorders of blood circulation in the brain. This may be helpful for diagnosing the onset of epileptic seizures.
Diffraction Correlation to Reconstruct Highly Strained Particles
NASA Astrophysics Data System (ADS)
Brown, Douglas; Harder, Ross; Clark, Jesse; Kim, J. W.; Kiefer, Boris; Fullerton, Eric; Shpyrko, Oleg; Fohtung, Edwin
2015-03-01
Through the use of coherent x-ray diffraction, a three-dimensional diffraction pattern of a highly strained nano-crystal can be recorded in reciprocal space by a detector. Only the intensities are recorded, resulting in a loss of the complex phase. The recorded diffraction pattern therefore requires computational processing to reconstruct the density and complex distribution of the diffracted nano-crystal. For highly strained crystals, standard methods using the hybrid input-output (HIO) and error reduction (ER) algorithms are no longer sufficient to reconstruct the diffraction pattern. Our solution is to correlate the symmetry in reciprocal space to generate an a priori shape constraint that guides the computational reconstruction of the diffraction pattern. This approach has improved the ability to accurately reconstruct highly strained nano-crystals.
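For reference, the baseline ER iteration that the abstract says becomes insufficient for highly strained crystals is sketched below: alternate between enforcing the measured Fourier modulus and a real-space support. The a priori shape constraint proposed above would refine the `support` array; the test object is synthetic.

```python
# Error-reduction (ER) phase retrieval sketch: modulus constraint in
# reciprocal space, support constraint in real space, iterated.
import numpy as np

def error_reduction(measured_modulus, support, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    g = support * rng.random(support.shape)              # random start
    for _ in range(n_iter):
        G = np.fft.fftn(g)
        G = measured_modulus * np.exp(1j * np.angle(G))  # keep phase, fix modulus
        g = np.fft.ifftn(G)
        g = np.where(support, g, 0)                      # zero outside support
    return g

# Synthetic test: recover a block object from its diffraction modulus
obj = np.zeros((64, 64)); obj[24:40, 28:36] = 1.0
modulus = np.abs(np.fft.fftn(obj))
support = np.zeros_like(obj, dtype=bool); support[20:44, 24:40] = True
recon = error_reduction(modulus, support)
```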
NASA Astrophysics Data System (ADS)
Wright, K. E.; Popa, K.; Pöml, P.
2018-01-01
Transmutation nuclear fuels contain weight percentage quantities of actinide elements, including Pu, Am and Np. Because of the complex spectra presented by actinide elements in electron probe microanalysis (EPMA), relatively pure actinide element standards are necessary to facilitate overlap correction and accurate quantitation. Synthesis of actinide oxide standards is complicated by their multiple oxidation states, which can result in inhomogeneous standards or standards that are not stable under atmospheric conditions. Synthesis of PuP4 results in a specimen that exhibits stable oxidation-reduction chemistry and is sufficiently homogeneous to serve as an EPMA standard. This approach shows promise as a method for producing viable actinide standards for microanalysis.
Peters, R J B; Oosterink, J E; Stolker, A A M; Georgakopoulos, C; Nielen, M W F
2010-04-01
A unification of doping-control screening procedures of prohibited small molecule substances--including stimulants, narcotics, steroids, beta2-agonists and diuretics--is highly urgent in order to free resources for new classes such as banned proteins. Conceptually this may be achieved by the use of a combination of one gas chromatography-time-of-flight mass spectrometry method and one liquid chromatography-time-of-flight mass spectrometry method. In this work a quantitative screening method using high-resolution liquid chromatography in combination with accurate-mass time-of-flight mass spectrometry was developed and validated for the determination of glucocorticosteroids, beta2-agonists, thiazide diuretics, and narcotics and stimulants in urine. To enable the simultaneous isolation of all the compounds of interest and the necessary purification of the resulting extracts, a generic extraction and hydrolysis procedure was combined with a solid-phase extraction modified for these groups of compounds. All 56 compounds are determined using positive electrospray ionisation, with the exception of the thiazide diuretics, for which the best sensitivity was obtained by using negative electrospray ionisation. The results show that, with the exception of clenhexyl, procaterol, and reproterol, all compounds can be detected below the respective minimum required performance level, and the results for linearity, repeatability, within-lab reproducibility, and accuracy show that the method can be used for quantitative screening. If qualitative screening is sufficient, the instrumental analysis may be limited to positive ionisation, because all analytes including the thiazides can be detected at the respective minimum required levels in the positive mode. The results show that the application of accurate-mass time-of-flight mass spectrometry in combination with generic extraction and purification procedures is suitable for unifying and expanding the screening window of doping laboratories. Moreover, the full-scan accurate-mass data sets obtained still allow retrospective examination for emerging doping agents, without re-analyzing the samples.
Model-based sphere localization (MBSL) in x-ray projections
NASA Astrophysics Data System (ADS)
Sawall, Stefan; Maier, Joscha; Leinweber, Carsten; Funck, Carsten; Kuntz, Jan; Kachelrieß, Marc
2017-08-01
The detection of spherical markers in x-ray projections is an important task in a variety of applications, e.g. geometric calibration and detector distortion correction. Therein, the projection of the sphere center on the detector is of particular interest, as the spherical beads used are not ideal point-like objects. Only a few methods have been proposed to estimate this position on the detector with sufficient accuracy, and surrogate positions, e.g. the center of gravity, are used instead, impairing the results of subsequent algorithms. We propose to estimate the projection of the sphere center on the detector using a simulation-based method matching an artificial projection to the actual measurement. The proposed algorithm intrinsically corrects for all polychromatic effects included in the measurement and absent in the simulation by a polynomial which is estimated simultaneously. Furthermore, neither the acquisition geometry nor any object properties besides the fact that the object is of spherical shape need to be known to find the center of the bead. It is shown by simulations that the algorithm estimates the center projection with an error of less than 1% of the detector pixel size in case of realistic noise levels and that the method is robust to the sphere material, sphere size, and acquisition parameters. A comparison to three reference methods using simulations and measurements indicates that the proposed method is an order of magnitude more accurate compared to these algorithms. The proposed method is an accurate algorithm to estimate the center of spherical markers in CT projections in the presence of polychromatic effects and noise.
Simple and accurate quantification of BTEX in ambient air by SPME and GC-MS.
Baimatova, Nassiba; Kenessov, Bulat; Koziel, Jacek A; Carlsen, Lars; Bektassov, Marat; Demyanenko, Olga P
2016-07-01
Benzene, toluene, ethylbenzene and xylenes (BTEX) comprise one of the most ubiquitous and hazardous groups of ambient air pollutants of concern. Application of standard analytical methods for quantification of BTEX is limited by the complexity of sampling and sample preparation equipment, and budget requirements. Methods based on SPME represent a simpler alternative, but still require complex calibration procedures. The objective of this research was to develop a simpler, low-budget, and accurate method for quantification of BTEX in ambient air based on SPME and GC-MS. Standard 20-mL headspace vials were used for field air sampling and calibration. To avoid challenges with obtaining and working with 'zero' air, slope factors of external standard calibration were determined using standard addition and inherently polluted lab air. For the polydimethylsiloxane (PDMS) fiber, differences between the slope factors of calibration plots obtained using lab and outdoor air were below 14%. The PDMS fiber provided higher precision during calibration, while the use of Carboxen/PDMS fiber resulted in lower detection limits for benzene and toluene. To provide sufficient accuracy, the use of 20-mL vials requires triplicate sampling and analysis. The method was successfully applied for analysis of 108 ambient air samples from Almaty, Kazakhstan. Average concentrations of benzene, toluene, ethylbenzene and o-xylene were 53, 57, 11 and 14 µg m(-3), respectively. The developed method can be modified for further quantification of a wider range of volatile organic compounds in air. In addition, the new method is amenable to automation. Copyright © 2016 Elsevier B.V. All rights reserved.
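The standard-addition trick used to sidestep 'zero' air reduces to a short calculation: regress the instrument response on the added analyte amount and extrapolate to the x-intercept to recover the native amount. A sketch with invented numbers:

```python
# Standard-addition calibration sketch: signal = slope * (added + native),
# so the native amount is intercept / slope. Values are illustrative.
import numpy as np

added_ng = np.array([0.0, 5.0, 10.0, 20.0])       # benzene added to vial (synthetic)
peak_area = np.array([1.9e5, 3.1e5, 4.2e5, 6.6e5])

slope, intercept = np.polyfit(added_ng, peak_area, 1)
native_ng = intercept / slope                     # amount present before addition
print(f"native benzene in vial ~ {native_ng:.1f} ng")
```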
Okada, Toshiyuki; Linguraru, Marius George; Hori, Masatoshi; Summers, Ronald M; Tomiyama, Noriyuki; Sato, Yoshinobu
2015-12-01
This paper addresses the automated segmentation of multiple organs in upper abdominal computed tomography (CT) data. The aim of our study is to develop methods to effectively construct the conditional priors and use their prediction power for more accurate segmentation as well as easy adaptation to various imaging conditions in CT images, as observed in clinical practice. We propose a general framework of multi-organ segmentation which effectively incorporates interrelations among multiple organs and easily adapts to various imaging conditions without the need for supervised intensity information. The features of the framework are as follows: (1) A method for modeling conditional shape and location (shape-location) priors, which we call prediction-based priors, is developed to derive accurate priors specific to each subject, which enables the estimation of intensity priors without the need for supervised intensity information. (2) Organ correlation graph is introduced, which defines how the conditional priors are constructed and segmentation processes of multiple organs are executed. In our framework, predictor organs, whose segmentation is sufficiently accurate by using conventional single-organ segmentation methods, are pre-segmented, and the remaining organs are hierarchically segmented using conditional shape-location priors. The proposed framework was evaluated through the segmentation of eight abdominal organs (liver, spleen, left and right kidneys, pancreas, gallbladder, aorta, and inferior vena cava) from 134 CT data from 86 patients obtained under six imaging conditions at two hospitals. The experimental results show the effectiveness of the proposed prediction-based priors and the applicability to various imaging conditions without the need for supervised intensity information. Average Dice coefficients for the liver, spleen, and kidneys were more than 92%, and were around 73% and 67% for the pancreas and gallbladder, respectively. Copyright © 2015 Elsevier B.V. All rights reserved.
Auto calibration of a cone-beam-CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gross, Daniel; Heil, Ulrich; Schulze, Ralf
2012-10-15
Purpose: This paper introduces a novel autocalibration method for cone-beam-CTs (CBCT) or flat-panel CTs, assuming a perfect rotation. The method is based on ellipse-fitting. Autocalibration refers to accurate recovery of the geometric alignment of a CBCT device from projection images alone, without any manual measurements. Methods: The authors use test objects containing small arbitrarily positioned radio-opaque markers. No information regarding the relative positions of the markers is used. In practice, the authors use three to eight metal ball bearings (diameter of 1 mm), e.g., positioned roughly in a vertical line such that their projection image curves on the detector preferably form large ellipses over the circular orbit. From this ellipse-to-curve mapping and also from its inversion the authors derive an explicit formula. Nonlinear optimization based on this mapping enables them to determine the six relevant parameters of the system up to the device rotation angle, which is sufficient to define the geometry of a CBCT-machine assuming a perfect rotational movement. These parameters also include out-of-plane rotations. The authors evaluate their method by simulation based on data used in two similar approaches [L. Smekal, M. Kachelriess, E. Stepina, and W. A. Kalender, 'Geometric misalignment and calibration in cone-beam tomography,' Med. Phys. 31(12), 3242-3266 (2004); K. Yang, A. L. C. Kwan, D. F. Miller, and J. M. Boone, 'A geometric calibration method for cone beam CT systems,' Med. Phys. 33(6), 1695-1706 (2006)]. This allows a direct comparison of accuracy. Furthermore, the authors present real-world 3D reconstructions of a dry human spine segment and an electronic device. The reconstructions were computed from projections taken with a commercial dental CBCT device having two different focus-to-detector distances that were both calibrated with their method. The authors compare their reconstruction with a reconstruction computed by the manufacturer of the CBCT device to demonstrate the achievable spatial resolution of their calibration procedure. Results: Compared to the results published in the most closely related work [K. Yang, A. L. C. Kwan, D. F. Miller, and J. M. Boone, 'A geometric calibration method for cone beam CT systems,' Med. Phys. 33(6), 1695-1706 (2006)], the simulation proved the greater accuracy of their method, as well as a lower standard deviation of roughly 1 order of magnitude. When compared to another similar approach [L. Smekal, M. Kachelriess, E. Stepina, and W. A. Kalender, 'Geometric misalignment and calibration in cone-beam tomography,' Med. Phys. 31(12), 3242-3266 (2004)], their results were roughly of the same order of accuracy. Their analysis revealed that the method is capable of sufficiently calibrating out-of-plane angles in cases of larger cone angles when neglecting these angles negatively affects the reconstruction. Fine details in the 3D reconstruction of the spine segment and an electronic device indicate a high geometric calibration accuracy and the capability to produce state-of-the-art reconstructions. Conclusions: The method introduced here makes no requirements on the accuracy of the test object. In contrast to many previous autocalibration methods, their approach also includes out-of-plane rotations of the detector. Although assuming a perfect rotation, the method seems to be sufficiently accurate for a commercial CBCT scanner. For devices which require higher dimensional geometry models, the method could be used as an initial calibration procedure.
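The ellipse-fitting core of such autocalibration is easy to illustrate: fit a general conic to one marker's projected trajectory and extract the ellipse center. The sketch below uses an SVD least-squares conic fit on a synthetic track; mapping the fitted ellipses to the geometry parameters is the part specific to the paper and is not shown.

```python
# Fit a general conic a x^2 + b xy + c y^2 + d x + e y + f = 0 to a
# marker trajectory and recover the ellipse center.
import numpy as np

def fit_conic(x, y):
    """Least-squares conic coefficients via the smallest singular vector."""
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(D)
    return Vt[-1]

def ellipse_center(conic):
    """Center solves grad = 0: [[2a, b], [b, 2c]] @ (x0, y0) = (-d, -e)."""
    a, b, c, d, e, _ = conic
    M = np.array([[2 * a, b], [b, 2 * c]])
    return np.linalg.solve(M, [-d, -e])

rng = np.random.default_rng(5)
theta = np.linspace(0, 2 * np.pi, 180)
x = 512 + 300 * np.cos(theta) + rng.normal(0, 0.3, 180)  # synthetic track (px)
y = 384 + 80 * np.sin(theta) + rng.normal(0, 0.3, 180)
print(ellipse_center(fit_conic(x, y)))                   # ~ (512, 384)
```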
Electrohydrodynamic interactions in Quincke rotation: from pair dynamics to collective motion
NASA Astrophysics Data System (ADS)
Das, Debasish; Saintillan, David
2013-11-01
Weakly conducting dielectric particles suspended in a dielectric liquid can undergo spontaneous sustained rotation when placed in a sufficiently strong dc electric field. This phenomenon of Quincke rotation has interesting implications for the rheology of these suspensions, whose effective viscosity can be reduced by application of an external field. While previous models based on the rotation of isolated particles have provided accurate estimates for this viscosity reduction in dilute suspensions, discrepancies have been reported in more concentrated systems where particle-particle interactions are likely significant. Motivated by this observation, we extend the classic description of Quincke rotation based on the Taylor-Melcher leaky dielectric model to account for pair electrohydrodynamic interactions between identical spheres using the method of reflections. We also consider the case of spherical particles undergoing Quincke rotation next to a planar electrode, where hydrodynamic interactions with the no-slip boundary lead to a self-propelled velocity. The interactions between such Quincke rollers are analyzed, and a transition to collective motion is predicted in sufficiently dense collections of many rollers, in agreement with recent experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Torquato, S.; Kim, I.C.; Cule, D.
1999-02-01
We generalize the Brownian motion simulation method of Kim and Torquato [J. Appl. Phys. 68, 3892 (1990)] to compute the effective conductivity, dielectric constant and diffusion coefficient of digitized composite media. This is accomplished by first generalizing the first-passage-time equations to treat first-passage regions of arbitrary shape. We then develop the appropriate first-passage-time equations for digitized media: first-passage squares in two dimensions and first-passage cubes in three dimensions. A severe test case to prove the accuracy of the method is the two-phase periodic checkerboard in which conduction, for sufficiently large phase contrasts, is dominated by corners that join two conducting-phase pixels. Conventional numerical techniques (such as finite differences or elements) do not accurately capture the local fields here for reasonable grid resolution and hence lead to inaccurate estimates of the effective conductivity. By contrast, we show that our algorithm yields accurate estimates of the effective conductivity of the periodic checkerboard for widely different phase conductivities. Finally, we illustrate our method by computing the effective conductivity of the random checkerboard for a wide range of volume fractions and several phase contrast ratios. These results always lie within rigorous four-point bounds on the effective conductivity. © 1999 American Institute of Physics.
Accuracy of Protein Embedding Potentials: An Analysis in Terms of Electrostatic Potentials.
Olsen, Jógvan Magnus Haugaard; List, Nanna Holmgaard; Kristensen, Kasper; Kongsted, Jacob
2015-04-14
Quantum-mechanical embedding methods have in recent years gained significant interest and may now be applied to predict a wide range of molecular properties calculated at different levels of theory. To reach a high level of accuracy in embedding methods, both the electronic structure model of the active region and the embedding potential need to be of sufficiently high quality. In fact, failures in quantum mechanics/molecular mechanics (QM/MM)-based embedding methods have often been attributed to the QM/MM methodology itself; however, in many cases such failures are due to the use of an inaccurate embedding potential. In this paper, we investigate in detail the quality of the electronic component of embedding potentials designed for calculations on protein biostructures. We show that very accurate explicitly polarizable embedding potentials may be efficiently designed using fragmentation strategies combined with single-fragment ab initio calculations. In fact, due to the self-interaction error in Kohn-Sham density functional theory (KS-DFT), use of large full-structure quantum-mechanical calculations based on conventional (hybrid) functionals leads to less accurate embedding potentials than fragment-based approaches. We also find that standard protein force fields yield poor embedding potentials, and it is therefore not advisable to use such force fields in general QM/MM-type calculations of molecular properties other than energies and structures.
Route Repetition and Route Reversal: Effects of Age and Encoding Method
Allison, Samantha; Head, Denise
2017-01-01
Previous research indicates age-related impairments in learning routes from a start location to a target destination. There is less research on age effects on the ability to reverse a learned path. The method used to learn routes may also influence performance. This study examined how encoding methods influence the ability of younger and older adults to recreate a route in a virtual reality environment in forward and reverse directions. Younger (n=50) and older (n=50) adults learned a route by either self-navigation through the virtual environment or through studying a map. At test, participants recreated the route in the forward and reverse directions. Older adults in the map study condition had greater difficulty learning the route in the forward direction compared to younger adults. Older adults who learned the route by self-navigation were less accurate in traversing the route in the reverse compared to forward direction after a delay. In contrast, for older adults who learned via map study there were no significant differences between forward and reverse directions. Results suggest that older adults may not as readily develop and retain a sufficiently flexible representation of the environment during self-navigation to support accurate route reversal. Thus, initially learning a route from a map may be more difficult for older adults, but may ultimately be beneficial in terms of better supporting the ability to return to a start location. PMID:28504535
A novel method for measuring polymer-water partition coefficients.
Zhu, Tengyi; Jafvert, Chad T; Fu, Dafang; Hu, Yue
2015-11-01
Low density polyethylene (LDPE) often is used as the sorbent material in passive sampling devices to estimate the average temporal chemical concentration in water bodies or sediment pore water. To calculate water phase chemical concentrations from LDPE concentrations accurately, it is necessary to know the LDPE-water partition coefficients (KPE-w) of the chemicals of interest. However, even moderately hydrophobic chemicals have large KPE-w values, making direct measurement experimentally difficult. In this study we evaluated a simple three phase system from which KPE-w can be determined easily and accurately. In the method, chemical equilibrium distribution between LDPE and a surfactant micelle pseudo-phase is measured, with the ratio of these concentrations equal to the LDPE-micelle partition coefficient (KPE-mic). By employing sufficient mass of polymer and surfactant (Brij 30), the mass of chemical in the water phase remains negligible, albeit in equilibrium. In parallel, the micelle-water partition coefficient (Kmic-w) is determined experimentally. KPE-w is the product of KPE-mic and Kmic-w. The method was applied to measure values of KPE-w for 17 polycyclic aromatic hydrocarbons, 37 polychlorinated biphenyls, and 9 polybrominated diphenylethers. These values were compared to literature values. Mass fraction-based chemical activity coefficients (γ) were determined in each phase and showed that for each chemical, the micelles and LDPE had nearly identical affinity. Copyright © 2014 Elsevier Ltd. All rights reserved.
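The final computation described, obtaining KPE-w as the product of the two measured coefficients, is simple log-space arithmetic. A minimal sketch with invented placeholder values (the paper reports measured coefficients for 63 compounds; these numbers are not from it):

```python
# Hypothetical measured log10 partition coefficients for one PAH (assumed values).
log_K_PE_mic = 2.1   # LDPE-micelle partition coefficient
log_K_mic_w  = 4.3   # micelle-water partition coefficient

# K_PE-w is the product of K_PE-mic and K_mic-w, i.e. a sum in log space.
log_K_PE_w = log_K_PE_mic + log_K_mic_w
print(f"log K_PE-w = {log_K_PE_w:.2f}  (K_PE-w = {10**log_K_PE_w:.3g})")
```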
Advanced Computational Aeroacoustics Methods for Fan Noise Prediction
NASA Technical Reports Server (NTRS)
Envia, Edmane (Technical Monitor); Tam, Christopher
2003-01-01
Direct computation of fan noise is presently not possible. One of the major difficulties is the geometrical complexity of the problem. In the case of fan noise, the blade geometry is critical to the loading on the blade and hence the intensity of the radiated noise. The precise geometry must be incorporated into the computation. In computational fluid dynamics (CFD), there are two general ways to handle problems with complex geometry. One way is to use unstructured grids. The other is to use body-fitted overset grids. In the overset grid method, accurate data transfer is of utmost importance. For acoustic computation, it is not clear that the currently used data transfer methods are sufficiently accurate so as not to contaminate the very small amplitude acoustic disturbances. In CFD, low order schemes are, invariably, used in conjunction with unstructured grids. However, low order schemes are known to be numerically dispersive and dissipative, and dispersive and dissipative errors are extremely undesirable for acoustic wave problems. The objective of this project is to develop a high order unstructured grid Dispersion-Relation-Preserving (DRP) scheme that would minimize numerical dispersion and dissipation errors. This report contains the results of the funded portion of the project. A high order DRP scheme on an unstructured grid has been developed; it is constructed in the wave number space. The characteristics of the scheme can be improved by the inclusion of additional constraints. Stability of the scheme has been investigated, and it can be improved by adopting the upwinding strategy.
Bird, Luke; Tullis, Iain D. C.; Newman, Robert G.; Corroyer-Dulmont, Aurelien; Falzone, Nadia; Azad, Abul; Vallis, Katherine A.; Sansom, Owen J.; Muschel, Ruth J.; Vojnovic, Borivoj; Hill, Mark A.; Fokas, Emmanouil; Smart, Sean C.
2017-01-01
Introduction: Preclinical CT-guided radiotherapy platforms are increasingly used, but the CT images are characterized by poor soft tissue contrast. The aim of this study was to develop a robust and accurate method of MRI-guided radiotherapy (MR-IGRT) delivery to abdominal targets in the mouse. Methods: A multimodality cradle was developed for providing subject immobilisation and its performance was evaluated. Whilst CT was still used for dose calculations, target identification was based on MRI. Each step of the radiotherapy planning procedure was validated initially in vitro using BANG gel dosimeters. Subsequently, MR-IGRT of normal adrenal glands with a size-matched collimated beam was performed. Additionally, the SK-N-SH neuroblastoma xenograft model and the transgenic KPC model of pancreatic ductal adenocarcinoma were used to demonstrate the applicability of our methods for the accurate delivery of radiation to CT-invisible abdominal tumours. Results: The BANG gel phantoms demonstrated a targeting efficiency error of 0.56 ± 0.18 mm. The in vivo stability tests of body motion during MR-IGRT and the associated cradle transfer showed that the residual body movements are within this MR-IGRT targeting error. Accurate MR-IGRT of the normal adrenal glands with a size-matched collimated beam was confirmed by γH2AX staining. Regression in tumour volume was observed almost immediately post MR-IGRT in the neuroblastoma model, further demonstrating the accuracy of x-ray delivery. Finally, MR-IGRT in the KPC model facilitated precise contouring and comparison of different treatment plans and radiotherapy dose distributions, not only to the intra-abdominal tumour but also to the organs at risk. Conclusion: This is, to our knowledge, the first study to demonstrate preclinical MR-IGRT in intra-abdominal organs. The proposed MR-IGRT method presents a state-of-the-art solution to enabling robust, accurate and efficient targeting of extracranial organs in the mouse and can operate with a sufficiently high throughput to allow fractionated treatments to be given. PMID:28453537
GAS CHROMATOGRAPHIC TECHNIQUES FOR THE MEASUREMENT OF ISOPRENE IN AIR
The chapter discusses gas chromatographic techniques for measuring isoprene in air. Such measurement basically consists of three parts: (1) collection of sufficient sample volume for representative and accurate quantitation, (2) separation (if necessary) of isoprene from interfer...
Validated method for quantification of genetically modified organisms in samples of maize flour.
Kunert, Renate; Gach, Johannes S; Vorauer-Uhl, Karola; Engel, Edwin; Katinger, Hermann
2006-02-08
Sensitive and accurate testing for trace amounts of biotechnology-derived DNA from plant material is the prerequisite for detection of 1% or 0.5% genetically modified ingredients in food products or raw materials thereof. Compared to ELISA detection of expressed proteins, real-time PCR (RT-PCR) amplification has easier sample preparation and lower detection limits. Of the different methods of DNA preparation, the CTAB method was chosen for its high flexibility in starting material and its generation of sufficient DNA of relevant quality. Previous RT-PCR data generated with the SYBR green detection method showed that the method is highly sensitive to sample matrices and genomic DNA content, which influences the interpretation of results. Therefore, this paper describes real-time DNA quantification based on the TaqMan probe method, showing high accuracy and sensitivity with detection limits lower than 18 copies per sample, applicable and comparable to highly purified plasmid standards as well as complex matrices of genomic DNA samples. The results were evaluated with ValiData for homogeneity of variance, linearity, accuracy of the standard curve, and standard deviation.
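Quantification in such TaqMan assays conventionally proceeds through a standard curve of threshold cycle (Ct) against log10 copy number. A sketch of that calibration and inversion step, with made-up Ct values that are not data from the paper:

```python
import numpy as np

# Hypothetical standard curve: serial dilutions of a plasmid standard.
log10_copies = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
ct = np.array([33.1, 29.8, 26.4, 23.0, 19.7, 16.3])   # invented Ct values

slope, intercept = np.polyfit(log10_copies, ct, 1)
efficiency = 10.0 ** (-1.0 / slope) - 1.0              # ~1.0 means 100% efficiency
print(f"slope = {slope:.2f}, PCR efficiency = {efficiency:.1%}")

# Invert the curve to quantify an unknown sample from its measured Ct.
ct_unknown = 27.5
copies = 10.0 ** ((ct_unknown - intercept) / slope)
print(f"estimated copies in unknown: {copies:.0f}")
```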
Turnlund, Judith R; Keyes, William R
2002-09-01
Stable isotopes are used with increasing frequency to trace the metabolic fate of minerals in human nutrition studies. The precision of the analytical methods used must be sufficient to permit reliable measurement of low enrichments and the accuracy should permit comparisons between studies. Two methods most frequently used today are thermal ionization mass spectrometry (TIMS) and inductively coupled plasma mass spectrometry (ICP-MS). This study was conducted to compare the two methods. Multiple natural samples of copper, zinc, molybdenum, and magnesium were analyzed by both methods to compare their internal and external precision. Samples with a range of isotopic enrichments that were collected from human studies or prepared from standards were analyzed to compare their accuracy. TIMS was more precise and accurate than ICP-MS. However, the cost, ease, and speed of analysis were better for ICP-MS. Therefore, for most purposes, ICP-MS is the method of choice, but when the highest degrees of precision and accuracy are required and when enrichments are very low, TIMS is the method of choice.
NASA Astrophysics Data System (ADS)
Raff, L. M.; Malshe, M.; Hagan, M.; Doughan, D. I.; Rockley, M. G.; Komanduri, R.
2005-02-01
A neural network/trajectory approach is presented for the development of accurate potential-energy hypersurfaces that can be utilized to conduct ab initio molecular dynamics (AIMD) and Monte Carlo studies of gas-phase chemical reactions, nanometric cutting, and nanotribology, and of a variety of mechanical properties of importance in potential microelectromechanical systems applications. The method is sufficiently robust that it can be applied to a wide range of polyatomic systems. The overall method integrates ab initio electronic structure calculations with importance sampling techniques that permit the critical regions of configuration space to be determined. The computed ab initio energies and gradients are then accurately interpolated using neural networks (NN) rather than arbitrary parametrized analytical functional forms, moving interpolation or least-squares methods. The sampling method involves a tight integration of molecular dynamics calculations with neural networks that employ early stopping and regularization procedures to improve network performance and test for convergence. The procedure can be initiated using an empirical potential surface or direct dynamics. The accuracy and interpolation power of the method have been tested for two cases, the global potential surface for vinyl bromide undergoing unimolecular decomposition via four different reaction channels and nanometric cutting of silicon. The results show that the sampling methods permit the important regions of configuration space to be easily and rapidly identified, that convergence of the NN fit to the ab initio electronic structure database can be easily monitored, and that the interpolation accuracy of the NN fits is excellent, even for systems involving five atoms or more. The method permits a substantial computational speed and accuracy advantage over existing methods, is robust, and is relatively easy to implement.
Effective Coulomb force modeling for spacecraft in Earth orbit plasmas
NASA Astrophysics Data System (ADS)
Seubert, Carl R.; Stiles, Laura A.; Schaub, Hanspeter
2014-07-01
Coulomb formation flight is a concept that utilizes electrostatic forces to control the separations of close proximity spacecraft. The Coulomb force between charged bodies is a product of their size, separation, potential and interaction with the local plasma environment. A fast and accurate analytic method of capturing the interaction of a charged body in a plasma is shown. The Debye-Hückel analytic model of the electrostatic field about a charged sphere in a plasma is expanded to analytically compute the forces. This model is fitted to numerical simulations with representative geosynchronous and low Earth orbit (GEO and LEO) plasma environments using an effective Debye length. This effective Debye length, which more accurately captures the charge partial shielding, can be up to 7 times larger at GEO, and as great as 100 times larger at LEO. The force between a sphere and point charge is accurately captured with the effective Debye length, as opposed to the electron Debye length solutions that have errors exceeding 50%. One notable finding is that the effective Debye lengths in LEO plasmas about a charged body are increased from centimeters to meters. This is a promising outcome, as the reduced shielding at increased potentials provides sufficient force levels for operating the electrostatically inflated membrane structures concept at these dense plasma altitudes.
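A sketch of the kind of screened-force evaluation described, assuming the standard Debye-Hückel (Yukawa) interaction between point charges, with the Debye length left as a free parameter so an effective value can be substituted for the electron Debye length (all numbers illustrative, not from the paper):

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def screened_coulomb_force(q1, q2, r, debye_len):
    """Force magnitude between two point charges in a Debye-Hueckel (Yukawa)
    potential; debye_len is the (effective) Debye length in metres."""
    k = 1.0 / (4.0 * math.pi * EPS0)
    return k * q1 * q2 / r**2 * (1.0 + r / debye_len) * math.exp(-r / debye_len)

q = 1.0e-6   # 1 microcoulomb on each body (assumed value)
r = 10.0     # 10 m separation (assumed value)
for lam in (4.0, 28.0):   # illustrative "electron" vs. "effective" Debye lengths
    F = screened_coulomb_force(q, q, r, lam)
    print(f"Debye length {lam:5.1f} m -> force {F:.3e} N")
```

Using the larger effective Debye length weakens the exponential screening factor, which is why the force levels at a given separation come out substantially higher than an electron-Debye-length estimate would suggest.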
Gu, Changzhan; Li, Ruijiang; Zhang, Hualiang; Fung, Albert Y C; Torres, Carlos; Jiang, Steve B; Li, Changzhi
2012-11-01
Accurate respiration measurement is crucial in motion-adaptive cancer radiotherapy. Conventional methods for respiration measurement are undesirable because they are either invasive to the patient or do not have sufficient accuracy. In addition, measurement of external respiration signal based on conventional approaches requires close patient contact to the physical device which often causes patient discomfort and undesirable motion during radiation dose delivery. In this paper, a dc-coupled continuous-wave radar sensor was presented to provide a noncontact and noninvasive approach for respiration measurement. The radar sensor was designed with dc-coupled adaptive tuning architectures that include RF coarse-tuning and baseband fine-tuning, which allows the radar sensor to precisely measure movement with stationary moment and always work with the maximum dynamic range. The accuracy of respiration measurement with the proposed radar sensor was experimentally evaluated using a physical phantom, human subject, and moving plate in a radiotherapy environment. It was shown that respiration measurement with radar sensor while the radiation beam is on is feasible and the measurement has a submillimeter accuracy when compared with a commercial respiration monitoring system which requires patient contact. The proposed radar sensor provides accurate, noninvasive, and noncontact respiration measurement and therefore has a great potential in motion-adaptive radiotherapy.
Schroen, Anneke T; Petroni, Gina R; Wang, Hongkun; Gray, Robert; Wang, Xiaofei F; Cronin, Walter; Sargent, Daniel J; Benedetti, Jacqueline; Wickerham, Donald L; Djulbegovic, Benjamin; Slingluff, Craig L
2010-08-01
A major challenge for randomized phase III oncology trials is the frequent low rates of patient enrollment, resulting in high rates of premature closure due to insufficient accrual. We conducted a pilot study to determine the extent of trial closure due to poor accrual, feasibility of identifying trial factors associated with sufficient accrual, impact of redesign strategies on trial accrual, and accrual benchmarks designating high failure risk in the clinical trials cooperative group (CTCG) setting. A subset of phase III trials opened by five CTCGs between August 1991 and March 2004 was evaluated. Design elements, experimental agents, redesign strategies, and pretrial accrual assessment supporting accrual predictions were abstracted from CTCG documents. Percent actual/predicted accrual rate averaged per month was calculated. Trials were categorized as having sufficient or insufficient accrual based on reason for trial termination. Analyses included univariate and bivariate summaries to identify potential trial factors associated with accrual sufficiency. Among 40 trials from one CTCG, 21 (52.5%) trials closed due to insufficient accrual. In 82 trials from five CTCGs, therapeutic trials accrued sufficiently more often than nontherapeutic trials (59% vs 27%, p = 0.05). Trials including pretrial accrual assessment more often achieved sufficient accrual than those without (67% vs 47%, p = 0.08). Fewer exclusion criteria, shorter consent forms, other CTCG participation, and trial design simplicity were not associated with achieving sufficient accrual. Trials accruing at a rate much lower than predicted (<35% actual/predicted accrual rate) were consistently closed due to insufficient accrual. This trial subset under-represents certain experimental modalities. Data sources do not allow accounting for all factors potentially related to accrual success. Trial closure due to insufficient accrual is common. Certain trial design factors appear associated with attaining sufficient accrual. Defining accrual benchmarks for early trial termination or redesign is feasible, but better accrual prediction methods are critically needed. Future studies should focus on identifying trial factors that allow more accurate accrual predictions and strategies that can salvage open trials experiencing slow accrual.
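The accrual benchmark reported, percent actual/predicted accrual rate averaged per month, can be monitored prospectively with trivial arithmetic. A sketch with invented enrollment figures (the 35% threshold is the benchmark from the abstract):

```python
# Hypothetical monthly enrollment versus the rate predicted at trial design.
predicted_per_month = 12.0
actual_by_month = [3, 5, 4, 6, 2, 4]          # invented accrual counts

pct_actual_over_predicted = (
    100.0 * sum(actual_by_month) / (predicted_per_month * len(actual_by_month))
)
print(f"actual/predicted accrual: {pct_actual_over_predicted:.0f}%")

# Per the study, trials running far below prediction (<35% actual/predicted)
# were consistently closed due to insufficient accrual.
if pct_actual_over_predicted < 35.0:
    print("flag: high risk of closure; consider redesign or termination")
```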
Correction for human head motion in helical x-ray CT
NASA Astrophysics Data System (ADS)
Kim, J.-H.; Sun, T.; Alcheikh, A. R.; Kuncic, Z.; Nuyts, J.; Fulton, R.
2016-02-01
Correction for rigid object motion in helical CT can be achieved by reconstructing from a modified source-detector orbit, determined by the object motion during the scan. This ensures that all projections are consistent, but it does not guarantee that the projections are complete in the sense of being sufficient for exact reconstruction. We have previously shown with phantom measurements that motion-corrected helical CT scans can suffer from data-insufficiency, in particular for severe motions and at high pitch. To study whether such data-insufficiency artefacts could also affect the motion-corrected CT images of patients undergoing head CT scans, we used an optical motion tracking system to record the head movements of 10 healthy volunteers while they executed each of the 4 different types of motion (‘no’, slight, moderate and severe) for 60 s. From these data we simulated 354 motion-affected CT scans of a voxelized human head phantom and reconstructed them with and without motion correction. For each simulation, motion-corrected (MC) images were compared with the motion-free reference, by visual inspection and with quantitative similarity metrics. Motion correction improved similarity metrics in all simulations. Of the 270 simulations performed with moderate or less motion, only 2 resulted in visible residual artefacts in the MC images. The maximum range of motion in these simulations would encompass that encountered in the vast majority of clinical scans. With severe motion, residual artefacts were observed in about 60% of the simulations. We also evaluated a new method of mapping local data sufficiency based on the degree to which Tuy’s condition is locally satisfied, and observed that areas with high Tuy values corresponded to the locations of residual artefacts in the MC images. We conclude that our method can provide accurate and artefact-free MC images with most types of head motion likely to be encountered in CT imaging, provided that the motion can be accurately determined.
Report: Plans to Migrate Data to the New EPA Acquisition System Need Improvement
Report #10-P-0071, February 24, 2010. EPA’s plans for migrating data from ICMS to EAS lack sufficient incorporation of data integrity and quality checks to ensure the complete and accurate transfer of procurement data.
NASA Astrophysics Data System (ADS)
Weersink, Robert A.; Chaudhary, Sahil; Mayo, Kenwrick; He, Jie; Wilson, Brian C.
2017-04-01
We develop and demonstrate a simple shape-based approach for diffuse optical tomographic reconstruction of coagulative lesions generated during interstitial photothermal therapy (PTT) of the prostate. The shape-based reconstruction assumes a simple ellipsoid shape, matching the general dimensions of a cylindrical diffusing fiber used for light delivery in current clinical studies of PTT in focal prostate cancer. The specific requirement is to accurately define the border between the photothermal lesion and native tissue as the photothermal lesion grows, with an accuracy of ≤1 mm, so treatment can be terminated before there is damage to the rectal wall. To demonstrate the feasibility of the shape-based diffuse optical tomography reconstruction, simulated data were generated based on forward calculations in known geometries that include the prostate, rectum, and lesions of varying dimensions. The only source of optical contrast between the lesion and prostate was increased scattering in the lesion, as is typically observed with coagulation. With noise added to these forward calculations, lesion dimensions were reconstructed using the shape-based method. This approach for reconstruction is shown to be feasible and sufficiently accurate for lesions that are within 4 mm from the rectal wall. The method was also robust for irregularly shaped lesions.
Performance of a Heating Block System Designed for Studying the Heat Resistance of Bacteria in Foods
NASA Astrophysics Data System (ADS)
Kou, Xiao-Xi; Li, Rui; Hou, Li-Xia; Huang, Zhi; Ling, Bo; Wang, Shao-Jin
2016-07-01
Knowledge of bacteria’s heat resistance is essential for developing effective thermal treatments. Choosing an appropriate test method is important to accurately determine bacteria’s heat resistances. Although it is a major factor influencing the thermo-tolerance of bacteria, the heating rate within samples cannot be controlled in water or oil bath methods because it depends mainly on the sample’s thermal properties. A heating block system (HBS) was designed to regulate the heating rates in liquid, semi-solid and solid foods using a temperature controller. Distilled water, apple juice, mashed potato, almond powder and beef were selected to evaluate the HBS’s performance by experiment and computer simulation. The results showed that heating rates of 1, 5 and 10 °C/min with final set-point temperatures and holding times could be easily and precisely achieved in the five selected food materials. A good agreement in sample central temperature profiles was obtained under various heating rates between experiment and simulation. The experimental and simulated results showed that the HBS could provide a sufficiently uniform heating environment in food samples. The effect of heating rate on bacterial thermal resistance was evaluated with the HBS. The system may hold potential applications for rapid and accurate assessments of bacteria’s thermo-tolerances.
The Linear Interaction Energy Method for the Prediction of Protein Stability Changes Upon Mutation
Wickstrom, Lauren; Gallicchio, Emilio; Levy, Ronald M.
2011-01-01
The coupling of protein energetics and sequence changes is a critical aspect of computational protein design, as well as of the understanding of protein evolution, human disease, and drug resistance. In order to study the molecular basis for this coupling, computational tools must be sufficiently accurate and computationally inexpensive enough to handle large amounts of sequence data. We have developed a computational approach based on the linear interaction energy (LIE) approximation to predict the changes in the free energy of the native state induced by a single mutation. This approach was applied to a set of 822 mutations in 10 proteins, which resulted in an average unsigned error of 0.82 kcal/mol and a correlation coefficient of 0.72 between the calculated and experimental ΔΔG values. The method is able to accurately identify destabilizing hot spot mutations; however, it has difficulty in distinguishing between stabilizing and destabilizing mutations due to the distribution of stability changes for the set of mutations used to parameterize the model. In addition, the model also performs quite well in initial tests on a small set of double mutations. Based on these promising results, we can begin to examine the relationship between protein stability and fitness, correlated mutations, and drug resistance. PMID:22038697
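The LIE-style model described amounts to a linear fit of ΔΔG against a few interaction-energy descriptors. A minimal sketch with invented descriptors and values (the paper's actual energy terms and parameterization differ):

```python
import numpy as np

# Invented training data: rows are mutations, columns are changes in
# van der Waals energy, electrostatic energy, and a surface-area term.
X = np.array([[1.2, 0.4, 0.8],
              [0.3, 1.1, 0.2],
              [2.0, 0.1, 1.5],
              [0.6, 0.9, 0.4],
              [1.5, 0.5, 1.0]])
ddg_exp = np.array([1.8, 1.0, 3.1, 1.2, 2.3])   # invented kcal/mol values

# Least-squares fit of ddG ~ alpha*dE_vdw + beta*dE_elec + gamma*dSASA.
coef, *_ = np.linalg.lstsq(X, ddg_exp, rcond=None)
pred = X @ coef
r = np.corrcoef(pred, ddg_exp)[0, 1]
print("alpha, beta, gamma =", np.round(coef, 3), "  r =", round(r, 3))
```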
Solving large scale structure in ten easy steps with COLA
NASA Astrophysics Data System (ADS)
Tassev, Svetlin; Zaldarriaga, Matias; Eisenstein, Daniel J.
2013-06-01
We present the COmoving Lagrangian Acceleration (COLA) method: an N-body method for solving for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). Unlike standard N-body methods, the COLA method can straightforwardly trade accuracy at small scales in order to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing, as those catalogs are essential for performing detailed error analysis for ongoing and future surveys of LSS. As an illustration, we ran a COLA-based N-body code on a box of size 100 Mpc/h with particles of mass ≈ 5 × 10^9 Msolar/h. Running the code with only 10 timesteps was sufficient to obtain an accurate description of halo statistics down to halo masses of at least 10^11 Msolar/h. This is only at a modest speed penalty when compared to mocks obtained with LPT. A standard detailed N-body run is orders of magnitude slower than our COLA-based code. The speed-up we obtain with COLA is due to the fact that we calculate the large-scale dynamics exactly using LPT, while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos. Achieving a similar level of accuracy in halo statistics without the COLA method requires at least 3 times more timesteps than when COLA is employed.
NASA Technical Reports Server (NTRS)
Park, Sang C.; Carnahan, Timothy M.; Cohen, Lester M.; Congedo, Cherie B.; Eisenhower, Michael J.; Ousley, Wes; Weaver, Andrew; Yang, Kan
2017-01-01
The JWST Optical Telescope Element (OTE) assembly is the largest optically stable infrared-optimized telescope currently being manufactured and assembled, and is scheduled for launch in 2018. The JWST OTE, including the 18-segment primary mirror, secondary mirror, and the Aft Optics Subsystem (AOS), is designed to be passively cooled and to operate near 45 K. These optical elements are supported by a complex composite backplane structure. As a part of the structural distortion model validation efforts, a series of tests is planned during the cryogenic vacuum test of the fully integrated flight hardware at NASA JSC Chamber A. Successful completion of the thermal-distortion test phases depends heavily on accurate temperature knowledge of the OTE structural members. However, the current temperature sensor allocations during the cryo-vac test may not have sufficient fidelity to provide accurate knowledge of the temperature distributions within the composite structure. A method based on an inverse distance relationship among the sensors and thermal model nodes was developed to improve the thermal data provided for the nanometer-scale WaveFront Error (WFE) predictions. The Linear Distance Weighted Interpolation (LDWI) method was developed to augment the thermal model predictions based on the sparse sensor information. This paper encompasses the development of the LDWI method using test data from the earlier pathfinder cryo-vac tests, and the results of the notional and as-tested WFE predictions from the structural finite element model cases to characterize the accuracy of the LDWI method.
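While the paper's exact LDWI weighting is not reproduced here, the underlying inverse-distance idea is standard: each model node receives a weighted average of sensor readings, with weights decreasing with distance. A generic sketch in which the function name, weighting exponent, and all values are illustrative assumptions:

```python
import numpy as np

def idw_interpolate(sensor_xyz, sensor_temps, node_xyz, power=1.0, eps=1e-9):
    """Inverse-distance-weighted temperature estimate at each model node.

    power=1 gives linear distance weighting; the paper's exact weighting
    scheme may differ (this is a generic sketch).
    """
    # Pairwise distances: nodes (m,3) against sensors (n,3) -> (m,n).
    d = np.linalg.norm(node_xyz[:, None, :] - sensor_xyz[None, :, :], axis=2)
    w = 1.0 / (d**power + eps)                 # eps guards a node placed on a sensor
    w /= w.sum(axis=1, keepdims=True)          # normalize weights per node
    return w @ sensor_temps

sensors = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
temps = np.array([42.8, 44.1, 43.3])           # invented sensor readings, K
nodes = np.array([[0.5, 0.5, 0.0], [0.1, 0.1, 0.0]])
print(idw_interpolate(sensors, temps, nodes))
```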
Electricity Markets, Smart Grids and Smart Buildings
NASA Astrophysics Data System (ADS)
Falcey, Jonathan M.
A smart grid is an electricity network that accommodates two-way power flows, and utilizes two-way communications and increased measurement, in order to provide more information to customers and aid in the development of a more efficient electricity market. The current electrical network is outdated and has many shortcomings relating to power flows, inefficient electricity markets, generation/supply balance, a lack of information for the consumer and insufficient consumer interaction with electricity markets. Many of these challenges can be addressed with a smart grid, but there remain significant barriers to its implementation. This paper proposes a novel method for the development of a smart grid utilizing a bottom-up approach (starting with smart buildings/campuses) with the goal of providing the framework and infrastructure necessary for a smart grid, instead of the more traditional approach (installing many smart meters and hoping a smart grid emerges). This novel approach involves combining deterministic and statistical methods in order to accurately estimate building electricity use down to the device level. It provides model users with a cheaper alternative to energy audits and extensive sensor networks (the current methods of quantifying electrical use at this level), which increases their ability to modify energy consumption and respond to price signals. The results of this method are promising, but they are still preliminary. As a result, there is still room for improvement. On days when there were no missing or inaccurate data, this approach has an R² of about 0.84, sometimes as high as 0.94, when compared to measured results. However, there were many days where missing data brought overall accuracy down significantly. In addition, the development and implementation of the calibration process is still underway and some functional additions must be made in order to maximize accuracy. The calibration process must be completed before a reliable accuracy can be determined. While this work shows that a combination of deterministic and statistical methods can accurately forecast building energy usage, the ability to produce accurate results is heavily dependent upon software availability, accurate data and the proper calibration of the model. Creating the software required for a smart building model is time consuming and expensive. Bad or missing data have significant negative impacts on the accuracy of the results and can be caused by a hodgepodge of equipment and communication protocols. Proper calibration of the model is essential to ensure that the device-level estimations are sufficiently accurate. Any building model which is to be successful at creating a smart building must be able to overcome these challenges.
NASA Astrophysics Data System (ADS)
Augustin, Christoph M.; Neic, Aurel; Liebmann, Manfred; Prassl, Anton J.; Niederer, Steven A.; Haase, Gundolf; Plank, Gernot
2016-01-01
Electromechanical (EM) models of the heart have been used successfully to study fundamental mechanisms underlying a heart beat in health and disease. However, in all modeling studies reported so far numerous simplifications were made in terms of representing biophysical details of cellular function and its heterogeneity, gross anatomy and tissue microstructure, as well as the bidirectional coupling between electrophysiology (EP) and tissue distension. One limiting factor is the employed spatial discretization methods which are not sufficiently flexible to accommodate complex geometries or resolve heterogeneities, but, even more importantly, the limited efficiency of the prevailing solver techniques which is not sufficiently scalable to deal with the incurring increase in degrees of freedom (DOF) when modeling cardiac electromechanics at high spatio-temporal resolution. This study reports on the development of a novel methodology for solving the nonlinear equation of finite elasticity using human whole organ models of cardiac electromechanics, discretized at a high para-cellular resolution. Three patient-specific, anatomically accurate, whole heart EM models were reconstructed from magnetic resonance (MR) scans at resolutions of 220 μm, 440 μm and 880 μm, yielding meshes of approximately 184.6, 24.4 and 3.7 million tetrahedral elements and 95.9, 13.2 and 2.1 million displacement DOF, respectively. The same mesh was used for discretizing the governing equations of both electrophysiology (EP) and nonlinear elasticity. A novel algebraic multigrid (AMG) preconditioner for an iterative Krylov solver was developed to deal with the resulting computational load. The AMG preconditioner was designed under the primary objective of achieving favorable strong scaling characteristics for both setup and solution runtimes, as this is key for exploiting current high performance computing hardware. Benchmark results using the 220 μm, 440 μm and 880 μm meshes demonstrate efficient scaling up to 1024, 4096 and 8192 compute cores which allowed the simulation of a single heart beat in 44.3, 87.8 and 235.3 minutes, respectively. The efficiency of the method allows fast simulation cycles without compromising anatomical or biophysical detail.
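The solver pattern described, an algebraic multigrid preconditioner inside a Krylov iteration, can be reproduced in miniature with off-the-shelf components. A sketch using PyAMG and SciPy on a small Poisson model problem (assumes pyamg is installed; the paper's custom AMG for nonlinear elasticity at ~10^8 DOF is far beyond this toy):

```python
import numpy as np
import scipy.sparse.linalg as spla
import pyamg

# Model problem standing in for the (much larger) elasticity system:
# a 2D Poisson matrix with 10^4 unknowns.
A = pyamg.gallery.poisson((100, 100), format='csr')
b = np.random.default_rng(0).standard_normal(A.shape[0])

# Smoothed-aggregation AMG used purely as a preconditioner for CG.
ml = pyamg.smoothed_aggregation_solver(A)
M = ml.aspreconditioner(cycle='V')

residuals = []
x, info = spla.cg(A, b, M=M, atol=1e-8,
                  callback=lambda xk: residuals.append(np.linalg.norm(b - A @ xk)))
print("converged" if info == 0 else "failed", "after", len(residuals), "iterations")
```

The design point mirrored here is that AMG supplies mesh-independent convergence (iteration counts that barely grow with problem size) while the Krylov outer loop supplies robustness, which is what makes the approach scale to high-resolution discretizations.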
Augustin, Christoph M.; Neic, Aurel; Liebmann, Manfred; Prassl, Anton J.; Niederer, Steven A.; Haase, Gundolf; Plank, Gernot
2016-01-01
Electromechanical (EM) models of the heart have been used successfully to study fundamental mechanisms underlying a heart beat in health and disease. However, in all modeling studies reported so far numerous simplifications were made in terms of representing biophysical details of cellular function and its heterogeneity, gross anatomy and tissue microstructure, as well as the bidirectional coupling between electrophysiology (EP) and tissue distension. One limiting factor is the employed spatial discretization methods which are not sufficiently flexible to accommodate complex geometries or resolve heterogeneities, but, even more importantly, the limited efficiency of the prevailing solver techniques which are not sufficiently scalable to deal with the incurring increase in degrees of freedom (DOF) when modeling cardiac electromechanics at high spatio-temporal resolution. This study reports on the development of a novel methodology for solving the nonlinear equation of finite elasticity using human whole organ models of cardiac electromechanics, discretized at a high para-cellular resolution. Three patient-specific, anatomically accurate, whole heart EM models were reconstructed from magnetic resonance (MR) scans at resolutions of 220 μm, 440 μm and 880 μm, yielding meshes of approximately 184.6, 24.4 and 3.7 million tetrahedral elements and 95.9, 13.2 and 2.1 million displacement DOF, respectively. The same mesh was used for discretizing the governing equations of both electrophysiology (EP) and nonlinear elasticity. A novel algebraic multigrid (AMG) preconditioner for an iterative Krylov solver was developed to deal with the resulting computational load. The AMG preconditioner was designed under the primary objective of achieving favorable strong scaling characteristics for both setup and solution runtimes, as this is key for exploiting current high performance computing hardware. Benchmark results using the 220 μm, 440 μm and 880 μm meshes demonstrate efficient scaling up to 1024, 4096 and 8192 compute cores which allowed the simulation of a single heart beat in 44.3, 87.8 and 235.3 minutes, respectively. The efficiency of the method allows fast simulation cycles without compromising anatomical or biophysical detail. PMID:26819483
Li, Bin; Sang, Jizhang; Zhang, Zhongping
2016-01-01
A critical requirement to achieve high efficiency of debris laser tracking is to have sufficiently accurate orbit predictions (OP) in both the pointing direction (better than 20 arc seconds) and distance from the tracking station to the debris objects, with the former more important than the latter because of the narrow laser beam. When the two line element (TLE) is used to provide the orbit predictions, the resultant pointing errors are usually on the order of tens to hundreds of arc seconds. In practice, therefore, angular observations of debris objects are first collected using an optical tracking sensor, and then used to guide the laser beam pointing to the objects. The manual guidance may cause interrupts to the laser tracking, and consequently loss of valuable laser tracking data. This paper presents a real-time orbit determination (OD) and prediction method to realize smooth and efficient debris laser tracking. The method uses TLE-computed positions and angles over a short-arc of less than 2 min as observations in an OD process where simplified force models are considered. After the OD convergence, the OP is performed from the last observation epoch to the end of the tracking pass. Simulation and real tracking data processing results show that the pointing prediction errors are usually less than 10″, and the distance errors less than 100 m, therefore, the prediction accuracy is sufficient for the blind laser tracking. PMID:27347958
NASA Astrophysics Data System (ADS)
Dong, Fang
1999-09-01
The research described in this dissertation is related to characterization of tissue microstructure using a system-independent spatial autocorrelation function (SAF). The function was determined using a reference phantom method, which employed a well-defined "point-scatterer" reference phantom to account for instrumental factors. The SAFs were estimated for several tissue-mimicking (TM) phantoms and fresh dog livers. Both phantom tests and in vitro dog liver measurements showed that the reference phantom method is relatively simple and fairly accurate, provided the bandwidth of the measurement system is sufficient for the size of the scatterers involved in the scattering process. Implementation of this method in a clinical scanner requires that distortions from the patient's body wall be properly accounted for. The SAFs were estimated for two phantoms with body-wall-like distortions. The experimental results demonstrated that body wall distortions have little effect if echo data are acquired from a large scattering volume. One interesting application of the SAF is to form a "scatterer size image". The scatterer size image may help provide diagnostic tools for diseases in which the tissue microstructure differs from normal. Another method, the BSC method, utilizes information contained in the frequency dependence of the backscatter coefficient to estimate the scatterer size. The SAF technique produced accurate scatterer size images of homogeneous TM phantoms, and the BSC method was capable of generating accurate size images for heterogeneous phantoms. In the scatterer size image of dog kidneys, the contrast-to-noise ratio (CNR) between renal cortex and medulla was improved dramatically compared to the gray-scale image. The effect of nonlinear propagation was investigated by using a custom-designed phantom with an overlying TM fat layer. The results showed that the correlation length decreased when the transmitting power increased. The measurement results support the assumption that nonlinear propagation generates harmonic energies and causes underestimation of scatterer diameters. Nonlinear propagation can be further enhanced by materials with a high B/A value, a parameter which characterizes the degree of nonlinearity. Nine versions of TM fat and non-fat materials were measured for their B/A values using a new measurement technique, the "simplified finite amplitude insertion substitution" (SFAIS) method.
A Novel Approach to Rotorcraft Damage Tolerance
NASA Technical Reports Server (NTRS)
Forth, Scott C.; Everett, Richard A.; Newman, John A.
2002-01-01
Damage-tolerance methodology is positioned to replace safe-life methodologies for designing rotorcraft structures. The argument for implementing a damage-tolerance method comes from the fundamental fact that rotorcraft structures typically fail by fatigue cracking. Therefore, if technology permits prediction of fatigue-crack growth in structures, a damage-tolerance method should deliver the most accurate prediction of component life. Implementing damage tolerance (DT) in high-cycle-fatigue (HCF) components will require a shift from traditional DT methods that rely on detecting an initial flaw with nondestructive inspection (NDI) methods. The rapid accumulation of cycles in an HCF component means that a design based on a traditional DT method will be either impractical because of frequent inspections or too heavy to operate efficiently. Furthermore, once an HCF component develops a detectable propagating crack, the remaining fatigue life is short, sometimes less than one flight hour, which does not leave sufficient time for inspection. Therefore, designing an HCF component will require basing the life analysis on an initial flaw that is undetectable with current NDI technology.
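The life prediction underlying a DT analysis is typically a Paris-type crack-growth law, da/dN = C(ΔK)^m, integrated from the assumed initial flaw to a critical crack size. A sketch with illustrative constants (not rotorcraft material data):

```python
import math

# Illustrative Paris-law constants (dK in MPa*sqrt(m), da/dN in m/cycle);
# these are placeholders, not material data from the paper.
C, m = 1.0e-11, 3.0
delta_sigma = 100.0      # stress range, MPa (assumed)
Y = 1.12                 # geometry factor, assumed constant for simplicity

a, a_crit = 0.2e-3, 5.0e-3       # initial flaw 0.2 mm, critical size 5 mm
n_steps = 20000
a_step = (a_crit - a) / n_steps

cycles = 0.0
while a < a_crit:
    dK = Y * delta_sigma * math.sqrt(math.pi * a)   # stress-intensity range
    dadN = C * dK**m                                # Paris law, m/cycle
    cycles += a_step / dadN                         # cycles to grow by a_step
    a += a_step
print(f"predicted crack-growth life: {cycles:.3g} cycles")
```

The integration also makes the HCF dilemma in the abstract concrete: most of the predicted life is consumed while the crack is still small, so by the time a flaw is NDI-detectable, few cycles remain.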
NASA Astrophysics Data System (ADS)
Jiang, J.; Gu, F.; Gennish, R.; Moore, D. J.; Harris, G.; Ball, A. D.
2008-08-01
Acoustic methods are among the most useful techniques for monitoring the condition of machines. However, the influence of background noise is a major issue in implementing this method. This paper introduces an effective monitoring approach to diesel engine combustion based on acoustic one-port source theory and exhaust acoustic measurements. It has been found that the strength, in terms of pressure, of the engine acoustic source is able to provide a more accurate representation of the engine combustion because it is obtained by minimising the reflection effects in the exhaust system. A multi-load acoustic method was then developed to determine the pressure signal when a four-cylinder diesel engine was tested with faults in the fuel injector and exhaust valve. From the experimental results, it is shown that a two-load acoustic method is sufficient to permit the detection and diagnosis of abnormalities in the pressure signal, caused by the faults. This then provides a novel and yet reliable method to achieve condition monitoring of diesel engines even if they operate in high noise environments such as standby power stations and vessel chambers.
Wianowska, Dorota; Dawidowicz, Andrzej L
2016-05-01
This paper proposes and shows the analytical capabilities of a new variant of matrix solid phase dispersion (MSPD) with the solventless blending step in the chromatographic analysis of plant volatiles. The obtained results prove that the use of a solvent is redundant as the sorption ability of the octadecyl brush is sufficient for quantitative retention of volatiles from 9 plants differing in their essential oil composition. The extraction efficiency of the proposed simplified MSPD method is equivalent to the efficiency of the commonly applied variant of MSPD with the organic dispersing liquid and pressurized liquid extraction, which is a much more complex, technically advanced and highly efficient technique of plant extraction. The equivalency of these methods is confirmed by the variance analysis. The proposed solventless MSPD method is precise, accurate, and reproducible. The recovery of essential oil components estimated by the MSPD method exceeds 98%, which is satisfactory for analytical purposes. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Greenwood, Jeremy R.; Calkins, David; Sullivan, Arron P.; Shelley, John C.
2010-06-01
Generating the appropriate protonation states of drug-like molecules in solution is important for success in both ligand- and structure-based virtual screening. Screening collections of millions of compounds requires a method for determining tautomers and their energies that is sufficiently rapid, accurate, and comprehensive. To maximise enrichment, the lowest energy tautomers must be determined from heterogeneous input, without over-enumerating unfavourable states. While computationally expensive, the density functional theory (DFT) method M06-2X/aug-cc-pVTZ(-f) [PB-SCRF] provides accurate energies for enumerated model tautomeric systems. The empirical Hammett-Taft methodology can very rapidly extrapolate substituent effects from model systems to drug-like molecules via the relationship between pKT and pKa. Combining the two complementary approaches transforms the tautomer problem from a scientific challenge to one of engineering scale-up, and avoids issues that arise due to the very limited number of measured pKT values, especially for the complicated heterocycles often favoured by medicinal chemists for their novelty and versatility. Several hundred pre-calculated tautomer energies and substituent pKa effects are tabulated in databases for use in structural adjustment by the program Epik, which treats tautomers as a subset of the larger problem of the protonation states in aqueous ensembles and their energy penalties. Accuracy and coverage are continually improved and expanded by parameterizing new systems of interest using DFT and experimental data. Recommendations are made for how to best incorporate tautomers in molecular design and virtual screening workflows.
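Once relative tautomer energy penalties are in hand, aqueous populations follow from a Boltzmann weighting, which is how an energy penalty translates into the state populations that matter for screening. A minimal sketch with invented energies:

```python
import math

RT = 0.593  # kcal/mol at ~298 K

# Invented relative free energies (kcal/mol) for three tautomers of one ligand.
energies = {"taut_A": 0.0, "taut_B": 1.4, "taut_C": 3.0}

# Boltzmann weighting: population proportional to exp(-dG / RT).
Z = sum(math.exp(-e / RT) for e in energies.values())
for name, e in energies.items():
    pop = math.exp(-e / RT) / Z
    print(f"{name}: dG = {e:+.1f} kcal/mol  population = {pop:.1%}")
```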
Multi-Sensor Fusion with Interacting Multiple Model Filter for Improved Aircraft Position Accuracy
Cho, Taehwan; Lee, Changho; Choi, Sangbang
2013-01-01
The International Civil Aviation Organization (ICAO) has decided to adopt Communications, Navigation, and Surveillance/Air Traffic Management (CNS/ATM) as the 21st century standard for navigation. Accordingly, ICAO members have provided an impetus to develop related technology and build sufficient infrastructure. For aviation surveillance with CNS/ATM, Ground-Based Augmentation System (GBAS), Automatic Dependent Surveillance-Broadcast (ADS-B), multilateration (MLAT) and wide-area multilateration (WAM) systems are being established. These sensors can track aircraft positions more accurately than existing radar and can compensate for the blind spots in aircraft surveillance. In this paper, we applied a novel sensor fusion method with Interacting Multiple Model (IMM) filter to GBAS, ADS-B, MLAT, and WAM data in order to improve the reliability of the aircraft position. Results of performance analysis show that the position accuracy is improved by the proposed sensor fusion method with the IMM filter. PMID:23535715
Stochastic derivative-free optimization using a trust region framework
Larson, Jeffrey; Billups, Stephen C.
2016-02-17
This study presents a trust region algorithm to minimize a function f when one has access only to noise-corrupted function values f¯. The model-based algorithm dynamically adjusts its step length, taking larger steps when the model and function agree and smaller steps when the model is less accurate. The method does not require the user to specify a fixed pattern of points used to build local models and does not repeatedly sample points. If f is sufficiently smooth and the noise is independent and identically distributed with mean zero and finite variance, we prove that our algorithm produces iterates such that the corresponding function gradients converge in probability to zero. As a result, we present a prototype of our algorithm that, while simplistic in its management of previously evaluated points, solves benchmark problems in fewer function evaluations than do existing stochastic approximation methods.
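The dynamic step-length adjustment described follows the classic trust-region ratio test: compare the achieved reduction with the model's predicted reduction and resize the region accordingly. A generic sketch of that control logic (thresholds are conventional textbook choices, not the paper's):

```python
def update_trust_radius(delta, rho, eta1=0.25, eta2=0.75,
                        shrink=0.5, grow=2.0, delta_max=10.0):
    """Classic trust-region radius update.

    rho = (actual reduction) / (model-predicted reduction) at the candidate
    step; eta1/eta2 and the scale factors are conventional defaults.
    """
    if rho < eta1:          # model was poor: shrink the region
        return shrink * delta
    if rho > eta2:          # model was good: allow larger steps
        return min(grow * delta, delta_max)
    return delta            # model was adequate: keep the radius

delta = 1.0
for rho in (0.1, 0.5, 0.9):   # invented agreement ratios
    delta = update_trust_radius(delta, rho)
    print(f"rho = {rho:.1f} -> new radius {delta:.2f}")
```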
Multi-sensor fusion with interacting multiple model filter for improved aircraft position accuracy.
Cho, Taehwan; Lee, Changho; Choi, Sangbang
2013-03-27
The International Civil Aviation Organization (ICAO) has decided to adopt Communications, Navigation, and Surveillance/Air Traffic Management (CNS/ATM) as the 21st century standard for navigation. Accordingly, ICAO members have provided an impetus to develop related technology and build sufficient infrastructure. For aviation surveillance with CNS/ATM, Ground-Based Augmentation System (GBAS), Automatic Dependent Surveillance-Broadcast (ADS-B), multilateration (MLAT) and wide-area multilateration (WAM) systems are being established. These sensors can track aircraft positions more accurately than existing radar and can compensate for the blind spots in aircraft surveillance. In this paper, we applied a novel sensor fusion method with Interacting Multiple Model (IMM) filter to GBAS, ADS-B, MLAT, and WAM data in order to improve the reliability of the aircraft position. Results of performance analysis show that the position accuracy is improved by the proposed sensor fusion method with the IMM filter.
Sub-micrometre accurate free-form optics by three-dimensional printing on single-mode fibres
Gissibl, Timo; Thiele, Simon; Herkommer, Alois; Giessen, Harald
2016-01-01
Micro-optics are widely used in numerous applications, such as beam shaping, collimation, focusing and imaging. We use femtosecond 3D printing to manufacture free-form micro-optical elements. Our method gives sub-micrometre accuracy so that direct manufacturing even on single-mode fibres is possible. We demonstrate the potential of our method by writing different collimation optics, toric lenses, free-form surfaces with polynomials of up to 10th order for intensity beam shaping, as well as chiral photonic crystals for circular polarization filtering, all aligned onto the core of the single-mode fibres. We determine the accuracy of our optics by analysing the output patterns as well as interferometrically characterizing the surfaces. We find excellent agreement with numerical calculations. 3D printing of microoptics can achieve sufficient performance that will allow for rapid prototyping and production of beam-shaping and imaging devices. PMID:27339700
Olbrant, Edgar; Frank, Martin
2010-12-01
In this paper, we study a deterministic method for particle transport in biological tissues. The method is specifically developed for dose calculations in cancer therapy and for radiological imaging. Generalized Fokker-Planck (GFP) theory [Leakeas and Larsen, Nucl. Sci. Eng. 137 (2001), pp. 236-250] has been developed to improve the Fokker-Planck (FP) equation in cases where scattering is forward-peaked and where there is a sufficient amount of large-angle scattering. We compare grid-based numerical solutions to FP and GFP in realistic medical applications. First, electron dose calculations in heterogeneous parts of the human body are performed. Therefore, accurate electron scattering cross sections are included and their incorporation into our model is extensively described. Second, we solve GFP approximations of the radiative transport equation to investigate reflectance and transmittance of light in biological tissues. All results are compared with either Monte Carlo or discrete-ordinates transport solutions.
Top-down analysis of protein samples by de novo sequencing techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vyatkina, Kira; Wu, Si; Dekker, Lennard J. M.
MOTIVATION: Recent technological advances have made high-resolution mass spectrometers affordable to many laboratories, thus boosting rapid development of top-down mass spectrometry, and implying a need for efficient methods for analyzing this kind of data. RESULTS: We describe a method for analysis of protein samples from top-down tandem mass spectrometry data, which capitalizes on de novo sequencing of fragments of the proteins present in the sample. Our algorithm takes as input a set of de novo amino acid strings derived from the given mass spectra using the recently proposed Twister approach, and combines them into aggregated strings endowed with offsets. The former typically constitute accurate sequence fragments of sufficiently well-represented proteins from the sample being analyzed, while the latter indicate their location in the protein sequence, and also bear information on post-translational modifications and fragmentation patterns.
Evaluation of the SeedCounter, A Mobile Application for Grain Phenotyping.
Komyshev, Evgenii; Genaev, Mikhail; Afonnikov, Dmitry
2016-01-01
Grain morphometry in cereals is an important step in selecting new high-yielding plants. Manual assessment of parameters such as the number of grains per ear and grain size is laborious. One solution to this problem is image-based analysis that can be performed using a desktop PC. Furthermore, the effectiveness of analysis performed in the field can be improved through the use of mobile devices. In this paper, we propose a method for the automated evaluation of phenotypic parameters of grains using mobile devices running the Android operating system. The experimental results show that this approach is efficient and sufficiently accurate for the large-scale analysis of phenotypic characteristics in wheat grains. Evaluation of our application under six different lighting conditions and on three mobile devices demonstrated that lighting has a significant influence on the accuracy of our method, whereas the choice of smartphone does not.
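For readers unfamiliar with image-based grain counting, the sketch below shows the simplest version of the idea using OpenCV: threshold the photograph, then count connected components. The file name, the use of Otsu thresholding, and the minimum-area filter are assumptions for illustration; the SeedCounter app itself may process images differently.

    # Minimal sketch of image-based grain counting, assuming grains photographed
    # as dark objects on white paper. Thresholds and file name are illustrative.
    import cv2

    img = cv2.imread("grains_on_paper.jpg", cv2.IMREAD_GRAYSCALE)
    # Otsu thresholding separates grains from the paper background; lighting
    # variation (which the study found significant) shifts this threshold.
    _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    # Drop label 0 (background) and specks below an assumed minimum area.
    grains = [s for s in stats[1:] if s[cv2.CC_STAT_AREA] > 50]
    print("grain count:", len(grains))
    for s in grains[:5]:
        print("width x height (px):", s[cv2.CC_STAT_WIDTH], "x", s[cv2.CC_STAT_HEIGHT])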
Sensor, method and system of monitoring transmission lines
Syracuse, Steven J.; Clark, Roy; Halverson, Peter G.; Tesche, Frederick M.; Barlow, Charles V.
2012-10-02
An apparatus, method, and system for measuring the magnetic field produced by phase conductors in multi-phase power lines. The magnetic field measurements are used to determine the current load on the conductors. The magnetic fields are sensed by coils placed sufficiently proximate the lines to measure the voltage induced in the coils by the field without touching the lines. The x and y components of the magnetic fields are used to calculate the conductor sag, and then the sag data, along with the field strength data, can be used to calculate the current load on the line and the phase of the current. The sag calculations of this invention are independent of line voltage and line current measurements. The system applies a computerized fitter routine to measured and sampled voltages on the coils to accurately determine the values of parameters associated with the overhead phase conductors.
Low gravity synthesis of polymers with controlled molecular configuration
NASA Technical Reports Server (NTRS)
Heimbuch, A. H.; Parker, J. A.; Schindler, A.; Olf, H. G.
1975-01-01
Heterogeneous chemical systems have been studied for the synthesis of isotactic polypropylene in order to establish baseline parameters for the reaction process and to develop sensitive and accurate methods of analysis. These parameters and analytical methods may be used to make a comparison between the polypropylene obtained at one g with that of zero g (gravity). Baseline reaction parameters have been established for the slurry (liquid monomer in heptane/solid catalyst) polymerization of propylene to yield high purity, 98% isotactic polypropylene. Kinetic data for the slurry reaction showed that a sufficient quantity of polymer for complete characterization can be produced in a reaction time of 5 min; this time is compatible with that available on a sounding rocket for a zero-g simulation experiment. The preformed (activated) catalyst was found to be more reproducible in its activity than the in situ formed catalyst.
Jaki, Thomas; Allacher, Peter; Horling, Frank
2016-09-05
Detecting and characterizing anti-drug antibodies (ADA) against a protein therapeutic is crucial for monitoring the unwanted immune response. A multi-tiered approach is usually employed for testing patient samples for ADA activity: samples are first rapidly screened for positivity and then confirmed in a separate assay. In this manuscript we evaluate the ability of different methods to classify subjects using screening and competition-based confirmatory assays. We find that the confirmation method matters most for the overall performance of the multi-stage process, with a t-test performing best when differences are moderate to large. Moreover, when differences between positive and negative samples are not sufficiently large, a competition-based confirmation step yields poor classification of positive samples. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
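A hedged sketch of the two-tier decision the paper evaluates, assuming triplicate signal readings and the t-test-based confirmation step the authors found to work best when differences are moderate to large; the cut point and the data are invented for illustration.

    # Two-tier ADA testing sketch: screen by cut point, then confirm by testing
    # whether excess drug suppresses the signal (one-sided t-test). All values
    # below are illustrative assumptions, not the paper's data.
    import numpy as np
    from scipy import stats

    screen_cut = 1.20                             # assumed screening cut point
    sample_od = np.array([1.65, 1.70, 1.58])      # untreated replicates (assumed)
    inhibited_od = np.array([1.10, 1.05, 1.18])   # drug-inhibited replicates (assumed)

    if sample_od.mean() > screen_cut:             # tier 1: screening
        t, p = stats.ttest_ind(sample_od, inhibited_od, alternative="greater")
        verdict = "ADA positive" if p < 0.05 else "not confirmed"
        print(f"screen positive; confirmation p = {p:.4f} -> {verdict}")
    else:
        print("screen negative")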
Pharmacists' knowledge and the difficulty of obtaining emergency contraception.
Bennett, Wendy; Petraitis, Carol; D'Anella, Alicia; Marcella, Stephen
2003-10-01
This cross-sectional study was performed to examine knowledge and attitudes among pharmacists about emergency contraception (EC) and to determine the factors associated with their provision of EC. A random systematic sampling method was used to obtain a sample (N = 320) of pharmacies in Pennsylvania. A "mystery shopper" telephone survey method was utilized. Only 35% of pharmacists stated that they would be able to fill a prescription for EC that day. Also, many community pharmacists do not have sufficient or accurate information about EC. In a logistic regression model, pharmacists' lack of information was associated with the low proportion of pharmacists able to dispense EC. In conclusion, access to EC from community pharmacists in Pennsylvania is severely limited. Interventions to improve timely access to EC involve increased education for pharmacists, as well as increased community requests for these products as an incentive for pharmacists to stock them.
Jaffe, Jacob D; Keshishian, Hasmik; Chang, Betty; Addona, Theresa A; Gillette, Michael A; Carr, Steven A
2008-10-01
Verification of candidate biomarker proteins in blood is typically done using multiple reaction monitoring (MRM) of peptides by LC-MS/MS on triple quadrupole MS systems. MRM assay development for each protein requires significant time and cost, much of which is likely to be of little value if the candidate biomarker is below the detection limit in blood or a false positive in the original discovery data. Here we present a new technology, accurate inclusion mass screening (AIMS), designed to provide a bridge from unbiased discovery to MS-based targeted assay development. Masses on the software inclusion list are monitored in each scan on the Orbitrap MS system, and MS/MS spectra for sequence confirmation are acquired only when a peptide from the list is detected with both the correct accurate mass and charge state. The AIMS experiment confirms that a given peptide (and thus the protein from which it is derived) is present in the plasma. Throughput of the method is sufficient to qualify up to a hundred proteins/week. The sensitivity of AIMS is similar to MRM on a triple quadrupole MS system using optimized sample preparation methods (low tens of ng/ml in plasma), and MS/MS data from the AIMS experiments on the Orbitrap can be directly used to configure MRM assays. The method was shown to be at least 4-fold more efficient at detecting peptides of interest than undirected LC-MS/MS experiments using the same instrumentation, and relative quantitation information can be obtained by AIMS in case versus control experiments. Detection by AIMS ensures that a quantitative MRM-based assay can be configured for that protein. The method has the potential to qualify a large number of biomarker candidates based on their detection in plasma prior to committing to the time- and resource-intensive steps of establishing a quantitative assay.
Donnell, Deborah; Komárek, Arnošt; Omelka, Marek; Mullis, Caroline E.; Szekeres, Greg; Piwowar-Manning, Estelle; Fiamma, Agnes; Gray, Ronald H.; Lutalo, Tom; Morrison, Charles S.; Salata, Robert A.; Chipato, Tsungai; Celum, Connie; Kahle, Erin M.; Taha, Taha E.; Kumwenda, Newton I.; Karim, Quarraisha Abdool; Naranbhai, Vivek; Lingappa, Jairam R.; Sweat, Michael D.; Coates, Thomas; Eshleman, Susan H.
2013-01-01
Background Accurate methods of HIV incidence determination are critically needed to monitor the epidemic and determine the population level impact of prevention trials. One such trial, Project Accept, a Phase III, community-randomized trial, evaluated the impact of enhanced, community-based voluntary counseling and testing on population-level HIV incidence. The primary endpoint of the trial was based on a single, cross-sectional, post-intervention HIV incidence assessment. Methods and Findings Test performance of HIV incidence determination was evaluated for 403 multi-assay algorithms [MAAs] that included the BED capture immunoassay [BED-CEIA] alone, an avidity assay alone, and combinations of these assays at different cutoff values with and without CD4 and viral load testing on samples from seven African cohorts (5,325 samples from 3,436 individuals with known duration of HIV infection [1 month to >10 years]). The mean window period (the average time individuals appear positive for a given algorithm) and the performance of these MAAs in estimating incidence (in terms of bias and variance) were evaluated in three simulated epidemic scenarios (stable, emerging and waning). The power of different test methods to detect a 35% reduction in incidence in the matched communities of Project Accept was also assessed. A MAA was identified that included BED-CEIA, the avidity assay, CD4 cell count, and viral load that had a window period of 259 days, accurately estimated HIV incidence in all three epidemic settings and provided sufficient power to detect an intervention effect in Project Accept. Conclusions In a Southern African setting, HIV incidence estimates and intervention effects can be accurately estimated from cross-sectional surveys using a MAA. The improved accuracy in cross-sectional incidence testing that a MAA provides is a powerful tool for HIV surveillance and program evaluation. PMID:24236054
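Once an MAA with a known mean window period classifies survey participants, incidence follows from the standard cross-sectional estimator, sketched below. The 259-day window period comes from the abstract; the survey counts are invented for illustration.

    # Cross-sectional incidence estimator fed by an MAA: incidence ~ R / (S * omega),
    # where R = HIV-positive subjects classified as recently infected, S = number
    # of HIV-negative (susceptible) subjects, omega = MAA mean window period.
    omega_years = 259 / 365.25   # MAA mean window period from the abstract, in years
    recent = 24                  # assumed MAA-recent cases found in the survey
    susceptible = 4000           # assumed HIV-negative subjects in the survey

    incidence = recent / (susceptible * omega_years)
    print(f"estimated incidence: {100 * incidence:.2f} per 100 person-years")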
NASA Astrophysics Data System (ADS)
Gnyba, M.; Wróbel, M. S.; Karpienko, K.; Milewska, D.; Jedrzejewska-Szczerska, M.
2015-07-01
In this article the simultaneous investigation of blood parameters by complementary optical methods, Raman spectroscopy and spectral-domain low-coherence interferometry, is presented. Thus, the mutual relationship between chemical and physical properties may be investigated, because low-coherence interferometry measures the optical properties of the investigated object, while Raman spectroscopy gives information about its molecular composition. A series of in-vitro measurements was carried out to assess whether the accuracy is sufficient for monitoring blood parameters. A large number of blood samples with various hematological parameters, collected from different donors, was measured in order to achieve statistically significant results and to validate the methods. Preliminary results indicate the benefits of combining the presented complementary methods and form the basis for the development of a multimodal system for rapid and accurate optical determination of selected parameters in whole human blood. Future development of optical systems and multivariate calibration models is planned to extend the number of detected blood parameters and provide a robust quantitative multi-component analysis.
Calibration-free gaze tracking for automatic measurement of visual acuity in human infants.
Xiong, Chunshui; Huang, Lei; Liu, Changping
2014-01-01
Most existing vision-based methods for gaze tracking need a tedious calibration process, in which subjects are required to fixate on one or several specific points in space. However, such cooperation is hard to obtain, especially from children and infants. In this paper, a new calibration-free gaze tracking system and method is presented for the automatic measurement of visual acuity in human infants. To the best of our knowledge, this is the first application of vision-based gaze tracking to the measurement of visual acuity. First, a polynomial of the pupil center-cornea reflection (PCCR) vector is used as the gaze feature. Then, a Gaussian mixture model (GMM) is employed for gaze behavior classification, trained offline using labeled data from subjects with healthy eyes. Experimental results on several subjects show that the proposed method is accurate, robust, and sufficient for the measurement of visual acuity in human infants.
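The sketch below illustrates the two ingredients the abstract names, a polynomial of the PCCR vector as the gaze feature and a GMM classifier trained offline, on synthetic data. The quadratic feature set, the two-class setup, and all numbers are assumptions for illustration, not the paper's design.

    # PCCR-polynomial features + GMM classification sketch on synthetic data.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    # PCCR vectors (dx, dy) for "fixating target" vs "looking away" behaviour
    fixate = rng.normal([0.0, 0.0], 0.05, size=(200, 2))
    away = rng.normal([0.4, 0.3], 0.15, size=(200, 2))

    def poly_features(v):
        dx, dy = v[:, 0], v[:, 1]
        # assumed quadratic PCCR polynomial
        return np.column_stack([dx, dy, dx * dy, dx**2, dy**2])

    # One GMM per behaviour class, trained offline on labelled data
    gmm_fix = GaussianMixture(n_components=2, random_state=0).fit(poly_features(fixate))
    gmm_away = GaussianMixture(n_components=2, random_state=0).fit(poly_features(away))

    test = poly_features(rng.normal([0.0, 0.0], 0.05, size=(10, 2)))
    pred_fixating = gmm_fix.score_samples(test) > gmm_away.score_samples(test)
    print("classified as fixating:", pred_fixating.sum(), "of", len(test))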
Cheon, Young Koog; Cho, Won Young; Lee, Tae Hee; Cho, Young Deok; Moon, Jong Ho; Lee, Joon Seong; Shim, Chan Sup
2009-01-01
AIM: To assess the ability of endoscopic ultrasonography (EUS) to differentiate neoplastic from non-neoplastic polypoid lesions of the gallbladder (PLGs). METHODS: The uses of EUS and transabdominal ultrasonography (US) were retrospectively analyzed in 94 surgical cases of gallbladder polyps less than 20 mm in diameter. RESULTS: The prevalence of neoplastic lesions with a diameter of 5-10 mm was 17.2% (10/58); 11-15 mm, 15.4% (4/26), and 16-20 mm, 50% (5/10). The overall diagnostic accuracies of EUS and US for small PLGs were 80.9% and 63.9% (P < 0.05), respectively. EUS correctly distinguished 12 (63.2%) of 19 neoplastic PLGs but was less accurate for polyps less than 1.0 cm (4/10, 40%) than for polyps greater than 1.0 cm (8/9, 88.9%) (P = 0.02). CONCLUSION: Although EUS was more accurate than US, its accuracy for differentiating neoplastic from non-neoplastic PLGs less than 1.0 cm was low. Thus, EUS alone is not sufficient for determining a treatment strategy for PLGs of less than 1.0 cm. PMID:19452579
Telemedicine in acute plastic surgical trauma and burns.
Jones, S. M.; Milroy, C.; Pickford, M. A.
2004-01-01
BACKGROUND: Telemedicine is a relatively new development within the UK, but is increasingly useful in many areas of medicine including plastic surgery. Plastic surgery centres often work on a hub-and-spoke basis with many district hospitals referring to one tertiary centre. The Queen Victoria Hospital is one such centre receiving calls from more than 28 hospitals in the Southeast of England resulting in approximately 20 referrals a day. OBJECTIVE: A telemedicine system was developed to improve trauma management. This study was designed to establish whether digital images were sufficiently accurate to aid decision-making. A store-and-forward telemedicine system was devised and the images of 150 trauma referrals evaluated in terms of injury severity and operative priority by each member of the plastic surgical team. RESULTS: Correlation scores for assessed images were high. Accuracy of "transmitted image" in comparison to injury on examination scored > 97%. Operative priority scores tended to be higher than injury severity. CONCLUSIONS: Telemedicine is an accurate method by which to transfer information on plastic surgical trauma including burns. PMID:15239862
Comparison of two methods of measuring gastric pH.
Neill, K M; Rice, K T; Ahern, H L
1993-01-01
To assess the agreement between two methods of measuring gastric pH in critically ill patients (multiple-band litmus paper-tested aspirations versus a meter-read probe located in the tip of a nasogastric tube) and to compare nurse satisfaction with both methods of measuring pH. Prospective, correlational, nonprobability sample. Mid-Atlantic, semirural Veterans Affairs Medical Center. 39 male, surgical, critical care patients, who were nasogastrically intubated in the operating room and received nothing by mouth. NURSES: Twenty-seven registered nurses on the medical-surgical intensive care staff. Differences in pH units as determined by two methods of measurement and nurse satisfaction scores. Litmus paper-tested aspirations versus a meter-read probe located in the tip of the nasogastric tube, measured every 2 hours for 48 hours. A nurse satisfaction assessment form for both measurement methods at entry, 6 months, and 12 months. All measures of association, Pearson's r (0.79), the concordance coefficient (0.74), and eta (0.88), were high. The concordance coefficient measures indicated sufficient agreement between the two methods at the initial and 24-hour measurement times (Cb = 0.97, 0.97, and 0.94), but not at 48 hours. The meter method indicated prophylaxis was needed when the paper did not, more often than did the paper method (9.3% vs 5.2%). A significant difference between methods was found only at the last reading at 48 hours (z = -2.24, p < .0249). MANOVA revealed that nurses' preference for the meter method was significant (F = 139.48, df = 1,18) and increased over time (F = 4.77, df = 2,36). The gastric probe method of measuring pH is an accurate substitute, up to 48 hours, for the litmus-paper aspiration method in the postoperative patient who is receiving nothing by mouth. Nurses prefer the gastric probe method of measuring pH over the litmus-paper method because they judge it to be safer, faster, and more accurate.
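For reference, this is how the agreement statistics reported above can be computed; the concordance coefficient is taken here to be Lin's concordance correlation coefficient, a common choice for method-agreement studies, and the paired readings are invented for illustration.

    # Pearson's r and Lin's concordance correlation coefficient on invented
    # paired pH readings from the two measurement methods.
    import numpy as np

    paper = np.array([3.0, 4.0, 5.0, 2.5, 6.0, 4.5, 3.5])
    meter = np.array([3.2, 4.1, 4.8, 2.7, 5.8, 4.6, 3.4])

    r = np.corrcoef(paper, meter)[0, 1]
    # Lin's CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2)
    ccc = (2 * np.cov(paper, meter)[0, 1]
           / (paper.var(ddof=1) + meter.var(ddof=1) + (paper.mean() - meter.mean())**2))
    print(f"Pearson r = {r:.2f}, concordance coefficient = {ccc:.2f}")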
NASA Astrophysics Data System (ADS)
Zhang, Linna; Li, Gang; Sun, Meixiu; Li, Hongxiao; Wang, Zhennan; Li, Yingxin; Lin, Ling
2017-11-01
Identifying whole blood as either human or nonhuman is an important responsibility for import-export ports and inspection and quarantine departments. Analytical methods and DNA testing methods are usually destructive. Previous studies demonstrated that visible diffuse reflectance spectroscopy can discriminate between human and nonhuman blood without contact. An appropriate method for calibration set selection is very important for a robust quantitative model, so the Random Selection (RS) and Kennard-Stone (KS) methods were applied here to select samples for the calibration set. Moreover, a proper chemometric method can greatly improve the performance of a classification or quantification model. The Partial Least Squares Discriminant Analysis (PLSDA) method is commonly used to identify blood species from spectroscopic data, while the Least Squares Support Vector Machine (LSSVM) has proved well suited to discriminant analysis. In this research, both PLSDA and LSSVM were used for human blood discrimination. Compared with PLSDA, LSSVM enhanced the performance of the identification models. The overall results showed that LSSVM is the more feasible method for identifying human and animal blood species, and sufficiently demonstrated that LSSVM is a reliable, robust, and more effective and accurate method for human blood identification.
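To make the LSSVM step concrete, here is a minimal kernel LSSVM binary classifier: training reduces to a single linear solve, which is much of what makes the method fast and robust. The RBF kernel, the regularization value, and the toy "spectra" are illustrative assumptions, not the paper's data or settings.

    # Minimal least-squares SVM (LSSVM) classifier (Suykens formulation):
    # solve [[0, y^T], [y, Omega + I/gamma]] [b; alpha] = [0; 1],
    # with Omega_ij = y_i * y_j * K(x_i, x_j).
    import numpy as np

    def rbf(A, B, sigma=1.0):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma**2))

    def lssvm_train(X, y, gamma=10.0, sigma=1.0):
        n = len(y)
        Omega = np.outer(y, y) * rbf(X, X, sigma)
        A = np.zeros((n + 1, n + 1))
        A[0, 1:], A[1:, 0] = y, y
        A[1:, 1:] = Omega + np.eye(n) / gamma
        sol = np.linalg.solve(A, np.concatenate([[0.0], np.ones(n)]))
        return sol[0], sol[1:]                     # bias b, multipliers alpha

    def lssvm_predict(X_train, y, alpha, b, X_new, sigma=1.0):
        return np.sign(rbf(X_new, X_train, sigma) @ (alpha * y) + b)

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 1, (30, 5)), rng.normal(2, 1, (30, 5))])  # toy "spectra"
    y = np.array([-1.0] * 30 + [1.0] * 30)         # e.g. human vs nonhuman labels
    b, alpha = lssvm_train(X, y)
    print("training accuracy:", (lssvm_predict(X, y, alpha, b, X) == y).mean())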
Petersson, N. Anders; Sjogreen, Bjorn
2015-07-20
We develop a fourth order accurate finite difference method for solving the three-dimensional elastic wave equation in general heterogeneous anisotropic materials on curvilinear grids. The proposed method is an extension of the method for isotropic materials previously described by Sjögreen and Petersson (2012) [11]. The method discretizes the anisotropic elastic wave equation in second order formulation, using a node-centered finite difference method that satisfies the principle of summation by parts. The summation-by-parts technique results in a provably stable numerical method that is energy conserving. We also generalize and evaluate the super-grid far-field technique for truncating unbounded domains. Unlike the commonly used perfectly matched layers (PML), the super-grid technique is stable for general anisotropic materials, because it is based on a coordinate stretching combined with an artificial dissipation. Moreover, the discretization satisfies an energy estimate, proving that the numerical approximation is stable. We demonstrate by numerical experiments that sufficiently wide super-grid layers result in very small artificial reflections. Applications of the proposed method are demonstrated by three-dimensional simulations of anisotropic wave propagation in crystals.
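The stability proof rests on the summation-by-parts (SBP) property, which the sketch below demonstrates for the standard second-order SBP first-derivative operator (the paper's operators are fourth order; this low-order one just makes the identity visible): D = H^{-1} Q with Q + Q^T = diag(-1, 0, ..., 0, 1), so the operator mimics integration by parts to machine precision.

    # Verify the discrete integration-by-parts identity of an SBP operator:
    # u^T H (D v) + (D u)^T H v = u_N v_N - u_0 v_0, exactly (up to round-off).
    import numpy as np

    n, h = 11, 0.1
    H = h * np.eye(n); H[0, 0] = H[-1, -1] = h / 2      # diagonal norm (quadrature weights)
    Q = 0.5 * (np.eye(n, k=1) - np.eye(n, k=-1))
    Q[0, 0], Q[-1, -1] = -0.5, 0.5                      # Q + Q^T = diag(-1, 0, ..., 0, 1)
    D = np.linalg.inv(H) @ Q                            # second-order SBP derivative

    x = np.linspace(0, 1, n)
    u, v = np.sin(x), np.cos(x)
    lhs = u @ H @ (D @ v) + (D @ u) @ H @ v
    print("SBP identity residual:", lhs - (u[-1] * v[-1] - u[0] * v[0]))  # ~1e-16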
Niu, Xiaoping; Qi, Jianmin; Zhang, Gaoyang; Xu, Jiantang; Tao, Aifen; Fang, Pingping; Su, Jianguang
2015-01-01
To accurately measure gene expression using quantitative reverse transcription PCR (qRT-PCR), reliable reference gene(s) are required for data normalization. Corchorus capsularis, an annual herbaceous fiber crop with predominant biodegradability and renewability, has not previously been investigated for the stability of its reference genes in qRT-PCR. In this study, 11 candidate reference genes were selected and their expression levels were assessed using qRT-PCR. To account for the influence of experimental approach and tissue type, 22 different jute samples were selected from abiotic and biotic stress conditions as well as three different tissue types. The stability of the candidate reference genes was evaluated using the geNorm, NormFinder, and BestKeeper programs, and comprehensive rankings of gene stability were generated by aggregate analysis. For the biotic stress and NaCl stress subsets, ACT7 and RAN were suitable as stable reference genes for gene expression normalization. For the PEG stress subset, UBC and DnaJ were sufficient for accurate normalization. For the tissues subset, four reference genes TUBβ, UBI, EF1α, and RAN were sufficient for accurate normalization. The selected genes were further validated by comparing expression profiles of WRKY15 in various samples, and two stable reference genes were recommended for accurate normalization of qRT-PCR data. Our results provide researchers with appropriate reference genes for qRT-PCR in C. capsularis, and will facilitate gene expression studies under these conditions. PMID:26528312
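The geNorm ranking used above is based on a simple stability measure M: for each candidate gene, the average standard deviation of its pairwise log2 expression ratios with all other candidates (lower M = more stable). The sketch below computes M on an invented expression matrix; the gene names follow the abstract, but the data are synthetic.

    # geNorm stability measure M on a synthetic 5-gene x 22-sample matrix.
    import numpy as np

    rng = np.random.default_rng(7)
    genes = ["ACT7", "RAN", "UBC", "DnaJ", "TUBb"]
    # Per-gene noise levels are invented; stable genes get small sigma.
    expr = rng.lognormal(mean=0, sigma=[[0.10], [0.12], [0.30], [0.35], [0.50]],
                         size=(5, 22))

    log2 = np.log2(expr)
    M = np.array([np.mean([np.std(log2[i] - log2[j], ddof=1)
                           for j in range(len(genes)) if j != i])
                  for i in range(len(genes))])
    for g, m in sorted(zip(genes, M), key=lambda t: t[1]):
        print(f"{g}: M = {m:.3f}")   # lowest M = most stable reference gene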
NASA Technical Reports Server (NTRS)
Mostrel, M. M.
1988-01-01
New shock-capturing finite difference approximations for solving two scalar conservation law nonlinear partial differential equations describing inviscid, isentropic, compressible flows of aerodynamics at transonic speeds are presented. A global linear stability theorem is applied to these schemes in order to derive a necessary and sufficient condition for the finite difference method. A technique is proposed to render the described approximations total variation-stable by applying the flux limiters to the nonlinear terms of the difference equation dimension by dimension. An entropy theorem applying to the approximations is proved, and an implicit, forward Euler-type time discretization of the approximation is presented. Results of some numerical experiments using the approximations are reported.
Sampling in epidemiological research: issues, hazards and pitfalls.
Tyrer, Stephen; Heyman, Bob
2016-04-01
Surveys of people's opinions are fraught with difficulties. It is easier to obtain information from those who respond to text messages or to emails than to attempt to obtain a representative sample. Samples of the population that are selected non-randomly in this way are termed convenience samples as they are easy to recruit. This introduces a sampling bias. Such non-probability samples have merit in many situations, but an epidemiological enquiry is of little value unless a random sample is obtained. If a sufficient number of those selected actually complete a survey, the results are likely to be representative of the population. This editorial describes probability and non-probability sampling methods and illustrates the difficulties and suggested solutions in performing accurate epidemiological research.
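A toy simulation makes the editorial's central point quantitative: when the probability of responding is correlated with the quantity being surveyed, a convenience sample gives a biased estimate while a random sample does not. All population values below are invented for illustration.

    # Convenience sample vs simple random sample on a synthetic population.
    import numpy as np

    rng = np.random.default_rng(42)
    N = 100_000
    has_condition = rng.random(N) < 0.10               # true prevalence: 10%
    # Assume people with the condition are three times as likely to respond.
    respond_prob = np.where(has_condition, 0.30, 0.10)

    convenience = has_condition[rng.random(N) < respond_prob]       # self-selected responders
    random_sample = has_condition[rng.choice(N, size=2_000, replace=False)]

    print("true prevalence:        10.0%")
    print(f"convenience estimate:   {100 * convenience.mean():.1f}%")   # biased upward (~25%)
    print(f"random-sample estimate: {100 * random_sample.mean():.1f}%") # ~10%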
Super-resolution in a defocused plenoptic camera: a wave-optics-based approach.
Sahin, Erdem; Katkovnik, Vladimir; Gotchev, Atanas
2016-03-01
Plenoptic cameras enable the capture of a light field with a single device. However, with traditional light field rendering procedures, they can provide only low-resolution two-dimensional images. Super-resolution is considered to overcome this drawback. In this study, we present a super-resolution method for the defocused plenoptic camera (Plenoptic 1.0), where the imaging system is modeled using wave optics principles and utilizing low-resolution depth information of the scene. We are particularly interested in super-resolution of in-focus and near in-focus scene regions, which constitute the most challenging cases. The simulation results show that the employed wave-optics model makes super-resolution possible for such regions as long as sufficiently accurate depth information is available.
Application of Probability Methods to Assess Crash Modeling Uncertainty
NASA Technical Reports Server (NTRS)
Lyle, Karen H.; Stockwell, Alan E.; Hardy, Robin C.
2003-01-01
Full-scale aircraft crash simulations performed with nonlinear, transient dynamic, finite element codes can incorporate structural complexities such as: geometrically accurate models; human occupant models; and advanced material models to include nonlinear stress-strain behaviors, and material failure. Validation of these crash simulations is difficult due to a lack of sufficient information to adequately determine the uncertainty in the experimental data and the appropriateness of modeling assumptions. This paper evaluates probabilistic approaches to quantify the effects of finite element modeling assumptions on the predicted responses. The vertical drop test of a Fokker F28 fuselage section will be the focus of this paper. The results of a probabilistic analysis using finite element simulations will be compared with experimental data.
Vision-based augmented reality system
NASA Astrophysics Data System (ADS)
Chen, Jing; Wang, Yongtian; Shi, Qi; Yan, Dayuan
2003-04-01
The most promising aspect of augmented reality lies in its ability to integrate the virtual world of the computer with the real world of the user; namely, users can interact with real-world subjects and objects directly. This paper presents an experimental augmented reality system with a video see-through head-mounted device that displays virtual objects as if they were lying on the table together with real objects. In order to overlay virtual objects on the real world at the right position and orientation, accurate calibration and registration are essential. A vision-based method is used to estimate the external parameters of the CCD camera by tracking four known points with different colors. The system achieves sufficient accuracy for non-critical applications such as gaming and annotation.
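The registration step described above, recovering the camera's external parameters from four tracked points, is what pose-from-n-points solvers compute. Below is a hedged sketch using OpenCV's solvePnP; the marker layout, intrinsic matrix, and pixel coordinates are invented, and the original system may estimate the pose differently.

    # Camera extrinsics from 4 known, colour-coded points (a PnP problem).
    import numpy as np
    import cv2

    object_pts = np.array([[0, 0, 0], [10, 0, 0], [10, 10, 0], [0, 10, 0]],
                          dtype=np.float64)           # assumed marker layout (cm, on the table)
    image_pts = np.array([[320, 240], [420, 238], [424, 340], [318, 344]],
                         dtype=np.float64)            # assumed detected colour-blob centres (px)
    K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]],
                 dtype=np.float64)                    # assumed camera intrinsics

    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, distCoeffs=None)
    R, _ = cv2.Rodrigues(rvec)                        # rotation vector -> matrix
    print("camera rotation:\n", R)
    print("camera translation (cm):", tvec.ravel())   # pose used to render virtual objects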
Active control and synchronization chaotic satellite via the geomagnetic Lorentz force
NASA Astrophysics Data System (ADS)
Abdel-Aziz, Yehia
2016-07-01
The use of the geomagnetic Lorentz force for satellite attitude control is considered in this paper. A satellite carrying an electrostatic charge interacts with the Earth's magnetic field and experiences the Lorentz force. An analytical attitude control law, together with a scheme for synchronizing two identical chaotic satellite systems with different initial conditions (master/slave), is proposed to keep a charged satellite near the desired attitude. Asymptotic stability of the closed-loop system is investigated by means of the Lyapunov stability theorem. The feasibility of the control depends on the charge requirement. Given a sufficiently accurate insertion, a charged satellite could maintain the desired attitude orientation without propellant. Simulations are performed to prove the efficacy of the proposed method.
Non-LTE line formation in a magnetic field. I. Noncoherent scattering and true absorption
DOE Office of Scientific and Technical Information (OSTI.GOV)
Domke, H.; Staude, J.
1973-08-01
The formation of a Zeeman multiplet by noncoherent scattering and true absorption in a Milne-Eddington atmosphere is considered, assuming a homogeneous magnetic field and complete depolarization of the atomic line levels. The transfer equation for the Stokes parameters is transformed into a scalar integral equation of the Wiener-Hopf type, which is solved in closed form by Sobolev's method. The influence of the magnetic field on the mean scattering number in an infinite medium is discussed. The solution of the line formation problem is obtained for a Planckian source function. This solution may be simplified by making the "finite field approximation", which should be sufficiently accurate for practical purposes.
Lopez-Rendon, Xochitl; Zhang, Guozhi; Coudyzer, Walter; Develter, Wim; Bosmans, Hilde; Zanca, Federica
2017-11-01
To compare the lung and breast dose associated with three chest protocols: standard, organ-based tube current modulation (OBTCM), and fast-speed scanning; and to estimate the error associated with organ dose when modelling the longitudinal (z-) TCM versus the 3D-TCM in Monte Carlo (MC) simulations for these three protocols. Five adult and three paediatric cadavers with different BMI were scanned. The CTDIvol of the OBTCM and fast-speed protocols was matched to the patient-specific CTDIvol of the standard protocol. Lung and breast doses were estimated using MC simulations with both the z- and 3D-TCM, and compared between protocols. The fast-speed scanning protocol delivered the highest doses. A slight reduction in breast dose (up to 5.1%) was observed for two of the three female cadavers with the OBTCM in comparison to the standard protocol. For both adult and paediatric cadavers, using the z-TCM data only for organ dose estimation resulted in 10.0% accuracy for the standard and fast-speed protocols, while relative dose differences were up to 15.3% for the OBTCM protocol. At identical CTDIvol values, the standard protocol delivered the lowest overall doses. Only for the OBTCM protocol is the 3D-TCM needed if accurate (<10.0%) organ dosimetry is desired. • The z-TCM information is sufficient for accurate dosimetry for standard protocols. • The z-TCM information is sufficient for accurate dosimetry for fast-speed scanning protocols. • For organ-based TCM schemes, the 3D-TCM information is necessary for accurate dosimetry. • At identical CTDIvol, the fast-speed scanning protocol delivered the highest doses. • Lung dose was higher with XCare than with the standard protocol at identical CTDIvol.
Automatic drawing for traffic marking with MMS LIDAR intensity
NASA Astrophysics Data System (ADS)
Takahashi, G.; Takeda, H.; Shimano, Y.
2014-05-01
Upgrading the database of CYBER JAPAN has been strategically promoted since the "Basic Act on Promotion of Utilization of Geographical Information" was enacted in May 2007. In particular, there is high demand for the road information that forms a framework in this database. Therefore, road inventory mapping work has to be accurate and must eliminate variation caused by individual human operators. Further, the large number of traffic markings that are periodically maintained and possibly changed requires an efficient method for updating spatial data. Currently, we apply manual photogrammetric drawing for mapping traffic markings. However, this method is not sufficiently efficient in terms of the required productivity, and data variation can arise from individual operators. In contrast, Mobile Mapping Systems (MMS) and high-density Laser Imaging Detection and Ranging (LIDAR) scanners are rapidly gaining popularity. The aim of this study is to build an efficient method for automatically drawing traffic markings using MMS LIDAR data. The key idea in this method is extracting lines using a Hough transform strategically focused on changes in local reflection intensity along scan lines. Note also that this method processes every traffic marking. In this paper, we discuss a highly accurate and non-operator-dependent method that applies the following steps: (1) Binarizing LIDAR points by intensity and extracting higher-intensity points; (2) Generating a Triangulated Irregular Network (TIN) from the higher-intensity points; (3) Deleting arcs by length and generating outline polygons on the TIN; (4) Generating buffers from the outline polygons; (5) Extracting points within the buffers from the original LIDAR points; (6) Extracting local-intensity-changing points along scan lines from the extracted points; (7) Extracting lines from the intensity-changing points through a Hough transform; and (8) Connecting lines to generate automated traffic-marking mapping data.
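Step (7) is the heart of the method. The sketch below shows a minimal version of that step on a rasterized intensity image using OpenCV's probabilistic Hough transform; the file name and all thresholds are assumptions, and the paper applies the transform to intensity-change points along scan lines rather than to a plain raster.

    # Candidate marking edges from a rasterised LIDAR-intensity image.
    import cv2
    import numpy as np

    intensity = cv2.imread("lidar_intensity_raster.png", cv2.IMREAD_GRAYSCALE)
    # cf. step (1): keep high-intensity returns (markings are retroreflective)
    _, high = cv2.threshold(intensity, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    edges = cv2.Canny(high, 50, 150)             # local intensity changes, cf. step (6)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                            minLineLength=30, maxLineGap=5)  # cf. step (7)
    print("candidate marking segments:", 0 if lines is None else len(lines))
    for x1, y1, x2, y2 in (lines[:5, 0] if lines is not None else []):
        print("segment:", (x1, y1), "->", (x2, y2))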
Sarilita, Erli; Rynn, Christopher; Mossey, Peter A; Black, Sue; Oscandar, Fahmi
2018-05-01
This study investigated nose profile morphology and its relationship to the skull in Scottish subadult and Indonesian adult populations, with the aim of improving the accuracy of forensic craniofacial reconstruction. Samples of 86 lateral head cephalograms from Dundee Dental School (mean age, 11.8 years) and 335 lateral head cephalograms from the Universitas Padjadjaran Dental Hospital, Bandung, Indonesia (mean age, 24.2 years), were measured. The method of nose profile estimation based on skull morphology previously proposed by Rynn and colleagues in 2010 (FSMP 6:20-34) was tested in this study. Following this method, three nasal aperture-related craniometrics and six nose profile dimensions were measured from the cephalograms. To assess the accuracy of the method, six nose profile dimensions were estimated from the three craniometric parameters using the published method and then compared to the actual nose profile dimensions. In the Scottish subadult population, no sexual dimorphism was evident in the measured dimensions. In contrast, sexual dimorphism in the Indonesian adult population was evident in all craniometric and nose profile dimensions; notably, males exhibited statistically significantly larger values than females. The published method by Rynn and colleagues (FSMP 6:20-34, 2010) performed better in the Scottish subadult population (maximum mean difference, 2.35 mm) than in the Indonesian adult population (maximum mean difference, 5.42 mm in males and 4.89 mm in females). In addition, regression formulae were derived to estimate nose profile dimensions based on the craniometric measurements for the Indonesian adult population. The published method is not sufficiently accurate for use on the Indonesian population, so the derived method should be used. The accuracy of the published method by Rynn and colleagues (FSMP 6:20-34, 2010) was sufficiently reliable to be applied in the Scottish subadult population.
Verification of KAM Theory on Earth Orbiting Satellites
2010-03-01
[Extraction fragment of the report's table of contents: 2.2 The Two Body Problem; 2.3 Geocentric and Geographic coordinates; Center of Earth Radius; Geocentric Latitude.] ...their gravitational fields a different approach must be used. For the moment the above representation is sufficient, but a more accurate model will be
ERIC Educational Resources Information Center
McNamee, Mike
1990-01-01
Charities have an obligation to give donors "accurate and sufficient information concerning the deductibility of contributions." Donors must subtract any benefit of "substantial value" from their gifts. The value of a benefit is based on its fair market value, not on its cost to the charity. (MLW)
Remote Determination of the in situ Sensitivity of a Streckeisen STS-2 Broadband Seismometer
NASA Astrophysics Data System (ADS)
Uhrhammer, R. A.; Taira, T.; Hellweg, M.
2015-12-01
The sensitivity of a STS-2 broadband seismometer can be determined remotely by two basic methods: 1) via comparison of the inferred ground motions with a reference seismometer, and 2) via excitation of the calibration coil with a simultaneously recorded stimulus signal. The first method is limited by the accuracy of the reference seismometer, and the second method is limited by the accuracy of the motor constant (Gc) of the calibration coil. The accuracy of both methods is also influenced by the signal-to-noise ratio (SNR) in the presence of background seismic noise and by the degree of orthogonality of the tri-axial suspension in the STS-2 seismometer. The Streckeisen STS-2 manual states that the signal coil sensitivity (Gs) is 1500 V/(m/s) (+/-1.5%), and it gives Gc to only one decimal place (i.e., Gc = 2 g/A). Unfortunately, the factory Gc value is not given with sufficient accuracy to be useful for determining Gs to within 1.5%. Thus we need to determine Gc to enable accurate calibration of the STS-2 via remote excitation of the calibration coil with a known stimulus. The Berkeley Digital Seismic Network (BDSN) has 12 STS-2 seismometers with co-sited reference sensors (strong motion accelerometers), all recorded by Q330HR data loggers with factory cabling. The procedure is to first verify the sensitivity of the STS-2 signal coils (Gs) via comparison of the ground motions recorded by the STS-2 with those recorded by the co-sited strong motion accelerometer for an earthquake with sufficiently high SNR in a passband common to both sensors. The second step is to remotely (from Berkeley) excite the calibration coil with a 1 Hz sinusoid that is simultaneously recorded and, using the measured Gs values, to solve for the Gc of the calibration coils. The resulting Gc values are typically 2.20-2.50 g/A (accurate to 3+ decimal places), and once the Gc values are found, the STS-2 absolute sensitivity can be determined remotely to an accuracy of better than 1%. The primary advantage of using strong motion accelerometers as the reference instrument is that their absolute calibration can be checked via tilt tests if the need arises.
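The arithmetic of the two-step procedure can be sketched as follows, under one plausible reading of the g/A unit (equivalent ground acceleration per ampere) and assuming a flat velocity response at 1 Hz. All signal amplitudes are invented placeholders; only the nominal 1500 V/(m/s) sensitivity and the typical 2.20-2.50 g/A range come from the text above.

    # Two-step remote calibration arithmetic (illustrative numbers throughout).
    import math

    # Step 1: signal-coil sensitivity from a co-sited reference accelerometer.
    v_sts2 = 0.0153          # STS-2 output for the event, volts (assumed)
    gm_ref = 1.02e-5         # ground velocity from reference sensor, m/s (assumed)
    Gs = v_sts2 / gm_ref
    print(f"Gs = {Gs:.1f} V/(m/s)  (nominal 1500 +/- 1.5%)")

    # Step 2: calibration-coil motor constant from a recorded 1 Hz stimulus.
    i_cal = 1.0e-3           # stimulus current amplitude, A (assumed)
    v_resp = 5.39            # recorded response amplitude, volts (assumed)
    vel = v_resp / Gs                         # equivalent ground-velocity amplitude
    accel = 2 * math.pi * 1.0 * vel           # at 1 Hz: a = 2*pi*f*v
    Gc = accel / (9.81 * i_cal)               # motor constant in g/A
    print(f"Gc = {Gc:.3f} g/A  (typical 2.20-2.50 g/A)")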
Aerts, Sam; Deschrijver, Dirk; Joseph, Wout; Verloock, Leen; Goeminne, Francis; Martens, Luc; Dhaene, Tom
2013-05-01
Human exposure to background radiofrequency electromagnetic fields (RF-EMF) has been increasing with the introduction of new technologies. There is a definite need for the quantification of RF-EMF exposure but a robust exposure assessment is not yet possible, mainly due to the lack of a fast and efficient measurement procedure. In this article, a new procedure is proposed for accurately mapping the exposure to base station radiation in an outdoor environment based on surrogate modeling and sequential design, an entirely new approach in the domain of dosimetry for human RF exposure. We tested our procedure in an urban area of about 0.04 km(2) for Global System for Mobile Communications (GSM) technology at 900 MHz (GSM900) using a personal exposimeter. Fifty measurement locations were sufficient to obtain a coarse street exposure map, locating regions of high and low exposure; 70 measurement locations were sufficient to characterize the electric field distribution in the area and build an accurate predictive interpolation model. Hence, accurate GSM900 downlink outdoor exposure maps (for use in, e.g., governmental risk communication and epidemiological studies) are developed by combining the proven efficiency of sequential design with the speed of exposimeter measurements and their ease of handling. Copyright © 2013 Wiley Periodicals, Inc.
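The surrogate-modeling-plus-sequential-design loop can be sketched as follows: fit a surrogate (here a Gaussian process, one common choice) to the exposimeter readings gathered so far, then measure next wherever the surrogate is most uncertain. The synthetic exposure surface, kernel, and sampling criterion are assumptions; the paper's surrogate and design criterion may differ in detail.

    # Sequential design with a Gaussian-process surrogate on a synthetic field.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(3)
    def field(xy):                                 # synthetic "true" exposure surface (V/m)
        return 0.5 + 0.4 * np.exp(-((xy - 100) ** 2).sum(1) / 2e3)

    grid = rng.uniform(0, 200, (400, 2))           # candidate street locations (m)
    X = grid[rng.choice(400, 5, replace=False)]    # small initial measurement walk
    y = field(X)
    for _ in range(45):                            # 50 locations total, as in the study
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=50.0), alpha=1e-4).fit(X, y)
        sd = gp.predict(grid, return_std=True)[1]
        nxt = grid[np.argmax(sd)]                  # sequential design: most uncertain point
        X = np.vstack([X, nxt])
        y = np.append(y, field(nxt[None, :]))
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=50.0), alpha=1e-4).fit(X, y)
    print("max posterior std after 50 measurements:",
          gp.predict(grid, return_std=True)[1].max())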
NASA Astrophysics Data System (ADS)
Kim, Sung-Phil; Simeral, John D.; Hochberg, Leigh R.; Donoghue, John P.; Black, Michael J.
2008-12-01
Computer-mediated connections between human motor cortical neurons and assistive devices promise to improve or restore lost function in people with paralysis. Recently, a pilot clinical study of an intracortical neural interface system demonstrated that a tetraplegic human was able to obtain continuous two-dimensional control of a computer cursor using neural activity recorded from his motor cortex. This control, however, was not sufficiently accurate for reliable use in many common computer control tasks. Here, we studied several central design choices for such a system including the kinematic representation for cursor movement, the decoding method that translates neuronal ensemble spiking activity into a control signal and the cursor control task used during training for optimizing the parameters of the decoding method. In two tetraplegic participants, we found that controlling a cursor's velocity resulted in more accurate closed-loop control than controlling its position directly and that cursor velocity control was achieved more rapidly than position control. Control quality was further improved over conventional linear filters by using a probabilistic method, the Kalman filter, to decode human motor cortical activity. Performance assessment based on standard metrics used for the evaluation of a wide range of pointing devices demonstrated significantly improved cursor control with velocity rather than position decoding. Disclosure. JPD is the Chief Scientific Officer and a director of Cyberkinetics Neurotechnology Systems (CYKN); he holds stock and receives compensation. JDS has been a consultant for CYKN. LRH receives clinical trial support from CYKN.
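A minimal sketch of Kalman-filter velocity decoding of the kind described, assuming a linear kinematic state [px, py, vx, vy] and binned spike counts modeled as a linear function of the state. In practice the matrices are fit from training data collected during the cursor-control task; here they are invented purely for illustration.

    # Kalman-filter decoding of cursor kinematics from synthetic spike counts.
    import numpy as np

    rng = np.random.default_rng(0)
    dt, n_neurons = 0.05, 8
    A = np.eye(4); A[0, 2] = A[1, 3] = dt        # position integrates velocity
    W = np.diag([1e-6, 1e-6, 1e-3, 1e-3])        # process noise (assumed)
    C = rng.normal(0, 1, (n_neurons, 4))         # neuron tuning to kinematics (assumed)
    Q = np.eye(n_neurons) * 0.5                  # spiking noise covariance (assumed)

    x, P = np.zeros(4), np.eye(4)
    for _ in range(100):                         # one 5-second decode in 50 ms bins
        # synthetic observation for a constant rightward velocity of 0.1
        z = C @ np.array([0.0, 0.0, 0.1, 0.0]) + rng.normal(0, 0.7, n_neurons)
        x, P = A @ x, A @ P @ A.T + W            # predict
        K = P @ C.T @ np.linalg.inv(C @ P @ C.T + Q)
        x = x + K @ (z - C @ x)                  # update with binned spike counts
        P = (np.eye(4) - K @ C) @ P
    print("decoded velocity (vx, vy):", x[2:])   # should approach (0.1, 0.0)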
Guidance for laboratories performing molecular pathology for cancer patients
Cree, Ian A; Deans, Zandra; Ligtenberg, Marjolijn J L; Normanno, Nicola; Edsjö, Anders; Rouleau, Etienne; Solé, Francesc; Thunnissen, Erik; Timens, Wim; Schuuring, Ed; Dequeker, Elisabeth; Murray, Samuel; Dietel, Manfred; Groenen, Patricia; Van Krieken, J Han
2014-01-01
Molecular testing is becoming an important part of the diagnosis of any patient with cancer. The challenge to laboratories is to meet this need, using reliable methods and processes to ensure that patients receive a timely and accurate report on which their treatment will be based. The aim of this paper is to provide minimum requirements for the management of molecular pathology laboratories. This general guidance should be augmented by the specific guidance available for different tumour types and tests. Preanalytical considerations are important, and careful consideration of the way in which specimens are obtained and reach the laboratory is necessary. Sample receipt and handling follow standard operating procedures, but some alterations may be necessary if molecular testing is to be performed, for instance to control tissue fixation. DNA and RNA extraction can be standardised and should be checked for quality and quantity of output on a regular basis. The choice of analytical method(s) depends on clinical requirements, desired turnaround time, and expertise available. Internal quality control, regular internal audit of the whole testing process, laboratory accreditation, and continual participation in external quality assessment schemes are prerequisites for delivery of a reliable service. A molecular pathology report should accurately convey the information the clinician needs to treat the patient with sufficient information to allow for correct interpretation of the result. Molecular pathology is developing rapidly, and further detailed evidence-based recommendations are required for many of the topics covered here. PMID:25012948
Error analysis of numerical gravitational waveforms from coalescing binary black holes
NASA Astrophysics Data System (ADS)
Fong, Heather; Chu, Tony; Kumar, Prayush; Pfeiffer, Harald; Boyle, Michael; Hemberger, Daniel; Kidder, Lawrence; Scheel, Mark; Szilagyi, Bela; SXS Collaboration
2016-03-01
The Advanced Laser Interferometer Gravitational-wave Observatory (Advanced LIGO) has finished a successful first observation run and will commence its second run this summer. Detection of compact object binaries utilizes matched-filtering, which requires a vast collection of highly accurate gravitational waveforms. This talk will present a set of about 100 new aligned-spin binary black hole simulations. I will discuss their properties, including a detailed error analysis, which demonstrates that the numerical waveforms are sufficiently accurate for gravitational wave detection purposes, as well as for parameter estimation purposes.
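A standard diagnostic in such error analyses is the mismatch: one minus the normalized noise-weighted overlap between two waveforms. The sketch below computes it for a toy chirp-like signal and a perturbed copy, assuming a flat noise spectrum and no time or phase maximization; real analyses weight the inner product by the detector noise curve.

    # Mismatch between a toy waveform and a slightly perturbed copy.
    import numpy as np

    t = np.linspace(0, 1, 4096)
    h1 = np.sin(2 * np.pi * 30 * t**1.5) * np.exp(-((t - 1) / 0.3) ** 2)
    h2 = h1 + 1e-3 * np.sin(2 * np.pi * 35 * t)   # stand-in for numerical error

    def overlap(a, b):
        inner = lambda x, y: np.real(np.vdot(np.fft.rfft(x), np.fft.rfft(y)))
        return inner(a, b) / np.sqrt(inner(a, a) * inner(b, b))

    print(f"mismatch 1 - <h1|h2> = {1 - overlap(h1, h2):.2e}")  # small => sufficiently accurate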
Oxygen index: An approximate value for the evaluation of combustion characteristics
NASA Technical Reports Server (NTRS)
Zartmann, I.; Reinwardt, D.; Franke, A.
1986-01-01
The oxygen index has gained international recognition for the determination of the combustion characteristics of plastic materials. The amounts of oxygen and nitrogen were determined more accurately for existing test equipment in order to specify the oxygen index as precisely and reproducibly as possible. Parameters such as the size of the ignition flame, ignition of the test pieces, test piece sizes, and test temperature are outlined. The minimum oxygen index was determined from the dimensions and duration of the fire. The results are sufficiently accurate for factory operating conditions and are also reproducible.
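By definition (as standardized in, e.g., ASTM D 2863), the oxygen index is the minimum oxygen concentration, in volume percent, of an O2/N2 mixture that just sustains burning of the specimen; the flow rates below are invented for illustration.

    # Oxygen index from the O2 and N2 flow rates at the extinction threshold.
    o2_flow, n2_flow = 4.2, 15.8     # illustrative flow rates, L/min
    oi = 100 * o2_flow / (o2_flow + n2_flow)
    print(f"oxygen index = {oi:.1f}")   # ~21 means the material just burns in air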
NASA Astrophysics Data System (ADS)
Havasi, Ágnes; Kazemi, Ehsan
2018-04-01
In the modeling of wave propagation phenomena it is necessary to use time integration methods which are not only sufficiently accurate, but also properly describe the amplitude and phase of the propagating waves. It is not clear if amending the developed schemes by extrapolation methods to obtain a high order of accuracy preserves the qualitative properties of these schemes in the perspective of dissipation, dispersion and stability analysis. It is illustrated that the combination of various optimized schemes with Richardson extrapolation is not optimal for minimal dissipation and dispersion errors. Optimized third-order and fourth-order methods are obtained, and it is shown that the proposed methods combined with Richardson extrapolation result in fourth and fifth orders of accuracy correspondingly, while preserving optimality and stability. The numerical applications include the linear wave equation, a stiff system of reaction-diffusion equations and the nonlinear Euler equations with oscillatory initial conditions. It is demonstrated that the extrapolated third-order scheme outperforms the recently developed fourth-order diagonally implicit Runge-Kutta scheme in terms of accuracy and stability.
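The core device, gaining one order of accuracy by combining solutions at two step sizes, is plain Richardson extrapolation. The sketch below applies it to explicit Euler (order 1) on y' = -y, purely to make the mechanism visible; the paper applies the same combination to optimized third- and fourth-order schemes, and the test problem here is an assumption.

    # Richardson extrapolation of a p-th order one-step method (here p = 1):
    # y_rich = (2^p * y_fine - y_coarse) / (2^p - 1) gains one order of accuracy.
    import numpy as np

    def euler(f, y0, T, n):
        y, h = y0, T / n
        for _ in range(n):
            y = y + h * f(y)
        return y

    f, y0, T, exact = (lambda y: -y), 1.0, 1.0, np.exp(-1.0)
    for n in (20, 40, 80):
        coarse, fine = euler(f, y0, T, n), euler(f, y0, T, 2 * n)
        rich = 2 * fine - coarse               # p = 1 case of the formula above
        print(f"n={n:3d}  euler err={abs(fine - exact):.2e}  "
              f"extrapolated err={abs(rich - exact):.2e}")   # error drops ~h -> ~h^2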
NASA Astrophysics Data System (ADS)
Zhang, Zhi-Qian; Liu, G. R.; Khoo, Boo Cheong
2013-02-01
A three-dimensional immersed smoothed finite element method (3D IS-FEM) using four-node tetrahedral element is proposed to solve 3D fluid-structure interaction (FSI) problems. The 3D IS-FEM is able to determine accurately the physical deformation of the nonlinear solids placed within the incompressible viscous fluid governed by Navier-Stokes equations. The method employs the semi-implicit characteristic-based split scheme to solve the fluid flows and smoothed finite element methods to calculate the transient dynamics responses of the nonlinear solids based on explicit time integration. To impose the FSI conditions, a novel, effective and sufficiently general technique via simple linear interpolation is presented based on Lagrangian fictitious fluid meshes coinciding with the moving and deforming solid meshes. In the comparisons to the referenced works including experiments, it is clear that the proposed 3D IS-FEM ensures stability of the scheme with the second order spatial convergence property; and the IS-FEM is fairly independent of a wide range of mesh size ratio.
Computed Potential Energy Surfaces and Minimum Energy Pathways for Chemical Reactions
NASA Technical Reports Server (NTRS)
Walch, Stephen P.; Langhoff, S. R. (Technical Monitor)
1994-01-01
Computed potential energy surfaces are often required for computation of such parameters as rate constants as a function of temperature, product branching ratios, and other detailed properties. For some dynamics methods, global potential energy surfaces are required. In this case, it is necessary to obtain the energy at a complete sampling of all the possible arrangements of the nuclei, which are energetically accessible, and then a fitting function must be obtained to interpolate between the computed points. In other cases, characterization of the stationary points and the reaction pathway connecting them is sufficient. These properties may be readily obtained using analytical derivative methods. We have found that computation of the stationary points/reaction pathways using CASSCF/derivative methods, followed by use of the internally contracted CI method to obtain accurate energetics, gives useful results for a number of chemically important systems. The talk will focus on a number of applications including global potential energy surfaces, H + O2, H + N2, O(3P) + H2, and reaction pathways for complex reactions, including reactions leading to NO and soot formation in hydrocarbon combustion.
Selection and authentication of botanical materials for the development of analytical methods.
Applequist, Wendy L; Miller, James S
2013-05-01
Herbal products, for example botanical dietary supplements, are widely used. Analytical methods are needed to ensure that botanical ingredients used in commercial products are correctly identified and that research materials are of adequate quality and are sufficiently characterized to enable research to be interpreted and replicated. Adulteration of botanical material in commerce is common for some species. The development of analytical methods for specific botanicals, and accurate reporting of research results, depend critically on correct identification of test materials. Conscious efforts must therefore be made to ensure that the botanical identity of test materials is rigorously confirmed and documented through preservation of vouchers, and that their geographic origin and handling are appropriate. Use of material with an associated herbarium voucher that can be botanically identified is always ideal. Indirect methods of authenticating bulk material in commerce, for example use of organoleptic, anatomical, chemical, or molecular characteristics, are not always acceptable for the chemist's purposes. Familiarity with botanical and pharmacognostic literature is necessary to determine what potential adulterants exist and how they may be distinguished.
NASA Technical Reports Server (NTRS)
Roth, Don J.; Farmer, Donald A.
1998-01-01
Abrasive cut-off wheels are at times unintentionally manufactured with nonuniformity that is difficult to identify and sufficiently characterize without time-consuming, destructive examination. One particular nonuniformity is a density variation condition occurring around the wheel circumference or along the radius, or both. This density variation, depending on its severity, can cause wheel warpage and wheel vibration resulting in unacceptable performance and perhaps premature failure of the wheel. Conventional nondestructive evaluation methods such as ultrasonic c-scan imaging and film radiography are inaccurate in their attempts at characterizing the density variation because a superimposing thickness variation exists as well in the wheel. In this article, the single transducer thickness-independent ultrasonic imaging method, developed specifically to allow more accurate characterization of aerospace components, is shown to precisely characterize the extent of the density variation in a cut-off wheel having a superimposing thickness variation. The method thereby has potential as an effective quality control tool in the abrasives industry for the wheel manufacturer.
14 CFR 23.1551 - Oil quantity indicator.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 14 Aeronautics and Space 1 2013-01-01 2013-01-01 false Oil quantity indicator. 23.1551 Section 23... Information Markings and Placards § 23.1551 Oil quantity indicator. Each oil quantity indicator must be marked in sufficient increments to indicate readily and accurately the quantity of oil. ...
14 CFR 23.1551 - Oil quantity indicator.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Oil quantity indicator. 23.1551 Section 23... Information Markings and Placards § 23.1551 Oil quantity indicator. Each oil quantity indicator must be marked in sufficient increments to indicate readily and accurately the quantity of oil. ...
14 CFR 23.1551 - Oil quantity indicator.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Oil quantity indicator. 23.1551 Section 23... Information Markings and Placards § 23.1551 Oil quantity indicator. Each oil quantity indicator must be marked in sufficient increments to indicate readily and accurately the quantity of oil. ...
Description of a Sensitive Seebeck Calorimeter Used for Cold Fusion Studies
NASA Astrophysics Data System (ADS)
Storms, Edmund
A sensitive and stable Seebeck calorimeter is described and used to determine the heat of formation of PdD. This determination can be used to show that such calorimeters are sufficiently accurate to measure the LENR effect and give support to the claims.
14 CFR 23.1551 - Oil quantity indicator.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Oil quantity indicator. 23.1551 Section 23... Information Markings and Placards § 23.1551 Oil quantity indicator. Each oil quantity indicator must be marked in sufficient increments to indicate readily and accurately the quantity of oil. ...
Improving Angles-Only Navigation Performance by Selecting Sufficiently Accurate Accelerometers
2009-08-01
...controller for thrusters and a PID controller for momentum wheels. Translational control leverages a PD controller for station keeping and Clohessy-Wiltshire (CW) equations targeting for transfers. Navigation is detailed in Section III.A (Kalman Filter Development), where a Square-Root EKF is ...
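For context on the guidance scheme in this snippet, relative motion under the Clohessy-Wiltshire equations has a closed-form solution that is commonly used both for transfer targeting and as the process model of a relative-navigation Kalman filter. Below is a minimal sketch of CW state propagation; the state ordering and the sample mean-motion value are illustrative assumptions, not taken from the report.

```python
import numpy as np

def cw_state_transition(n, t):
    """Closed-form Clohessy-Wiltshire state-transition matrix.
    n: mean motion of the reference orbit [rad/s]; t: time [s].
    State ordering: [x, y, z, vx, vy, vz] in the radial /
    along-track / cross-track (Hill) frame."""
    s, c = np.sin(n * t), np.cos(n * t)
    return np.array([
        [4 - 3*c,      0, 0,    s/n,          2*(1 - c)/n,     0],
        [6*(s - n*t),  1, 0,   -2*(1 - c)/n,  (4*s - 3*n*t)/n, 0],
        [0,            0, c,    0,            0,               s/n],
        [3*n*s,        0, 0,    c,            2*s,             0],
        [-6*n*(1 - c), 0, 0,   -2*s,          4*c - 3,         0],
        [0,            0, -n*s, 0,            0,               c],
    ])

# Example: propagate a 100 m radial offset for 10 minutes in LEO.
n = 0.0011                             # ~95-minute orbit [rad/s]
x0 = np.array([100.0, 0, 0, 0, 0, 0])
x = cw_state_transition(n, 600.0) @ x0
print(x[:3])                           # relative position after 600 s
```

The same state-transition matrix can serve as the prediction step of the square-root EKF the snippet mentions.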
Topping, David J.; Wright, Scott A.; Griffiths, Ronald; Dean, David
2014-01-01
As the result of a 12-year program of sediment-transport research and field testing on the Colorado River (6 stations in UT and AZ), Yampa River (2 stations in CO), Little Snake River (1 station in CO), Green River (1 station in CO and 2 stations in UT), and Rio Grande (2 stations in TX), we have developed a physically based method for measuring suspended-sediment concentration and grain size at 15-minute intervals using multifrequency arrays of acoustic-Doppler profilers. This multi-frequency method is able to achieve much higher accuracies than single-frequency acoustic methods because it allows removal of the influence of changes in grain size on acoustic backscatter. The method proceeds as follows. (1) Acoustic attenuation at each frequency is related to the concentration of silt and clay with a known grain-size distribution in a river cross section using physical samples and theory. (2) The combination of acoustic backscatter and attenuation at each frequency is uniquely related to the concentration of sand (with a known reference grain-size distribution) and the concentration of silt and clay (with a known reference grain-size distribution) in a river cross section using physical samples and theory. (3) Comparison of the suspended-sand concentrations measured at each frequency using this approach then allows theory-based calculation of the median grain size of the suspended sand and final correction of the suspended-sand concentration to compensate for the influence of changing grain size on backscatter. Although this method of measuring suspended-sediment concentration is somewhat less accurate than using conventional samplers in either the EDI or EWI methods, it is much more accurate than estimating suspended-sediment concentrations using calibrated pump measurements or single-frequency acoustics. Though the EDI and EWI methods provide the most accurate measurements of suspended-sediment concentration, these measurements are labor-intensive, expensive, and may be impossible to collect at time intervals shorter than those over which discharge-independent changes in suspended-sediment concentration can occur (< hours). Therefore, our physically based multi-frequency acoustic method shows promise as a cost-effective, valid approach for calculating suspended-sediment loads in rivers at a level of accuracy sufficient for many scientific and management purposes.
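A deliberately simplified sketch of the three-step inversion described above. Every calibration coefficient and the final reconciliation step below are invented placeholders; the actual USGS method derives them from physical samples and acoustic theory at each station.

```python
import numpy as np

# Hypothetical per-frequency calibrations (in practice derived from
# physical samples and acoustic theory at each station).
ATTEN_PER_MGL = {1.0: 0.0008, 2.0: 0.0030}  # dB/m per mg/L of silt+clay

def silt_clay_conc(atten_db_per_m, freq_mhz):
    """Step 1: sediment-induced attenuation -> silt+clay concentration."""
    return atten_db_per_m / ATTEN_PER_MGL[freq_mhz]

def sand_conc(backscatter_db, b0, b1):
    """Step 2: attenuation-corrected backscatter -> sand concentration
    under a reference grain-size distribution (b0, b1 from regression)."""
    return 10.0 ** ((backscatter_db - b0) / b1)

csc = silt_clay_conc(0.24, 2.0)          # 0.24 dB/m excess -> 80 mg/L

# Step 3 (schematic): the frequencies disagree when the true grain size
# drifts from the reference; the spread between single-frequency sand
# estimates is used to infer median grain size and correct the
# concentration. A consistent value between the two is chosen here as a
# placeholder for the theory-based correction.
c_1mhz = sand_conc(82.0, b0=40.0, b1=20.0)   # mg/L at 1 MHz
c_2mhz = sand_conc(88.0, b0=44.0, b1=20.0)   # mg/L at 2 MHz
c_corrected = np.sqrt(c_1mhz * c_2mhz)       # placeholder reconciliation
print(round(csc, 1), round(c_1mhz, 1), round(c_2mhz, 1), round(c_corrected, 1))
```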
Telemedicine Consultations in Oral and Maxillofacial Surgery: A Follow-Up Study.
Wood, Eric W; Strauss, Robert A; Janus, Charles; Carrico, Caroline K
2016-02-01
The purpose of this study was to follow up on a previous study evaluating the efficiency and reliability of telemedicine consultations for preoperative assessment of patients. A retrospective study of 335 patients over a 6-year period was performed to evaluate success rates of telemedicine consultations in adequately assessing patients for surgical treatment under anesthesia. Success or failure of the telemedicine consultation was measured by the ability to triage patients appropriately for the hospital operating room versus the clinic, to provide an accurate diagnosis and treatment plan, and to provide a sufficient medical and physical assessment for planned anesthesia. Data on the average distance traveled, together with data from a previous telemedicine study performed by the National Institute of Justice, were used to estimate the cost savings of using telemedicine consultations over the 6-year period. Practitioners performing the consultation were successful 92.2% of the time in using the data collected to make a diagnosis and treatment plan. Patients were triaged correctly 99.6% of the time for the clinic or hospital operating room. Most patients (98.0%) were given a sufficient medical and physical assessment and were able to undergo surgery with anesthesia as planned at the clinic appointment immediately after telemedicine consultation. Most patients (95.9%) were given an accurate diagnosis and treatment plan. The estimated amount saved by providing consultation by telemedicine and eliminating in-office consultation was substantial at $134,640. This study confirms the findings from previous studies that telemedicine consultations are as reliable as those performed by traditional methods. Copyright © 2016 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
Read clouds uncover variation in complex regions of the human genome
Bishara, Alex; Liu, Yuling; Weng, Ziming; Kashef-Haghighi, Dorna; Newburger, Daniel E.; West, Robert; Sidow, Arend; Batzoglou, Serafim
2015-01-01
Although an increasing amount of human genetic variation is being identified and recorded, determining variants within repeated sequences of the human genome remains a challenge. Most population and genome-wide association studies have therefore been unable to consider variation in these regions. Core to the problem is the lack of a sequencing technology that produces reads with sufficient length and accuracy to enable unique mapping. Here, we present a novel methodology of using read clouds, obtained by accurate short-read sequencing of DNA derived from long fragment libraries, to confidently align short reads within repeat regions and enable accurate variant discovery. Our novel algorithm, Random Field Aligner (RFA), captures the relationships among the short reads governed by the long read process via a Markov Random Field. We utilized a modified version of the Illumina TruSeq synthetic long-read protocol, which yielded shallow-sequenced read clouds. We test RFA through extensive simulations and apply it to discover variants on the NA12878 human sample, for which shallow TruSeq read cloud sequencing data are available, and on an invasive breast carcinoma genome that we sequenced using the same method. We demonstrate that RFA facilitates accurate recovery of variation in 155 Mb of the human genome, including 94% of 67 Mb of segmental duplication sequence and 96% of 11 Mb of transcribed sequence, that are currently hidden from short-read technologies. PMID:26286554
Anastasi, Giuseppe; Cutroneo, Giuseppina; Bruschetta, Daniele; Trimarchi, Fabio; Ielitro, Giuseppe; Cammaroto, Simona; Duca, Antonio; Bramanti, Placido; Favaloro, Angelo; Vaccarino, Gianluigi; Milardi, Demetrio
2009-01-01
We have applied high-quality medical imaging techniques to study the structure of the human ankle. Direct volume rendering, using specific algorithms, transforms conventional two-dimensional (2D) magnetic resonance image (MRI) series into 3D volume datasets. This tool allows high-definition visualization of single or multiple structures for diagnostic, research, and teaching purposes. No other image reformatting technique so accurately highlights each anatomic relationship and preserves soft tissue definition. Here, we used this method to study the structure of the human ankle to analyze tendon–bone–muscle relationships. We compared ankle MRI and computerized tomography (CT) images from 17 healthy volunteers, aged 18–30 years (mean 23 years). An additional subject had a partial rupture of the Achilles tendon. The MRI images demonstrated superiority in overall quality of detail compared to the CT images. The MRI series accurately rendered soft tissue and bone in simultaneous image acquisition, whereas CT required several window-reformatting algorithms, with loss of image data quality. We obtained high-quality digital images of the human ankle that were sufficiently accurate for surgical and clinical intervention planning, as well as for teaching human anatomy. Our approach demonstrates that complex anatomical structures such as the ankle, which is rich in articular facets and ligaments, can be easily studied non-invasively using MRI data. PMID:19678857
Steps for the autologous ex vivo perfused porcine liver-kidney experiment.
Chung, Wen Yuan; Eltweri, Amar M; Isherwood, John; Haqq, Jonathan; Ong, Seok Ling; Gravante, Gianpiero; Lloyd, David M; Metcalfe, Matthew S; Dennison, Ashley R
2013-12-18
The use of ex vivo perfused models can mimic the physiological conditions of the liver for short periods, but maintaining normal homeostasis for an extended perfusion period is challenging. We have added the kidney to our previous ex vivo perfused liver experimental model to reproduce a more accurate physiological state for prolonged experiments without using live animals. Five intact livers and kidneys were retrieved post-mortem from sacrificed pigs on different days and perfused for a minimum of 6 hr. Hourly arterial blood gases were obtained to analyze pH, lactate, glucose and renal parameters. The primary endpoint was to investigate the effect of adding one kidney to the model on the acid-base balance, glucose, and electrolyte levels. The results of this liver-kidney experiment were compared to the results of five previous liver-only perfusion models. In summary, with the addition of one kidney to the ex vivo liver circuit, hyperglycemia and metabolic acidosis were improved. In addition, this model reproduces the physiological and metabolic responses of the liver sufficiently accurately to obviate the need for the use of live animals. The ex vivo liver-kidney perfusion model can be used as an alternative method in organ-specific studies. It provides a disconnection from numerous systemic influences and allows specific and accurate adjustments of arterial and venous pressures and flow.
Protein structure determination by electron diffraction using a single three-dimensional nanocrystal
Clabbers, M. T. B.; van Genderen, E.; Wan, W.; Wiegers, E. L.; Gruene, T.; Abrahams, J. P.
2017-01-01
Three-dimensional nanometre-sized crystals of macromolecules currently resist structure elucidation by single-crystal X-ray crystallography. Here, a single nanocrystal with a diffracting volume of only 0.14 µm³, i.e. no more than 6 × 10⁵ unit cells, provided sufficient information to determine the structure of a rare dimeric polymorph of hen egg-white lysozyme by electron crystallography. This is at least an order of magnitude smaller than was previously possible. The molecular-replacement solution, based on a monomeric polyalanine model, provided sufficient phasing power to show side-chain density, and automated model building was used to reconstruct the side chains. Diffraction data were acquired using the rotation method with parallel beam diffraction on a Titan Krios transmission electron microscope equipped with a novel in-house-designed 1024 × 1024 pixel Timepix hybrid pixel detector for low-dose diffraction data collection. Favourable detector characteristics include the ability to accurately discriminate single high-energy electrons from X-rays and count them, fast readout to finely sample reciprocal space and a high dynamic range. This work, together with other recent milestones, suggests that electron crystallography can provide an attractive alternative in determining biological structures. PMID:28876237
NASA Technical Reports Server (NTRS)
Bergrun, N. R.
1951-01-01
An empirical method for the determination of the area, rate, and distribution of water-drop impingement on airfoils of arbitrary section is presented. The procedure represents an initial step toward the development of a method which is generally applicable in the design of thermal ice-prevention equipment for airplane wing and tail surfaces. Results given by the proposed empirical method are expected to be sufficiently accurate for the purpose of heated-wing design, and can be obtained from a few numerical computations once the velocity distribution over the airfoil has been determined. The empirical method presented for incompressible flow is based on results of extensive water-drop trajectory computations for five airfoil cases which consisted of 15-percent-thick airfoils encompassing a moderate lift-coefficient range. The differential equations pertaining to the paths of the drops were solved by a differential analyzer. The method developed for incompressible flow is extended to the calculation of area and rate of impingement on straight wings in subsonic compressible flow to indicate the probable effects of compressibility for airfoils at low subsonic Mach numbers.
Accurate tumor localization and tracking in radiation therapy using wireless body sensor networks.
Pourhomayoun, Mohammad; Jin, Zhanpeng; Fowler, Mark
2014-07-01
Radiation therapy is an effective method to combat cancerous tumors by killing the malignant cells or controlling their growth. Knowing the exact position of the tumor is a critical prerequisite in radiation therapy. Since the position of the tumor changes during the course of radiation therapy due to the patient's movements and respiration, a real-time tumor tracking method is highly desirable in order to deliver a sufficient dose of radiation to the tumor region without damaging the surrounding healthy tissues. In this paper, we develop a novel tumor positioning method based on spatial sparsity. We estimate the position by processing the received signals from only one implantable RF transmitter. The proposed method uses fewer sensors than common magnetic-transponder-based approaches. The performance of the proposed method is evaluated in two different cases: (1) when the tissue configuration is perfectly determined (acquired beforehand by MRI or CT) and (2) when there are some uncertainties about the tissue boundaries. The results demonstrate the high accuracy and performance of the proposed method, even when the tissue boundaries are imperfectly known. Copyright © 2014 Elsevier Ltd. All rights reserved.
Efficient Radiative Transfer for Dynamically Evolving Stratified Atmospheres
NASA Astrophysics Data System (ADS)
Judge, Philip G.
2017-12-01
We present a fast multi-level and multi-atom non-local thermodynamic equilibrium radiative transfer method for dynamically evolving stratified atmospheres, such as the solar atmosphere. The preconditioning method of Rybicki & Hummer (RH92) is adopted, but, to gain speed and stability, a “second-order escape probability” scheme is implemented within the framework of the RH92 method, in which frequency- and angle-integrals are carried out analytically. This minimizes the computational work needed, at some expense in numerical accuracy. The iteration scheme is local; the formal solutions for the intensities are the only non-local component. At present the methods have been coded for vertical transport, applicable to atmospheres that are highly stratified. The probabilistic method seems adequately fast, stable, and sufficiently accurate for exploring dynamical interactions between the evolving MHD atmosphere and radiation using current computer hardware. Current 2D and 3D dynamics codes do not include this interaction as consistently as the current method does. The solutions generated may ultimately serve as initial conditions for dynamical calculations including full 3D radiative transfer. The National Center for Atmospheric Research is sponsored by the National Science Foundation.
NASA Astrophysics Data System (ADS)
Kumamoto, Yasuaki; Minamikawa, Takeo; Kawamura, Akinori; Matsumura, Junichi; Tsuda, Yuichiro; Ukon, Juichiro; Harada, Yoshinori; Tanaka, Hideo; Takamatsu, Tetsuro
2017-02-01
Nerve-sparing surgery is essential to avoid functional deficits of the limbs and organs. Raman scattering, a label-free, minimally invasive, and accurate modality, is one of the best candidate technologies for detecting nerves during nerve-sparing surgery. However, Raman scattering imaging is too time-consuming to be employed in surgery. Here we present a rapid and accurate nerve visualization method using a multipoint Raman imaging technique that enables simultaneous spectral measurement from different locations (n=32) of a sample. Five seconds are sufficient for measuring n=32 spectra with good S/N from a given tissue. Principal component regression discriminant analysis discriminated spectra obtained from peripheral nerves (n=863 from n=161 myelinated nerves) and connective tissue (n=828 from n=121 tendons) with sensitivity and specificity of 88.3% and 94.8%, respectively. Because a multipoint-Raman-derived tissue discrimination image is too sparse to visualize nerve arrangement, we supplemented it with morphological information obtained from a bright-field image. When merged with the sparse tissue discrimination image, a morphological image of a sample shows what portion of the Raman measurement points in an arbitrary structure is determined to be nerve. Setting the nerve detection criterion at 40% or more "nerve" points in a structure, myelinated nerves (n=161) and tendons (n=121) were discriminated with sensitivity and specificity of 97.5%. The presented technique, utilizing a sparse multipoint Raman image and a bright-field image, enables rapid, safe, and accurate detection of peripheral nerves.
Predicting shrinkage and warpage in injection molding: Towards automatized mold design
NASA Astrophysics Data System (ADS)
Zwicke, Florian; Behr, Marek; Elgeti, Stefanie
2017-10-01
It is an inevitable part of any plastics molding process that the material undergoes some shrinkage during solidification. Mainly due to unavoidable inhomogeneities in the cooling process, the overall shrinkage cannot be assumed to be homogeneous in all volumetric directions. The direct consequence is warpage. The accurate prediction of such shrinkage and warpage effects has been the subject of a considerable amount of research, but it is important to note that this behavior depends greatly on the type of material that is used as well as the process details. Without limiting ourselves to any specific properties of certain materials or process designs, we aim to develop a method for the automated design of a mold cavity that will produce correctly shaped moldings after solidification. Essentially, this can be stated as a shape optimization problem, where the cavity shape is optimized to fulfill some objective function that measures defects in the molding shape. In order to be able to develop and evaluate such a method, we first require simulation methods for the different steps involved in the injection molding process that can represent the phenomena responsible for shrinkage and warpage in a sufficiently accurate manner. As a starting point, we consider the solidification of purely amorphous materials. In this case, the material slowly transitions from fluid-like to solid-like behavior as it cools down. This behavior is modeled using adjusted viscoelastic material models. Once the material has passed a certain temperature threshold during cooling, any viscous effects are neglected and the behavior is assumed to be fully elastic. Non-linear elastic laws are used to predict shrinkage and warpage that occur after this point. We will present the current state of these simulation methods and show some first approaches towards optimizing the mold cavity shape based on these methods.
Solving large scale structure in ten easy steps with COLA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tassev, Svetlin; Zaldarriaga, Matias; Eisenstein, Daniel J., E-mail: stassev@cfa.harvard.edu, E-mail: matiasz@ias.edu, E-mail: deisenstein@cfa.harvard.edu
2013-06-01
We present the COmoving Lagrangian Acceleration (COLA) method: an N-body method for solving for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). Unlike standard N-body methods, the COLA method can straightforwardly trade accuracy at small scales in order to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing, as those catalogs are essential for performing detailed error analysis for ongoing and future surveys of LSS. As an illustration, we ran a COLA-based N-body code on a box of size 100 Mpc/h with particles of mass ≈ 5 × 10⁹ M☉/h. Running the code with only 10 timesteps was sufficient to obtain an accurate description of halo statistics down to halo masses of at least 10¹¹ M☉/h. This is only at a modest speed penalty when compared to mocks obtained with LPT. A standard detailed N-body run is orders of magnitude slower than our COLA-based code. The speed-up we obtain with COLA is due to the fact that we calculate the large-scale dynamics exactly using LPT, while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos. Achieving a similar level of accuracy in halo statistics without the COLA method requires at least 3 times more timesteps than when COLA is employed.
Microorganism Identification Based On MALDI-TOF-MS Fingerprints
NASA Astrophysics Data System (ADS)
Elssner, Thomas; Kostrzewa, Markus; Maier, Thomas; Kruppa, Gary
Advances in MALDI-TOF mass spectrometry have enabled the development of a rapid, accurate and specific method for the identification of bacteria directly from colonies picked from culture plates, which we have named the MALDI Biotyper. The picked colonies are placed on a target plate, a drop of matrix solution is added, and a pattern of protein molecular weights and intensities, "the protein fingerprint" of the bacteria, is produced by the MALDI-TOF mass spectrometer. The obtained protein mass fingerprint, representing a molecular signature of the microorganism, is then matched against a database containing a library of previously measured protein mass fingerprints, and scores for the match to every library entry are produced. An ID is obtained if a score is returned over a pre-set threshold. The sensitivity of the technique is such that only approximately 10⁴ bacterial cells are needed, meaning that an overnight culture is sufficient, and the results are obtained in minutes after culture. The improvement in time to result over biochemical methods, and the capability to perform a non-targeted identification of bacteria and spores, potentially makes this method suitable for use in the detect-to-treat timeframe in a bioterrorism event. In the case of white-powder samples, the infectious spore is present in sufficient quantity in the powder so that the MALDI Biotyper result can be obtained directly from the white powder, without the need for culture. While spores produce very different patterns from the vegetative colonies of the corresponding bacteria, this problem is overcome by simply including protein fingerprints of the spores in the library. Results on spores can be returned within minutes, making the method suitable for use in the "detect-to-protect" timeframe.
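As an illustration of the fingerprint-matching step, the sketch below scores a sample spectrum against library entries by intensity-weighted peak matching within an m/z tolerance. The peak lists and the scoring rule are invented for illustration; the commercial Biotyper scoring scheme is more elaborate.

```python
import numpy as np

def match_score(sample, reference, tol=500e-6):
    """Toy fingerprint match: intensity-weighted fraction of reference
    peaks found in the sample within a relative m/z tolerance (500 ppm
    here)."""
    s_mz = np.array([mz for mz, _ in sample])
    hits, total = 0.0, 0.0
    for mz, inten in reference:
        total += inten
        if np.any(np.abs(s_mz - mz) / mz < tol):
            hits += inten
    return hits / total

# Hypothetical library entries: lists of (m/z, relative intensity).
library = {
    "Bacillus subtilis (hypothetical entry)": [(4306.0, 0.9), (5007.2, 0.4)],
    "E. coli (hypothetical entry)": [(4365.3, 1.0), (6255.4, 0.7)],
}
sample = [(4306.2, 0.8), (5006.9, 0.5), (7274.1, 0.2)]
scores = {name: match_score(sample, ref) for name, ref in library.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 2))   # best-matching library entry
```

An identification would be reported only if the best score exceeds a pre-set threshold, as the abstract describes.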
ESTUARINE-OCEAN EXCHANGE IN A NORTH PACIFIC ESTUARY: COMPARISON OF STEADY STATE AND DYNAMIC MODELS
Nutrient levels in coastal waters must be accurately assessed to determine the nutrient effects of increasing populations on coastal ecosystems. To accomplish this goal, in-field data with sufficient temporal resolution are required to define nutrient sources and sinks, and to ul...
A Tale of Two Representations: The Misinformation Effect and Children's Developing Theory of Mind.
ERIC Educational Resources Information Center
Templeton, Leslie M.; Wilcox, Sharon A.
2000-01-01
Investigated children's representational ability as a cognitive factor underlying the suggestibility of their eyewitness memory. Found that the eyewitness memory of children lacking multirepresentational abilities or sufficient general memory abilities (most 3- and 4-year-olds) was less accurate than eyewitness memory of those with…
Effective environmental policy decisions benefit from stream habitat information that is accurate, precise, and relevant. The recent National Wadeable Streams Assessment (NWSA) carried out by the U.S. EPA required physical habitat information sufficiently comprehensive to facilit...
Initial Results From the USNO Dispersed Fourier Transform Spectrograph
2007-01-25
the full instrument bandpass. ... 5.2. υ Andromedae and Geminorum: To test whether the dFTS system can accurately detect RV variations in a stellar ... prototype dFTS can measure stellar RVs with sufficient accuracy to find exoplanets. We also observed υ Andromedae (a three-planet system) ...
Accurate assessments of nutrient levels in coastal waters are required to determine the nutrient effects of increasing population pressure on coastal ecosystems. To accomplish this goal, in-field data with sufficient temporal resolution are required to define nutrient sources an...
NREL: International Activities - Bilateral Partnerships
...development and use of renewable energy and energy efficiency technologies (partner countries include Algeria, Angola, Argentina, Australia, ...) ... sufficiently accurate information for national-level strategic energy planning ... NREL manages renewable energy cooperation with China under the U.S.-China Renewable Energy Partnership program. This program was ...
21 CFR 113.87 - Operations in the thermal processing room.
Code of Federal Regulations, 2013 CFR
2013-04-01
... (CONTINUED) FOOD FOR HUMAN CONSUMPTION THERMALLY PROCESSED LOW-ACID FOODS PACKAGED IN HERMETICALLY SEALED... should be made. (c) The initial temperature of the contents of the containers to be processed shall be accurately determined and recorded with sufficient frequency to ensure that the temperature of the product is...
21 CFR 113.87 - Operations in the thermal processing room.
Code of Federal Regulations, 2014 CFR
2014-04-01
... (CONTINUED) FOOD FOR HUMAN CONSUMPTION THERMALLY PROCESSED LOW-ACID FOODS PACKAGED IN HERMETICALLY SEALED... should be made. (c) The initial temperature of the contents of the containers to be processed shall be accurately determined and recorded with sufficient frequency to ensure that the temperature of the product is...
NASA Astrophysics Data System (ADS)
Maspero, Matteo; van den Berg, Cornelis A. T.; Zijlstra, Frank; Sikkes, Gonda G.; de Boer, Hans C. J.; Meijer, Gert J.; Kerkmeijer, Linda G. W.; Viergever, Max A.; Lagendijk, Jan J. W.; Seevinck, Peter R.
2017-10-01
An MR-only radiotherapy planning (RTP) workflow would reduce the cost, radiation exposure and uncertainties introduced by CT-MRI registrations. In the case of prostate treatment, one of the remaining challenges currently holding back the implementation of an RTP workflow is the MR-based localisation of intraprostatic gold fiducial markers (FMs), which is crucial for accurate patient positioning. Currently, MR-based FM localisation is clinically performed manually. This is sub-optimal, as manual interaction increases the workload. Attempts to perform automatic FM detection often rely on being able to detect signal voids induced by the FMs in magnitude images. However, signal voids may not always be sufficiently specific, hampering accurate and robust automatic FM localisation. Here, we present an approach that aims at automatic MR-based FM localisation. This method is based on template matching using a library of simulated complex-valued templates, and exploiting the behaviour of the complex MR signal in the vicinity of the FM. Clinical evaluation was performed on seventeen prostate cancer patients undergoing external beam radiotherapy treatment. Automatic MR-based FM localisation was compared to manual MR-based and semi-automatic CT-based localisation (the current gold standard) in terms of detection rate and the spatial accuracy and precision of localisation. The proposed method correctly detected all three FMs in 15/17 patients. The spatial accuracy (mean) and precision (STD) were 0.9 mm and 0.5 mm respectively, which is below the voxel size of 1.1 × 1.1 × 1.2 mm³ and comparable to MR-based manual localisation. FM localisation failed (3/51 FMs) in the presence of bleeding or calcifications in the direct vicinity of the FM. The method was found to be spatially accurate and precise, which is essential for clinical use. To overcome any missed detection, we envision the use of the proposed method along with verification by an observer. This will result in a semi-automatic workflow facilitating the introduction of an MR-only workflow.
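The matching step can be illustrated with a generic normalized cross-correlation of a complex-valued image against one complex template, as below. This is a toy stand-in for the paper's library of simulated marker templates; the synthetic phase pattern and noise level are assumptions for demonstration only.

```python
import numpy as np

def complex_ncc_map(image, template):
    """Magnitude of the normalized cross-correlation between a complex
    image and a complex template (valid region only). Peaks mark
    locations whose complex signal resembles the template."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.linalg.norm(t)
    H, W = image.shape
    out = np.zeros((H - th + 1, W - tw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i+th, j:j+tw]
            p = patch - patch.mean()
            denom = np.linalg.norm(p) * t_norm
            if denom > 0:
                out[i, j] = abs(np.vdot(t, p)) / denom
    return out

# Synthetic example: a dipole-like complex signature hidden in noise.
rng = np.random.default_rng(0)
template = np.exp(1j * np.outer(np.linspace(-1, 1, 7), np.linspace(-1, 1, 7)))
image = rng.standard_normal((64, 64)) * 0.2 + 0j
image[20:27, 30:37] += template
score = complex_ncc_map(image, template)
print(np.unravel_index(score.argmax(), score.shape))  # ~ (20, 30)
```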
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bache, S; Belley, M; Benning, R
2014-06-15
Purpose: Pre-clinical micro-radiation therapy studies often utilize very small beams (∼0.5-5 mm), and require accurate dose delivery in order to effectively investigate treatment efficacy. Here we present a novel high-resolution absolute 3D dosimetry procedure, capable of ∼100-micron isotropic dosimetry in anatomically accurate rodent-morphic phantoms. Methods: Anatomically accurate rat-shaped 3D dosimeters were made using 3D printing techniques from outer body contours and spinal contours outlined on CT. The dosimeters were made from a radiochromic plastic material, PRESAGE, and incorporated high-Z PRESAGE inserts mimicking the spine. A simulated 180-degree spinal arc treatment was delivered through a two-step process: (i) cone-beam-CT image-guided positioning was performed to precisely position the rat-dosimeter for treatment on the XRad225 small animal irradiator; then (ii) treatment was delivered as a simulated spine treatment with a 180-degree arc and a 20 mm × 10 mm cone at 225 kVp. Dose distribution was determined from the optical density change using a high-resolution in-house optical-CT system. Absolute dosimetry was enabled through calibration against a novel nano-particle scintillation detector positioned in a channel in the center of the distribution. Results: Sufficient contrast between regular PRESAGE (tissue equivalent) and high-Z PRESAGE (spinal insert) was observed to enable highly accurate image-guided alignment and targeting. The PRESAGE was found to have linear optical density (OD) change sensitivity with respect to dose (R² = 0.9993). Absolute dose for a 360-second irradiation at isocenter was found to be 9.21 Gy when measured with OD change, and 9.4 Gy with the nano-particle detector, an agreement within 2%. The 3D dose distribution was measured at 500-micron resolution. Conclusion: This work demonstrates, for the first time, the feasibility of accurate absolute 3D dose measurement in anatomically accurate rat phantoms containing variable-density PRESAGE material (tissue equivalent and bone equivalent). This method enables precise treatment verification of micro-radiation therapies, and enhances the robustness of tumor radio-response studies. This work was supported by NIH R01CA100835.
Kefal, Adnan; Yildiz, Mehmet
2017-11-30
This paper investigated the effect of sensor density and alignment for three-dimensional shape sensing of an airplane-wing-shaped thick panel subjected to three different loading conditions, i.e., bending, torsion, and membrane loads. For shape sensing analysis of the panel, the Inverse Finite Element Method (iFEM) was used together with the Refined Zigzag Theory (RZT), in order to enable accurate predictions for transverse deflection and through-the-thickness variation of interfacial displacements. In this study, the iFEM-RZT algorithm is implemented by utilizing a novel three-node C⁰-continuous inverse-shell element, known as i3-RZT. The discrete strain data is generated numerically through performing a high-fidelity finite element analysis on the wing-shaped panel. This numerical strain data represents experimental strain readings obtained from surface patched strain gauges or embedded fiber Bragg grating (FBG) sensors. Three different sensor placement configurations with varying density and alignment of strain data were examined and their corresponding displacement contours were compared with those of reference solutions. The results indicate that a sparse distribution of FBG sensors (uniaxial strain measurements), aligned in only the longitudinal direction, is sufficient for predicting accurate full-field membrane and bending responses (deformed shapes) of the panel, including a true zigzag representation of interfacial displacements. On the other hand, a sparse deployment of strain rosettes (triaxial strain measurements) is enough to produce torsion shapes that are as accurate as those predicted by a dense sensor placement configuration. Hence, the potential applicability and practical aspects of the i3-RZT/iFEM methodology are demonstrated for three-dimensional shape sensing of future aerospace structures.
Precise relative navigation using augmented CDGPS
NASA Astrophysics Data System (ADS)
Park, Chan-Woo
2001-10-01
Autonomous formation flying of multiple vehicles is a revolutionary enabling technology for many future space and earth science missions that require distributed measurements, such as sparse aperture radars and stellar interferometry. The techniques developed for the space applications will also have a significant impact on many terrestrial formation flying missions. One of the key requirements of formation flying is accurate knowledge of the relative positions and velocities between the vehicles. Several researchers have shown that the GPS is a viable sensor to perform this relative navigation. However, there are several limitations in the use of GPS because it requires adequate visibility to the NAVSTAR constellation. For some mission scenarios, such as MEO, GEO and tight formation missions, the visibility/geometry of the constellation may not be sufficient to accurately estimate the relative states. One solution to these problems is to include an RF ranging device onboard the vehicles in the formation and form a local constellation that augments the existing NAVSTAR constellation. These local range measurements, combined with the GPS measurements, can provide a sufficient number of measurements and adequate geometry to solve for the relative states. Furthermore, these RF ranging devices can be designed to provide substantially more accurate measures of the vehicle relative states than the traditional GPS pseudolites. The local range measurements also allow relative vehicle motion to be used to efficiently solve for the cycle ambiguities in real-time. This dissertation presents the development of an onboard ranging sensor and the extension of several related algorithms for a formation of vehicles with both GPS and local transmitters. Key among these are a robust cycle ambiguity estimation method and a decentralized relative navigation filter. The efficient decentralized approach to the GPS-only relative navigation problem is extended to an iterative cascade extended Kalman filtering (ICEKF) algorithm when the vehicles have onboard transmitters. Several ground testbeds were developed to demonstrate the feasibility of the augmentation concept and the relative navigation algorithms. The testbed includes the Stanford Pseudolite Transceiver Crosslink (SPTC), which was developed and extensively tested with a formation of outdoor ground vehicles.
NASA Astrophysics Data System (ADS)
Yang, Juqing; Wang, Dayong; Fan, Baixing; Dong, Dengfeng; Zhou, Weihu
2017-03-01
In-situ intelligent manufacturing for large-volume equipment requires industrial robots with high-accuracy absolute positioning and orientation steering control. Conventional robots mainly employ an offline calibration technology to identify and compensate key robotic parameters. However, the dynamic and static parameters of a robot change nonlinearly, so offline calibration cannot acquire a robot's actual parameters in real time or control the absolute pose of the robot with high accuracy within a large workspace. This study proposes a real-time online absolute pose steering control method for an industrial robot based on six degrees of freedom laser tracking measurement, which adopts comprehensive compensation and correction of differential movement variables. First, the pose steering control system and robot kinematics error model are constructed, and then the pose error compensation mechanism and algorithm are introduced in detail. By accurately measuring the position and orientation of the robot end-tool, mapping the pose error through the computed Jacobian matrix of the joint variables, and correcting the joint variables, real-time online absolute pose compensation for an industrial robot is accurately implemented in simulations and experimental tests. The average positioning error is 0.048 mm and the orientation accuracy is better than 0.01 deg. The results demonstrate that the proposed method is feasible, and the online absolute accuracy of a robot is sufficiently enhanced.
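The core correction step, mapping a measured Cartesian pose error back to joint space through the Jacobian, can be sketched generically as below. This is a standard damped least-squares update under assumed interfaces (robot_jacobian, measure_pose and the gains are hypothetical placeholders), not the authors' published algorithm.

```python
import numpy as np

def correct_joints(q, pose_error, jacobian, gain=0.5, damping=1e-3):
    """One differential correction step: map the tracker-measured
    Cartesian pose error (6-vector: position plus small-angle
    orientation, target minus measured) to joint space via the damped
    pseudo-inverse of the manipulator Jacobian."""
    J = jacobian(q)                    # 6 x n geometric Jacobian at q
    dq = J.T @ np.linalg.solve(J @ J.T + damping * np.eye(6), pose_error)
    return q + gain * dq               # partial step for stability

# Hypothetical usage: iterate measure -> correct until the laser
# tracker reports a pose error below tolerance.
# err = target_pose - measured_pose               # from 6-DOF tracker
# while np.linalg.norm(err) > 1e-4:
#     q = correct_joints(q, err, robot_jacobian)  # model-based Jacobian
#     err = target_pose - measure_pose(q)         # re-measure end-tool
```

Damping keeps the update well behaved near kinematic singularities, which matters when the correction runs online inside a control loop.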
A greedy, graph-based algorithm for the alignment of multiple homologous gene lists.
Fostier, Jan; Proost, Sebastian; Dhoedt, Bart; Saeys, Yvan; Demeester, Piet; Van de Peer, Yves; Vandepoele, Klaas
2011-03-15
Many comparative genomics studies rely on the correct identification of homologous genomic regions using accurate alignment tools. In such cases, the alphabet of the input sequences consists of complete genes, rather than nucleotides or amino acids. As optimal multiple sequence alignment is computationally impractical, a progressive alignment strategy is often employed. However, such an approach is susceptible to the propagation of alignment errors in early pairwise alignment steps, especially when dealing with strongly diverged genomic regions. In this article, we present a novel, accurate and efficient greedy, graph-based algorithm for the alignment of multiple homologous genomic segments, represented as ordered gene lists. Based on provable properties of the graph structure, several heuristics are developed to resolve local alignment conflicts that occur due to gene duplication and/or rearrangement events on the different genomic segments. The performance of the algorithm is assessed by comparing the alignment results of homologous genomic segments in Arabidopsis thaliana to those obtained by using both a progressive alignment method and an earlier graph-based implementation. Especially for datasets that contain strongly diverged segments, the proposed method achieves a substantially higher alignment accuracy, and proves to be sufficiently fast for large datasets including a few dozen eukaryotic genomes. http://bioinformatics.psb.ugent.be/software. The algorithm is implemented as a part of the i-ADHoRe 3.0 package.
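To make the "genes as alphabet" idea concrete, here is a toy pairwise aligner over gene-family IDs (plain Needleman-Wunsch with illustrative scores). It is not the i-ADHoRe algorithm, which aligns many segments at once on a graph precisely to avoid progressive error propagation, but it shows the problem representation.

```python
def align_gene_lists(a, b, gap=-1, match=2, mismatch=-2):
    """Toy global alignment of two genomic segments represented as
    ordered lists of gene-family IDs; returns the optimal score."""
    n, m = len(a), len(b)
    S = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        S[i][0] = i * gap
    for j in range(1, m + 1):
        S[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            S[i][j] = max(S[i-1][j-1] + sub,   # (mis)match of two genes
                          S[i-1][j] + gap,     # gene lost/gained in b
                          S[i][j-1] + gap)     # gene lost/gained in a
    return S[n][m]

# Two segments sharing an ancestral gene order, with one insertion.
print(align_gene_lists(["F1", "F2", "F3", "F5"],
                       ["F1", "F2", "F4", "F3", "F5"]))   # -> 7
```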
Computing the total atmospheric refraction for real-time optical imaging sensor simulation
NASA Astrophysics Data System (ADS)
Olson, Richard F.
2015-05-01
Fast and accurate computation of light path deviation due to atmospheric refraction is an important requirement for real-time simulation of optical imaging sensor systems. A large body of existing literature covers various methods for application of Snell's Law to the light path ray tracing problem. This paper discusses the adaptation, for real-time simulation, of the atmospheric refraction ray tracing techniques used in mid-1980s LOWTRAN releases. The refraction ray trace algorithm published in a LOWTRAN-6 technical report by Kneizys et al. has been coded in MATLAB for development, and in C-language for simulation use. To this published algorithm we have added tuning parameters for variable path segment lengths, and extensions for Earth-grazing and exoatmospheric "near Earth" ray paths. Model atmosphere properties used to exercise the refraction algorithm were obtained from tables published in another LOWTRAN-6 related report. The LOWTRAN-6 based refraction model is applicable to atmospheric propagation at wavelengths in the IR and visible bands of the electromagnetic spectrum. It has been used during the past two years by engineers at the U.S. Army Aviation and Missile Research, Development and Engineering Center (AMRDEC) in support of several advanced imaging sensor simulations. Recently, a faster (but sufficiently accurate) method using Gauss-Chebyshev quadrature integration for evaluating the refraction integral was adopted.
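For reference, a minimal sketch of the quadrature idea named in the last sentence: Gauss-Chebyshev nodes and uniform weights integrate functions against the 1/sqrt(1 - x^2) weight exactly, and a generic smooth integrand can be handled by folding that weight back in. The demo integrand below is ours, not the paper's refraction integral.

```python
import numpy as np

def gauss_chebyshev(f, a, b, n=64):
    """Approximate the integral of f over [a, b] using n-point
    Gauss-Chebyshev quadrature. Nodes x_k = cos((2k-1)pi/(2n)) with
    uniform weights pi/n are exact against the weight 1/sqrt(1-x^2);
    multiplying by sqrt(1-x^2) absorbs that weight for a generic f."""
    k = np.arange(1, n + 1)
    x = np.cos((2 * k - 1) * np.pi / (2 * n))   # nodes on (-1, 1)
    t = 0.5 * (b - a) * x + 0.5 * (a + b)       # map nodes to (a, b)
    w = (np.pi / n) * np.sqrt(1.0 - x**2)       # absorbed weight
    return 0.5 * (b - a) * np.sum(w * f(t))

# Check against a known integral: the exact value is e - 1.
print(gauss_chebyshev(np.exp, 0.0, 1.0, n=200))   # ~ 1.7182818
```

No function evaluations land exactly at the endpoints, which is convenient when the integrand misbehaves at a path boundary.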
Itsukage, Shizu; Sowa, Yoshihiro; Goto, Mariko; Taguchi, Tetsuya; Numajiri, Toshiaki
2017-01-01
Objective: Preoperative prediction of breast volume is important in the planning of breast reconstructive surgery. In this study, we prospectively estimated the accuracy of measurement of breast volume using data from 2 routine modalities, mammography and magnetic resonance imaging, by comparison with volumes of mastectomy specimens. Methods: The subjects were 22 patients (24 breasts) who were scheduled to undergo total mastectomy for breast cancer. Preoperatively, magnetic resonance imaging volume measurement was performed using a medical imaging system and the mammographic volume was calculated using a previously proposed formula. Volumes of mastectomy specimens were measured intraoperatively using a method based on Archimedes' principle and Newton's third law. Results: The average breast volumes measured on magnetic resonance imaging and mammography were 318.47 ± 199.4 mL and 325.26 ± 217.36 mL, respectively. The correlation coefficients with mastectomy specimen volumes were 0.982 for magnetic resonance imaging and 0.911 for mammography. Conclusions: Breast volume measurement using magnetic resonance imaging was highly accurate but requires data analysis software. In contrast, breast volume measurement with mammography requires only a simple formula and is sufficiently accurate, although the accuracy was lower than that obtained with magnetic resonance imaging. These results indicate that mammography could be an alternative modality for breast volume measurement as a substitute for magnetic resonance imaging.
Multi-Group Reductions of LTE Air Plasma Radiative Transfer in Cylindrical Geometries
NASA Technical Reports Server (NTRS)
Scoggins, James; Magin, Thierry Edouard Bertran; Wray, Alan; Mansour, Nagi N.
2013-01-01
Air plasma radiation in Local Thermodynamic Equilibrium (LTE) within cylindrical geometries is studied with an application towards modeling the radiative transfer inside arc-constrictors, a central component of constricted-arc arc jets. A detailed database of spectral absorption coefficients for LTE air is formulated using the NEQAIR code developed at NASA Ames Research Center. The database stores calculated absorption coefficients for 1,051,755 wavelengths between 0.04 µm and 200 µm over a wide temperature (500K to 15 000K) and pressure (0.1 atm to 10.0 atm) range. The multi-group method for spectral reduction is studied by generating a range of reductions including pure binning and banding reductions from the detailed absorption coefficient database. The accuracy of each reduction is compared to line-by-line calculations for cylindrical temperature profiles resembling typical profiles found in arc-constrictors. It is found that a reduction of only 1000 groups is sufficient to accurately model the LTE air radiation over a large temperature and pressure range. In addition to the reduction comparison, the cylindrical-slab formulation is compared with the finite-volume method for the numerical integration of the radiative flux inside cylinders with varying length. It is determined that cylindrical-slabs can be used to accurately model most arc-constrictors due to their high length to radius ratios.
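As a sketch of the group-reduction step described above, the following bins a spectral absorption-coefficient table into contiguous groups and takes a Planck-weighted mean per group. The group-averaging choice and the toy spectrum are illustrative assumptions; the paper evaluates several binning and banding strategies against line-by-line results.

```python
import numpy as np

H = 6.626e-34; C = 2.998e8; KB = 1.381e-23

def planck(lam, T):
    """Planck spectral radiance B_lambda(T); lam in metres, T in K."""
    return (2 * H * C**2 / lam**5) / np.expm1(H * C / (lam * KB * T))

def group_absorption(lam, kappa, T, n_groups):
    """Reduce a spectral absorption-coefficient table to n_groups
    Planck-weighted means over contiguous wavelength bands (one simple
    averaging choice among several possible)."""
    out = []
    for idx in np.array_split(np.arange(lam.size), n_groups):
        w = planck(lam[idx], T)
        out.append(np.sum(w * kappa[idx]) / np.sum(w))
    return np.array(out)

# Toy spectrum: 10^5 wavelengths over 0.04-200 um, synthetic kappa,
# reduced to 1000 groups as in the reduction the abstract recommends.
lam = np.linspace(0.04e-6, 200e-6, 100_000)
kappa = 1.0 + np.sin(lam * 1e7) ** 2
print(group_absorption(lam, kappa, T=8000.0, n_groups=1000).shape)
```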
Khan, Md Nabiul Islam; Hijbeek, Renske; Berger, Uta; Koedam, Nico; Grueters, Uwe; Islam, S M Zahirul; Hasan, Md Asadul; Dahdouh-Guebas, Farid
2016-01-01
In the Point-Centred Quarter Method (PCQM), the mean distance of the first nearest plants in each quadrant of a number of random sample points is converted to plant density. It is a quick method for plant density estimation. In recent publications the estimator equations of simple PCQM (PCQM1) and higher order ones (PCQM2 and PCQM3, which use the distance of the second and third nearest plants, respectively) show discrepancy. This study attempts to review PCQM estimators in order to find the most accurate equation form. We tested the accuracy of different PCQM equations using Monte Carlo Simulations in simulated (having 'random', 'aggregated' and 'regular' spatial patterns) plant populations and empirical ones. PCQM requires at least 50 sample points to ensure a desired level of accuracy. PCQM with a corrected estimator is more accurate than with a previously published estimator. The published PCQM versions (PCQM1, PCQM2 and PCQM3) show significant differences in accuracy of density estimation, i.e. the higher order PCQM provides higher accuracy. However, the corrected PCQM versions show no significant differences among them as tested in various spatial patterns except in plant assemblages with a strong repulsion (plant competition). If N is the number of sample points and R is distance, the corrected estimator of PCQM1 is 4(4N − 1)/(π ∑ R²) but not 12N/(π ∑ R²), of PCQM2 is 4(8N − 1)/(π ∑ R²) but not 28N/(π ∑ R²) and of PCQM3 is 4(12N − 1)/(π ∑ R²) but not 44N/(π ∑ R²) as published. If the spatial pattern of a plant association is random, PCQM1 with a corrected equation estimator and over 50 sample points would be sufficient to provide accurate density estimation. PCQM using just the nearest tree in each quadrant is therefore sufficient, which facilitates sampling of trees, particularly in areas with just a few hundred trees per hectare. PCQM3 provides the best density estimations for all types of plant assemblages including the repulsion process. Since in practice the spatial pattern of a plant association remains unknown before starting a vegetation survey, for field applications the use of PCQM3 along with the corrected estimator is recommended. However, for sparse plant populations, where the use of PCQM3 may pose practical limitations, PCQM2 or PCQM1 would be applied. During application of PCQM in the field, care should be taken to summarize the distance data based on 'the inverse summation of squared distances' but not 'the summation of inverse squared distances' as erroneously published.
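The corrected estimators quoted above reduce to one formula, 4(4kN − 1)/(π ΣR²) for order k, applied to the pooled squared distances. A small sketch (the synthetic field distances are for illustration only):

```python
import numpy as np

def pcqm_density(distances, order=1):
    """Corrected PCQM density estimator.

    distances : array of shape (N, 4); for each of N sample points,
        the distance to the order-th nearest plant in each of the
        four quadrants.
    Returns plants per unit area: 4*(4*order*N - 1) / (pi * sum(R^2)).
    Note: this uses the inverse of the summed squared distances, not
    the sum of inverse squared distances.
    """
    R = np.asarray(distances, dtype=float)
    N = R.shape[0]
    return 4 * (4 * order * N - 1) / (np.pi * np.sum(R ** 2))

# At least 50 sample points are recommended for stable estimates.
rng = np.random.default_rng(1)
d = rng.uniform(0.5, 3.0, size=(60, 4))   # synthetic PCQM1 field data
print(pcqm_density(d, order=1))           # plants per unit area
```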
NASA Technical Reports Server (NTRS)
Morehouse, Melissa B.
2001-01-01
A study is being conducted to improve the propulsion/airframe integration for the Blended Wing-Body (BWB) configuration with boundary layer ingestion nacelles. Two unstructured grid flow solvers, USM3D and FUN3D, have been coupled with different design methods and are being used to redesign the aft wing region and the nacelles to reduce drag and flow separation. An initial study comparing analyses from these two flow solvers against data from a wind tunnel test as well as predictions from the OVERFLOW structured grid code for a BWB without nacelles has been completed. Results indicate that the unstructured grid codes are sufficiently accurate for use in design. Results from the BWB design study will be presented.
Li, Fumin; Ewles, Matthew; Pelzer, Mary; Brus, Theodore; Ledvina, Aaron; Gray, Nicholas; Koupaei-Abyazani, Mohammad; Blackburn, Michael
2013-10-01
Achieving sufficient selectivity in bioanalysis is critical to ensure accurate quantitation of drugs and metabolites in biological matrices. Matrix effects most classically refer to modification of ionization efficiency of an analyte in the presence of matrix components. However, nonanalyte or matrix components present in samples can adversely impact the performance of a bioanalytical method and are broadly considered as matrix effects. For the current manuscript, we expand the scope to include matrix elements that contribute to isobaric interference and measurement bias. These three categories of matrix effects are illustrated with real examples encountered. The causes, symptoms, and suggested strategies and resolutions for each form of matrix effects are discussed. Each case is presented in the format of situation/action/result to facilitate reading.
Determination of ¹⁵N/¹⁴N and ¹³C/¹²C in Solid and Aqueous Cyanides
Johnson, C.A.
1996-01-01
The stable isotopic compositions of nitrogen and carbon in cyanide compounds can be determined by combusting aliquots in sealed tubes to form N₂ gas and CO₂ gas and analyzing the gases by mass spectrometry. Free cyanide (CN⁻(aq) + HCN(aq)) in simple solutions can also be analyzed by first precipitating the cyanide as copper(II) ferrocyanide and then combusting the precipitate. Reproducibility is ±0.5‰ or better for both δ¹⁵N and δ¹³C. If empirical corrections are made on the basis of carbon yields, the reproducibility of δ¹³C can be improved to ±0.2‰. The analytical methods described herein are sufficiently accurate and precise to apply stable isotope techniques to problems of cyanide degradation in natural waters and industrial process solutions.
Impact of Market Behavior, Fleet Composition, and Ancillary Services on Revenue Sufficiency
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frew, Bethany; Gallo, Giulia; Brinkman, Gregory
Revenue insufficiency, or the missing money problem, occurs when the revenues that generators earn from the market are not sufficient to cover both fixed and variable costs to remain in the market and/or justify investments in new capacity, which may be needed for reliability. The near-zero marginal cost of variable renewable generators further exacerbates these revenue challenges. Estimating the extent of the missing money problem in current electricity markets is an important, nontrivial task that requires representing both how the power system operates and how market participants behave. This paper explores the missing money problem using a production cost model that represented a simplified version of the Electric Reliability Council of Texas (ERCOT) energy-only market for the years 2012-2014. We evaluate how various market structures -- including market behavior, ancillary services, and changing fleet compositions -- affect net revenues in this ERCOT-like system. In most production cost modeling exercises, resources are assumed to offer their marginal capabilities at marginal costs. Although this assumption is reasonable for feasibility studies and long-term planning, it does not adequately consider the market behaviors that impact revenue sufficiency. In this work, we simulate a limited set of market participant strategic bidding behaviors by means of different sets of markups; these markups are applied to the true production costs of all gas generators, which are the most prominent generators in ERCOT. Results show that markups can help generators increase their net revenues overall, although net revenues may increase or decrease depending on the technology and the year under study. Results also confirm that conventional, variable-cost-based production cost simulations do not capture prices accurately, and this particular feature calls for proxies for strategic behaviors (e.g., markups) and more accurate representations of how electricity markets work. The analysis also shows that generators face revenue sufficiency challenges in this ERCOT-like energy-only market model; net revenues provided by the market in all base markup cases and sensitivity scenarios (except when a large fraction of the existing coal fleet is retired) are not sufficient to justify investments in new capacity for thermal and nuclear power units. Overall, the work described in this paper points to the need for improved behavioral models of electricity markets to more accurately study current and potential market design issues that could arise in systems with high penetrations of renewable generation.
Wide-range radioactive-gas-concentration detector
Anderson, D.F.
1981-11-16
A wide-range radioactive-gas-concentration detector and monitor capable of measuring radioactive-gas concentrations over a range of eight orders of magnitude is described. The device is designed to have an ionization chamber sufficiently small to give a fast response time for measuring radioactive gases but sufficiently large to provide accurate readings at low concentration levels. Closely spaced parallel-plate grids provide a uniform electric field in the active region to improve the accuracy of measurements and reduce ion migration time so as to virtually eliminate errors due to ion recombination. The parallel-plate grids are fabricated with a minimal surface area to reduce the effects of contamination resulting from adsorption of contaminating materials onto the surface of the grids. Additionally, the ionization-chamber wall is spaced a sufficient distance from the active region of the ionization chamber to minimize contamination effects.
Kaakinen, M; Huttunen, S; Paavolainen, L; Marjomäki, V; Heikkilä, J; Eklund, L
2014-01-01
Phase-contrast illumination is a simple and the most commonly used microscopy method for observing nonstained living cells. Automatic cell segmentation and motion analysis provide tools to analyze single-cell motility in large cell populations. However, the challenge is to find a sophisticated method that is sufficiently accurate to generate reliable results, robust enough to function under the wide range of illumination conditions encountered in phase-contrast microscopy, and computationally light enough for efficient analysis of a large number of cells and image frames. To develop better automatic tools for the analysis of low-magnification phase-contrast images in time-lapse cell migration movies, we investigated the performance of a cell segmentation method that is based on the intrinsic properties of maximally stable extremal regions (MSER). MSER was found to be reliable and effective in a wide range of experimental conditions. When compared to commonly used segmentation approaches, MSER required negligible preoptimization steps, thus dramatically reducing the computation time. To analyze cell migration characteristics in time-lapse movies, the MSER-based automatic cell detection was accompanied by a Kalman filter multiobject tracker that efficiently tracked individual cells even in confluent cell populations. This allowed quantitative cell motion analysis resulting in accurate measurements of the migration magnitude and direction of individual cells, as well as characteristics of collective migration of cell groups. Our results demonstrate that MSER accompanied by temporal data association is a powerful tool for accurate and reliable analysis of the dynamic behaviour of cells in phase-contrast image sequences. These techniques tolerate varying and nonoptimal imaging conditions, and due to their relatively light computational requirements they should help to resolve problems in computationally demanding and often time-consuming large-scale dynamical analysis of cultured cells.
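As a rough illustration of the detection stage, the sketch below runs OpenCV's MSER detector on a single grayscale frame; the parameter values and the file name are assumptions for illustration, not the authors' settings.

```python
import cv2

# Hedged sketch (not the authors' code): MSER detection on one
# phase-contrast frame. "frame.png" is a hypothetical input image.
gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

mser = cv2.MSER_create()
mser.setMinArea(60)      # reject regions smaller than a plausible cell footprint
mser.setMaxArea(2000)    # reject merged clumps and large background patches
mser.setDelta(5)         # stability threshold across intensity levels

regions, bboxes = mser.detectRegions(gray)

# Region centroids can then be handed to a multi-object tracker
# (e.g. one Kalman filter per cell) for frame-to-frame association.
centroids = [r.mean(axis=0) for r in regions]
print(f"{len(centroids)} candidate cells detected")
```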
Sufficient Forecasting Using Factor Models
Fan, Jianqing; Xue, Lingzhou; Yao, Jiawei
2017-01-01
We consider forecasting a single time series when there is a large number of predictors and a possible nonlinear effect. The dimensionality is first reduced via a high-dimensional (approximate) factor model implemented by principal component analysis. Using the extracted factors, we develop a novel forecasting method called the sufficient forecasting, which provides a set of sufficient predictive indices, inferred from high-dimensional predictors, to deliver additional predictive power. The projected principal component analysis is employed to enhance the accuracy of inferred factors when a semi-parametric (approximate) factor model is assumed. Our method is also applicable to cross-sectional sufficient regression using extracted factors. The connection between the sufficient forecasting and the deep learning architecture is explicitly stated. The sufficient forecasting correctly estimates projection indices of the underlying factors even in the presence of a nonparametric forecasting function. The proposed method extends the sufficient dimension reduction to high-dimensional regimes by condensing the cross-sectional information through factor models. We derive asymptotic properties for the estimate of the central subspace spanned by these projection directions, as well as for the estimates of the sufficient predictive indices. We further show that the natural method of running multiple regression of the target on estimated factors yields a linear estimate that actually falls into this central subspace. Our method and theory allow the number of predictors to be larger than the number of observations. We finally demonstrate that the sufficient forecasting improves upon linear forecasting in both simulation studies and an empirical study of forecasting macroeconomic variables. PMID:29731537
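A toy sketch of the two-stage pipeline follows: factors are extracted by PCA, then predictive indices are estimated by sliced inverse regression, which is one standard sufficient dimension reduction estimator (the paper's exact estimator and tuning are not reproduced; data, dimensions, and slice count are invented).

```python
import numpy as np

# Hedged sketch: (1) PCA factors from many predictors, (2) sufficient
# predictive indices of the factors via sliced inverse regression (SIR).
rng = np.random.default_rng(0)
T, p, K, H = 200, 100, 5, 10          # observations, predictors, factors, slices

X = rng.standard_normal((T, p))       # stand-in for the predictor panel
y = rng.standard_normal(T)            # stand-in for the target series

# Stage 1: factors = leading principal components of the centered predictors
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
F = U[:, :K] * np.sqrt(T)             # estimated factors, T x K

# Stage 2: SIR on (F, y): slice y, average factors within slices, and take
# leading eigenvectors of the between-slice covariance of those means
order = np.argsort(y)
slices = np.array_split(order, H)
M = np.zeros((K, K))
for idx in slices:
    m = F[idx].mean(axis=0)
    M += (len(idx) / T) * np.outer(m, m)
eigval, eigvec = np.linalg.eigh(M)
indices = F @ eigvec[:, ::-1][:, :2]  # two sufficient predictive indices
```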
A source-attractor approach to network detection of radiation sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Qishi; Barry, M. L.; Grieme, M.
Radiation source detection using a network of detectors is an active field of research for homeland security and defense applications. We propose the Source-attractor Radiation Detection (SRD) method to aggregate measurements from a network of detectors for radiation source detection. The SRD method models a potential radiation source as a magnet-like attractor that pulls in pre-computed virtual points from the detector locations. A detection decision is made if a sufficient level of attraction, quantified by the increase in the clustering of the shifted virtual points, is observed. Compared with traditional methods, SRD has the following advantages: i) it does not require an accurate estimate of the source location from limited and noise-corrupted sensor readings, unlike the localization-based methods, and ii) its virtual point shifting and clustering calculation involve simple arithmetic operations based on the number of detectors, avoiding the high computational complexity of grid-based likelihood estimation methods. We evaluate its detection performance using canonical datasets from the Domestic Nuclear Detection Office's (DNDO) Intelligence Radiation Sensors Systems (IRSS) tests. SRD achieves both a lower false alarm rate and a lower false negative rate compared to three existing algorithms for network source detection.
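The abstract does not give the exact shift rule, so the following is only a hedged illustration of the attractor idea: virtual points at the detector locations are pulled toward a candidate source in proportion to each detector's reading, and the change in clustering (here, mean pairwise distance) serves as the detection score. The pull formula, gain, and all numbers are assumptions.

```python
import numpy as np

# Hedged illustration of the source-attractor idea, not the paper's algorithm.
def mean_pairwise_dist(pts):
    d = pts[:, None, :] - pts[None, :, :]
    return np.sqrt((d ** 2).sum(-1)).mean()

def srd_score(detectors, counts, candidate, gain=0.1):
    counts = np.asarray(counts, float)
    pull = gain * counts / counts.max()              # stronger reading, stronger pull
    shifted = detectors + pull[:, None] * (candidate - detectors)
    # positive score = points cluster more tightly after shifting
    return mean_pairwise_dist(detectors) - mean_pairwise_dist(shifted)

detectors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
counts = [120, 40, 35, 20]                           # hypothetical count rates
print(srd_score(detectors, counts, candidate=np.array([2.0, 3.0])))
```

Note how only simple arithmetic over the detector set is needed, which matches the abstract's claim of low computational complexity relative to grid-based likelihood methods.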
The complex phase gradient method applied to leaky Lamb waves.
Lenoir, O; Conoir, J M; Izbicki, J L
2002-10-01
The classical phase gradient method applied to the characterization of the angular resonances of an immersed elastic plate, i.e., the angular poles of its reflection coefficient R, was proved to be efficient when their real parts are close to the real zeros of R and their imaginary parts are not too large compared to their real parts. This method consists of plotting the partial reflection coefficient phase derivative with respect to the sine of the incidence angle, considered as real, versus incidence angle. In the vicinity of a resonance, this curve exhibits a Breit-Wigner shape, whose minimum is located at the pole real part and whose amplitude is the inverse of its imaginary part. However, when the imaginary part is large, this method is not sufficiently accurate compared to the exact calculation of the complex angular root. An improvement of this method consists of plotting, in 3D, in the complex angle plane and at a given frequency, the angular phase derivative with respect to the real part of the sine of the incidence angle, considered as complex. When the angular pole is reached, the 3D curve shows a clear-cut transition whose position is easily obtained.
Nelson, Jonathan M.; Kinzel, Paul J.; McDonald, Richard R.; Schmeeckle, Mark
2016-01-01
Recently developed optical and videographic methods for measuring water-surface properties in a noninvasive manner hold great promise for extracting river hydraulic and bathymetric information. This paper describes such a technique, concentrating on the method of infrared videography for measuring surface velocities and both acoustic (laboratory-based) and laser-scanning (field-based) techniques for measuring water-surface elevations. In ideal laboratory situations with simple flows, appropriate spatial and temporal averaging results in accurate water-surface elevations and water-surface velocities. In test cases, this accuracy is sufficient to allow direct inversion of the governing equations of motion to produce estimates of depth and discharge. Unlike other optical techniques for determining local depth that rely on transmissivity of the water column (bathymetric lidar, multi/hyperspectral correlation), this method uses only water-surface information, so even deep and/or turbid flows can be investigated. However, significant errors arise in areas of nonhydrostatic spatial accelerations, such as those associated with flow over bedforms or other relatively steep obstacles. Using laboratory measurements for test cases, the cause of these errors is examined and both a simple semi-empirical method and computational results are presented that can potentially reduce bathymetric inversion errors.
NASA Astrophysics Data System (ADS)
Sukmono, Abdi; Ardiansyah
2017-01-01
Paddy is one of the most important agricultural crops in Indonesia. Indonesia's rice consumption per capita in 2013 amounted to 78.82 kg/capita/year. In 2017, the Indonesian government set the goal of making Indonesia self-sufficient in food. To support this, the government must be able to secure the supply of staple foods, which requires, among other things, accurate rice field mapping. Remote sensing offers a fast and practical method for such mapping. In this study, multi-temporal Landsat 8 imagery is used to identify rice fields based on rice planting time, combined with several methods for extracting information from the imagery: the Normalized Difference Vegetation Index (NDVI), Principal Component Analysis (PCA), and band combination. Image classification used nine classes: water, settlements, mangrove, gardens, fields, and rice fields 1st, 2nd, 3rd, and 4th. The rice field area obtained was 50,009 ha with the PCA method, 51,016 ha with the band combination, and 45,893 ha with the NDVI method. The classification accuracies were 84.848% (PCA), 81.818% (band combination), and 75.758% (NDVI).
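For reference, NDVI is computed from the near-infrared and red reflectances, which for Landsat 8 OLI are bands 5 and 4. The sketch below shows the calculation; the array values are invented, and in practice the bands would be read from the scene files with a raster library.

```python
import numpy as np

# Hedged sketch: NDVI from Landsat 8 surface reflectance
# (band 5 = near-infrared, band 4 = red). Values are illustrative.
def ndvi(nir, red, eps=1e-6):
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)   # ranges over [-1, 1]

nir = np.array([[0.45, 0.50], [0.30, 0.12]])
red = np.array([[0.08, 0.07], [0.10, 0.11]])
print(ndvi(nir, red))  # high values flag dense vegetation; class thresholds
                       # would be chosen per planting stage
```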
NASA Astrophysics Data System (ADS)
Zabelskii, D. V.; Vlasov, A. V.; Ryzhykau, Yu L.; Murugova, T. N.; Brennich, M.; Soloviov, D. V.; Ivankov, O. I.; Borshchevskiy, V. I.; Mishin, A. V.; Rogachev, A. V.; Round, A.; Dencher, N. A.; Büldt, G.; Gordeliy, V. I.; Kuklin, A. I.
2018-03-01
The method of small angle scattering (SAS) is widely used in biophysical research on proteins in aqueous solutions. Obtaining low-resolution structures of proteins remains highly valuable despite the advances in high-resolution methods such as X-ray diffraction, cryo-EM, etc. SAS offers the unique possibility to obtain structural information under conditions close to those of functional assays, i.e. in solution, without different additives, in the mg/mL concentration range. The SAS method has a long history, but there are still many uncertainties related to data treatment. We compared 1D SAS profiles of apoferritin obtained by X-ray diffraction (XRD) and SAS methods. It is shown that the SAS curves computed from the crystallographic structure of apoferritin differ from the measured ones more significantly than would be expected from the resolution of the SAS instrument. The extrapolation to infinite dilution (EID) method does not sufficiently exclude dimerization and oligomerization effects and therefore cannot guarantee the total absence of dimer contributions in the final SAS curve. In this study, we show that the EID SAXS, EID SANS, and SEC-SAXS methods give complementary results, and when they are used together, they yield the most accurate results and the highest confidence in SAS data analysis of proteins.
Evaluation of water-quality data and monitoring program for Lake Travis, near Austin, Texas
Rast, Walter; Slade, Raymond M.
1998-01-01
The multiple-comparison tests indicate that, for some constituents, a single sampling site for a constituent or property might adequately characterize the water quality of Lake Travis for that constituent or property. However, multiple sampling sites are required to provide information of sufficient temporal and spatial resolution to accurately evaluate other water-quality constituents for the reservoir. For example, the water-quality data from surface samples and from bottom samples indicate that nutrients (nitrogen, phosphorus) might require additional sampling sites for a more accurate characterization of their in-lake dynamics.
NASA Technical Reports Server (NTRS)
Klassen, Steve; Bugbee, Bruce
2005-01-01
Accurate shortwave radiation data is critical to evapotranspiration (ET) models used for developing irrigation schedules to optimize crop production while saving water, minimizing fertilizer, herbicide, and pesticide applications, reducing soil erosion, and protecting surface and ground water quality. Low cost silicon cell pyranometers have proven to be sufficiently accurate and robust for widespread use in agricultural applications under unobstructed daylight conditions. More expensive thermopile pyranometers are required for use as calibration standards and measurements under light with unique spectral properties (electric lights, under vegetation, in greenhouses and growth chambers). Routine cleaning, leveling, and annual calibration checks will help to ensure the integrity of long-term data.
Neural-Net Processing of Characteristic Patterns From Electronic Holograms of Vibrating Blades
NASA Technical Reports Server (NTRS)
Decker, Arthur J.
1999-01-01
Finite-element-model-trained artificial neural networks can be used to process efficiently the characteristic patterns or mode shapes from electronic holograms of vibrating blades. The models used for routine design may not yet be sufficiently accurate for this application. This document discusses the creation of characteristic patterns; compares model-generated and experimental characteristic patterns; and discusses the neural networks that transform the characteristic patterns into strain or damage information. The current potential to adapt electronic holography to spin rigs, wind tunnels, and engines provides an incentive to have accurate finite element models for training neural networks.
A Distributed Ensemble Approach for Mining Healthcare Data under Privacy Constraints
Li, Yan; Bai, Changxin; Reddy, Chandan K.
2015-01-01
In recent years, electronic health records (EHRs) have been widely adopted at many healthcare facilities in an attempt to improve the quality of patient care and increase the productivity and efficiency of healthcare delivery. These EHRs can accurately diagnose diseases if utilized appropriately. While the EHRs can potentially resolve many of the existing problems associated with disease diagnosis, one of the main obstacles in effectively using them is the patient privacy and sensitivity of the medical information available in the EHR. Due to these concerns, even if the EHRs are available for storage and retrieval purposes, sharing of the patient records between different healthcare facilities has become a major concern and has hampered some of the effective advantages of using EHRs. Due to this lack of data sharing, most of the facilities aim at building clinical decision support systems using a limited amount of patient data from their own EHR systems to provide important diagnosis-related decisions. It becomes quite infeasible for a newly established healthcare facility to build a robust decision making system due to the lack of sufficient patient records. However, to make effective decisions from clinical data, it is indispensable to have large amounts of data to train the decision models. In this regard, there are conflicting objectives of preserving patient privacy and having sufficient data for modeling and decision making. To handle such disparate goals, we develop two adaptive distributed privacy-preserving algorithms based on a distributed ensemble strategy. The basic idea of our approach is to build an elegant model for each participating facility to accurately learn the data distribution, and then transfer the useful healthcare knowledge acquired on their data from these participants in the form of their own decision models, without revealing and sharing the patient-level sensitive data, thus protecting patient privacy. We demonstrate that our approach can successfully build accurate and robust prediction models, under privacy constraints, using healthcare data collected from different geographical locations. We demonstrate the performance of our method using the Type-2 diabetes EHRs accumulated from multiple sources from all fifty states in the U.S. Our method was evaluated on diagnosing diabetes in the presence of an insufficient number of patient records from certain regions without revealing the actual patient data from other regions. Using the proposed approach, we also discovered the important biomarkers, both universal and region-specific, and validated the selected biomarkers using the biomedical literature. PMID:26681811
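The core privacy-preserving idea can be illustrated briefly: each facility trains a model on its own records and shares only the fitted model, and predictions are combined across facilities, so patient-level data never leave a site. The sketch below uses plain ensemble averaging with synthetic data; the paper's adaptive weighting scheme is not reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hedged sketch: local models shared instead of patient data.
rng = np.random.default_rng(1)

def local_model(n):
    X = rng.standard_normal((n, 8))                 # stand-in clinical features
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(n) > 0).astype(int)
    return LogisticRegression(max_iter=1000).fit(X, y)

models = [local_model(n) for n in (300, 500, 200)]  # three facilities

X_new = rng.standard_normal((4, 8))                 # new patients anywhere
probs = np.mean([m.predict_proba(X_new)[:, 1] for m in models], axis=0)
print((probs > 0.5).astype(int))                    # ensemble diagnosis
```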
2014-01-01
Background Parents often fail to correctly perceive their children’s weight status, but no studies have examined the association between parental weight status perception and longitudinal BMIz change (BMI standardized to a reference population) at various ages. We investigated whether parents are able to accurately perceive their child’s weight status at age 5. We also investigated predictors of accurate weight status perception. Finally, we investigated the predictive value of accurate weight status perception in explaining children’s longitudinal weight development up to the age of 9, in children who were overweight at the age of 5. Methods We used longitudinal data from the KOALA Birth Cohort Study. At the child’s age of 5 years, parents filled out a questionnaire regarding child and parent characteristics and their perception of their child’s weight status. We calculated the children’s actual weight status from parental reports of weight and height at ages 2, 5, 6, 7, 8, and 9 years. Regression analyses were used to identify factors predicting which parents accurately perceived their child’s weight status. Finally, regression analyses were used to predict subsequent longitudinal BMIz change in overweight children. Results Eighty-five percent of the parents of overweight children underestimated their child’s weight status at age 5. The child’s BMIz at age 2 and 5 were significant positive predictors of accurate weight status perception (vs. underestimation) in normal weight and overweight children. Accurate weight status perception was a predictor of higher future BMI in overweight children, corrected for actual BMI at baseline. Conclusions Children of parents who accurately perceived their child’s weight status had a higher BMI over time, probably making it easier for parents to correctly perceive their child’s overweight. Parental awareness of the child’s overweight as such may not be sufficient for subsequent weight management by the parents, implying that parents who recognize their child’s overweight may not be able or willing to adequately manage the overweight. PMID:24678601
Buckling Load Calculations of the Isotropic Shell A-8 Using a High-Fidelity Hierarchical Approach
NASA Technical Reports Server (NTRS)
Arbocz, Johann; Starnes, James H.
2002-01-01
As a step towards developing a new design philosophy, one that moves away from the traditional empirical approach used today in design towards a science-based design technology approach, a test series of 7 isotropic shells carried out by Arbocz and Babcock at Caltech is used. It is shown how the hierarchical approach to buckling load calculations proposed by Arbocz et al can be used to perform an approach often called 'high fidelity analysis', where the uncertainties involved in a design are simulated by refined and accurate numerical methods. The Delft Interactive Shell DEsign COde (short, DISDECO) is employed for this hierarchical analysis to provide an accurate prediction of the critical buckling load of the given shell structure. This value is used later as a reference to establish the accuracy of the Level-3 buckling load predictions. As a final step in the hierarchical analysis approach, the critical buckling load and the estimated imperfection sensitivity of the shell are verified by conducting an analysis using a sufficiently refined finite element model with one of the current generation two-dimensional shell analysis codes with the advanced capabilities needed to represent both geometric and material nonlinearities.
A new approximation of Fermi-Dirac integrals of order 1/2 for degenerate semiconductor devices
NASA Astrophysics Data System (ADS)
AlQurashi, Ahmed; Selvakumar, C. R.
2018-06-01
There has been tremendous growth in the field of integrated circuits (ICs) over the past fifty years. Scaling laws have mandated reductions in both lateral and vertical dimensions and a steady increase in doping densities. Most modern semiconductor devices invariably have heavily doped regions where Fermi-Dirac integrals are required. Many attempts have been devoted to developing analytical approximations for Fermi-Dirac integrals, since numerical computations of Fermi-Dirac integrals are difficult to use in semiconductor device modeling, although several highly accurate tabulated functions are available. Most of these analytical expressions are not well suited for semiconductor device applications due to their poor accuracy, the complicated calculations they require, and difficulties in differentiating and integrating them. A new approximation has been developed for the Fermi-Dirac integrals of order 1/2 by using Prony's method, and it is discussed in this paper. The approximation is accurate enough (Mean Absolute Error (MAE) = 0.38%) and simple enough to be used in semiconductor device equations. The new approximation of Fermi-Dirac integrals is applied to a more generalized Einstein relation, which is an important relation in semiconductor devices.
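For context, the Fermi-Dirac integral of order 1/2 can be evaluated by direct numerical quadrature, which gives a reference against which any analytical approximation (such as the Prony-based one, whose coefficients are not listed in the abstract) can be checked. The sketch below uses the unnormalized convention; conventions differing by a factor of Γ(3/2) also appear in the literature.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import expit   # numerically stable 1/(1+exp(-t))

# Hedged sketch: F_{1/2}(eta) = int_0^inf sqrt(x) / (1 + exp(x - eta)) dx
def fd_half(eta):
    integrand = lambda x: np.sqrt(x) * expit(eta - x)
    val, _ = quad(integrand, 0.0, np.inf)
    return val

for eta in (-2.0, 0.0, 5.0):
    print(eta, fd_half(eta))
# Sanity limits: for eta << 0 the integral approaches the nondegenerate
# value (sqrt(pi)/2) * exp(eta); for eta >> 0 it grows like (2/3) * eta**1.5.
```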
On a High-Fidelity Hierarchical Approach to Buckling Load Calculations
NASA Technical Reports Server (NTRS)
Arbocz, Johann; Starnes, James H.; Nemeth, Michael P.
2001-01-01
As a step towards developing a new design philosophy, one that moves away from the traditional empirical approach used today in design towards a science-based design technology approach, a recent test series of 5 composite shells carried out by Waters at NASA Langley Research Center is used. It is shown how the hierarchical approach to buckling load calculations proposed by Arbocz et al can be used to perform an approach often called "high fidelity analysis", where the uncertainties involved in a design are simulated by refined and accurate numerical methods. The Delft Interactive Shell DEsign COde (short, DISDECO) is employed for this hierarchical analysis to provide an accurate prediction of the critical buckling load of the given shell structure. This value is used later as a reference to establish the accuracy of the Level-3 buckling load predictions. As a final step in the hierarchical analysis approach, the critical buckling load and the estimated imperfection sensitivity of the shell are verified by conducting an analysis using a sufficiently refined finite element model with one of the current generation two-dimensional shell analysis codes with the advanced capabilities needed to represent both geometric and material nonlinearities.
Gauge-origin dependence in electronic g-tensor calculations
NASA Astrophysics Data System (ADS)
Glasbrenner, Michael; Vogler, Sigurd; Ochsenfeld, Christian
2018-06-01
We present a benchmark study on the gauge-origin dependence of the electronic g-tensor using data from unrestricted density functional theory calculations with the spin-orbit mean field ansatz. Our data suggest in accordance with previous studies that g-tensor calculations employing a common gauge-origin are sufficiently accurate for small molecules; however, for extended molecules, the introduced errors can become relevant and significantly exceed the basis set error. Using calculations with the spin-orbit mean field ansatz and gauge-including atomic orbitals as a reference, we furthermore show that the accuracy and reliability of common gauge-origin approaches in larger molecules depends strongly on the locality of the spin density distribution. We propose a new pragmatic ansatz for choosing the gauge-origin which takes the spin density distribution into account and gives reasonably accurate values for molecules with a single localized spin center. For more general cases like molecules with several spatially distant spin centers, common gauge-origin approaches are shown to be insufficient for consistently achieving high accuracy. Therefore the computation of g-tensors using distributed gauge-origin methods like gauge-including atomic orbitals is considered as the ideal approach and is recommended for larger molecular systems.
Multi-Purpose Enrollment Projections: A Comparative Analysis of Four Approaches
ERIC Educational Resources Information Center
Allen, Debra Mary
2013-01-01
Providing support for institutional planning is central to the function of institutional research. Necessary for the planning process are accurate enrollment projections. The purpose of the present study was to develop a short-term enrollment model simple enough to be understood by those who rely on it, yet sufficiently complex to serve varying…
25 CFR 214.13 - Diligence; annual expenditures; mining records.
Code of Federal Regulations, 2013 CFR
2013-04-01
... ores by drilling within 1 year test holes aggregating 2,000 feet unless a sufficient ore body is... of the drill holes, to justify the expenditure, the sinking of a shaft to the ore body, and the... leased premises accurate records of the drilling, redrilling, or deepening of all holes showing the...
25 CFR 214.13 - Diligence; annual expenditures; mining records.
Code of Federal Regulations, 2014 CFR
2014-04-01
... ores by drilling within 1 year test holes aggregating 2,000 feet unless a sufficient ore body is... of the drill holes, to justify the expenditure, the sinking of a shaft to the ore body, and the... leased premises accurate records of the drilling, redrilling, or deepening of all holes showing the...
Bridging the Gap between Designers and Consumers: The Role of Effective and Accurate Personas
ERIC Educational Resources Information Center
Miaskiewicz, Tomasz
2010-01-01
Firms now routinely collect information about the needs of their customers, but this information is not sufficiently considered during product design decisions. This research examines the relationship between designers and consumers to build an understanding of how the consumer should be represented to increase the consumer focus during the…
Simulation Evaluation of Pilot Inputs for Real Time Modeling During Commercial Flight Operations
NASA Technical Reports Server (NTRS)
Martos, Borja; Ranaudo, Richard; Oltman, Ryan; Myhre, Nick
2017-01-01
Aircraft dynamics characteristics can only be identified from flight data when the aircraft dynamics are excited sufficiently. A preliminary study was conducted into what types and levels of manual piloted control excitation would be required for accurate Real-Time Parameter IDentification (RTPID) results by commercial airline pilots. This includes assessing the practicality for the pilot to provide this excitation when cued, and to further understand if pilot inputs during various phases of flight provide sufficient excitation naturally. An operationally representative task was evaluated by 5 commercial airline pilots using the NASA Ice Contamination Effects Flight Training Device (ICEFTD). Results showed that it is practical to use manual pilot inputs only as a means of achieving good RTPID in all phases of flight and in flight turbulence conditions. All pilots were effective in satisfying excitation requirements when cued. Much of the time, cueing was not even necessary, as just performing the required task provided enough excitation for accurate RTPID estimation. Pilot opinion surveys reported that the additional control inputs required when prompted by the excitation cueing were easy to make, quickly mastered, and required minimal training.
Resolving occlusion and segmentation errors in multiple video object tracking
NASA Astrophysics Data System (ADS)
Cheng, Hsu-Yung; Hwang, Jenq-Neng
2009-02-01
In this work, we propose a method to integrate the Kalman filter and adaptive particle sampling for multiple video object tracking. The proposed framework is able to detect occlusion and segmentation error cases and perform adaptive particle sampling for accurate measurement selection. Compared with traditional particle-filter-based tracking methods, the proposed method generates particles only when necessary. With the concept of adaptive particle sampling, we can avoid the degeneracy problem because the sampling position and range are dynamically determined by parameters that are updated by Kalman filters. There is no need to spend time on processing particles with very small weights. The adaptive appearance for the occluded object refers to the prediction results of Kalman filters to determine the region that should be updated and avoids the problem of using inadequate information to update the appearance in occlusion cases. The experimental results have shown that a small number of particles are sufficient to achieve high positioning and scaling accuracy. Also, the employment of adaptive appearance substantially improves the positioning and scaling accuracy of the tracking results.
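The Kalman-filter backbone that such trackers build on can be shown compactly. The sketch below is a textbook constant-velocity filter for one object's image position; the paper's adaptive particle sampling and occlusion logic are not reproduced, and the noise matrices are assumed values.

```python
import numpy as np

# Hedged sketch: constant-velocity Kalman filter for one tracked object.
dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
              [0, 0, 1, 0], [0, 0, 0, 1]], float)   # state transition
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)   # we observe position only
Q = 0.01 * np.eye(4)                                 # process noise (assumed)
R = 4.0 * np.eye(2)                                  # measurement noise (assumed)

x = np.zeros(4)        # state: [px, py, vx, vy]
P = np.eye(4)

def kf_step(x, P, z):
    x = F @ x                          # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                # update with matched detection z
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

for z in ([10.0, 5.0], [11.1, 5.4], [12.3, 5.9]):
    x, P = kf_step(x, P, np.array(z))
print(x[:2], x[2:])    # filtered position and velocity estimates
```

In the paper's framework, the filter's predicted state and covariance would additionally set the position and range of any particles sampled when occlusion or segmentation errors are detected.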
Modeling the absorption spectrum of the permanganate ion in vacuum and in aqueous solution
NASA Astrophysics Data System (ADS)
Olsen, Jógvan Magnus Haugaard; Hedegård, Erik Donovan
The absorption spectrum of the MnO4- ion has been a test-bed for quantum-chemical methods over the last decades. Its correct description requires highly correlated multiconfigurational methods, which are incompatible with the inclusion of finite-temperature and solvent effects due to their high computational demands. Therefore, implicit solvent models are usually employed. Here we show that implicit solvent models are not sufficiently accurate to model the solvent shift of MnO4-, and we analyze the origins of their failure. We obtain the correct solvent shift for MnO4- in aqueous solution by employing the polarizable embedding (PE) model combined with a range-separated complete active space short-range density functional theory method (CAS-srDFT). Finite-temperature effects are taken into account by averaging over structures obtained from ab initio molecular dynamics simulations. The explicit treatment of finite-temperature and solvent effects facilitates the interpretation of the bands in the low-energy region of the MnO4- absorption spectrum, whose assignment has been elusive.
Optical System Design for Noncontact, Normal Incidence, THz Imaging of in vivo Human Cornea.
Sung, Shijun; Dabironezare, Shahab; Llombart, Nuria; Selvin, Skyler; Bajwa, Neha; Chantra, Somporn; Nowroozi, Bryan; Garritano, James; Goell, Jacob; Li, Alex; Deng, Sophie X; Brown, Elliott; Grundfest, Warren S; Taylor, Zachary D
2018-01-01
Reflection mode Terahertz (THz) imaging of corneal tissue water content (CTWC) is a proposed method for early, accurate detection and study of corneal diseases. Despite promising results from ex vivo and in vivo cornea studies, interpretation of the reflectivity data is confounded by the contact between corneal tissue and the dielectric windows used to flatten the imaging field. Herein, we present an optical design for non-contact THz imaging of the cornea. A beam scanning methodology performs angular, normal-incidence sweeps of a focused beam over the corneal surface while keeping the source, detector, and patient stationary. A quasioptical analysis method is developed to analyze the theoretical resolution and imaging field intensity profile. These results are compared to the electric field distribution computed with a physical optics analysis code. Imaging experiments validate the optical theories behind the design and suggest that quasioptical methods are sufficient for the design of THz corneal imaging systems. Successful imaging operations support the feasibility of non-contact in vivo imaging. We believe that this optical system design will enable the first, clinically relevant, in vivo exploration of CTWC using THz technology.
Enhanced sequencing coverage with digital droplet multiple displacement amplification
Sidore, Angus M.; Lan, Freeman; Lim, Shaun W.; Abate, Adam R.
2016-01-01
Sequencing small quantities of DNA is important for applications ranging from the assembly of uncultivable microbial genomes to the identification of cancer-associated mutations. To obtain sufficient quantities of DNA for sequencing, the small amount of starting material must be amplified significantly. However, existing methods often yield errors or non-uniform coverage, reducing sequencing data quality. Here, we describe digital droplet multiple displacement amplification, a method that enables massive amplification of low-input material while maintaining sequence accuracy and uniformity. The low-input material is compartmentalized as single molecules in millions of picoliter droplets. Because the molecules are isolated in compartments, they amplify to saturation without competing for resources; this yields uniform representation of all sequences in the final product and, in turn, enhances the quality of the sequence data. We demonstrate the ability to uniformly amplify the genomes of single Escherichia coli cells, comprising just 4.7 fg of starting DNA, and obtain sequencing coverage distributions that rival that of unamplified material. Digital droplet multiple displacement amplification provides a simple and effective method for amplifying minute amounts of DNA for accurate and uniform sequencing. PMID:26704978
Sensitivity of electrospray molecular dynamics simulations to long-range Coulomb interaction models
NASA Astrophysics Data System (ADS)
Mehta, Neil A.; Levin, Deborah A.
2018-03-01
Molecular dynamics (MD) electrospray simulations of 1-ethyl-3-methylimidazolium tetrafluoroborate (EMIM-BF4) ionic liquid were performed with the goal of evaluating the influence of long-range Coulomb models on ion emission characteristics. The direct Coulomb (DC), shifted force Coulomb sum (SFCS), and particle-particle particle-mesh (PPPM) long-range Coulomb models were considered in this work. The DC method with a sufficiently large cutoff radius was found to be the most accurate approach for modeling electrosprays, but it is computationally expensive. The Coulomb potential energy modeled by the DC method, in combination with the radial electric fields, was found to be necessary to generate the Taylor cone. The differences observed between the SFCS and the DC in terms of predicting the total ion emission suggest that the former should not be used in MD electrospray simulations. Furthermore, the common assumption of domain periodicity was observed to be detrimental to the accuracy of the capillary-based electrospray simulations.
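To make the comparison concrete, the sketch below contrasts the direct Coulomb pair energy with a standard shifted-force form, in which both the energy and the force go smoothly to zero at the cutoff. This is the common Fennell-Gezelter-style expression and is assumed here; the paper's exact SFCS formulation may differ, and units are reduced (the 1/(4*pi*eps0) prefactor is folded into the charges).

```python
# Hedged sketch: direct vs. shifted-force Coulomb pair energies.
def coulomb_dc(qi, qj, r):
    return qi * qj / r

def coulomb_sfcs(qi, qj, r, rc):
    if r >= rc:
        return 0.0                      # truncated beyond the cutoff
    # energy and force (its -d/dr) both vanish smoothly at r = rc
    return qi * qj * (1.0 / r - 1.0 / rc + (r - rc) / rc**2)

rc = 12.0
for r in (2.0, 6.0, 11.9):
    print(r, coulomb_dc(1.0, -1.0, r), coulomb_sfcs(1.0, -1.0, r, rc))
```

The discarded long-range tail is largest in a strongly polarized structure like a Taylor cone, which is one plausible reading of why the paper finds SFCS mispredicts total ion emission relative to DC.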
Assessment of computational prediction of tail buffeting
NASA Technical Reports Server (NTRS)
Edwards, John W.
1990-01-01
Assessments of the viability of computational methods and the computer resource requirements for the prediction of tail buffeting are made. Issues involved in the use of Euler and Navier-Stokes equations in modeling vortex-dominated and buffet flows are discussed and the requirement for sufficient grid density to allow accurate, converged calculations is stressed. Areas in need of basic fluid dynamics research are highlighted: vorticity convection, vortex breakdown, dynamic turbulence modeling for free shear layers, unsteady flow separation for moderately swept, rounded leading-edge wings, vortex flows about wings at high subsonic speeds. An estimate of the computer run time for a buffeting response calculation for a full span F-15 aircraft indicates that an improvement in computer and/or algorithm efficiency of three orders of magnitude is needed to enable routine use of such methods. Attention is also drawn to significant uncertainties in the estimates, in particular with regard to nonlinearities contained within the modeling and the question of the repeatability or randomness of buffeting response.
Zhan, Xiaobin; Jiang, Shulan; Yang, Yili; Liang, Jian; Shi, Tielin; Li, Xiwen
2015-09-18
This paper proposes an ultrasonic measurement system based on least squares support vector machines (LS-SVM) for inline measurement of particle concentrations in multicomponent suspensions. Firstly, the ultrasonic signals are analyzed and processed, and the optimal feature subset that contributes to the best model performance is selected based on the importance of features. Secondly, the LS-SVM model is tuned, trained, and tested with different feature subsets to obtain the optimal model. In addition, a comparison is made between the partial least squares (PLS) model and the LS-SVM model. Finally, the optimal LS-SVM model with the optimal feature subset is applied to inline measurement of particle concentrations in the mixing process. The results show that the proposed method is reliable and accurate for inline measurement of particle concentrations in multicomponent suspensions, and the measurement accuracy is sufficiently high for industrial application. Furthermore, the proposed method is applicable to dynamically modeling nonlinear systems and provides a feasible way to monitor industrial processes.
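A brief sketch of the modeling stage follows. Kernel ridge regression is used as a close stand-in for LS-SVM regression (both solve a regularized least-squares problem in a kernel feature space; LS-SVM additionally carries a bias term), and the cross-validated grid search mirrors the paper's tuning step. Features and targets are synthetic assumptions.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import GridSearchCV

# Hedged sketch: RBF-kernel least-squares model mapping ultrasonic signal
# features to particle concentration (synthetic data, invented features).
rng = np.random.default_rng(2)
X = rng.uniform(size=(120, 6))    # e.g. attenuation / velocity features
y = 3.0 * X[:, 0] + np.sin(4 * X[:, 1]) + 0.05 * rng.standard_normal(120)

model = GridSearchCV(
    KernelRidge(kernel="rbf"),
    {"alpha": [1e-3, 1e-2, 1e-1], "gamma": [0.5, 1.0, 2.0]},  # tuning grid
    cv=5,
)
model.fit(X, y)
print(model.best_params_, model.score(X, y))
```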
Developing Discontinuous Galerkin Methods for Solving Multiphysics Problems in General Relativity
NASA Astrophysics Data System (ADS)
Kidder, Lawrence; Field, Scott; Teukolsky, Saul; Foucart, Francois; SXS Collaboration
2016-03-01
Multi-messenger observations of the merger of black hole-neutron star and neutron star-neutron star binaries, and of supernova explosions will probe fundamental physics inaccessible to terrestrial experiments. Modeling these systems requires a relativistic treatment of hydrodynamics, including magnetic fields, as well as neutrino transport and nuclear reactions. The accuracy, efficiency, and robustness of current codes that treat all of these problems is not sufficient to keep up with the observational needs. We are building a new numerical code that uses the Discontinuous Galerkin method with a task-based parallelization strategy, a promising combination that will allow multiphysics applications to be treated both accurately and efficiently on petascale and exascale machines. The code will scale to more than 100,000 cores for efficient exploration of the parameter space of potential sources and allowed physics, and the high-fidelity predictions needed to realize the promise of multi-messenger astronomy. I will discuss the current status of the development of this new code.
ODEion--a software module for structural identification of ordinary differential equations.
Gennemark, Peter; Wedelin, Dag
2014-02-01
In the systems biology field, algorithms for structural identification of ordinary differential equations (ODEs) have mainly focused on fixed model spaces like S-systems and/or on methods that require sufficiently good data so that derivatives can be accurately estimated. There is therefore a lack of methods and software that can handle more general models and realistic data. We present ODEion, a software module for structural identification of ODEs. Its main characteristic features are:
• The model space is defined by arbitrary user-defined functions that can be nonlinear in both variables and parameters, such as for example chemical rate reactions.
• ODEion implements computationally efficient algorithms that have been shown to handle sparse and noisy data efficiently. It can run a range of realistic problems that previously required a supercomputer.
• ODEion is easy to use and provides SBML output.
We describe the mathematical problem, the ODEion system itself, and provide several examples of how the system can be used. Available at: http://www.odeidentification.org.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Ying, E-mail: liu.ying.48r@st.kyoto-u.ac.jp; Imashuku, Susumu; Sasaki, Nobuharu
In this study, a portable total reflection x-ray fluorescence (TXRF) spectrometer was used to analyze unknown laboratory hazards that precipitated on exterior surfaces of cooling pipes and fume hood pipes in chemical laboratories. With the aim of examining the accuracy of TXRF analysis for the determination of elemental composition, analytical results were compared with those of wavelength-dispersive x-ray fluorescence spectrometry, scanning electron microscopy with energy-dispersive x-ray spectrometry, energy-dispersive x-ray fluorescence spectrometry, inductively coupled plasma atomic emission spectrometry, x-ray diffraction spectrometry (XRD), and x-ray photoelectron spectroscopy (XPS). Detailed comparison of the data confirmed that the TXRF method by itself was not sufficient to determine all the elements (Z > 11) contained in the samples. In addition, the results suggest that XRD should be combined with XPS in order to accurately determine compound composition. This study demonstrates that at least two analytical methods should be used in order to analyze the composition of unknown real samples.
Kurihara, Miki; Ikeda, Koji; Izawa, Yoshinori; Deguchi, Yoshihiro; Tarui, Hitoshi
2003-10-20
A laser-induced breakdown spectroscopy (LIBS) technique has been applied for detection of unburned carbon in fly ash, and an automated LIBS unit has been developed and applied in a 1000-MW pulverized-coal-fired power plant for real-time measurement, specifically of unburned carbon in fly ash. Good agreement was found between measurement results from the LIBS method and those from the conventional method (Japanese Industrial Standard 8815), with a standard deviation of 0.27%. This result confirms that the measurement of unburned carbon in fly ash by use of LIBS is sufficiently accurate for boiler control. Measurements taken by this apparatus were also integrated into a boiler-control system with the objective of achieving optimal and stable combustion. By control of the rotating speed of a mill rotary separator relative to measured unburned-carbon content, it has been demonstrated that boiler control is possible in an optimized manner by use of the value of the unburned-carbon content of fly ash.
Zhao, Wen; Ma, Hong; Zhang, Hua; Jin, Jiang; Dai, Gang; Hu, Lin
2017-01-01
The cognitive radio wireless sensor network (CR-WSN) is experiencing more and more attention for its capacity to automatically extract broadband instantaneous radio environment information. Obtaining sufficient linearity and spurious-free dynamic range (SFDR) is a significant premise of guaranteeing sensing performance which, however, usually suffers from the nonlinear distortion coming from the broadband radio frequency (RF) front-end in the sensor node. Moreover, unlike other existing methods, the joint effect of non-constant group delay distortion and nonlinear distortion is discussed, and its corresponding solution is provided in this paper. After that, the nonlinearity mitigation architecture based on best delay searching is proposed. Finally, verification experiments, both on simulation signals and signals from real-world measurement, are conducted and discussed. The achieved results demonstrate that with best delay searching, nonlinear distortion can be alleviated significantly and, in this way, spectrum sensing performance is more reliable and accurate. PMID:28956860
Quality control of plant food supplements.
Sanzini, Elisabetta; Badea, Mihaela; Santos, Ariana Dos; Restani, Patrizia; Sievers, Hartwig
2011-12-01
It is essential to guarantee the safety of unprocessed plants and food supplements if consumers' health is to be protected. Although botanicals and their preparations are regulated at EU level, at least in part, there is still considerable discretion at national level, and Member States may choose to classify a product either as a food supplement or as a drug. Accurate data concerning the finished products and the plant used as the starting point are of major importance if risks and safety are to be properly assessed, but in addition standardized criteria for herbal preparation must be laid down and respected by researchers and manufacturers. Physiologically active as well as potentially toxic constituents need to be identified, and suitable analytical methods for their measurement specified, particularly in view of the increasing incidence of economically motivated adulteration of herbal raw materials and extracts. It remains the duty of food operators to keep up with the scientific literature and to provide sufficient information to enable the adaptation of specifications, sampling schemes and analytical methods to a fast-changing environment.
General probability-matched relations between radar reflectivity and rain rate
NASA Technical Reports Server (NTRS)
Rosenfeld, Daniel; Wolff, David B.; Atlas, David
1993-01-01
An improved method for transforming radar-observed reflectivities Ze into rain rate R is presented. The method is based on a formulation of a Ze-R function constrained such that (1) the radar-retrieved pdf of R and all of its moments are identical to those determined from the gauges over a sufficiently large domain, and (2) the fraction of time during which rain exceeds a low but still accurately measurable intensity is, on average, identical for the radar and for simultaneous measurements of collocated gauges. Data measured by a 1.65-deg beamwidth C-band radar and 22 gauges located in the vicinity of Darwin, Australia, are used. The resultant Ze-R functions show a strong range dependence, especially for the rain regimes characterized by strong reflectivity gradients and substantial attenuation. The application of these novel Ze-R functions to the radar data produces excellent matches to the gauge measurements without any systematic bias.
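The heart of probability matching can be sketched in a few lines: pair quantiles of the reflectivity distribution with quantiles of the gauge rain-rate distribution, so that the retrieved R inherits the gauge pdf. The sketch below omits the paper's rain/no-rain conditioning and range dependence, and the sample arrays are synthetic stand-ins for collocated radar and gauge archives.

```python
import numpy as np

# Hedged sketch: quantile-matched Ze-R transformation.
rng = np.random.default_rng(3)
ze_dbz = rng.normal(30, 8, 5000)               # observed reflectivities (dBZ)
r_gauge = np.exp(rng.normal(0.5, 1.0, 5000))   # gauge rain rates (mm/h)

q = np.linspace(0.01, 0.99, 99)
ze_q = np.quantile(ze_dbz, q)
r_q = np.quantile(r_gauge, q)                  # matched pairs define the Ze-R curve

def rain_rate(ze):
    return np.interp(ze, ze_q, r_q)            # lookup via the matched quantiles

print(rain_rate(np.array([20.0, 35.0, 50.0])))
```

By construction, applying rain_rate to the full reflectivity archive reproduces the gauge rain-rate distribution, which is the paper's constraint (1).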
Real-time depth camera tracking with geometrically stable weight algorithm
NASA Astrophysics Data System (ADS)
Fu, Xingyin; Zhu, Feng; Qi, Feng; Wang, Mingming
2017-03-01
We present an approach for real-time camera tracking with a depth stream. Existing methods are prone to drift in scenes without sufficient geometric information. First, we propose a new weight method for the iterative closest point algorithm commonly used in real-time dense mapping and tracking systems. By detecting uncertainty in pose and increasing the weight of points that constrain unstable transformations, our system achieves accurate and robust trajectory estimation results. Our pipeline can be fully parallelized on a GPU and incorporated into current real-time depth camera tracking systems seamlessly. Second, we compare the state-of-the-art weight algorithms and propose a weight degradation algorithm according to the measurement characteristics of a consumer depth camera. Third, we use Nvidia Kepler Shuffle instructions during warp and block reduction to improve the efficiency of our system. Results on the public TUM RGB-D database benchmark demonstrate that our camera tracking system achieves state-of-the-art results both in accuracy and efficiency.
Instillation and Fixation Methods Useful in Mouse Lung Cancer Research.
Limjunyawong, Nathachit; Mock, Jason; Mitzner, Wayne
2015-08-31
The ability to instill live agents, cells, or chemicals directly into the lung without injuring or killing the mice is an important tool in lung cancer research. Although there are a number of methods that have been published showing how to intubate mice for pulmonary function measurements, none are without potential problems for rapid tracheal instillation in large cohorts of mice. In the present paper, a simple and quick method is described that enables an investigator to carry out such instillations in an efficient manner. The method does not require any special tools or lighting and can be learned with very little practice. It involves anesthetizing a mouse, making a small incision in the neck to visualize the trachea, and then inserting an intravenous catheter directly. The small incision is quickly closed with tissue adhesive, and the mice are allowed to recover. A skilled student or technician can do instillations at an average rate of 2 min/mouse. Once the cancer is established, there is frequently a need for quantitative histologic analysis of the lungs. Traditionally, pathologists do not standardize lung inflation during fixation, and analyses are often based on a scoring system that can be quite subjective. While this may sometimes be adequate for gross estimates of the size of a lung tumor, any proper stereological quantification of lung structure or cells requires a reproducible fixation procedure and subsequent lung volume measurement. Here we describe simple, reliable procedures for both fixing the lungs under pressure and then accurately measuring the fixed lung volume. The only requirement is a laboratory balance that is accurate over a range of 1 mg-300 g. The procedures presented here thus could greatly improve the ability to create, treat, and analyze lung cancers in mice.
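The abstract does not spell out how a balance alone yields a volume. A common balance-only approach, assumed here and not necessarily the authors' exact protocol, is Scherle's fluid-displacement method: the fixed lung is suspended fully submerged in fluid on a balance, the reading increase equals the mass of displaced fluid, and volume follows from the fluid density.

```python
# Hedged sketch of the Archimedes/Scherle calculation; the density default
# is an assumed value for 0.9% saline near room temperature.
def lung_volume_ml(mass_gain_g, fluid_density_g_per_ml=1.0048):
    """Suspended organ displaces its own volume of fluid, registering
    that fluid's mass on the balance: V = delta_m / rho."""
    return mass_gain_g / fluid_density_g_per_ml

# Illustrative reading: balance increases by 0.85 g on submersion
print(f"{lung_volume_ml(0.85):.2f} mL")   # hypothetical number
```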
Identification of medically relevant Nocardia species with an abbreviated battery of tests.
Kiska, Deanna L; Hicks, Karen; Pettit, David J
2002-04-01
Identification of Nocardia to the species level is useful for predicting antimicrobial susceptibility patterns and defining the pathogenicity and geographic distribution of these organisms. We sought to develop an identification method which was accurate, timely, and employed tests which would be readily available in most clinical laboratories. We evaluated the API 20C AUX yeast identification system as well as several biochemical tests and Kirby-Bauer susceptibility patterns for the identification of 75 isolates encompassing the 8 medically relevant Nocardia species. There were few biochemical reactions that were sufficiently unique for species identification; of note, N. nova were positive for arylsulfatase, N. farcinica were positive for opacification of Middlebrook 7H11 agar, and N. brasiliensis and N. pseudobrasiliensis were the only species capable of liquefying gelatin. API 20C sugar assimilation patterns were unique for N. transvalensis, N. asteroides IV, and N. brevicatena. There was overlap among the assimilation patterns for the other species. Species-specific patterns of susceptibility to gentamicin, tobramycin, amikacin, and erythromycin were obtained for N. nova, N. farcinica, and N. brevicatena, while there was overlap among the susceptibility patterns for the other isolates. No single method could identify all Nocardia isolates to the species level; therefore, a combination of methods was necessary. An algorithm utilizing antibiotic susceptibility patterns, citrate utilization, acetamide utilization, and assimilation of inositol and adonitol accurately identified all isolates. The algorithm was expanded to include infrequent drug susceptibility patterns which have been reported in the literature but which were not seen in this study.
Kimbal, Kyle C; Pahler, Leon; Larson, Rodney; VanDerslice, Jim
2012-01-01
Currently, there is no Mine Safety and Health Administration (MSHA)-approved sampling method that provides real-time results for ambient concentrations of diesel particulates. This study investigated whether a commercially available aerosol spectrometer, the Grimm Portable Aerosol Spectrometer Model 1.109, could be used during underground mine operations to provide accurate real-time diesel particulate data relative to MSHA-approved cassette-based sampling methods. A secondary objective was to estimate size-specific diesel particle densities to potentially improve the diesel particulate concentration estimates from the aerosol monitor. Concurrent sampling was conducted during underground metal mine operations using six duplicate diesel particulate cassettes, according to the MSHA-approved method, and two identical Grimm Model 1.109 instruments. Linear regression was used to develop adjustment factors relating the Grimm results to the average of the cassette results. Statistical models using the Grimm data produced predicted diesel particulate concentrations that correlated highly with the time-weighted average cassette results (R(2) = 0.86, 0.88). Size-specific diesel particle densities were not constant over the range of particle diameters observed. The variance of the calculated diesel particulate densities by particle diameter supports the current understanding that diesel emissions are a mixture of particulate aerosols and a complex host of gases and vapors not limited to elemental and organic carbon. Finally, diesel particulate concentrations measured by the Grimm Model 1.109 can be adjusted to provide sufficiently accurate real-time air monitoring data for an underground mining environment.
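The adjustment-factor step amounts to an ordinary least-squares fit of the reference cassette results on the monitor readings, with the fitted line then applied to future real-time data. The sketch below shows the calculation; all concentration values are invented for illustration, not the study's measurements.

```python
import numpy as np

# Hedged sketch: deriving a linear adjustment for the real-time monitor.
grimm = np.array([180., 220., 310., 150., 260., 205.])     # ug/m^3, monitor TWA
cassette = np.array([150., 190., 280., 120., 230., 175.])  # ug/m^3, reference TWA

slope, intercept = np.polyfit(grimm, cassette, 1)
adjusted = slope * grimm + intercept

ss_res = np.sum((cassette - adjusted) ** 2)
ss_tot = np.sum((cassette - cassette.mean()) ** 2)
print(f"adjusted = {slope:.3f} * grimm + {intercept:.1f}, "
      f"R^2 = {1 - ss_res / ss_tot:.3f}")
```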
Guidance for laboratories performing molecular pathology for cancer patients.
Cree, Ian A; Deans, Zandra; Ligtenberg, Marjolijn J L; Normanno, Nicola; Edsjö, Anders; Rouleau, Etienne; Solé, Francesc; Thunnissen, Erik; Timens, Wim; Schuuring, Ed; Dequeker, Elisabeth; Murray, Samuel; Dietel, Manfred; Groenen, Patricia; Van Krieken, J Han
2014-11-01
Molecular testing is becoming an important part of the diagnosis of any patient with cancer. The challenge to laboratories is to meet this need, using reliable methods and processes to ensure that patients receive a timely and accurate report on which their treatment will be based. The aim of this paper is to provide minimum requirements for the management of molecular pathology laboratories. This general guidance should be augmented by the specific guidance available for different tumour types and tests. Preanalytical considerations are important, and careful consideration of the way in which specimens are obtained and reach the laboratory is necessary. Sample receipt and handling follow standard operating procedures, but some alterations may be necessary if molecular testing is to be performed, for instance to control tissue fixation. DNA and RNA extraction can be standardised and should be checked for quality and quantity of output on a regular basis. The choice of analytical method(s) depends on clinical requirements, desired turnaround time, and expertise available. Internal quality control, regular internal audit of the whole testing process, laboratory accreditation, and continual participation in external quality assessment schemes are prerequisites for delivery of a reliable service. A molecular pathology report should accurately convey the information the clinician needs to treat the patient, with sufficient detail to allow correct interpretation of the result. Molecular pathology is developing rapidly, and further detailed evidence-based recommendations are required for many of the topics covered here.
Electronically excited and ionized states in condensed phase: Theory and applications
NASA Astrophysics Data System (ADS)
Sadybekov, Arman
Predictive modeling of chemical processes in silico is a goal of the 21st century. While robust and accurate methods exist for ground-state properties, reliable methods for excited states are still lacking and require further development. Electronically excited states are formed by interactions of matter with light and are responsible for key processes in solar energy harvesting, vision, artificial sensors, and photovoltaic applications. The greatest challenge to overcome on the way to a quantitative description of light-induced processes is accurate inclusion of the effect of the environment on excited states. All of the above-mentioned processes occur in solution or in the solid state, yet there are few methodologies for studying excited states in condensed phase. Application of highly accurate and robust methods, such as equation-of-motion coupled-cluster (EOM-CC) theory, is limited by high computational cost and scaling, precluding full quantum mechanical treatment of the entire system. In this thesis we present successful applications of the EOM-CC family of methods to studies of excited states in the liquid phase and build a hierarchy of models for inclusion of solvent effects. In the first part of the thesis we show that a simple gas-phase model is sufficient to quantitatively analyze excited states in liquid benzene, while the latter part emphasizes the importance of explicit treatment of the solvent molecules in the case of glycine in aqueous solution. In chapter 2, we use a simple dimer model to describe exciton formation in liquid and solid benzene. We show that sampling of dimer structures extracted from liquid benzene is sufficient to correctly predict excited-state properties of the liquid. Our calculations explain experimentally observed features, which helped in understanding the mechanism of excimer formation in liquid benzene. Furthermore, we shed light on the difference between dimer configurations in the first solvation shell of liquid benzene and in the unit cell of solid benzene and discuss the impact of these differences on the formation of the excimer state. In chapter 3, we present a theoretical approach for calculating core-level states in condensed phase. The approach is based on EOM-CC and the effective fragment potential (EFP) method. By introducing an approximate treatment of double excitations in the EOM-CCSD (EOM-CC with single and double substitutions) ansatz, we address the poor convergence encountered for core-level states and significantly reduce computational costs. While the approximations introduce relatively large errors in the absolute values of transition energies, the errors are systematic. Consequently, chemical shifts, i.e., changes in ionization energies relative to reference systems, are reproduced reasonably well. By using different protonation forms of solvated glycine as a benchmark system, we show that our protocol is capable of reproducing the experimental chemical shifts with quantitative accuracy. The results demonstrate that chemical shifts are very sensitive to solvent interactions and that explicit treatment of the solvent, such as with EFP, is essential for achieving quantitative accuracy. In chapter 4, we outline future directions and discuss possible applications of the developed computational protocol for prediction of core chemical shifts in larger systems.
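A minimal sketch of the kind of gas-phase EOM-CCSD excitation-energy calculation on which dimer and EFP-embedded studies build, written here with PySCF. PySCF and its eomee_ccsd_singlet interface are assumptions of this sketch, not the software used in the thesis; the geometry and basis are illustrative.

```python
from pyscf import gto, scf, cc

# Small test molecule (water); coordinates in Angstrom
mol = gto.M(atom="O 0 0 0; H 0 0 0.96; H 0.93 0.25 0", basis="cc-pvdz")
mf = scf.RHF(mol).run()    # ground-state Hartree-Fock reference
mycc = cc.CCSD(mf).run()   # CCSD ground-state correlation

# Lowest singlet excitation energies (Hartree) via EOM-EE-CCSD
e_exc = mycc.eomee_ccsd_singlet(nroots=3)[0]
print(e_exc)
```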
Wide range radioactive gas concentration detector
Anderson, David F.
1984-01-01
A wide range radioactive gas concentration detector and monitor is described which is capable of measuring radioactive gas concentrations over a range of eight orders of magnitude. The device of the present invention is designed to have an ionization chamber which is sufficiently small to give a fast response time for measuring radioactive gases but sufficiently large to provide accurate readings at low concentration levels. Closely spaced parallel plate grids provide a uniform electric field in the active region to improve the accuracy of measurements and reduce ion migration time so as to virtually eliminate errors due to ion recombination. The parallel plate grids are fabricated with a minimal surface area to reduce the effects of contamination resulting from absorption of contaminating materials on the surface of the grids. Additionally, the ionization chamber wall is spaced a sufficient distance from the active region of the ionization chamber to minimize contamination effects.
Simultaneous head tissue conductivity and EEG source location estimation.
Akalin Acar, Zeynep; Acar, Can E; Makeig, Scott
2016-01-01
Accurate electroencephalographic (EEG) source localization requires an electrical head model incorporating accurate geometries and conductivity values for the major head tissues. While consistent conductivity values have been reported for scalp, brain, and cerebrospinal fluid, measured brain-to-skull conductivity ratio (BSCR) estimates have varied between 8 and 80, likely reflecting both inter-subject and measurement method differences. In simulations, mis-estimation of skull conductivity can produce source localization errors as large as 3 cm. Here, we describe an iterative gradient-based approach to Simultaneous tissue Conductivity And source Location Estimation (SCALE). The scalp projection maps used by SCALE are obtained from near-dipolar effective EEG sources found by adequate independent component analysis (ICA) decomposition of sufficient high-density EEG data. We applied SCALE to simulated scalp projections of 15 cm²-scale cortical patch sources in an MR image-based electrical head model with simulated BSCR of 30. Initialized either with a BSCR of 80 or 20, SCALE estimated BSCR as 32.6. In Adaptive Mixture ICA (AMICA) decompositions of (45-min, 128-channel) EEG data from two young adults we identified sets of 13 independent components having near-dipolar scalp maps compatible with a single cortical source patch. Again initialized with either BSCR 80 or 25, SCALE gave BSCR estimates of 34 and 54 for the two subjects respectively. The ability to accurately estimate skull conductivity non-invasively from any well-recorded EEG data in combination with a stable and non-invasively acquired MR imaging-derived electrical head model could remove a critical barrier to using EEG as a sub-cm²-scale accurate 3-D functional cortical imaging modality.
Propellant Chemistry for CFD Applications
NASA Technical Reports Server (NTRS)
Farmer, R. C.; Anderson, P. G.; Cheng, Gary C.
1996-01-01
Current concepts for reusable launch vehicle design have created renewed interest in the use of RP-1 fuels for high pressure and tri-propellant propulsion systems. Such designs require the use of an analytical technology that accurately accounts for the effects of real fluid properties, combustion of large hydrocarbon fuel molecules, and the possibility of soot formation. These effects are inadequately treated in current computational fluid dynamic (CFD) codes used for propulsion system analyses. The objective of this investigation is to provide an accurate analytical description of hydrocarbon combustion thermodynamics and kinetics that is sufficiently computationally efficient to be a practical design tool when used with CFD codes such as the FDNS code. A rigorous description of real fluid properties for RP-1 and its combustion products will be derived from the literature and from experiments conducted in this investigation. Once such a description is established, it will be simplified using the minimum empiricism necessary to maintain accurate combustion analyses, and the resulting empirical models will be incorporated into an appropriate CFD code. An additional benefit of this approach is that the real fluid properties analysis simplifies the introduction of the effects of droplet sprays into the combustion model. Typical species compositions of RP-1 have been identified, surrogate fuels have been established for analyses, and combustion and sooting reaction kinetics models have been developed. Methods for predicting the necessary real fluid properties have been developed and essential experiments have been designed. Verification studies are in progress, and preliminary results from these studies will be presented. The approach has been determined to be feasible, and upon its completion the required methodology for accurate performance and heat transfer CFD analyses for high pressure, tri-propellant propulsion systems will be available.
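Reduced kinetics models of the kind described above are typically built from modified Arrhenius rate expressions. The sketch below evaluates that standard form; the parameter values are illustrative, not coefficients from the RP-1 mechanism.

```python
import numpy as np

def arrhenius_rate(T, A, b, Ea):
    """Modified Arrhenius rate constant k = A * T**b * exp(-Ea / (R*T)).

    T in K, Ea in J/mol. A and b are illustrative fit parameters, not
    values from the RP-1 surrogate mechanism discussed above.
    """
    R = 8.314  # gas constant, J/(mol K)
    return A * T**b * np.exp(-Ea / (R * T))

# Evaluate over a temperature sweep typical of combustor conditions
for T in (1500.0, 2000.0, 2500.0):
    print(T, arrhenius_rate(T, A=1.0e12, b=0.0, Ea=1.5e5))
```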
Chen, Weixin; Chen, Jianye; Lu, Wangjin; Chen, Lei; Fu, Danwen
2012-01-01
Real-time reverse transcription PCR (RT-qPCR) is a preferred method for rapid and accurate quantification in gene expression studies. Appropriate application of RT-qPCR requires accurate normalization through the use of reference genes. As no single reference gene is universally suitable for all experiments, validation of reference gene(s) under different experimental conditions is crucial for RT-qPCR analysis. To date, only a few studies on reference genes have been done in other plants and none in papaya. In the present work, we selected 21 candidate reference genes and evaluated their expression stability in 246 papaya fruit samples using three algorithms, geNorm, NormFinder and RefFinder. The samples consisted of 13 sets collected under different experimental conditions, including various tissues, different storage temperatures, different cultivars, developmental stages, postharvest ripening, modified atmosphere packaging, 1-methylcyclopropene (1-MCP) treatment, hot water treatment, biotic stress and hormone treatment. Our results demonstrated that expression stability varied greatly between reference genes and that suitable reference gene(s) or combinations of reference genes for normalization should be validated according to the experimental conditions. In general, the internal reference genes EIF (Eukaryotic initiation factor 4A), TBP1 (TATA binding protein 1) and TBP2 (TATA binding protein 2) performed well under most experimental conditions, whereas the most widely used reference genes, ACTIN (Actin 2), 18S rRNA (18S ribosomal RNA) and GAPDH (Glyceraldehyde-3-phosphate dehydrogenase), were not suitable under many experimental conditions. In addition, the two commonly used programs, geNorm and NormFinder, proved sufficient for the validation. This work provides the first systematic analysis for the selection of superior reference genes for accurate transcript normalization in papaya under different experimental conditions. PMID:22952972
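A minimal numpy sketch of the geNorm-style stability measure underlying the validation described above: for each candidate gene, the M value is the mean standard deviation of its pairwise log2 expression ratios against all other candidates, with lower M indicating a more stable reference. The expression matrix below is a hypothetical placeholder.

```python
import numpy as np

def genorm_m_values(expr):
    """geNorm-style stability measure for candidate reference genes.

    expr: (n_samples, n_genes) array of relative expression quantities.
    For gene j, M_j is the mean over all other genes k of the standard
    deviation of log2(expr_j / expr_k) across samples.
    """
    log_expr = np.log2(expr)
    n_genes = expr.shape[1]
    m = np.empty(n_genes)
    for j in range(n_genes):
        # log ratios of gene j against every other candidate, per sample
        ratios = log_expr[:, j:j + 1] - np.delete(log_expr, j, axis=1)
        m[j] = ratios.std(axis=0, ddof=1).mean()
    return m

# Hypothetical data: 6 samples x 4 candidate reference genes
rng = np.random.default_rng(0)
expr = rng.lognormal(mean=5.0, sigma=0.3, size=(6, 4))
print(genorm_m_values(expr))  # lower M = more stable candidate
```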
Read clouds uncover variation in complex regions of the human genome.
Bishara, Alex; Liu, Yuling; Weng, Ziming; Kashef-Haghighi, Dorna; Newburger, Daniel E; West, Robert; Sidow, Arend; Batzoglou, Serafim
2015-10-01
Although an increasing amount of human genetic variation is being identified and recorded, determining variants within repeated sequences of the human genome remains a challenge. Most population and genome-wide association studies have therefore been unable to consider variation in these regions. Core to the problem is the lack of a sequencing technology that produces reads with sufficient length and accuracy to enable unique mapping. Here, we present a novel methodology of using read clouds, obtained by accurate short-read sequencing of DNA derived from long fragment libraries, to confidently align short reads within repeat regions and enable accurate variant discovery. Our novel algorithm, Random Field Aligner (RFA), captures the relationships among the short reads governed by the long read process via a Markov Random Field. We utilized a modified version of the Illumina TruSeq synthetic long-read protocol, which yielded shallow-sequenced read clouds. We test RFA through extensive simulations and apply it to discover variants in the NA12878 human sample, for which shallow TruSeq read cloud sequencing data are available, and in an invasive breast carcinoma genome that we sequenced using the same method. We demonstrate that RFA facilitates accurate recovery of variation in 155 Mb of the human genome, including 94% of 67 Mb of segmental duplication sequence and 96% of 11 Mb of transcribed sequence, that are currently hidden from short-read technologies.
Ojanperä, Ilkka; Kolmonen, Marjo; Pelander, Anna
2012-05-01
Clinical and forensic toxicology and doping control deal with hundreds or thousands of drugs that may cause poisoning or are abused, are illicit, or are prohibited in sports. Rapid and reliable screening for all these compounds of different chemical and pharmaceutical nature, preferably in a single analytical method, is a substantial effort for analytical toxicologists. Combined chromatography-mass spectrometry techniques with standardised reference libraries have been most commonly used for the purpose. In the last ten years, the focus has shifted from gas chromatography-mass spectrometry to liquid chromatography-mass spectrometry, because of progress in instrument technology and partly because of the polarity and low volatility of many new relevant substances. High-resolution mass spectrometry (HRMS), which enables accurate mass measurement at high resolving power, has recently evolved to a stage at which it is rapidly displacing unit-resolution, quadrupole-dominated instrumentation. The main HRMS techniques today are time-of-flight mass spectrometry and Orbitrap Fourier-transform mass spectrometry. Both techniques enable a range of different drug-screening strategies that essentially rely on measuring a compound's or a fragment's mass with sufficiently high accuracy that its elemental composition can be determined directly. Accurate mass and isotopic pattern act as a filter for confirming the identity of a compound or even identifying an unknown. High mass resolution is essential for improving confidence in accurate mass results in the analysis of complex biological samples. This review discusses recent applications of HRMS in analytical toxicology.
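A minimal sketch of the accurate-mass filtering idea described above: compute the ppm mass error and brute-force CHNO elemental compositions within a tolerance window. Real screening tools add isotope-pattern and chemical-plausibility filters; the tolerance and element limits here are illustrative.

```python
from itertools import product

# Monoisotopic masses (u) of common elements in drug molecules
MASS = {"C": 12.0, "H": 1.007825, "N": 14.003074, "O": 15.994915}

def ppm_error(measured, theoretical):
    return (measured - theoretical) / theoretical * 1e6

def candidate_formulas(neutral_mass, tol_ppm=5.0, max_counts=(30, 60, 6, 8)):
    """Brute-force CHNO compositions within a ppm window of a neutral mass."""
    hits = []
    for c, h, n, o in product(*(range(k + 1) for k in max_counts)):
        m = c * MASS["C"] + h * MASS["H"] + n * MASS["N"] + o * MASS["O"]
        if m == 0.0:
            continue  # skip the empty composition
        if abs(ppm_error(neutral_mass, m)) <= tol_ppm:
            hits.append((f"C{c}H{h}N{n}O{o}", m))
    return hits

# Caffeine, C8H10N4O2: monoisotopic neutral mass 194.08038 u
print(candidate_formulas(194.08038))
```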
Estimation of absolute solvent and solvation shell entropies via permutation reduction
NASA Astrophysics Data System (ADS)
Reinhard, Friedemann; Grubmüller, Helmut
2007-01-01
Despite its prominent contribution to the free energy of solvated macromolecules such as proteins or DNA, and although principally contained within molecular dynamics simulations, the entropy of the solvation shell is inaccessible to straightforward application of established entropy estimation methods. The complication is twofold. First, the configurational space density of such systems is too complex for a sufficiently accurate fit. Second, and in contrast to the internal macromolecular dynamics, the configurational space volume explored by the diffusive motion of the solvent molecules is too large to be exhaustively sampled by current simulation techniques. Here, we develop a method to overcome the second problem and to significantly alleviate the first one. We propose to exploit the permutation symmetry of the solvent by transforming the trajectory in a way that renders established estimation methods applicable, such as the quasiharmonic approximation or principal component analysis. Our permutation-reduced approach involves a combinatorial problem, which is solved through its equivalence with the linear assignment problem, for which O(N³) methods exist. From test simulations of dense Lennard-Jones gases, enhanced convergence and improved entropy estimates are obtained. Moreover, our approach renders diffusive systems accessible to improved fit functions.
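A minimal sketch of the permutation-reduction step via the linear assignment problem, using scipy's Hungarian-type solver: in each frame, identical solvent molecules are relabeled to minimize the summed squared displacement to a reference configuration. The trajectory here is synthetic, not from the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def permutation_reduce(frames, reference):
    """Relabel identical solvent molecules frame-by-frame.

    frames: (n_frames, n_molecules, 3); reference: (n_molecules, 3).
    Solving a linear assignment problem per frame exploits the solvent's
    permutation symmetry, compacting the sampled configurational space
    before entropy estimation.
    """
    reduced = np.empty_like(frames)
    for t, frame in enumerate(frames):
        cost = cdist(reference, frame, metric="sqeuclidean")
        rows, cols = linear_sum_assignment(cost)  # O(N^3) assignment solver
        reduced[t] = frame[cols]                  # reorder to reference labels
    return reduced

rng = np.random.default_rng(1)
ref = rng.uniform(0, 3, size=(50, 3))
# One synthetic frame: a random permutation of the reference plus small noise
frame = ref[rng.permutation(50)] + 0.05 * rng.normal(size=(50, 3))
print(np.allclose(permutation_reduce(frame[None], ref)[0], ref, atol=0.5))
```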
Monaural room acoustic parameters from music and speech.
Kendrick, Paul; Cox, Trevor J; Li, Francis F; Zhang, Yonggang; Chambers, Jonathon A
2008-07-01
This paper compares two methods for extracting room acoustic parameters from reverberated speech and music. An approach which uses statistical machine learning, previously developed for speech, is extended to work with music. For speech, reverberation time estimations are within a perceptual difference limen of the true value. For music, virtually all early decay time estimations are within a difference limen of the true value. The estimation accuracy is not good enough in other cases due to differences between the simulated data set used to develop the empirical model and real rooms. The second method carries out a maximum likelihood estimation on decay phases at the end of notes or speech utterances. This paper extends the method to estimate parameters relating to the balance of early and late energies in the impulse response. For reverberation time and speech, the method provides estimations which are within the perceptual difference limen of the true value. For other parameters such as clarity, the estimations are not sufficiently accurate due to the natural reverberance of the excitation signals. Speech is a better test signal than music because of the greater periods of silence in the signal, although music is needed for low frequency measurement.
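For orientation, the sketch below shows how a decay phase maps to a reverberation time via a simple Schroeder-style backward integration and line fit extrapolated to -60 dB. This is a textbook estimate standing in for illustration, not the maximum-likelihood method of the paper; the synthetic decay is a placeholder.

```python
import numpy as np

def rt60_from_decay(decay, fs, fit_range_db=(-5.0, -25.0)):
    """Estimate reverberation time from a decaying signal segment.

    Backward-integrate the squared signal (Schroeder), fit a line to the
    log-energy decay between two levels, and extrapolate to -60 dB.
    """
    edc = np.cumsum(decay[::-1] ** 2)[::-1]      # energy decay curve
    edc_db = 10.0 * np.log10(edc / edc[0])
    hi, lo = fit_range_db
    idx = np.where((edc_db <= hi) & (edc_db >= lo))[0]
    t = idx / fs
    slope, _ = np.polyfit(t, edc_db[idx], 1)     # dB per second
    return -60.0 / slope

fs = 16000
t = np.arange(int(0.8 * fs)) / fs
# Synthetic exponential decay (noise-excited), RT60 of roughly 2.3 s
decay = np.exp(-3.0 * t) * np.random.default_rng(2).normal(size=t.size)
print(rt60_from_decay(decay, fs))
```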
Jiang, Hui; Zhang, Hang; Chen, Quansheng; Mei, Congli; Liu, Guohai
2015-01-01
The use of wavelength variable selection before partial least squares discriminant analysis (PLS-DA) for qualitative identification of solid state fermentation degree by the FT-NIR spectroscopy technique was investigated in this study. Two wavelength variable selection methods, competitive adaptive reweighted sampling (CARS) and stability competitive adaptive reweighted sampling (SCARS), were employed to select the important wavelengths. PLS-DA was applied to calibrate identification models using the wavelength variables selected by CARS and SCARS for identification of solid state fermentation degree. Experimental results showed that the numbers of wavelength variables selected by CARS and SCARS were 58 and 47, respectively, out of the 1557 original wavelength variables. Compared with the results of full-spectrum PLS-DA, both wavelength variable selection methods enhanced the performance of the identification models. Meanwhile, compared with the CARS-PLS-DA model, the SCARS-PLS-DA model achieved better results, with an identification rate of 91.43% in the validation process. The overall results demonstrate that a PLS-DA model constructed using wavelength variables selected by a proper wavelength variable selection method can identify solid state fermentation degree more accurately.
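A minimal PLS-DA sketch with scikit-learn: regress one-hot class labels on the spectra and classify by the largest predicted response. CARS/SCARS are iterative resampling procedures not reproduced here; a simple variance criterion stands in for the wavelength selection step, and all data are synthetic placeholders.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(3)
X = rng.normal(size=(60, 1557))          # 60 spectra x 1557 wavelengths
y = rng.integers(0, 3, size=60)          # 3 fermentation-degree classes
Y = np.eye(3)[y]                         # one-hot encoding for PLS-DA

# Stand-in for CARS/SCARS: keep the 58 highest-variance wavelengths
keep = np.argsort(X.var(axis=0))[-58:]

model = PLSRegression(n_components=5).fit(X[:, keep], Y)
pred = model.predict(X[:, keep]).argmax(axis=1)   # class = largest response
print("training identification rate:", (pred == y).mean())
```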
Vision-based localization of the center of mass of large space debris via statistical shape analysis
NASA Astrophysics Data System (ADS)
Biondi, G.; Mauro, S.; Pastorelli, S.
2017-08-01
The current overpopulation of artificial objects orbiting the Earth has increased the interest of the space agencies in planning missions for de-orbiting the largest inoperative satellites. Since this kind of operation involves the capture of the debris, accurate knowledge of the position of their center of mass is a fundamental safety requirement. As ground observations are not sufficient to reach the required accuracy level, this information should be acquired in situ just before any contact between the chaser and the target. Some estimation methods in the literature rely on the usage of stereo cameras for tracking several features of the target surface. The actual positions of these features are estimated together with the location of the center of mass by state observers. The principal drawback of these methods is the possible sudden disappearance of one or more features from the field of view of the cameras. An alternative method based on 3D kinematic registration is presented in this paper. The method, which does not suffer from the mentioned drawback, considers a preliminary reduction of the inaccuracies in detecting features through the use of statistical shape analysis.
Motion compensation in digital subtraction angiography using graphics hardware.
Deuerling-Zheng, Yu; Lell, Michael; Galant, Adam; Hornegger, Joachim
2006-07-01
An inherent disadvantage of digital subtraction angiography (DSA) is its sensitivity to patient motion, which causes artifacts in the subtraction images. These artifacts can often reduce the diagnostic value of this technique. Automated, fast and accurate motion compensation is therefore required. To cope with this requirement, we first examine a method explicitly designed to detect local motions in DSA. Then, we implement a motion compensation algorithm by means of block matching on modern graphics hardware. Both methods search for maximal local similarity by evaluating a histogram-based measure. In this context, we are the first to have mapped an optimizing search strategy onto graphics hardware while parallelizing block matching. Moreover, we provide an innovative method for creating histograms on graphics hardware with vertex texturing and frame buffer blending. It turns out that both methods can effectively correct the artifacts in most cases, and the hardware implementation of block matching performs much faster: the displacements between two 1024 × 1024 images can be calculated at 3 frames/s with integer precision or 2 frames/s with sub-pixel precision. Preliminary clinical evaluation indicates that the computation with integer precision could already be sufficient.
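A minimal CPU sketch of block matching for motion estimation between a mask and a contrast image: an exhaustive integer-precision search minimizing the sum of absolute differences (SAD). The paper evaluates a histogram-based similarity on graphics hardware; SAD in numpy is a deliberately simpler stand-in, and the images are synthetic.

```python
import numpy as np

def match_block(mask, contrast, top, left, size=32, search=8):
    """Displacement of one block between mask and contrast images.

    Exhaustive search over +/-search pixels, minimizing SAD. Returns the
    (dx, dy) that best aligns the mask block with the contrast image.
    """
    block = mask[top:top + size, left:left + size].astype(float)
    best, best_dxdy = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            cand = contrast[y:y + size, x:x + size].astype(float)
            sad = np.abs(block - cand).sum()
            if sad < best:
                best, best_dxdy = sad, (dx, dy)
    return best_dxdy

rng = np.random.default_rng(4)
mask = rng.integers(0, 256, size=(128, 128))
contrast = np.roll(mask, shift=(2, -3), axis=(0, 1))  # simulated patient motion
print(match_block(mask, contrast, top=48, left=48))   # expect (dx, dy) = (-3, 2)
```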
Building test data from real outbreaks for evaluating detection algorithms.
Texier, Gaetan; Jackson, Michael L; Siwe, Leonel; Meynard, Jean-Baptiste; Deparis, Xavier; Chaudet, Herve
2017-01-01
Benchmarking surveillance systems requires realistic simulations of disease outbreaks. However, obtaining these data in sufficient quantity, with a realistic shape and covering a sufficient range of agents, size and duration, is known to be very difficult. The dataset of outbreak signals generated should reflect the likely distribution of authentic situations faced by the surveillance system, including very unlikely outbreak signals. We propose and evaluate a new approach based on the use of historical outbreak data to simulate tailored outbreak signals. The method relies on a homothetic transformation of the historical distribution followed by resampling processes (Binomial, Inverse Transform Sampling Method-ITSM, Metropolis-Hasting Random Walk, Metropolis-Hasting Independent, Gibbs Sampler, Hybrid Gibbs Sampler). We carried out an analysis to identify the most important input parameters for simulation quality and to evaluate performance for each of the resampling algorithms. Our analysis confirms the influence of the type of algorithm used and simulation parameters (i.e. days, number of cases, outbreak shape, overall scale factor) on the results. We show that, regardless of the outbreaks, algorithms and metrics chosen for the evaluation, simulation quality decreased with the increase in the number of days simulated and increased with the number of cases simulated. Simulating outbreaks with fewer cases than days of duration (i.e. overall scale factor less than 1) resulted in an important loss of information during the simulation. We found that Gibbs sampling with a shrinkage procedure provides a good balance between accuracy and data dependency. If dependency is of little importance, binomial and ITSM methods are accurate. Given the constraint of keeping the simulation within a range of plausible epidemiological curves faced by the surveillance system, our study confirms that our approach can be used to generate a large spectrum of outbreak signals.
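A minimal sketch of the inverse-transform-sampling step used to resample an outbreak signal from a historical epidemic curve; scaling n_cases up or down plays the role of the overall (homothetic) scale factor. The historical curve below is an illustrative placeholder.

```python
import numpy as np

def simulate_outbreak(historical_daily_cases, n_cases, rng):
    """Resample a daily outbreak signal from a historical epidemic curve.

    Normalize the historical curve into a probability mass function over
    days, then draw n_cases onset days by inverse transform sampling.
    """
    pmf = np.asarray(historical_daily_cases, dtype=float)
    pmf /= pmf.sum()
    cdf = np.cumsum(pmf)
    days = np.searchsorted(cdf, rng.uniform(size=n_cases))  # ITSM draw
    return np.bincount(days, minlength=len(pmf))            # daily counts

historical = [1, 3, 9, 14, 10, 6, 2, 1]   # illustrative 8-day outbreak
rng = np.random.default_rng(5)
print(simulate_outbreak(historical, n_cases=60, rng=rng))
```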
Ojanperä, Suvi; Rasanen, Ilpo; Sistonen, Johanna; Pelander, Anna; Vuori, Erkki; Ojanperä, Ilkka
2007-08-01
Lack of availability of reference standards for drug metabolites, newly released drugs, and illicit drugs hinders the analysis of these substances in biologic samples. To counter this problem, an approach is presented here for quantitative drug analysis in plasma without primary reference standards by liquid chromatography-chemiluminescence nitrogen detection (LC-CLND). To demonstrate the feasibility of the method, metabolic ratios of the opioid drug tramadol were determined in the setting of a pharmacogenetic study. Four volunteers were given a single 100-mg oral dose of tramadol, and a blood sample was collected from each subject 1 hour later. Tramadol, O-desmethyltramadol, and nortramadol were determined in plasma by LC-CLND without reference standards and by a gas chromatography-mass spectrometry reference method. In contrast to previous CLND studies lacking an extraction step, a liquid-liquid extraction system was created for 5-mL plasma samples using n-butyl chloride-isopropyl alcohol (98 + 2) at pH 10. Extraction recovery estimation was based on model compounds chosen according to their similar physicochemical characteristics (retention time, pKa, logD). Instrument calibration was performed with a single secondary standard (caffeine) using the equimolar response of the detector to nitrogen. The mean differences between the results of the LC-CLND and gas chromatography-mass spectrometry methods for tramadol, O-desmethyltramadol, and nortramadol were 8%, 32%, and 19%, respectively. The sensitivity of LC-CLND was sufficient for therapeutic concentrations of tramadol and metabolites. A good correlation was obtained between genotype, expressed by the number of functional genes, and the plasma metabolite ratios. This experiment suggests that a recovery-corrected LC-CLND analysis produces sufficiently accurate results to be useful in a clinical context, particularly in instances in which reference standards are not readily accessible.
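A minimal sketch of the equimolar-nitrogen quantitation idea described above: a single caffeine calibration fixes the detector response per mole of nitrogen, and an analyte's peak area converts to concentration via its nitrogen atom count, corrected for extraction recovery. Caffeine carries 4 nitrogen atoms and tramadol 1; the peak areas, concentrations, and recovery are illustrative values only.

```python
CAFFEINE_N = 4   # nitrogen atoms per caffeine molecule
TRAMADOL_N = 1   # nitrogen atoms per tramadol molecule

def clnd_concentration(area_analyte, n_atoms_analyte,
                       area_std, conc_std_umol_l, n_atoms_std, recovery=1.0):
    """Molar concentration from CLND peak areas.

    Response is proportional to (molar concentration x nitrogen atoms), so
    k = area_std / (conc_std * n_std) calibrates the detector and the
    analyte concentration is area / (k * n_analyte), divided by the
    model-compound extraction recovery to correct for losses.
    """
    k = area_std / (conc_std_umol_l * n_atoms_std)
    return area_analyte / (k * n_atoms_analyte) / recovery

# Single-point caffeine calibration, then a tramadol estimate
conc = clnd_concentration(area_analyte=5200.0, n_atoms_analyte=TRAMADOL_N,
                          area_std=8000.0, conc_std_umol_l=10.0,
                          n_atoms_std=CAFFEINE_N, recovery=0.85)
print(f"estimated tramadol: {conc:.2f} umol/L")
```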
Model parameter learning using Kullback-Leibler divergence
NASA Astrophysics Data System (ADS)
Lin, Chungwei; Marks, Tim K.; Pajovic, Milutin; Watanabe, Shinji; Tung, Chih-kuan
2018-02-01
In this paper, we address the following problem: For a given set of spin configurations whose probability distribution is of the Boltzmann type, how do we determine the model coupling parameters? We demonstrate that directly minimizing the Kullback-Leibler divergence is an efficient method. We test this method against the Ising and XY models on the one-dimensional (1D) and two-dimensional (2D) lattices, and provide two estimators to quantify the model quality. We apply this method to two types of problems. First, we apply it to the real-space renormalization group (RG). We find that the obtained RG flow is sufficiently good for determining the phase boundary (within 1% of the exact result) and the critical point, but not accurate enough for critical exponents. The proposed method provides a simple way to numerically estimate amplitudes of the interactions typically truncated in the real-space RG procedure. Second, we apply this method to the dynamical system composed of self-propelled particles, where we extract the parameter of a statistical model (a generalized XY model) from a dynamical system described by the Vicsek model. We are able to obtain reasonable coupling values corresponding to different noise strengths of the Vicsek model. Our method is thus able to provide quantitative analysis of dynamical systems composed of self-propelled particles.
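A minimal worked example of the KL-minimization idea for a small 1D Ising chain: with p_J(s) ∝ exp(J Σ s_i s_{i+1}), the gradient of KL(p_data || p_J) reduces to moment matching, dKL/dJ = ⟨f⟩_model − ⟨f⟩_data, which gradient descent drives to zero. The chain is small enough to enumerate exactly; this is an illustration of the principle, not the paper's code.

```python
import numpy as np
from itertools import product

N = 8
configs = np.array(list(product([-1, 1], repeat=N)))      # all 256 states
f = (configs[:, :-1] * configs[:, 1:]).sum(axis=1)        # nearest-neighbor feature

def model_probs(J):
    w = np.exp(J * f)
    return w / w.sum()                                    # Boltzmann weights

J_true = 0.7
p_data = model_probs(J_true)   # synthetic "data" distribution
f_data = p_data @ f            # <f> under the data

J, lr = 0.0, 0.05
for _ in range(500):
    grad = model_probs(J) @ f - f_data   # <f>_J - <f>_data
    J -= lr * grad
print(f"recovered J = {J:.3f} (true {J_true})")
```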
Analysis and design of planar and non-planar wings for induced drag minimization
NASA Technical Reports Server (NTRS)
Mortara, K.; Straussfogel, Dennis M.; Maughmer, Mark D.
1991-01-01
The goal of the work was to develop and validate computational tools to be used for the design of planar and non-planar wing geometries for minimum induced drag. Because of the iterative nature of the design problem, it is important that, in addition to being sufficiently accurate for the problem at hand, these tools are reasonably fast and computationally efficient. Toward this end, a method of predicting induced drag in the presence of a non-rigid wake is coupled with a panel method. The induced drag prediction technique is based on the Kutta-Joukowski law applied at the trailing edge. Until recently, the use of this method had not been fully explored, and pressure integration and Trefftz-plane calculations were favored. As is shown in this report, however, the Kutta-Joukowski method is able to give better results for a given amount of effort than the more common techniques, particularly when relaxed wakes and non-planar wing geometries are considered. Using these tools, a workable design method is in place which takes into account relaxed wakes and non-planar wing geometries. It is recommended that this method be used to design a wind-tunnel experiment to verify the predicted aerodynamic benefits of non-planar wing geometries.
A singular-value method for reconstruction of nonradial and lossy objects.
Jiang, Wei; Astheimer, Jeffrey; Waag, Robert
2012-03-01
Efficient inverse scattering algorithms for nonradial lossy objects are presented using singular-value decomposition to form reduced-rank representations of the scattering operator. These algorithms extend eigenfunction methods that are not applicable to nonradial lossy scattering objects because the scattering operators for these objects do not have orthonormal eigenfunction decompositions. A method of local reconstruction by segregation of scattering contributions from different local regions is also presented. Scattering from each region is isolated by forming a reduced-rank representation of the scattering operator that has domain and range spaces comprised of far-field patterns with retransmitted fields that focus on the local region. Methods for the estimation of the boundary, average sound speed, and average attenuation slope of the scattering object are also given. These methods yielded approximations of scattering objects that were sufficiently accurate to allow residual variations to be reconstructed in a single iteration. Calculated scattering from a lossy elliptical object with a random background, internal features, and white noise is used to evaluate the proposed methods. Local reconstruction yielded images with spatial resolution that is finer than a half wavelength of the center frequency and reproduces sound speed and attenuation slope with relative root-mean-square errors of 1.09% and 11.45%, respectively.
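A minimal sketch of the reduced-rank step described above: form a truncated SVD of a discretized operator, keeping only the leading singular triplets. A random matrix with a decaying spectrum stands in for a real far-field scattering operator.

```python
import numpy as np

def reduced_rank(operator, rank):
    """Reduced-rank representation of a (discretized) operator.

    Keep the leading singular triplets; the truncated factors define the
    domain and range subspaces used in place of an eigenfunction
    decomposition, which nonradial lossy operators do not admit.
    """
    u, s, vh = np.linalg.svd(operator, full_matrices=False)
    return u[:, :rank] * s[:rank] @ vh[:rank]

rng = np.random.default_rng(6)
# Synthetic 64x64 operator with rapidly decaying singular values
A = rng.normal(size=(64, 64)) @ np.diag(0.5 ** np.arange(64)) @ rng.normal(size=(64, 64))
A_low = reduced_rank(A, rank=8)
rel_err = np.linalg.norm(A - A_low) / np.linalg.norm(A)
print(f"relative error of rank-8 approximation: {rel_err:.2e}")
```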
Theory study on the bandgap of antimonide-based multi-element alloys
NASA Astrophysics Data System (ADS)
An, Ning; Liu, Cheng-Zhi; Fan, Cun-Bo; Dong, Xue; Song, Qing-Li
2017-05-01
In order to meet the design requirements of high-performance antimonide-based optoelectronic devices, a spin-orbit splitting correction method for the bandgaps of Sb-based multi-element alloys is proposed. Based on the analysis of the band structure, a correction factor is introduced into the InxGa1-xAsySb1-y bandgap calculation, taking spin-orbit coupling fully into account. In addition, InxGa1-xAsySb1-y films with different compositions are grown on GaSb substrates by molecular beam epitaxy (MBE), and the corresponding bandgaps are obtained by photoluminescence (PL) to test the accuracy and reliability of this new method. The results show that the calculated values agree fairly well with the experimental results. To further verify this new method, the bandgaps of a series of previously reported experimental samples are calculated. The error rate analysis reveals that the error rate α of the spin-orbit splitting correction method decreases to 2%, almost one order of magnitude smaller than that of the common method. This means the new method can calculate the bandgaps of antimonide multi-element alloys more accurately and has the merit of wide applicability. This work gives a reasonable interpretation of the reported results and is beneficial for tailoring antimonide properties and optoelectronic devices.
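For background, the sketch below gives the conventional ternary composition interpolation with a bowing parameter, the building block from which quaternary bandgap schemes such as the one corrected above are typically assembled. This is general alloy-physics background in conventional symbols, not the paper's specific spin-orbit correction factor.

```latex
% Conventional ternary bandgap interpolation with bowing parameter b;
% quaternary E_g(x, y) schemes are usually assembled from such terms.
% (Background form only; the paper's correction factor is not shown.)
\begin{equation}
  E_g^{\mathrm{A}_x\mathrm{B}_{1-x}\mathrm{C}}(x)
    = x\,E_g^{\mathrm{AC}} + (1-x)\,E_g^{\mathrm{BC}} - b\,x(1-x)
\end{equation}
```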
Finite temperature properties of clusters by replica exchange metadynamics: the water nonamer.
Zhai, Yingteng; Laio, Alessandro; Tosatti, Erio; Gong, Xin-Gao
2011-03-02
We introduce an approach for the accurate calculation of thermal properties of classical nanoclusters. On the basis of a recently developed enhanced sampling technique, replica exchange metadynamics, the method yields the true free energy of each relevant cluster structure, directly sampling its basin and measuring its occupancy in full equilibrium. All entropy sources, whether vibrational, rotational anharmonic, or especially configurational, the latter often forgotten in many cluster studies, are automatically included. For the present demonstration, we choose the water nonamer (H₂O)₉, an extremely simple cluster, which nonetheless displays a sufficient complexity and interesting physics in its relevant structure spectrum. Within a standard TIP4P potential description of water, we find that the nonamer second relevant structure possesses a higher configurational entropy than the first, so that the two free energies surprisingly cross for increasing temperature.
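A short worked form of the thermodynamics implied above: each structure's free energy follows from its sampled basin occupancy, and a higher-entropy structure can overtake the lowest-energy one as temperature rises. These are the standard relations in conventional symbols, not expressions taken from the paper.

```latex
% Free energy of structure i from its equilibrium basin occupancy p_i(T):
\begin{align}
  F_i(T) &= -k_B T \ln p_i(T), \\
  \Delta F_{21}(T) &= \Delta U_{21} - T\,\Delta S_{21}.
\end{align}
% With \Delta S_{21} > 0 (extra configurational entropy of structure 2),
% \Delta F_{21} changes sign near T_x \approx \Delta U_{21}/\Delta S_{21},
% the free-energy crossing described above.
```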
Towards Bridging the Gaps in Holistic Transition Prediction via Numerical Simulations
NASA Technical Reports Server (NTRS)
Choudhari, Meelan M.; Li, Fei; Duan, Lian; Chang, Chau-Lyan; Carpenter, Mark H.; Streett, Craig L.; Malik, Mujeeb R.
2013-01-01
The economic and environmental benefits of laminar flow technology via reduced fuel burn of subsonic and supersonic aircraft cannot be realized without minimizing the uncertainty in drag prediction in general and transition prediction in particular. Transition research under NASA's Aeronautical Sciences Project seeks to develop a validated set of variable fidelity prediction tools with known strengths and limitations, so as to enable "sufficiently" accurate transition prediction and practical transition control for future vehicle concepts. This paper provides a summary of selected research activities targeting the current gaps in high-fidelity transition prediction, specifically those related to the receptivity and laminar breakdown phases of crossflow induced transition in a subsonic swept-wing boundary layer. The results of direct numerical simulations are used to obtain an enhanced understanding of the laminar breakdown region as well as to validate reduced order prediction methods.
Analysis of pressure buildups taken from fluid level data - Tyler sands, central Montana
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bishop, D.F.
Pressure buildups taken by fluid level recording prove to be quite usable for formation evaluation in the Tyler sands of central Montana. This method provides low cost information with surprising accuracy. The procedures followed in obtaining the data, and the precautions taken in assuring the validity of the data, are discussed. The data proved sufficiently accurate to perform engineering calculations in 2 separate Tyler fields. The calculations aided in determination of reservoir parameters, and in one field provided justification for additional development drilling. In another field, the data substantiated the limited reservoir, and development drilling plans were cancelled. The buildup curves illustrated well-bore damage in some of the wells, and subsequent stimulation of 2 wells resulted in sustained 6-fold and 9-fold increases in the producing rates of these wells.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-27
... particular area if they believe that the present limit does not accurately reflect the higher sales prices in that area. Any request for an increase must be accompanied by sufficient housing sales price data to justify higher limits. Typically, this data includes housing sales data extracted from multiple listing...
26 CFR 1.964-3 - Records to be provided by United States shareholders.
Code of Federal Regulations, 2010 CFR
2010-04-01
... books of account or records as are sufficient to satisfy the requirements of section 6001 and section 964(c), or true copies thereof, as are reasonably demanded, and (2) If such books or records are not maintained in the English language, either (i) an accurate English translation of such books or records or...
ERIC Educational Resources Information Center
Weddle, Sarah A.; Spencer, Trina D.; Kajian, Mandana; Petersen, Douglas B.
2016-01-01
A disproportionate percentage of culturally and linguistically diverse students have difficulties with language-related skills that affect their academic success. Early and intensive language instruction may greatly improve these students' language skills, yet there is not sufficient research available to assist educators and school psychologists…
ERIC Educational Resources Information Center
Mouzakitis, Angela; Codding, Robin S.; Tryon, Georgiana
2015-01-01
Accurate implementation of individualized behavior intervention plans (BIPs) is a critical aspect of evidence-based practice. Research demonstrates that neither training nor consultation is sufficient to improve and maintain high rates of treatment integrity (TI). Therefore, evaluation of ongoing support strategies is needed. The purpose of this…
Assessing Disfluencies in School-Age Children Who Stutter: How Much Speech Is Enough?
ERIC Educational Resources Information Center
Gregg, Brent A.; Sawyer, Jean
2015-01-01
The question of what size speech sample is sufficient to accurately identify stuttering and its myriad characteristics is a valid one. Short samples have a risk of over- or underrepresenting disfluency types or characteristics. In recent years, there has been a trend toward using shorter samples because they are less time-consuming for…
Improving depth estimation from a plenoptic camera by patterned illumination
NASA Astrophysics Data System (ADS)
Marshall, Richard J.; Meah, Chris J.; Turola, Massimo; Claridge, Ela; Robinson, Alex; Bongs, Kai; Gruppetta, Steve; Styles, Iain B.
2015-05-01
Plenoptic (light-field) imaging is a technique that allows a simple CCD-based imaging device to acquire both spatially and angularly resolved information about the "light-field" from a scene. It requires a microlens array to be placed between the objective lens and the sensor of the imaging device, and the images under each microlens (which typically span many pixels) can be computationally post-processed to shift perspective, digitally refocus, extend the depth of field, synthetically manipulate the aperture, and generate a depth map from a single image. Some of these capabilities are rigid functions that do not depend upon the scene and work by manipulating and combining a well-defined set of pixels in the raw image. However, depth mapping requires specific features in the scene to be identified and registered between consecutive microimages. This process requires that the image have sufficient features for the registration; in the absence of such features the algorithms become less reliable and incorrect depths are generated. The aim of this study is to investigate the generation of depth maps from light-field images of scenes with insufficient features for accurate registration, using projected patterns to impose a texture on the scene that provides sufficient landmarks for the registration methods.
Powerful Statistical Inference for Nested Data Using Sufficient Summary Statistics
Dowding, Irene; Haufe, Stefan
2018-01-01
Hierarchically-organized data arise naturally in many psychology and neuroscience studies. As the standard assumption of independent and identically distributed samples does not hold for such data, two important problems are to accurately estimate group-level effect sizes, and to obtain powerful statistical tests against group-level null hypotheses. A common approach is to summarize subject-level data by a single quantity per subject, which is often the mean or the difference between class means, and treat these as samples in a group-level t-test. This “naive” approach is, however, suboptimal in terms of statistical power, as it ignores information about the intra-subject variance. To address this issue, we review several approaches to deal with nested data, with a focus on methods that are easy to implement. With what we call the sufficient-summary-statistic approach, we highlight a computationally efficient technique that can improve statistical power by taking into account within-subject variances, and we provide step-by-step instructions on how to apply this approach to a number of frequently-used measures of effect size. The properties of the reviewed approaches and the potential benefits over a group-level t-test are quantitatively assessed on simulated data and demonstrated on EEG data from a simulated-driving experiment. PMID:29615885
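A minimal sketch of one variant of the idea above: an inverse-variance-weighted group-level statistic that carries within-subject variances to the group level, instead of feeding bare subject means into a t-test. This is a generic precision-weighting illustration; the paper's estimators may differ in detail, and all data are synthetic.

```python
import numpy as np

def weighted_group_stat(subject_means, subject_vars, subject_ns):
    """Group-level z-like statistic with precision weighting.

    Each subject's effect estimate is weighted by the inverse of its
    squared standard error, so noisier subjects contribute less than in
    the naive unweighted group t-test.
    """
    se2 = subject_vars / subject_ns          # squared standard errors
    w = 1.0 / se2
    effect = np.sum(w * subject_means) / np.sum(w)
    return effect / np.sqrt(1.0 / np.sum(w))

rng = np.random.default_rng(7)
means = rng.normal(0.3, 0.2, size=12)   # per-subject mean differences
varis = rng.uniform(0.5, 2.0, size=12)  # per-subject sample variances
ns = rng.integers(30, 120, size=12)     # per-subject trial counts
print(weighted_group_stat(means, varis, ns))
```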
Design of magnetic Circuit Simulation for Curing Device of Anisotropic MRE
NASA Astrophysics Data System (ADS)
Hapipi, N.; Ubaidillah; Mazlan, S. A.; Widodo, P. J.
2018-03-01
The strength of the magnetic field during fabrication of a magnetorheological elastomer (MRE) plays a crucial role in forming a pre-structured MRE. So far, a gaussmeter has been used to determine the magnetic intensity to which the MRE is subjected during curing. However, the magnetic flux reading obtained through that measurement is considered less accurate. Therefore, a simulation should be done to determine the magnetic flux concentration around the sample. This paper investigates the simulation of the magnetic field distribution in a curing device used during the curing stage of anisotropic magnetorheological elastomer (MRE). The target in designing the magnetic circuit is to ensure a sufficient and uniform magnetic field over all the MRE surfaces during the curing process. The magnetic circuit design for the curing device was performed using Finite Element Method Magnetics (FEMM) to examine the magnetic flux density distribution in the device. The material selection was first performed during the magnetic simulation process. Then, experimental validation of the simulation was performed by measuring the actual flux generated within the specimen and comparing it with that from the FEMM simulation. It is apparent that the data from the FEMM simulation agree with the actual measurements. Furthermore, the FEMM results showed that the magnetic design is able to provide a sufficient and uniform magnetic field over all the surfaces of the MRE.
Aggregation Trade Offs in Family Based Recommendations
NASA Astrophysics Data System (ADS)
Berkovsky, Shlomo; Freyne, Jill; Coombe, Mac
Personalized information access tools are frequently based on collaborative filtering recommendation algorithms. Collaborative filtering recommender systems typically suffer from a data sparsity problem, where systems do not have sufficient user data to generate accurate and reliable predictions. Prior research suggested using group-based user data in the collaborative filtering recommendation process to generate group-based predictions and partially resolve the sparsity problem. Although group recommendations are less accurate than personalized recommendations, they are more accurate than general non-personalized recommendations, which are the natural fall back when personalized recommendations cannot be generated. In this work we present initial results of a study that exploits the browsing logs of real families of users gathered in an eHealth portal. The browsing logs allowed us to experimentally compare the accuracy of two group-based recommendation strategies: aggregated group models and aggregated predictions. Our results showed that aggregating individual models into group models resulted in more accurate predictions than aggregating individual predictions into group predictions.
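A minimal sketch contrasting the two aggregation strategies compared above on a small ratings matrix: average the members' profiles into one group model and predict once, versus predict for each member and average the predictions. A simple item-mean-plus-user-offset predictor stands in for full collaborative filtering; the ratings are hypothetical.

```python
import numpy as np

# Rows = family members, cols = items, NaN = unrated
R = np.array([[5.0, 3.0, np.nan, 1.0],
              [4.0, np.nan, 4.0, 1.0],
              [np.nan, 2.0, 5.0, np.nan]])

def predict(profile, item):
    """Item mean plus the profile's bias relative to the global mean."""
    offset = np.nanmean(profile) - np.nanmean(R)
    return np.nanmean(R[:, item]) + offset

item = 2
# Strategy 1: aggregate member profiles into one group model, then predict
group_profile = np.nanmean(R, axis=0)
pred_model = predict(group_profile, item)
# Strategy 2: predict for each member, then aggregate the predictions
pred_preds = np.mean([predict(R[u], item) for u in range(R.shape[0])])
print(pred_model, pred_preds)
```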
Taylor, Terence E; Lacalle Muls, Helena; Costello, Richard W; Reilly, Richard B
2018-01-01
Asthma and chronic obstructive pulmonary disease (COPD) patients are required to inhale forcefully and deeply to receive medication when using a dry powder inhaler (DPI). There is a clinical need to objectively monitor the inhalation flow profile of DPIs in order to remotely monitor patient inhalation technique. Audio-based methods have been previously employed to accurately estimate flow parameters such as the peak inspiratory flow rate of inhalations; however, these methods required multiple calibration inhalation audio recordings. In this study, an audio-based method is presented that accurately estimates inhalation flow profile using only one calibration inhalation audio recording. Twenty healthy participants were asked to perform 15 inhalations through a placebo Ellipta™ DPI at a range of inspiratory flow rates. Inhalation flow signals were recorded using a pneumotachograph spirometer while inhalation audio signals were recorded simultaneously using the Inhaler Compliance Assessment device attached to the inhaler. The acoustic (amplitude) envelope was estimated from each inhalation audio signal. Using only one recording, linear and power law regression models were employed to determine which model best described the relationship between the inhalation acoustic envelope and flow signal. Each model was then employed to estimate the flow signals of the remaining 14 inhalation audio recordings. This process was repeated until each of the 15 recordings had been employed to calibrate a single model while testing on the remaining 14 recordings. It was observed that power law models generated the highest average flow estimation accuracy across all participants (90.89±0.9% for power law models and 76.63±2.38% for linear models). The method also achieved sufficient accuracy in estimating inhalation parameters such as peak inspiratory flow rate and inspiratory capacity in the presence of noise. Estimating inhaler inhalation flow profiles using audio-based methods may be clinically beneficial for inhaler technique training and the remote monitoring of patient adherence.
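A minimal sketch of the power-law calibration described above: fit flow = a * envelope^b in log-log space from a single calibration recording, then apply the model to a new acoustic envelope. The signals are synthetic placeholders, not study data.

```python
import numpy as np

rng = np.random.default_rng(8)
# Synthetic single calibration recording: envelope and measured flow (L/min)
env_cal = np.linspace(0.05, 1.0, 200)
flow_cal = 120.0 * env_cal**0.55 * rng.lognormal(0, 0.03, 200)

# Power-law fit, flow = a * env**b, via linear regression in log-log space
b, log_a = np.polyfit(np.log(env_cal), np.log(flow_cal), 1)
a = np.exp(log_a)

def estimate_flow(envelope):
    """Estimate an inhalation flow profile from a new audio envelope."""
    return a * envelope**b

env_new = np.abs(np.sin(np.linspace(0, np.pi, 150)))   # a new inhalation
flow_est = estimate_flow(env_new)
print(f"a={a:.1f}, b={b:.2f}, PIFR estimate = {flow_est.max():.1f} L/min")
```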
NASA Astrophysics Data System (ADS)
Mukhanov, V. F.
2016-10-01
In March 2013, following an accurate processing of available measurement data, the Planck Scientific Collaboration published the highest-resolution photograph ever of the early Universe when it was only a few hundred thousand years old. The photograph showed galactic seeds in sufficient detail to test some nontrivial theoretical predictions made more than thirty years ago. Most amazing was that all predictions were confirmed to be remarkably accurate. With no exaggeration, we may consider it established experimentally that quantum physics, which is normally assumed to be relevant on the atomic and subatomic scale, also works on the scale of the entire Universe, determining its structure with all its galaxies, stars, and planets.
Tang, Yat T; Marshall, Garland R
2011-02-28
Binding affinity prediction is one of the most critical components to computer-aided structure-based drug design. Despite advances in first-principle methods for predicting binding affinity, empirical scoring functions that are fast and only relatively accurate are still widely used in structure-based drug design. With the increasing availability of X-ray crystallographic structures in the Protein Data Bank and continuing application of biophysical methods such as isothermal titration calorimetry to measure thermodynamic parameters contributing to binding free energy, sufficient experimental data exists that scoring functions can now be derived by separating enthalpic (ΔH) and entropic (TΔS) contributions to binding free energy (ΔG). PHOENIX, a scoring function to predict binding affinities of protein-ligand complexes, utilizes the increasing availability of experimental data to improve binding affinity predictions by the following: model training and testing using high-resolution crystallographic data to minimize structural noise, independent models of enthalpic and entropic contributions fitted to thermodynamic parameters assumed to be thermodynamically biased to calculate binding free energy, use of shape and volume descriptors to better capture entropic contributions. A set of 42 descriptors and 112 protein-ligand complexes were used to derive functions using partial least-squares for change of enthalpy (ΔH) and change of entropy (TΔS) to calculate change of binding free energy (ΔG), resulting in a predictive r² (r²_pred) of 0.55 and a standard error (SE) of 1.34 kcal/mol. External validation using the 2009 version of the PDBbind "refined set" (n = 1612) resulted in a Pearson correlation coefficient (R_p) of 0.575 and a mean error (ME) of 1.41 pK_d. Enthalpy and entropy predictions were of limited accuracy individually. However, their difference resulted in a relatively accurate binding free energy. While the development of an accurate and applicable scoring function was an objective of this study, the main focus was evaluation of the use of high-resolution X-ray crystal structures with high-quality thermodynamic parameters from isothermal titration calorimetry for scoring function development. With the increasing application of structure-based methods in molecular design, this study suggests that using high-resolution crystal structures, separating enthalpy and entropy contributions to binding free energy, and including descriptors to better capture entropic contributions may prove to be effective strategies toward rapid and accurate calculation of binding affinity.
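A minimal sketch of the two-model idea described above: separate partial least-squares fits for the enthalpic and entropic contributions, combined as ΔG = ΔH − TΔS. The descriptors and calorimetric values are synthetic placeholders, not the 42 PHOENIX descriptors or ITC data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(9)
X = rng.normal(size=(112, 42))   # structure-derived descriptors per complex
# Synthetic "calorimetric" targets (kcal/mol in the real setting)
dH = X @ rng.normal(size=42) + rng.normal(scale=0.5, size=112)
TdS = X @ rng.normal(size=42) + rng.normal(scale=0.5, size=112)

# Independent PLS models for the enthalpic and entropic contributions
pls_h = PLSRegression(n_components=6).fit(X, dH)
pls_s = PLSRegression(n_components=6).fit(X, TdS)

# Binding free energy of new complexes from the two-model difference
X_new = rng.normal(size=(5, 42))
dG_pred = pls_h.predict(X_new).ravel() - pls_s.predict(X_new).ravel()
print(dG_pred)
```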
Intrawellbore kinematic and frictional losses in a horizontal well in a bounded confined aquifer
NASA Astrophysics Data System (ADS)
Wang, Quanrong; Zhan, Hongbin
2017-01-01
Horizontal drilling has become an appealing technology for water resource exploration or aquifer remediation in recent decades, due to decreasing operational cost and many technical advantages over vertical wells. However, many previous studies on flow into horizontal wells were based on the Uniform Flux Boundary Condition (UFBC), which does not reflect the physical processes of flow inside the well accurately. In this study, we investigated transient flow into a horizontal well in an anisotropic confined aquifer laterally bounded by two constant-head boundaries. Three types of boundary conditions were employed to treat the horizontal well, including UFBC, Uniform-Head Boundary Condition (UHBC), and Mixed-Type Boundary Condition (MTBC). The MTBC model considered both kinematic and frictional effects inside the horizontal well, in which the kinematic effect referred to the accelerational and fluid-inflow effects. A new solution of UFBC was derived by superimposing the point sink/source solutions along the axis of a horizontal well with a uniform flux distribution. New solutions of UHBC and MTBC were obtained by a hybrid analytical-numerical method, and an iterative method was proposed to determine the well discretization required for achieving sufficiently accurate results. This study showed that the differences among the UFBC, UHBC, and MTBC solutions were obvious near the well screen, decreased with distance from the well, and became negligible near the constant-head boundary. The relationship between the flow rate and the drawdown was nonlinear for the MTBC solution, while it was linear for the UFBC and UHBC solutions.
NASA Astrophysics Data System (ADS)
Xie, Hang; Jiang, Feng; Tian, Heng; Zheng, Xiao; Kwok, Yanho; Chen, Shuguang; Yam, ChiYung; Yan, YiJing; Chen, Guanhua
2012-07-01
Based on our hierarchical equations of motion for time-dependent quantum transport [X. Zheng, G. H. Chen, Y. Mo, S. K. Koo, H. Tian, C. Y. Yam, and Y. J. Yan, J. Chem. Phys. 133, 114101 (2010), 10.1063/1.3475566], we develop an efficient and accurate numerical algorithm to solve the Liouville-von Neumann equation. We solve the real-time evolution of the reduced single-electron density matrix at the tight-binding level. Calculations are carried out to simulate the transient current through a linear chain of atoms, with each represented by a single orbital. The self-energy matrix is expanded in terms of multiple Lorentzian functions, and the Fermi distribution function is evaluated via the Padé spectrum decomposition. This Lorentzian-Padé decomposition scheme is employed to simulate the transient current. With sufficient Lorentzian functions used to fit the self-energy matrices, we show that the lead spectral function and the dynamic response can be treated accurately. Compared to the conventional master equation approaches, our method is much more efficient, as the computational time scales cubically with the system size and linearly with the simulation time. As a result, simulations of transient currents through systems containing up to one hundred atoms have been carried out. As density functional theory is also an effective one-particle theory, the Lorentzian-Padé decomposition scheme developed here can be generalized for first-principles simulation of realistic systems.
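The Lorentzian-expansion step can be pictured as an ordinary nonlinear fit of tabulated spectral data by a sum of Lorentzians. Below is a generic scipy sketch with a stand-in spectrum and an assumed number of terms; the authors' actual fitting scheme may differ.

```python
import numpy as np
from scipy.optimize import least_squares

def lorentz_sum(params, w):
    """Sum of Lorentzians A_k * W_k / ((w - O_k)^2 + W_k^2)."""
    A, O, W = params.reshape(3, -1)
    return (A[:, None] * W[:, None]
            / ((w[None, :] - O[:, None])**2 + W[:, None]**2)).sum(axis=0)

def fit_lorentzians(w, spectrum, n_terms=4):
    p0 = np.concatenate([np.ones(n_terms),                        # amplitudes
                         np.linspace(w.min(), w.max(), n_terms),  # centers
                         np.ones(n_terms)])                       # widths
    return least_squares(lambda p: lorentz_sum(p, w) - spectrum, p0).x

w = np.linspace(-5.0, 5.0, 400)
target = 1.0 / (w**2 + 1.0)        # stand-in spectral function
params = fit_lorentzians(w, target)
```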
A HYBRID MODE MODEL OF THE BLAZHKO EFFECT, SHOWN TO ACCURATELY FIT KEPLER DATA FOR RR Lyr
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bryant, Paul H., E-mail: pbryant@ucsd.edu
2014-03-01
The waveform for Blazhko stars can be substantially different during the ascending and descending parts of the Blazhko cycle. A hybrid model, consisting of two component oscillators of the same frequency, is proposed as a means to fit the data over the entire cycle. One component exhibits a sawtooth-like velocity waveform while the other is nearly sinusoidal. One method of generating such a hybrid is presented: a nonlinear model is developed for the first overtone mode, which, if excited to large amplitude, is found to drop strongly in frequency and become highly non-sinusoidal. If the frequency drops sufficiently to become equal to the fundamental frequency, the two can become phase locked and form the desired hybrid. A relationship is assumed between the hybrid mode velocity and the observed light curve, which is approximated as a power series. An accurate fit of the hybrid model is made to actual Kepler data for RR Lyr. The sinusoidal component may tend to stabilize the period of the hybrid which is found in real Blazhko data to be extremely stable. It is proposed that the variations in amplitude and phase might result from a nonlinear interaction with a third mode, possibly a nonradial mode at 3/2 the fundamental frequency. The hybrid model also applies to non-Blazhko RRab stars and provides an explanation for the light curve bump. A method to estimate the surface gravity is also proposed.
Multiple scattering theory for total skin electron beam design.
Antolak, J A; Hogstrom, K R
1998-06-01
The purpose of this manuscript is to describe a method for designing a broad beam of electrons suitable for total skin electron irradiation (TSEI). A theoretical model of a TSEI beam from a linear accelerator with a dual scattering system has been developed. The model uses Fermi-Eyges theory to predict the planar fluence of the electron beam after it has passed through various materials between the source and the treatment plane, which includes scattering foils, monitor chamber, air, and a plastic diffusing plate. Unique to this model is its accounting for removal of the tails of the electron beam profile as it passes through the primary x-ray jaws. A method for calculating the planar fluence profile for an obliquely incident beam is also described. Off-axis beam profiles and percentage depth doses are measured with ion chambers, film, and thermoluminescent dosimeters (TLD). The measured data show that the theoretical model can accurately predict beam energy and planar fluence of the electron beam at normal and oblique incidence. The agreement at oblique angles is not quite as good but is sufficiently accurate to be of predictive value when deciding on the optimal angles for the clinical TSEI beams. The advantage of our calculational approach for designing a TSEI beam is that many different beam configurations can be tested without having to perform time-consuming measurements. Suboptimal configurations can be quickly dismissed, and the predicted optimal solution should be very close to satisfying the clinical specifications.
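The Fermi-Eyges ingredient of such a model is the pencil-beam spatial spread, sigma_x^2(z) = integral from 0 to z of T(u) (z - u)^2 du, with T the linear scattering power of the material traversed. A small numpy sketch, with a purely illustrative depth grid and scattering-power profile standing in for the foil/chamber/air/plate stack:

```python
import numpy as np

def fermi_eyges_sigma(z_grid, T_of_z):
    """sigma_x(z) from sigma_x^2(z) = int_0^z T(u) (z - u)^2 du."""
    sigma2 = np.empty_like(z_grid)
    for i, z in enumerate(z_grid):
        u = z_grid[: i + 1]
        sigma2[i] = np.trapz(T_of_z[: i + 1] * (z - u) ** 2, u)
    return np.sqrt(sigma2)

z = np.linspace(0.0, 100.0, 1000)      # depth in cm, mostly air (illustrative)
T = np.full_like(z, 1e-5)              # scattering power, rad^2/cm, assumed
T[(z > 1.0) & (z < 1.1)] = 5e-2        # a thin scattering foil, assumed
sigma = fermi_eyges_sigma(z, T)        # Gaussian spread at each depth
```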
Cochlear compression: perceptual measures and implications for normal and impaired hearing.
Oxenham, Andrew J; Bacon, Sid P
2003-10-01
This article provides a review of recent developments in our understanding of how cochlear nonlinearity affects sound perception and how a loss of the nonlinearity associated with cochlear hearing impairment changes the way sounds are perceived. The response of the healthy mammalian basilar membrane (BM) to sound is sharply tuned, highly nonlinear, and compressive. Damage to the outer hair cells (OHCs) results in changes to all three attributes: in the case of total OHC loss, the response of the BM becomes broadly tuned and linear. Many of the differences in auditory perception and performance between normal-hearing and hearing-impaired listeners can be explained in terms of these changes in BM response. Effects that can be accounted for in this way include poorer audiometric thresholds, loudness recruitment, reduced frequency selectivity, and changes in apparent temporal processing. All these effects can influence the ability of hearing-impaired listeners to perceive speech, especially in complex acoustic backgrounds. A number of behavioral methods have been proposed to estimate cochlear nonlinearity in individual listeners. By separating the effects of cochlear nonlinearity from other aspects of hearing impairment, such methods may contribute towards identifying the different physiological mechanisms responsible for hearing loss in individual patients. This in turn may lead to more accurate diagnoses and more effective hearing-aid fitting for individual patients. A remaining challenge is to devise a behavioral measure that is sufficiently accurate and efficient to be used in a clinical setting.
Yoshihara, Motojiro; Yoshihara, Motoyuki
In this article, we describe an incorrect use of logic which involves the careless application of the 'necessary and sufficient' condition originally used in formal logic. This logical fallacy is causing frequent confusion in current biology, especially in neuroscience. In order to clarify this problem, we first dissect the structure of this incorrect logic (which we refer to as 'misapplied-N&S') to show how necessity and sufficiency in misapplied-N&S are not matching each other. Potential pitfalls of utilizing misapplied-N&S are exemplified by cases such as the discrediting of command neurons and other potentially key neurons, the distorting of truth in optogenetic studies, and the wrongful justification of studies with little meaning. In particular, the use of the word 'sufficient' in optogenetics tends to generate misunderstandings by opening up multiple interpretations. To avoid the confusion caused by the misleading logic, we now recommend using 'indispensable and inducing' instead of using 'necessary and sufficient.' However, we ultimately recommend fully articulating the limits of what our experiments suggest, not relying on such simple phrases. Only after this problem is fully understood and more rigorous language is demanded, can we finally interpret experimental results in an accurate way.
Delatour, Vincent; Lalere, Beatrice; Saint-Albin, Karène; Peignaux, Maryline; Hattchouel, Jean-Marc; Dumont, Gilles; De Graeve, Jacques; Vaslin-Reimann, Sophie; Gillery, Philippe
2012-11-20
The reliability of biological tests is a major public health issue for patient care, one that involves high economic stakes. Reference methods, as well as regular external quality assessment schemes (EQAS), are needed to monitor the analytical performance of field methods. However, control material commutability is a major concern when assessing method accuracy. To overcome material non-commutability, we investigated the possibility of using lyophilized serum samples together with a limited number of frozen serum samples to assign matrix-corrected target values, taking the example of glucose assays. Trueness of the current glucose assays was first measured against a primary reference method by using human frozen sera. Methods using hexokinase and glucose oxidase with spectroreflectometric detection proved very accurate, with bias ranging between -2.2% and +2.3%. Bias of methods using glucose oxidase with spectrophotometric detection was +4.5%. Matrix-related bias of the lyophilized materials was then determined and ranged from +2.5% to -14.4%. Matrix-corrected target values were assigned and used to assess trueness of 22 sub-peer groups. We demonstrated that matrix-corrected target values can be a valuable tool to assess field method accuracy in large-scale surveys where commutable materials are not available in sufficient amounts at acceptable cost. Copyright © 2012 Elsevier B.V. All rights reserved.
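The correction itself is simple arithmetic, sketched here with hypothetical numbers: the value assigned to a lyophilized material by the reference method is adjusted by that material's matrix-related bias before field-method trueness is judged against it.

```python
# Illustrative only; all numbers are hypothetical.
reference_value = 5.50    # mmol/L, primary reference method on the material
matrix_bias = -0.144      # fractional matrix-related bias (e.g., -14.4%)
corrected_target = reference_value * (1.0 + matrix_bias)

field_mean = 4.80         # hypothetical peer-group mean of a field method
bias_pct = 100.0 * (field_mean - corrected_target) / corrected_target
print(f"corrected target = {corrected_target:.2f} mmol/L, bias = {bias_pct:+.1f}%")
```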
Bache, Steven T.; Juang, Titania; Belley, Matthew D.; Koontz, Bridget F.; Adamovics, John; Yoshizumi, Terry T.; Kirsch, David G.; Oldham, Mark
2015-01-01
Purpose: Sophisticated small animal irradiators, incorporating cone-beam-CT image-guidance, have recently been developed which enable exploration of the efficacy of advanced radiation treatments in the preclinical setting. Microstereotactic-body-radiation-therapy (microSBRT) is one technique of interest, utilizing field sizes in the range of 1–15 mm. Verification of the accuracy of microSBRT treatment delivery is challenging due to the lack of available methods to comprehensively measure dose distributions in representative phantoms with sufficiently high spatial resolution and in 3 dimensions (3D). This work introduces a potential solution in the form of anatomically accurate rodent-morphic 3D dosimeters compatible with ultrahigh resolution (0.3 mm³) optical computed tomography (optical-CT) dose read-out. Methods: Rodent-morphic dosimeters were produced by 3D-printing molds of rodent anatomy directly from contours defined on x-ray CT data sets of rats and mice, and using these molds to create tissue-equivalent radiochromic 3D dosimeters from Presage. Anatomically accurate spines were incorporated into some dosimeters, by first 3D printing the spine mold, then forming a high-Z bone equivalent spine insert. This spine insert was then set inside the tissue equivalent body mold. The high-Z spinal insert enabled representative cone-beam CT IGRT targeting. On irradiation, a linear radiochromic change in optical-density occurs in the dosimeter, which is proportional to absorbed dose, and was read out using optical-CT in high-resolution (0.5 mm isotropic voxels). Optical-CT data were converted to absolute dose in two ways: (i) using a calibration curve derived from other Presage dosimeters from the same batch, and (ii) by independent measurement of calibrated dose at a point using a novel detector comprised of a yttrium oxide based nanocrystalline scintillator, with a submillimeter active length. A microSBRT spinal treatment was delivered consisting of a 180° continuous arc at 225 kVp with a 20 × 10 mm field size. Dose response was evaluated using both the Presage/optical-CT 3D dosimetry system described above, and independent verification in select planes using EBT2 radiochromic film placed inside rodent-morphic dosimeters that had been sectioned in half. Results: Rodent-morphic 3D dosimeters were successfully produced from Presage radiochromic material by utilizing 3D printed molds of rat CT contours. The dosimeters were found to be compatible with optical-CT dose readout in high-resolution 3D (0.5 mm isotropic voxels) with minimal artifacts or noise. Cone-beam CT image guidance was possible with these dosimeters due to sufficient contrast between high-Z spinal inserts and tissue equivalent Presage material (CNR ∼10 on CBCT images). Dose at isocenter measured with optical-CT was found to agree with nanoscintillator measurement to within 2.8%. Maximum dose in line profiles taken through Presage and film dose slices agreed within 3%, with FWHM measurements through each profile found to agree within 2%. Conclusions: This work demonstrates the feasibility of using 3D printing technology to make anatomically accurate Presage rodent-morphic dosimeters incorporating spinal-mimicking inserts. High quality optical-CT 3D dosimetry is feasible on these dosimeters, despite the irregular surfaces and implanted inserts. The ability to measure dose distributions in anatomically accurate phantoms represents a powerful additional verification tool for preclinical microSBRT. PMID:25652497
Microbiological testing of pharmaceuticals and cosmetics in Egypt.
Zeitoun, Hend; Kassem, Mervat; Raafat, Dina; AbouShlieb, Hamida; Fanaki, Nourhan
2015-12-09
Microbial contamination of pharmaceuticals poses a great problem to the pharmaceutical manufacturing process, especially from a medical as well as an economic point of view. Depending upon the product and its intended use, the identification of isolates should not merely be limited to the United States Pharmacopeia (USP) indicator organisms. Eighty-five pre-used non-sterile pharmaceuticals collected from random consumers in Egypt were examined for the eventual presence of bacterial contaminants. Forty-one bacterial contaminants were isolated from 31 of the tested preparations. These isolates were subjected to biochemical identification by both conventional tests as well as API kits, which were sufficient for the accurate identification of only 11 out of the 41 bacterial contaminants (26.8%) to the species level. The remaining isolates were inconclusively identified or showed contradictory results after using both biochemical methods. Using molecular methods, 24 isolates (58.5%) were successfully identified to the species level. Moreover, polymerase chain reaction (PCR) assays were compared to standard biochemical methods in the detection of pharmacopoeial bacterial indicators in artificially-contaminated pharmaceutical samples. PCR-based methods proved to be superior regarding speed, cost-effectiveness and sensitivity. Therefore, pharmaceutical manufacturers would be advised to adopt PCR-based methods in the microbiological quality testing of pharmaceuticals in the future.
Impedance Eduction in Large Ducts Containing Higher-Order Modes and Grazing Flow
NASA Technical Reports Server (NTRS)
Watson, Willie R.; Jones, Michael G.
2017-01-01
Impedance eduction test data are acquired in ducts with small and large cross-sectional areas at the NASA Langley Research Center. An improved data acquisition system in the large duct has resulted in increased control of the acoustic energy in source modes and more accurate resolution of higher-order duct modes compared to previous tests. Two impedance eduction methods that take advantage of the improved data acquisition to educe the liner impedance in grazing flow are presented. One method measures the axial propagation constant of a dominant mode in the liner test section (by implementing the Kumaresan and Tufts algorithm) and educes the impedance from an exact analytical expression. The second method solves numerically the convected Helmholtz equation and minimizes an objective function to obtain the liner impedance. The two methods are tested first on data synthesized from an exact mode solution and then on measured data. Results show that when the methods are applied to data acquired in the larger duct with a dominant higher-order mode, the same impedance spectra are educed as those obtained in the small duct where only the plane wave mode propagates. This result holds for each higher-order mode in the large duct provided that the higher-order mode is sufficiently attenuated by the liner.
Stability and stabilization of the lattice Boltzmann method
NASA Astrophysics Data System (ADS)
Brownlee, R. A.; Gorban, A. N.; Levesley, J.
2007-03-01
We revisit the classical stability versus accuracy dilemma for the lattice Boltzmann methods (LBM). Our goal is a stable method of second-order accuracy for fluid dynamics based on the lattice Bhatnagar-Gross-Krook method (LBGK). The LBGK scheme can be recognized as a discrete dynamical system generated by free flight and entropic involution. In this framework the stability and accuracy analysis are more natural. We find the necessary and sufficient conditions for second-order accurate fluid dynamics modeling. In particular, it is proven that in order to guarantee second-order accuracy the distribution should belong to a distinguished surface—the invariant film (up to second order in the time step). This surface is the trajectory of the (quasi)equilibrium distribution surface under free flight. The main instability mechanisms are identified. The simplest recipes for stabilization add no artificial dissipation (up to second order) and provide second-order accuracy of the method. Two other prescriptions add some artificial dissipation locally and prevent the system from loss of positivity and local blowup. Demonstrations of the proposed stable LBGK schemes are provided by the numerical simulation of a one-dimensional (1D) shock tube and the unsteady 2D flow around a square cylinder up to Reynolds number Re ≈ 20000.
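For readers unfamiliar with LBGK, the collision-plus-free-flight structure that the analysis is framed around looks roughly as follows. This D2Q9 numpy sketch uses an illustrative relaxation time and periodic streaming, and includes none of the entropic stabilization recipes discussed above.

```python
import numpy as np

c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])      # D2Q9 velocities
w = np.array([4/9] + [1/9]*4 + [1/36]*4)                # lattice weights

def equilibrium(rho, u):
    cu = np.einsum('qd,dxy->qxy', c, u)
    usq = (u**2).sum(axis=0)
    return rho * w[:, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def lbgk_step(f, tau):
    rho = f.sum(axis=0)
    u = np.einsum('qd,qxy->dxy', c.astype(float), f) / rho
    f = f + (equilibrium(rho, u) - f) / tau             # BGK collision
    for q, (cx, cy) in enumerate(c):                    # free flight (periodic)
        f[q] = np.roll(np.roll(f[q], cx, axis=0), cy, axis=1)
    return f

nx = ny = 64
u0 = 1e-2 * np.random.rand(2, nx, ny)                   # small random velocity
f = equilibrium(np.ones((nx, ny)), u0)
for _ in range(100):
    f = lbgk_step(f, tau=0.6)
```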
Cai, Xiao-Ming; Xu, Xiu-Xiu; Bian, Lei; Luo, Zong-Xiu; Chen, Zong-Mao
2015-12-01
Determination of volatile plant compounds in field ambient air is important to understand chemical communication between plants and insects and will aid the development of semiochemicals from plants for pest control. In this study, a thermal desorption-gas chromatography-mass spectrometry (TD-GC-MS) method was developed to measure ultra-trace levels of volatile plant compounds in field ambient air. The desorption parameters of TD, including sorbent tube material, tube desorption temperature, desorption time, and cold trap temperature, were selected and optimized. In GC-MS analysis, the selected ion monitoring mode was used for enhanced sensitivity and selectivity. This method was sufficiently sensitive to detect part-per-trillion levels of volatile plant compounds in field ambient air. Laboratory and field evaluation revealed that the method presented high precision and accuracy. Field studies indicated that the background odor of tea plantations contained some common volatile plant compounds, such as (Z)-3-hexenol, methyl salicylate, and (E)-ocimene, at concentrations ranging from 1 to 3400 ng m⁻³. In addition, the background odor in summer was more abundant in quality and quantity than in autumn. Relative to previous methods, the TD-GC-MS method is more sensitive, permitting accurate qualitative and quantitative measurements of volatile plant compounds in field ambient air.
A multigrid method for steady Euler equations on unstructured adaptive grids
NASA Technical Reports Server (NTRS)
Riemslagh, Kris; Dick, Erik
1993-01-01
A flux-difference splitting type algorithm is formulated for the steady Euler equations on unstructured grids. The polynomial flux-difference splitting technique is used. A vertex-centered finite volume method is employed on a triangular mesh. The multigrid method is in defect-correction form. A relaxation procedure with a first-order accurate inner iteration and a second-order correction, performed only on the finest grid, is used. A multi-stage Jacobi relaxation method is employed as a smoother. Since the grid is unstructured, a Jacobi-type smoother is chosen. The multi-staging is necessary to provide sufficient smoothing properties. The domain is discretized using a Delaunay triangular mesh generator. Three grids with more or less uniform distribution of nodes but with different resolution are generated by successive refinement of the coarsest grid. Nodes of coarser grids appear in the finer grids. The multigrid method is started on these grids. As soon as the residual drops below a threshold value, an adaptive refinement is started. The solution on the adaptively refined grid is accelerated by a multigrid procedure. The coarser multigrid grids are generated by successive coarsening through point removal. The adaption cycle is repeated a few times. Results are given for the transonic flow over a NACA-0012 airfoil.
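The defect-correction multigrid structure can be illustrated on a 1D Poisson model problem, u'' = f: smooth on the fine grid, restrict the residual (defect), correct on the coarse grid, and prolongate back. This sketch mirrors the shape of the cycle, not the Euler solver itself; grid sizes and sweep counts are arbitrary.

```python
import numpy as np

def jacobi(u, f, h, sweeps):
    """Jacobi sweeps for u'' = f with homogeneous Dirichlet ends."""
    for _ in range(sweeps):
        u[1:-1] = 0.5 * (u[:-2] + u[2:] - h*h*f[1:-1])
    return u

def two_grid(u, f, h, nu=3):
    u = jacobi(u, f, h, nu)                         # pre-smoothing
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (u[:-2] - 2*u[1:-1] + u[2:]) / (h*h)  # defect
    rc = r[::2].copy()                              # restriction (injection)
    ec = jacobi(np.zeros_like(rc), rc, 2*h, 50)     # approximate coarse solve
    e = np.zeros_like(u)
    e[::2] = ec                                     # prolongation ...
    e[1:-1:2] = 0.5 * (e[:-2:2] + e[2::2])          # ... with interpolation
    return jacobi(u + e, f, h, nu)                  # post-smoothing

n = 65
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.sin(np.pi * x)
u = np.zeros(n)
for _ in range(20):
    u = two_grid(u, f, h)          # converges toward -sin(pi x) / pi^2
```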
A robust, efficient and flexible method for staining myelinated axons in blocks of brain tissue.
Wahlsten, Douglas; Colbourne, Frederick; Pleus, Richard
2003-03-15
Previous studies have demonstrated the utility of the gold chloride method for en bloc staining of a bisected brain in mice and rats. The present study explores several variations in the method, assesses its reliability, and extends the limits of its application. We conclude that the method is very efficient, highly robust, sufficiently accurate for most purposes, and adaptable to many morphometric measures. We obtained acceptable staining of commissures in every brain, despite a wide variety of fixation methods. One-half could be stained 24 h after the brain was extracted and the other half could be stained months later. When staining failed because of an exhausted solution, the brain could be stained successfully in fresh solution. Relatively small changes were found in the sizes of commissures several weeks after initial fixation or staining. A half brain stained to reveal the mid-sagittal section could then be sectioned coronally and stained again in either gold chloride for myelin or cresyl violet for Nissl substance. Uncertainty arising from pixelation of digitized images was far less than errors arising from human judgments about the histological limits of major commissures. Useful data for morphometric analysis were obtained by scanning the surface of a gold chloride stained block of brain with an inexpensive flatbed scanner.
Optimal Cluster Mill Pass Scheduling With an Accurate and Rapid New Strip Crown Model
NASA Astrophysics Data System (ADS)
Malik, Arif S.; Grandhi, Ramana V.; Zipf, Mark E.
2007-05-01
Besides the requirement to roll coiled sheet at high levels of productivity, the optimal pass scheduling of cluster-type reversing cold mills presents the added challenge of assigning mill parameters that facilitate the best possible strip flatness. The pressures of intense global competition, and the requirements for increasingly thinner, higher quality specialty sheet products that are more difficult to roll, continue to force metal producers to commission innovative flatness-control technologies. This means that during the on-line computerized set-up of rolling mills, the mathematical model should not only determine the minimum total number of passes and maximum rolling speed, it should simultaneously optimize the pass-schedule so that desired flatness is assured, either by manual or automated means. In many cases today, however, on-line prediction of strip crown and corresponding flatness for the complex cluster-type rolling mills is typically addressed either by trial and error, by approximate deflection models for equivalent vertical roll-stacks, or by non-physical pattern recognition style models. The abundance of the aforementioned methods is largely due to the complexity of cluster-type mill configurations and the lack of deflection models with sufficient accuracy and speed for on-line use. Without adequate assignment of the pass-schedule set-up parameters, it may be difficult or impossible to achieve the required strip flatness. In this paper, we demonstrate optimization of cluster mill pass-schedules using a new accurate and rapid strip crown model. This pass-schedule optimization includes computations of the predicted strip thickness profile to validate mathematical constraints. In contrast to many of the existing methods for on-line prediction of strip crown and flatness on cluster mills, the demonstrated method requires minimal prior tuning and no extensive training with collected mill data. To rapidly and accurately solve the multi-contact problem and predict the strip crown, a new customized semi-analytical modeling technique that couples the Finite Element Method (FEM) with classical solid mechanics was developed to model the deflection of the rolls and strip while under load. The technique employed offers several important advantages over traditional methods to calculate strip crown, including continuity of elastic foundations, non-iterative solution when using predetermined foundation moduli, continuous third-order displacement fields, simple stress-field determination, and a comparatively faster solution time.
Determining thyroid ¹³¹I effective half-life for the treatment planning of Graves' disease
DOE Office of Scientific and Technical Information (OSTI.GOV)
Willegaignon, Jose; Sapienza, Marcelo T.; Barberio Coura Filho, George
2013-02-15
Purpose: Thyroid ¹³¹I effective half-life (T_eff) is an essential parameter in patient therapy when an accurate radiation dose is desirable for producing an intended therapeutic outcome. Multiple ¹³¹I uptake measurements and resources from patients themselves and from nuclear medicine facilities are requisites for determining T_eff, these being limiting factors when implementing the treatment planning of Graves' disease (GD) in radionuclide therapy. With the aim of optimizing this process, this study presents a practical, propitious, and accurate method of determining T_eff for dosimetric purposes. Methods: A total of 50 patients with GD were included in this prospective study. Thyroidal ¹³¹I uptake was measured at 2-h, 6-h, 24-h, 48-h, 96-h, and 220-h postradioiodine administration. T_eff was calculated by considering sets of two measured points (24-48-h, 24-96-h, and 24-220-h), sets of three (24-48-96-h, 24-48-220-h, and 24-96-220-h), and sets of four (24-48-96-220-h). Results: When considering all the measured points, the representative T_eff for all the patients was 6.95 (±0.81) days, whereas when using such sets of points as (24-220-h), (24-96-220-h), and (24-48-220-h), this was 6.85 (±0.81), 6.90 (±0.81), and 6.95 (±0.81) days, respectively. According to the mean deviations 2.2 (±2.4)%, 2.1 (±2.0)%, and 0.04 (±0.09)% found in T_eff, calculated based on all the measured points in time, and with methods using the (24-220-h), (24-48-220-h), and (24-96-220-h) sets, respectively, no meaningful statistical difference was noted among the three methods (p > 0.500, t test). Conclusions: T_eff obtained from only two thyroid ¹³¹I uptakes measured at 24-h and 220-h, besides proving to be sufficient, accurate enough, and easily applicable, attributes additional major cost-benefits for patients, and facilitates the application of the method for dosimetric purposes in the treatment planning of Graves' disease.
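Assuming a single-exponential clearance between two uptake points, the two-point estimate reduces to T_eff = ln 2 · (t2 − t1) / ln(U1/U2). A small sketch with hypothetical uptake values at the 24-h and 220-h points recommended above:

```python
import numpy as np

def effective_half_life(t1, u1, t2, u2):
    """T_eff in days from uptakes u1, u2 at times t1, t2 (hours)."""
    lam = np.log(u1 / u2) / (t2 - t1)     # effective decay constant, 1/h
    return np.log(2.0) / lam / 24.0       # convert hours to days

print(effective_half_life(24.0, 0.60, 220.0, 0.27))  # ~7.1 days (made-up uptakes)
```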
Raymond L. Czaplewski
2003-01-01
No thematic map is perfect. Some pixels or polygons are not accurately classified, no matter how well the map is crafted. Therefore, thematic maps need metadata that sufficiently characterize the nature and degree of these imperfections. To decision-makers, an accuracy assessment helps judge the risks of using imperfect geospatial data. To analysts, an accuracy...
Code of Federal Regulations, 2013 CFR
2013-01-01
... the required information will be sufficient. C. Property Survey Map. A current survey map of the... specimen trees. If a current survey does not exist, the most accurate document which is available will be... suitable drainage and landscaping plans later in the planning process. E. Market survey. A market survey...
ERIC Educational Resources Information Center
Trainin, Guy; Hayden, H. Emily; Wilson, Kathleen; Erickson, Joan
2016-01-01
National reports reveal one third of American fourth graders read below basic level on measures of comprehension. One critical component of comprehension is fluency: rapid, accurate, expressive reading with automaticity and prosody. Many fluency studies and classroom interventions focus only on reading rate, but this alone is not sufficient. This…
High-Accuracy Multisensor Geolocation Technology to Support Geophysical Data Collection at MEC Sites
2012-12-01
image with intensity data in a single step. Flash LiDAR can use both basic solutions to emit laser light, either a single pulse with large aperture will... and a terrestrial laser scanner (TLS). State-of-the-art GPS navigation allows for cm-accurate positioning in open areas where a sufficient number...
75 FR 39154 - Setting the Time and Place for a Hearing Before an Administrative Law Judge
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-08
... independence, ALJs make their decisions free from agency pressure or pressure by a party to decide a particular... challenging job facing our ALJs: holding a sufficient number of hearings and rendering accurate, well-reasoned... rules exerts pressure on ALJs to decide claims in a particular way, precludes an ALJ from developing the...
Finite element analyses of two dimensional, anisotropic heat transfer in wood
John F. Hunt; Hongmei Gu
2004-01-01
The anisotropy of wood creates a complex problem for solving heat and mass transfer problems that require that analyses be based on fundamental material properties of the wood structure. Inputting basic orthogonal properties of the wood material alone is not sufficient for accurate modeling because wood is a combination of porous fiber cells that are aligned and mis-...
ERIC Educational Resources Information Center
Davis, Gregory J.; Gibson, Bradley S.
2012-01-01
Voluntary shifts of attention are often motivated in experimental contexts by using well-known symbols that accurately predict the direction of targets. The authors report 3 experiments, which showed that the presentation of predictive spatial information does not provide sufficient incentive to elicit voluntary shifts of attention. For instance,…
Development of a ROV Deployed Video Analysis Tool for Rapid Measurement of Submerged Oil/Gas Leaks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Savas, Omer
Expanded deep sea drilling around the globe makes it necessary to have readily available tools to quickly and accurately measure discharge rates from accidental submerged oil/gas leak jets, so that first responders can deploy adequate resources for containment. We have developed and tested a field-deployable video analysis software package which is able to provide sufficiently accurate flow rate estimates in the field for initial responders to accidental oil discharges in submarine operations. The essence of our approach is based on tracking coherent features at the interface in the near field of immiscible turbulent jets. The software package, UCB_Plume, is ready to be used by the first responders for field implementation. We have tested the tool on submerged water and oil jets which are made visible using fluorescent dyes. We have been able to estimate the discharge rate within 20% accuracy. A high-end Windows laptop as the operating platform and a USB-connected high-speed, high-resolution monochrome camera as the imaging device are sufficient for acquiring flow images under continuous unidirectional illumination and running the software in the field. Results are obtained over a matter of minutes.
An adaptive discontinuous Galerkin solver for aerodynamic flows
NASA Astrophysics Data System (ADS)
Burgess, Nicholas K.
This work considers the accuracy, efficiency, and robustness of an unstructured high-order accurate discontinuous Galerkin (DG) solver for computational fluid dynamics (CFD). Recently, there has been a drive to reduce the discretization error of CFD simulations using high-order methods on unstructured grids. However, high-order methods are often criticized for lacking robustness and having high computational cost. The goal of this work is to investigate methods that enhance the robustness of high-order discontinuous Galerkin (DG) methods on unstructured meshes, while maintaining low computational cost and high accuracy of the numerical solutions. This work investigates robustness enhancement of high-order methods by examining effective non-linear solvers, shock capturing methods, turbulence model discretizations and adaptive refinement techniques. The goal is to develop an all-encompassing solver that can simulate a large range of physical phenomena, where all aspects of the solver work together to achieve a robust, efficient and accurate solution strategy. The components and framework for a robust high-order accurate solver that is capable of solving viscous, Reynolds Averaged Navier-Stokes (RANS) and shocked flows are presented. In particular, this work discusses robust discretizations of the turbulence model equation used to close the RANS equations, as well as stable shock capturing strategies that are applicable across a wide range of discretization orders and applicable to very strong shock waves. Furthermore, refinement techniques are considered as both efficiency and robustness enhancement strategies. Additionally, efficient non-linear solvers based on multigrid and Krylov subspace methods are presented. The accuracy, efficiency, and robustness of the solver are demonstrated using a variety of challenging aerodynamic test problems, which include turbulent high-lift and viscous hypersonic flows. Adaptive mesh refinement was found to play a critical role in obtaining a robust and efficient high-order accurate flow solver. A goal-oriented error estimation technique has been developed to estimate the discretization error of simulation outputs. For high-order discretizations, it is shown that functional output error super-convergence can be obtained, provided the discretization satisfies a property known as dual consistency. The dual consistency of the DG methods developed in this work is shown via mathematical analysis and numerical experimentation. Goal-oriented error estimation is also used to drive an hp-adaptive mesh refinement strategy, where a combination of mesh or h-refinement, and order or p-enrichment, is employed based on the smoothness of the solution. The results demonstrate that the combination of goal-oriented error estimation and hp-adaptation yields superior accuracy, as well as enhanced robustness and efficiency for a variety of aerodynamic flows including flows with strong shock waves. This work demonstrates that DG discretizations can be the basis of an accurate, efficient, and robust CFD solver. Furthermore, enhancing the robustness of DG methods does not adversely impact the accuracy or efficiency of the solver for challenging and complex flow problems. In particular, when considering the computation of shocked flows, this work demonstrates that the available shock capturing techniques are sufficiently accurate and robust, particularly when used in conjunction with adaptive mesh refinement.
This work also demonstrates that robust solutions of the Reynolds Averaged Navier-Stokes (RANS) and turbulence model equations can be obtained for complex and challenging aerodynamic flows. In this context, the most robust strategy was determined to be a low-order turbulence model discretization coupled to a high-order discretization of the RANS equations. Although RANS solutions using high-order accurate discretizations of the turbulence model were obtained, the behavior of current-day RANS turbulence models discretized to high-order was found to be problematic, leading to solver robustness issues. This suggests that future work is warranted in the area of turbulence model formulation for use with high-order discretizations. Alternately, the use of Large-Eddy Simulation (LES) subgrid scale models with high-order DG methods offers the potential to leverage the high accuracy of these methods for very high fidelity turbulent simulations. This thesis has developed the algorithmic improvements that will lay the foundation for the development of a three-dimensional high-order flow solution strategy that can be used as the basis for future LES simulations.
Quantum Monte Carlo: Faster, More Reliable, And More Accurate
NASA Astrophysics Data System (ADS)
Anderson, Amos Gerald
2010-06-01
The Schrodinger Equation has been available for about 83 years, but today, we still strain to apply it accurately to molecules of interest. The difficulty is not theoretical in nature, but practical, since we are held back by a lack of sufficient computing power. Consequently, effort is applied to find acceptable approximations to facilitate real time solutions. In the meantime, computer technology has begun rapidly advancing and changing the way we think about efficient algorithms. For those who can reorganize their formulas to take advantage of these changes and thereby lift some approximations, incredible new opportunities await. Over the last decade, we've seen the emergence of a new kind of computer processor, the graphics card. Designed to accelerate computer games by optimizing processor quantity over quality, they have become of sufficient quality to be useful to some scientists. In this thesis, we explore the first known application of a graphics card to computational chemistry by rewriting our Quantum Monte Carlo software into the requisite "data parallel" formalism. We find that notwithstanding precision considerations, we are able to speed up our software by about a factor of 6. The success of a Quantum Monte Carlo calculation depends on more than just processing power. It also requires the scientist to carefully design the trial wavefunction used to guide simulated electrons. We have studied the use of Generalized Valence Bond wavefunctions to simply, and yet effectively, capture the essential static correlation in atoms and molecules. Furthermore, we have developed significantly improved two particle correlation functions, designed with both flexibility and simplicity considerations, representing an effective and reliable way to add the necessary dynamic correlation. Lastly, we present our method for stabilizing the statistical nature of the calculation, by manipulating configuration weights, thus facilitating efficient and robust calculations. Our combination of Generalized Valence Bond wavefunctions, improved correlation functions, and stabilized weighting techniques for calculations run on graphics cards, represents a new way for using Quantum Monte Carlo to study arbitrarily sized molecules.
Parametric Grid Information in the DOE Knowledge Base: Data Preparation, Storage, and Access
DOE Office of Scientific and Technical Information (OSTI.GOV)
HIPP,JAMES R.; MOORE,SUSAN G.; MYERS,STEPHEN C.
The parametric grid capability of the Knowledge Base provides an efficient, robust way to store and access interpolatable information which is needed to monitor the Comprehensive Nuclear Test Ban Treaty. To meet both the accuracy and performance requirements of operational monitoring systems, we use a new approach which combines the error estimation of kriging with the speed and robustness of Natural Neighbor Interpolation (NNI). The method involves three basic steps: data preparation (DP), data storage (DS), and data access (DA). The goal of data preparation is to process a set of raw data points to produce a sufficient basis for accurate NNI of value and error estimates in the data access step. This basis includes a set of nodes and their connectedness, collectively known as a tessellation, and the corresponding values and errors that map to each node, which we call surfaces. In many cases, the raw data point distribution is not sufficiently dense to guarantee accurate error estimates from the NNI, so the original data set must be densified using a newly developed interpolation technique known as Modified Bayesian Kriging. Once appropriate kriging parameters have been determined by variogram analysis, the optimum basis for NNI is determined in a process we call mesh refinement, which involves iterative kriging, new node insertion, and Delaunay triangle smoothing. The process terminates when an NNI basis has been calculated which fits the kriged values within a specified tolerance. In the data storage step, the tessellations and surfaces are stored in the Knowledge Base, currently in a binary flatfile format but perhaps in the future in a spatially-indexed database. Finally, in the data access step, a client application makes a request for an interpolated value, which triggers a data fetch from the Knowledge Base through the libKBI interface, a walking triangle search for the containing triangle, and finally the NNI interpolation.
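The kriging building block of the data-preparation step can be sketched as ordinary kriging with an assumed exponential variogram, returning both an estimate and the kriging variance that densification relies on; the Modified Bayesian Kriging referred to above adds machinery not shown here.

```python
import numpy as np

def gamma_exp(h, sill=1.0, rng=2.0, nugget=0.0):
    """Exponential variogram model (parameters assumed for illustration)."""
    return nugget + sill * (1.0 - np.exp(-3.0 * h / rng))

def ordinary_krige(xy, z, xy0):
    """Ordinary-kriging estimate and variance at xy0 from samples (xy, z)."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma_exp(d)
    A[n, n] = 0.0                         # Lagrange-multiplier row/column
    b = np.ones(n + 1)
    b[:n] = gamma_exp(np.linalg.norm(xy - xy0, axis=1))
    sol = np.linalg.solve(A, b)
    w, mu = sol[:n], sol[n]
    return w @ z, w @ b[:n] + mu          # estimate, kriging variance

pts = np.random.rand(30, 2) * 10.0
vals = np.sin(pts[:, 0]) + 0.1 * np.random.randn(30)
est, var = ordinary_krige(pts, vals, np.array([5.0, 5.0]))
```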
Protein 3D Structure Computed from Evolutionary Sequence Variation
Sheridan, Robert; Hopf, Thomas A.; Pagnani, Andrea; Zecchina, Riccardo; Sander, Chris
2011-01-01
The evolutionary trajectory of a protein through sequence space is constrained by its function. Collections of sequence homologs record the outcomes of millions of evolutionary experiments in which the protein evolves according to these constraints. Deciphering the evolutionary record held in these sequences and exploiting it for predictive and engineering purposes presents a formidable challenge. The potential benefit of solving this challenge is amplified by the advent of inexpensive high-throughput genomic sequencing. In this paper we ask whether we can infer evolutionary constraints from a set of sequence homologs of a protein. The challenge is to distinguish true co-evolution couplings from the noisy set of observed correlations. We address this challenge using a maximum entropy model of the protein sequence, constrained by the statistics of the multiple sequence alignment, to infer residue pair couplings. Surprisingly, we find that the strength of these inferred couplings is an excellent predictor of residue-residue proximity in folded structures. Indeed, the top-scoring residue couplings are sufficiently accurate and well-distributed to define the 3D protein fold with remarkable accuracy. We quantify this observation by computing, from sequence alone, all-atom 3D structures of fifteen test proteins from different fold classes, ranging in size from 50 to 260 residues, including a G-protein coupled receptor. These blinded inferences are de novo, i.e., they do not use homology modeling or sequence-similar fragments from known structures. The co-evolution signals provide sufficient information to determine accurate 3D protein structure to 2.7–4.8 Å Cα-RMSD error relative to the observed structure, over at least two-thirds of the protein (method called EVfold, details at http://EVfold.org). This discovery provides insight into essential interactions constraining protein evolution and will facilitate a comprehensive survey of the universe of protein structures, new strategies in protein and drug design, and the identification of functional genetic variants in normal and disease genomes. PMID:22163331
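A stripped-down sketch of the coupling inference, in the spirit of mean-field treatments of this maximum-entropy problem (the actual EVfold pipeline also reweights sequences and differs in detail): one-hot encode the alignment, invert a regularized covariance matrix, and score residue pairs by the Frobenius norm of their coupling block with average-product correction. The alignment below is a random placeholder.

```python
import numpy as np

def coupling_scores(msa, q=21, reg=0.5):
    n_seq, n_pos = msa.shape
    k = q - 1                                   # drop one state per column
    X = np.zeros((n_seq, n_pos * k))
    for a in range(k):
        X[:, a::k] = (msa == a)                 # one-hot (minus one state)
    C = np.cov(X, rowvar=False) + reg * np.eye(n_pos * k)
    J = -np.linalg.inv(C)                       # mean-field couplings
    S = np.zeros((n_pos, n_pos))
    for i in range(n_pos):
        for j in range(i + 1, n_pos):
            block = J[i*k:(i+1)*k, j*k:(j+1)*k]
            S[i, j] = S[j, i] = np.linalg.norm(block)
    apc = np.outer(S.mean(1), S.mean(0)) / S.mean()
    return S - apc                              # high scores ~ predicted contacts

msa = np.random.randint(0, 21, size=(500, 60))  # stand-in alignment (integers)
scores = coupling_scores(msa)
```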
Bennett, Gordon D.; Patten, E.P.
1962-01-01
This report describes the theory and field procedures for determining the transmissibility and storage coefficients and the original hydrostatic head of each aquifer penetrated by a multiaquifer well. The procedure involves pumping the well in such a manner that the drawdown of water level is constant while the discharges of the different aquifers are measured by means of borehole flowmeters. The theory is developed by analogy to the heat-flow problem solved by Smith. The internal discharge between aquifers after the well is completed is analyzed as the first step. Pumping at constant drawdown constitutes the second step. Transmissibility and storage coefficients are determined by a method described by Jacob and Lohman, after the original internal discharge to or from the aquifer has been compensated for in the calculations. The original hydrostatic head of each aquifer is then determined by resubstituting the transmissibility and storage coefficients into the first step of the analysis. The method was tested on a well in Chester County, Pa., but the results were not entirely satisfactory, owing to the lack of sufficiently accurate methods of flow measurement and, probably, to the effects of entrance losses in the well. The determinations of the transmissibility coefficient and static head can be accepted as having order-of-magnitude significance, but the determinations of the storage coefficient, which is highly sensitive to experimental error, must be rejected. It is felt that better results may be achieved in the future, as more reliable devices for metering the flow become available and as more is learned concerning the nature of entrance losses. If accurate data can be obtained, recently developed techniques of digital or analog computation may permit determination of the response of each aquifer in the well to any form of pumping.
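For reference, the semilog (late-time) form of the Jacob-Lohman constant-drawdown solution on which the second step rests is commonly written as below, so that 1/Q plotted against log t is a straight line whose slope yields T and whose intercept yields S (standard notation assumed: Q discharge, s_w the constant drawdown, r_w well radius):

```latex
\[
  \frac{1}{Q(t)} \;\approx\; \frac{2.303}{4\pi T s_w}\,
  \log_{10}\!\left(\frac{2.25\,T\,t}{r_w^{2}\,S}\right)
\]
```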
NASA Astrophysics Data System (ADS)
Arida, Maya Ahmad
The concept of sustainable development has existed since 1972 and over the years has become one of the most important approaches to conserving natural resources and energy. Now, with rising energy costs and increasing awareness of the effects of global warming, the development of building energy-saving methods and models has become ever more necessary for a sustainable future. According to the U.S. Energy Information Administration (EIA), buildings in the U.S. today consume 72 percent of the electricity produced and use 55 percent of U.S. natural gas. Buildings account for about 40 percent of the energy consumed in the United States, more than industry and transportation. Of this energy, heating and cooling systems use about 55 percent. If energy-use trends continue, buildings will become the largest consumer of global energy by 2025. This thesis proposes procedures and analysis techniques for building energy systems and optimization methods using time-series autoregressive artificial neural networks. The model predicts whole-building energy consumption as a function of four input variables: dry-bulb and wet-bulb outdoor air temperatures, hour of day, and type of day. The proposed model and the optimization process are tested using data collected from an existing building located in Greensboro, NC. The testing results show that the model captures the system performance very well, and the optimization method automates the process of finding the model structure that produces the most accurate prediction against the actual data. The results show that the developed model can provide results sufficiently accurate for use in various energy-efficiency and savings-estimation applications.
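A minimal sketch of the four-input autoregressive neural-network idea, using synthetic data and scikit-learn's MLPRegressor; the thesis's actual network structure, lag selection, and training setup are not reproduced here.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n = 24 * 365                                       # one synthetic year, hourly
hour = np.arange(n) % 24
tdb = 15 + 10*np.sin(2*np.pi*np.arange(n)/24) + rng.normal(0, 1, n)  # dry bulb
twb = tdb - 3 + rng.normal(0, 0.5, n)                                # wet bulb
daytype = (np.arange(n) // 24) % 7 >= 5                              # weekend flag
load = 50 + 2*tdb + 5*np.sin(2*np.pi*hour/24) + rng.normal(0, 2, n)  # fake kWh

lag = 1                                            # autoregressive term
X = np.column_stack([tdb[lag:], twb[lag:], hour[lag:], daytype[lag:], load[:-lag]])
y = load[lag:]

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:6000], y[:6000])
print("held-out R^2:", model.score(X[6000:], y[6000:]))
```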
Crowdsourcing for translational research: analysis of biomarker expression using cancer microarrays
Lawson, Jonathan; Robinson-Vyas, Rupesh J; McQuillan, Janette P; Paterson, Andy; Christie, Sarah; Kidza-Griffiths, Matthew; McDuffus, Leigh-Anne; Moutasim, Karwan A; Shaw, Emily C; Kiltie, Anne E; Howat, William J; Hanby, Andrew M; Thomas, Gareth J; Smittenaar, Peter
2017-01-01
Background: Academic pathology suffers from an acute and growing lack of workforce resource. This especially impacts on translational elements of clinical trials, which can require detailed analysis of thousands of tissue samples. We tested whether crowdsourcing – enlisting help from the public – is a sufficiently accurate method to score such samples. Methods: We developed a novel online interface to train and test lay participants on cancer detection and immunohistochemistry scoring in tissue microarrays. Lay participants initially performed cancer detection on lung cancer images stained for CD8, and we measured how extending a basic tutorial by annotated example images and feedback-based training affected cancer detection accuracy. We then applied this tutorial to additional cancer types and immunohistochemistry markers – bladder/ki67, lung/EGFR, and oesophageal/CD8 – to establish accuracy compared with experts. Using this optimised tutorial, we then tested lay participants' accuracy on immunohistochemistry scoring of lung/EGFR and bladder/p53 samples. Results: We observed that for cancer detection, annotated example images and feedback-based training both improved accuracy compared with a basic tutorial only. Using this optimised tutorial, we demonstrate highly accurate (>0.90 area under curve) detection of cancer in samples stained with nuclear, cytoplasmic and membrane cell markers. We also observed high Spearman correlations between lay participants and experts for immunohistochemistry scoring (0.91 (0.78, 0.96) and 0.97 (0.91, 0.99) for lung/EGFR and bladder/p53 samples, respectively). Conclusions: These results establish crowdsourcing as a promising method to screen large data sets for biomarkers in cancer pathology research across a range of cancers and immunohistochemical stains. PMID:27959886
Combining dynamic and ECG-gated ⁸²Rb-PET for practical implementation in the clinic.
Sayre, George A; Bacharach, Stephen L; Dae, Michael W; Seo, Youngho
2012-01-01
For many cardiac clinics, list-mode PET is impractical. Therefore, separate dynamic and ECG-gated acquisitions are needed to detect harmful stenoses, indicate affected coronary arteries, and estimate stenosis severity. However, physicians usually order gated studies only because of dose, time, and cost limitations. These gated studies are limited to detection. In an effort to remove these limitations, we developed a novel curve-fitting algorithm [incomplete data (ICD)] to accurately calculate coronary flow reserve (CFR) from a combined dynamic-ECG protocol of a length equal to a typical gated scan. We selected several retrospective dynamic studies to simulate shortened dynamic acquisitions of the combined protocol and compared (a) the accuracy of ICD and a nominal method in extrapolating the complete functional form of arterial input functions (AIFs); and (b) the accuracy of ICD and ICD-AP (ICD with a posteriori knowledge of complete-data AIFs) in predicting CFRs. According to the Akaike information criterion, AIFs predicted by ICD were more accurate than those predicted by the nominal method in 11 out of 12 studies. CFRs predicted by ICD and ICD-AP were similar to complete-data predictions (P_ICD = 0.94 and P_ICD-AP = 0.91) and had similar average errors (e_ICD = 2.82% and e_ICD-AP = 2.79%). According to a nuclear cardiologist and an expert analyst of PET data, both ICD and ICD-AP predicted CFR values with sufficient accuracy for the clinic. Therefore, by using our method, physicians in cardiac clinics would have access to the necessary amount of information to differentiate between single-vessel and triple-vessel disease for treatment decision making.
Wang, Zhengxia; Zhu, Xiaofeng; Adeli, Ehsan; Zhu, Yingying; Nie, Feiping; Munsell, Brent
2018-01-01
Graph-based transductive learning (GTL) is a powerful machine learning technique that is used when sufficient training data is not available. In particular, conventional GTL approaches first construct a fixed inter-subject relation graph that is based on similarities in voxel intensity values in the feature domain, which can then be used to propagate the known phenotype data (i.e., clinical scores and labels) from the training data to the testing data in the label domain. However, this type of graph is exclusively learned in the feature domain, and primarily due to outliers in the observed features, may not be optimal for label propagation in the label domain. To address this limitation, a progressive GTL (pGTL) method is proposed that gradually finds an intrinsic data representation that more accurately aligns imaging features with the phenotype data. In general, optimal feature-to-phenotype alignment is achieved using an iterative approach that: (1) refines inter-subject relationships observed in the feature domain by using the learned intrinsic data representation in the label domain, (2) updates the intrinsic data representation from the refined inter-subject relationships, and (3) verifies the intrinsic data representation on the training data to guarantee an optimal classification when applied to testing data. Additionally, the iterative approach is extended to multi-modal imaging data to further improve pGTL classification accuracy. Using Alzheimer’s disease and Parkinson’s disease study data, the classification accuracy of the proposed pGTL method is compared to several state-of-the-art classification methods, and the results show pGTL can more accurately identify subjects, even at different progression stages, in these two study data sets. PMID:28551556
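For readers unfamiliar with the conventional GTL baseline that pGTL improves on, the sketch below builds a kNN affinity graph in the feature domain and propagates training labels to test subjects. It implements standard graph-based label propagation only; pGTL's iterative refinement of the graph via the learned intrinsic representation in the label domain, and its multi-modal extension, are not implemented here, and all parameter choices are illustrative.

```python
import numpy as np

def label_propagation(X, y_train, train_idx, k=10, alpha=0.9, n_iter=50):
    """Conventional GTL baseline: kNN affinity graph built from features,
    then iterative propagation of known labels to unlabeled subjects."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / np.median(d2))
    # keep only the k nearest neighbours of each node, then symmetrise
    keep = np.argsort(d2, axis=1)[:, 1:k + 1]
    M = np.zeros_like(W)
    rows = np.repeat(np.arange(n), k)
    M[rows, keep.ravel()] = W[rows, keep.ravel()]
    W = np.maximum(M, M.T)
    Dinv = np.diag(1.0 / np.sqrt(W.sum(1) + 1e-12))
    S = Dinv @ W @ Dinv                       # normalized graph operator
    classes = np.unique(y_train)
    Y = np.zeros((n, classes.size))           # one-hot seeds, zero for test
    Y[train_idx, np.searchsorted(classes, y_train)] = 1.0
    F = Y.copy()
    for _ in range(n_iter):
        F = alpha * S @ F + (1 - alpha) * Y
    return classes[F.argmax(1)]
```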
Allen, Vivian; Paxton, Heather; Hutchinson, John R
2009-09-01
Inertial properties of animal bodies and segments are critical input parameters for biomechanical analysis of standing and moving, and thus are important for paleobiological inquiries into the broader behaviors, ecology and evolution of extinct taxa such as dinosaurs. But how accurately can these be estimated? Computational modeling was used to estimate the inertial properties including mass, density, and center of mass (COM) for extant crocodiles (adult and juvenile Crocodylus johnstoni) and birds (Gallus gallus; junglefowl and broiler chickens), to identify the chief sources of variation and methodological errors, and their significance. High-resolution computed tomography scans were segmented into 3D objects and imported into inertial property estimation software that allowed for the examination of variable body segment densities (e.g., air spaces such as lungs, and deformable body outlines). Considerable biological variation of inertial properties was found within groups due to ontogenetic changes as well as evolutionary changes between chicken groups. COM positions shift in variable directions during ontogeny in different groups. Our method was repeatable and the resolution was sufficient for accurate estimations of mass and density in particular. However, we also found considerable potential methodological errors for COM related to (1) assumed body segment orientation, (2) what frames of reference are used to normalize COM for size-independent comparisons among animals, and (3) assumptions about tail shape. Methods and assumptions are suggested to minimize these errors in the future and thereby improve estimation of inertial properties for extant and extinct animals. In the best cases, 10%-15% errors in these estimates are unavoidable, but particularly for extinct taxa errors closer to 50% should be expected, and therefore, cautiously investigated. Nonetheless in the best cases these methods allow rigorous estimation of inertial properties. (c) 2009 Wiley-Liss, Inc.
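The basic computation behind such estimates, summing segmented voxel densities into a mass and a density-weighted centroid, is compact. Below is a minimal sketch assuming a voxelised density grid derived from segmented CT; the function name and the cubic-voxel assumption are illustrative, and the study's software additionally handles deformable outlines and variable air spaces, which are omitted.

```python
import numpy as np

def inertial_properties(density, voxel_size_m, origin=(0.0, 0.0, 0.0)):
    """Mass, mean density and centre of mass from a segmented density grid
    (kg/m^3 per voxel, zero outside the body), assuming cubic voxels."""
    vol = voxel_size_m ** 3
    mask = density > 0
    rho = density[mask]                  # same C-order as argwhere below
    idx = np.argwhere(mask)
    mass = rho.sum() * vol
    centres = np.asarray(origin) + (idx + 0.5) * voxel_size_m
    com = (centres * rho[:, None]).sum(axis=0) / rho.sum()
    return mass, rho.mean(), com
```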
NASA Astrophysics Data System (ADS)
Alshaery, Aisha; Ebaid, Abdelhalim
2017-11-01
Kepler's equation is one of the fundamental equations in orbital mechanics. It is a transcendental equation in terms of the eccentric anomaly of a planet which orbits the Sun. Determining the position of a planet in its orbit around the Sun at a given time depends upon the solution of Kepler's equation, which we will solve in this paper by the Adomian decomposition method (ADM). Several properties of the periodicity of the obtained approximate solutions have been proved in lemmas. Our calculations demonstrated a rapid convergence of the obtained approximate solutions which are displayed in tables and graphs. Also, it has been shown in this paper that only a few terms of the Adomian decomposition series are sufficient to achieve highly accurate numerical results for any number of revolutions of the Earth around the Sun as a consequence of the periodicity property. Numerically, the four-term approximate solution coincides with the Bessel-Fourier series solution in the literature up to seven decimal places at some values of the time parameter and nine decimal places at other values. Moreover, the absolute error approaches zero using the nine term approximate Adomian solution. In addition, the approximate Adomian solutions for the eccentric anomaly have been used to show the convergence of the approximate radial distances of the Earth from the Sun for any number of revolutions. The minimal distance (perihelion) and maximal distance (aphelion) approach 147 million kilometers and 152.505 million kilometers, respectively, and these coincide with the well known results in astronomical physics. Therefore, the Adomian decomposition method is validated as an effective tool to solve Kepler's equation for elliptical orbits.
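To make the construction concrete, the sketch below implements a four-term Adomian series for Kepler's equation E = M + e sin E, using the standard Adomian polynomials for the nonlinearity N(E) = sin E, and checks it against a Newton iteration. It illustrates the kind of series the paper studies; the paper's exact formulation and number of terms may differ.

```python
import numpy as np

def kepler_adomian4(M, e):
    """Four-term Adomian series for E = M + e*sin(E), with E0 = M and
    E_{n+1} = e*A_n, where A0 = sin(E0), A1 = E1*cos(E0),
    A2 = E2*cos(E0) - (E1**2/2)*sin(E0)."""
    E0 = M
    E1 = e * np.sin(E0)
    E2 = e * E1 * np.cos(E0)
    E3 = e * (E2 * np.cos(E0) - 0.5 * E1 ** 2 * np.sin(E0))
    return E0 + E1 + E2 + E3

# sanity check against Newton's method at Earth's eccentricity
M, e = 1.0, 0.0167
E = M
for _ in range(8):
    E -= (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
print(kepler_adomian4(M, e), E)   # agree closely for small eccentricity
```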
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zerouali, K; Aubry, J; Doucet, R
2016-06-15
Purpose: To implement the new EBT-XD Gafchromic films for accurate dosimetric and geometric validation of stereotactic radiosurgery (SRS) and stereotactic body radiation therapy (SBRT) CyberKnife (CK) patient-specific QA. Methods: Film calibration was performed using a triple-channel film analysis on an Epson 10000XL scanner. Calibration films were irradiated using a Varian Clinac 21EX flattened beam (0 to 20 Gy) to ensure sufficient dose homogeneity. Films were scanned at a resolution of 0.3 mm, 24 hours post irradiation, following a well-defined protocol. A set of 12 QA measurements was performed for several types of CK plans: trigeminal neuralgia, brain metastasis, prostate and lung tumors. A custom-made insert for the CK head phantom was manufactured to yield an accurate measured-to-calculated dose registration. When the high-dose region was large enough, absolute dose was also measured with an ionization chamber. Dose calculation was performed using the MultiPlan ray-tracing algorithm for all cases, since the phantom is mostly made from near water-equivalent plastic. Results: Good agreement (<2%) was found between the dose to the chamber and the film, when a chamber measurement was possible. The average dose difference and standard deviation between film measurements and TPS calculations were respectively 1.75% and 3%. The geometric accuracy was estimated to be <1 mm, combining robot positioning uncertainty and film registration to calculated dose. Conclusion: Patient-specific QA measurements using EBT-XD films yielded a full 2D dose plane with high spatial resolution and acceptable dose accuracy. This method is particularly promising for trigeminal neuralgia plan QA, where the positioning of the spatial dose distribution is equally or more important than the absolute delivered dose to achieve clinical goals.
Accurate determinations of one-bond 13C-13C couplings in 13C-labeled carbohydrates
NASA Astrophysics Data System (ADS)
Azurmendi, Hugo F.; Freedberg, Darón I.
2013-03-01
Carbon plays a central role in the molecular architecture of carbohydrates, yet the availability of accurate methods for 1DCC determination has not been sufficiently explored, despite the importance that such data could play in structural studies of oligo- and polysaccharides. Existing methods require fitting intensity ratios of cross- to diagonal-peaks as a function of the constant time (CT) in CT-COSY experiments, while other methods utilize measurement of peak separation. The former strategies suffer from complications due to peak overlap, primarily in regions close to the diagonal, while the latter are negatively impacted by the common occurrence of strong coupling in sugars, which requires a reliable assessment of its influence in the context of RDC determination. We detail a 13C-13C CT-COSY method that combines variation of the CT with diagonal filtering to yield 1JCC and RDCs. The strategy, which relies solely on cross-peak intensity modulation, is inspired by the cross-peak nulling method used for JHH determinations, but adapted and extended to applications where, as in sugars, large one-bond 13C-13C couplings coexist with relatively small long-range couplings. Because diagonal peaks are not utilized, overlap problems are greatly alleviated. Thus, one-bond couplings can be determined from different cross-peaks as either active or passive couplings. This results in increased accuracy when more than one determination is available, and in more opportunities to measure a specific coupling in the presence of severe overlap. In addition, we evaluate the influence of strong couplings on the determination of RDCs by computer simulations. We show that individual scalar couplings are notably affected by the presence of strong couplings but, at least for the simple cases studied, the obtained RDC values for use in structural calculations were not, because the errors introduced by strong couplings in the isotropic and oriented phases are very similar and therefore cancel when calculating the difference to determine 1DCC values.
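The underlying extraction step, fitting the CT-dependent modulation of a cross-peak intensity to recover the active coupling, can be sketched as below for a simplified case with a single passive coupling and no relaxation decay; the sampling schedule, coupling values and noise level are illustrative, not the paper's experimental settings.

```python
import numpy as np
from scipy.optimize import curve_fit

def cross_peak(ct, amp, j_active, j_passive):
    # sine modulation by the active coupling, cosine by one passive coupling
    return amp * np.sin(np.pi * j_active * ct) * np.cos(np.pi * j_passive * ct)

rng = np.random.default_rng(0)
ct = np.linspace(0.005, 0.060, 12)          # s, CT increments (illustrative)
data = cross_peak(ct, 1.0, 41.0, 4.5) + rng.normal(0, 0.005, ct.size)
popt, _ = curve_fit(cross_peak, ct, data, p0=[1.0, 43.0, 4.0])
print("1JCC ~ %.1f Hz" % popt[1])
```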
Measurement of Body Composition: is there a Gold Standard?
Branski, Ludwik K; Norbury, William B; Herndon, David N; Chinkes, David L; Cochran, Amalia; Suman, Oscar; Benjamin, Deb; Jeschke, Marc G
2015-01-01
Background Maintaining lean body mass (LBM) after a severe burn is an essential goal of modern burn treatment. An accurate determination of LBM is necessary for short- and long-term therapeutic decisions. The aim of this study was to compare 2 measurement methods for body composition, whole-body potassium counting (K count) and dual x-ray absorptiometry (DEXA), in a large prospective clinical trial in severely burned pediatric patients. Methods Two hundred seventy-nine patients admitted with burns covering 40% of total body surface area (TBSA) were enrolled in the study. Patients enrolled were controls or received long-term treatment with recombinant human growth hormone (rhGH). Near-simultaneous measurements of LBM with DEXA and fat-free mass (FFM) with K count were performed at hospital discharge and at 6, 9, 12, 18, and 24 months post injury. Results were correlated using Pearson's regression analysis. Agreement between the 2 methods was analyzed with the Bland-Altman method. Results Age, gender distribution, weight, burn size, and admission time from injury were not significantly different between control and treatment groups. rhGH and control patients at all time points postburn showed a good correlation between LBM and FFM measurements (R² between 0.90 and 0.95). Bland-Altman analysis revealed that the mean bias and 95% limits of agreement depended only on patient weight and not on treatment or time postburn. The 95% limits ranged from 0.1 ± 2.9 kg for LBM or FFM in 7- to 18-kg patients to 16.3 ± 17.8 kg for LBM or FFM in patients >60 kg. Conclusions DEXA can provide a sufficiently accurate determination of LBM and changes in body composition, but a correction factor must be included for older children and adolescents with more LBM. DEXA scans are easier, cheaper, and less stressful for the patient, and this method should be used rather than the K count. PMID:19884353
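For reference, the Bland-Altman statistics used here (mean bias and 95% limits of agreement) are computed as in the minimal sketch below, assuming paired DEXA LBM and K-count FFM measurements; variable names are illustrative.

```python
import numpy as np

def bland_altman(lbm_dexa, ffm_kcount):
    """Mean bias and 95% limits of agreement between paired measurements."""
    a = np.asarray(lbm_dexa, float)
    b = np.asarray(ffm_kcount, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```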
Filter accuracy for the Lorenz 96 model: Fixed versus adaptive observation operators
Stuart, Andrew M.; Shukla, Abhishek; Sanz-Alonso, Daniel; ...
2016-02-23
In the context of filtering chaotic dynamical systems it is well-known that partial observations, if sufficiently informative, can be used to control the inherent uncertainty due to chaos. The purpose of this paper is to investigate, both theoretically and numerically, conditions on the observations of chaotic systems under which they can be accurately filtered. In particular, we highlight the advantage of adaptive observation operators over fixed ones. The Lorenz ’96 model is used to exemplify our findings. Here, we consider discrete-time and continuous-time observations in our theoretical developments. We prove that, for a fixed observation operator, the 3DVAR filter can recover the system state within a neighbourhood determined by the size of the observational noise. It is required that a sufficiently large proportion of the state vector is observed, and an explicit form for such a sufficient fixed observation operator is given. Numerical experiments, where the data is incorporated by use of the 3DVAR and extended Kalman filters, suggest that less informative fixed operators than given by our theory can still lead to accurate signal reconstruction. Adaptive observation operators are then studied numerically; we show that, for carefully chosen adaptive observation operators, the proportion of the state vector that needs to be observed is drastically smaller than with a fixed observation operator. Indeed, we show that the number of state coordinates that need to be observed may even be significantly smaller than the total number of positive Lyapunov exponents of the underlying system.
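As a concrete illustration of fixed-operator 3DVAR filtering on this model, the sketch below integrates Lorenz '96 with RK4, observes a fixed subset of coordinates with additive noise, and applies a constant-gain analysis step. It is a toy reconstruction, not the authors' code: the gain form, the choice of observing two of every three coordinates, and all parameter values are assumptions for illustration.

```python
import numpy as np

F, N = 8.0, 40                      # standard Lorenz '96 forcing and dimension

def l96_rhs(x):
    # dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F, indices cyclic
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4_step(x, dt):
    k1 = l96_rhs(x); k2 = l96_rhs(x + 0.5 * dt * k1)
    k3 = l96_rhs(x + 0.5 * dt * k2); k4 = l96_rhs(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

obs_idx = np.array([i for i in range(N) if i % 3 != 0])  # observe 2 of every 3
H = np.eye(N)[obs_idx]                                   # fixed observation operator

def analysis_3dvar(x_f, y, eta=0.1):
    # constant-gain 3DVAR: nudge observed coordinates toward the data
    return x_f + (1.0 - eta) * H.T @ (y - H @ x_f)

rng = np.random.default_rng(1)
dt, sigma = 0.05, 0.1
truth = F + rng.standard_normal(N)
for _ in range(500):                  # spin up onto the attractor
    truth = rk4_step(truth, dt)
est = truth + rng.standard_normal(N)  # poor initial estimate
rmse = []
for _ in range(2000):
    truth = rk4_step(truth, dt)
    est = rk4_step(est, dt)
    y = H @ truth + sigma * rng.standard_normal(obs_idx.size)
    est = analysis_3dvar(est, y)
    rmse.append(np.linalg.norm(est - truth) / np.sqrt(N))
print("mean analysis RMSE after spin-up:", np.mean(rmse[200:]))
```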
Fixed-Wing Micro Aerial Vehicle for Accurate Corridor Mapping
NASA Astrophysics Data System (ADS)
Rehak, M.; Skaloud, J.
2015-08-01
In this study we present a Micro Aerial Vehicle (MAV) equipped with precise position and attitude sensors that, together with a pre-calibrated camera, enables accurate corridor mapping. The design of the platform is based on widely available model components, into which we integrate an open-source autopilot, a customized mass-market camera and navigation sensors. We adapt the concepts of system calibration from larger mapping platforms to the MAV and evaluate them practically for their achievable accuracy. We present case studies for accurate mapping without ground control points: first for a block configuration, later for a narrow corridor. We evaluate the mapping accuracy with respect to checkpoints and a digital terrain model. We show that while it is possible to achieve pixel-level (3-5 cm) mapping accuracy in both cases, precise aerial position control is sufficient for the block configuration, whereas precise position and attitude control is required for corridor mapping.
FASTSIM2: a second-order accurate frictional rolling contact algorithm
NASA Astrophysics Data System (ADS)
Vollebregt, E. A. H.; Wilders, P.
2011-01-01
In this paper we consider the frictional (tangential) steady rolling contact problem. We confine ourselves to the simplified theory, instead of using full elastostatic theory, in order to be able to compute results fast, as needed for on-line application in vehicle system dynamics simulation packages. The FASTSIM algorithm is the leading technology in this field and is employed in all dominant railway vehicle system dynamics packages (VSD) in the world. The main contribution of this paper is a new version, "FASTSIM2", of the FASTSIM algorithm, which is second-order accurate. This is relevant for VSD because, with the new algorithm, 16 times fewer grid points are required for sufficiently accurate computations of the contact forces. The approach is based on new insights into the characteristics of the rolling contact problem when using the simplified theory, and on taking precise care of the contact conditions in the numerical integration scheme employed.
NASA Astrophysics Data System (ADS)
Doha, E. H.; Bhrawy, A. H.; Abdelkawy, M. A.; Van Gorder, Robert A.
2014-03-01
A Jacobi-Gauss-Lobatto collocation (J-GL-C) method, used in combination with the implicit Runge-Kutta method of fourth order, is proposed as a numerical algorithm for the approximation of solutions to nonlinear Schrödinger equations (NLSE) with initial-boundary data in 1+1 dimensions. Our procedure is implemented in two successive steps. In the first one, the J-GL-C is employed for approximating the functional dependence on the spatial variable, using (N-1) nodes of the Jacobi-Gauss-Lobatto interpolation which depends upon two general Jacobi parameters. The resulting equations together with the two-point boundary conditions induce a system of 2(N-1) first-order ordinary differential equations (ODEs) in time. In the second step, the implicit Runge-Kutta method of fourth order is applied to solve this temporal system. The proposed J-GL-C method, used in combination with the implicit Runge-Kutta method of fourth order, is employed to obtain highly accurate numerical approximations to four types of NLSE, including the attractive and repulsive NLSE and a Gross-Pitaevskii equation with space-periodic potential. The numerical results obtained by this algorithm have been compared with various exact solutions in order to demonstrate the accuracy and efficiency of the proposed method. Indeed, for relatively few nodes used, the absolute error in our numerical solutions is sufficiently small.
Gradient field of undersea sound speed structure extracted from the GNSS-A oceanography
NASA Astrophysics Data System (ADS)
Yokota, Yusuke; Ishikawa, Tadashi; Watanabe, Shun-ichi
2018-06-01
Since the beginning of the twenty-first century, the Global Navigation Satellite System-Acoustic ranging (GNSS-A) technique has detected geodetic events such as co- and postseismic effects following the 2011 Tohoku-oki earthquake and slip-deficit rate distributions along the Nankai Trough subduction zone. Although these are extremely important discoveries in geodesy and seismology, more accurate observations that can capture temporal and spatial changes are required for future earthquake disaster prevention. In order to improve the accuracy of the GNSS-A technique, it is necessary to understand disturbances in undersea sound speed structures, which are major error sources. In particular, detailed temporal and spatial variations are difficult to observe accurately, and their effect was not sufficiently extracted in previous studies. In the present paper, we reconstruct an inversion scheme for extracting this effect from GNSS-A data and experimentally apply the scheme to seafloor sites around the Kuroshio. The extracted gradient effects are believed to represent not only a broad sound speed structure but also a more detailed structure generated by unsteady disturbance. The accuracy of the seafloor positioning was also improved by this new method. The obtained results demonstrate the feasibility of using the GNSS-A technique to detect seafloor crustal deformation for oceanography research.
Baiardi, A.; Paoloni, L.; Barone, V.; Zakrzewski, V.G.; Ortiz, J.V.
2017-01-01
The analysis of photoelectron spectra is usually facilitated by quantum mechanical simulations. Due to the recent improvement of experimental techniques, the resolution of experimental spectra is rapidly increasing, and the inclusion of vibrational effects is usually mandatory to obtain a reliable reproduction of the spectra. With the aim of defining a robust computational protocol, a general time-independent formulation to compute different kinds of vibrationally-resolved electronic spectra has been generalized to also support photoelectron spectroscopy. The electronic structure data underlying the simulation are computed using different electron propagator approaches. In addition to the more standard approaches, a new and robust implementation of the second-order self-energy approximation of the electron propagator based on a transition operator reference (TOEP2) is presented. To validate our implementation, a series of molecules has been used as test cases. The results of the simulations show that, for ultraviolet photoionization spectra, the more accurate non-diagonal approaches are needed to obtain a reliable reproduction of vertical ionization energies, but diagonal approaches are sufficient for energy gradients and pole strengths. For X-ray photoelectron spectroscopy, the TOEP2 approach, besides being more efficient, is also the most accurate in the reproduction of both vertical ionization energies and vibrationally-resolved bandshapes. PMID:28521087
Diffusion kurtosis imaging can efficiently assess the glioma grade and cellular proliferation.
Jiang, Rifeng; Jiang, Jingjing; Zhao, Lingyun; Zhang, Jiaxuan; Zhang, Shun; Yao, Yihao; Yang, Shiqi; Shi, Jingjing; Shen, Nanxi; Su, Changliang; Zhang, Ju; Zhu, Wenzhen
2015-12-08
Conventional diffusion imaging techniques are not sufficiently accurate for evaluating glioma grade and cellular proliferation, which are critical for guiding glioma treatment. Diffusion kurtosis imaging (DKI), an advanced non-Gaussian diffusion imaging technique, has shown potential in grading glioma; however, its applications in this tumor have not been fully elucidated. In this study, DKI and diffusion-weighted imaging (DWI) were performed on 74 consecutive patients with histopathologically confirmed glioma. The kurtosis and conventional diffusion metric values of the tumor were semi-automatically obtained. The relationships of these metrics with the glioma grade and Ki-67 expression were evaluated. The diagnostic efficiency of these metrics in grading was further compared. It was demonstrated that, compared with the conventional diffusion metrics, the kurtosis metrics were more promising imaging markers in distinguishing high-grade from low-grade gliomas and in distinguishing among grade II, III and IV gliomas; the kurtosis metrics also showed great potential in the prediction of Ki-67 expression. To the best of our knowledge, we are the first to reveal the ability of DKI to assess the cellular proliferation of gliomas, and to employ a semi-automatic method for the accurate measurement of gliomas. These results could have a significant impact on the diagnosis and subsequent therapy of glioma.
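The kurtosis metrics come from fitting the standard DKI signal representation, S(b) = S0 exp(-bD + b²D²K/6), to multi-b-value data. Below is a minimal single-direction sketch with illustrative b-values and parameters; in practice DKI estimation fits the full diffusion and kurtosis tensors over many gradient directions.

```python
import numpy as np
from scipy.optimize import curve_fit

def dki_signal(b, s0, D, K):
    # S(b) = S0 * exp(-b*D + b^2 * D^2 * K / 6); D in mm^2/s, b in s/mm^2
    return s0 * np.exp(-b * D + (b ** 2) * (D ** 2) * K / 6.0)

rng = np.random.default_rng(0)
b = np.array([0, 500, 1000, 1500, 2000, 2500], float)   # s/mm^2
S = dki_signal(b, 1.0, 1.1e-3, 0.9) * (1 + rng.normal(0, 0.01, b.size))
popt, _ = curve_fit(dki_signal, b, S, p0=[1.0, 1.0e-3, 1.0])
print("D = %.2e mm^2/s, K = %.2f" % (popt[1], popt[2]))
```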
Improved prediction of antibody VL–VH orientation
Marze, Nicholas A.; Lyskov, Sergey; Gray, Jeffrey J.
2016-01-01
Antibodies are important immune molecules with high commercial value and therapeutic interest because of their ability to bind diverse antigens. Computational prediction of antibody structure can quickly reveal valuable information about the nature of these antigen-binding interactions, but only if the models are of sufficient quality. To achieve high model quality during complementarity-determining region (CDR) structural prediction, one must account for the VL–VH orientation. We developed a novel four-metric VL–VH orientation coordinate frame. Additionally, we extended the CDR grafting protocol in RosettaAntibody with a new method that diversifies VL–VH orientation by using 10 VL–VH orientation templates rather than a single one. We tested the multiple-template grafting protocol on two datasets of known antibody crystal structures. During the template-grafting phase, the new protocol improved the fraction of accurate VL–VH orientation predictions from only 26% (12/46) to 72% (33/46) of targets. After the full RosettaAntibody protocol, including CDR H3 remodeling and VL–VH re-orientation, the new protocol produced more candidate structures with accurate VL–VH orientation than the standard protocol in 43/46 targets (93%). The improved ability to predict VL–VH orientation will bolster predictions of other parts of the paratope, including the conformation of CDR H3, a grand challenge of antibody homology modeling. PMID:27276984
NASA Astrophysics Data System (ADS)
Liu, Yang; Pu, Huangsheng; Zhang, Xi; Li, Baojuan; Liang, Zhengrong; Lu, Hongbing
2017-03-01
Arterial spin labeling (ASL) provides a noninvasive measurement of cerebral blood flow (CBF). Due to its relatively low spatial resolution, the accuracy of CBF measurement is affected by the partial volume (PV) effect. To obtain accurate CBF estimates, the contribution of each tissue type within each voxel is needed. In current ASL studies, this is generally obtained by registering the ASL data to a structural image. This approach yields the probability of each tissue type inside each voxel, but it also introduces errors, including registration-algorithm error and imaging errors inherent in the ASL and structural scans. Therefore, estimating the tissue mixture directly from ASL data is greatly needed. Under the assumptions that the ASL signal follows a Gaussian distribution and that the tissue types are independent, a maximum a posteriori expectation-maximization (MAP-EM) approach was formulated to estimate the contribution of each tissue type to the observed perfusion signal at each voxel. Considering the sensitivity of MAP-EM to initialization, an approximately accurate initialization was obtained using a 3D fuzzy c-means method. Our preliminary results demonstrate that the GM and WM patterns across the perfusion image can be sufficiently visualized by the voxel-wise tissue mixtures, which may be promising for the diagnosis of various brain diseases.
NASA Astrophysics Data System (ADS)
Altschuler, Bruce R.; Monson, Keith L.
1998-03-01
Representation of crime scenes as virtual reality 3D computer displays promises to become a useful and important tool for law enforcement evaluation and analysis, forensic identification and pathological study and archival presentation during court proceedings. Use of these methods for assessment of evidentiary materials demands complete accuracy of reproduction of the original scene, both in data collection and in its eventual virtual reality representation. The recording of spatially accurate information as soon as possible after first arrival of law enforcement personnel is advantageous for unstable or hazardous crime scenes and reduces the possibility that either inadvertent measurement error or deliberate falsification may occur or be alleged concerning processing of a scene. Detailed measurements and multimedia archiving of critical surface topographical details in a calibrated, uniform, consistent and standardized quantitative 3D coordinate method are needed. These methods would afford professional personnel in initial contact with a crime scene the means for remote, non-contacting, immediate, thorough and unequivocal documentation of the contents of the scene. Measurements of the relative and absolute global positions of objects and victims, and their dispositions within the scene before their relocation and detailed examination, could be made. Resolution must be sufficient to map both small and large objects. Equipment must be able to map regions at varied resolution as collected from different perspectives. Progress is presented in devising methods for collecting and archiving 3D spatial numerical data from crime scenes, sufficient for law enforcement needs, by remote laser structured light and video imagery. Two types of simulation studies were done. One study evaluated the potential of 3D topographic mapping and 3D telepresence using a robotic platform for explosive ordnance disassembly. The second study involved using the laser mapping system on a fixed optical bench with simulated crime scene models of people and furniture to assess feasibility, requirements and utility of such a system for crime scene documentation and analysis.
Real-time dynamics of matrix quantum mechanics beyond the classical approximation
NASA Astrophysics Data System (ADS)
Buividovich, Pavel; Hanada, Masanori; Schäfer, Andreas
2018-03-01
We describe a numerical method which allows one to go beyond the classical approximation for the real-time dynamics of many-body systems by approximating the many-body Wigner function by the most general Gaussian function with time-dependent mean and dispersion. Using the simple example of a classically chaotic system with two degrees of freedom, we demonstrate that this Gaussian state approximation is accurate for significantly smaller field strengths and longer times than the classical one. Applying this approximation to matrix quantum mechanics, we demonstrate that the quantum Lyapunov exponents are in general smaller than their classical counterparts, and even seem to vanish below some temperature. This behavior resembles the finite-temperature phase transition which was found for this system in Monte Carlo simulations, and ensures that the system does not violate the Maldacena-Shenker-Stanford bound λ_L < 2πT, which inevitably happens for classical dynamics at sufficiently small temperatures.
Statistics of optimal information flow in ensembles of regulatory motifs
NASA Astrophysics Data System (ADS)
Crisanti, Andrea; De Martino, Andrea; Fiorentino, Jonathan
2018-02-01
Genetic regulatory circuits universally cope with different sources of noise that limit their ability to coordinate input and output signals. In many cases, optimal regulatory performance can be thought to correspond to configurations of variables and parameters that maximize the mutual information between inputs and outputs. Since the mid-2000s, such optima have been well characterized in several biologically relevant cases. Here we use methods of statistical field theory to calculate the statistics of the maximal mutual information (the "capacity") achievable by tuning the input variable only in an ensemble of regulatory motifs, such that a single controller regulates N targets. Assuming (i) sufficiently large N , (ii) quenched random kinetic parameters, and (iii) small noise affecting the input-output channels, we can accurately reproduce numerical simulations both for the mean capacity and for the whole distribution. Our results provide insight into the inherent variability in effectiveness occurring in regulatory systems with heterogeneous kinetic parameters.
Model-Invariant Hybrid Computations of Separated Flows for RCA Standard Test Cases
NASA Technical Reports Server (NTRS)
Woodruff, Stephen
2016-01-01
NASA's Revolutionary Computational Aerosciences (RCA) subproject has identified several smooth-body separated flows as standard test cases to emphasize the challenge these flows present for computational methods and their importance to the aerospace community. Results of computations of two of these test cases, the NASA hump and the FAITH experiment, are presented. The computations were performed with the model-invariant hybrid LES-RANS formulation, implemented in the NASA code VULCAN-CFD. The model-invariant formulation employs gradual LES-RANS transitions and compensation for model variation to provide more accurate and efficient hybrid computations. Comparisons revealed that the LES-RANS transitions employed in these computations were sufficiently gradual that the compensating terms were unnecessary. Agreement with experiment was achieved only after reducing the turbulent viscosity to mitigate the effect of numerical dissipation. The stream-wise evolution of peak Reynolds shear stress was employed as a measure of turbulence dynamics in separated flows useful for evaluating computations.
Mini 3D for shallow gas reconnaissance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vallieres, T. des; Enns, D.; Kuehn, H.
1996-12-31
The Mini 3D project was undertaken by TOTAL and ELF with the support of CEPM (Comite d'Etudes Petrolieres et Marines) to define an economical method of obtaining 3D seismic HR data for shallow gas assessment. An experimental 3D survey was carried out with classical site survey techniques in the North Sea. From these data, 19 simulations were produced to compare different acquisition geometries, ranging from dual 600 m long cables to a single receiver. Results show that short offset, low fold and very simple streamer positioning are sufficient to give a reliable 3D image of gas-charged bodies. The 3D data allow a much more accurate risk delineation than 2D HR data. Moreover, on financial grounds, Mini-3D is comparable in cost to a classical HR 2D survey. In view of these results, such HR 3D should now be the standard for shallow gas surveying.
Osawa, Takeshi; Okawa, Shigenori; Kurokawa, Shunji; Ando, Shinichiro
2016-12-01
In this study, we propose a method for estimating the risk of agricultural damage caused by an invasive species when species-specific information is lacking. We defined the "risk" as the product of the invasion probability and the area of crop production potentially damaged. As a case study, we estimated the risk posed by an invasive weed, Sicyos angulatus, based on simple cellular simulations and governmental data on the area of crop that could potentially be damaged in Miyagi Prefecture, Japan. Simulation results reproduced the current distribution range with accuracy sufficient for practical purposes. Using these results and records of crop areas, we present risk maps for S. angulatus in agricultural fields. Managers will be able to use these maps to rapidly establish a management plan with minimal cost. Our approach will be valuable for establishing a management plan before or during the early stages of invasion.
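The risk definition itself is a cell-wise product, as the short sketch below shows with hypothetical grids; in the study, the invasion probabilities come from the cellular spread simulation and the crop areas from governmental records.

```python
import numpy as np

rng = np.random.default_rng(0)
invasion_prob = rng.uniform(0.0, 1.0, (50, 50))  # from a cellular spread model
crop_area_ha = rng.uniform(0.0, 20.0, (50, 50))  # crop records on the same grid

risk = invasion_prob * crop_area_ha              # expected damaged area per cell
hotspots = np.argwhere(risk > np.percentile(risk, 95))  # cells to manage first
```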
Sugisaki, Kenji; Yamamoto, Satoru; Nakazawa, Shigeaki; Toyota, Kazuo; Sato, Kazunobu; Shiomi, Daisuke; Takui, Takeji
2016-08-18
Quantum computers can efficiently perform full configuration interaction (FCI) calculations of atoms and molecules by using the quantum phase estimation (QPE) algorithm. Because the success probability of the QPE depends on the overlap between approximate and exact wave functions, efficient methods to prepare initial guess wave functions accurate enough to have sufficiently large overlap with the exact ones are highly desired. Here, we propose a quantum algorithm to construct the wave function consisting of one configuration state function, which is suitable as the initial guess wave function in QPE-based FCI calculations of open-shell molecules, based on the addition theorem of angular momentum. The proposed quantum algorithm enables us to prepare a wave function consisting of an exponential number of Slater determinants using only a polynomial number of quantum operations.
NASA Technical Reports Server (NTRS)
Tischler, M. B.; Barlow, J. B.
1980-01-01
The properties of the flat spin mode of a general aviation configuration have been studied through analysis of rotary balance data, numerical simulation, and analytical study of the equilibrium state. The equilibrium state is predicted well from rotary balance data. The variations of yawing moment and pitching moment as functions of sideslip have been shown to be of great importance in obtaining accurate modeling. These dependencies are not presently available with sufficient accuracy from previous tests or theories. The stability of the flat spin mode has been examined extensively using numerical linearization, classical perturbation methods, and reduced order modeling. The stability exhibited by the time histories and the eigenvalue analyses is shown to be strongly dependent on certain static cross derivatives and more so on the dynamic derivatives. Explicit stability criteria are obtained from the reduced order models.
Modelling and tuning for a time-delayed vibration absorber with friction
NASA Astrophysics Data System (ADS)
Zhang, Xiaoxu; Xu, Jian; Ji, Jinchen
2018-06-01
This paper presents an integrated analytical and experimental study to the modelling and tuning of a time-delayed vibration absorber (TDVA) with friction. In system modelling, this paper firstly applies the method of averaging to obtain the frequency response function (FRF), and then uses the derived FRF to evaluate the fitness of different friction models. After the determination of the system model, this paper employs the obtained FRF to evaluate the vibration absorption performance with respect to tunable parameters. A significant feature of the TDVA with friction is that its stability is dependent on the excitation parameters. To ensure the stability of the time-delayed control, this paper defines a sufficient condition for stability estimation. Experimental measurements show that the dynamic response of the TDVA with friction can be accurately predicted and the time-delayed control can be precisely achieved by using the modelling and tuning technique provided in this paper.
Structural analysis and evaluation of actual PC bridge using 950 keV/3.95 MeV X-band linacs
NASA Astrophysics Data System (ADS)
Takeuchi, H.; Yano, R.; Ozawa, I.; Mitsuya, Y.; Dobashi, K.; Uesaka, M.; Kusano, J.; Oshima, Y.; Ishida, M.
2017-07-01
In Japan, bridges constructed during the era of rapid economic growth are aging, and advanced maintenance methods have recently become strongly required. To meet this demand, we have developed on-site inspection systems using 950 keV/3.95 MeV X-band (9.3 GHz) linac X-ray sources. These systems can visualize, in seconds, the inner state of bridges, including concrete cracks, the location and state of tendons (wires), and other imperfections. In on-site inspections, the 950 keV linac exhibited sufficient performance, but for thicker concrete it is difficult to visualize the internal state with the 950 keV linac. Therefore, we proceeded with the installation of the 3.95 MeV linac for on-site bridge inspection. In addition, for accurate evaluation, verification of the parallel-motion CT technique and FEM analysis are in progress.
Principles for the dynamic maintenance of cortical polarity
Marco, Eugenio; Wedlich-Soldner, Roland; Li, Rong; Altschuler, Steven J.; Wu, Lani F.
2007-01-01
Summary Diverse cell types require the ability to dynamically maintain polarized membrane protein distributions through balancing transport and diffusion. However, design principles underlying dynamically maintained cortical polarity are not well understood. Here we constructed a mathematical model for characterizing the morphology of dynamically polarized protein distributions. We developed analytical approaches for measuring all model parameters from single-cell experiments. We applied our methods to a well-characterized system for studying polarized membrane proteins: budding yeast cells expressing activated Cdc42. We found that balanced diffusion and colocalized transport to and from the plasma membrane were sufficient for accurately describing polarization morphologies. Surprisingly, the model predicts that polarized regions are defined with a precision that is nearly optimal for measured transport rates, and that polarity can be dynamically stabilized through positive feedback with directed transport. Our approach provides a step towards understanding how biological systems shape spatially precise, unambiguous cortical polarity domains using dynamic processes. PMID:17448998
Application of LANDSAT-2 to the management of Delaware's marine and wetland resources
NASA Technical Reports Server (NTRS)
Klemas, V. (Principal Investigator); Bartlett, D.; Philpot, W.; Davis, G.
1976-01-01
The author has identified the following significant results. The spectral signature of the acid waste disposal plume, investigated 38 miles off the Delaware coast, is caused primarily by scattering from particles in the form of suspended ferric iron floc. In comparison, the absorption caused by the dissolved fraction of iron and other substances has a negligible effect on the spectral signature. Ocean waste disposal plumes were observed by LANDSAT-1 and -2, from the time of dumping up to 54 hours after dumping, during fourteen different passes over the Delaware test site. The spatial resolution, radiometric sensitivity, and spectral band location of the LANDSAT multispectral scanner are sufficient to identify the location of ocean disposal plumes. The movement and dispersion of ocean waste disposal plumes can be estimated if the original dump location, time, and injection method are known. Operating LANDSAT in the high-gain mode helps to determine plume dispersion more accurately.
NASA Technical Reports Server (NTRS)
Gardner, Adrian
2010-01-01
National Aeronautics and Space Administration (NASA) weather and atmospheric environmental organizations are insatiable consumers of geophysical, hydrometeorological and solar weather statistics. The expanding array of internetworked sensors producing targeted physical measurements has generated an almost factorial explosion of near real-time inputs to topical statistical datasets. Normalizing and value-based parsing of such statistical datasets in support of time-constrained weather and environmental alerts and warnings is essential, even with dedicated high-performance computational capabilities. What are the optimal indicators for advanced decision making? How do we recognize the line between sufficient statistical sampling and excessive, mission-destructive sampling? How do we ensure that the normalization and parsing process, when interpolated through numerical models, yields accurate and actionable alerts and warnings? This presentation will address the integrated means and methods to achieve desired outputs for NASA and consumers of its data.
Calculations of turbulent separated flows
NASA Technical Reports Server (NTRS)
Zhu, J.; Shih, T. H.
1993-01-01
A numerical study of incompressible turbulent separated flows is carried out by using two-equation turbulence models of the K-epsilon type. On the basis of realizability analysis, a new formulation of the eddy-viscosity is proposed which ensures the positiveness of turbulent normal stresses - a realizability condition that most existing two-equation turbulence models are unable to satisfy. The present model is applied to calculate two backward-facing step flows. Calculations with the standard K-epsilon model and a recently developed RNG-based K-epsilon model are also made for comparison. The calculations are performed with a finite-volume method. A second-order accurate differencing scheme and sufficiently fine grids are used to ensure the numerical accuracy of solutions. The calculated results are compared with the experimental data for both mean and turbulent quantities. The comparison shows that the present model performs quite well for separated flows.
Accidental Turbulent Discharge Rate Estimation from Videos
NASA Astrophysics Data System (ADS)
Ibarra, Eric; Shaffer, Franklin; Savaş, Ömer
2015-11-01
A technique to estimate the volumetric discharge rate in accidental oil releases using high-speed video streams is described. The essence of the method is similar to PIV processing; however, the cross-correlation is carried out on the visible features of the efflux, which are usually turbulent, opaque and immiscible. The key step in the process is to perform a pixelwise time filtering on the video stream, in which the filter parameters are commensurate with the scales of the large eddies. The velocity field extracted from the shell of visible features is then used to construct an approximate velocity profile within the discharge. The technique has been tested in laboratory experiments using both water and oil jets at Re ~ 10^5. The technique is accurate to within 20%, which is sufficient for initial responders to deploy adequate resources for containment. The software package requires minimal user input and is intended for deployment on an ROV in the field. Supported by DOI via NETL.
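A rough sketch of the two processing steps described, pixelwise time filtering followed by cross-correlation of the visible features, is given below using a moving-average filter and FFT phase correlation; the filter window, normalization and subpixel refinement of the actual software are not reproduced, and all names are illustrative.

```python
import numpy as np

def time_filter(frames, window):
    """Moving-average pixelwise time filter over a (T, H, W) stack; the
    window should be commensurate with the large-eddy time scale."""
    kernel = np.ones(window) / window
    return np.apply_along_axis(
        lambda p: np.convolve(p, kernel, mode="valid"), 0, frames)

def feature_shift(f1, f2):
    """Integer-pixel displacement of f2 relative to f1 via FFT phase
    correlation, the core of PIV-style feature tracking."""
    R = np.conj(np.fft.fft2(f1)) * np.fft.fft2(f2)
    r = np.fft.ifft2(R / (np.abs(R) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(r), r.shape)
    ny, nx = f1.shape
    return (dy - ny if dy > ny // 2 else dy,
            dx - nx if dx > nx // 2 else dx)

# velocity estimate: shift (px) * pixel size (m/px) * frame rate (1/s)
```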
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, R; Baer, E; Jee, K
Purpose: For proton therapy, an accurate model of CT HU to relative stopping power (RSP) conversion is essential. In current practice, validation of these models relies solely on measurements of tissue substitutes with standard compositions. Validation based on real tissue samples would be much more direct and can address variations between patients. This study intends to develop an efficient and accurate system, based on the concept of dose extinction, to measure WEPL and retrieve RSP in a large number of biological tissue types. Methods: A broad AP proton beam delivering a spread-out Bragg peak (SOBP) is used to irradiate the samples, with a Matrixx detector positioned immediately below. A water tank was placed on top of the samples, with the water level controllable with sub-millimeter precision by a remotely controlled dosing pump. While gradually lowering the water level with the beam on, the transmission dose was recorded at 1 frame/sec. The WEPL was determined as the difference between the known beam range of the delivered SOBP (80%) and the water level corresponding to 80% of the measured dose profiles in time. A Gammex 467 phantom was used to test the system, and various types of biological tissue were measured. Results: RSPs for all Gammex inserts, except the one made of lung-450 material (<2% error), were determined to within ±0.5% error. Depending on the WEPL of the investigated phantom, a measurement takes around 10 min, which can be accelerated by a faster pump. Conclusion: Based on the concept of dose extinction, a system was explored to measure WEPL efficiently and accurately for a large number of samples. This allows the validation of CT HU to stopping power conversions based on a large number of samples and real tissues. It also allows the assessment of beam uncertainties due to variations over patients, an issue that has never been sufficiently studied before.
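The WEPL retrieval reduces to locating the water level at which the measured dose reaches 80% of its plateau and subtracting it from the known 80% range of the SOBP. Below is a minimal per-pixel sketch, assuming a recorded drain trace; the function and variable names are illustrative, and the authors' exact normalization may differ.

```python
import numpy as np

def wepl_from_extinction(water_level_cm, dose, r80_cm):
    """WEPL of a sample from a dose-extinction drain trace: interpolate the
    water level at which the dose reaches 80% of its plateau, then subtract
    it from the known 80% range of the delivered SOBP."""
    level = np.asarray(water_level_cm, float)
    d = np.asarray(dose, float) / np.max(dose)
    order = np.argsort(d)            # make dose monotonic for interpolation
    level_at_80 = np.interp(0.8, d[order], level[order])
    return r80_cm - level_at_80

# applied per pixel over a detector frame stack this yields a 2D WEPL map;
# dividing by the physical sample thickness gives the RSP
```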