Sample records for accurate source parameters

  1. Accurate estimation of seismic source parameters of induced seismicity by a combined approach of generalized inversion and genetic algorithm: Application to The Geysers geothermal area, California

    NASA Astrophysics Data System (ADS)

    Picozzi, M.; Oth, A.; Parolai, S.; Bindi, D.; De Landro, G.; Amoroso, O.

    2017-05-01

    The accurate determination of stress drop, seismic efficiency, and how source parameters scale with earthquake size is an important issue for seismic hazard assessment of induced seismicity. We propose an improved nonparametric, data-driven strategy suitable for monitoring induced seismicity, which combines the generalized inversion technique with genetic algorithms. In the first step of the analysis, the generalized inversion technique allows for an effective correction of waveforms for attenuation and site contributions. Then, the retrieved source spectra are inverted by a nonlinear, sensitivity-driven inversion scheme that allows accurate estimation of source parameters. We investigate the earthquake source characteristics of 633 induced earthquakes (Mw 2-3.8) recorded at The Geysers geothermal field (California) by a dense seismic network (32 stations, more than 17,000 velocity records). We find non-self-similar behavior, empirical source spectra that require an ω^γ source model with γ > 2 to be well fit, and small radiation efficiency η_SW. All these findings suggest different dynamic rupture processes for smaller and larger earthquakes, and that the proportion of high-frequency energy radiation and the amount of energy required to overcome friction or to create new fracture surfaces change with earthquake size. Furthermore, we also observe two distinct families of events with peculiar source parameters: one suggests the reactivation of deep structures linked to the regional tectonics, while the other supports the idea of an important role of steeply dipping faults in fluid pressure diffusion.
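
The ω^γ source model above has a flat low-frequency level Ω0, a corner frequency fc, and a high-frequency falloff proportional to f^(-γ). Fitting it to an empirical source spectrum is a small nonlinear problem; a minimal sketch with synthetic data, where a dense grid search stands in for the genetic algorithm used in the study (all parameter values are illustrative, not from the paper):

```python
import numpy as np

def source_spectrum(f, omega0, fc, gamma):
    # Generalized omega-gamma displacement source spectrum: flat level
    # omega0 below the corner frequency fc, ~ f**(-gamma) falloff above.
    return omega0 / (1.0 + (f / fc) ** gamma)

# Synthetic noiseless spectrum with gamma > 2 (the non-self-similar case).
f = np.logspace(-1, 2, 200)                  # 0.1 to 100 Hz
spec = source_spectrum(f, 3.0e-4, 5.0, 2.6)

# Global grid search over (fc, gamma); for each pair the best omega0 is
# linear in the model and found in closed form. A genetic algorithm plays
# this role in the study; a dense grid is the simplest stand-in.
best = None
for fc in np.logspace(-0.5, 1.5, 121):       # ~0.32 to ~31.6 Hz
    for gamma in np.arange(1.5, 3.5, 0.01):
        g = 1.0 / (1.0 + (f / fc) ** gamma)
        omega0 = (g @ spec) / (g @ g)        # least-squares amplitude
        err = np.sum((spec - omega0 * g) ** 2)
        if best is None or err < best[0]:
            best = (err, omega0, fc, gamma)
_, omega0_est, fc_est, gamma_est = best
```

With real spectra one would weight by the noise level and refine the grid solution with a local nonlinear step; the closed-form amplitude trick keeps the search two-dimensional.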

  2. Labeling Projections on Published Maps

    USGS Publications Warehouse

    Snyder, John P.

    1987-01-01

    To permit accurate scaling on a map, and to use the map as a source of accurate positions in the transfer of data, certain parameters - such as the standard parallels selected for a conic projection - must be stated on the map. This information is often missing on published maps. Three current major world atlases are evaluated with respect to map projection identification. The parameters essential for the projections used in these three atlases are discussed and listed. These parameters should be stated on any map based on the same projection.

  3. TRIPPy: Python-based Trailed Source Photometry

    NASA Astrophysics Data System (ADS)

    Fraser, Wesley C.; Alexandersen, Mike; Schwamb, Megan E.; Marsset, Michael E.; Pike, Rosemary E.; Kavelaars, JJ; Bannister, Michele T.; Benecchi, Susan; Delsanti, Audrey

    2016-05-01

    TRIPPy (TRailed Image Photometry in Python) uses a pill-shaped aperture, a rectangle capped with two semicircles and described by three parameters (trail length, angle, and radius), to improve photometry of moving sources over that obtained with circular apertures. It can generate accurate model and trailed point-spread functions from stationary background sources in sidereally tracked images. Appropriate aperture correction provides accurate, unbiased flux measurement. TRIPPy requires numpy, scipy, matplotlib, Astropy (ascl:1304.002), and stsci.numdisplay; emcee (ascl:1303.002) and SExtractor (ascl:1010.064) are optional.

  4. Estimating winter wheat phenological parameters: Implications for crop modeling

    USDA-ARS's Scientific Manuscript database

    Crop parameters, such as the timing of developmental events, are critical for accurate simulation results in crop simulation models, yet uncertainty often exists in determining the parameters. Factors contributing to the uncertainty include: a) sources of variation within a plant (i.e., within diffe...

  5. TU-AB-BRC-05: Creation of a Monte Carlo TrueBeam Model by Reproducing Varian Phase Space Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O’Grady, K; Davis, S; Seuntjens, J

    Purpose: To create a Varian TrueBeam 6 MV FFF Monte Carlo model using BEAMnrc/EGSnrc that accurately reproduces the Varian representative dataset, followed by tuning the model's source parameters to accurately reproduce in-house measurements. Methods: A BEAMnrc TrueBeam model for 6 MV FFF was created by modifying a validated 6 MV Varian CL21EX model. Geometric dimensions and materials were adjusted in a trial-and-error approach to match the fluence and spectra of TrueBeam phase spaces output by the Varian VirtuaLinac. Once the model's phase space matched Varian's counterpart using the default source parameters, it was validated against 10 × 10 cm² Varian representative data obtained with the IBA CC13. The source parameters were then tuned to match in-house 5 × 5 cm² PTW microDiamond measurements. All dose-to-water simulations included detector models to account for the effects of volume averaging and the non-water equivalence of the chamber materials, allowing for more accurate source parameter selection. Results: The Varian phase space spectra and fluence were matched with excellent agreement. The in-house model's PDD agreement with CC13 TrueBeam representative data was within 0.9% local percent difference beyond the first 3 mm. Profile agreement at 10 cm depth was within 0.9% local percent difference and 1.3 mm distance-to-agreement in the central axis and penumbra regions, respectively. Once the source parameters were tuned, PDD agreement with microDiamond measurements was within 0.9% local percent difference beyond 2 mm. The microDiamond profile agreement at 10 cm depth was within 0.6% local percent difference and 0.4 mm distance-to-agreement in the central axis and penumbra regions, respectively. Conclusion: An accurate in-house Monte Carlo model of the Varian TrueBeam was achieved independently of the Varian phase space solution and was tuned to in-house measurements.
    KO acknowledges partial support by the CREATE Medical Physics Research Training Network grant of the Natural Sciences and Engineering Research Council (Grant number: 432290).

  6. An adaptive regularization parameter choice strategy for multispectral bioluminescence tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng Jinchao; Qin Chenghu; Jia Kebin

    2011-11-15

    Purpose: Bioluminescence tomography (BLT) provides an effective tool for monitoring physiological and pathological activities in vivo. However, the measured data in bioluminescence imaging are corrupted by noise, so regularization methods are commonly used to find a regularized solution. The quality of the reconstructed bioluminescent source obtained by regularization methods depends crucially on the choice of the regularization parameters, and to date their selection remains challenging. To address these problems, the authors propose a BLT reconstruction algorithm with an adaptive parameter choice rule. Methods: The proposed reconstruction algorithm uses a diffusion equation to model bioluminescent photon transport. The diffusion equation is solved with a finite element method. Computed tomography (CT) images provide anatomical information regarding the geometry of the small animal and its internal organs. To reduce the ill-posedness of BLT, spectral information and the optimal permissible source region are employed. Then, the relationship between the unknown source distribution and multiview, multispectral boundary measurements is established based on the finite element method and the optimal permissible source region. Since the measured data are noisy, the BLT reconstruction is formulated as an l₂ data fidelity term plus a general regularization term. To choose the regularization parameters, an efficient model-function approach is proposed that does not require knowledge of the noise level: it needs only the computation of the residual norm and the regularized-solution norm. With these quantities, the model function is constructed to approximate the objective function, and the regularization parameter is updated iteratively. Results: First, the micro-CT based mouse phantom was used for simulation verification.
    Simulation experiments were used to illustrate why multispectral data were used rather than monochromatic data. Furthermore, the study conducted with an adaptively chosen regularization parameter demonstrated our ability to accurately localize the bioluminescent source. With the adaptively estimated regularization parameter, the reconstructed center position of the source was (20.37, 31.05, 12.95) mm, and the distance to the real source was 0.63 mm. The results of the dual-source experiments further showed that our algorithm could localize the bioluminescent sources accurately. The authors then presented experimental evidence that the proposed algorithm is computationally more efficient than the heuristic method. The effectiveness of the new algorithm was also confirmed by comparison with the L-curve method. Furthermore, various initial guesses of the regularization parameter were used to illustrate the convergence of our algorithm. Finally, an in vivo mouse experiment further illustrated the effectiveness of the proposed algorithm. Conclusions: Using numerical, physical phantom, and in vivo examples, we demonstrated that bioluminescent sources can be reconstructed accurately with automatically chosen regularization parameters. The proposed algorithm outperformed both the heuristic regularization parameter choice method and the L-curve method in computational speed and localization error.
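
The model-function approach above is built from only two quantities per candidate parameter: the residual norm and the regularized-solution norm. A generic Tikhonov sketch on a toy ill-conditioned system (not the BLT forward model) shows how those two norms behave as the regularization parameter varies:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ill-conditioned linear system standing in for the BLT forward model
# (columns scaled from 1 down to 1e-6), with noisy measurements.
A = rng.standard_normal((50, 20)) @ np.diag(np.logspace(0, -6, 20))
x_true = rng.standard_normal(20)
b = A @ x_true + 1e-3 * rng.standard_normal(50)

def tikhonov(A, b, lam):
    # Regularized least squares: argmin ||A x - b||^2 + lam * ||x||^2.
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# The only quantities the model-function rule needs at each iterate:
# the residual norm and the regularized-solution norm.
lams = np.logspace(-8, 0, 9)
res_norms = [np.linalg.norm(A @ tikhonov(A, b, lam) - b) for lam in lams]
sol_norms = [np.linalg.norm(tikhonov(A, b, lam)) for lam in lams]
```

As the parameter grows, the residual norm increases and the solution norm decreases; a parameter-choice rule (model function, L-curve, or a heuristic) trades these off.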

  7. Estimating virus occurrence using Bayesian modeling in multiple drinking water systems of the United States

    USGS Publications Warehouse

    Varughese, Eunice A.; Brinkman, Nichole E; Anneken, Emily M; Cashdollar, Jennifer S; Fout, G. Shay; Furlong, Edward T.; Kolpin, Dana W.; Glassmeyer, Susan T.; Keely, Scott P

    2017-01-01

    incorporated into a Bayesian model to more accurately determine viral load in both source and treated water. Results of the Bayesian model indicated that viruses are present in source water and treated water. By using a Bayesian framework that incorporates inhibition, as well as many other parameters that affect viral detection, this study offers an approach for more accurately estimating the occurrence of viral pathogens in environmental waters.

  8. Automated source term and wind parameter estimation for atmospheric transport and dispersion applications

    NASA Astrophysics Data System (ADS)

    Bieringer, Paul E.; Rodriguez, Luna M.; Vandenberghe, Francois; Hurst, Jonathan G.; Bieberbach, George; Sykes, Ian; Hannan, John R.; Zaragoza, Jake; Fry, Richard N.

    2015-12-01

    Accurate simulations of the atmospheric transport and dispersion (AT&D) of hazardous airborne materials rely heavily on the source term parameters necessary to characterize the initial release and on the meteorological conditions that drive the downwind dispersion. In many cases the source parameters are not known and must consequently be based on rudimentary assumptions. This is particularly true of accidental releases and the intentional releases associated with terrorist incidents. When available, meteorological observations are often not representative of the conditions at the location of the release, and the use of these non-representative meteorological conditions can result in significant errors in the hazard assessments downwind of the sensors, even when the other source parameters are accurately characterized. Here, we describe a computationally efficient methodology to characterize both the release source parameters and the low-level winds (e.g., winds near the surface) required to produce a refined downwind hazard. This methodology, known as the Variational Iterative Refinement Source Term Estimation (STE) Algorithm (VIRSA), consists of a combination of modeling systems: a back-trajectory based source inversion method, a forward Gaussian puff dispersion model, and a variational refinement algorithm that uses both a simple forward AT&D model serving as a surrogate for the more complex Gaussian puff model and a formal adjoint of this surrogate model. The back-trajectory based method is used to calculate a first-guess source estimate based on the available observations of the airborne contaminant plume and atmospheric conditions. The variational refinement algorithm is then used to iteratively refine the first-guess STE parameters and meteorological variables. The algorithm has been evaluated across a wide range of scenarios of varying complexity.
    It has been shown to improve the source location estimates by several hundred percent (normalized by the distance from the source to the closest sampler) and to improve mass estimates by several orders of magnitude. Furthermore, it can operate in scenarios with inconsistencies between the wind and airborne contaminant sensor observations, adjusting the wind to provide a better match between the hazard prediction and the observations.

  9. Earthquake source parameters determined by the SAFOD Pilot Hole seismic array

    USGS Publications Warehouse

    Imanishi, K.; Ellsworth, W.L.; Prejean, S.G.

    2004-01-01

    We estimate the source parameters of #3 microearthquakes by jointly analyzing seismograms recorded by the 32-level, 3-component seismic array installed in the SAFOD Pilot Hole. We applied an inversion procedure to displacement amplitude spectra to estimate the spectral parameters of the omega-square model (spectral level and corner frequency) together with Q. Because we expect the spectral parameters and Q to vary slowly with depth in the well, we impose a smoothness constraint on those parameters as a function of depth using a linear first-difference operator. This method correctly resolves corner frequency and Q, which leads to a more accurate estimation of source parameters than can be obtained from single sensors. The stress drop of one example of the SAFOD target repeating earthquake falls in the range of typical tectonic earthquakes. Copyright 2004 by the American Geophysical Union.
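
The smoothness constraint above is a linear first-difference operator applied across the array levels. A minimal sketch of the operator and of a damped least-squares solve that uses it (toy data standing in for per-sensor estimates of Q or spectral level; not the actual spectral inversion):

```python
import numpy as np

def first_difference(n):
    # Linear first-difference operator D of shape (n-1, n):
    # (D x)[i] = x[i+1] - x[i], penalizing level-to-level jumps.
    D = np.zeros((n - 1, n))
    idx = np.arange(n - 1)
    D[idx, idx] = -1.0
    D[idx, idx + 1] = 1.0
    return D

# Toy smoothed inversion: recover a parameter that varies slowly with
# depth across the 32 array levels from noisy per-sensor estimates.
rng = np.random.default_rng(1)
n = 32
depth = np.linspace(0.0, 1.0, n)
truth = 2.0 + 0.5 * np.sin(2 * np.pi * depth)
data = truth + 0.3 * rng.standard_normal(n)

D = first_difference(n)
mu = 10.0                                 # smoothness weight (a tuning choice)
# Solve min ||x - data||^2 + mu * ||D x||^2 via the normal equations.
x = np.linalg.solve(np.eye(n) + mu * D.T @ D, data)
```

The penalty term couples neighboring sensors, so the joint solution is smoother with depth than the raw single-sensor estimates.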

  10. Seismic source parameters of the induced seismicity at The Geysers geothermal area, California, by a generalized inversion approach

    NASA Astrophysics Data System (ADS)

    Picozzi, Matteo; Oth, Adrien; Parolai, Stefano; Bindi, Dino; De Landro, Grazia; Amoroso, Ortensia

    2017-04-01

    The accurate determination of stress drop, seismic efficiency, and how source parameters scale with earthquake size is important for seismic hazard assessment of induced seismicity. We propose an improved non-parametric, data-driven strategy suitable for monitoring induced seismicity, which combines the generalized inversion technique with genetic algorithms. In the first step of the analysis, the generalized inversion technique allows for an effective correction of waveforms for the attenuation and site contributions. Then, the retrieved source spectra are inverted by a non-linear, sensitivity-driven inversion scheme that allows accurate estimation of source parameters. We investigate the earthquake source characteristics of 633 induced earthquakes (ML 2-4.5) recorded at The Geysers geothermal field (California) by a dense seismic network (32 stations of the Lawrence Berkeley National Laboratory Geysers/Calpine surface seismic network, more than 17,000 velocity records). For most of the events we find non-self-similar behavior, empirical source spectra that require an ω^γ source model with γ > 2 to be well fit, and small radiation efficiency η_SW. All these findings suggest different dynamic rupture processes for smaller and larger earthquakes, and that the proportion of high-frequency energy radiation and the amount of energy required to overcome friction or to create new fracture surfaces change with earthquake size. Furthermore, we also observe two distinct families of events with peculiar source parameters: one suggests the reactivation of deep structures linked to the regional tectonics, while the other supports the idea of an important role of steeply dipping faults in fluid pressure diffusion.

  11. A Parameter Identification Method for Helicopter Noise Source Identification and Physics-Based Semi-Empirical Modeling

    NASA Technical Reports Server (NTRS)

    Greenwood, Eric, II; Schmitz, Fredric H.

    2010-01-01

    A new physics-based parameter identification method for rotor harmonic noise sources is developed using an acoustic inverse simulation technique. This new method allows for the identification of individual rotor harmonic noise sources and allows them to be characterized in terms of their individual non-dimensional governing parameters. This new method is applied to both wind tunnel measurements and ground noise measurements of two-bladed rotors. The method is shown to match the parametric trends of main rotor Blade-Vortex Interaction (BVI) noise, allowing accurate estimates of BVI noise to be made for operating conditions based on a small number of measurements taken at different operating conditions.

  12. The influence of Monte Carlo source parameters on detector design and dose perturbation in small field dosimetry

    NASA Astrophysics Data System (ADS)

    Charles, P. H.; Crowe, S. B.; Kairn, T.; Knight, R.; Hill, B.; Kenny, J.; Langton, C. M.; Trapp, J. V.

    2014-03-01

    To obtain accurate Monte Carlo simulations of small radiation fields, it is important to model the initial source parameters (electron energy and spot size) accurately. However, recent studies have shown that small field dosimetry correction factors are insensitive to these parameters. The aim of this work is to extend this concept and test whether these parameters affect dose perturbations in general, which is important for detector design and for calculating perturbation correction factors. The EGSnrc C++ user code cavity was used for all simulations. Varying amounts of air between 0 and 2 mm were deliberately introduced upstream of a diode, and the dose perturbation caused by the air was quantified. These simulations were then repeated using a range of initial electron energies (5.5 to 7.0 MeV) and electron spot sizes (0.7 to 2.2 FWHM). The resultant dose perturbations were large: for example, 2 mm of air caused a dose reduction of up to 31% when simulated with a 6 mm field size. However, these values did not vary by more than 2% when simulated across the full range of source parameters tested. Thus, if a detector is modified by the introduction of air, one can be confident that the response of the detector will be the same across all similar linear accelerators, and Monte Carlo modelling of each individual machine is not required.

  13. Directly comparing gravitational wave data to numerical relativity simulations: systematics

    NASA Astrophysics Data System (ADS)

    Lange, Jacob; O'Shaughnessy, Richard; Healy, James; Lousto, Carlos; Zlochower, Yosef; Shoemaker, Deirdre; Lovelace, Geoffrey; Pankow, Christopher; Brady, Patrick; Scheel, Mark; Pfeiffer, Harald; Ossokine, Serguei

    2017-01-01

    We compare synthetic data directly to complete numerical relativity simulations of binary black holes. In doing so, we circumvent ad hoc approximations introduced in the semi-analytical models previously used in gravitational wave parameter estimation and compare the data against the most accurate waveforms, including higher modes. In this talk, we focus on the synthetic studies that test potential sources of systematic errors. We also run "end-to-end" studies of intrinsically different synthetic sources to show that we can recover parameters for different systems.

  14. Lithospheric Models of the Middle East to Improve Seismic Source Parameter Determination/Event Location Accuracy

    DTIC Science & Technology

    2012-09-01

    State Award Nos. DE-AC52-07NA27344/24.2.3.2 and DOS_SIAA-11-AVC/NMA-1 ABSTRACT The Middle East is a tectonically complex and seismically...active region. The ability to accurately locate earthquakes and other seismic events in this region is complicated by tectonics, the uneven...and seismic source parameters show that this activity comes from tectonic events. This work is informed by continuous or event-based regional

  15. Optimization of light source parameters in the photodynamic therapy of heterogeneous prostate

    NASA Astrophysics Data System (ADS)

    Li, Jun; Altschuler, Martin D.; Hahn, Stephen M.; Zhu, Timothy C.

    2008-08-01

    The three-dimensional (3D) heterogeneous distributions of optical properties in a patient prostate can now be measured in vivo. Such data can be used to obtain a more accurate light-fluence kernel. (For specified sources and points, the kernel gives the fluence delivered to a point by a source of unit strength.) In turn, the kernel can be used to solve the inverse problem that determines the source strengths needed to deliver a prescribed photodynamic therapy (PDT) dose (or light-fluence) distribution within the prostate (assuming uniform drug concentration). We have developed and tested computational procedures to use the new heterogeneous data to optimize delivered light-fluence. New problems arise, however, in quickly obtaining an accurate kernel following the insertion of interstitial light sources and data acquisition. (1) The light-fluence kernel must be calculated in 3D and separately for each light source, which increases kernel size. (2) An accurate kernel for light scattering in a heterogeneous medium requires ray tracing and volume partitioning, thus significant calculation time. To address these problems, two different kernels were examined and compared for speed of creation and accuracy of dose. Kernels derived more quickly involve simpler algorithms. Our goal is to achieve optimal dose planning with patient-specific heterogeneous optical data applied through accurate kernels, all within clinical times. The optimization process is restricted to accepting the given (interstitially inserted) sources, and determining the best source strengths with which to obtain a prescribed dose. The Cimmino feasibility algorithm is used for this purpose. The dose distribution and source weights obtained for each kernel are analyzed. In clinical use, optimization will also be performed prior to source insertion to obtain initial source positions, source lengths and source weights, but with the assumption of homogeneous optical properties. 
For this reason, we compare the results from heterogeneous optical data with those obtained from average homogeneous optical properties. The optimized treatment plans are also compared with the reference clinical plan, defined as the plan with sources of equal strength, distributed regularly in space, which delivers a mean value of prescribed fluence at detector locations within the treatment region. The study suggests that comprehensive optimization of source parameters (i.e. strengths, lengths and locations) is feasible, thus allowing acceptable dose coverage in a heterogeneous prostate PDT within the time constraints of the PDT procedure.
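
The Cimmino feasibility algorithm used above for source-strength optimization averages projections of the current iterate onto the hyperplanes defined by each fluence constraint, keeping strengths nonnegative. A toy sketch (the kernel matrix K, its dimensions, and the weights are made up for illustration, not taken from the study):

```python
import numpy as np

def cimmino(A, b, iters=20000, relax=1.0):
    # Cimmino's simultaneous-projection method: project the current
    # iterate onto every hyperplane a_i . x = b_i, average the projections
    # (equal weights here), and clip to nonnegative source strengths.
    m = A.shape[0]
    x = np.zeros(A.shape[1])
    row_norm2 = np.sum(A * A, axis=1)
    for _ in range(iters):
        resid = b - A @ x
        x = x + relax * (A.T @ (resid / row_norm2)) / m
        x = np.maximum(x, 0.0)            # feasibility: strengths >= 0
    return x

# Toy problem: 4 sources, 6 detector fluence targets. K is a made-up
# light-fluence kernel (fluence per unit source strength), not a real one.
rng = np.random.default_rng(2)
K = rng.uniform(0.1, 1.0, size=(6, 4))
w_true = np.array([1.0, 0.5, 2.0, 0.0])
phi = K @ w_true                          # prescribed fluence at detectors
w = cimmino(K, phi)
```

Because every constraint is handled simultaneously rather than sequentially, the method parallelizes naturally, which is one reason it suits clinical time constraints.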

  16. Physiological motion modeling for organ-mounted robots.

    PubMed

    Wood, Nathan A; Schwartzman, David; Zenati, Marco A; Riviere, Cameron N

    2017-12-01

    Organ-mounted robots passively compensate for heartbeat and respiratory motion. In model-guided procedures, this motion can be a significant source of information that can be used to aid in localization or to add dynamic information to static preoperative maps. Models for estimating periodic motion are proposed for both position and orientation. These models are then tested on animal data and optimal orders are identified. Finally, methods for online identification are demonstrated. Models using exponential coordinates and Euler-angle parameterizations are as accurate as models using quaternion representations, yet require a quarter fewer parameters. Models that incorporate more than four cardiac or three respiration harmonics are no more accurate. Finally, online methods estimate model parameters as accurately as offline methods within three respiration cycles. These methods provide a complete framework for accurately modelling the periodic deformation of points anywhere on the surface of the heart in a closed chest. Copyright © 2017 John Wiley & Sons, Ltd.
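
Periodic motion models of the kind described above are sums of cardiac and respiratory harmonics, so once the two fundamental frequencies are known the fit is linear. A sketch with a synthetic 1-D position trace (frequencies and amplitudes are illustrative, not from the paper; the paper's models also cover orientation):

```python
import numpy as np

def harmonic_design(t, f_card, f_resp, n_card=4, n_resp=3):
    # Design matrix of sine/cosine harmonics of the cardiac and
    # respiratory frequencies (4 and 3 harmonics, the orders the study
    # found sufficient), plus a constant offset column.
    cols = [np.ones_like(t)]
    for k in range(1, n_card + 1):
        cols += [np.sin(2 * np.pi * k * f_card * t),
                 np.cos(2 * np.pi * k * f_card * t)]
    for k in range(1, n_resp + 1):
        cols += [np.sin(2 * np.pi * k * f_resp * t),
                 np.cos(2 * np.pi * k * f_resp * t)]
    return np.column_stack(cols)

# Synthetic 1-D position trace: heartbeat at 1.5 Hz, respiration at 0.25 Hz.
rng = np.random.default_rng(3)
t = np.linspace(0.0, 12.0, 1200)          # three 4 s respiration cycles
pos = (2.0 * np.sin(2 * np.pi * 1.5 * t)
       + 0.6 * np.sin(2 * np.pi * 3.0 * t)     # 2nd cardiac harmonic
       + 5.0 * np.cos(2 * np.pi * 0.25 * t)
       + 0.1 * rng.standard_normal(t.size))

# Once the frequencies are known, the fit is ordinary least squares.
X = harmonic_design(t, f_card=1.5, f_resp=0.25)
coef, *_ = np.linalg.lstsq(X, pos, rcond=None)
fit = X @ coef
```

Online identification can replace the batch least-squares step with a recursive update over a sliding window, which is consistent with convergence within a few respiration cycles.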

  17. a Semi-Empirical Topographic Correction Model for Multi-Source Satellite Images

    NASA Astrophysics Data System (ADS)

    Xiao, Sa; Tian, Xinpeng; Liu, Qiang; Wen, Jianguang; Ma, Yushuang; Song, Zhenwei

    2018-04-01

    Topographic correction of surface reflectance in rugged terrain is a prerequisite for the quantitative application of remote sensing in mountainous areas. A physics-based radiative transfer model can be applied to correct the topographic effect and accurately retrieve the reflectance of the slope surface from high-quality satellite images such as Landsat 8 OLI. However, as more and more image data become available from a variety of sensors, we sometimes cannot obtain the accurate sensor calibration parameters and atmospheric conditions needed by physics-based topographic correction models. This paper proposes a semi-empirical atmospheric and topographic correction model for multi-source satellite images without accurate calibration parameters. Based on this model we can obtain topographically corrected surface reflectance from DN data, and we tested and verified the model with image data from the Chinese HJ and GF satellites. The results show that the correlation factor was reduced by almost 85% for near-infrared bands and the overall classification accuracy increased by 14% after correction for HJ. The reflectance difference between slopes facing toward and away from the sun was also reduced after correction.

  18. Improved response functions for gamma-ray skyshine analyses

    NASA Astrophysics Data System (ADS)

    Shultis, J. K.; Faw, R. E.; Deng, X.

    1992-09-01

    A computationally simple method, based on line-beam response functions, is refined for estimating gamma skyshine dose rates. Critical to this method is the availability of an accurate approximation for the line-beam response function (LBRF). In this study, the LBRF is evaluated accurately with the point-kernel technique using recent photon interaction data. Various approximations to the LBRF are considered, and a three-parameter formula is selected as the most practical approximation. By fitting the approximating formula to point-kernel results, a set of parameters is obtained that allows the LBRF to be quickly and accurately evaluated for energies between 0.01 and 15 MeV, for source-to-detector distances from 1 to 3000 m, and for beam angles from 0 to 180 degrees. This re-evaluation of the approximate LBRF gives better accuracy, especially at low energies, over a greater source-to-detector range than do previous LBRF approximations. A conical beam response function is also introduced for application to skyshine sources that are azimuthally symmetric about a vertical axis. The new response functions are then applied to three simple skyshine geometries (an open silo, an infinite wall, and a rectangular four-wall building) and the results are compared to previous calculations and benchmark data.
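
The abstract does not give the actual three-parameter formula, but the fitting step it describes is easy to illustrate. As a purely hypothetical stand-in, take a power-law-times-exponential form in beam angle; its logarithm is linear in the parameters, so the fit to point-kernel values reduces to ordinary least squares:

```python
import numpy as np

def lbrf_fit_form(phi, a, b, c):
    # Hypothetical three-parameter fitting form (the abstract does not
    # state the real formula): power law times exponential in beam angle.
    return a * phi**b * np.exp(-c * phi)

# Synthetic "point-kernel" responses standing in for the tabulated data.
phi = np.linspace(5.0, 175.0, 35)         # beam angle in degrees
resp = lbrf_fit_form(phi, 2.0e-3, 0.8, 0.03)

# log R = log a + b*log(phi) - c*phi is linear in (log a, b, c),
# so the fit is a linear least-squares problem in log space.
X = np.column_stack([np.ones_like(phi), np.log(phi), -phi])
coef, *_ = np.linalg.lstsq(X, np.log(resp), rcond=None)
a_est, b_est, c_est = np.exp(coef[0]), coef[1], coef[2]
```

In practice one such fit would be repeated per source energy and distance to build the lookup tables the skyshine method needs.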

  19. Improved response functions for gamma-ray skyshine analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shultis, J.K.; Faw, R.E.; Deng, X.

    1992-09-01

    A computationally simple method, based on line-beam response functions, is refined for estimating gamma skyshine dose rates. Critical to this method is the availability of an accurate approximation for the line-beam response function (LBRF). In this study the LBRF is evaluated accurately with the point-kernel technique using recent photon interaction data. Various approximations to the LBRF are considered, and a three-parameter formula is selected as the most practical approximation. By fitting the approximating formula to point-kernel results, a set of parameters is obtained that allows the LBRF to be quickly and accurately evaluated for energies between 0.01 and 15 MeV, for source-to-detector distances from 1 to 3000 m, and for beam angles from 0 to 180 degrees. This reevaluation of the approximate LBRF gives better accuracy, especially at low energies, over a greater source-to-detector range than do previous LBRF approximations. A conical beam response function is also introduced for application to skyshine sources that are azimuthally symmetric about a vertical axis. The new response functions are then applied to three simple skyshine geometries (an open silo, an infinite wall, and a rectangular four-wall building) and the results are compared to previous calculations and benchmark data.

  20. Comparing Parameter Estimation Techniques for an Electrical Power Transformer Oil Temperature Prediction Model

    NASA Technical Reports Server (NTRS)

    Morris, A. Terry

    1999-01-01

    This paper examines various sources of error in MIT's improved top oil temperature rise over ambient temperature model and estimation process. The sources of error are the current parameter estimation technique, quantization noise, and post-processing of the transformer data. Results from this paper will show that an output error parameter estimation technique should be selected to replace the current least squares estimation technique. The output error technique obtained accurate predictions of transformer behavior, revealed the best error covariance, obtained consistent parameter estimates, and provided for valid and sensible parameters. This paper will also show that the output error technique should be used to minimize errors attributed to post-processing (decimation) of the transformer data. Models used in this paper are validated using data from a large transformer in service.

  1. SENSITIVITY OF STRUCTURAL RESPONSE TO GROUND MOTION SOURCE AND SITE PARAMETERS.

    USGS Publications Warehouse

    Safak, Erdal; Brebbia, C.A.; Cakmak, A.S.; Abdel Ghaffar, A.M.

    1985-01-01

    Designing structures to withstand earthquakes requires an accurate estimation of the expected ground motion. While engineers use the peak ground acceleration (PGA) to model the strong ground motion, seismologists use physical characteristics of the source and the rupture mechanism, such as fault length, stress drop, shear wave velocity, seismic moment, distance, and attenuation. This study presents a method for calculating response spectra from seismological models using random vibration theory. It then investigates the effect of various source and site parameters on peak response. Calculations are based on a nonstationary stochastic ground motion model, which can incorporate all the parameters both in frequency and time domains. The estimation of the peak response accounts for the effects of the non-stationarity, bandwidth and peak correlations of the response.

  2. Predicting dense nonaqueous phase liquid dissolution using a simplified source depletion model parameterized with partitioning tracers

    NASA Astrophysics Data System (ADS)

    Basu, Nandita B.; Fure, Adrian D.; Jawitz, James W.

    2008-07-01

    Simulations of nonpartitioning and partitioning tracer tests were used to parameterize the equilibrium stream tube model (ESM) that predicts the dissolution dynamics of dense nonaqueous phase liquids (DNAPLs) as a function of the Lagrangian properties of DNAPL source zones. Lagrangian, or stream-tube-based, approaches characterize source zones with as few as two trajectory-integrated parameters, in contrast to the potentially thousands of parameters required to describe the point-by-point variability in permeability and DNAPL in traditional Eulerian modeling approaches. The spill and subsequent dissolution of DNAPLs were simulated in two-dimensional domains having different hydrologic characteristics (variance of the log conductivity field = 0.2, 1, and 3) using the multiphase flow and transport simulator UTCHEM. Nonpartitioning and partitioning tracers were used to characterize the Lagrangian properties (travel time and trajectory-integrated DNAPL content statistics) of DNAPL source zones, which were in turn shown to be sufficient for accurate prediction of source dissolution behavior using the ESM throughout the relatively broad range of hydraulic conductivity variances tested here. The results were found to be relatively insensitive to travel time variability, suggesting that dissolution could be accurately predicted even if the travel time variance was only coarsely estimated. Estimation of the ESM parameters was also demonstrated using an approximate technique based on Eulerian data in the absence of tracer data; however, determining the minimum amount of such data required remains for future work. Finally, the stream tube model was shown to be a more unique predictor of dissolution behavior than approaches based on the ganglia-to-pool model for source zone characterization.

  3. Simple Criteria to Determine the Set of Key Parameters of the DRPE Method by a Brute-force Attack

    NASA Astrophysics Data System (ADS)

    Nalegaev, S. S.; Petrov, N. V.

    Known techniques for breaking Double Random Phase Encoding (DRPE), which bypass the resource-intensive brute-force method, require at least two conditions: the attacker knows the encryption algorithm, and has access to pairs of source and encoded images. Our numerical results show that for accurate recovery by a numerical brute-force attack, one needs only some a priori information about the source images, which can be quite general. From our numerical experiments with optical data encryption by DRPE combined with digital holography, we propose four simple criteria for guaranteed and accurate data recovery. These criteria can be applied if grayscale, binary (including QR codes), or color images are used as a source.
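    The DRPE scheme under attack can be summarized in a few lines: the plaintext image is multiplied by one random phase mask, Fourier transformed, multiplied by a second mask, and inverse transformed. The sketch below is a minimal textbook formulation in NumPy, not the authors' code; the image and mask sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def drpe_encrypt(img, phase1, phase2):
    """Double Random Phase Encoding: random phase masks in the input
    plane and the Fourier plane (classic 4f formulation)."""
    field = img * np.exp(1j * phase1)                 # input-plane mask
    spec = np.fft.fft2(field) * np.exp(1j * phase2)   # Fourier-plane mask
    return np.fft.ifft2(spec)                         # noise-like cipher

def drpe_decrypt(cipher, phase1, phase2):
    """Invert both masks; with the correct keys the image is exact."""
    spec = np.fft.fft2(cipher) * np.exp(-1j * phase2)
    return np.abs(np.fft.ifft2(spec) * np.exp(-1j * phase1))

img = rng.random((64, 64))
p1 = 2 * np.pi * rng.random((64, 64))
p2 = 2 * np.pi * rng.random((64, 64))
cipher = drpe_encrypt(img, p1, p2)
recovered = drpe_decrypt(cipher, p1, p2)
```

    A brute-force attack searches the key (mask) space; the paper's point is that a priori knowledge of the source-image class tells the attacker when a trial decryption has succeeded.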

  4. TRIPPy: Trailed Image Photometry in Python

    NASA Astrophysics Data System (ADS)

    Fraser, Wesley; Alexandersen, Mike; Schwamb, Megan E.; Marsset, Michaël; Pike, Rosemary E.; Kavelaars, J. J.; Bannister, Michele T.; Benecchi, Susan; Delsanti, Audrey

    2016-06-01

    Photometry of moving sources typically suffers from a reduced signal-to-noise ratio (S/N) or flux measurements biased to incorrectly low values through the use of circular apertures. To address this issue, we present the software package TRIPPy: TRailed Image Photometry in Python. TRIPPy introduces the pill aperture, the natural extension of the circular aperture appropriate for linearly trailed sources. The pill shape is a rectangle with two semicircular end-caps, described by three parameters: the trail length, the trail angle, and the radius. The TRIPPy software package also includes a new technique to generate accurate model point-spread functions (PSFs) and trailed PSFs (TSFs) from stationary background sources in sidereally tracked images. The TSF is merely the convolution of the model PSF, which consists of a Moffat profile and a super-sampled lookup table, with the line of the source's motion. From the TSF, accurate pill aperture corrections can be estimated as a function of pill radius, with an accuracy of 10 mmag for highly trailed sources. Analogous to the use of small circular apertures and associated aperture corrections, small-radius pill apertures can be used to preserve the S/N of low-flux sources, with the appropriate aperture correction applied to provide an accurate, unbiased flux measurement at all S/Ns.
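    The pill aperture geometry described above has a simple closed-form area: a rectangle of length equal to the trail and width twice the radius, plus two semicircular end-caps. A minimal sketch (not TRIPPy's API):

```python
import math

def pill_area(radius, trail_length):
    """Area of a pill aperture: a rectangle (trail_length x 2*radius)
    capped by two semicircles of the same radius."""
    return 2.0 * radius * trail_length + math.pi * radius ** 2

# A zero-length trail reduces to the ordinary circular aperture:
assert pill_area(3.0, 0.0) == math.pi * 9.0
```

    The trail angle enters only through the aperture's orientation on the image, not its area.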

  5. Atmospheric dispersion prediction and source estimation of hazardous gas using artificial neural network, particle swarm optimization and expectation maximization

    NASA Astrophysics Data System (ADS)

    Qiu, Sihang; Chen, Bin; Wang, Rongxiao; Zhu, Zhengqiu; Wang, Yuan; Qiu, Xiaogang

    2018-04-01

    Hazardous gas leak accidents pose a potential threat to human beings. Predicting atmospheric dispersion and estimating its source are increasingly important in emergency management. Current dispersion prediction and source estimation models cannot satisfy the requirements of emergency management because they do not offer high efficiency and accuracy at the same time. In this paper, we develop a fast and accurate dispersion prediction and source estimation method based on an artificial neural network (ANN), particle swarm optimization (PSO) and expectation maximization (EM). The method uses a large number of pre-determined scenarios to train the ANN for dispersion prediction, so that the ANN can predict the concentration distribution accurately and efficiently. PSO and EM are applied to estimate the source parameters, which effectively accelerates convergence. The method is verified against the Indianapolis field study with an SF6 release source. The results demonstrate the effectiveness of the method.
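    As a sketch of the estimation step, the snippet below uses a minimal particle swarm to recover two source parameters (strength and location) of a hypothetical one-dimensional Gaussian plume from synthetic observations. The plume model, parameter bounds, and PSO coefficients are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 1-D "plume": c(x) = Q * exp(-(x - x0)^2 / (2 s^2)).
# Q (release strength) and x0 (source location) are the unknowns;
# the spread s is treated as known for simplicity.
S = 1.5
def plume(params, x):
    q, x0 = params
    return q * np.exp(-(x - x0) ** 2 / (2 * S ** 2))

x_obs = np.linspace(-5, 10, 40)
truth = np.array([5.0, 2.0])
c_obs = plume(truth, x_obs)

def misfit(params):
    """Sum of squared differences between model and observations."""
    return np.sum((plume(params, x_obs) - c_obs) ** 2)

# Minimal PSO: inertia + personal-best + global-best velocity update.
n, iters = 30, 200
lo, hi = np.array([0.0, -5.0]), np.array([10.0, 10.0])
pos = lo + (hi - lo) * rng.random((n, 2))
vel = np.zeros((n, 2))
pbest = pos.copy()
pbest_f = np.array([misfit(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([misfit(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()
```

    In the paper this role is played by PSO and EM searching the parameter space of the ANN-based dispersion surrogate rather than an analytic plume.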

  6. Determination of Destress Blasting Effectiveness Using Seismic Source Parameters

    NASA Astrophysics Data System (ADS)

    Wojtecki, Łukasz; Mendecki, Maciej J.; Zuberek, Wacaław M.

    2017-12-01

    Underground mining of coal seams in the Upper Silesian Coal Basin is currently performed under difficult geological and mining conditions. The mining depth, dislocations (faults and folds) and mining remnants are primarily responsible for the rockburst hazard. This hazard can be minimized by active rockburst prevention, in which destress blasting plays an important role. Destress blastings in coal seams aim to relieve local stress concentrations; they are usually performed from the longwall face to decrease the stress level ahead of the longwall. An accurate estimation of the effectiveness of active rockburst prevention is important when mining under disadvantageous geological and mining conditions that increase the risk of rockburst. Seismic source parameters characterize the tremor focus and may therefore be useful in estimating the effects of destress blasting. The destress blastings investigated here were performed in coal seam no. 507 during its longwall mining in one of the coal mines of the Upper Silesian Coal Basin, under difficult geological and mining conditions, and the seismic source parameters of the provoked tremors were calculated. These preliminary investigations enable a rapid estimation of destress blasting effectiveness using seismic source parameters, but further analysis under other geological and mining conditions and with other blasting parameters is required.
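    Seismic source parameters of the kind used here are typically derived from the low-frequency plateau of the displacement spectrum. Below is a hedged sketch of the standard Brune-type conversion from spectral plateau to seismic moment and moment magnitude; the density, shear-wave velocity, hypocentral distance, radiation-pattern and free-surface values are illustrative placeholders, not the paper's data.

```python
import math

def seismic_moment(omega0, rho=2700.0, beta=3500.0, r=500.0,
                   rad_pattern=0.63, fs=2.0):
    """Seismic moment (N*m) from the low-frequency displacement
    spectral plateau omega0 (m*s), Brune-type point-source model:
    M0 = 4*pi*rho*beta^3*r*omega0 / (rad_pattern * fs)."""
    return 4.0 * math.pi * rho * beta ** 3 * r * omega0 / (rad_pattern * fs)

def moment_magnitude(m0):
    """Hanks & Kanamori moment magnitude, with M0 in N*m."""
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# Illustrative mining-tremor-sized plateau value:
m0 = seismic_moment(omega0=2e-6)
mw = moment_magnitude(m0)
```

    Corner frequency and stress drop follow from the same spectra via the Brune relations once the plateau and corner are fitted.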

  7. Simultaneous Position, Velocity, Attitude, Angular Rates, and Surface Parameter Estimation Using Astrometric and Photometric Observations

    DTIC Science & Technology

    2013-07-01

    Additionally, a physically consistent BRDF and radiation pressure model is utilized, enabling an accurate physical link between the observed photometric brightness and the attitudinal state. The halfway vector between the illumination source and the observer is Ĥ = (L̂ + V̂)/|L̂ + V̂| (Eq. 2), with angles α and β measured from N̂, and is used in many analytic BRDF models.

  8. OpenMC In Situ Source Convergence Detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aldrich, Garrett Allen; Dutta, Soumya; Woodring, Jonathan Lee

    2016-05-07

    We designed and implemented an in situ version of particle source convergence detection for the OpenMC particle transport simulator. OpenMC is a Monte Carlo-based particle simulator for neutron criticality calculations. For the transport simulation to be accurate, the source particles must converge on a spatial distribution. Typically, the simulation is iterated for a user-set, fixed number of steps, and convergence is assumed to have been achieved. We instead implement a method to detect convergence, using a stochastic oscillator to identify convergence of the source particles based on their accumulated Shannon entropy. Using our in situ convergence detection, we are able to detect when the proper source distribution has been reached and only then begin tallying results for the full simulation. Our method ensures that the simulation is neither started too early, by a user setting overly optimistic parameters, nor too late, by setting overly conservative ones.
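    The Shannon-entropy diagnostic mentioned above can be sketched in a few lines: bin the fission source sites on a spatial mesh, compute the entropy of the binned distribution each batch, and declare convergence once the entropy series stabilizes. The stabilization test below is a crude stand-in for the paper's stochastic-oscillator criterion.

```python
import numpy as np

def shannon_entropy(counts):
    """Shannon entropy (bits) of a source-site distribution binned on
    a spatial mesh; empty bins contribute nothing."""
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def converged(entropies, window=5, tol=1e-3):
    """Simplified convergence test: the entropy has varied by less
    than tol over the last `window` batches."""
    if len(entropies) < window:
        return False
    recent = entropies[-window:]
    return (max(recent) - min(recent)) < tol

# A uniform distribution over 8 mesh bins has entropy log2(8) = 3 bits.
h_uniform = shannon_entropy(np.ones(8))
```

    In OpenMC the per-batch entropy is computed on a user-defined mesh; a flat entropy trace indicates the source shape has stopped evolving.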

  9. Parameter estimation for compact binary coalescence signals with the first generation gravitational-wave detector network

    NASA Astrophysics Data System (ADS)

    Aasi, J.; Abadie, J.; Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M.; Accadia, T.; Acernese, F.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Ajith, P.; Allen, B.; Allocca, A.; Amador Ceron, E.; Amariutei, D.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Araya, M. C.; Ast, S.; Aston, S. M.; Astone, P.; Atkinson, D.; Aufmuth, P.; Aulbert, C.; Aylott, B. E.; Babak, S.; Baker, P.; Ballardin, G.; Ballmer, S.; Bao, Y.; Barayoga, J. C. B.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barton, M. A.; Bartos, I.; Bassiri, R.; Bastarrika, M.; Basti, A.; Batch, J.; Bauchrowitz, J.; Bauer, Th. S.; Bebronne, M.; Beck, D.; Behnke, B.; Bejger, M.; Beker, M. G.; Bell, A. S.; Bell, C.; Belopolski, I.; Benacquista, M.; Berliner, J. M.; Bertolini, A.; Betzwieser, J.; Beveridge, N.; Beyersdorf, P. T.; Bhadbade, T.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Biswas, R.; Bitossi, M.; Bizouard, M. A.; Black, E.; Blackburn, J. K.; Blackburn, L.; Blair, D.; Bland, B.; Blom, M.; Bock, O.; Bodiya, T. P.; Bogan, C.; Bond, C.; Bondarescu, R.; Bondu, F.; Bonelli, L.; Bonnand, R.; Bork, R.; Born, M.; Boschi, V.; Bose, S.; Bosi, L.; Bouhou, B.; Braccini, S.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Breyer, J.; Briant, T.; Bridges, D. O.; Brillet, A.; Brinkmann, M.; Brisson, V.; Britzger, M.; Brooks, A. F.; Brown, D. A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Burguet–Castell, J.; Buskulic, D.; Buy, C.; Byer, R. L.; Cadonati, L.; Cagnoli, G.; Calloni, E.; Camp, J. B.; Campsie, P.; Cannon, K.; Canuel, B.; Cao, J.; Capano, C. D.; Carbognani, F.; Carbone, L.; Caride, S.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C.; Cesarini, E.; Chalermsongsak, T.; Charlton, P.; Chassande-Mottin, E.; Chen, W.; Chen, X.; Chen, Y.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Chow, J.; Christensen, N.; Chua, S. S. Y.; Chung, C. T. Y.; Chung, S.; Ciani, G.; Clara, F.; Clark, D. 
E.; Clark, J. A.; Clayton, J. H.; Cleva, F.; Coccia, E.; Cohadon, P.-F.; Colacino, C. N.; Colla, A.; Colombini, M.; Conte, A.; Conte, R.; Cook, D.; Corbitt, T. R.; Cordier, M.; Cornish, N.; Corsi, A.; Costa, C. A.; Coughlin, M.; Coulon, J.-P.; Couvares, P.; Coward, D. M.; Cowart, M.; Coyne, D. C.; Creighton, J. D. E.; Creighton, T. D.; Cruise, A. M.; Cumming, A.; Cunningham, L.; Cuoco, E.; Cutler, R. M.; Dahl, K.; Damjanic, M.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Dattilo, V.; Daudert, B.; Daveloza, H.; Davier, M.; Daw, E. J.; Dayanga, T.; De Rosa, R.; DeBra, D.; Debreczeni, G.; Degallaix, J.; Del Pozzo, W.; Dent, T.; Dergachev, V.; DeRosa, R.; Dhurandhar, S.; Di Fiore, L.; Di Lieto, A.; Di Palma, I.; Di Paolo Emilio, M.; Di Virgilio, A.; Díaz, M.; Dietz, A.; Donovan, F.; Dooley, K. L.; Doravari, S.; Dorsher, S.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Dumas, J.-C.; Dwyer, S.; Eberle, T.; Edgar, M.; Edwards, M.; Effler, A.; Ehrens, P.; Endrőczi, G.; Engel, R.; Etzel, T.; Evans, K.; Evans, M.; Evans, T.; Factourovich, M.; Fafone, V.; Fairhurst, S.; Farr, B. F.; Farr, W. M.; Favata, M.; Fazi, D.; Fehrmann, H.; Feldbaum, D.; Feroz, F.; Ferrante, I.; Ferrini, F.; Fidecaro, F.; Finn, L. S.; Fiori, I.; Fisher, R. P.; Flaminio, R.; Foley, S.; Forsi, E.; Forte, L. A.; Fotopoulos, N.; Fournier, J.-D.; Franc, J.; Franco, S.; Frasca, S.; Frasconi, F.; Frede, M.; Frei, M. A.; Frei, Z.; Freise, A.; Frey, R.; Fricke, T. T.; Friedrich, D.; Fritschel, P.; Frolov, V. V.; Fujimoto, M.-K.; Fulda, P. J.; Fyffe, M.; Gair, J.; Galimberti, M.; Gammaitoni, L.; Garcia, J.; Garufi, F.; Gáspár, M. E.; Gelencser, G.; Gemme, G.; Genin, E.; Gennai, A.; Gergely, L. Á.; Ghosh, S.; Giaime, J. A.; Giampanis, S.; Giardina, K. D.; Giazotto, A.; Gil-Casanova, S.; Gill, C.; Gleason, J.; Goetz, E.; González, G.; Gorodetsky, M. L.; Goßler, S.; Gouaty, R.; Graef, C.; Graff, P. B.; Granata, M.; Grant, A.; Gray, C.; Greenhalgh, R. J. S.; Gretarsson, A. 
M.; Griffo, C.; Grote, H.; Grover, K.; Grunewald, S.; Guidi, G. M.; Guido, C.; Gupta, R.; Gustafson, E. K.; Gustafson, R.; Hallam, J. M.; Hammer, D.; Hammond, G.; Hanks, J.; Hanna, C.; Hanson, J.; Harms, J.; Harry, G. M.; Harry, I. W.; Harstad, E. D.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Hayama, K.; Hayau, J.-F.; Heefner, J.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M. A.; Heng, I. S.; Heptonstall, A. W.; Herrera, V.; Heurs, M.; Hewitson, M.; Hild, S.; Hoak, D.; Hodge, K. A.; Holt, K.; Holtrop, M.; Hong, T.; Hooper, S.; Hough, J.; Howell, E. J.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Ingram, D. R.; Inta, R.; Isogai, T.; Ivanov, A.; Izumi, K.; Jacobson, M.; James, E.; Jang, Y. J.; Jaranowski, P.; Jesse, E.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; Kalmus, P.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Kasprzack, M.; Kasturi, R.; Katsavounidis, E.; Katzman, W.; Kaufer, H.; Kaufman, K.; Kawabe, K.; Kawamura, S.; Kawazoe, F.; Keitel, D.; Kelley, D.; Kells, W.; Keppel, D. G.; Keresztes, Z.; Khalaidovski, A.; Khalili, F. Y.; Khazanov, E. A.; Kim, B. K.; Kim, C.; Kim, H.; Kim, K.; Kim, N.; Kim, Y. M.; King, P. J.; Kinzel, D. L.; Kissel, J. S.; Klimenko, S.; Kline, J.; Kokeyama, K.; Kondrashov, V.; Koranda, S.; Korth, W. Z.; Kowalska, I.; Kozak, D.; Kringel, V.; Krishnan, B.; Królak, A.; Kuehn, G.; Kumar, P.; Kumar, R.; Kurdyumov, R.; Kwee, P.; Lam, P. K.; Landry, M.; Langley, A.; Lantz, B.; Lastzka, N.; Lawrie, C.; Lazzarini, A.; Le Roux, A.; Leaci, P.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Leong, J. R.; Leonor, I.; Leroy, N.; Letendre, N.; Lhuillier, V.; Li, J.; Li, T. G. F.; Lindquist, P. E.; Litvine, V.; Liu, Y.; Liu, Z.; Lockerbie, N. A.; Lodhia, D.; Logue, J.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J.; Lubinski, M.; Lück, H.; Lundgren, A. P.; Macarthur, J.; Macdonald, E.; Machenschalk, B.; MacInnis, M.; Macleod, D. 
M.; Mageswaran, M.; Mailand, K.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandel, I.; Mandic, V.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A.; Maros, E.; Marque, J.; Martelli, F.; Martin, I. W.; Martin, R. M.; Marx, J. N.; Mason, K.; Masserot, A.; Matichard, F.; Matone, L.; Matzner, R. A.; Mavalvala, N.; Mazzolo, G.; McCarthy, R.; McClelland, D. E.; McGuire, S. C.; McIntyre, G.; McIver, J.; Meadors, G. D.; Mehmet, M.; Meier, T.; Melatos, A.; Melissinos, A. C.; Mendell, G.; Menéndez, D. F.; Mercer, R. A.; Meshkov, S.; Messenger, C.; Meyer, M. S.; Miao, H.; Michel, C.; Milano, L.; Miller, J.; Minenkov, Y.; Mingarelli, C. M. F.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moe, B.; Mohan, M.; Mohapatra, S. R. P.; Moraru, D.; Moreno, G.; Morgado, N.; Morgia, A.; Mori, T.; Morriss, S. R.; Mosca, S.; Mossavi, K.; Mours, B.; Mow–Lowry, C. M.; Mueller, C. L.; Mueller, G.; Mukherjee, S.; Mullavey, A.; Müller-Ebhardt, H.; Munch, J.; Murphy, D.; Murray, P. G.; Mytidis, A.; Nash, T.; Naticchioni, L.; Necula, V.; Nelson, J.; Neri, I.; Newton, G.; Nguyen, T.; Nishizawa, A.; Nitz, A.; Nocera, F.; Nolting, D.; Normandin, M. E.; Nuttall, L.; Ochsner, E.; O'Dell, J.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Oldenberg, R. G.; O'Reilly, B.; O'Shaughnessy, R.; Osthelder, C.; Ott, C. D.; Ottaway, D. J.; Ottens, R. S.; Overmier, H.; Owen, B. J.; Page, A.; Palladino, L.; Palomba, C.; Pan, Y.; Pankow, C.; Paoletti, F.; Paoletti, R.; Papa, M. A.; Parisi, M.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Pedraza, M.; Penn, S.; Perreca, A.; Persichetti, G.; Phelps, M.; Pichot, M.; Pickenpack, M.; Piergiovanni, F.; Pierro, V.; Pihlaja, M.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Pletsch, H. J.; Plissi, M. V.; Poggiani, R.; Pöld, J.; Postiglione, F.; Poux, C.; Prato, M.; Predoi, V.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prodi, G. A.; Prokhorov, L. 
G.; Puncken, O.; Punturo, M.; Puppo, P.; Quetschke, V.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Rácz, I.; Radkins, H.; Raffai, P.; Rakhmanov, M.; Ramet, C.; Rankins, B.; Rapagnani, P.; Raymond, V.; Re, V.; Reed, C. M.; Reed, T.; Regimbau, T.; Reid, S.; Reitze, D. H.; Ricci, F.; Riesen, R.; Riles, K.; Roberts, M.; Robertson, N. A.; Robinet, F.; Robinson, C.; Robinson, E. L.; Rocchi, A.; Roddy, S.; Rodriguez, C.; Rodruck, M.; Rolland, L.; Rollins, J. G.; Romano, R.; Romie, J. H.; Rosińska, D.; Röver, C.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Salemi, F.; Sammut, L.; Sandberg, V.; Sankar, S.; Sannibale, V.; Santamaría, L.; Santiago-Prieto, I.; Santostasi, G.; Saracco, E.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Savage, R. L.; Schilling, R.; Schnabel, R.; Schofield, R. M. S.; Schulz, B.; Schutz, B. F.; Schwinberg, P.; Scott, J.; Scott, S. M.; Seifert, F.; Sellers, D.; Sentenac, D.; Sergeev, A.; Shaddock, D. A.; Shaltev, M.; Shapiro, B.; Shawhan, P.; Shoemaker, D. H.; Sidery, T. L.; Siemens, X.; Sigg, D.; Simakov, D.; Singer, A.; Singer, L.; Sintes, A. M.; Skelton, G. R.; Slagmolen, B. J. J.; Slutsky, J.; Smith, J. R.; Smith, M. R.; Smith, R. J. E.; Smith-Lefebvre, N. D.; Somiya, K.; Sorazu, B.; Speirits, F. C.; Sperandio, L.; Stefszky, M.; Steinert, E.; Steinlechner, J.; Steinlechner, S.; Steplewski, S.; Stochino, A.; Stone, R.; Strain, K. A.; Strigin, S. E.; Stroeer, A. S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sung, M.; Susmithan, S.; Sutton, P. J.; Swinkels, B.; Szeifert, G.; Tacca, M.; Taffarello, L.; Talukder, D.; Tanner, D. B.; Tarabrin, S. P.; Taylor, R.; ter Braack, A. P. M.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Thüring, A.; Titsler, C.; Tokmakov, K. V.; Tomlinson, C.; Toncelli, A.; Tonelli, M.; Torre, O.; Torres, C. V.; Torrie, C. I.; Tournefier, E.; Travasso, F.; Traylor, G.; Tse, M.; Ugolini, D.; Vahlbruch, H.; Vajente, G.; van den Brand, J. F. J.; Van Den Broeck, C.; van der Putten, S.; van Veggel, A. 
A.; Vass, S.; Vasuth, M.; Vaulin, R.; Vavoulidis, M.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Verkindt, D.; Vetrano, F.; Viceré, A.; Villar, A. E.; Vinet, J.-Y.; Vitale, S.; Vocca, H.; Vorvick, C.; Vyatchanin, S. P.; Wade, A.; Wade, L.; Wade, M.; Waldman, S. J.; Wallace, L.; Wan, Y.; Wang, M.; Wang, X.; Wanner, A.; Ward, R. L.; Was, M.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Welborn, T.; Wen, L.; Wessels, P.; West, M.; Westphal, T.; Wette, K.; Whelan, J. T.; Whitcomb, S. E.; White, D. J.; Whiting, B. F.; Wiesner, K.; Wilkinson, C.; Willems, P. A.; Williams, L.; Williams, R.; Willke, B.; Wimmer, M.; Winkelmann, L.; Winkler, W.; Wipf, C. C.; Wiseman, A. G.; Wittel, H.; Woan, G.; Wooley, R.; Worden, J.; Yablon, J.; Yakushin, I.; Yamamoto, H.; Yamamoto, K.; Yancey, C. C.; Yang, H.; Yeaton-Massey, D.; Yoshida, S.; Yvert, M.; Zadrożny, A.; Zanolin, M.; Zendri, J.-P.; Zhang, F.; Zhang, L.; Zhao, C.; Zotov, N.; Zucker, M. E.; Zweizig, J.

    2013-09-01

    Compact binary systems with neutron stars or black holes are one of the most promising sources for ground-based gravitational-wave detectors. Gravitational radiation encodes rich information about source physics; thus parameter estimation and model selection are crucial analysis steps for any detection candidate events. Detailed models of the anticipated waveforms enable inference on several parameters, such as component masses, spins, sky location and distance, that are essential for new astrophysical studies of these sources. However, accurate measurements of these parameters and discrimination of models describing the underlying physics are complicated by artifacts in the data, uncertainties in the waveform models and in the calibration of the detectors. Here we report such measurements on a selection of simulated signals added either in hardware or software to the data collected by the two LIGO instruments and the Virgo detector during their most recent joint science run, including a “blind injection” where the signal was not initially revealed to the collaboration. We exemplify the ability to extract information about the source physics on signals that cover the neutron-star and black-hole binary parameter space over the component mass range 1M⊙-25M⊙ and the full range of spin parameters. The cases reported in this study provide a snapshot of the status of parameter estimation in preparation for the operation of advanced detectors.
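    Of the component-mass parameters mentioned above, the best-measured combination is the chirp mass, which controls the leading-order frequency evolution of the inspiral signal; the standard formula is:

```python
def chirp_mass(m1, m2):
    """Chirp mass (same units as the inputs); it sets the
    leading-order phase evolution of a compact-binary inspiral."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

# Canonical 1.4 + 1.4 solar-mass binary neutron star:
mc = chirp_mass(1.4, 1.4)
```

    For equal masses the chirp mass reduces to m / 2^(1/5), about 0.87 of the component mass.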

  10. A simplified approach to characterizing a kilovoltage source spectrum for accurate dose computation.

    PubMed

    Poirier, Yannick; Kouznetsov, Alexei; Tambasco, Mauro

    2012-06-01

    To investigate and validate the clinical feasibility of using half-value layer (HVL) and peak tube potential (kVp) for characterizing a kilovoltage (kV) source spectrum for the purpose of computing kV x-ray dose accrued from imaging procedures. To use this approach to characterize a Varian® On-Board Imager® (OBI) source and perform experimental validation of a novel in-house hybrid dose computation algorithm for kV x-rays. We characterized the spectrum of an imaging kV x-ray source using the HVL and the kVp as the sole beam quality identifiers using third-party freeware Spektr to generate the spectra. We studied the sensitivity of our dose computation algorithm to uncertainties in the beam's HVL and kVp by systematically varying these spectral parameters. To validate our approach experimentally, we characterized the spectrum of a Varian® OBI system by measuring the HVL using a Farmer-type Capintec ion chamber (0.06 cc) in air and compared dose calculations using our computationally validated in-house kV dose calculation code to measured percent depth-dose and transverse dose profiles for 80, 100, and 125 kVp open beams in a homogeneous phantom and a heterogeneous phantom comprising tissue, lung, and bone equivalent materials. The sensitivity analysis of the beam quality parameters (i.e., HVL, kVp, and field size) on dose computation accuracy shows that typical measurement uncertainties in the HVL and kVp (±0.2 mm Al and ±2 kVp, respectively) source characterization parameters lead to dose computation errors of less than 2%. Furthermore, for an open beam with no added filtration, HVL variations affect dose computation accuracy by less than 1% for a 125 kVp beam when field size is varied from 5 × 5 cm(2) to 40 × 40 cm(2). The central axis depth dose calculations and experimental measurements for the 80, 100, and 125 kVp energies agreed within 2% for the homogeneous and heterogeneous block phantoms, and agreement for the transverse dose profiles was within 6%. 
The HVL and kVp are sufficient for characterizing a kV x-ray source spectrum for accurate dose computation. As these parameters can be easily and accurately measured, they provide for a clinically feasible approach to characterizing a kV energy spectrum to be used for patient specific x-ray dose computations. Furthermore, these results provide experimental validation of our novel hybrid dose computation algorithm. © 2012 American Association of Physicists in Medicine.
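    For a monoenergetic beam the HVL follows in closed form from exponential attenuation, which is the idea behind using the measured HVL as a beam-quality identifier; for a polyenergetic kV spectrum it must be found iteratively. A minimal sketch with a hypothetical attenuation coefficient:

```python
import math

# Transmission through an aluminum filter of thickness t is
# T(t) = exp(-mu * t), so HVL = ln(2) / mu for a monoenergetic beam.
# The coefficient below is an illustrative placeholder, not measured data.
MU_AL = 0.46  # 1/mm, hypothetical linear attenuation coefficient

def transmission(t_mm, mu=MU_AL):
    return math.exp(-mu * t_mm)

def hvl(mu=MU_AL):
    return math.log(2.0) / mu

t_half = hvl()
```

    For a real spectrum, Spektr-style codes integrate the transmission over energy and solve numerically for the thickness that halves the air-kerma.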

  11. A fully Bayesian method for jointly fitting instrumental calibration and X-ray spectral models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Jin; Yu, Yaming; Van Dyk, David A.

    2014-10-20

    Owing to a lack of robust principled methods, systematic instrumental uncertainties have generally been ignored in astrophysical data analysis despite wide recognition of the importance of including them. Ignoring calibration uncertainty can bias the estimation of source model parameters and can lead to underestimation of the variance of these estimates. We previously introduced a pragmatic Bayesian method to address this problem. The method is 'pragmatic' in that it introduced an ad hoc technique that simplified computation by neglecting the potential information in the data for narrowing the uncertainty of the calibration product. Following that work, we use a principal component analysis to efficiently represent the uncertainty in the effective area of an X-ray (or γ-ray) telescope. Here, however, we leverage this representation to enable a principled, fully Bayesian method that coherently accounts for calibration uncertainty in high-energy spectral analysis. In this setting, the method is compared with standard analysis techniques and with the pragmatic Bayesian method. The advantage of the fully Bayesian method is that it allows the data to inform not only the estimation of the source parameters but also the calibration product, here the effective area, conditional on the adopted spectral model. In this way, it can yield more accurate and efficient estimates of the source parameters along with valid estimates of their uncertainty. Provided that the source spectrum can be accurately described by a parameterized model, this method allows rigorous inference about the effective area by quantifying which possible curves are most consistent with the data.
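    The principal-component representation of effective-area uncertainty can be sketched with synthetic data: build an ensemble of perturbed curves, subtract the mean, and keep the leading singular vectors so that each calibration realization is summarized by a handful of coefficients. Everything below (curve shapes, perturbation model) is illustrative, not real calibration products.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic ensemble of "effective area" curves: a smooth baseline plus
# two correlated perturbation shapes with random amplitudes.
energy = np.linspace(0.3, 10.0, 200)
baseline = 500.0 * np.exp(-0.5 * ((np.log(energy) - 0.5) / 0.8) ** 2)
ensemble = (baseline
            + rng.normal(size=(1000, 1)) * 20.0 * np.sin(energy)
            + rng.normal(size=(1000, 1)) * 10.0 * np.cos(0.5 * energy))

# PCA via SVD of the mean-subtracted ensemble.
mean = ensemble.mean(axis=0)
u, s, vt = np.linalg.svd(ensemble - mean, full_matrices=False)

# Two components capture essentially all of the injected variance,
# so each curve is summarized by just two coefficients.
var_explained = (s[:2] ** 2).sum() / (s ** 2).sum()
coeffs = (ensemble - mean) @ vt[:2].T
recon = mean + coeffs @ vt[:2]
```

    In the fully Bayesian setting those few coefficients become sampled parameters alongside the spectral model, which is what makes the joint inference computationally tractable.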

  12. SU-E-T-284: Revisiting Reference Dosimetry for the Model S700 Axxent 50 kVp Electronic Brachytherapy Source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hiatt, JR; Rivard, MJ

    2014-06-01

    Purpose: The model S700 Axxent electronic brachytherapy source by Xoft was characterized in 2006 by Rivard et al. The source design was modified in 2006 to include a plastic centering insert at the source tip to more accurately position the anode. The objectives of the current study were to establish an accurate Monte Carlo source model for simulation purposes, to dosimetrically characterize the new source and obtain its TG-43 brachytherapy dosimetry parameters, and to determine dose differences between the source with and without the centering insert. Methods: Design information from dissected sources and vendor-supplied CAD drawings were used to devise the source model for radiation transport simulations of dose distributions in a water phantom. Collision kerma was estimated as a function of radial distance, r, and polar angle, θ, for determination of reference TG-43 dosimetry parameters. Simulations were run for 10^10 histories, resulting in statistical uncertainties on the transverse plane of 0.03% at r=1 cm and 0.08% at r=10 cm. Results: The dose rate distribution in the transverse plane did not change by more than 2% between the 2006 model and the current study. While differences exceeding 15% were observed near the source distal tip, these diminished to within 2% for r>1.5 cm. Differences exceeding a factor of two were observed near θ=150° and in contact with the source, but diminished to within 20% at r=10 cm. Conclusions: Changes in source design influenced the overall dose rate and distribution by more than 2% over a third of the available solid angle external to the source. For clinical applications using balloons or applicators with tissue located within 5 cm of the source, dose differences exceeding 2% were observed only for θ>110°. This study carefully examined the current source geometry and presents a modern reference TG-43 dosimetry dataset for the model S700 source.
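    TG-43 dosimetry parameters of the kind reported here feed a standard dose-rate equation. The sketch below implements its one-dimensional (point-source) form with hypothetical tabulated values; these numbers are NOT the S700 dataset.

```python
import numpy as np

# 1-D (point-source) TG-43 dose-rate equation:
#   Ddot(r) = Sk * Lambda * (r0/r)**2 * g(r) * phi_an(r)
# g(r) and phi_an(r) below are hypothetical tabulated values.
r_tab = np.array([0.5, 1.0, 2.0, 3.0, 5.0])        # cm
g_tab = np.array([1.30, 1.00, 0.55, 0.32, 0.12])    # radial dose function
phi_tab = np.array([0.95, 0.94, 0.93, 0.92, 0.91])  # anisotropy factor

def dose_rate(r, sk=1.0, lam=1.0, r0=1.0):
    """Dose rate at radius r (cm) from air-kerma strength Sk and
    dose-rate constant Lambda, interpolating the tabulated functions."""
    g = np.interp(r, r_tab, g_tab)
    phi = np.interp(r, r_tab, phi_tab)
    return sk * lam * (r0 / r) ** 2 * g * phi

d1 = dose_rate(1.0)  # at the reference radius, Sk * Lambda * phi_an(1 cm)
```

    The full 2-D formalism replaces the inverse-square term with the line-source geometry function G(r, θ)/G(r0, θ0) and uses the 2-D anisotropy function F(r, θ).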

  13. A new qualitative acoustic emission parameter based on Shannon's entropy for damage monitoring

    NASA Astrophysics Data System (ADS)

    Chai, Mengyu; Zhang, Zaoxiao; Duan, Quan

    2018-02-01

    An important objective of acoustic emission (AE) non-destructive monitoring is to accurately identify approaching critical damage and to avoid premature failure by means of the evolutions of AE parameters. One major drawback of most parameters, such as count and rise time, is that they are strongly dependent on the threshold and other settings employed in the AE data acquisition system. This can prevent them from faithfully reflecting the original waveforms generated by AE sources and consequently makes it difficult to accurately identify critical damage and early failure. In this investigation, a new qualitative AE parameter based on Shannon's entropy, the AE entropy, is proposed for damage monitoring. Because it derives from the uncertainty of the amplitude distribution of each AE waveform, it is independent of the threshold and other time-driven parameters and can characterize the original micro-structural deformations. A fatigue crack growth test on a CrMoV steel and a three-point bending test on a ductile material were conducted to validate the feasibility and effectiveness of the proposed parameter. The results show that the new parameter, compared to the AE amplitude, is more effective in discriminating between different damage stages and identifying critical damage.
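    The proposed AE entropy can be read as the Shannon entropy of the amplitude histogram of each waveform. A minimal sketch follows; the bin count and the example signals are illustrative choices, not the paper's settings.

```python
import numpy as np

def ae_entropy(waveform, bins=64):
    """AE entropy: Shannon entropy (nats) of the amplitude
    distribution of a single AE waveform."""
    hist, _ = np.histogram(waveform, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(3)
gauss = rng.normal(size=4096)            # peaked amplitude distribution
uniform = np.linspace(-1.0, 1.0, 4096)   # flat amplitude distribution
# A flatter amplitude distribution carries higher entropy, which is how
# the parameter distinguishes waveform character independently of the
# acquisition threshold.
```

    Because the histogram is taken over each waveform's own amplitude range, the measure responds to distribution shape rather than absolute signal level.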

  14. Correlated flux densities from VLBI observations with the DSN

    NASA Technical Reports Server (NTRS)

    Coker, R. F.

    1992-01-01

    Correlated flux densities of extragalactic radio sources in the very long baseline interferometry (VLBI) astrometric catalog are required for the VLBI tracking of Galileo, Mars Observer, and future missions. A system to produce correlated and total flux density catalogs was developed to meet these requirements. A correlated flux density catalog of 274 sources, accurate to about 20 percent, was derived from more than 5000 DSN VLBI observations at 2.3 GHz (S-band) and 8.4 GHz (X-band) using 43 VLBI radio reference frame experiments during the period 1989-1992. Various consistency checks were carried out to ensure the accuracy of the correlated flux densities. All observations were made on the California-Spain and California-Australia DSN baselines using the Mark 3 wideband data acquisition system. A total flux density catalog, accurate to about 20 percent, with data on 150 sources, was also created. Together, these catalogs can be used to predict source strengths to assist in the scheduling of VLBI tracking passes. In addition, for those sources with sufficient observations, a rough estimate of source structure parameters can be made.

  15. Fundamental Rotorcraft Acoustic Modeling From Experiments (FRAME)

    NASA Technical Reports Server (NTRS)

    Greenwood, Eric

    2011-01-01

    A new methodology is developed for constructing helicopter source noise models for use in mission-planning tools from experimental measurements of helicopter external noise radiation. The models are constructed by applying a parameter identification method to an assumed analytical model of the rotor harmonic noise sources. This new method allows individual rotor harmonic noise sources to be identified and characterized in terms of their individual non-dimensional governing parameters. The method is applied to both wind tunnel measurements and ground noise measurements of two-bladed rotors. The method is shown to match the parametric trends of main rotor harmonic noise, allowing accurate estimates of the dominant rotorcraft noise sources to be made for operating conditions based on a small number of measurements taken at different operating conditions. The ability of this method to estimate changes in noise radiation due to changes in ambient conditions is also demonstrated.

  16. Supernovae as probes of cosmic parameters: estimating the bias from under-dense lines of sight

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Busti, V.C.; Clarkson, C.; Holanda, R.F.L., E-mail: vinicius.busti@uct.ac.za, E-mail: holanda@uepb.edu.br, E-mail: chris.clarkson@uct.ac.za

    2013-11-01

    Correctly interpreting observations of sources such as type Ia supernovae (SNe Ia) requires knowledge of the power spectrum of matter on AU scales, which is very hard to model accurately. Because under-dense regions account for much of the volume of the universe, light from a typical source probes a mean density significantly below the cosmic mean. The relative sparsity of sources implies that there could be a significant bias when inferring distances of SNe Ia, and consequently a bias in cosmological parameter estimation. While the weak lensing approximation should in principle give the correct prediction for this, linear perturbation theory predicts an effectively infinite variance in the convergence for ultra-narrow beams. We attempt to quantify the effect typically under-dense lines of sight might have on parameter estimation by considering three alternative methods for estimating distances, in addition to the usual weak lensing approximation. We find that in each case this not only increases the errors in the inferred density parameters, but also introduces a bias in the posterior value.

  17. Effect of Different Solar Radiation Data Sources on the Variation of Techno-Economic Feasibility of PV Power System

    NASA Astrophysics Data System (ADS)

    Alghoul, M. A.; Ali, Amer; Kannanaikal, F. V.; Amin, N.; Aljaafar, A. A.; Kadhim, Mohammed; Sopian, K.

    2017-11-01

    The aim of this study is to evaluate the variation in the techno-economic feasibility of a PV power system under different solar radiation data sources. The HOMER simulation tool is used to predict the techno-economic feasibility parameters of a PV power system in Baghdad, Iraq (33.3128° N, 44.3615° E) as a case study. Four solar radiation data sources, different annual capacity shortage percentages (0, 2.5, 5, and 7.5), and a wide range of daily load profiles (10-100 kWh/day) are implemented. The analyzed techno-economic feasibility parameters are the COE ($/kWh), PV array power capacity (kW), PV electrical production (kWh/year), number of batteries, and battery lifetime (years). The main results of the study are as follows: (1) solar radiation from different data sources caused significant variation in the values of the techno-economic feasibility parameters; therefore, careful attention must be paid to ensure the use of accurate solar input data; (2) the average solar radiation across the different data sources can be recommended as a reasonable input; (3) as the size of the PV power system increases, the effect of the different solar radiation data sources increases and causes significant variation in the values of the techno-economic feasibility parameters.
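The COE reported by tools such as HOMER is a levelized cost: the total annualized cost divided by the annual energy served, with annualization done through a capital recovery factor. A minimal sketch of that arithmetic (the cost figures, discount rate, and project lifetime below are illustrative, not values from the study):

```python
def crf(i, n):
    """Capital recovery factor for discount rate i over n years."""
    return i * (1 + i) ** n / ((1 + i) ** n - 1)

def cost_of_energy(npc, i, n, annual_kwh):
    """Levelized cost of energy ($/kWh) from net present cost (NPC)."""
    return npc * crf(i, n) / annual_kwh

# Illustrative case: $50,000 NPC, 6% discount rate, 25-year project,
# serving a 40 kWh/day load (annual energy = 40 * 365 kWh)
coe = cost_of_energy(50_000, 0.06, 25, 40 * 365)
```

Because the NPC depends on the PV array and battery sizing, which in turn depend on the assumed solar resource, a biased solar radiation input propagates directly into the COE.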

  18. Deterministic Tectonic Origin Tsunami Hazard Analysis for the Eastern Mediterranean and its Connected Seas

    NASA Astrophysics Data System (ADS)

    Necmioglu, O.; Meral Ozel, N.

    2014-12-01

    Accurate earthquake source parameters are essential for any tsunami hazard assessment and mitigation, including early warning systems. The complex tectonic setting makes accurate a priori assumptions about earthquake source parameters difficult, and characterization of the faulting type is a challenge. Information on tsunamigenic sources is of crucial importance in the Eastern Mediterranean and its Connected Seas, especially considering the short arrival times and the lack of offshore sea-level measurements. In addition, the scientific community has had to abandon the paradigm of a ''maximum earthquake'' predictable from simple tectonic parameters (Ruff and Kanamori, 1980) in the wake of the 2004 Sumatra event (Okal, 2010), and one of the lessons learnt from the 2011 Tohoku event was that tsunami hazard maps may need to be prepared for infrequent gigantic earthquakes as well as for more frequent smaller-sized earthquakes (Satake, 2011). We have initiated an extensive modeling study to perform a deterministic tsunami hazard analysis for the Eastern Mediterranean and its Connected Seas. Characteristic earthquake source parameters (strike, dip, rake, depth, Mwmax) in each 0.5° x 0.5° bin for 0-40 km depth (total of 310 bins) and for 40-100 km depth (total of 92 bins) in the Eastern Mediterranean, Aegean and Black Sea region (30°N-48°N and 22°E-44°E) have been assigned from the harmonization of the available databases and previous studies. These parameters have been used as input for the deterministic tsunami hazard modeling. Nested tsunami simulations of 6 h duration with coarse (2 arc-min) and medium (1 arc-min) grid resolutions have been run at EC-JRC premises for the Black Sea and the Eastern and Central Mediterranean (30°N-41.5°N and 8°E-37°E) for each defined source, using the shallow water finite-difference SWAN code (Mader, 2004), for the magnitude range from 6.5 to the Mwmax defined for that bin with a Mw increment of 0.1. 
Results show that not only earthquakes resembling well-known historical events such as AD 365 or AD 1303 in the Hellenic Arc, but also earthquakes with lower magnitudes, contribute to the tsunami hazard in the study area.

  19. Laser magnetic resonance in supersonic plasmas - The rotational spectrum of SH(+)

    NASA Technical Reports Server (NTRS)

    Hovde, David C.; Saykally, Richard J.

    1987-01-01

    The rotational spectrum of SH(+) in its X 3Sigma(-) ground state was measured for v = 0 and v = 1 by laser magnetic resonance. Rotationally cold (Tr = 30 K), vibrationally excited (Tv = 3000 K) ions were generated in a corona-excited supersonic expansion. The use of this source to identify ion signals is described. Improved molecular parameters were obtained; term values are presented from which astrophysically important transitions may be calculated. Accurate hyperfine parameters for both vibrational levels were determined, and the vibrational dependence of the Fermi contact interaction was resolved. The hyperfine parameters agree well with recent many-body perturbation theory calculations.

  20. Decision & Management Tools for DNAPL Sites: Optimization of Chlorinated Solvent Source and Plume Remediation Considering Uncertainty

    DTIC Science & Technology

    2010-09-01

    differentiated between source codes and input/output files. The text makes references to a REMChlor-GoldSim model. The text also refers to the REMChlor...To the extent possible, the instructions should be accurate and precise. The documentation should differentiate between describing what is actually...Windows XP operating system Model Input Parameters. The input parameters were identical to those utilized and reported by CDM (See Table .I .from

  1. MARE2DEM: a 2-D inversion code for controlled-source electromagnetic and magnetotelluric data

    NASA Astrophysics Data System (ADS)

    Key, Kerry

    2016-10-01

    This work presents MARE2DEM, a freely available code for 2-D anisotropic inversion of magnetotelluric (MT) data and frequency-domain controlled-source electromagnetic (CSEM) data from onshore and offshore surveys. MARE2DEM parametrizes the inverse model using a grid of arbitrarily shaped polygons, where unstructured triangular or quadrilateral grids are typically used due to their ease of construction. Unstructured grids provide significantly more geometric flexibility and parameter efficiency than the structured rectangular grids commonly used by most other inversion codes. Transmitter and receiver components located on topographic slopes can be tilted parallel to the boundary so that the simulated electromagnetic fields accurately reproduce the real survey geometry. The forward solution is implemented with a goal-oriented adaptive finite-element method that automatically generates and refines unstructured triangular element grids that conform to the inversion parameter grid, ensuring accurate responses as the model conductivity changes. This dual-grid approach is significantly more efficient than the conventional use of a single grid for both the forward and inverse meshes since the more detailed finite-element meshes required for accurate responses do not increase the memory requirements of the inverse problem. Forward solutions are computed in parallel with a highly efficient scaling by partitioning the data into smaller independent modeling tasks consisting of subsets of the input frequencies, transmitters and receivers. Non-linear inversion is carried out with a new Occam inversion approach that requires fewer forward calls. Dense matrix operations are optimized for memory and parallel scalability using the ScaLAPACK parallel library. Free parameters can be bounded using a new non-linear transformation that leaves the transformed parameters nearly the same as the original parameters within the bounds, thereby reducing non-linear smoothing effects. 
Data balancing normalization weights for the joint inversion of two or more data sets encourages the inversion to fit each data type equally well. A synthetic joint inversion of marine CSEM and MT data illustrates the algorithm's performance and parallel scaling on up to 480 processing cores. CSEM inversion of data from the Middle America Trench offshore Nicaragua demonstrates a real world application. The source code and MATLAB interface tools are freely available at http://mare2dem.ucsd.edu.

  2. Line shape parameters of the 22-GHz water line for accurate modeling in atmospheric applications

    NASA Astrophysics Data System (ADS)

    Koshelev, M. A.; Golubiatnikov, G. Yu.; Vilkov, I. N.; Tretyakov, M. Yu.

    2018-01-01

    The paper concerns refining the parameters of one of the major atmospheric diagnostic lines of water vapor, at 22 GHz. Two high-resolution microwave spectrometers, based on different principles of operation and together covering the pressure range from a few milliTorr up to a few Torr, were used. Special efforts were made to minimize possible sources of systematic measurement error. Satisfactory self-consistency of the obtained data was achieved, ensuring the reliability of the obtained parameters. Collisional broadening and shifting parameters of the line in pure water vapor and in its mixture with air were determined at room temperature. A comparative analysis of the obtained parameters against previous data is given. The impact of the speed-dependence effect on the line shape was also evaluated.
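As a rough illustration of how a collisional broadening parameter is extracted, the sketch below fits a simple Lorentzian profile to a synthetic 22 GHz line. The center frequency, half-width, and amplitude values are illustrative only, and the real analysis uses more refined line-shape models (e.g., including speed dependence of the collisional relaxation).

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, f0, gamma, a):
    """Collisionally broadened line: center f0, half-width gamma, amplitude a."""
    return a * gamma**2 / ((f - f0) ** 2 + gamma**2)

# Synthetic line near 22.235 GHz with an illustrative half-width (GHz)
true_f0, true_gamma = 22.235, 0.018
f = np.linspace(22.1, 22.4, 400)
y = lorentzian(f, true_f0, true_gamma, 1.0)

popt, _ = curve_fit(lorentzian, f, y, p0=[22.2, 0.01, 0.9])
f0_fit, gamma_fit, _ = popt
```

Repeating such fits across a range of pressures, the slope of half-width versus pressure gives the collisional broadening coefficient reported in studies like this one.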

  3. Improving the geological interpretation of magnetic and gravity satellite anomalies

    NASA Technical Reports Server (NTRS)

    Hinze, William J.; Braile, Lawrence W.; Vonfrese, Ralph R. B.

    1987-01-01

    Quantitative analysis of the geologic component of observed satellite magnetic and gravity fields requires accurate isolation of the geologic component of the observations, theoretically sound and viable inversion techniques, and integration of collateral, constraining geologic and geophysical data. A number of significant contributions were made which make quantitative analysis more accurate. These include procedures for: screening and processing orbital data for lithospheric signals based on signal repeatability and wavelength analysis; producing accurate gridded anomaly values at constant elevations from the orbital data by three-dimensional least squares collocation; increasing the stability of equivalent point source inversion and criteria for the selection of the optimum damping parameter; enhancing inversion techniques through an iterative procedure based on the superposition theorem of potential fields; and modeling efficiently regional-scale lithospheric sources of satellite magnetic anomalies. In addition, these techniques were utilized to investigate regional anomaly sources of North and South America and India and to provide constraints to continental reconstruction. Since the inception of this research study, eleven papers were presented with associated published abstracts, three theses were completed, four papers were published or accepted for publication, and an additional manuscript was submitted for publication.

  4. Blood: Tests Used to Assess the Physiological and Immunological Properties of Blood

    ERIC Educational Resources Information Center

    Quinn, J. G.; Tansey, E. A.; Johnson, C. D.; Roe, S. M.; Montgomery, L. E. A.

    2016-01-01

    The properties of blood and the relative ease with which it can be retrieved make it an ideal source for gauging different aspects of homeostasis within an individual, forming an accurate diagnosis, and formulating an appropriate treatment regime. Tests used to determine blood parameters such as the erythrocyte sedimentation rate, hemoglobin…

  5. Estimation of Source and Attenuation Parameters from Ground Motion Observations for Induced Seismicity in Alberta

    NASA Astrophysics Data System (ADS)

    Novakovic, M.; Atkinson, G. M.

    2015-12-01

    We use a generalized inversion to solve for site response, regional source and attenuation parameters, in order to define a region-specific ground-motion prediction equation (GMPE) from ground motion observations in Alberta, following the method of Atkinson et al. (2015 BSSA). The database is compiled from over 200 small to moderate seismic events (M 1 to 4.2) recorded at ~50 regional stations (distances from 30 to 500 km), over the last few years; almost all of the events have been identified as being induced by oil and gas activity. We remove magnitude scaling and geometric spreading functions from observed ground motions and invert for stress parameter, regional attenuation and site amplification. Resolving these parameters allows for the derivation of a regionally-calibrated GMPE that can be used to accurately predict amplitudes across the region in real time, which is useful for ground-motion-based alerting systems and traffic light protocols. The derived GMPE has further applications for the evaluation of hazards from induced seismicity.
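A minimal sketch of the attenuation part of such an inversion: after removing 1/R geometric spreading, the log spectral amplitude is linear in distance, so a regional quality factor Q can be recovered by linear regression. All values below are illustrative, not the Alberta results.

```python
import numpy as np

# Synthetic amplitudes following A(R) = A0 / R * exp(-pi*f*R / (Q*beta))
rng = np.random.default_rng(1)
f, beta = 5.0, 3.5          # frequency (Hz), shear velocity (km/s)
true_Q, A0 = 200.0, 1e-3
R = rng.uniform(30, 500, 60)  # hypocentral distances (km)
A = A0 / R * np.exp(-np.pi * f * R / (true_Q * beta))

# Remove geometric spreading: ln(A*R) = ln(A0) - (pi*f / (Q*beta)) * R,
# which is linear in R, so Q comes from the regression slope
y = np.log(A * R)
slope, intercept = np.polyfit(R, y, 1)
Q_est = -np.pi * f / (slope * beta)
```

In the real inversion, source, path, and site terms are solved for simultaneously across many events and stations, but the distance dependence of each record is governed by exactly this kind of spreading-plus-attenuation model.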

  6. Estimation of hyper-parameters for a hierarchical model of combined cortical and extra-brain current sources in the MEG inverse problem.

    PubMed

    Morishige, Ken-ichi; Yoshioka, Taku; Kawawaki, Dai; Hiroe, Nobuo; Sato, Masa-aki; Kawato, Mitsuo

    2014-11-01

    One of the major obstacles in estimating cortical currents from MEG signals is the disturbance caused by magnetic artifacts derived from extra-cortical current sources such as heartbeats and eye movements. To remove the effect of such extra-brain sources, we improved the hybrid hierarchical variational Bayesian method (hyVBED) proposed by Fujiwara et al. (NeuroImage, 2009). hyVBED simultaneously estimates cortical and extra-brain source currents by placing dipoles on cortical surfaces as well as extra-brain sources. This method requires EOG data for an EOG forward model that describes the relationship between eye dipoles and electric potentials. In contrast, our improved approach requires no EOG and less a priori knowledge about the current variance of extra-brain sources. We propose a new method, "extra-dipole," that optimally selects hyper-parameter values regarding current variances of the cortical surface and extra-brain source dipoles. With the selected parameter values, the cortical and extra-brain dipole currents were accurately estimated from the simulated MEG data. The performance of this method was demonstrated to be better than conventional approaches, such as principal component analysis and independent component analysis, which use only statistical properties of MEG signals. Furthermore, we applied our proposed method to measured MEG data during covert pursuit of a smoothly moving target and confirmed its effectiveness. Copyright © 2014 Elsevier Inc. All rights reserved.

  7. A Bayesian Network Based Global Sensitivity Analysis Method for Identifying Dominant Processes in a Multi-physics Model

    NASA Astrophysics Data System (ADS)

    Dai, H.; Chen, X.; Ye, M.; Song, X.; Zachara, J. M.

    2016-12-01

    Sensitivity analysis has been an important tool in groundwater modeling to identify the influential parameters. Among various sensitivity analysis methods, the variance-based global sensitivity analysis has gained popularity for its model independence characteristic and capability of providing accurate sensitivity measurements. However, the conventional variance-based method only considers uncertainty contribution of single model parameters. In this research, we extended the variance-based method to consider more uncertainty sources and developed a new framework to allow flexible combinations of different uncertainty components. We decompose the uncertainty sources into a hierarchical three-layer structure: scenario, model and parametric. Furthermore, each layer of uncertainty source is capable of containing multiple components. An uncertainty and sensitivity analysis framework was then constructed following this three-layer structure using Bayesian network. Different uncertainty components are represented as uncertain nodes in this network. Through the framework, variance-based sensitivity analysis can be implemented with great flexibility of using different grouping strategies for uncertainty components. The variance-based sensitivity analysis thus is improved to be able to investigate the importance of an extended range of uncertainty sources: scenario, model, and other different combinations of uncertainty components which can represent certain key model system processes (e.g., groundwater recharge process, flow reactive transport process). For test and demonstration purposes, the developed methodology was implemented into a test case of real-world groundwater reactive transport modeling with various uncertainty sources. The results demonstrate that the new sensitivity analysis method is able to estimate accurate importance measurements for any uncertainty sources which were formed by different combinations of uncertainty components. 
The new methodology can provide useful information for environmental management and decision-makers to formulate policies and strategies.
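A variance-based first-order (Sobol) sensitivity index can be estimated with a standard Monte Carlo pick-freeze scheme; the sketch below applies it to a toy model with analytically known indices. This is a generic illustration of the variance-based measure, not the Bayesian-network framework of the study.

```python
import numpy as np

def sobol_first_order(model, d, n=100_000, seed=0):
    """Monte Carlo pick-freeze estimate of first-order Sobol indices."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))
    B = rng.random((n, d))
    yA, yB = model(A), model(B)
    var = np.var(np.concatenate([yA, yB]))
    S = np.empty(d)
    for i in range(d):
        C = B.copy()
        C[:, i] = A[:, i]        # "freeze" input i from A into B
        yC = model(C)
        S[i] = np.mean(yA * (yC - yB)) / var
    return S

# Toy model Y = 2*X1 + X2 with X uniform on [0,1]:
# analytic indices are S1 = 4/5 and S2 = 1/5
S = sobol_first_order(lambda X: 2 * X[:, 0] + X[:, 1], d=2)
```

Grouping columns before the freeze step gives the index of a group of inputs rather than a single one, which is the flexibility the hierarchical framework exploits for scenario- and model-level uncertainty components.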

  8. Estimation of Nutation Time Constant Model Parameters for On-Axis Spinning Spacecraft

    NASA Technical Reports Server (NTRS)

    Schlee, Keith; Sudermann, James

    2008-01-01

    Calculating an accurate nutation time constant for a spinning spacecraft is an important step for ensuring mission success. Spacecraft nutation is caused by energy dissipation about the spin axis. Propellant slosh in the spacecraft fuel tanks is the primary source for this dissipation and can be simulated using a forced motion spin table. Mechanical analogs, such as pendulums and rotors, are typically used to simulate propellant slosh. A strong desire exists for an automated method to determine these analog parameters. The method presented accomplishes this task by using a MATLAB Simulink/SimMechanics based simulation that utilizes the Parameter Estimation Tool.

  9. Awakening the BALROG: BAyesian Location Reconstruction Of GRBs

    NASA Astrophysics Data System (ADS)

    Burgess, J. Michael; Yu, Hoi-Fung; Greiner, Jochen; Mortlock, Daniel J.

    2018-05-01

    The accurate spatial location of gamma-ray bursts (GRBs) is crucial for both accurately characterizing their spectra and follow-up observations by other instruments. The Fermi Gamma-ray Burst Monitor (GBM) has the largest field of view for detecting GRBs as it views the entire unocculted sky, but as a non-imaging instrument it relies on the relative count rates observed in each of its 14 detectors to localize transients. Improving its ability to accurately locate GRBs and other transients is vital to the paradigm of multimessenger astronomy, including the electromagnetic follow-up of gravitational wave signals. Here we present the BAyesian Location Reconstruction Of GRBs (BALROG) method for localizing and characterizing GBM transients. Our approach eliminates the systematics of previous approaches by simultaneously fitting for the location and spectrum of a source. It also correctly incorporates the uncertainties in the location of a transient into the spectral parameters and produces reliable positional uncertainties for both well-localized sources and those for which the GBM data cannot effectively constrain the position. While computationally expensive, BALROG can be implemented to enable quick follow-up of all GBM transient signals. Also, we identify possible response problems that require attention and caution when using standard, public GBM detector response matrices. Finally, we examine the effects of including the uncertainty in location on the spectral parameters of GRB 080916C. We find that spectral parameters change and no extra components are required when these effects are included in contrast to when we use a fixed location. This finding has the potential to alter both the GRB spectral catalogues and the reported spectral composition of some well-known GRBs.
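The core of such non-imaging localization is a likelihood over the relative count rates of the detectors. The toy sketch below recovers a source azimuth on a grid using a hypothetical cosine detector response and noise-free "observed" counts, with the flux held fixed; unlike BALROG, which fits location and spectrum jointly and uses the real GBM response matrices.

```python
import numpy as np

# Six hypothetical detector axes, evenly spaced in azimuth
det_az = np.radians(np.arange(0, 360, 60))
true_az, flux = np.radians(130.0), 500.0

def expected_counts(az, flux):
    """Counts peak when a detector axis faces the source; the floor
    stands in for scattered signal reaching back-facing detectors."""
    return flux * np.clip(np.cos(az - det_az), 0.05, None)

obs = expected_counts(true_az, flux)  # real data would be Poisson-distributed

# Grid posterior over azimuth (flat prior, Poisson log-likelihood)
az_grid = np.radians(np.arange(0.0, 360.0, 1.0))
log_post = np.array([np.sum(obs * np.log(expected_counts(a, flux))
                            - expected_counts(a, flux)) for a in az_grid])
az_map = np.degrees(az_grid[np.argmax(log_post)])
```

Fitting the spectrum simultaneously, as BALROG does, matters because the expected counts in each detector depend on the assumed spectral shape, coupling the location and spectral parameters.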

  10. Modeling noisy resonant system response

    NASA Astrophysics Data System (ADS)

    Weber, Patrick Thomas; Walrath, David Edwin

    2017-02-01

    In this paper, a theory-based model replicating empirical acoustic resonant signals is presented and studied to understand sources of noise present in acoustic signals. Statistical properties of empirical signals are quantified and a noise amplitude parameter, which models frequency and amplitude-based noise, is created, defined, and presented. This theory-driven model isolates each phenomenon and allows for parameters to be independently studied. Using seven independent degrees of freedom, this model will accurately reproduce qualitative and quantitative properties measured from laboratory data. Results are presented and demonstrate success in replicating qualitative and quantitative properties of experimental data.
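A minimal sketch of such a model: a decaying resonant signal plus an additive noise term scaled by a single noise amplitude parameter. The frequency, damping, and noise values below are illustrative, not the paper's seven-parameter model.

```python
import numpy as np

def resonant_response(t, f0, zeta, amp, noise_amp, rng):
    """Decaying resonance plus additive noise scaled by noise_amp."""
    envelope = amp * np.exp(-zeta * 2 * np.pi * f0 * t)
    clean = envelope * np.sin(2 * np.pi * f0 * t)
    return clean + noise_amp * rng.standard_normal(t.shape)

rng = np.random.default_rng(2)
t = np.linspace(0.0, 0.5, 5000)
noisy = resonant_response(t, f0=200.0, zeta=0.02, amp=1.0, noise_amp=0.05, rng=rng)
clean = resonant_response(t, f0=200.0, zeta=0.02, amp=1.0, noise_amp=0.0, rng=rng)

# The residual's standard deviation recovers the noise amplitude parameter
resid_std = float(np.std(noisy - clean))
```

Because each phenomenon enters through its own parameter, the statistical properties of laboratory signals (like the residual spread above) can be matched parameter by parameter.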

  11. Model-Data Fusion and Adaptive Sensing for Large Scale Systems: Applications to Atmospheric Release Incidents

    NASA Astrophysics Data System (ADS)

    Madankan, Reza

    All across the world, toxic material clouds emitted from sources such as industrial plants, vehicular traffic, and volcanic eruptions can contain chemical, biological or radiological material. With the growing fear of natural, accidental or deliberate release of toxic agents, there is tremendous interest in precise source characterization and generating accurate hazard maps of toxic material dispersion for appropriate disaster management. In this dissertation, an end-to-end framework has been developed for probabilistic source characterization and forecasting of atmospheric release incidents. The proposed methodology consists of three major components which are combined to perform the task of source characterization and forecasting: uncertainty quantification, optimal information collection, and data assimilation. Precise approximation of prior statistics is crucial to ensure the performance of the source characterization process. In this work, an efficient quadrature-based method has been utilized for quantification of uncertainty in plume dispersion models that are subject to uncertain source parameters. In addition, a fast and accurate approach is utilized for the approximation of probabilistic hazard maps, based on a combination of polynomial chaos theory and the method of quadrature points. Besides precise quantification of uncertainty, having useful measurement data is also highly important to guarantee accurate source parameter estimation. The performance of source characterization is highly affected by the sensor configuration used for data observation. Hence, a general framework has been developed for the optimal allocation of data observation sensors, to improve the performance of the source characterization process. The key goal of this framework is to optimally locate a set of mobile sensors such that the measurement of better data is guaranteed. 
This is achieved by maximizing the mutual information between model predictions and observed data, given a set of kinetic constraints on the mobile sensors. A dynamic programming method has been utilized to solve the resulting optimal control problem. To complete the loop of the source characterization process, two different estimation techniques, a minimum variance estimation framework and a Bayesian inference method, have been developed to fuse model forecasts with measurement data. Incomplete information regarding the distribution of the noise associated with measurement data is another major challenge in the source characterization of plume dispersion incidents. This frequently happens in the assimilation of atmospheric data from satellite imagery, since satellite imagery can be polluted with noise depending on weather conditions, clouds, humidity, etc. Unfortunately, there is no accurate procedure to quantify the error in recorded satellite data, so using classical data assimilation methods in this situation is not straightforward. In this dissertation, the basic idea of a novel approach has been proposed to tackle these types of real-world problems with more accuracy and robustness. A simple example demonstrating the real-world scenario is presented to validate the developed methodology.

  12. An experimental comparison of various methods of nearfield acoustic holography

    DOE PAGES

    Chelliah, Kanthasamy; Raman, Ganesh; Muehleisen, Ralph T.

    2017-05-19

    An experimental comparison of four different methods of nearfield acoustic holography (NAH) is presented in this study for planar acoustic sources. The four NAH methods considered are based on: (1) the spatial Fourier transform, (2) the equivalent sources model, (3) boundary element methods and (4) statistically optimized NAH. Two-dimensional measurements were obtained at different distances in front of a tonal sound source, and the NAH methods were used to reconstruct the sound field at the source surface. The reconstructed particle velocity and acoustic pressure fields presented in this study showed that the equivalent sources model based algorithm along with Tikhonov regularization provided the best localization of the sources. Reconstruction errors were found to be smaller for the equivalent sources model based algorithm and the statistically optimized NAH algorithm. The effect of hologram distance on the performance of the various algorithms is discussed in detail. The study also compares the computational time required by each algorithm. Four different regularization parameter choice methods were compared. The L-curve method provided more accurate reconstructions than generalized cross validation and the Morozov discrepancy principle. Finally, the performance of fixed-parameter regularization was comparable to that of the L-curve method.
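Tikhonov regularization as used here solves a damped least-squares problem. The sketch below illustrates, on a toy ill-conditioned forward operator standing in for the NAH propagation matrix, why a regularized reconstruction beats a naive inverse on noisy data; the operator, source layout, and regularization parameter are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50
# Smoothing (Gaussian) kernel as a toy ill-conditioned forward operator
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
G = np.exp(-0.5 * ((i - j) / 2.0) ** 2)

x_true = np.zeros(n)
x_true[15], x_true[35] = 1.0, -0.7               # two compact "sources"
b = G @ x_true + 1e-3 * rng.standard_normal(n)   # noisy "hologram" data

def tikhonov(G, b, lam):
    """Damped least squares: x = (G^T G + lam^2 I)^-1 G^T b."""
    return np.linalg.solve(G.T @ G + lam ** 2 * np.eye(G.shape[1]), G.T @ b)

x_naive = np.linalg.solve(G, b)   # unregularized: noise is amplified
x_reg = tikhonov(G, b, lam=0.05)
err_naive = np.linalg.norm(x_naive - x_true)
err_reg = np.linalg.norm(x_reg - x_true)
```

The parameter choice methods compared in the study (L-curve, generalized cross validation, discrepancy principle) are different recipes for selecting `lam`, which is fixed by hand in this sketch.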

  14. A new, high-precision measurement of the X-ray Cu K α spectrum

    NASA Astrophysics Data System (ADS)

    Mendenhall, Marcus H.; Cline, James P.; Henins, Albert; Hudson, Lawrence T.; Szabo, Csilla I.; Windover, Donald

    2016-03-01

    One of the primary measurement issues addressed with NIST Standard Reference Materials (SRMs) for powder diffraction is that of line position. SRMs for this purpose are certified with respect to lattice parameter, traceable to the SI through precise measurement of the emission spectrum of the X-ray source. Therefore, accurate characterization of the emission spectrum is critical to a minimization of the error bounds on the certified parameters. The presently accepted sources for the SI traceable characterization of the Cu K α emission spectrum are those of Härtwig, Hölzer et al., published in the 1990s. The structure of the X-ray emission lines of the Cu K α complex has been remeasured on a newly commissioned double-crystal instrument, with six-bounce Si (440) optics, in a manner directly traceable to the SI definition of the meter. In this measurement, the entire region from 8020 eV to 8100 eV has been covered with a highly precise angular scale and well-defined system efficiency, providing accurate wavelengths and relative intensities. This measurement is in modest disagreement with reference values for the wavelength of the Kα1 line, and strong disagreement for the wavelength of the Kα2 line.

  15. OpenCFU, a new free and open-source software to count cell colonies and other circular objects.

    PubMed

    Geissmann, Quentin

    2013-01-01

    Counting circular objects such as cell colonies is an important source of information for biologists. Although this task is often time-consuming and subjective, it is still predominantly performed manually. The aim of the present work is to provide a new tool to enumerate circular objects from digital pictures and video streams. Here, I demonstrate that the created program, OpenCFU, is very robust, accurate and fast. In addition, it provides control over the processing parameters and is implemented in an intuitive and modern interface. OpenCFU is a cross-platform and open-source software freely available at http://opencfu.sourceforge.net.
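As a minimal illustration of the counting task (not OpenCFU's actual algorithm, which also enforces circularity and exposes processing parameters), foreground regions in a thresholded image can be counted by connected-component labeling:

```python
def count_blobs(img, thresh):
    """Count 4-connected bright regions in a 2-D intensity grid."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for r in range(h):
        for c in range(w):
            if img[r][c] > thresh and not seen[r][c]:
                count += 1
                stack = [(r, c)]          # flood fill this component
                while stack:
                    y, x = stack.pop()
                    if (0 <= y < h and 0 <= x < w
                            and img[y][x] > thresh and not seen[y][x]):
                        seen[y][x] = True
                        stack.extend([(y + 1, x), (y - 1, x),
                                      (y, x + 1), (y, x - 1)])
    return count

# Three bright "colonies" on a dark background
plate = [[0] * 10 for _ in range(10)]
for (r, c) in [(2, 2), (2, 3), (3, 2), (6, 7), (8, 1)]:
    plate[r][c] = 255
n = count_blobs(plate, thresh=128)  # -> 3
```

Real colony images additionally require filtering touching or non-circular objects, which is where tools like OpenCFU go well beyond this sketch.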

  16. Effect of photon energy spectrum on dosimetric parameters of brachytherapy sources.

    PubMed

    Ghorbani, Mahdi; Mehrpouyan, Mohammad; Davenport, David; Ahmadi Moghaddas, Toktam

    2016-06-01

    The aim of this study is to quantify the influence of the photon energy spectrum of brachytherapy sources on task group No. 43 (TG-43) dosimetric parameters. Different photon spectra are used for a specific radionuclide in Monte Carlo simulations of brachytherapy sources. The MCNPX code was used to simulate 125I, 103Pd, 169Yb, and 192Ir brachytherapy sources. Air kerma strength per activity, dose rate constant, radial dose function, and two-dimensional (2D) anisotropy functions were calculated, and isodose curves were plotted for three different photon energy spectra. The references for the photon energy spectra were published papers, the Lawrence Berkeley National Laboratory (LBNL), and the National Nuclear Data Center (NNDC). The data calculated with these photon energy spectra were compared. Dose rate constant values showed a maximum difference of 24.07% for the 103Pd source with different photon energy spectra. Radial dose function values based on different spectra were relatively the same. 2D anisotropy function values showed minor differences at most distances and angles. There was no detectable difference between the isodose contours. Dosimetric parameters obtained with different photon spectra were relatively the same; however, it is suggested that more accurate and updated photon energy spectra be used in Monte Carlo simulations. This would allow for the calculation of reliable dosimetric data for source modeling and calculation in brachytherapy treatment planning systems.

  18. Source parameter inversion of compound earthquakes on GPU/CPU hybrid platform

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Ni, S.; Chen, W.

    2012-12-01

    Determining source parameters of earthquakes is an essential problem in seismology. Accurate and timely determination of earthquake parameters (such as moment, depth, and the strike, dip, and rake of the fault planes) is significant both for rupture dynamics and for ground motion prediction or simulation. Rupture process studies, especially for moderate and large earthquakes, have become routine work for seismologists as ever more detailed kinematic analyses are performed. Among these events, however, some behave very specially and intrigue seismologists: they consist of two sub-events of similar size separated by a very short time interval, such as the mb 4.5 earthquake of December 9, 2003 in Virginia. Studying these special events, including determining the source parameters of each sub-event, is helpful for understanding earthquake dynamics. However, the seismic signals of the two distinct sources are mixed together, which complicates the inversion. For ordinary events, the Cut and Paste (CAP) method has proven effective for resolving source parameters; it jointly uses body waves and surface waves with independent time shifts and weights, and resolves fault orientation and focal depth by a grid search algorithm. Based on this method, we developed an algorithm (MUL_CAP) to simultaneously acquire the parameters of two distinct sub-events. Because the simultaneous inversion of both sub-events is very time consuming, we also developed a hybrid GPU-CPU version of CAP (HYBRID_CAP) to improve computational efficiency.
    Thanks to the advantages of multidimensional storage and processing on the GPU, the revised code achieves excellent performance on the combined GPU-CPU architecture, with speedup factors as high as 40x-90x compared to classical CAP on a traditional CPU architecture. As a benchmark, we take synthetics as observations and invert for the source parameters of two given sub-events; the inversion results are very consistent with the true parameters. For the Virginia event of December 9, 2003, we re-invert the source parameters, and detailed analysis of regional waveforms indicates that the earthquake comprised two sub-events of Mw 4.05 and Mw 4.25 at the same depth of 10 km, with focal mechanism strike 65°/dip 32°/rake 135°, consistent with previous studies. Moreover, compared to the traditional two-source modeling approach, MUL_CAP is more automatic, requiring no human intervention.
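    The grid search at the core of CAP-style inversion can be illustrated with a minimal sketch. The forward model, parameter ranges, and misfit below are illustrative stand-ins, not the actual CAP implementation:

```python
import numpy as np

def misfit(observed, synthetic):
    """L2 waveform misfit between an observed and a synthetic trace."""
    return float(np.sum((observed - synthetic) ** 2))

def grid_search(observed, forward, strikes, dips, rakes):
    """Exhaustively search (strike, dip, rake), keeping the best-fitting triple.
    `forward` maps a mechanism to a synthetic waveform."""
    best, best_err = None, np.inf
    for s in strikes:
        for d in dips:
            for r in rakes:
                err = misfit(observed, forward(s, d, r))
                if err < best_err:
                    best, best_err = (s, d, r), err
    return best, best_err

# Toy forward model: a fixed wavelet scaled by a radiation-pattern-like factor.
t = np.linspace(0.0, 1.0, 100)
def toy_forward(strike, dip, rake):
    factor = (np.sin(np.radians(strike))
              * np.sin(np.radians(2 * dip))
              * np.cos(np.radians(rake)))
    return factor * np.sin(2 * np.pi * 5 * t)

obs = toy_forward(65, 32, 135)
best_mech, err = grid_search(obs, toy_forward,
                             range(0, 180, 5), range(10, 90, 2), range(-180, 180, 5))
# best_mech recovers the input mechanism, up to the symmetries of the toy factor
```

A real CAP run evaluates windowed body and surface waves separately with independent time shifts; the exhaustive loop structure is what the GPU version parallelizes.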

  19. A photon source model based on particle transport in a parameterized accelerator structure for Monte Carlo dose calculations.

    PubMed

    Ishizawa, Yoshiki; Dobashi, Suguru; Kadoya, Noriyuki; Ito, Kengo; Chiba, Takahito; Takayama, Yoshiki; Sato, Kiyokazu; Takeda, Ken

    2018-05-17

    An accurate source model of a medical linear accelerator is essential for Monte Carlo (MC) dose calculations. This study aims to propose an analytical photon source model based on particle transport in parameterized accelerator structures, focusing on a more realistic determination of linac photon spectra compared to existing approaches. We designed the primary and secondary photon sources based on the photons attenuated and scattered by a parameterized flattening filter. The primary photons were derived by attenuating bremsstrahlung photons based on the path length in the filter. In turn, the secondary photons were derived from the decrement of the primary photons in the attenuation process. This design lets the two sources share the free parameters of the filter shape and links them to each other through the photon interactions in the filter. We introduced two other parameters of the primary photon source to describe the particle fluence in penumbral regions. All the parameters are optimized based on calculated dose curves in water using the pencil-beam-based algorithm. To verify the modeling accuracy, we compared the proposed model with the phase space data (PSD) of the Varian TrueBeam 6 and 15 MV accelerators in terms of the beam characteristics and the dose distributions. The EGS5 Monte Carlo code was used to calculate the dose distributions associated with the optimized model and reference PSD in a homogeneous water phantom and a heterogeneous lung phantom. We calculated the percentage of points passing 1D and 2D gamma analysis with 1%/1 mm criteria for the dose curves and lateral dose distributions, respectively. The optimized model accurately reproduced the spectral curves of the reference PSD both on- and off-axis. The depth dose and lateral dose profiles of the optimized model also showed good agreement with those of the reference PSD.
    The passing rates of the 1D gamma analysis with 1%/1 mm criteria between the model and PSD were 100% for 4 × 4, 10 × 10, and 20 × 20 cm² fields at multiple depths. For the 2D dose distributions calculated in the heterogeneous lung phantom, the 2D gamma pass rate was 100% for 6 and 15 MV beams. The model optimization time was less than 4 min. The proposed source model optimization process accurately produces photon fluence spectra from a linac using valid physical properties, without detailed knowledge of the geometry of the linac head, and with minimal optimization time. © 2018 American Association of Physicists in Medicine.
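    The coupling between the primary and secondary sources can be sketched in a few lines. The filter geometry and attenuation coefficient below are hypothetical placeholders, not the paper's optimized parameters:

```python
import numpy as np

# Hypothetical cone-shaped flattening filter: thickest on-axis, thinning off-axis (cm).
def filter_thickness(r, t0=2.0, slope=0.15):
    return np.maximum(t0 - slope * np.abs(r), 0.0)

MU = 0.5  # assumed effective linear attenuation coefficient, 1/cm

def primary_weight(r):
    """Fraction of bremsstrahlung photons transmitted through the filter at
    off-axis distance r (Beer-Lambert attenuation along the path length)."""
    return np.exp(-MU * filter_thickness(r))

def secondary_weight(r):
    """Photons removed from the primary beam feed the scattered (secondary)
    source, so both sources share the same filter parameters."""
    return 1.0 - primary_weight(r)

r = np.linspace(-10.0, 10.0, 201)
p, s = primary_weight(r), secondary_weight(r)
# By construction p + s == 1 everywhere: optimizing the filter shape against
# measured dose curves constrains both sources at once.
```

The actual model additionally weights each component by the bremsstrahlung spectrum and the penumbra parameters; this sketch only shows why the two sources are not independent.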

  20. Combining Spitzer Parallax and Keck II Adaptive Optics Imaging to Measure the Mass of a Solar-like Star Orbited by a Cold Gaseous Planet Discovered by Microlensing

    NASA Astrophysics Data System (ADS)

    Beaulieu, J.-P.; Batista, V.; Bennett, D. P.; Marquette, J.-B.; Blackman, J. W.; Cole, A. A.; Coutures, C.; Danielski, C.; Dominis Prester, D.; Donatowicz, J.; Fukui, A.; Koshimoto, N.; Lončarić, K.; Morales, J. C.; Sumi, T.; Suzuki, D.; Henderson, C.; Shvartzvald, Y.; Beichman, C.

    2018-02-01

    To obtain accurate mass measurements for cold planets discovered by microlensing, it is usually necessary to combine light curve modeling with at least two lens mass–distance relations. The physical parameters of the planetary system OGLE-2014-BLG-0124L have been constrained thanks to the accurate parallax measured between ground-based and simultaneous space-based Spitzer observations. Here, we resolved the source+lens star from sub-arcsecond blends in the H band using adaptive optics (AO) observations with NIRC2 mounted on the Keck II telescope. We identify additional flux, coincident with the source to within 160 mas. We estimate the potential contributions to this blended light (a chance-aligned star, or an additional companion to the lens or to the source) and find that 85% of the NIR flux is due to the lens star at H L = 16.63 ± 0.06 and K L = 16.44 ± 0.06. We combined the parallax constraint and the AO constraint to derive the physical parameters of the system. The lensing system is composed of a mid-late type G main-sequence star of M L = 0.9 ± 0.05 M ⊙ located at D L = 3.5 ± 0.2 kpc in the Galactic disk. Taking the mass ratio and projected separation from the original study leads to a planet of M p = 0.65 ± 0.044 M Jupiter at 3.48 ± 0.22 au. Excellent parallax measurements from simultaneous ground-space observations have been obtained for the microlensing event OGLE-2014-BLG-0124, but it is only when they are combined with ∼30 minutes of Keck II AO observations that the physical parameters of the host star are well measured.

  1. Moment tensor inversion with three-dimensional sensor configuration of mining induced seismicity (Kiruna mine, Sweden)

    NASA Astrophysics Data System (ADS)

    Ma, Ju; Dineva, Savka; Cesca, Simone; Heimann, Sebastian

    2018-06-01

    Mining induced seismicity is an undesired consequence of mining operations, which poses significant hazard to miners and infrastructures and requires an accurate analysis of the rupture process. Seismic moment tensors of mining-induced events help to understand the nature of mining-induced seismicity by providing information about the relationship between the mining, stress redistribution and instabilities in the rock mass. In this work, we adapt and test a waveform-based inversion method on high frequency data recorded by a dense underground seismic system in one of the largest underground mines in the world (Kiruna mine, Sweden). A stable algorithm for moment tensor inversion for comparatively small mining induced earthquakes, resolving both the double-couple and full moment tensor with high frequency data, is very challenging. Moreover, the application to underground mining system requires accounting for the 3-D geometry of the monitoring system. We construct a Green's function database using a homogeneous velocity model, but assuming a 3-D distribution of potential sources and receivers. We first perform a set of moment tensor inversions using synthetic data to test the effects of different factors on moment tensor inversion stability and source parameters accuracy, including the network spatial coverage, the number of sensors and the signal-to-noise ratio. The influence of the accuracy of the input source parameters on the inversion results is also tested. Those tests show that an accurate selection of the inversion parameters allows resolving the moment tensor also in the presence of realistic seismic noise conditions. Finally, the moment tensor inversion methodology is applied to eight events chosen from mining block #33/34 at Kiruna mine. 
Source parameters including scalar moment, magnitude, double-couple, compensated linear vector dipole and isotropic contributions as well as the strike, dip and rake configurations of the double-couple term were obtained. The orientations of the nodal planes of the double-couple component in most cases vary from NNW to NNE with a dip along the ore body or in the opposite direction.

  2. Moment Tensor Inversion with 3D sensor configuration of Mining Induced Seismicity (Kiruna mine, Sweden)

    NASA Astrophysics Data System (ADS)

    Ma, Ju; Dineva, Savka; Cesca, Simone; Heimann, Sebastian

    2018-03-01

    Mining induced seismicity is an undesired consequence of mining operations, which poses significant hazard to miners and infrastructures and requires an accurate analysis of the rupture process. Seismic moment tensors of mining-induced events help to understand the nature of mining-induced seismicity by providing information about the relationship between the mining, stress redistribution and instabilities in the rock mass. In this work, we adapt and test a waveform-based inversion method on high frequency data recorded by a dense underground seismic system in one of the largest underground mines in the world (Kiruna mine, Sweden). A stable algorithm for moment tensor inversion for comparatively small mining induced earthquakes, resolving both the double couple and full moment tensor with high frequency data, is very challenging. Moreover, the application to an underground mining system requires accounting for the 3D geometry of the monitoring system. We construct a Green's function database using a homogeneous velocity model, but assuming a 3D distribution of potential sources and receivers. We first perform a set of moment tensor inversions using synthetic data to test the effects of different factors on moment tensor inversion stability and source parameters accuracy, including the network spatial coverage, the number of sensors and the signal-to-noise ratio. The influence of the accuracy of the input source parameters on the inversion results is also tested. Those tests show that an accurate selection of the inversion parameters allows resolving the moment tensor also in the presence of realistic seismic noise conditions. Finally, the moment tensor inversion methodology is applied to eight events chosen from mining block #33/34 at Kiruna mine. Source parameters including scalar moment, magnitude, double couple, compensated linear vector dipole and isotropic contributions, as well as the strike, dip, and rake configurations of the double couple term, were obtained.
The orientations of the nodal planes of the double-couple component in most cases vary from NNW to NNE with a dip along the ore body or in the opposite direction.

  3. Life and Death Near Zero: The distribution and evolution of NEA orbits of near-zero MOID, (e, i), and q

    NASA Astrophysics Data System (ADS)

    Harris, Alan W.; Morbidelli, Alessandro; Granvik, Mikael

    2016-10-01

    Modeling the distribution of orbits with near-zero orbital parameters requires special attention to the dimensionality of the parameters in question. This is even more true since orbits of near-zero MOID, (e, i), or q are especially interesting as sources or sinks of NEAs. An essentially zero value of MOID (Minimum Orbital Intersection Distance) with respect to the Earth's orbit is a requirement for an impact trajectory, and initially also for ejecta from lunar impacts into heliocentric orbits. The collision cross section of the Earth goes up greatly with decreasing relative encounter velocity, venc, thus the impact flux onto the Earth is enhanced in such low-venc objects, which correspond to near-zero (e,i) orbits. And lunar ejecta that escapes from the Earth-moon system mostly does so at only barely greater than minimum velocity for escape (Gladman, et al., 1995, Icarus 118, 302-321), so the Earth-moon system is both a source and a sink of such low-venc orbits, and understanding the evolution of these populations requires accurately modeling the orbit distributions. Lastly, orbits of very low heliocentric perihelion distance, q, are particularly interesting as a "sink" in the NEA population as asteroids "fall into the sun" (Farinella, et al., 1994, Nature 371, 314-317). Understanding this process, and especially the role of disintegration of small asteroids as they evolve into low-q orbits (Granvik et al., 2016, Nature 530, 303-306), requires accurate modeling of the q distribution that would exist in the absence of a "sink" in the distribution. In this paper, we derive analytical expressions for the expected steady-state distributions near zero of MOID, (e,i), and q in the absence of sources or sinks, compare those to numerical simulations of orbit distributions, and lastly evaluate the distributions of discovered NEAs to try to understand the sources and sinks of NEAs "near zero" of these orbital parameters.

  4. Aerosol Retrievals over the Ocean using Channel 1 and 2 AVHRR Data: A Sensitivity Analysis and Preliminary Results

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Geogdzhayev, Igor V.; Cairns, Brian; Rossow, William B.; Lacis, Andrew A.

    1999-01-01

    This paper outlines the methodology of interpreting channel 1 and 2 AVHRR radiance data over the oceans and describes a detailed analysis of the sensitivity of monthly averages of retrieved aerosol parameters to the assumptions made in different retrieval algorithms. The analysis is based on using real AVHRR data and exploiting accurate numerical techniques for computing single and multiple scattering and spectral absorption of light in the vertically inhomogeneous atmosphere-ocean system. We show that two-channel algorithms can be expected to provide significantly more accurate and less biased retrievals of the aerosol optical thickness than one-channel algorithms and that imperfect cloud screening and calibration uncertainties are by far the largest sources of errors in the retrieved aerosol parameters. Both underestimating and overestimating aerosol absorption as well as the potentially strong variability of the real part of the aerosol refractive index may lead to regional and/or seasonal biases in optical thickness retrievals. The Angstrom exponent appears to be the most invariant aerosol size characteristic and should be retrieved along with optical thickness as the second aerosol parameter.
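    The Angstrom exponent retrieved as the second aerosol parameter follows directly from the optical thicknesses in the two channels. A minimal sketch, using approximate AVHRR channel-center wavelengths as assumed values:

```python
import math

def angstrom_exponent(tau1, tau2, lam1=0.65, lam2=0.85):
    """Angstrom exponent from aerosol optical thickness at two wavelengths
    (here ~0.65 and ~0.85 um, roughly AVHRR channels 1 and 2)."""
    return -math.log(tau1 / tau2) / math.log(lam1 / lam2)

# A power-law aerosol, tau(lam) = beta * lam**(-alpha), is recovered exactly:
beta, alpha = 0.1, 1.3
tau1 = beta * 0.65 ** (-alpha)
tau2 = beta * 0.85 ** (-alpha)
```

Because the exponent depends only on the ratio of the two optical thicknesses, it is less sensitive to absolute calibration than either optical thickness alone, which is one reason it is a comparatively invariant size characteristic.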

  5. A new lumped-parameter model for flow in unsaturated dual-porosity media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zimmerman, Robert W.; Hadgu, Teklu; Bodvarsson, Gudmundur S.

    A new lumped-parameter approach to simulating unsaturated flow processes in dual-porosity media such as fractured rocks or aggregated soils is presented. Fluid flow between the fracture network and the matrix blocks is described by a non-linear equation that relates the imbibition rate to the local difference in liquid-phase pressure between the fractures and the matrix blocks. Unlike a Warren-Root-type equation, this equation is accurate in both the early and late time regimes. The fracture/matrix interflow equation has been incorporated into an existing unsaturated flow simulator, to serve as a source/sink term for fracture gridblocks. Flow processes are then simulated using only fracture gridblocks in the computational grid. This new lumped-parameter approach has been tested on two problems involving transient flow in fractured/porous media, and compared with simulations performed using explicit discretization of the matrix blocks. The new procedure seems to accurately simulate flow processes in unsaturated fractured rocks, and typically requires an order of magnitude less computational time than do simulations using fully-discretized matrix blocks.

  6. Dynamic Source Inversion of a M6.5 Intraslab Earthquake in Mexico: Application of a New Parallel Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Díaz-Mojica, J. J.; Cruz-Atienza, V. M.; Madariaga, R.; Singh, S. K.; Iglesias, A.

    2013-05-01

    We introduce a novel approach for imaging earthquake dynamics from ground motion records, based on a parallel genetic algorithm (GA). The method follows the elliptical dynamic-rupture-patch approach introduced by Di Carli et al. (2010) and has been carefully verified through different numerical tests (Díaz-Mojica et al., 2012). Apart from the five model parameters defining the patch geometry, our dynamic source description has four more parameters: the stress drops inside the nucleation and elliptical patches, and two friction parameters, the slip weakening distance and the change of the friction coefficient. These parameters are constant within the rupture surface. The forward dynamic source problem, involved in the GA inverse method, uses a highly accurate computational solver, namely the staggered-grid split-node method. The synthetic inversion presented here shows that the source model parameterization is suitable for the GA, and that short-scale source dynamic features are well resolved in spite of low-pass filtering of the data for periods comparable to the source duration. Since there is always uncertainty in the propagation medium as well as in the source location and the focal mechanism, we have introduced a statistical approach to generate a set of solution models, so that the envelope of the corresponding synthetic waveforms explains the observed data as well as possible. We applied the method to the 2012 Mw 6.5 intraslab Zumpango, Mexico earthquake and determined several fundamental source parameters that are in accordance with different and completely independent estimates for Mexican and worldwide earthquakes. Our weighted-average final model satisfactorily explains the eastward rupture directivity observed in the recorded data.
    Some parameters found for the Zumpango earthquake are Δτ = 30.2 ± 6.2 MPa, Er = (0.68 ± 0.36)×10^15 J, G = (1.74 ± 0.44)×10^15 J, η = 0.27 ± 0.11, Vr/Vs = 0.52 ± 0.09, and Mw = 6.64 ± 0.07, for the stress drop, radiated energy, fracture energy, radiation efficiency, rupture velocity, and moment magnitude, respectively. (Figure: Mw 6.5 intraslab Zumpango earthquake location, station locations, and tectonic setting in central Mexico.)

  7. As above, so below? Towards understanding inverse models in BCI

    NASA Astrophysics Data System (ADS)

    Lindgren, Jussi T.

    2018-02-01

    Objective. In brain-computer interfaces (BCI), measurements of the user’s brain activity are classified into commands for the computer. With EEG-based BCIs, the origins of the classified phenomena are often considered to be spatially localized in the cortical volume and mixed in the EEG. We investigate if more accurate BCIs can be obtained by reconstructing the source activities in the volume. Approach. We contrast the physiology-driven source reconstruction with data-driven representations obtained by statistical machine learning. We explain these approaches in a common linear dictionary framework and review the different ways to obtain the dictionary parameters. We consider the effect of source reconstruction on some major difficulties in BCI classification, namely information loss, feature selection and nonstationarity of the EEG. Main results. Our analysis suggests that the approaches differ mainly in their parameter estimation. Physiological source reconstruction may thus be expected to improve BCI accuracy if machine learning is not used or where it produces less optimal parameters. We argue that the considered difficulties of surface EEG classification can remain in the reconstructed volume and that data-driven techniques are still necessary. Finally, we provide some suggestions for comparing approaches. Significance. The present work illustrates the relationships between source reconstruction and machine learning-based approaches for EEG data representation. The provided analysis and discussion should help in understanding, applying, comparing and improving such techniques in the future.

  8. The net fractional depth dose: a basis for a unified analytical description of FDD, TAR, TMR, and TPR.

    PubMed

    van de Geijn, J; Fraass, B A

    1984-01-01

    The net fractional depth dose (NFD) is defined as the fractional depth dose (FDD) corrected for inverse square law. Analysis of its behavior as a function of depth, field size, and source-surface distance has led to an analytical description with only seven model parameters related to straightforward physical properties. The determination of the characteristic parameter values requires only seven experimentally determined FDDs. The validity of the description has been tested for beam qualities ranging from 60Co gamma rays to 18-MV x rays, using published data from several different sources as well as locally measured data sets. The small number of model parameters is attractive for computer or hand-held calculator applications. The small amount of required measured data is important in view of practical data acquisition for implementation of a computer-based dose calculation system. The generating function allows easy and accurate generation of FDD, tissue-air ratio, tissue-maximum ratio, and tissue-phantom ratio tables.
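    The defining correction can be written down directly. A minimal sketch, assuming the common convention of referencing the dose to the depth of maximum dose at a given SSD (the paper's exact sign convention may differ):

```python
def net_fractional_depth_dose(fdd, depth, ssd, d_ref):
    """Strip the inverse-square component from a fractional depth dose value.
    fdd in percent; depth, ssd, d_ref (reference depth) in cm."""
    return fdd * ((ssd + depth) / (ssd + d_ref)) ** 2

# Sanity check: a fictitious beam with no attenuation or scatter, whose FDD is
# pure inverse square, has a perfectly flat NFD of 100% at every depth.
ssd, d_ref = 100.0, 1.5
def fdd_pure(d):
    return 100.0 * ((ssd + d_ref) / (ssd + d)) ** 2

nfd_values = [net_fractional_depth_dose(fdd_pure(d), d, ssd, d_ref)
              for d in (5.0, 10.0, 20.0)]
```

Removing the geometric falloff in this way is what leaves the NFD dependent only on attenuation and scatter, the behavior the seven-parameter model then describes.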

  9. Net fractional depth dose: a basis for a unified analytical description of FDD, TAR, TMR, and TPR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    van de Geijn, J.; Fraass, B.A.

    The net fractional depth dose (NFD) is defined as the fractional depth dose (FDD) corrected for inverse square law. Analysis of its behavior as a function of depth, field size, and source-surface distance has led to an analytical description with only seven model parameters related to straightforward physical properties. The determination of the characteristic parameter values requires only seven experimentally determined FDDs. The validity of the description has been tested for beam qualities ranging from 60Co gamma rays to 18-MV x rays, using published data from several different sources as well as locally measured data sets. The small number of model parameters is attractive for computer or hand-held calculator applications. The small amount of required measured data is important in view of practical data acquisition for implementation of a computer-based dose calculation system. The generating function allows easy and accurate generation of FDD, tissue-air ratio, tissue-maximum ratio, and tissue-phantom ratio tables.

  10. Joint probabilistic determination of earthquake location and velocity structure: application to local and regional events

    NASA Astrophysics Data System (ADS)

    Beucler, E.; Haugmard, M.; Mocquet, A.

    2016-12-01

    The most widely used inversion schemes to locate earthquakes are based on iterative linearized least-squares algorithms and an a priori knowledge of the propagation medium. When only a small number of observations is available, for moderate events for instance, these methods may produce solutions that trade off strongly against both the velocity model and the initial set of hypocentral parameters. We present a joint structure-source determination approach using Bayesian inference. Monte-Carlo continuous samplings, using Markov chains, generate models within a broad range of parameters, distributed according to the unknown posterior distributions. The non-linear exploration of both the seismic structure (velocity and thickness) and the source parameters relies on a fast forward problem using 1-D travel time computations. The a posteriori covariances between parameters (hypocentre depth, origin time and seismic structure among others) are computed and explicitly documented. This method reduces the influence of the surrounding seismic network geometry (sparse and/or azimuthally inhomogeneous) and of an overly constrained velocity structure by inferring realistic distributions of the hypocentral parameters. Our algorithm is successfully used to accurately locate events of the Armorican Massif (western France), which is characterized by moderate and apparently diffuse local seismicity.
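    The Markov-chain sampling idea can be illustrated on a toy version of the problem: a Metropolis random walk over focal depth and origin time with a uniform-velocity travel time forward model. All values below are invented for the illustration; the actual method also samples the velocity and thickness structure:

```python
import math, random

random.seed(42)
V = 6.0                            # assumed uniform P velocity, km/s
dists = [10.0, 25.0, 40.0, 60.0]   # epicentral distances of four stations, km
true_z, true_t0 = 12.0, 3.0        # synthetic "true" depth (km) and origin time (s)
obs = [true_t0 + math.sqrt(d * d + true_z * true_z) / V for d in dists]
sigma = 0.05                       # assumed arrival-time picking error, s

def log_like(z, t0):
    pred = [t0 + math.sqrt(d * d + z * z) / V for d in dists]
    return -sum((o - p) ** 2 for o, p in zip(obs, pred)) / (2 * sigma ** 2)

# Metropolis random walk over (depth, origin time).
z, t0 = 5.0, 0.0
ll = log_like(z, t0)
samples = []
for _ in range(20000):
    zp, tp = z + random.gauss(0, 0.5), t0 + random.gauss(0, 0.2)
    if zp > 0:
        llp = log_like(zp, tp)
        if llp - ll > math.log(random.random()):   # standard acceptance rule
            z, t0, ll = zp, tp, llp
    samples.append((z, t0))

burn = samples[5000:]                              # discard burn-in
z_mean = sum(s[0] for s in burn) / len(burn)
t0_mean = sum(s[1] for s in burn) / len(burn)
```

The spread of the post-burn-in samples directly gives the posterior covariance between depth and origin time, the quantity the abstract emphasizes.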

  11. Single source photoplethysmograph transducer for local pulse wave velocity measurement.

    PubMed

    Nabeel, P M; Joseph, Jayaraj; Awasthi, Vartika; Sivaprakasam, Mohanasankar

    2016-08-01

    Cuffless evaluation of arterial blood pressure (BP) using pulse wave velocity (PWV) has received attention over the years. Local PWV based techniques for cuffless BP measurement have greater potential for accurate estimation of BP parameters. In this work, we present the design and experimental validation of a novel single-source photoplethysmograph (PPG) transducer for arterial blood pulse detection and cycle-to-cycle local PWV measurement. The ability of the transducer to continuously measure local PWV was verified using an arterial flow phantom as well as by conducting an in-vivo study on 17 volunteers. The single-source PPG transducer could reliably acquire dual blood pulse waveforms along small artery sections of length less than 28 mm. The transducer was able to perform repeatable measurements of carotid local PWV on multiple subjects with maximum beat-to-beat variation less than 12%. The correlation between measured carotid local PWV and brachial BP parameters was also investigated during the in-vivo study. The study results demonstrate the potential of the newly proposed single-source PPG transducer for continuous cuffless BP measurement systems.
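    The cycle-to-cycle measurement reduces to estimating the pulse transit time between the two pickup points and dividing the sensor separation by it. A minimal cross-correlation sketch; the sampling rate, separation, and waveform are invented for the illustration:

```python
import numpy as np

FS = 1000            # sampling rate, Hz (assumed)
SEPARATION = 0.025   # distance between pickup points along the artery, m (~25 mm)

def local_pwv(prox, dist, fs=FS, sep=SEPARATION):
    """Estimate local PWV from proximal/distal pulse waveforms via the
    cross-correlation lag (pulse transit time) between them."""
    prox = prox - prox.mean()
    dist = dist - dist.mean()
    corr = np.correlate(dist, prox, mode="full")
    lag = np.argmax(corr) - (len(prox) - 1)   # samples by which `dist` trails `prox`
    return sep * fs / lag

# Synthetic pulse delayed by 5 samples -> transit time 5 ms -> PWV = 0.025/0.005 = 5 m/s
t = np.arange(0, 1, 1 / FS)
pulse = np.exp(-((t - 0.3) ** 2) / (2 * 0.02 ** 2))
delayed = np.roll(pulse, 5)
```

Real signals require per-beat segmentation and sub-sample interpolation of the lag, since at 1 kHz a one-sample error over 25 mm changes the estimate substantially.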

  12. Modeling of single event transients with dual double-exponential current sources: Implications for logic cell characterization

    DOE PAGES

    Black, Dolores Archuleta; Robinson, William H.; Wilcox, Ian Zachary; ...

    2015-08-07

    Single event effects (SEE) are a reliability concern for modern microelectronics. Bit corruptions can be caused by single event upsets (SEUs) in the storage cells or by sampling single event transients (SETs) from a logic path. An accurate prediction of soft error susceptibility from SETs therefore requires good models to convert collected charge into compact descriptions of the current injection process. This paper describes a simple, yet effective, method to model the current waveform resulting from a charge collection event for SET circuit simulations. The model uses two double-exponential current sources in parallel, and the results illustrate why a conventional model based on one double-exponential source can be incomplete. Furthermore, a small set of logic cells with varying input conditions, drive strength, and output loading are simulated to extract the parameters for the dual double-exponential current sources. As a result, the parameters are based upon both the node capacitance and the restoring current (i.e., drive strength) of the logic cell.
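    The dual-source waveform can be sketched directly. The amplitudes and time constants below are illustrative placeholders, not parameters extracted in the paper:

```python
import numpy as np

def double_exp(t, amp, tau_rise, tau_fall):
    """Classic double-exponential current pulse, zero before t = 0."""
    shape = np.exp(-t / tau_fall) - np.exp(-t / tau_rise)
    return np.where(t >= 0.0, amp * shape, 0.0)

def dual_double_exp(t,
                    fast=(1.0e-3, 5e-12, 50e-12),     # (A, s, s): prompt component
                    slow=(0.2e-3, 50e-12, 500e-12)):  # (A, s, s): slower tail
    """Two double-exponential sources in parallel, as in the dual-source model:
    a fast prompt-collection pulse plus a slower diffusion-like tail."""
    return double_exp(t, *fast) + double_exp(t, *slow)

t = np.linspace(0.0, 2e-9, 2001)
i = dual_double_exp(t)

# Collected charge is the time integral of the injected current (trapezoid rule).
dt = t[1] - t[0]
q = float(0.5 * dt * (i[:-1] + i[1:]).sum())
# Analytically each component contributes amp * (tau_fall - tau_rise) coulombs,
# so a single double-exponential cannot reproduce both the prompt peak and the tail.
```

In a SPICE-level characterization flow, each such waveform would be injected at a struck node while sweeping drive strength and load, which is how the per-cell parameters are extracted.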

  13. OpenCFU, a New Free and Open-Source Software to Count Cell Colonies and Other Circular Objects

    PubMed Central

    Geissmann, Quentin

    2013-01-01

    Counting circular objects such as cell colonies is an important source of information for biologists. Although this task is often time-consuming and subjective, it is still predominantly performed manually. The aim of the present work is to provide a new tool to enumerate circular objects from digital pictures and video streams. Here, I demonstrate that the created program, OpenCFU, is very robust, accurate and fast. In addition, it provides control over the processing parameters and is implemented in an intuitive and modern interface. OpenCFU is a cross-platform and open-source software freely available at http://opencfu.sourceforge.net. PMID:23457446

  14. Monte Carlo simulation of the full energy peak efficiency of an HPGe detector.

    PubMed

    Khan, Waseem; Zhang, Qingmin; He, Chaohui; Saleh, Muhammad

    2018-01-01

    This paper presents a Monte Carlo method to obtain the full energy peak efficiency (FEPE) curve for a High Purity Germanium (HPGe) detector, as it is difficult and time-consuming to measure the FEPE curve experimentally. The Geant4 simulation toolkit was adopted to establish a detector model, since the detector specifications provided by the manufacturer are usually insufficient to calculate the accurate efficiency of a detector. Several detector parameters were optimized. FEPE curves for the given HPGe detector over the energy range of 59.50-1836 keV were obtained and showed good agreement with those measured experimentally. FEPE dependences on detector parameters and source-detector distances were investigated. The best agreement with experimental results was achieved for a certain detector geometry and source-detector distance. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Volcano deformation source parameters estimated from InSAR: Sensitivities to uncertainties in seismic tomography

    USGS Publications Warehouse

    Masterlark, Timothy; Donovan, Theodore; Feigl, Kurt L.; Haney, Matt; Thurber, Clifford H.; Tung, Sui

    2016-01-01

    The eruption cycle of a volcano is controlled in part by the upward migration of magma. The characteristics of the magma flux produce a deformation signature at the Earth's surface. Inverse analyses use geodetic data to estimate strategic controlling parameters that describe the position and pressurization of a magma chamber at depth. The specific distribution of material properties controls how observed surface deformation translates to source parameter estimates. Seismic tomography models describe the spatial distributions of material properties that are necessary for accurate models of volcano deformation. This study investigates how uncertainties in seismic tomography models propagate into variations in the estimates of volcano deformation source parameters inverted from geodetic data. We conduct finite element model-based nonlinear inverse analyses of interferometric synthetic aperture radar (InSAR) data for Okmok volcano, Alaska, as an example. We then analyze the estimated parameters and their uncertainties to characterize the magma chamber. Analyses are performed separately for models simulating a pressurized chamber embedded in a homogeneous domain as well as for a domain having a heterogeneous distribution of material properties according to seismic tomography. The estimated depth of the source is sensitive to the distribution of material properties. The estimated depths for the homogeneous and heterogeneous domains are 2666 ± 42 and 3527 ± 56 m below mean sea level, respectively (99% confidence). A Monte Carlo analysis indicates that uncertainties of the seismic tomography cannot account for this discrepancy at the 99% confidence level. Accounting for the spatial distribution of elastic properties according to seismic tomography significantly improves the fit of the deformation model predictions and significantly influences estimates for parameters that describe the location of a pressurized magma chamber.
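    For intuition about why the estimated chamber depth is so sensitive to the assumed structure, it helps to recall the simplest deformation source, a Mogi point source in a homogeneous elastic half-space. This is a textbook stand-in, not the paper's finite element model:

```python
import numpy as np

def mogi_uz(r, depth, dV, nu=0.25):
    """Vertical surface displacement of a Mogi point source with volume change dV
    at the given depth, at radial distance r (consistent length units)."""
    return (1.0 - nu) / np.pi * dV * depth / (depth ** 2 + r ** 2) ** 1.5

# Recover depth from a synthetic profile by 1-D grid search. dV is held at its
# true value here; in real inversions depth and volume change trade off.
r = np.linspace(0.0, 20000.0, 201)        # m
obs = mogi_uz(r, 3500.0, 1.0e6)           # "observed" profile from a 3.5 km source
depths = np.arange(1000.0, 6000.0, 50.0)
best_depth = min(depths,
                 key=lambda d: float(np.sum((obs - mogi_uz(r, d, 1.0e6)) ** 2)))
```

Because the surface signal depends on depth only through the half-space kernel, replacing the homogeneous medium with a tomography-based heterogeneous one reshapes that kernel and shifts the best-fitting depth, which is the effect the study quantifies.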

  16. Minimization of nanosatellite low frequency magnetic fields.

    PubMed

    Belyayev, S M; Dudkin, F L

    2016-03-01

    The small weight and dimensions of micro- and nanosatellites constrain researchers to place electromagnetic sensors on short booms or on the satellite body, so the electromagnetic cleanliness of such satellites becomes a central question. This paper describes the theoretical basis and practical techniques for determining the parameters of DC and very low frequency magnetic interference sources. One such source is satellite magnetization, the reduction of which improves the accuracy and stability of the attitude control system. We present design solutions for magnetically clean spacecraft, testing equipment, and technology for magnetic moment measurements, which are more convenient, efficient, and accurate than the conventional ones.

  17. Method and system for producing sputtered thin films with sub-angstrom thickness uniformity or custom thickness gradients

    DOEpatents

    Folta, James A.; Montcalm, Claude; Walton, Christopher

    2003-01-01

    A method and system for producing a thin film with highly uniform (or highly accurate custom graded) thickness on a flat or graded substrate (such as concave or convex optics), by sweeping the substrate across a vapor deposition source with controlled (and generally, time-varying) velocity. In preferred embodiments, the method includes the steps of measuring the source flux distribution (using a test piece that is held stationary while exposed to the source), calculating a set of predicted film thickness profiles, each film thickness profile assuming the measured flux distribution and a different one of a set of sweep velocity modulation recipes, and determining from the predicted film thickness profiles a sweep velocity modulation recipe which is adequate to achieve a predetermined thickness profile. Aspects of the invention include a practical method of accurately measuring source flux distribution, and a computer-implemented method employing a graphical user interface to facilitate convenient selection of an optimal or nearly optimal sweep velocity modulation recipe to achieve a desired thickness profile on a substrate. Preferably, the computer implements an algorithm in which many sweep velocity function parameters (for example, the speed at which each substrate spins about its center as it sweeps across the source) can be varied or set to zero.
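    The core relation behind velocity-modulated sweeps is that dwell time per unit sweep distance is 1/velocity, so the deposited thickness is the measured flux profile convolved with the inverse-velocity recipe. A minimal sketch of that prediction step, with a hypothetical Gaussian flux profile (not the patent's measurement procedure):

```python
import numpy as np

def predicted_thickness(flux, sweep_pos, velocity, substrate_x):
    """Thickness at each substrate point after one sweep.

    Dwell time per unit sweep distance is 1/velocity, so the deposit is
    the flux profile convolved with the inverse-velocity recipe.
    """
    thickness = np.zeros_like(substrate_x)
    ds = sweep_pos[1] - sweep_pos[0]
    for s_i, v_i in zip(sweep_pos, velocity):
        thickness += flux(substrate_x - s_i) / v_i * ds
    return thickness

# Hypothetical Gaussian source-flux distribution (stand-in for the profile
# measured with a stationary test piece).
flux = lambda u: np.exp(-(u / 3.0) ** 2)

x = np.linspace(-5, 5, 101)          # substrate coordinates [cm]
s = np.linspace(-20, 20, 801)        # sweep positions [cm]

t_const = predicted_thickness(flux, s, np.full_like(s, 1.0), x)
# A modulated recipe: slow down away from center to thicken the edges.
v_mod = 1.0 / (1.0 + 0.05 * np.abs(s))
t_mod = predicted_thickness(flux, s, v_mod, x)
print(f"uniform-recipe nonuniformity: {np.ptp(t_const) / t_const.mean():.2%}")
```

    A full sweep at constant velocity yields a nearly uniform film, while the modulated recipe deposits a controlled edge-thick gradient; the patent's search over candidate recipes amounts to repeating this forward prediction for each recipe.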

  18. Fire tests for airplane interior materials

    NASA Technical Reports Server (NTRS)

    Tustin, E. A.

    1980-01-01

    Large-scale, simulated fire tests of aircraft interior materials were carried out in a salvaged airliner fuselage. Two "design" fire sources were selected: Jet A fuel ignited in the fuselage midsection, and a trash bag fire. Comparison with six established laboratory fire tests shows that some laboratory tests can rank materials according to heat and smoke production, but existing tests do not characterize toxic gas emissions accurately. The report includes test parameters and test details.

  19. Localization of transient gravitational wave sources: beyond triangulation

    NASA Astrophysics Data System (ADS)

    Fairhurst, Stephen

    2018-05-01

    Rapid, accurate localization of gravitational wave transient events has proved critical to successful electromagnetic followup. In previous papers we have shown that localization estimates can be obtained through triangulation based on timing information at the detector sites. In practice, detailed parameter estimation routines use additional information and provide better localization than is possible based on timing information alone. In this paper, we extend the timing based localization approximation to incorporate consistency of observed signals with two gravitational wave polarizations, and an astrophysically motivated distribution of sources. Both of these provide significant improvements to source localization, allowing many sources to be restricted to a single sky region, with an area 40% smaller than predicted by timing information alone. Furthermore, we show that the vast majority of sources will be reconstructed to be circularly polarized or, equivalently, indistinguishable from face-on.
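    The timing-triangulation baseline that this paper extends can be sketched in a few lines: a measured arrival-time difference between two sites confines the source to a ring whose opening angle about the baseline satisfies cos(theta) = c*dt/b, with width set by the timing accuracy. The baseline and timing numbers below are hypothetical, chosen only to be of the right order for a ground-based detector pair:

```python
import numpy as np

C = 299792458.0  # speed of light [m/s]

def ring_angle(baseline_m, dt_s):
    """Angle between source direction and detector baseline implied by an
    arrival-time difference (pure triangulation)."""
    return np.arccos(np.clip(C * dt_s / baseline_m, -1.0, 1.0))

def ring_angle_uncertainty(baseline_m, dt_s, sigma_t_s):
    """Ring width from timing error: delta(theta) = c*sigma_t / (b*sin(theta))."""
    theta = ring_angle(baseline_m, dt_s)
    return C * sigma_t_s / (baseline_m * np.sin(theta))

baseline = 3.0e6               # ~3000 km baseline [m], hypothetical
dt, sigma_t = 5.0e-3, 1.0e-4   # measured delay and timing accuracy [s]
theta = ring_angle(baseline, dt)
width = ring_angle_uncertainty(baseline, dt, sigma_t)
print(f"ring at {np.degrees(theta):.1f} deg, width {np.degrees(width):.2f} deg")
```

    Timing alone gives only this ring; the polarization-consistency and source-distribution terms introduced in the paper are what break the ring into a smaller preferred region.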

  20. Constraints on the ^22Ne(α,n)^25Mg reaction rate from ^natMg+n Total and ^25Mg(n,γ ) Cross Sections

    NASA Astrophysics Data System (ADS)

    Koehler, Paul

    2002-10-01

    The ^22Ne(α,n)^25Mg reaction is the neutron source during the s process in massive and intermediate mass stars as well as a secondary neutron source during the s process in low mass stars. Therefore, an accurate determination of this rate is important for a better understanding of the origin of nuclides heavier than iron as well as for improving s-process models. Also, because the s process produces seed nuclides for a later p process in massive stars, an accurate value for this rate is important for a better understanding of the p process. Because the lowest observed resonance in direct ^22Ne(α,n)^25Mg measurements is considerably above the most important energy range for s-process temperatures, the uncertainty in this rate is dominated by the poorly known properties of states in ^26Mg between this resonance and threshold. Neutron measurements can observe these states with much better sensitivity and determine their parameters much more accurately than direct ^22Ne(α,n)^25Mg measurements. I have analyzed previously reported Mg+n total and ^25Mg(n,γ) cross sections to obtain a much improved set of resonance parameters for states in ^26Mg in this region, and an improved estimate of the uncertainty in the ^22Ne(α,n)^25Mg reaction rate. This work was supported by the U.S. DOE under contract No. DE-AC05-00OR22725 with UT-Battelle, LLC.

  1. SU-F-T-54: Determination of the AAPM TG-43 Brachytherapy Dosimetry Parameters for A New Titanium-Encapsulated Yb-169 Source by Monte Carlo Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reynoso, F; Washington University School of Medicine, St. Louis, MO; Munro, J

    2016-06-15

    Purpose: To determine the AAPM TG-43 brachytherapy dosimetry parameters of a new titanium-encapsulated Yb-169 source designed to maximize the dose enhancement during gold nanoparticle-aided radiation therapy (GNRT). Methods: An existing Monte Carlo (MC) model of the titanium-encapsulated Yb-169 source, described in the current investigators’ published MC optimization study, was modified based on the source manufacturer’s detailed specifications, resulting in an accurate model of the titanium-encapsulated Yb-169 source as actually manufactured. MC calculations were then performed using the MCNP5 code system and the modified source model to obtain a complete set of the AAPM TG-43 parameters for the new Yb-169 source. Results: The MC-calculated dose rate constant for the new titanium-encapsulated Yb-169 source was 1.05 ± 0.03 cGy per hr U, about 10% lower than the values reported for conventional stainless steel-encapsulated Yb-169 sources. The source anisotropy and radial dose function for the new source were found to be similar to those reported for conventional Yb-169 sources. Conclusion: In this study, the AAPM TG-43 brachytherapy dosimetry parameters of a new titanium-encapsulated Yb-169 source were determined by MC calculations. The results suggest that the use of titanium, instead of stainless steel, to encapsulate the Yb-169 core would not lead to any major change in the dosimetric characteristics of the source, while allowing more low-energy photons to be transmitted through the source filter, thereby increasing the dose enhancement during GNRT. This investigation was supported by DOD/PCRP grant W81XWH-12-1-0198.
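    The quantities named here combine in the standard TG-43 1-D (point-source) dose-rate formalism. A sketch using the dose rate constant reported above, but purely illustrative tabulated values for the radial dose function g(r) and anisotropy factor (not the published Yb-169 data):

```python
import numpy as np

def dose_rate_tg43_1d(sk, dose_rate_const, r_cm, g_interp, phi_an_interp, r0=1.0):
    """AAPM TG-43 1-D (point-source) dose rate [cGy/h]:
    D(r) = Sk * Lambda * (r0/r)^2 * g(r) * phi_an(r),
    with g and phi_an interpolated from tabulated values."""
    geom = (r0 / r_cm) ** 2
    return sk * dose_rate_const * geom * g_interp(r_cm) * phi_an_interp(r_cm)

# Hypothetical tabulated radial dose function and anisotropy factor:
r_tab   = np.array([0.5, 1.0, 2.0, 3.0, 5.0])
g_tab   = np.array([0.98, 1.00, 1.02, 1.01, 0.95])
phi_tab = np.array([0.97, 0.96, 0.95, 0.95, 0.94])
g      = lambda r: np.interp(r, r_tab, g_tab)
phi_an = lambda r: np.interp(r, r_tab, phi_tab)

# Source strength Sk = 10 U, dose rate constant Lambda = 1.05 cGy/(h*U):
print(round(dose_rate_tg43_1d(10.0, 1.05, 2.0, g, phi_an), 3))
```

    The MC study's job is precisely to supply Lambda, g(r), and the anisotropy tables that this formalism consumes.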

  2. Physics of compact nonthermal sources. III - Energetic considerations. [electron synchrotron radiation

    NASA Technical Reports Server (NTRS)

    Burbidge, G. R.; Jones, T. W.; Odell, S. L.

    1974-01-01

    The energy content of the compact incoherent electron-synchrotron sources 3C 84, 3C 120, 3C 273, 3C 279, 3C 454.3, CTA 102, 3C 446, PKS 2134+004, VRO 42.22.01 and OJ 287 is calculated on the assumption that the low-frequency turnovers in the radio spectrum are due to self-absorption and that the electron distribution is isotropic. The dependence of the source parameters on various modifications of the standard assumptions is determined. These involve relativistic motions, alternate explanations for the low-frequency turnover, proton-synchrotron radiation, and distance to the source. The canonical interpretation is found to be accurate in many respects; some of the difficulties and ways of dealing with them are discussed in detail.

  3. Configuration of electro-optic fire source detection system

    NASA Astrophysics Data System (ADS)

    Fabian, Ram Z.; Steiner, Zeev; Hofman, Nir

    2007-04-01

    Recent fighting activities in various parts of the world have highlighted the need for accurate fire source detection on one hand and a fast "sensor to shooter" cycle on the other. Both needs can be met by the SPOTLITE system, which dramatically enhances the capability to rapidly engage a hostile fire source with a minimum of casualties to friendly forces and innocent bystanders. The modular system design meets each customer's specific requirements and offers excellent potential for future growth and upgrades. The design and build of a fire source detection system is governed by sets of requirements issued by the operators. These can be translated into the following design criteria: I) Long-range, fast and accurate fire source detection capability. II) Capability to detect and classify different threats. III) Threat investigation capability. IV) Fire source data distribution capability (location, direction, video image, voice). V) Man-portability. In order to meet these design criteria, an optimized concept was presented and exercised for the SPOTLITE system. Three major modular components were defined: I) Electro-Optical Unit, including FLIR camera, CCD camera, laser range finder and marker. II) Electronic Unit, including the system computer and electronics. III) Controller Station Unit, including the HMI of the system. This article discusses the definition and optimization processes for the system's components, and shows how the SPOTLITE designers successfully introduced excellent solutions for other system parameters.

  4. Beyond seismic interferometry: imaging the earth's interior with virtual sources and receivers inside the earth

    NASA Astrophysics Data System (ADS)

    Wapenaar, C. P. A.; Van der Neut, J.; Thorbecke, J.; Broggini, F.; Slob, E. C.; Snieder, R.

    2015-12-01

    Imagine one could place seismic sources and receivers at any desired position inside the earth. Since the receivers would record the full wave field (direct waves, up- and downward reflections, multiples, etc.), this would give a wealth of information about the local structures, material properties and processes in the earth's interior. Although in reality one cannot place sources and receivers anywhere inside the earth, it appears to be possible to create virtual sources and receivers at any desired position, which accurately mimics the desired situation. The underlying method involves some major steps beyond standard seismic interferometry. With seismic interferometry, virtual sources can be created at the positions of physical receivers, assuming these receivers are illuminated isotropically. Our proposed method does not need physical receivers at the positions of the virtual sources; moreover, it does not require isotropic illumination. To create virtual sources and receivers anywhere inside the earth, it suffices to record the reflection response with physical sources and receivers at the earth's surface. We do not need detailed information about the medium parameters; it suffices to have an estimate of the direct waves between the virtual-source positions and the acquisition surface. With these prerequisites, our method can create virtual sources and receivers, anywhere inside the earth, which record the full wave field. The up- and downward reflections, multiples, etc. in the virtual responses are extracted directly from the reflection response at the surface. The retrieved virtual responses form an ideal starting point for accurate seismic imaging, characterization and monitoring.

  5. Numerical relativity waveform surrogate model for generically precessing binary black hole mergers

    NASA Astrophysics Data System (ADS)

    Blackman, Jonathan; Field, Scott E.; Scheel, Mark A.; Galley, Chad R.; Ott, Christian D.; Boyle, Michael; Kidder, Lawrence E.; Pfeiffer, Harald P.; Szilágyi, Béla

    2017-07-01

    A generic, noneccentric binary black hole (BBH) system emits gravitational waves (GWs) that are completely described by seven intrinsic parameters: the black hole spin vectors and the ratio of their masses. Simulating a BBH coalescence by solving Einstein's equations numerically is computationally expensive, requiring days to months of computing resources for a single set of parameter values. Since theoretical predictions of the GWs are often needed for many different source parameters, a fast and accurate model is essential. We present the first surrogate model for GWs from the coalescence of BBHs including all seven dimensions of the intrinsic noneccentric parameter space. The surrogate model, which we call NRSur7dq2, is built from the results of 744 numerical relativity simulations. NRSur7dq2 covers spin magnitudes up to 0.8 and mass ratios up to 2, includes all ℓ≤4 modes, begins about 20 orbits before merger, and can be evaluated in ~50 ms. We find the largest NRSur7dq2 errors to be comparable to the largest errors in the numerical relativity simulations, and more than an order of magnitude smaller than the errors of other waveform models. Our model, and more broadly the methods developed here, will enable studies that were not previously possible when using highly accurate waveforms, such as parameter inference and tests of general relativity with GW observations.

  6. A rapid and accurate method, ventilated chamber C-history method, of measuring the emission characteristic parameters of formaldehyde/VOCs in building materials.

    PubMed

    Huang, Shaodan; Xiong, Jianyin; Zhang, Yinping

    2013-10-15

    Indoor pollution caused by formaldehyde and volatile organic compounds (VOCs) emitted from building materials adversely affects people's health, so it is necessary to understand and control the behavior of the emission sources. Based on a detailed mass transfer analysis of the emission process in a ventilated chamber, this paper proposes a novel method of measuring the three emission characteristic parameters, i.e., the initial emittable concentration, the diffusion coefficient and the partition coefficient. A linear correlation between the logarithm of dimensionless concentration and time is derived; the three parameters can then be calculated from the intercept and slope of the correlation. Compared with the closed chamber C-history method, the test is performed under ventilated conditions, so some commonly used measurement instruments (e.g., GC/MS, HPLC) can be applied. Compared with other methods, the present method can rapidly and accurately measure the three parameters, with experimental time less than 12 h and R^2 ranging from 0.96 to 0.99 for the cases studied. An independent experiment was carried out to validate the developed method, and good agreement was observed between the simulations based on the determined parameters and the experiments. The present method should prove useful for quick characterization of formaldehyde/VOC emissions from indoor materials.
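    The extraction step the method relies on can be sketched with hypothetical data: regress the logarithm of the dimensionless concentration against time and read off the slope and intercept. The mapping from slope and intercept to the three physical parameters requires the paper's mass-transfer formulas, which are not reproduced here.

```python
import numpy as np

# Hypothetical chamber concentration measurements C(t) approaching the
# ventilated steady state C_inf (units arbitrary).
t = np.array([1.0, 2.0, 4.0, 6.0, 8.0, 10.0])       # hours
c = np.array([41.0, 33.5, 22.6, 15.2, 10.3, 6.9])   # measured concentration
c_inf = 0.5

# Dimensionless concentration; its logarithm should be linear in time.
y = np.log((c - c_inf) / c[0])
slope, intercept = np.polyfit(t, y, 1)
print(f"slope = {slope:.3f} 1/h, intercept = {intercept:.3f}")
```

    A strongly linear fit (as the paper's R^2 of 0.96-0.99 indicates) is what justifies reading the emission parameters off this single regression.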

  7. Reducing process delays for real-time earthquake parameter estimation - An application of KD tree to large databases for Earthquake Early Warning

    NASA Astrophysics Data System (ADS)

    Yin, Lucy; Andrews, Jennifer; Heaton, Thomas

    2018-05-01

    Earthquake parameter estimation using nearest-neighbor searches over a large database of observations can yield reliable predictions. However, in the real-time application of Earthquake Early Warning (EEW) systems, accurate prediction using a large database is penalized by a significant delay in processing time. We propose using a multidimensional binary search tree (KD tree) data structure to organize large seismic databases and reduce the nearest-neighbor search time for predictions. We evaluated the performance of the KD tree on the Gutenberg Algorithm, a database-searching algorithm for EEW. We constructed an offline test to predict peak ground motions using a database with feature sets of waveform filter-bank characteristics, and compared the results with the observed seismic parameters. We concluded that a large database provides more accurate predictions of ground motion information, such as peak ground acceleration, velocity, and displacement (PGA, PGV, PGD), than of source parameters, such as hypocenter distance. Organizing the database with a KD tree reduced the average search time by 85% relative to the exhaustive method, making the approach feasible for real-time implementation. The algorithm is straightforward, and the results will reduce the overall time of warning delivery for EEW.
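    A minimal KD-tree nearest-neighbor search shows why the tree beats an exhaustive scan: the tree is built by cycling split axes, and the search prunes any branch the current best-distance hypersphere cannot reach. The 4-D feature vectors here are random stand-ins for the filter-bank features:

```python
import numpy as np

def build_kdtree(points, indices=None, depth=0):
    """Recursively build a KD tree over the rows of `points`."""
    if indices is None:
        indices = list(range(len(points)))
    if not indices:
        return None
    axis = depth % points.shape[1]
    indices = sorted(indices, key=lambda i: points[i][axis])
    mid = len(indices) // 2
    return {"index": indices[mid],
            "axis": axis,
            "left": build_kdtree(points, indices[:mid], depth + 1),
            "right": build_kdtree(points, indices[mid + 1:], depth + 1)}

def nearest(node, points, query, best=None):
    """Nearest-neighbor search with branch pruning."""
    if node is None:
        return best
    i, axis = node["index"], node["axis"]
    d = np.linalg.norm(points[i] - query)
    if best is None or d < best[1]:
        best = (i, d)
    diff = query[axis] - points[i][axis]
    near, far = (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
    best = nearest(near, points, query, best)
    if abs(diff) < best[1]:            # best hypersphere crosses the split plane
        best = nearest(far, points, query, best)
    return best

rng = np.random.default_rng(1)
db = rng.random((2000, 4))             # stand-in for filter-bank feature vectors
tree = build_kdtree(db)
q = rng.random(4)
idx, dist = nearest(tree, db, q)
brute = int(np.argmin(np.linalg.norm(db - q, axis=1)))
print(f"kd-tree agrees with exhaustive search: {idx == brute}")
```

    The pruning test is what turns the average query from O(N) toward O(log N), which is the source of the 85% reduction the abstract reports.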

  8. Extended behavioural modelling of FET and lattice-mismatched HEMT devices

    NASA Astrophysics Data System (ADS)

    Khawam, Yahya; Albasha, Lutfi

    2017-07-01

    This study presents an improved large-signal model, based on measurement-driven behavioural modelling techniques, that can be used for high electron mobility transistors (HEMTs) and field effect transistors. The steps for accurate large- and small-signal transistor modelling are also discussed. The proposed DC model is based on the Fager model, since it balances the number of model parameters against accuracy. The objective is to increase the accuracy of the drain-source current model with respect to any change in gate or drain voltage, and to extend the improved DC model to account for the soft breakdown and kink effect found in some variants of HEMT devices. A hybrid Newton-Genetic algorithm is used to determine the unknown parameters in the developed model. In addition to accurate modelling of the transistor's DC characteristics, the complete large-signal model is built from multi-bias s-parameter measurements, using a hybrid of a multi-objective optimisation technique (Non-dominated Sorting Genetic Algorithm II) and a local minimum search (multivariable Newton's method) for parasitic element extraction. Finally, the results of DC modelling and multi-bias s-parameter modelling are presented, and three device-modelling recommendations are discussed.

  9. Accuracy of Binary Black Hole Waveform Models for Advanced LIGO

    NASA Astrophysics Data System (ADS)

    Kumar, Prayush; Fong, Heather; Barkett, Kevin; Bhagwat, Swetha; Afshari, Nousha; Chu, Tony; Brown, Duncan; Lovelace, Geoffrey; Pfeiffer, Harald; Scheel, Mark; Szilagyi, Bela; Simulating Extreme Spacetimes (SXS) Team

    2016-03-01

    Coalescing binaries of compact objects, such as black holes and neutron stars, are the primary targets for gravitational-wave (GW) detection with Advanced LIGO. Accurate modeling of the emitted GWs is required to extract information about the binary source. The most accurate solution to the general relativistic two-body problem is available in numerical relativity (NR), which is however limited in application due to computational cost. Current searches use semi-analytic models that are based on post-Newtonian (PN) theory and calibrated to NR. In this talk, I will present comparisons between contemporary models and high-accuracy numerical simulations performed using the Spectral Einstein Code (SpEC), focusing on the questions: (i) how well do models capture the binary's late inspiral, where they lack a priori accurate information from PN or NR, and (ii) how accurately do they model binaries with parameters outside their range of calibration? These results guide the choice of templates for future GW searches, and motivate future modeling efforts.

  10. Model misspecification detection by means of multiple generator errors, using the observed potential map.

    PubMed

    Zhang, Z; Jewett, D L

    1994-01-01

    Due to model misspecification, currently-used Dipole Source Localization (DSL) methods may contain Multiple-Generator Errors (MulGenErrs) when fitting simultaneously-active dipoles. The size of the MulGenErr is a function of both the model used, and the dipole parameters, including the dipoles' waveforms (time-varying magnitudes). For a given fitting model, by examining the variation of the MulGenErrs (or the fit parameters) under different waveforms for the same generating-dipoles, the accuracy of the fitting model for this set of dipoles can be determined. This method of testing model misspecification can be applied to evoked potential maps even when the parameters of the generating-dipoles are unknown. The dipole parameters fitted in a model should only be accepted if the model can be shown to be sufficiently accurate.

  11. Interpretation of Source Parameters from Total Gradient of Gravity and Magnetic Anomalies Caused by Thin Dyke using Nonlinear Global Optimization Technique

    NASA Astrophysics Data System (ADS)

    Biswas, A.

    2016-12-01

    An efficient approach to estimating model parameters from the total gradient of gravity and magnetic data, based on Very Fast Simulated Annealing (VFSA), is presented. This is the first application of VFSA to the interpretation of the total gradient of potential-field data, using a new formulation for isolated causative sources embedded in the subsurface. The model parameters interpreted here are the amplitude coefficient (k), the horizontal origin of the causative source (x0), the depth (z0), and the shape factor (q). The VFSA optimization shows that all model parameters can be resolved very well when the shape factor is fixed. The model parameters estimated by the present method, in particular the shape and depth of the buried structures, were found to be in excellent agreement with the true parameters. The method is also capable of handling very noisy data points and improves the interpretation results. Histogram and cross-plot analyses likewise suggest that the interpretation lies within the estimated uncertainty. Inversion of noise-free and noisy synthetic data for single structures, and of field data, demonstrates the effectiveness of the approach. The procedure has been carefully and successfully applied to real field cases (the Leona anomaly, Senegal, for gravity and the Pima copper deposit, USA, for magnetics), both associated with mineral bodies. The present method is particularly applicable to exploration for minerals or ore bodies of dyke-like structure emplaced in the shallow or deeper subsurface. The computation time for the entire procedure is short.
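    A VFSA sketch under stated assumptions: the forward model below is a generic anomaly of the form k/((x-x0)^2+z0^2)^q, a common idealization for isolated buried bodies but not necessarily the paper's exact formulation, and the shape factor q is held fixed as the abstract recommends. Candidate generation and cooling follow the standard VFSA recipe; a short run typically lands near the true values (k=1000, x0=5, z0=10), though convergence of any single toy run is not guaranteed.

```python
import numpy as np

rng = np.random.default_rng(2)

def forward(x, k, x0, z0, q):
    # Generic anomaly over an idealized buried body (an assumption):
    # A(x) = k / ((x - x0)^2 + z0^2)^q
    return k / ((x - x0) ** 2 + z0 ** 2) ** q

x = np.linspace(-50.0, 50.0, 101)
data = forward(x, 1000.0, 5.0, 10.0, 1.0)       # synthetic "observations"

def misfit(p):                                   # p = (k, x0, z0); q fixed at 1
    return np.mean((forward(x, p[0], p[1], p[2], 1.0) - data) ** 2)

lo = np.array([100.0, -20.0, 2.0])               # parameter search bounds
hi = np.array([5000.0, 20.0, 30.0])
p = lo + rng.random(3) * (hi - lo)
m_cur = m_init = misfit(p)
best_p, best_m = p.copy(), m_cur
for i in range(5000):
    T = np.exp(-0.9 * i ** (1.0 / 3.0))          # VFSA cooling schedule
    u = rng.random(3)
    # Ingber-style heavy-tailed candidate generation:
    step = np.sign(u - 0.5) * T * ((1.0 + 1.0 / T) ** np.abs(2 * u - 1) - 1.0)
    cand = np.clip(p + step * (hi - lo), lo, hi)
    m = misfit(cand)
    if m < m_cur or rng.random() < np.exp(-(m - m_cur) / T):
        p, m_cur = cand, m
        if m < best_m:
            best_p, best_m = p.copy(), m
print(f"best: k={best_p[0]:.0f}, x0={best_p[1]:.1f}, z0={best_p[2]:.1f}")
```

    The heavy-tailed steps let the sampler escape local minima early while the fast cooling concentrates later moves near the current best model.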

  12. Mathematical Model and Calibration Procedure of a PSD Sensor Used in Local Positioning Systems.

    PubMed

    Rodríguez-Navarro, David; Lázaro-Galilea, José Luis; Bravo-Muñoz, Ignacio; Gardel-Vicente, Alfredo; Domingo-Perez, Francisco; Tsirigotis, Georgios

    2016-09-15

    Here, we propose a mathematical model and a calibration procedure for a PSD (position sensitive device) sensor equipped with an optical system, to enable accurate measurement of the angle of arrival of one or more beams of light emitted by infrared (IR) transmitters located at distances of between 4 and 6 m. To achieve this objective, it was necessary to characterize the intrinsic parameters that model the system and obtain their values. This first approach was based on a pin-hole model, to which system nonlinearities were added, and this was used to model the points obtained with the nA currents provided by the PSD. In addition, we analyzed the main sources of error, including PSD sensor signal noise, gain factor imbalances and PSD sensor distortion. The results indicated that the proposed model and method provided satisfactory calibration and yielded precise parameter values, enabling accurate measurement of the angle of arrival with a low degree of error, as evidenced by the experimental results.
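    The pin-hole relation at the heart of the model can be sketched directly: the PSD terminal currents give the spot position on the sensor, and the optic's focal length converts that position to an angle of arrival. All numerical values below (focal length, sensor half-length, currents) are hypothetical:

```python
import numpy as np

def psd_position(i_1, i_2, half_length):
    """Spot coordinate along one axis of a lateral-effect PSD from its two
    terminal currents: x = L * (I2 - I1) / (I1 + I2)."""
    return half_length * (i_2 - i_1) / (i_2 + i_1)

f = 0.025    # focal length of the optical system [m], hypothetical
L = 0.005    # PSD half-length [m], hypothetical
x = psd_position(4e-9, 6e-9, L)      # nA-scale photocurrents, as in the abstract
theta = np.arctan(x / f)             # pin-hole model: angle of arrival
print(f"x = {x * 1e3:.2f} mm, angle of arrival = {np.degrees(theta):.2f} deg")
```

    The calibration described in the paper then layers the nonlinearity, gain-imbalance, and distortion corrections on top of this ideal relation.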

  13. RFI and Remote Sensing of the Earth from Space

    NASA Technical Reports Server (NTRS)

    Le Vine, D. M.; Johnson, J. T.; Piepmeier, J.

    2016-01-01

    Passive microwave remote sensing of the Earth from space provides information essential for understanding the Earth's environment and its evolution. Parameters such as soil moisture, sea surface temperature and salinity, and profiles of atmospheric temperature and humidity are measured at frequencies determined by the physics (e.g. sensitivity to changes in desired parameters) and by the availability of suitable spectrum free from interference. Interference from manmade sources (radio frequency interference) is an impediment that in many cases limits the potential for accurate measurements from space. A review is presented here of the frequencies employed in passive microwave remote sensing of the Earth from space and the associated experience with RFI.

  14. An improved method to estimate reflectance parameters for high dynamic range imaging

    NASA Astrophysics Data System (ADS)

    Li, Shiying; Deguchi, Koichiro; Li, Renfa; Manabe, Yoshitsugu; Chihara, Kunihiro

    2008-01-01

    Two methods are described to accurately estimate diffuse and specular reflectance parameters for colors, gloss intensity and surface roughness, over the dynamic range of the camera used to capture input images. Neither method needs to segment color areas on an image, or to reconstruct a high dynamic range (HDR) image. The second method improves on the first, bypassing the requirement for specific separation of diffuse and specular reflection components. For the latter method, diffuse and specular reflectance parameters are estimated separately, using the least squares method. Reflection values are initially assumed to be diffuse-only reflection components, and are subjected to the least squares method to estimate diffuse reflectance parameters. Specular reflection components, obtained by subtracting the computed diffuse reflection components from reflection values, are then subjected to a logarithmically transformed equation of the Torrance-Sparrow reflection model, and specular reflectance parameters for gloss intensity and surface roughness are finally estimated using the least squares method. Experiments were carried out with both methods on simulation data at different saturation levels, generated according to the Lambert and Torrance-Sparrow reflection models, and with the second method alone on spectral images captured by an imaging spectrograph and a moving light source. Our results show that the second method can estimate the diffuse and specular reflectance parameters for colors, gloss intensity and surface roughness more accurately and faster than the first one, so that colors and gloss can be reproduced more efficiently for HDR imaging.
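    The two-step estimation of the second method can be sketched on synthetic data: first fit a Lambertian diffuse term by least squares as if all reflection were diffuse, then subtract it and fit the log-linearized specular lobe. The lobe below is a simplified Torrance-Sparrow-style Gaussian with the geometry factors dropped, so the numbers are illustrative only:

```python
import numpy as np

# Synthetic reflection samples: Lambertian diffuse + a simplified
# Torrance-Sparrow-style specular lobe (angles in radians).
theta_i = np.linspace(0.0, 1.2, 60)        # incidence angle
alpha = np.abs(theta_i - 0.6)              # angle off the specular direction
kd_true, ks_true, sigma_true = 0.7, 0.5, 0.15
obs = kd_true * np.cos(theta_i) + ks_true * np.exp(-alpha**2 / (2 * sigma_true**2))

# Step 1: treat every sample as diffuse and least-squares fit kd.
A = np.cos(theta_i)[:, None]
kd_est = float(np.linalg.lstsq(A, obs, rcond=None)[0][0])

# Step 2: subtract the diffuse estimate, keep clearly specular samples, and
# fit the log-linearized lobe: ln I_s = ln ks - alpha^2 / (2 sigma^2).
spec = obs - kd_est * np.cos(theta_i)
mask = spec > 0.05
coef = np.polyfit(alpha[mask] ** 2, np.log(spec[mask]), 1)
sigma_est = float(np.sqrt(-1.0 / (2.0 * coef[0])))
ks_est = float(np.exp(coef[1]))
print(f"kd ~ {kd_est:.2f}, ks ~ {ks_est:.2f}, sigma ~ {sigma_est:.2f}")
```

    A single pass like this overestimates kd because the specular samples contaminate the diffuse fit; handling that contamination, and camera saturation, is precisely what the two described methods refine.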

  15. Development of Parameters for the Collection and Analysis of Lidar at Military Munitions Sites

    DTIC Science & Technology

    2010-01-01

    and inertial measurement unit (IMU) equipment is used to locate the sensor in the air. The time of return of the laser signal allows for the... approximately 15 centimeters (cm) on soft ground surfaces and a horizontal accuracy of approximately 60 cm, both compared to surveyed control points... provide more accurate topographic data than other sources, at a reasonable cost compared to alternatives such as ground survey or photogrammetry

  16. A deterministic (non-stochastic) low frequency method for geoacoustic inversion.

    PubMed

    Tolstoy, A

    2010-06-01

    It is well known that multiple frequency sources are necessary for accurate geoacoustic inversion. This paper presents an inversion method which uses the low frequency (LF) spectrum only to estimate bottom properties even in the presence of expected errors in source location, phone depths, and ocean sound-speed profiles. Matched field processing (MFP) along a vertical array is used. The LF method first conducts an exhaustive search of the (five) parameter search space (sediment thickness, sound-speed at the top of the sediment layer, the sediment layer sound-speed gradient, the half-space sound-speed, and water depth) at 25 Hz and continues by retaining only the high MFP value parameter combinations. Next, frequency is slowly increased while again retaining only the high value combinations. At each stage of the process, only those parameter combinations which give high MFP values at all previous LF predictions are considered (an ever shrinking set). It is important to note that a complete search of each relevant parameter space seems to be necessary not only at multiple (sequential) frequencies but also at multiple ranges in order to eliminate sidelobes, i.e., false solutions. Even so, there are no mathematical guarantees that one final, unique "solution" will be found.
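    The shrinking-set search can be illustrated with a toy stand-in for the matched-field processor: an exhaustive grid search at the lowest frequency, then pruning to the parameter combinations that score highly at every frequency considered so far. The two-parameter grid and the cosine-squared "MFP value" below are hypothetical; the survivors that remain away from the true combination illustrate the sidelobes the paper eliminates with multiple ranges:

```python
import numpy as np

# Toy stand-in for a matched-field processing value: peaked at the true
# parameters, with frequency-dependent sidelobes.
def mfp_value(freq, thickness, speed, true=(20.0, 1600.0)):
    phase = freq * ((thickness - true[0]) / 50.0 + (speed - true[1]) / 400.0)
    return np.cos(np.pi * phase) ** 2

thicknesses = np.linspace(5.0, 40.0, 36)      # sediment thickness grid [m]
speeds = np.linspace(1500.0, 1700.0, 41)      # sediment sound-speed grid [m/s]
grid = [(h, c) for h in thicknesses for c in speeds]

# Exhaustive search at the lowest frequency, then prune: keep only the
# combinations that score highly at every frequency considered so far.
surviving = set(range(len(grid)))
for freq in [25.0, 40.0, 65.0, 90.0]:
    surviving = {i for i in surviving if mfp_value(freq, *grid[i]) > 0.9}
print(f"{len(surviving)} of {len(grid)} combinations survive")
```

    The true combination always survives, but so do a few sidelobe lines, echoing the paper's observation that sequential frequencies alone cannot guarantee a unique solution without repeating the search at multiple ranges.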

  17. A statistical kinematic source inversion approach based on the QUESO library for uncertainty quantification and prediction

    NASA Astrophysics Data System (ADS)

    Zielke, Olaf; McDougall, Damon; Mai, Martin; Babuska, Ivo

    2014-05-01

    Seismic data, often augmented with geodetic data, are frequently used to invert for the spatio-temporal evolution of slip along a rupture plane. The resulting images of the slip evolution for a single event, inferred by different research teams, often vary distinctly, depending on the adopted inversion approach and rupture model parameterization. This observation raises the question of which of the provided kinematic source inversion solutions is most reliable and most robust, and, more generally, how accurate fault parameterization and solution predictions are. These issues are not addressed in "standard" source inversion approaches. Here, we present a statistical inversion approach to constrain kinematic rupture parameters from teleseismic body waves. The approach is based a) on a forward-modeling scheme that computes synthetic (body-)waves for a given kinematic rupture model, and b) on the QUESO (Quantification of Uncertainty for Estimation, Simulation, and Optimization) library, which uses MCMC algorithms and Bayes' theorem for sample selection. We present Bayesian inversions for rupture parameters in synthetic earthquakes (i.e., earthquakes for which the exact rupture history is known) in an attempt to identify the cross-over at which further model discretization (spatial and temporal resolution of the parameter space) no longer yields a decreasing misfit. Identifying this cross-over is important because it reveals the resolution power of the studied data set (i.e., teleseismic body waves), enabling one to constrain kinematic earthquake rupture histories of real earthquakes at a resolution that is supported by the data. In addition, the Bayesian approach allows for mapping complete posterior probability density functions of the desired kinematic source parameters, enabling us to rigorously assess the uncertainties in earthquake source inversions.

  18. Amplitude loss of sonic waveform due to source coupling to the medium

    NASA Astrophysics Data System (ADS)

    Lee, Myung W.; Waite, William F.

    2007-03-01

    In contrast to hydrate-free sediments, sonic waveforms acquired in gas hydrate-bearing sediments indicate strong amplitude attenuation associated with a sonic velocity increase. The amplitude attenuation increase has been used to quantify pore-space hydrate content by attributing observed attenuation to the hydrate-bearing sediment's intrinsic attenuation. A second attenuation mechanism must be considered, however. Theoretically, energy radiation from sources inside fluid-filled boreholes strongly depends on the elastic parameters of materials surrounding the borehole. It is therefore plausible to interpret amplitude loss in terms of source coupling to the surrounding medium as well as to intrinsic attenuation. Analyses of sonic waveforms from the Mallik 5L-38 well, Northwest Territories, Canada, indicate a significant component of sonic waveform amplitude loss is due to source coupling. Accordingly, all sonic waveform amplitude analyses should include the effect of source coupling to accurately characterize a formation's intrinsic attenuation.

  19. Amplitude loss of sonic waveform due to source coupling to the medium

    USGS Publications Warehouse

    Lee, Myung W.; Waite, William F.

    2007-01-01

    In contrast to hydrate-free sediments, sonic waveforms acquired in gas hydrate-bearing sediments indicate strong amplitude attenuation associated with a sonic velocity increase. The amplitude attenuation increase has been used to quantify pore-space hydrate content by attributing observed attenuation to the hydrate-bearing sediment's intrinsic attenuation. A second attenuation mechanism must be considered, however. Theoretically, energy radiation from sources inside fluid-filled boreholes strongly depends on the elastic parameters of materials surrounding the borehole. It is therefore plausible to interpret amplitude loss in terms of source coupling to the surrounding medium as well as to intrinsic attenuation. Analyses of sonic waveforms from the Mallik 5L-38 well, Northwest Territories, Canada, indicate a significant component of sonic waveform amplitude loss is due to source coupling. Accordingly, all sonic waveform amplitude analyses should include the effect of source coupling to accurately characterize a formation's intrinsic attenuation.

  20. Numerical modeling of the SNS H⁻ ion source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Veitzer, Seth A.; Beckwith, Kristian R. C.; Kundrapu, Madhusudhan

    Ion source rf antennas that produce H- ions can fail when plasma heating causes ablation of the insulating coating due to small structural defects such as cracks. Reducing antenna failures that reduce the operating capabilities of the Spallation Neutron Source (SNS) accelerator is one of the top priorities of the SNS H- Source Program at ORNL. Numerical modeling of ion sources can provide techniques for optimizing design in order to reduce antenna failures. There are a number of difficulties in developing accurate models of rf inductive plasmas. First, a large range of spatial and temporal scales must be resolved inmore » order to accurately capture the physics of plasma motion, including the Debye length, rf frequencies on the order of tens of MHz, simulation time scales of many hundreds of rf periods, large device sizes on tens of cm, and ion motions that are thousands of times slower than electrons. This results in large simulation domains with many computational cells for solving plasma and electromagnetic equations, short time steps, and long-duration simulations. In order to reduce the computational requirements, one can develop implicit models for both fields and particle motions (e.g. divergence-preserving ADI methods), various electrostatic models, or magnetohydrodynamic models. We have performed simulations using all three of these methods and have found that fluid models have the greatest potential for giving accurate solutions while still being fast enough to perform long timescale simulations in a reasonable amount of time. We have implemented a number of fluid models with electromagnetics using the simulation tool USim and applied them to modeling the SNS H- ion source. We found that a reduced, single-fluid MHD model with an imposed magnetic field due to the rf antenna current and the confining multi-cusp field generated increased bulk plasma velocities of > 200 m/s in the region of the antenna where ablation is often observed in the SNS source. 
We report here on comparisons of simulated plasma parameters and code performance using more accurate physical models, such as two-temperature extended MHD models, for both a related benchmark system describing a inductively coupled plasma reactor, and for the SNS ion source. We also present results from scaling studies for mesh generation and solvers in the USim simulation code.« less
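    The scale disparity behind the first difficulty can be made concrete with a back-of-the-envelope estimate. The electron temperature and density below are generic assumed values for an rf-driven ion-source plasma, not figures from this study:

```python
import math

EPS0 = 8.854e-12   # vacuum permittivity, F/m
QE = 1.602e-19     # elementary charge, C

def debye_length(te_ev, ne):
    """Electron Debye length in meters, for Te in eV and ne in m^-3."""
    return math.sqrt(EPS0 * te_ev / (ne * QE))

# Illustrative (assumed) parameters for an rf-driven ion-source plasma:
te_ev = 5.0    # electron temperature, eV
ne = 1e18      # electron density, m^-3
device = 0.1   # device scale, m (tens of centimeters)

lam = debye_length(te_ev, ne)        # ~tens of micrometers
cells_per_axis = device / lam        # cells an explicit kinetic code would need per axis
```

    With these numbers the Debye length is thousands of times smaller than the device, which is why explicit kinetic simulation is so costly and why the authors turn to implicit and fluid models.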

  1. Transforming Our Understanding of the X-ray Universe: The Imaging X-ray Polarimeter Explorer (IXPE)

    NASA Technical Reports Server (NTRS)

    Weisskopf, Martin C.; Bellazzini, Ronaldo; Costa, Enrico; Matt, Giorgio; Marshall, Herman; ODell, Stephen L.; Pavlov, George; Ramsey, Brian; Romani, Roger

    2014-01-01

    Accurate X-ray polarimetry can provide unique information on high-energy astrophysical processes and sources. As there have been no meaningful X-ray polarization measurements of cosmic sources since our pioneering work in the 1970s, the time is ripe to explore this new parameter space in X-ray astronomy. Accomplishing this requires a well-calibrated and well-understood system that, particularly for an Explorer mission, has technical, cost, and schedule credibility. The system that we shall present satisfies these conditions, being based upon fully calibrated imaging- and polarization-sensitive detectors and proven X-ray-telescope technology.

  2. Engineering light emission of two-dimensional materials in both the weak and strong coupling regimes

    NASA Astrophysics Data System (ADS)

    Brotons-Gisbert, Mauro; Martínez-Pastor, Juan P.; Ballesteros, Guillem C.; Gerardot, Brian D.; Sánchez-Royo, Juan F.

    2018-01-01

    Two-dimensional (2D) materials have promising applications in optoelectronics, photonics, and quantum technologies. However, their intrinsically low light absorption limits their performance, and potential devices must be accurately engineered for optimal operation. Here, we apply a transfer matrix-based source-term method to optimize light absorption and emission in 2D materials and related devices in weak and strong coupling regimes. The implemented analytical model accurately accounts for experimental results reported for representative 2D materials such as graphene and MoS2. The model has been extended to propose structures to optimize light emission by exciton recombination in MoS2 single layers, light extraction from arbitrarily oriented dipole monolayers, and single-photon emission in 2D materials. Also, it has been successfully applied to retrieve exciton-cavity interaction parameters from MoS2 microcavity experiments. The present model appears as a powerful and versatile tool for the design of new optoelectronic devices based on 2D semiconductors such as quantum light sources and polariton lasers.

  3. Incorporation of an Energy Equation into a Pulsed Inductive Thruster Performance Model

    NASA Technical Reports Server (NTRS)

    Polzin, Kurt A.; Reneau, Jarred P.; Sankaran, Kameshwaran

    2011-01-01

    A model for pulsed inductive plasma acceleration containing an energy equation to account for the various sources and sinks in such devices is presented. The model consists of a set of circuit equations coupled to an equation of motion and energy equation for the plasma. The latter two equations are obtained for the plasma current sheet by treating it as a one-element finite volume, integrating the equations over that volume, and then matching known terms or quantities already calculated in the model to the resulting current sheet-averaged terms in the equations. Calculations showing the time-evolution of the various sources and sinks in the system are presented to demonstrate the efficacy of the model, with two separate resistivity models employed to show an example of how the plasma transport properties can affect the calculation. While neither resistivity model is fully accurate, the demonstration shows that it is possible within this modeling framework to time-accurately update various plasma parameters.

  4. Error model of geomagnetic-field measurement and extended Kalman-filter based compensation method

    PubMed Central

    Ge, Zhilei; Liu, Suyun; Li, Guopeng; Huang, Yan; Wang, Yanni

    2017-01-01

    Real-time, accurate measurement of the geomagnetic field is the foundation for achieving high-precision geomagnetic navigation. The existing geomagnetic-field measurement models are essentially simplified models that cannot accurately describe the sources of measurement error. On the basis of a systematic analysis of the sources of geomagnetic-field measurement error, this paper builds a complete measurement model that introduces the previously unconsidered geomagnetic daily variation field. It then proposes an extended Kalman-filter based compensation method, which allows a large amount of measurement data to be used in estimating parameters, yielding the statistically optimal solution. The experimental results showed that the compensated strength of the geomagnetic field remained close to the true value and that the measurement error was generally kept within 5 nT. In addition, this compensation method is widely applicable, owing to its easy data collection and its removal of the dependence on a high-precision measurement instrument. PMID:28445508
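    The core idea, using many measurements to beat down error statistically, can be sketched with the simplest case: a scalar Kalman filter estimating a constant sensor bias. This is the linear special case of the extended Kalman filter used in the paper, and all numbers (a 30 nT offset seen through 5 nT noise) are illustrative:

```python
import random

def kalman_constant(measurements, r, x0=0.0, p0=100.0):
    """Scalar Kalman filter estimating a constant state (e.g. a sensor bias)
    from measurements with noise variance r; returns estimate and variance."""
    x, p = x0, p0
    for z in measurements:
        k = p / (p + r)         # Kalman gain
        x = x + k * (z - x)     # measurement update
        p = (1.0 - k) * p       # posterior variance shrinks with each sample
    return x, p

# Simulated constant magnetometer offset of 30 nT seen through 5 nT noise
# (purely illustrative numbers, not values from the paper).
random.seed(1)
readings = [30.0 + random.gauss(0.0, 5.0) for _ in range(500)]
bias_est, bias_var = kalman_constant(readings, r=25.0)
```

    After a few hundred samples the estimate settles far inside the single-measurement noise band, which is how a filter of this kind can hold the compensated error to a few nT despite noisy individual readings.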

  5. Numerical investigation and electro-acoustic modeling of measurement methods for the in-duct acoustical source parameters.

    PubMed

    Jang, Seung-Ho; Ih, Jeong-Guon

    2003-02-01

    It is known that the direct method yields different results from the indirect (or load) method when measuring the in-duct acoustic source parameters of fluid machines. The load method usually yields a negative source resistance, although a fairly accurate prediction of radiated noise can be obtained from either method. This study focuses on the effect of the time-varying nature of fluid machines on the output of the two typical measurement methods. For this purpose, a simplified fluid machine consisting of a reservoir, a valve, and an exhaust pipe is considered as representative of a typical periodic, time-varying system, and the measurement situations are simulated using the method of characteristics. The equivalent circuits for these simulations are also analyzed by treating the system as having a linear time-varying source. It is found that the results from the load method are quite sensitive to changes in cylinder pressure or valve profile, in contrast to those from the direct method. In the load method, the source admittance turns out to depend predominantly on the valve admittance at the calculation frequency as well as on the valve and load admittances at other frequencies. In the direct method, however, the source resistance is always positive and the source admittance depends mainly upon the zeroth order of the valve admittance.

  6. Monte Carlo dose calculations of beta-emitting sources for intravascular brachytherapy: a comparison between EGS4, EGSnrc, and MCNP.

    PubMed

    Wang, R; Li, X A

    2001-02-01

    The dose parameters for the beta-particle emitting 90Sr/90Y source for intravascular brachytherapy (IVBT) have been calculated by different investigators. At larger distances from the source, noticeable differences are seen in these parameters when calculated with different Monte Carlo codes. The purpose of this work is to quantify and understand these differences. We have compared a series of calculations using the EGS4, EGSnrc, and MCNP Monte Carlo codes. The data calculated and compared include the depth-dose curve for a broad parallel beam of electrons, and radial dose distributions for point electron sources (monoenergetic or polyenergetic) and for a real 90Sr/90Y source. For the 90Sr/90Y source, the doses at the reference position (2 mm radial distance) calculated by the three codes agree within 2%. However, the differences between the doses calculated by the three codes can exceed 20% in the radial distance range of interest in IVBT. The difference increases with radial distance from the source, reaching 30% at the tail of the dose curve. These differences may be partially attributed to the different multiple-scattering theories and Monte Carlo models for electron transport adopted in the three codes. Doses calculated by the EGSnrc code are more accurate than those by EGS4. The two calculations agree within 5% for radial distances <6 mm.

  7. New developments in the McStas neutron instrument simulation package

    NASA Astrophysics Data System (ADS)

    Willendrup, P. K.; Knudsen, E. B.; Klinkby, E.; Nielsen, T.; Farhi, E.; Filges, U.; Lefmann, K.

    2014-07-01

    The McStas neutron ray-tracing software package is a versatile tool for building accurate simulators of neutron scattering instruments at reactors, short- and long-pulsed spallation sources such as the European Spallation Source. McStas is extensively used for design and optimization of instruments, virtual experiments, data analysis and user training. McStas was founded as a scientific, open-source collaborative code in 1997. This contribution presents the project at its current state and gives an overview of the main new developments in McStas 2.0 (December 2012) and McStas 2.1 (expected fall 2013), including many new components, component parameter uniformisation, partial loss of backward compatibility, updated source brilliance descriptions, developments toward new tools and user interfaces, web interfaces and a new method for estimating beam losses and background from neutron optics.

  8. White-light Interferometry using a Channeled Spectrum: II. Calibration Methods, Numerical and Experimental Results

    NASA Technical Reports Server (NTRS)

    Zhai, Chengxing; Milman, Mark H.; Regehr, Martin W.; Best, Paul K.

    2007-01-01

    In the companion paper [Appl. Opt. 46, 5853 (2007)], a highly accurate white-light interference model was developed from just a few key parameters characterized in terms of various moments of the source and instrument transmission function. We develop and implement the end-to-end process of calibrating these moment parameters together with the differential dispersion of the instrument, and we apply them to the algorithms developed in the companion paper. The calibration procedure developed herein is based on first obtaining the standard monochromatic parameters at the pixel level: wavenumber, phase, intensity, and visibility, via a nonlinear least-squares procedure that exploits the structure of the model. The pixel-level parameters are then combined to obtain the required 'global' moment and dispersion parameters. The process is applied both to simulated scenarios of astrometric observations and to data from the microarcsecond metrology testbed (MAM), an interferometer testbed that has played a prominent role in the development of this technology.
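    The pixel-level fit can be sketched for the simplest sub-problem. The paper's procedure is nonlinear (the wavenumber is fitted too); the sketch below takes the wavenumber as known, in which case intensity, visibility, and phase follow from an ordinary linear least-squares fit. All parameter values are invented, not MAM data:

```python
import math
import numpy as np

def fit_fringe(x, intensity, k):
    """Linear least-squares fit of I(x) = I0*(1 + V*cos(k*x + phi)) for known k:
    I = a + b*cos(kx) + c*sin(kx), so I0 = a, V = hypot(b, c)/a, phi = atan2(-c, b)."""
    A = np.column_stack([np.ones_like(x), np.cos(k * x), np.sin(k * x)])
    a, b, c = np.linalg.lstsq(A, intensity, rcond=None)[0]
    return a, math.hypot(b, c) / a, math.atan2(-c, b)

# Synthetic single-pixel fringe samples with invented parameters (not MAM data)
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 200)            # scan positions
k, i0, vis, phi = 2.0, 1.5, 0.8, 0.3       # "true" pixel parameters
data = i0 * (1.0 + vis * np.cos(k * x + phi)) + rng.normal(0.0, 0.01, x.size)

i0_fit, vis_fit, phi_fit = fit_fringe(x, data, k)
```

    Wrapping an outer one-dimensional search over k around this linear solve is one standard way to exploit the model structure the abstract alludes to.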

  9. Recording and quantification of ultrasonic echolocation clicks from free-ranging toothed whales

    NASA Astrophysics Data System (ADS)

    Madsen, P. T.; Wahlberg, M.

    2007-08-01

    Toothed whales produce short, ultrasonic clicks of high directionality and source level to probe their environment acoustically. This process, termed echolocation, is in large part governed by the properties of the emitted clicks. The derivation of click source parameters from free-ranging animals is therefore of increasing importance for understanding both how toothed whales use echolocation in the wild and how they may be monitored acoustically. This paper addresses how source parameters can be derived from free-ranging toothed whales in the wild using calibrated multi-hydrophone arrays and digital recorders. We outline the properties required of hydrophones, amplifiers, and analog-to-digital converters, and discuss the problems of recording echolocation clicks on the axis of a directional sound beam. For accurate localization, the hydrophone array aperture must be adapted and scaled to the behavior of, and the range to, the clicking animal, and precise information on hydrophone locations is critical. We provide examples of localization routines and outline sources of error that lead to uncertainties in localizing clicking animals in time and space. Furthermore, we explore approaches to time-series analysis of discrete versions of toothed whale clicks that are meaningful in a biosonar context.
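    A minimal localization routine of the kind the paper surveys can be sketched as a grid search over candidate source positions, matching measured time-differences of arrival (TDOA) on a synchronized array. The 2-D geometry, array aperture, and sound speed below are assumed for illustration:

```python
import math

C = 1500.0  # assumed nominal sound speed in seawater, m/s

# Four-hydrophone square array, 20 m aperture (illustrative geometry)
phones = [(0.0, 0.0), (20.0, 0.0), (0.0, 20.0), (20.0, 20.0)]

def arrival_times(src):
    return [math.dist(src, p) / C for p in phones]

def locate(tdoa, grid=200, span=100.0):
    """Grid search for the source position whose time-differences of arrival
    (relative to phone 0) best match the measured ones."""
    best, best_err = None, math.inf
    for i in range(grid + 1):
        for j in range(grid + 1):
            s = (-span / 2 + i * span / grid, -span / 2 + j * span / grid)
            t = arrival_times(s)
            err = sum((t[m] - t[0] - d) ** 2 for m, d in enumerate(tdoa))
            if err < best_err:
                best, best_err = s, err
    return best

true_src = (35.0, 10.0)          # "clicking whale" position in array coordinates, m
t = arrival_times(true_src)
tdoa = [tm - t[0] for tm in t]   # the measurable quantity on a synchronized array
est = locate(tdoa)
```

    The same framework makes the abstract's point about scaling tangible: as the source range grows relative to the aperture, the TDOA surface flattens and small timing or hydrophone-position errors translate into large position uncertainty.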

  10. Astrophysics to z approx. 10 with Gravitational Waves

    NASA Technical Reports Server (NTRS)

    Stebbins, Robin; Hughes, Scott; Lang, Ryan

    2007-01-01

    The most useful characterization of a gravitational wave detector's performance is the accuracy with which astrophysical parameters of potential gravitational wave sources can be estimated. One of the most important source types for the Laser Interferometer Space Antenna (LISA) is inspiraling binaries of black holes. LISA can measure mass and spin to better than 1% for a wide range of masses, even out to high redshifts. The most difficult parameter to estimate accurately is almost always luminosity distance. Nonetheless, LISA can measure the luminosity distance of intermediate-mass black hole binary systems (total mass ~10^4 solar masses) out to z ~ 10, with distance accuracies approaching 25% in many cases. With this performance, LISA will be able to follow the merger history of black holes from the earliest mergers of proto-galaxies to the present. LISA's performance as a function of mass from 1 to 10^7 solar masses and of redshift out to z ~ 30 will be described, along with the re-formulation of LISA's science requirements based on an instrument sensitivity model and parameter estimation.

  11. Prediction and assimilation of surf-zone processes using a Bayesian network: Part I: Forward models

    USGS Publications Warehouse

    Plant, Nathaniel G.; Holland, K. Todd

    2011-01-01

    Prediction of coastal processes, including waves, currents, and sediment transport, can be obtained from a variety of detailed geophysical-process models with many simulations showing significant skill. This capability supports a wide range of research and applied efforts that can benefit from accurate numerical predictions. However, the predictions are only as accurate as the data used to drive the models and, given the large temporal and spatial variability of the surf zone, inaccuracies in data are unavoidable such that useful predictions require corresponding estimates of uncertainty. We demonstrate how a Bayesian-network model can be used to provide accurate predictions of wave-height evolution in the surf zone given very sparse and/or inaccurate boundary-condition data. The approach is based on a formal treatment of a data-assimilation problem that takes advantage of significant reduction of the dimensionality of the model system. We demonstrate that predictions of a detailed geophysical model of the wave evolution are reproduced accurately using a Bayesian approach. In this surf-zone application, forward prediction skill was 83%, and uncertainties in the model inputs were accurately transferred to uncertainty in output variables. We also demonstrate that if modeling uncertainties were not conveyed to the Bayesian network (i.e., perfect data or model were assumed), then overly optimistic prediction uncertainties were computed. More consistent predictions and uncertainties were obtained by including model-parameter errors as a source of input uncertainty. Improved predictions (skill of 90%) were achieved because the Bayesian network simultaneously estimated optimal parameters while predicting wave heights.
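    The elementary operation inside such a Bayesian network is a single conditional update: a discrete distribution over a variable (here, wave height) is conditioned on a sparse, noisy observation, and the result carries its uncertainty explicitly. The states, prior, and observation model below are invented for illustration, not taken from the surf-zone model:

```python
import math

# Discretized offshore wave height (m) with a uniform prior (illustrative values)
states = [0.5, 1.0, 1.5, 2.0]
prior = [0.25, 0.25, 0.25, 0.25]

def update(prior, z, sigma):
    """Single Bayesian-network node update: condition the wave-height
    distribution on one noisy boundary observation z with std-dev sigma."""
    like = [math.exp(-0.5 * ((z - h) / sigma) ** 2) for h in states]
    w = [p * l for p, l in zip(prior, like)]
    s = sum(w)
    return [x / s for x in w]

# One noisy boundary-condition observation of 1.1 m still yields a usable
# posterior, with the residual uncertainty quantified rather than discarded.
post = update(prior, z=1.1, sigma=0.2)
```

    Chaining such updates across linked variables (boundary conditions, model parameters, wave heights) is what lets the network both propagate input uncertainty to the outputs and estimate parameters while predicting, as the abstract describes.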

  12. Verification of Plutonium Content in PuBe Sources Using MCNP® 6.2.0 Beta with TENDL 2012 Libraries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lockhart, Madeline Louise; McMath, Garrett Earl

    Although the production of PuBe neutron sources has been discontinued, hundreds of sources with unknown or inaccurately declared plutonium content exist around the world. Institutions have undertaken the task of assaying these sources: measuring and calculating the isotopic composition, plutonium content, and neutron yield. The nominal plutonium content, based on the neutron yield per gram of pure 239Pu, has been shown to be highly inaccurate. New methods of measuring the plutonium content allow a more accurate estimate of the true Pu content, but these measurements need verification. Using the TENDL 2012 nuclear data libraries, MCNP6 has the capability to simulate the (α, n) interactions in a PuBe source. Theoretically, if the source is modeled according to the plutonium content, isotopic composition, and other source characteristics, the calculated neutron yield in MCNP can be compared to the experimental yield, offering an indication of the accuracy of the declared plutonium content. In this study, three sets of PuBe sources from various backgrounds were modeled in MCNP 6.2.0 Beta, according to the source specifications dictated by the individuals who assayed each source. Verification of the source parameters with MCNP6 also serves as a means to test the alpha-transport capabilities of MCNP 6.2.0 Beta with the TENDL 2012 alpha-transport libraries. Finally, good agreement in the comparison would indicate the accuracy of the source parameters, in addition to demonstrating MCNP's capabilities in simulating (α, n) interactions.

  13. Verification of Plutonium Content in PuBe Sources Using MCNP® 6.2.0 Beta with TENDL 2012 Libraries

    DOE PAGES

    Lockhart, Madeline Louise; McMath, Garrett Earl

    2017-10-26

    Although the production of PuBe neutron sources has been discontinued, hundreds of sources with unknown or inaccurately declared plutonium content exist around the world. Institutions have undertaken the task of assaying these sources: measuring and calculating the isotopic composition, plutonium content, and neutron yield. The nominal plutonium content, based on the neutron yield per gram of pure 239Pu, has been shown to be highly inaccurate. New methods of measuring the plutonium content allow a more accurate estimate of the true Pu content, but these measurements need verification. Using the TENDL 2012 nuclear data libraries, MCNP6 has the capability to simulate the (α, n) interactions in a PuBe source. Theoretically, if the source is modeled according to the plutonium content, isotopic composition, and other source characteristics, the calculated neutron yield in MCNP can be compared to the experimental yield, offering an indication of the accuracy of the declared plutonium content. In this study, three sets of PuBe sources from various backgrounds were modeled in MCNP 6.2.0 Beta, according to the source specifications dictated by the individuals who assayed each source. Verification of the source parameters with MCNP6 also serves as a means to test the alpha-transport capabilities of MCNP 6.2.0 Beta with the TENDL 2012 alpha-transport libraries. Finally, good agreement in the comparison would indicate the accuracy of the source parameters, in addition to demonstrating MCNP's capabilities in simulating (α, n) interactions.

  14. Spatiotemporal dynamics in excitable homogeneous random networks composed of periodically self-sustained oscillation.

    PubMed

    Qian, Yu; Liu, Fei; Yang, Keli; Zhang, Ge; Yao, Chenggui; Ma, Jun

    2017-09-19

    The collective behaviors of networks often depend on the network connections and bifurcation parameters, and the local kinetics also plays an important role in shaping the consensus of coupled oscillators. In this paper, we systematically investigate the influence of network structure and system parameters on the spatiotemporal dynamics in excitable homogeneous random networks (EHRNs) composed of periodically self-sustained oscillation (PSO). By using the dominant phase-advanced driving (DPAD) method, the one-dimensional (1D) Winfree loop is exposed as the oscillation source supporting the PSO, and the accurate wave-propagation pathways from the oscillation source to the whole network are uncovered. Then, an order parameter is introduced to quantitatively study the influence of network structure and system parameters on the spatiotemporal dynamics of PSO in EHRNs. Distinct results induced by the network structures and the system parameters are observed, and the corresponding mechanisms are revealed. The influence of network structure on PSO arises not only from the change of the average path length (APL) of the network, but also from the invasion of the 1D Winfree loop by outside linking nodes. Moreover, the influence of system parameters on PSO is determined by the excitation threshold and the minimum 1D Winfree loop. Finally, we confirm that PSO determined by the excitation threshold and the minimum 1D Winfree loop will degenerate as the system size is expanded.

  15. Advanced Method to Estimate Fuel Slosh Simulation Parameters

    NASA Technical Reports Server (NTRS)

    Schlee, Keith; Gangadharan, Sathya; Ristow, James; Sudermann, James; Walker, Charles; Hubert, Carl

    2005-01-01

    The nutation (wobble) of a spinning spacecraft in the presence of energy dissipation is a well-known problem in dynamics and is of particular concern for space missions. The nutation of a spacecraft spinning about its minor axis typically grows exponentially, and the rate of growth is characterized by the Nutation Time Constant (NTC). For launch vehicles using spin-stabilized upper stages, fuel slosh in the spacecraft propellant tanks is usually the primary source of energy dissipation. For analytical prediction of the NTC, this fuel slosh is commonly modeled using simple mechanical analogies such as pendulums or rigid rotors coupled to the spacecraft. Identifying model parameter values that adequately represent the sloshing dynamics is the most important step in obtaining an accurate NTC estimate. Analytic determination of the slosh model parameters has met with mixed success and is made even more difficult by the introduction of propellant management devices and elastomeric diaphragms. By subjecting full-sized fuel tanks with actual flight fuel loads to motion similar to that experienced in flight, and measuring the forces experienced by the tanks, these parameters can be determined experimentally. Currently, the identification of the model parameters is a laborious trial-and-error process in which the equations of motion for the mechanical analog are hand-derived and evaluated, and their results compared with the experimental results. The proposed research is an effort to automate the process of identifying the parameters of the slosh model using a MATLAB/SimMechanics-based computer simulation of the experimental setup. Different parameter estimation and optimization approaches are evaluated and compared in order to arrive at a reliable and effective parameter identification process. To evaluate each parameter identification approach, a simple one-degree-of-freedom pendulum experiment is constructed and motion is induced using an electric motor.
By applying the estimation approach to a simple, accurately modeled system, its effectiveness and accuracy can be evaluated. The same experimental setup can then be used with fluid-filled tanks to further evaluate the effectiveness of the process. Ultimately, the proven process can be applied to the full-sized spinning experimental setup to quickly and accurately determine the slosh model parameters for a particular spacecraft mission. Automating the parameter identification process will save time, allow more changes to be made to proposed designs, and lower the cost in the initial design stages.
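    A minimal stand-in for that automated identification loop: generate a synthetic free-decay record from an assumed damped-pendulum analog, then recover its frequency and damping by brute-force least squares. The actual work uses MATLAB/SimMechanics and more capable optimizers; the model form and every parameter value here are invented for illustration:

```python
import math

def response(omega, zeta, t):
    """Free decay of a linear damped pendulum analog (unit initial displacement)."""
    wd = omega * math.sqrt(1.0 - zeta ** 2)   # damped natural frequency
    return math.exp(-zeta * omega * t) * math.cos(wd * t)

# "Experimental" record generated with assumed true parameters
ts = [0.02 * i for i in range(500)]
true_w, true_z = 6.0, 0.05
data = [response(true_w, true_z, t) for t in ts]

# Brute-force identification: minimize the sum-of-squares misfit between
# the model response and the measured record over a parameter grid.
best, best_err = None, math.inf
for iw in range(40, 81):            # omega candidates: 4.0 .. 8.0 rad/s
    for iz in range(1, 21):         # zeta candidates: 0.01 .. 0.20
        w, z = iw * 0.1, iz * 0.01
        err = sum((response(w, z, t) - d) ** 2 for d, t in zip(data, ts))
        if err < best_err:
            best, best_err = (w, z), err
w_est, z_est = best
```

    Replacing the grid with a gradient-based or derivative-free optimizer, and the analytic response with the SimMechanics simulation of the tank rig, gives the automated process the abstract proposes.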

  16. Evaluation of an adaptive detector collimation for prospectively ECG-triggered coronary CT angiography with third-generation dual-source CT.

    PubMed

    Messerli, Michael; Dewes, Patricia; Scholtz, Jan-Erik; Arendt, Christophe; Wildermuth, Simon; Vogl, Thomas J; Bauer, Ralf W

    2018-05-01

    To investigate the impact of an adaptive detector collimation on dose parameters and the accuracy of scan-length adaptation in prospectively ECG-triggered sequential cardiac CT with a wide-detector third-generation dual-source CT. Ideal scan lengths for human hearts were retrospectively derived from 103 triple-rule-out examinations. These measures were entered into the new scanner, operated in prospectively ECG-triggered sequential cardiac scan mode with three different detector settings: (1) adaptive collimation, (2) fixed 64 × 0.6-mm collimation, and (3) fixed 96 × 0.6-mm collimation. Differences in effective scan length, deviation from the ideal scan length, and dose parameters (CTDIvol, DLP) were documented. The ideal cardiac scan length could be matched by the adaptive collimation in every case, while the mean scanned length was longer by 15.4% with the 64 × 0.6-mm collimation and by 27.2% with the fixed 96 × 0.6-mm collimation. While the DLP was almost identical between the adaptive and the 64 × 0.6-mm collimation (83 vs. 89 mGycm at 120 kV), it was 62.7% higher with the 96 × 0.6-mm collimation (135 mGycm), p < 0.001. The adaptive detector collimation for prospectively ECG-triggered sequential acquisition allows the scan length to be adjusted as accurately as can otherwise be achieved only with a spiral acquisition. This technique keeps patient exposure low where dose would otherwise increase significantly with the traditional step-and-shoot mode. • Adaptive detector collimation keeps patient exposure low in cardiac CT. • With novel detectors the desired scan length can be accurately matched. • Differences in detector settings may cause up to 62.7% excess dose.

  17. Identifying isotropic events using a regional moment tensor inversion

    DOE PAGES

    Ford, Sean R.; Dreger, Douglas S.; Walter, William R.

    2009-01-17

    We calculate the deviatoric and isotropic source components for 17 explosions at the Nevada Test Site, as well as 12 earthquakes and 3 collapses in the surrounding region of the western United States, using a regional time-domain full-waveform inversion for the complete moment tensor. The events separate into specific populations according to their deviation from a pure double-couple and their ratio of isotropic to deviatoric energy. The separation allows for anomalous event identification and discrimination between explosions, earthquakes, and collapses. Confidence regions of the model parameters are estimated from the data misfit by assuming normally distributed parameter values. We investigate the sensitivity of the resolved parameters of an explosion to imperfect Earth models, inaccurate event depths, and data with low signal-to-noise ratio (SNR), assuming a reasonable azimuthal distribution of stations. In the band of interest (0.02–0.10 Hz), the source type calculated from the complete moment tensor inversion is insensitive to velocity-model perturbations that cause less than a half-cycle shift (<5 s) in arrival-time error, provided shifting of the waveforms is allowed. The explosion source type is insensitive to an incorrect depth assumption (for a true depth of 1 km), and the goodness of fit of the inversion result cannot be used to resolve the true depth of the explosion. Noise degrades the explosive character of the result; a good fit and an accurate result are obtained when the signal-to-noise ratio is greater than 5. We assess the depth and frequency dependence of the resolved explosive moment. As the depth decreases from 1 km to 200 m, the isotropic moment is no longer accurately resolved and is in error by 50–200%. Even at the most shallow depth, however, the resultant moment tensor is dominated by the explosive component when the data have a good SNR.

  18. HST Imaging of the Eye of Horus, a Double Source Plane Gravitational Lens

    NASA Astrophysics Data System (ADS)

    Wong, Kenneth

    2017-08-01

    Double source plane (DSP) gravitational lenses are extremely rare alignments of a massive lens galaxy with two background sources at distinct redshifts. The presence of two source planes provides important constraints on cosmology and galaxy structure beyond that of typical lens systems by breaking degeneracies between parameters that vary with source redshift. While these systems are extremely valuable, only a handful are known. We have discovered the first DSP lens, the Eye of Horus, in the Hyper Suprime-Cam survey and have confirmed both source redshifts with follow-up spectroscopy, making this the only known DSP lens with both source redshifts measured. Furthermore, the brightest image of the most distant source (S2) is split into a pair of images by a mass component that is undetected in our ground-based data, suggesting the presence of a satellite or line-of-sight galaxy causing this splitting. In order to better understand this system and use it for cosmology and galaxy studies, we must construct an accurate lens model, accounting for the lensing effects of both the main lens galaxy and the intermediate source. Only with deep, high-resolution imaging from HST/ACS can we accurately model this system. Our proposed multiband imaging will clearly separate out the two sources by their distinct colors, allowing us to use their extended surface brightness distributions as constraints on our lens model. These data may also reveal the satellite galaxy responsible for the splitting of the brightest image of S2. With these observations, we will be able to take full advantage of the wealth of information provided by this system.
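The extra constraining power of a second source plane comes from the fact that lensing strength scales with the distance ratio D_A(lens, source)/D_A(0, source), which differs between the two sources. A rough sketch of that ratio in flat LambdaCDM, with illustrative redshifts and cosmological parameters (not the measured values for this system):

```python
import math

def comoving_mpc(z, h0=70.0, om=0.3):
    """Line-of-sight comoving distance in flat LambdaCDM via a simple
    trapezoid integration (H0 and Omega_m are illustrative values)."""
    c = 299792.458  # speed of light, km/s
    n = 2000
    dz = z / n if n else 0.0
    zs = [i * dz for i in range(n + 1)]
    inv_e = [1.0 / math.sqrt(om * (1 + zz)**3 + (1 - om)) for zz in zs]
    integral = dz * (sum(inv_e) - 0.5 * (inv_e[0] + inv_e[-1]))
    return c / h0 * integral

def angular_diameter_mpc(z1, z2):
    """Angular diameter distance between two redshifts (flat universe only)."""
    return (comoving_mpc(z2) - comoving_mpc(z1)) / (1 + z2)

# Lensing strength per source ~ D_A(lens, source) / D_A(0, source);
# lens and source redshifts below are placeholders, not the measured ones.
for zs in (1.4, 1.99):
    print(round(angular_diameter_mpc(0.8, zs) / angular_diameter_mpc(0.0, zs), 3))
```

The ratio grows with source redshift, which is why two sources at distinct redshifts break degeneracies that a single source plane cannot.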

  19. Near Identifiability of Dynamical Systems

    NASA Technical Reports Server (NTRS)

    Hadaegh, F. Y.; Bekey, G. A.

    1987-01-01

Concepts regarding approximate mathematical models treated rigorously. Paper presents new results in analysis of structural identifiability, equivalence, and near equivalence between mathematical models and physical processes they represent. Helps establish rigorous mathematical basis for concepts related to structural identifiability and equivalence, revealing fundamental requirements, tacit assumptions, and sources of error. "Structural identifiability," as used by workers in this field, loosely translates as the ability to specify a unique mathematical model and set of model parameters that accurately predict behavior of the corresponding physical system.

  20. Cloud storage based mobile assessment facility for patients with post-traumatic stress disorder using integrated signal processing algorithm

    NASA Astrophysics Data System (ADS)

    Balbin, Jessie R.; Pinugu, Jasmine Nadja J.; Basco, Abigail Joy S.; Cabanada, Myla B.; Gonzales, Patrisha Melrose V.; Marasigan, Juan Carlos C.

    2017-06-01

The research aims to build a tool for assessing patients for post-traumatic stress disorder (PTSD). The parameters used are heart rate, skin conductivity, and facial gestures. Facial gestures are recorded using OpenFace, an open-source face recognition program that uses facial action units to track facial movements. Heart rate and skin conductivity are measured through sensors operated using a Raspberry Pi. Results are stored in a database for easy and quick access. The databases used are uploaded to a cloud platform so that doctors have direct access to the data. This research aims to analyze these parameters and give an accurate assessment of the patient.
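The storage step of such a pipeline can be sketched with a local database standing in for the cloud platform; the sensor reads are stubbed and all table and field names below are hypothetical:

```python
import sqlite3
import time

def store_reading(conn, heart_rate_bpm, skin_conductance_uS, action_units):
    """Persist one timestamped multi-sensor reading (field names are
    hypothetical; real sensor drivers and cloud sync are out of scope)."""
    conn.execute(
        "INSERT INTO readings (ts, heart_rate, skin_conductance, action_units)"
        " VALUES (?, ?, ?, ?)",
        (time.time(), heart_rate_bpm, skin_conductance_uS, action_units))
    conn.commit()

conn = sqlite3.connect(":memory:")  # stand-in for the cloud-synced database
conn.execute("CREATE TABLE readings (ts REAL, heart_rate REAL,"
             " skin_conductance REAL, action_units TEXT)")
store_reading(conn, 72.0, 4.1, "AU04,AU07")  # example OpenFace action units
print(conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0])  # 1
```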

  1. Multi-level emulation of a volcanic ash transport and dispersion model to quantify sensitivity to uncertain parameters

    NASA Astrophysics Data System (ADS)

    Harvey, Natalie J.; Huntley, Nathan; Dacre, Helen F.; Goldstein, Michael; Thomson, David; Webster, Helen

    2018-01-01

Following the disruption to European airspace caused by the eruption of Eyjafjallajökull in 2010, there has been a move towards producing quantitative predictions of volcanic ash concentration using volcanic ash transport and dispersion simulators. However, there is no formal framework for determining the uncertainties of these predictions, and performing many simulations using these complex models is computationally expensive. In this paper a Bayesian linear emulation approach is applied to the Numerical Atmospheric-dispersion Modelling Environment (NAME) to better understand the influence of source and internal model parameters on the simulator output. Emulation is a statistical method for predicting the output of a computer simulator at new parameter choices without actually running the simulator. A multi-level emulation approach is applied using two configurations of NAME with different numbers of model particles. Information from many evaluations of the computationally faster configuration is combined with results from relatively few evaluations of the slower, more accurate, configuration. This approach is effective when it is not possible to run the accurate simulator many times and when there is also little prior knowledge about the influence of parameters. The approach is applied to the mean ash column loading in 75 geographical regions on 14 May 2010. Through this analysis it has been found that the parameters that contribute the most to the output uncertainty are initial plume rise height, mass eruption rate, free tropospheric turbulence levels and precipitation threshold for wet deposition. This information can be used to inform future model development, observational campaigns and routine monitoring. The analysis presented here suggests the need for further observational and theoretical research into parameterisation of atmospheric turbulence. Furthermore, it can also be used to inform the most important parameter perturbations for a small operational ensemble of simulations. The use of an emulator also identifies the input and internal parameters that do not contribute significantly to simulator uncertainty. Finally, the analysis highlights that the faster, less accurate configuration of NAME can, on its own, provide useful information for the problem of predicting average column load over large areas.
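The core idea of emulation, fitting a cheap statistical surrogate to a modest number of simulator runs and then predicting at new parameter choices without running the simulator again, can be illustrated with an ordinary least-squares fit. This is a toy stand-in, not the Bayes linear emulator of the paper, and the parameter ranges and "simulator" below are invented:

```python
import numpy as np

# Hypothetical training design: the simulator evaluated at 20 parameter
# choices. Columns: plume height (km) and log10 mass eruption rate (kg/s).
rng = np.random.default_rng(0)
X = rng.uniform([2.0, 5.0], [12.0, 8.0], size=(20, 2))

def cheap_simulator(x):
    """Invented stand-in for the fast NAME configuration."""
    return 0.8 * x[:, 0] + 2.5 * x[:, 1]

y = cheap_simulator(X)

# Linear emulator: fit coefficients once, then predict new outputs
# without ever calling the simulator again.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

x_new = np.array([[7.0, 6.5]])                      # an untried parameter choice
pred = np.column_stack([np.ones(1), x_new]) @ coef
print(round(float(pred[0]), 2))  # 21.85
```

Because the toy simulator is exactly linear, the emulator reproduces it; the paper's multi-level version additionally borrows strength from many cheap runs to correct a few expensive ones.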

  2. How sensitive is earthquake ground motion to source parameters? Insights from a numerical study in the Mygdonian basin

    NASA Astrophysics Data System (ADS)

    Chaljub, Emmanuel; Maufroy, Emeline; deMartin, Florent; Hollender, Fabrice; Guyonnet-Benaize, Cédric; Manakou, Maria; Savvaidis, Alexandros; Kiratzi, Anastasia; Roumelioti, Zaferia; Theodoulidis, Nikos

    2014-05-01

Understanding the origin of the variability of earthquake ground motion is critical for seismic hazard assessment. Here we present the results of a numerical analysis of the sensitivity of earthquake ground motion to seismic source parameters, focusing on the Mygdonian basin near Thessaloniki (Greece). We use an extended model of the basin (65 km [EW] x 50 km [NS]) that was developed during the Euroseistest Verification and Validation Project. The numerical simulations are performed with two independent codes, both implementing the Spectral Element Method. They rely on a robust, semi-automated mesh design strategy together with a simple homogenization procedure to define a smooth velocity model of the basin. Our simulations are accurate up to 4 Hz and include the effects of surface topography and of intrinsic attenuation. Two kinds of simulations are performed: (1) direct simulations of the surface ground motion for real regional events with various back azimuths with respect to the center of the basin; (2) reciprocity-based calculations in which the ground motion due to 980 different seismic sources is computed at a few stations in the basin. In the reciprocity-based calculations, we consider epicentral distances varying from 2.5 km to 40 km and source depths from 1 km to 15 km, and we span the range of possible back azimuths in 10-degree bins. We will present results showing (1) the sensitivity of ground motion parameters to the location and focal mechanism of the seismic sources; and (2) the variability of the amplification caused by site effects, as measured by standard spectral ratios, with respect to the source characteristics.
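The standard spectral ratios used here as the amplification measure can be sketched as the ratio of Fourier amplitude spectra between a basin recording and a reference-rock recording of the same event (toy traces below, not real data):

```python
import numpy as np

def spectral_ratio(basin, rock, dt):
    """Standard spectral ratio: FFT amplitude of a basin record divided by
    that of a reference-rock record (same event, same time window)."""
    freqs = np.fft.rfftfreq(len(basin), dt)
    a_basin = np.abs(np.fft.rfft(basin))
    a_rock = np.abs(np.fft.rfft(rock))
    return freqs, a_basin / np.maximum(a_rock, 1e-12)  # guard near-zero bins

# Toy records: the "basin" trace is the rock trace amplified by a factor 3.
dt = 0.01
t = np.arange(0, 10, dt)
rock = np.sin(2 * np.pi * 1.0 * t)
freqs, ratio = spectral_ratio(3.0 * rock, rock, dt)
print(round(ratio[np.argmin(abs(freqs - 1.0))], 2))  # 3.0
```

Real applications smooth the spectra and average over events; the ratio's variability across source positions is exactly what the reciprocity-based calculations probe.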

  3. Features of Radiation and Propagation of Seismic Waves in the Northern Caucasus: Manifestations in Regional Coda

    NASA Astrophysics Data System (ADS)

    Kromskii, S. D.; Pavlenko, O. V.; Gabsatarova, I. P.

    2018-03-01

Based on records from the Anapa (ANN) seismic station of 40 earthquakes (MW > 3.9) that occurred within 300 km of the station from 2002 to the present, the source parameters and the quality factor Q(f) of the Earth's crust and upper mantle are estimated for S-waves in the 1-8 Hz frequency band. Regional coda analysis techniques are employed that allow separating the effects associated with the seismic source (source effects) from those associated with the propagation path of seismic waves (path effects). The Q-factor estimates are obtained in the form Q(f) = 90 × f^0.7 for epicentral distances r < 120 km and Q(f) = 90 × f^1.0 for r > 120 km. The established Q(f) and source parameters are close to the estimates for Central Japan, which is probably due to the similar tectonic structure of the regions. The shapes of the source spectra are found to be independent of earthquake magnitude in the range 3.9-5.6; however, the radiation of the high-frequency components (f > 4-5 Hz) is enhanced with source depth (down to h ≈ 60 km). The Q(f) estimates determined from the records of the Sochi, Anapa, and Kislovodsk seismic stations allowed a more accurate determination of the seismic moments and magnitudes of the Caucasian earthquakes. The studies will be continued to obtain the Q(f) estimates, geometrical spreading functions, and frequency-dependent amplification of seismic waves in the Earth's crust in other regions of the Northern Caucasus.
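Q(f) estimates of this form imply a frequency-dependent anelastic path decay of the usual form exp(-pi f r / (Q(f) beta)). A small sketch using the two reported Q(f) branches, with an assumed shear-wave velocity beta that is not taken from the paper:

```python
import numpy as np

def path_correction(freq_hz, r_km, beta_kms=3.5):
    """Anelastic amplitude decay exp(-pi f r / (Q(f) beta)) using the two
    distance-dependent Q(f) branches quoted in the abstract.
    beta_kms is an assumed crustal shear-wave velocity, not from the paper."""
    n = 0.7 if r_km < 120 else 1.0
    q = 90.0 * freq_hz**n
    return np.exp(-np.pi * freq_hz * r_km / (q * beta_kms))

# At a fixed 100 km epicentral distance, higher frequencies decay faster:
print(round(path_correction(1.0, 100.0), 3))  # 0.369
print(round(path_correction(8.0, 100.0), 3))  # 0.156
```

Dividing an observed spectrum by this factor is the step that lets source spectra, and hence seismic moments, be recovered more accurately.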

  4. Effects of waveform model systematics on the interpretation of GW150914

    NASA Astrophysics Data System (ADS)

    Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwal, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Ajith, P.; Allen, B.; Allocca, A.; Altin, P. A.; Ananyeva, A.; Anderson, S. B.; Anderson, W. G.; Appert, S.; Arai, K.; Araya, M. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Avila-Alvarez, A.; Babak, S.; Bacon, P.; Bader, M. K. M.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; E Barclay, S.; Barish, B. C.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Bazzan, M.; Beer, C.; Bejger, M.; Belahcene, I.; Belgin, M.; Bell, A. S.; Berger, B. K.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Billman, C. R.; Birch, J.; Birney, R.; Birnholtz, O.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blackman, J.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, S.; Bock, O.; Boer, M.; Bogaert, G.; Bohe, A.; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; E Brau, J.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; E Broida, J.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brown, N. M.; Brunett, S.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cabero, M.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Calderón Bustillo, J.; Callister, T. A.; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, H.; Cao, J.; Capano, C. 
D.; Capocasa, E.; Carbognani, F.; Caride, S.; Casanueva Diaz, J.; Casentini, C.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. B.; Cerboni Baiardi, L.; Cerretani, G.; Cesarini, E.; Chamberlin, S. J.; Chan, M.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Cheeseboro, B. D.; Chen, H. Y.; Chen, Y.; Cheng, H.-P.; Chincarini, A.; Chiummo, A.; Chmiel, T.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, A. J. K.; Chua, S.; Chung, S.; Ciani, G.; Clara, F.; Clark, J. A.; Cleva, F.; Cocchieri, C.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Collette, C. G.; Cominsky, L.; Constancio, M., Jr.; Conti, L.; Cooper, S. J.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, C. A.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J.-P.; Countryman, S. T.; Couvares, P.; Covas, P. B.; E Cowan, E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; E Creighton, J. D.; Creighton, T. D.; Cripe, J.; Crowder, S. G.; Cullen, T. J.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Dasgupta, A.; Da Silva Costa, C. F.; Dattilo, V.; Dave, I.; Davier, M.; Davies, G. S.; Davis, D.; Daw, E. J.; Day, B.; Day, R.; De, S.; DeBra, D.; Debreczeni, G.; Degallaix, J.; De Laurentis, M.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dergachev, V.; De Rosa, R.; DeRosa, R. T.; DeSalvo, R.; Devenson, J.; Devine, R. C.; Dhurandhar, S.; Díaz, M. C.; Di Fiore, L.; Di Giovanni, M.; Di Girolamo, T.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Virgilio, A.; Doctor, Z.; Dolique, V.; Donovan, F.; Dooley, K. L.; Doravari, S.; Dorrington, I.; Douglas, R.; Dovale Álvarez, M.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Ducrot, M.; E Dwyer, S.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H.-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Eisenstein, R. A.; Essick, R. C.; Etienne, Z.; Etzel, T.; Evans, M.; Evans, T. 
M.; Everett, R.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.; Farinon, S.; Farr, B.; Farr, W. M.; Fauchon-Jones, E. J.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. M.; Fernández Galiana, A.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. P.; Flaminio, R.; Fletcher, M.; Fong, H.; Forsyth, S. S.; Fournier, J.-D.; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fries, E. M.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H.; Gadre, B. U.; Gaebel, S. M.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. G.; Garufi, F.; Gaur, G.; Gayathri, V.; Gehrels, N.; Gemme, G.; Genin, E.; Gennai, A.; George, J.; Gergely, L.; Germain, V.; Ghonge, S.; Ghosh, Abhirup; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.; Glaefke, A.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gonzalez Castro, J. M.; Gopakumar, A.; Gorodetsky, M. L.; E Gossan, S.; Gosselin, M.; Gouaty, R.; Grado, A.; Graef, C.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; E Gushwa, K.; Gustafson, E. K.; Gustafson, R.; Hacker, J. J.; Hall, B. R.; Hall, E. D.; Hammond, G.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hanson, J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Healy, J.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Henry, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hofman, D.; Holt, K.; E Holz, D.; Hopkins, P.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Indik, N.; Ingram, D. R.; Inta, R.; Isa, H. N.; Isac, J.-M.; Isi, M.; Isogai, T.; Iyer, B. 
R.; Izumi, K.; Jacqmin, T.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jiménez-Forteza, F.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; Junker, J.; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Karki, S.; Karvinen, K. S.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kéfélian, F.; Keitel, D.; Kelley, D. B.; Kennedy, R.; Key, J. S.; Khalili, F. Y.; Khan, I.; Khan, S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, Chunglee; Kim, J. C.; Kim, Whansun; Kim, W.; Kim, Y.-M.; Kimbrell, S. J.; King, E. J.; King, P. J.; Kirchhoff, R.; Kissel, J. S.; Klein, B.; Kleybolte, L.; Klimenko, S.; Koch, P.; Koehlenbeck, S. M.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Krämer, C.; Kringel, V.; Krishnan, B.; Królak, A.; Kuehn, G.; Kumar, P.; Kumar, R.; Kuo, L.; Kutynia, A.; Lackey, B. D.; Landry, M.; Lang, R. N.; Lange, J.; Lantz, B.; Lanza, R. K.; Lartaux-Vollard, A.; Lasky, P. D.; Laxen, M.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lebigot, E. O.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Lee, K.; Lehmann, J.; Lenon, A.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levin, Y.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Liu, J.; Lockerbie, N. A.; Lombardi, A. L.; London, L. T.; E Lord, J.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lovelace, G.; Lück, H.; Lundgren, A. P.; Lynch, R.; Ma, Y.; Macfoy, S.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magaña-Sandoval, F.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martynov, D. V.; Mason, K.; Masserot, A.; Massinger, T. 
J.; Masso-Reid, M.; Mastrogiovanni, S.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; McCarthy, R.; E McClelland, D.; McCormick, S.; McGrath, C.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McRae, T.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Meidam, J.; Melatos, A.; Mendell, G.; Mendoza-Gandara, D.; Mercer, R. A.; Merilh, E. L.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Metzdorff, R.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; E Mikhailov, E.; Milano, L.; Miller, A. L.; Miller, A.; Miller, B. B.; Miller, J.; Millhouse, M.; Minenkov, Y.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B. C.; Moore, C. J.; Moraru, D.; Moreno, G.; Morriss, S. R.; Mours, B.; Mow-Lowry, C. M.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Muniz, E. A. M.; Murray, P. G.; Mytidis, A.; Napier, K.; Nardecchia, I.; Naticchioni, L.; Nelemans, G.; Nelson, T. J. N.; Neri, M.; Nery, M.; Neunzert, A.; Newport, J. M.; Newton, G.; Nguyen, T. T.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Noack, A.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Oliver, M.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; O'Shaughnessy, R.; Ottaway, D. J.; Overmier, H.; Owen, B. J.; E Pace, A.; Page, J.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Paris, H. R.; Parker, W.; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perez, C. J.; Perreca, A.; Perri, L. M.; Pfeiffer, H. P.; Phelps, M.; Piccinni, O. 
J.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poe, M.; Poggiani, R.; Popolizio, P.; Post, A.; Powell, J.; Prasad, J.; Pratt, J. W. W.; Predoi, V.; Prestegard, T.; Prijatelj, M.; Principe, M.; Privitera, S.; Prodi, G. A.; Prokhorov, L. G.; Puncken, O.; Punturo, M.; Puppo, P.; Pürrer, M.; Qi, H.; Qin, J.; Qiu, S.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raja, S.; Rajan, C.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Read, J.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. D.; Rhoades, E.; Ricci, F.; Riles, K.; Rizzo, M.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, J. D.; Romano, R.; Romie, J. H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Sachdev, S.; Sadecki, T.; Sadeghian, L.; Sakellariadou, M.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sampson, L. M.; Sanchez, E. J.; Sandberg, V.; Sanders, J. R.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Sauter, O.; Savage, R. L.; Sawadsky, A.; Schale, P.; Scheuer, J.; Schmidt, E.; Schmidt, J.; Schmidt, P.; Schnabel, R.; Schofield, R. M. S.; Schönbeck, A.; Schreiber, E.; Schuette, D.; Schutz, B. F.; Schwalbe, S. G.; Scott, J.; Scott, S. M.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Setyawati, Y.; Shaddock, D. A.; Shaffer, T. J.; Shahriar, M. S.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sieniawska, M.; Sigg, D.; Silva, A. D.; Singer, A.; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, B.; Smith, J. R.; E Smith, R. J.; Son, E. J.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Spencer, A. P.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stevenson, S. P.; Stone, R.; Strain, K. 
A.; Straniero, N.; Stratta, G.; E Strigin, S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sunil, S.; Sutton, P. J.; Swinkels, B. L.; Szczepańczyk, M. J.; Tacca, M.; Talukder, D.; Tanner, D. B.; Tápai, M.; Taracchini, A.; Taylor, R.; Theeg, T.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thrane, E.; Tippens, T.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Toland, K.; Tomlinson, C.; Tonelli, M.; Tornasi, Z.; Torrie, C. I.; Töyrä, D.; Travasso, F.; Traylor, G.; Trifirò, D.; Trinastic, J.; Tringali, M. C.; Trozzo, L.; Tse, M.; Tso, R.; Turconi, M.; Tuyenbayev, D.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlbruch, H.; Vajente, G.; Valdes, G.; van Bakel, N.; van Beuzekom, M.; van den Brand, J. F. J.; Van Den Broeck, C.; Vander-Hyde, D. C.; van der Schaaf, L.; van Heijningen, J. V.; van Veggel, A. A.; Vardaro, M.; Varma, V.; Vass, S.; Vasúth, M.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Venugopalan, G.; Verkindt, D.; Vetrano, F.; Viceré, A.; Viets, A. D.; Vinciguerra, S.; Vine, D. J.; Vinet, J.-Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D. V.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; E Wade, L.; Wade, M.; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, M.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Watchi, J.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Wen, L.; Weßels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; Whiting, B. F.; Whittle, C.; Williams, D.; Williams, R. D.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Woehler, J.; Worden, J.; Wright, J. L.; Wu, D. S.; Wu, G.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yap, M. J.; Yu, Hang; Yu, Haocun; Yvert, M.; Zadrożny, A.; Zangrando, L.; Zanolin, M.; Zendri, J.-P.; Zevin, M.; Zhang, L.; Zhang, M.; Zhang, T.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, S. J.; Zhu, X. 
J.; E Zucker, M.; Zweizig, J.; LIGO Scientific Collaboration; Virgo Collaboration; Boyle, M.; Chu, T.; Hemberger, D.; Hinder, I.; E Kidder, L.; Ossokine, S.; Scheel, M.; Szilagyi, B.; Teukolsky, S.; Vano Vinuales, A.

    2017-05-01

Parameter estimates of GW150914 were obtained using Bayesian inference, based on three semi-analytic waveform models for binary black hole coalescences. These waveform models differ from each other in their treatment of black hole spins, and all three models make some simplifying assumptions, notably to neglect sub-dominant waveform harmonic modes and orbital eccentricity. Furthermore, while the models are calibrated to agree with waveforms obtained by full numerical solutions of Einstein’s equations, any such calibration is accurate only to some non-zero tolerance and is limited by the accuracy of the underlying phenomenology, availability, quality, and parameter-space coverage of numerical simulations. This paper complements the original analyses of GW150914 with an investigation of the effects of possible systematic errors in the waveform models on estimates of its source parameters. To test for systematic errors we repeat the original Bayesian analysis on mock signals from numerical simulations of a series of binary configurations with parameters similar to those found for GW150914. Overall, we find no evidence for a systematic bias relative to the statistical error of the original parameter recovery of GW150914 due to modeling approximations or modeling inaccuracies. However, parameter biases are found to occur for some configurations disfavored by the data of GW150914: for binaries inclined edge-on to the detector over a small range of choices of polarization angles, and also for eccentricities greater than ~0.05. For signals with higher signal-to-noise ratio than GW150914, or in other regions of the binary parameter space (lower masses, larger mass ratios, or higher spins), we expect that systematic errors in current waveform models may impact gravitational-wave measurements, making more accurate models desirable for future observations.
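The kind of waveform comparison that underlies such systematics studies can be illustrated with a normalized match between two signals, maximized over time shift. This sketch uses a white-noise approximation rather than a realistic detector power spectral density, and a toy chirp in place of a relativistic waveform:

```python
import numpy as np

def match(h1, h2):
    """Normalized overlap between two waveforms, maximized over circular
    time shift (white-noise approximation of the detector PSD)."""
    h1 = h1 / np.linalg.norm(h1)
    h2 = h2 / np.linalg.norm(h2)
    # Circular cross-correlation via FFT; its maximum is the time-maximized match.
    corr = np.fft.irfft(np.fft.rfft(h1) * np.conj(np.fft.rfft(h2)))
    return float(np.max(corr))

t = np.linspace(0, 1, 1024, endpoint=False)
chirp = np.sin(2 * np.pi * (30 * t + 40 * t**2))  # toy frequency-increasing signal
shifted = np.roll(chirp, 100)                     # same model, time-shifted
print(round(match(chirp, shifted), 3))  # 1.0
```

A match below 1 between a "true" numerical waveform and a semi-analytic model is what propagates into parameter bias, which is why the mock-signal recoveries described above are informative.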

  5. Physical and numerical studies of a fracture system model

    NASA Astrophysics Data System (ADS)

    Piggott, Andrew R.; Elsworth, Derek

    1989-03-01

    Physical and numerical studies of transient flow in a model of discretely fractured rock are presented. The physical model is a thermal analogue to fractured media flow consisting of idealized disc-shaped fractures. The numerical model is used to predict the behavior of the physical model. The use of different insulating materials to encase the physical model allows the effects of differing leakage magnitudes to be examined. A procedure for determining appropriate leakage parameters is documented. These parameters are used in forward analysis to predict the thermal response of the physical model. Knowledge of the leakage parameters and of the temporal variation of boundary conditions are shown to be essential to an accurate prediction. Favorable agreement is illustrated between numerical and physical results. The physical model provides a data source for the benchmarking of alternative numerical algorithms.

  6. A Low-Signal-to-Noise-Ratio Sensor Framework Incorporating Improved Nighttime Capabilities in DIRSIG

    NASA Astrophysics Data System (ADS)

    Rizzuto, Anthony P.

    When designing new remote sensing systems, it is difficult to make apples-to-apples comparisons between designs because of the number of sensor parameters that can affect the final image. Using synthetic imagery and a computer sensor model allows for comparisons to be made between widely different sensor designs or between competing design parameters. Little work has been done in fully modeling low-SNR systems end-to-end for these types of comparisons. Currently DIRSIG has limited capability to accurately model nighttime scenes under new moon conditions or near large cities. An improved DIRSIG scene modeling capability is presented that incorporates all significant sources of nighttime radiance, including new models for urban glow and airglow, both taken from the astronomy community. A low-SNR sensor modeling tool is also presented that accounts for sensor components and noise sources to generate synthetic imagery from a DIRSIG scene. The various sensor parameters that affect SNR are discussed, and example imagery is shown with the new sensor modeling tool. New low-SNR detectors have recently been designed and marketed for remote sensing applications. A comparison of system parameters for a state-of-the-art low-SNR sensor is discussed, and a sample design trade study is presented for a hypothetical scene and sensor.
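The per-pixel SNR bookkeeping that such a sensor model performs can be sketched with a standard shot-noise budget; all electron counts below are made up for illustration:

```python
import math

def pixel_snr(signal_e, dark_e, sky_e, read_noise_e):
    """Shot-noise-limited SNR for one detector pixel: signal electrons over
    the root-sum-square of Poisson terms and read noise (all in electrons).
    The noise terms included here are illustrative, not DIRSIG's full model."""
    noise = math.sqrt(signal_e + dark_e + sky_e + read_noise_e**2)
    return signal_e / noise

# A faint nighttime scene (invented counts): 40 signal electrons against
# sky glow, dark current, and read noise.
print(round(pixel_snr(signal_e=40, dark_e=5, sky_e=20, read_noise_e=3), 2))  # 4.65
```

A budget like this makes the design trades concrete: doubling integration time raises signal linearly but noise only as its square root, while a lower-read-noise detector helps most exactly in the low-SNR regime the dissertation targets.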

  7. Construction and Setup of a Bench-scale Algal Photosynthetic Bioreactor with Temperature, Light, and pH Monitoring for Kinetic Growth Tests.

    PubMed

    Karam, Amanda L; McMillan, Catherine C; Lai, Yi-Chun; de Los Reyes, Francis L; Sederoff, Heike W; Grunden, Amy M; Ranjithan, Ranji S; Levis, James W; Ducoste, Joel J

    2017-06-14

    The optimal design and operation of photosynthetic bioreactors (PBRs) for microalgal cultivation is essential for improving the environmental and economic performance of microalgae-based biofuel production. Models that estimate microalgal growth under different conditions can help to optimize PBR design and operation. To be effective, the growth parameters used in these models must be accurately determined. Algal growth experiments are often constrained by the dynamic nature of the culture environment, and control systems are needed to accurately determine the kinetic parameters. The first step in setting up a controlled batch experiment is live data acquisition and monitoring. This protocol outlines a process for the assembly and operation of a bench-scale photosynthetic bioreactor that can be used to conduct microalgal growth experiments. This protocol describes how to size and assemble a flat-plate, bench-scale PBR from acrylic. It also details how to configure a PBR with continuous pH, light, and temperature monitoring using a data acquisition and control unit, analog sensors, and open-source data acquisition software.
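The continuous-monitoring step of the protocol can be sketched as a timed logging loop. The sensor read is stubbed with fixed values; on real hardware it would query the data acquisition unit, and the column names are hypothetical:

```python
import csv
import time

def read_sensors():
    """Placeholder for the analog sensor reads; on real hardware these
    values would come from the data acquisition and control unit."""
    return {"temperature_C": 25.1, "pH": 7.02, "light_umol": 180.0}

def log_readings(path, n_samples, interval_s=0.0):
    """Append timestamped pH, light, and temperature rows to a CSV log."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "temperature_C", "pH", "light_umol"])
        for _ in range(n_samples):
            r = read_sensors()
            writer.writerow([time.time(), r["temperature_C"], r["pH"], r["light_umol"]])
            time.sleep(interval_s)  # sampling interval between reads

log_readings("pbr_log.csv", n_samples=3)
```

In a real kinetic growth test the interval would be set to the desired sampling period and the file (or a database) would accumulate over the whole batch run.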

  8. Welding current and melting rate in GMAW of aluminium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pandey, S.; Rao, U.R.K.; Aghakhani, M.

    1996-12-31

Studies on GMAW of aluminium and its alloy 5083 revealed that the welding current and melting rate were affected by any change in wire feed rate, arc voltage, nozzle-to-plate distance, welding speed, and torch angle. Empirical models are presented to determine accurately the welding current and melting rate for any set of these parameters. These results can be utilized to determine accurately the heat input into the workpiece, from which reliable predictions can be made about the mechanical and metallurgical properties of a welded joint. The analysis of the model also provides vital information about the static V-I characteristics of the welding power source. The models were developed using a two-level fractional factorial design. The adequacy of the model was tested by analysis of variance, and the significance of the coefficients was tested by Student's t test. The estimated and observed values of the welding current and melting rate are shown on a scatter diagram, and the interaction effects of the different parameters involved are presented in graphical form.
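A first-order empirical model of the kind described can be fit directly from a two-level design in coded (-1/+1) units. The design fraction and the current readings below are invented for illustration, and only three of the five factors are shown:

```python
import numpy as np

# Hypothetical half-fraction of a 2^3 design in coded units (-1/+1) for
# wire feed rate, arc voltage, and nozzle-to-plate distance.
X = np.array([[-1, -1, -1],
              [ 1, -1,  1],
              [-1,  1,  1],
              [ 1,  1, -1]])
I_meas = np.array([150.0, 210.0, 175.0, 235.0])  # invented current readings (A)

# First-order empirical model I = b0 + b1*x1 + b2*x2 + b3*x3, fit by
# least squares; the orthogonal design makes the effects independent.
A = np.column_stack([np.ones(4), X])
b, *_ = np.linalg.lstsq(A, I_meas, rcond=None)
print(np.round(b, 1))  # intercept = mean current, then main effects in amperes
```

With coded levels each coefficient is half the change in response from the low to the high factor setting, which is what an analysis of variance and a t test on the coefficients would then assess.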

  9. Construction and Setup of a Bench-scale Algal Photosynthetic Bioreactor with Temperature, Light, and pH Monitoring for Kinetic Growth Tests

    PubMed Central

    Karam, Amanda L.; McMillan, Catherine C.; Lai, Yi-Chun; de los Reyes, Francis L.; Sederoff, Heike W.; Grunden, Amy M.; Ranjithan, Ranji S.; Levis, James W.; Ducoste, Joel J.

    2017-01-01

    The optimal design and operation of photosynthetic bioreactors (PBRs) for microalgal cultivation is essential for improving the environmental and economic performance of microalgae-based biofuel production. Models that estimate microalgal growth under different conditions can help to optimize PBR design and operation. To be effective, the growth parameters used in these models must be accurately determined. Algal growth experiments are often constrained by the dynamic nature of the culture environment, and control systems are needed to accurately determine the kinetic parameters. The first step in setting up a controlled batch experiment is live data acquisition and monitoring. This protocol outlines a process for the assembly and operation of a bench-scale photosynthetic bioreactor that can be used to conduct microalgal growth experiments. This protocol describes how to size and assemble a flat-plate, bench-scale PBR from acrylic. It also details how to configure a PBR with continuous pH, light, and temperature monitoring using a data acquisition and control unit, analog sensors, and open-source data acquisition software. PMID:28654054

  10. Evaluating the effectiveness of the MASW technique in a geologically complex terrain

    NASA Astrophysics Data System (ADS)

    Anukwu, G. C.; Khalil, A. E.; Abdullah, K. B.

    2018-04-01

    MASW surveys carried out at a number of sites in Pulau Pinang, Malaysia, showed complicated dispersion curves, which consequently made the inversion into a soil shear-velocity model ambiguous. This research work details efforts to identify the source of these complicated dispersion curves. As a starting point, the complexity of the phase-velocity spectrum is assumed to be due to either the surveying parameters or the elastic properties of the soil structures. For the former, the surveying was repeated using different parameters. The complexities persisted across the different surveying parameters, an indication that the elastic properties of the soil structure could be the reason. To test this assumption, a synthetic modelling approach was adopted using information from boreholes, the literature and geologically plausible models. Results suggest that the presence of irregular variation in the stiffness of the soil layers, high stiffness contrast and relatively shallow bedrock results in a quite complex f-v spectrum, especially at frequencies lower than 20 Hz, making it difficult to accurately extract the dispersion curve below this frequency. As such, for the MASW technique, especially in complex geological situations as demonstrated, great care should be taken during data processing and inversion to obtain a model that accurately depicts the subsurface.

  11. Numerical simulations of merging black holes for gravitational-wave astronomy

    NASA Astrophysics Data System (ADS)

    Lovelace, Geoffrey

    2014-03-01

    Gravitational waves from merging binary black holes (BBHs) are among the most promising sources for current and future gravitational-wave detectors. Accurate models of these waves are necessary to maximize the number of detections and our knowledge of the waves' sources; near the time of merger, the waves can only be computed using numerical-relativity simulations. For optimal application to gravitational-wave astronomy, BBH simulations must achieve sufficient accuracy and length, and all relevant regions of the BBH parameter space must be covered. While great progress toward these goals has been made in the almost nine years since BBH simulations became possible, considerable challenges remain. In this talk, I will discuss current efforts to meet these challenges, and I will present recent BBH simulations produced using the Spectral Einstein Code, including a catalog of publicly available gravitational waveforms [black-holes.org/waveforms]. I will also discuss simulations of merging black holes with high mass ratios and with spins nearly as fast as possible, the most challenging regions of the BBH parameter space.

  12. Fecal consistency as related to dietary composition in lactating Holstein cows.

    PubMed

    Ireland-Perry, R L; Stallings, C C

    1993-04-01

    A trial was designed to study the relationships of dietary fiber and protein percentage and source to fecal consistency in lactating cattle. Thirty Holstein cows were assigned randomly to one of six TMR through four 21-d periods. The TMR were formulated to contain 17 or 25% ADF and CP of 15 or 22% with soybean meal supplementation or 22% with a combination of corn gluten and soybean meals. Two forage combinations were corn silage with or without alfalfa. Fecal consistency was evaluated using a four-point visual observation scale. Lower dietary fiber reduced fecal pH, score, NDF, and ADF but increased fecal DM and starch. A higher percentage of soybean meal lowered fecal DM and fecal score. Forage source affected fecal DM, NDF, ADF, and starch, but not pH or score. Prediction of fecal score from dietary components and cow parameters resulted in dietary DM percentage and 4% FCM as the most related variables. Accurate prediction of fecal consistency score from dietary and cow parameters was not possible.

  13. Parameter identification of piezoelectric hysteresis model based on improved artificial bee colony algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Geng; Zhou, Kexin; Zhang, Yeming

    2018-04-01

    The widely used Bouc-Wen hysteresis model can be utilized to accurately simulate the voltage-displacement curves of piezoelectric actuators. In order to identify the unknown parameters of the Bouc-Wen model, an improved artificial bee colony (IABC) algorithm is proposed in this paper. A guiding strategy for searching the current optimal position of the food source is introduced, which helps balance the local search ability and global exploitation capability, and the formula by which the scout bees search for a food source is modified to increase the convergence speed. Experiments were conducted to verify the effectiveness of the IABC algorithm. The results show that the identified hysteresis model agreed well with the actual actuator response. Moreover, the identification results were compared with the standard particle swarm optimization (PSO) method, showing that the convergence rate of the IABC algorithm is better than that of the standard PSO method.
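The hysteresis variable whose parameters are identified above can be sketched by integrating the standard Bouc-Wen differential equation under a sinusoidal drive. This is a generic illustration of the model family, not the paper's identified actuator model; the parameter values (A, beta, gamma_bw, n) are arbitrary placeholders.

```python
import math

def bouc_wen_h(A=1.0, beta=0.5, gamma_bw=0.5, n=1.0, dt=1e-3, steps=20000):
    """Forward-Euler integration of the Bouc-Wen internal variable h(t)
    for input x(t) = sin(t), i.e. xdot(t) = cos(t)."""
    h = 0.0
    out = []
    for k in range(steps):
        xdot = math.cos(k * dt)
        hdot = (A * xdot
                - beta * abs(xdot) * abs(h) ** (n - 1) * h
                - gamma_bw * xdot * abs(h) ** n)
        h += dt * hdot
        out.append(h)
    return out

hist = bouc_wen_h()
```

An identification scheme such as the paper's IABC would search over (A, beta, gamma_bw, n) so that the simulated loop matches measured voltage-displacement data.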

  14. Optical variability of extragalactic objects used to tie the HIPPARCOS reference frame to an extragalactic system using Hubble space telescope observations

    NASA Technical Reports Server (NTRS)

    Bozyan, Elizabeth P.; Hemenway, Paul D.; Argue, A. Noel

    1990-01-01

    Observations of a set of 89 extragalactic objects (EGOs) will be made with the Hubble Space Telescope Fine Guidance Sensors and Planetary Camera in order to link the HIPPARCOS Instrumental System to an extragalactic coordinate system. Most of the sources chosen for observation contain compact radio sources and stellarlike nuclei; 65 percent are optical variables beyond a 0.2 mag limit. To ensure proper exposure times, accurate mean magnitudes are necessary. In many cases, the average magnitudes listed in the literature were not adequate. The literature was searched for all relevant photometric information for the EGOs, and photometric parameters were derived, including mean magnitude, maximum range, and timescale of variability. This paper presents the results of that search and the parameters derived. The results will allow exposure times to be estimated such that an observed magnitude different from the tabular magnitude by 0.5 mag in either direction will not degrade the astrometric centering ability on a Planetary Camera CCD frame.

  15. A sensitivity analysis of regional and small watershed hydrologic models

    NASA Technical Reports Server (NTRS)

    Ambaruch, R.; Salomonson, V. V.; Simmons, J. W.

    1975-01-01

    Continuous simulation models of the hydrologic behavior of watersheds are important tools in several practical applications such as hydroelectric power planning, navigation, and flood control. Several recent studies have addressed the feasibility of using remote earth observations as sources of input data for hydrologic models. The objective of the study reported here was to determine how accurately remotely sensed measurements must be to provide inputs to hydrologic models of watersheds, within the tolerances needed for acceptably accurate synthesis of streamflow by the models. The study objective was achieved by performing a series of sensitivity analyses using continuous simulation models of three watersheds. The sensitivity analysis showed quantitatively how variations in each of 46 model inputs and parameters affect simulation accuracy with respect to five different performance indices.
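The sensitivity-analysis idea in this study can be illustrated with a minimal one-at-a-time perturbation loop: nudge each input by a fixed fraction and record the relative change in the output. The "streamflow" model here is an invented stand-in, not the watershed models used in the study.

```python
def toy_streamflow(params):
    """Toy water-balance stand-in: runoff = rain - infiltration - evaporation."""
    rain, infiltration, evap = params
    return max(rain - infiltration - evap, 0.0)

def sensitivities(model, base, frac=0.1):
    """Relative output change when each input is raised by `frac` (10%)."""
    base_out = model(base)
    sens = []
    for i in range(len(base)):
        p = list(base)
        p[i] *= (1.0 + frac)
        sens.append((model(p) - base_out) / base_out)
    return sens

s = sensitivities(toy_streamflow, [100.0, 30.0, 20.0])
```

Ranking inputs by the magnitude of such sensitivities is what lets a study like this one state accuracy tolerances for remotely sensed inputs.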

  16. Real-time realizations of the Bayesian Infrasonic Source Localization Method

    NASA Astrophysics Data System (ADS)

    Pinsky, V.; Arrowsmith, S.; Hofstetter, A.; Nippress, A.

    2015-12-01

    The Bayesian Infrasonic Source Localization method (BISL), introduced by Mordak et al. (2010) and upgraded by Marcillo et al. (2014), is designed for the accurate estimation of the atmospheric event origin at local, regional and global scales by seismic and infrasonic networks and arrays. The BISL is based on probabilistic models of the source-station infrasonic signal propagation time, picking time and azimuth estimate, merged with prior knowledge about the celerity distribution. It requires, at each hypothetical source location, integration of the product of the corresponding source-station likelihood functions multiplied by a prior probability density function of celerity over the multivariate parameter space. The present BISL realization is a generally time-consuming procedure based on numerical integration. The computational scheme proposed here simplifies the target function so that the integrals are taken exactly and are represented via standard functions. This makes the procedure much faster and realizable in real time without practical loss of accuracy. The procedure, executed as PYTHON-FORTRAN code, demonstrates high performance on a set of model and real data.
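The speed-up described above rests on a familiar fact: products of Gaussian likelihoods can often be integrated in closed form instead of by quadrature. The sketch below is not the BISL code; it only demonstrates the principle on the textbook identity that the integral of a product of two Gaussian densities equals a Gaussian evaluated at the difference of their means.

```python
import math

def gauss(x, mu, sigma):
    """Gaussian probability density N(x; mu, sigma)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def numeric_integral(m1, s1, m2, s2, lo=-50.0, hi=50.0, n=200000):
    """Midpoint-rule quadrature of gauss(x;m1,s1)*gauss(x;m2,s2)."""
    h = (hi - lo) / n
    return sum(gauss(lo + (i + 0.5) * h, m1, s1) * gauss(lo + (i + 0.5) * h, m2, s2)
               for i in range(n)) * h

def closed_form(m1, s1, m2, s2):
    """Exact value: N(m1 - m2; 0, sqrt(s1^2 + s2^2))."""
    return gauss(m1 - m2, 0.0, math.sqrt(s1 ** 2 + s2 ** 2))

num = numeric_integral(1.0, 2.0, 3.0, 1.5)
exact = closed_form(1.0, 2.0, 3.0, 1.5)
```

The closed form costs one function evaluation per source-station pair instead of hundreds of thousands, which is the kind of saving that makes a grid search over hypothetical source locations feasible in real time.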

  17. Determination of X-ray flux using silicon pin diodes

    PubMed Central

    Owen, Robin L.; Holton, James M.; Schulze-Briese, Clemens; Garman, Elspeth F.

    2009-01-01

    Accurate measurement of photon flux from an X-ray source, a parameter required to calculate the dose absorbed by the sample, is not yet routinely available at macromolecular crystallography beamlines. The development of a model for determining the photon flux incident on pin diodes is described here, and has been tested on the macromolecular crystallography beamlines at both the Swiss Light Source, Villigen, Switzerland, and the Advanced Light Source, Berkeley, USA, at energies between 4 and 18 keV. These experiments have shown that a simple model based on energy deposition in silicon is sufficient for determining the flux incident on high-quality silicon pin diodes. The derivation and validation of this model is presented, and a web-based tool for the use of the macromolecular crystallography and wider synchrotron community is introduced. PMID:19240326
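The energy-deposition model the paper validates can be reduced to a back-of-envelope calculation: each absorbed photon of energy E creates roughly E/w electron-hole pairs in silicon (w of about 3.66 eV per pair is a widely quoted value), so the measured photocurrent maps directly to photon flux. The sketch below assumes full absorption in the diode and is illustrative only, not the web tool's calculation.

```python
E_CHARGE = 1.602176634e-19  # elementary charge, coulombs
W_SI_EV = 3.66              # assumed mean energy per electron-hole pair in Si, eV

def photon_flux(current_amps, photon_energy_ev):
    """Photons per second incident on an ideal, fully absorbing Si diode."""
    pairs_per_photon = photon_energy_ev / W_SI_EV
    return current_amps / (E_CHARGE * pairs_per_photon)

flux = photon_flux(1e-6, 12398.0)  # 1 uA of diode current at ~12.4 keV
```

A real implementation must additionally correct for photons transmitted through the diode and absorbed in dead layers, which is why the paper restricts the simple model to high-quality pin diodes.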

  18. A revised dosimetric characterization of the model S700 electronic brachytherapy source containing an anode-centering plastic insert and other components not included in the 2006 model.

    PubMed

    Hiatt, Jessica R; Davis, Stephen D; Rivard, Mark J

    2015-06-01

    The model S700 Axxent electronic brachytherapy source by Xoft, Inc., was characterized by Rivard et al. in 2006. Since then, the source design was modified to include a new insert at the source tip. Current study objectives were to establish an accurate source model for simulation purposes, dosimetrically characterize the new source and obtain its TG-43 brachytherapy dosimetry parameters, and determine dose differences between the original simulation model and the current model S700 source design. Design information from measurements of dissected model S700 sources and from vendor-supplied CAD drawings was used to aid establishment of an updated Monte Carlo source model, which included the complex-shaped plastic source-centering insert intended to promote water flow for cooling the source anode. These data were used to create a model for subsequent radiation transport simulations in a water phantom. Compared to the 2006 simulation geometry, the influence of volume averaging close to the source was substantially reduced. A track-length estimator was used to evaluate collision kerma as a function of radial distance and polar angle for determination of TG-43 dosimetry parameters. Results for the 50 kV source were determined every 0.1 cm from 0.3 to 15 cm and every 1° from 0° to 180°. Photon spectra in water with 0.1 keV resolution were also obtained from 0.5 to 15 cm and polar angles from 0° to 165°. Simulations were run for 10^10 histories, resulting in statistical uncertainties on the transverse plane of 0.04% at r = 1 cm and 0.06% at r = 5 cm. The dose-rate distribution ratio for the model S700 source as compared to the 2006 model exceeded unity by more than 5% for roughly one quarter of the solid angle surrounding the source, i.e., θ ≥ 120°. The radial dose function diminished in a similar manner as for an ^125I seed, with values of 1.434, 0.636, 0.283, and 0.0975 at 0.5, 2, 5, and 10 cm, respectively. 
The radial dose function ratio between the current and the 2006 model had a minimum of 0.980 at 0.4 cm, close to the source sheath, and approached 1.014 at large distances. 2D anisotropy function ratios were close to unity for 50° ≤ θ ≤ 110°, but exceeded 5% for θ < 40° at close distances to the sheath and exceeded 15% for θ > 140°, even at large distances. Photon energy fluence of the updated model as compared to the 2006 model showed a decrease in output with increasing distance; this effect was pronounced at the lowest energies. A decrease in photon fluence with increase in polar angle was also observed and was attributed to the silver epoxy component. Changes in source design influenced the overall dose rate and distribution by more than 2% in several regions. This discrepancy is greater than the dose calculation acceptance criteria as recommended in the AAPM TG-56 report. The effect of the design change on the TG-43 parameters would likely not result in dose differences outside of patient applicators. Adoption of this new dataset is suggested for accurate depiction of model S700 source dose distributions.

  19. A revised dosimetric characterization of the model S700 electronic brachytherapy source containing an anode-centering plastic insert and other components not included in the 2006 model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hiatt, Jessica R.; Davis, Stephen D.; Rivard, Mark J., E-mail: mark.j.rivard@gmail.com

    2015-06-15

    Purpose: The model S700 Axxent electronic brachytherapy source by Xoft, Inc., was characterized by Rivard et al. in 2006. Since then, the source design was modified to include a new insert at the source tip. Current study objectives were to establish an accurate source model for simulation purposes, dosimetrically characterize the new source and obtain its TG-43 brachytherapy dosimetry parameters, and determine dose differences between the original simulation model and the current model S700 source design. Methods: Design information from measurements of dissected model S700 sources and from vendor-supplied CAD drawings was used to aid establishment of an updated Monte Carlo source model, which included the complex-shaped plastic source-centering insert intended to promote water flow for cooling the source anode. These data were used to create a model for subsequent radiation transport simulations in a water phantom. Compared to the 2006 simulation geometry, the influence of volume averaging close to the source was substantially reduced. A track-length estimator was used to evaluate collision kerma as a function of radial distance and polar angle for determination of TG-43 dosimetry parameters. Results for the 50 kV source were determined every 0.1 cm from 0.3 to 15 cm and every 1° from 0° to 180°. Photon spectra in water with 0.1 keV resolution were also obtained from 0.5 to 15 cm and polar angles from 0° to 165°. Simulations were run for 10^10 histories, resulting in statistical uncertainties on the transverse plane of 0.04% at r = 1 cm and 0.06% at r = 5 cm. Results: The dose-rate distribution ratio for the model S700 source as compared to the 2006 model exceeded unity by more than 5% for roughly one quarter of the solid angle surrounding the source, i.e., θ ≥ 120°. The radial dose function diminished in a similar manner as for an ^125I seed, with values of 1.434, 0.636, 0.283, and 0.0975 at 0.5, 2, 5, and 10 cm, respectively. 
The radial dose function ratio between the current and the 2006 model had a minimum of 0.980 at 0.4 cm, close to the source sheath, and approached 1.014 at large distances. 2D anisotropy function ratios were close to unity for 50° ≤ θ ≤ 110°, but exceeded 5% for θ < 40° at close distances to the sheath and exceeded 15% for θ > 140°, even at large distances. Photon energy fluence of the updated model as compared to the 2006 model showed a decrease in output with increasing distance; this effect was pronounced at the lowest energies. A decrease in photon fluence with increase in polar angle was also observed and was attributed to the silver epoxy component. Conclusions: Changes in source design influenced the overall dose rate and distribution by more than 2% in several regions. This discrepancy is greater than the dose calculation acceptance criteria as recommended in the AAPM TG-56 report. The effect of the design change on the TG-43 parameters would likely not result in dose differences outside of patient applicators. Adoption of this new dataset is suggested for accurate depiction of model S700 source dose distributions.

  20. A novel integrated approach for the hazardous radioactive dust source terms estimation in future nuclear fusion power plants.

    PubMed

    Poggi, L A; Malizia, A; Ciparisse, J F; Gaudio, P

    2016-10-01

    An open issue still under investigation by several international entities working in the safety and security field for the foreseen nuclear fusion reactors is the estimation of source terms that are a hazard for operators and the public, and for the machine itself in terms of efficiency and integrity in case of severe accident scenarios. Source term estimation is a crucial safety issue to be addressed in future reactors' safety assessments, and the estimates available at this time are not sufficiently satisfactory. The lack of neutronic data, along with the insufficiently accurate methodologies used until now, calls for an integrated methodology for source term estimation that can provide predictions with adequate accuracy. This work proposes a complete methodology to estimate dust source terms, starting from a broad information-gathering effort. The wide number of parameters that can influence dust source term production is reduced with statistical tools using a combination of screening, sensitivity analysis, and uncertainty analysis. Finally, a preliminary and simplified methodology for predicting dust source term production in future devices is presented.

  1. Perturbations of the seismic reflectivity of a fluid-saturated depth-dependent poroelastic medium.

    PubMed

    de Barros, Louis; Dietrich, Michel

    2008-03-01

    Analytical formulas are derived to compute the first-order effects produced by plane inhomogeneities on the point source seismic response of a fluid-filled stratified porous medium. The derivation is achieved by a perturbation analysis of the poroelastic wave equations in the plane-wave domain using the Born approximation. This approach yields the Fréchet derivatives of the P-SV- and SH-wave responses in terms of the Green's functions of the unperturbed medium. The accuracy and stability of the derived operators are checked by comparing, in the time-distance domain, differential seismograms computed from these analytical expressions with complete solutions obtained by introducing discrete perturbations into the model properties. For vertical and horizontal point forces, it is found that the Fréchet derivative approach is remarkably accurate for small and localized perturbations of the medium properties which are consistent with the Born approximation requirements. Furthermore, the first-order formulation appears to be stable at all source-receiver offsets. The porosity, consolidation parameter, solid density, and mineral shear modulus emerge as the most sensitive parameters in forward and inverse modeling problems. Finally, the amplitude-versus-angle response of a thin layer shows strong coupling effects between several model parameters.

  2. SU-F-T-669: Commissioning of An Electronic Brachytherapy System for Targeted Mouse Irradiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Culberson, W; Micka, J; Carchman, E

    Purpose: The aim of this study was to commission the Xoft Axxent™ electronic brachytherapy (eBT) source and 10 mm diameter surface applicator with NIST traceability for targeted irradiations of mouse anal carcinomas. Methods: The Xoft Axxent™ eBT source and 10 mm diameter surface applicator were chosen by the collaborating physician as the radiation delivery mechanism for mouse anal carcinomas. The target dose was 2 Gy at a depth of 3 mm in tissue, to be delivered in a single fraction. To implement an accurate and reliable irradiation plan, the system was commissioned by first determining the eBT source output and corresponding dose rate at a depth of 3 mm in tissue. This was determined through parallel-plate ion chamber measurements and published conversion factors. Well-type ionization chamber measurements were used to determine a transfer coefficient, which correlates the measured dose rate at 3 mm to the NIST-traceable quantity for eBT sources, air-kerma rate at 50 cm in air. By correlating these two quantities, daily monitoring in the well chamber becomes an accurate and efficient quality assurance technique. Once the dose rate was determined, a treatment recipe was developed and confirmed with chamber measurements to deliver the requested dose. Radiochromic film was used to verify the dose distribution across the field. Results: Dose rates at 3 mm depth in tissue were determined for two different Xoft Axxent™ sources and correlated with NIST-traceable well-type ionization chamber measurements. Unique transfer coefficients were determined for each source, and the treatment recipe was validated by measurements. Film profiles showed a uniform dose distribution across the field. Conclusion: A Xoft Axxent™ eBT system was successfully commissioned for use in the irradiation of mouse rectal tumors. Dose rates in tissue were determined, as well as other pertinent parameters, to ensure accurate delivery of dose to the target region.
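The commissioning bookkeeping described above can be sketched in a few lines: a transfer coefficient measured once ties the NIST-traceable well-chamber reading to the dose rate at 3 mm depth, after which routine QA needs only the well-chamber reading. All numbers below are invented placeholders, not the study's measured values.

```python
def transfer_coefficient(dose_rate_3mm_gy_min, air_kerma_rate_gy_min):
    """Commissioning step: ratio of dose rate at 3 mm depth in tissue to the
    NIST-traceable air-kerma rate at 50 cm measured in the well chamber."""
    return dose_rate_3mm_gy_min / air_kerma_rate_gy_min

def daily_dose_rate(air_kerma_rate_gy_min, k):
    """Routine QA: infer today's dose rate at depth from the well-chamber reading."""
    return k * air_kerma_rate_gy_min

k = transfer_coefficient(0.50, 0.020)   # hypothetical commissioning measurement
rate = daily_dose_rate(0.019, k)        # hypothetical routine QA reading
minutes_for_2gy = 2.0 / rate            # irradiation time for the 2 Gy target
```

The point of the scheme is that the slow depth-dose measurement happens once per source; the fast well-chamber check then tracks any source output drift day to day.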

  3. A virtual photon energy fluence model for Monte Carlo dose calculation.

    PubMed

    Fippel, Matthias; Haryanto, Freddy; Dohm, Oliver; Nüsslin, Fridtjof; Kriesen, Stephan

    2003-03-01

    The presented virtual energy fluence (VEF) model of the patient-independent part of medical linear accelerator heads consists of two Gaussian-shaped photon sources and one uniform electron source. The planar photon sources are located close to the bremsstrahlung target (primary source) and to the flattening filter (secondary source), respectively. The electron contamination source is located in the plane defining the lower end of the filter. The standard deviations or widths and the relative weights of each source are free parameters. Five other parameters correct for fluence variations, i.e., the horn or central depression effect. If these parameters and the field widths in the X and Y directions are given, the corresponding energy fluence distribution can be calculated analytically and compared to measured dose distributions in air. This provides a method of fitting the free parameters using the measurements for various square and rectangular fields and a fixed number of monitor units. The next step in generating the whole set of base data is to calculate monoenergetic central axis depth dose distributions in water, which are used to derive the energy spectrum by deconvolving the measured depth dose curves. This spectrum is also corrected to take the off-axis softening into account. The VEF model is implemented together with geometry modules for the patient-specific part of the treatment head (jaws, multileaf collimator) into the XVMC dose calculation engine. The implementation into other Monte Carlo codes is possible based on the information in this paper. Experiments are performed to verify the model by comparing measured and calculated dose distributions and output factors in water. It is demonstrated that open photon beams of linear accelerators from two different vendors are accurately simulated using the VEF model. The commissioning procedure of the VEF model is clinically feasible because it is based on standard measurements in air and water. 
It is also useful for IMRT applications because a full Monte Carlo simulation of the treatment head would be too time-consuming for many small fields.
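A toy rendering of the VEF idea: a weighted sum of two Gaussian lateral profiles (primary and secondary photon sources) yields an energy-fluence profile that can be fitted to in-air measurements. The widths and weight below are the kind of free parameters the model exposes, with arbitrary illustrative values; this is not the paper's fitted head model.

```python
import math

def gaussian(r, sigma):
    """Unnormalized Gaussian lateral profile at off-axis distance r (cm)."""
    return math.exp(-0.5 * (r / sigma) ** 2)

def fluence(r, w_primary=0.9, s_primary=0.15, s_secondary=1.2):
    """Weighted sum of a narrow primary and a broad secondary source profile."""
    return (w_primary * gaussian(r, s_primary)
            + (1.0 - w_primary) * gaussian(r, s_secondary))

# Relative fluence profile sampled every 1 mm out to 5 cm off-axis.
profile = [fluence(0.1 * i) for i in range(50)]
```

In the full model this analytic profile, plus the horn/depression corrections, is compared to in-air dose measurements for many field sizes to fit the free parameters.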

  4. Optimal Variational Asymptotic Method for Nonlinear Fractional Partial Differential Equations.

    PubMed

    Baranwal, Vipul K; Pandey, Ram K; Singh, Om P

    2014-01-01

    We propose an optimal variational asymptotic method to solve time-fractional nonlinear partial differential equations. In the proposed method, an arbitrary number of auxiliary parameters γ0, γ1, γ2,… and auxiliary functions H0(x), H1(x), H2(x),… are introduced in the correction functional of the standard variational iteration method. The optimal values of these parameters are obtained by minimizing the square residual error. To test the method, we apply it to solve two important classes of nonlinear partial differential equations: (1) the fractional advection-diffusion equation with a nonlinear source term and (2) the fractional Swift-Hohenberg equation. Only a few iterations are required to achieve fairly accurate solutions to both problems.
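The key step described above, choosing an auxiliary parameter by minimizing the square residual, can be shown on a deliberately trivial problem. This is an illustration of the selection principle only, not the paper's fractional equations: the trial solution u(x) = γ·x²/2 approximates u' = x with u(0) = 0, so its residual is (γ − 1)·x and the optimal γ should come out as 1.

```python
def square_residual(gamma, n=1000):
    """Midpoint-rule value of the integral of ((gamma - 1) * x)^2 over [0, 1]."""
    h = 1.0 / n
    return sum(((gamma - 1.0) * (i + 0.5) * h) ** 2 for i in range(n)) * h

# Grid search over gamma in [0, 2]; the paper would minimize analytically or
# numerically over several gammas and auxiliary functions at once.
best = min((square_residual(g / 100.0), g / 100.0) for g in range(0, 201))[1]
```

With more auxiliary parameters the same recipe applies: each extra degree of freedom is set by the same square-residual minimization, which is what makes the iteration converge in a few steps.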

  5. A method for determining the conversion efficiency of multiple-cell photovoltaic devices

    NASA Astrophysics Data System (ADS)

    Glatfelter, Troy; Burdick, Joseph

    A method for accurately determining the conversion efficiency of any multiple-cell photovoltaic device under any arbitrary reference spectrum is presented. This method makes it possible to obtain not only the short-circuit current, but also the fill factor, the open-circuit voltage, and hence the conversion efficiency of a multiple-cell device under any reference spectrum. Results are presented which allow a comparison of the I-V parameters of two-terminal, two- and three-cell tandem devices measured under a multiple-source simulator with the same parameters measured under different reference spectra. It is determined that the uncertainty in the conversion efficiency of a multiple-cell photovoltaic device obtained with this method is less than +/-3 percent.

  6. Field Application of the Micro Biological Survey Method for a Simple and Effective Assessment of the Microbiological Quality of Water Sources in Developing Countries

    PubMed Central

    Arienzo, Alyexandra; Sobze, Martin Sanou; Wadoum, Raoul Emeric Guetiya; Losito, Francesca; Colizzi, Vittorio; Antonini, Giovanni

    2015-01-01

    According to the World Health Organization (WHO) guidelines, “safe drinking-water must not represent any significant risk to health over a lifetime of consumption, including different sensitivities that may occur between life stages”. Traditional methods of water analysis are usually complex, time consuming and require an appropriately equipped laboratory, specialized personnel and expensive instrumentation. The aim of this work was to apply an alternative method, the Micro Biological Survey (MBS), to analyse for contaminants in drinking water. Preliminary experiments were carried out to demonstrate the linearity and accuracy of the MBS method and to verify the possibility of using the evaluation of total coliforms in 1 mL of water as a sufficient parameter to roughly though accurately determine water microbiological quality. The MBS method was then tested “on field” to assess the microbiological quality of water sources in the city of Douala (Cameroon, Central Africa). Analyses were performed on both dug and drilled wells in different periods of the year. Results confirm that the MBS method appears to be a valid and accurate method to evaluate the microbiological quality of many water sources and it can be of valuable aid in developing countries. PMID:26308038

  7. Field Application of the Micro Biological Survey Method for a Simple and Effective Assessment of the Microbiological Quality of Water Sources in Developing Countries.

    PubMed

    Arienzo, Alyexandra; Sobze, Martin Sanou; Wadoum, Raoul Emeric Guetiya; Losito, Francesca; Colizzi, Vittorio; Antonini, Giovanni

    2015-08-25

    According to the World Health Organization (WHO) guidelines, "safe drinking-water must not represent any significant risk to health over a lifetime of consumption, including different sensitivities that may occur between life stages". Traditional methods of water analysis are usually complex, time consuming and require an appropriately equipped laboratory, specialized personnel and expensive instrumentation. The aim of this work was to apply an alternative method, the Micro Biological Survey (MBS), to analyse for contaminants in drinking water. Preliminary experiments were carried out to demonstrate the linearity and accuracy of the MBS method and to verify the possibility of using the evaluation of total coliforms in 1 mL of water as a sufficient parameter to roughly though accurately determine water microbiological quality. The MBS method was then tested "on field" to assess the microbiological quality of water sources in the city of Douala (Cameroon, Central Africa). Analyses were performed on both dug and drilled wells in different periods of the year. Results confirm that the MBS method appears to be a valid and accurate method to evaluate the microbiological quality of many water sources and it can be of valuable aid in developing countries.

  8. Problems encountered with the use of simulation in an attempt to enhance interpretation of a secondary data source in epidemiologic mental health research

    PubMed Central

    2010-01-01

    Background The longitudinal epidemiology of major depressive episodes (MDE) is poorly characterized in most countries. Some potentially relevant data sources may be underutilized because they are not conducive to estimating the most salient epidemiologic parameters. An available data source in Canada provides estimates that are potentially valuable, but that are difficult to apply in clinical or public health practice. For example, weeks depressed in the past year is assessed in this data source whereas episode duration would be of more interest. The goal of this project was to derive, using simulation, more readily interpretable parameter values from the available data. Findings The data source was a Canadian longitudinal study called the National Population Health Survey (NPHS). A simulation model representing the course of depressive episodes was used to reshape estimates deriving from binary and ordinal logistic models (fit to the NPHS data) into equations more capable of informing clinical and public health decisions. Discrete event simulation was used for this purpose. Whereas the intention was to clarify a complex epidemiology, the models themselves needed to become excessively complex in order to provide an accurate description of the data. Conclusions Simulation methods are useful in circumstances where a representation of a real-world system has practical value. In this particular scenario, the usefulness of simulation was limited both by problems with the data source and by inherent complexity of the underlying epidemiology. PMID:20796271

  9. Spiral computed tomography phase-space source model in the BEAMnrc/EGSnrc Monte Carlo system: implementation and validation.

    PubMed

    Kim, Sangroh; Yoshizumi, Terry T; Yin, Fang-Fang; Chetty, Indrin J

    2013-04-21

    Currently, the BEAMnrc/EGSnrc Monte Carlo (MC) system does not provide a spiral CT source model for the simulation of spiral CT scanning. We developed and validated a spiral CT phase-space source model in the BEAMnrc/EGSnrc system. The spiral phase-space source model was implemented in the DOSXYZnrc user code of the BEAMnrc/EGSnrc system by analyzing the geometry of spiral CT scans: scan range, initial angle, rotational direction, pitch, slice thickness, etc. Table movement was simulated by changing the coordinates of the isocenter as a function of beam angles. Some parameters such as pitch, slice thickness and translation per rotation were also incorporated into the model to make the new phase-space source model, designed specifically for spiral CT scan simulations. The source model was hard-coded by modifying the 'ISource = 8: Phase-Space Source Incident from Multiple Directions' in the srcxyznrc.mortran and dosxyznrc.mortran files in the DOSXYZnrc user code. In order to verify the implementation, spiral CT scans were simulated in a CT dose index phantom using the validated x-ray tube model of a commercial CT simulator for both the original multi-direction source (ISOURCE = 8) and the new phase-space source model in the DOSXYZnrc system. Then the acquired 2D and 3D dose distributions were analyzed with respect to the input parameters for various pitch values. In addition, surface-dose profiles were also measured for a patient CT scan protocol using radiochromic film and were compared with the MC simulations. The new phase-space source model was found to simulate the spiral CT scanning in a single simulation run accurately. It also produced the equivalent dose distribution of the ISOURCE = 8 model for the same CT scan parameters. The MC-simulated surface profiles were well matched to the film measurement overall within 10%. The new spiral CT phase-space source model was implemented in the BEAMnrc/EGSnrc system. 
This work will be beneficial in estimating the spiral CT scan dose in the BEAMnrc/EGSnrc system.

  10. Spiral computed tomography phase-space source model in the BEAMnrc/EGSnrc Monte Carlo system: implementation and validation

    NASA Astrophysics Data System (ADS)

    Kim, Sangroh; Yoshizumi, Terry T.; Yin, Fang-Fang; Chetty, Indrin J.

    2013-04-01

    Currently, the BEAMnrc/EGSnrc Monte Carlo (MC) system does not provide a spiral CT source model for the simulation of spiral CT scanning. We developed and validated a spiral CT phase-space source model in the BEAMnrc/EGSnrc system. The spiral phase-space source model was implemented in the DOSXYZnrc user code of the BEAMnrc/EGSnrc system by analyzing the geometry of a spiral CT scan: scan range, initial angle, rotational direction, pitch, slice thickness, etc. Table movement was simulated by changing the coordinates of the isocenter as a function of beam angle. Parameters such as pitch, slice thickness and translation per rotation were also incorporated into the model, making the new phase-space source model specific to spiral CT scan simulations. The source model was hard-coded by modifying the ‘ISource = 8: Phase-Space Source Incident from Multiple Directions’ option in the srcxyznrc.mortran and dosxyznrc.mortran files of the DOSXYZnrc user code. In order to verify the implementation, spiral CT scans were simulated in a CT dose index phantom using the validated x-ray tube model of a commercial CT simulator for both the original multi-direction source (ISOURCE = 8) and the new phase-space source model in the DOSXYZnrc system. The acquired 2D and 3D dose distributions were then analyzed with respect to the input parameters for various pitch values. In addition, surface-dose profiles were measured for a patient CT scan protocol using radiochromic film and compared with the MC simulations. The new phase-space source model accurately simulated spiral CT scanning in a single simulation run. It also produced dose distributions equivalent to those of the ISOURCE = 8 model for the same CT scan parameters. The MC-simulated surface profiles matched the film measurements overall to within 10%. The new spiral CT phase-space source model was implemented in the BEAMnrc/EGSnrc system.
This work will be beneficial in estimating the spiral CT scan dose in the BEAMnrc/EGSnrc system.
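    The table-movement bookkeeping described above (isocenter coordinates updated as a function of accumulated beam angle) reduces to a linear map. A minimal sketch of that relation follows, with our own function name and units rather than anything from the DOSXYZnrc source:

```python
def spiral_isocenter_z(angle_deg, z0=0.0, pitch=1.0, slice_thickness_cm=1.0,
                       direction=1):
    # Isocenter z-coordinate (cm) after an accumulated gantry angle:
    # table travel per full rotation equals pitch * slice thickness,
    # so z advances linearly with the angle. Illustrative only; the
    # name and units are ours, not DOSXYZnrc's.
    rotations = direction * angle_deg / 360.0
    return z0 + rotations * pitch * slice_thickness_cm

# Two full rotations at pitch 1.5 with 1 cm collimation advance the
# isocenter by 3 cm.
z_end = spiral_isocenter_z(720.0, pitch=1.5)
```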

  11. Assessing the accuracy of subject-specific, muscle-model parameters determined by optimizing to match isometric strength.

    PubMed

    DeSmitt, Holly J; Domire, Zachary J

    2016-12-01

    Biomechanical models are sensitive to the choice of model parameters, so determination of accurate subject-specific model parameters is important. One approach is to optimize the parameter values such that the model output matches experimentally measured strength curves. This approach is attractive, as it is inexpensive and should provide an excellent match to experimentally measured strength. However, given the problem of muscle redundancy, it is not clear that this approach generates accurate individual muscle forces. The purpose of this investigation is to evaluate this approach using simulated data, which enables a direct comparison. It is hypothesized that the optimization approach will recreate accurate muscle model parameters when information from measurable parameters is given. A model of isometric knee extension was developed to simulate a strength curve across a range of knee angles. In order to realistically recreate experimentally measured strength, random noise was added to the modeled strength. Parameters were solved for using a genetic search algorithm. When noise was added to the measurements, the strength curve was still reasonably well recreated. However, the individual muscle model parameters and force curves were far less accurate. Based upon this examination, it is clear that very different sets of model parameters can recreate similar strength curves, and that experimental variation in strength measurements has a significant influence on the results. Given the difficulty of accurately recreating individual muscle parameters, it may be more appropriate to perform simulations with lumped actuators representing similar muscles.
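    The non-uniqueness reported above can be reproduced with a toy lumped-muscle model: very different parameter vectors fit the same noisy strength curve. The two-Gaussian torque model, the parameter bounds, and the plain random search (standing in for the paper's genetic algorithm) are all our illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
angles = np.linspace(0.0, 2.0, 40)    # knee angle grid (rad), illustrative

def strength(params, angles):
    # Summed isometric torque of two lumped muscles, each with a Gaussian
    # force-length shape (peak force, optimal angle, width). This toy
    # model is ours, not the paper's knee-extension model.
    f1, a1, w1, f2, a2, w2 = params
    return (f1 * np.exp(-((angles - a1) / w1) ** 2)
            + f2 * np.exp(-((angles - a2) / w2) ** 2))

true_params = np.array([900.0, 1.0, 0.5, 600.0, 1.4, 0.6])
measured = strength(true_params, angles) + rng.normal(0.0, 20.0, angles.size)

# Crude random search standing in for the genetic algorithm.
lo = np.array([100.0, 0.5, 0.2, 100.0, 0.5, 0.2])
hi = np.array([1500.0, 1.8, 1.0, 1500.0, 1.8, 1.0])
best, best_err = None, np.inf
for _ in range(20000):
    cand = rng.uniform(lo, hi)
    err = np.sum((strength(cand, angles) - measured) ** 2)
    if err < best_err:
        best, best_err = cand, err

# The fitted strength curve can match well even when the individual
# parameters differ from the truth (non-uniqueness).
rms_curve = float(np.sqrt(np.mean((strength(best, angles)
                                   - strength(true_params, angles)) ** 2)))
```

Comparing `best` with `true_params` typically shows large per-parameter discrepancies despite a close curve fit, which is the trade-off the abstract describes.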

  12. Utility of a novel error-stepping method to improve gradient-based parameter identification by increasing the smoothness of the local objective surface: a case-study of pulmonary mechanics.

    PubMed

    Docherty, Paul D; Schranz, Christoph; Chase, J Geoffrey; Chiew, Yeong Shiong; Möller, Knut

    2014-05-01

    Accurate model parameter identification relies on accurate forward model simulations to guide convergence. However, some forward simulation methodologies lack the precision required to properly define the local objective surface and can cause failed parameter identification. The role of objective surface smoothness in identification of a pulmonary mechanics model was assessed using forward simulation from a novel error-stepping method and a proprietary Runge-Kutta method. The objective surfaces were compared via the identified parameter discrepancy generated in a Monte Carlo simulation and the local smoothness of the objective surfaces they generate. The error-stepping method generated significantly smoother error surfaces in each of the cases tested (p<0.0001) and more accurate model parameter estimates than the Runge-Kutta method in three of the four cases tested (p<0.0001), despite a 75% reduction in computational cost. Of note, parameter discrepancy in most cases was limited to a particular oblique plane, indicating that a non-intuitive multi-parameter trade-off was occurring. The error-stepping method consistently improved on or equalled the outcomes of the Runge-Kutta time-integration method for forward simulations of the pulmonary mechanics model. This study indicates that accurate parameter identification relies on accurate definition of the local objective function, and that parameter trade-off can occur on oblique planes, resulting in prematurely halted parameter convergence. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  13. Confidence assignment for mass spectrometry based peptide identifications via the extreme value distribution.

    PubMed

    Alves, Gelio; Yu, Yi-Kuo

    2016-09-01

    There is a growing trend for biomedical researchers to extract evidence and draw conclusions from mass spectrometry based proteomics experiments, the cornerstone of which is peptide identification. Inaccurate assignments of peptide identification confidence thus may have far-reaching and adverse consequences. Although some peptide identification methods report accurate statistics, they have been limited to certain types of scoring function. The extreme value statistics based method, while more general in the scoring functions it allows, demands accurate parameter estimates and requires, at least in its original design, excessive computational resources. Improving the parameter estimate accuracy and reducing the computational cost for this method has two advantages: it provides another feasible route to accurate significance assessment, and it could provide reliable statistics for scoring functions yet to be developed. We have formulated and implemented an efficient algorithm for calculating the extreme value statistics for peptide identification applicable to various scoring functions, bypassing the need for searching large random databases. The source code, implemented in C++ on a Linux system, is available for download at ftp://ftp.ncbi.nlm.nih.gov/pub/qmbp/qmbp_ms/RAId/RAId_Linux_64Bit. Contact: yyu@ncbi.nlm.nih.gov. Supplementary data are available at Bioinformatics online. Published by Oxford University Press 2016. This work is written by US Government employees and is in the public domain in the US.
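    The confidence assignment rests on the Gumbel (type-I extreme value) form for the best score among many random matches. A generic sketch of the idea, not RAId's actual algorithm: calibrate the Gumbel location and scale from simulated maxima, then read off a p-value for an observed score:

```python
import math
import random

random.seed(1)

def gumbel_fit(maxima):
    # Method-of-moments fit of a Gumbel (type-I extreme value) law:
    # beta = s*sqrt(6)/pi, mu = mean - gamma*beta (Euler-Mascheroni gamma).
    n = len(maxima)
    mean = sum(maxima) / n
    var = sum((x - mean) ** 2 for x in maxima) / n
    beta = math.sqrt(6.0 * var) / math.pi
    mu = mean - 0.5772156649 * beta
    return mu, beta

def gumbel_pvalue(score, mu, beta):
    # P(best random-match score >= observed score) under the Gumbel law.
    return 1.0 - math.exp(-math.exp(-(score - mu) / beta))

# Best-of-100 standard-normal scores stand in for the best score of a
# peptide against a random database; 2000 such maxima calibrate the tail.
maxima = [max(random.gauss(0.0, 1.0) for _ in range(100)) for _ in range(2000)]
mu, beta = gumbel_fit(maxima)
p_value = gumbel_pvalue(4.5, mu, beta)
```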

  14. Examining ERP correlates of recognition memory: Evidence of accurate source recognition without recollection

    PubMed Central

    Addante, Richard J.; Ranganath, Charan; Yonelinas, Andrew P.

    2012-01-01

    Recollection is typically associated with high recognition confidence and accurate source memory. However, subjects sometimes make accurate source memory judgments even for items that are not confidently recognized, and it is not known whether these responses are based on recollection or some other memory process. In the current study, we measured event related potentials (ERPs) while subjects made item and source memory confidence judgments in order to determine whether recollection supported accurate source recognition responses for items that were not confidently recognized. In line with previous studies, we found that recognition memory was associated with two ERP effects: an early on-setting FN400 effect, and a later parietal old-new effect [Late Positive Component (LPC)], which have been associated with familiarity and recollection, respectively. The FN400 increased gradually with item recognition confidence, whereas the LPC was only observed for highly confident recognition responses. The LPC was also related to source accuracy, but only for items that had received a high confidence item recognition response; accurate source judgments to items that were less confidently recognized did not exhibit the typical ERP correlate of recollection or familiarity, but rather showed a late, broadly distributed negative ERP difference. The results indicate that accurate source judgments of episodic context can occur even when recollection fails. PMID:22548808

  15. Parameter Estimation for GRACE-FO Geometric Ranging Errors

    NASA Astrophysics Data System (ADS)

    Wegener, H.; Mueller, V.; Darbeheshti, N.; Naeimi, M.; Heinzel, G.

    2017-12-01

    Onboard GRACE-FO, the novel Laser Ranging Instrument (LRI) serves as a technology demonstrator, but it is a fully functional instrument providing an additional high-precision measurement of the primary mission observable: the biased range between the two spacecraft. Its two largest expected error sources are laser frequency noise and tilt-to-length (TTL) coupling. While not much can be done about laser frequency noise, the mechanics of the TTL error are widely understood. They depend, however, on unknown parameters. In order to improve the quality of the ranging data, it is hence essential to accurately estimate these parameters and remove the resulting TTL error from the data. Means to do so will be discussed. In particular, we will address the possibility of using calibration maneuvers, the utility of the attitude information provided by the LRI via Differential Wavefront Sensing (DWS), and the benefit of combining LRI ranging data with data from the established microwave ranging instrument.
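    To make the TTL estimation idea concrete: if the coupling is modeled as linear in the attitude angles sensed via DWS, its coefficients follow from ordinary least squares, after which the fitted contribution can be subtracted from the range. Everything numerical below (coefficients, noise levels, signal shapes) is invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
t = np.arange(n)
pitch = rng.normal(0.0, 1e-4, n)   # rad; attitude jitter as sensed via DWS
yaw = rng.normal(0.0, 1e-4, n)

# Synthetic biased range: slow true signal + linear TTL coupling + noise.
# The coupling coefficients (m/rad) are invented for this sketch.
c_pitch, c_yaw = 0.3, -0.2
true_range = 1e-6 * np.sin(2 * np.pi * t / 4000.0)
rho = true_range + c_pitch * pitch + c_yaw * yaw + rng.normal(0.0, 1e-9, n)

# Because the attitude jitter is uncorrelated with the slow orbital
# signal, least squares against the DWS channels recovers the TTL
# coefficients, and the fitted contribution can be subtracted.
A = np.column_stack([pitch, yaw])
coef, *_ = np.linalg.lstsq(A, rho, rcond=None)
rho_corrected = rho - A @ coef
```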

  16. Statistics of optimal information flow in ensembles of regulatory motifs

    NASA Astrophysics Data System (ADS)

    Crisanti, Andrea; De Martino, Andrea; Fiorentino, Jonathan

    2018-02-01

    Genetic regulatory circuits universally cope with different sources of noise that limit their ability to coordinate input and output signals. In many cases, optimal regulatory performance can be thought to correspond to configurations of variables and parameters that maximize the mutual information between inputs and outputs. Since the mid-2000s, such optima have been well characterized in several biologically relevant cases. Here we use methods of statistical field theory to calculate the statistics of the maximal mutual information (the "capacity") achievable by tuning the input variable only in an ensemble of regulatory motifs, such that a single controller regulates N targets. Assuming (i) sufficiently large N , (ii) quenched random kinetic parameters, and (iii) small noise affecting the input-output channels, we can accurately reproduce numerical simulations both for the mean capacity and for the whole distribution. Our results provide insight into the inherent variability in effectiveness occurring in regulatory systems with heterogeneous kinetic parameters.
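    The "capacity" referred to here is the familiar channel capacity, which for a discrete channel can be computed by the Blahut-Arimoto iteration. A self-contained sketch (our own minimal implementation, checked against the binary symmetric channel, not the field-theoretic calculation of the paper):

```python
import numpy as np

def mutual_information_bits(p, P):
    # I(X;Y) for input distribution p and channel matrix P[x, y] = P(y|x).
    q = p @ P
    safe = np.where(P > 0, P, 1.0)
    terms = np.where(P > 0, P * np.log2(safe / q), 0.0)
    return float(np.sum(p[:, None] * terms))

def capacity_bits(P, n_iter=2000):
    # Blahut-Arimoto iteration for the capacity max_p I(X;Y), in bits.
    p = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(n_iter):
        q = p @ P
        safe = np.where(P > 0, P, 1.0)
        d = np.sum(np.where(P > 0, P * np.log(safe / q), 0.0), axis=1)
        c = np.exp(d)                  # exp of per-input KL divergences
        p = p * c / np.dot(p, c)       # multiplicative update
    return mutual_information_bits(p, P)

# Binary symmetric channel with crossover 0.1: C = 1 - H2(0.1) ≈ 0.531 bits.
bsc = np.array([[0.9, 0.1], [0.1, 0.9]])
C = capacity_bits(bsc)
```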

  17. Experimental Identification and Characterization of Multirotor UAV Propulsion

    NASA Astrophysics Data System (ADS)

    Kotarski, Denis; Krznar, Matija; Piljek, Petar; Simunic, Nikola

    2017-07-01

    In this paper, an experimental procedure for the identification and characterization of multirotor Unmanned Aerial Vehicle (UAV) propulsion is presented. The propulsion configuration needs to be defined precisely in order to achieve the required flight performance. Based on an accurate dynamic model and empirical measurements of the physical parameters of multirotor propulsion, it is possible to design diverse configurations with different characteristics for various purposes. As a case study, we investigated design considerations for a micro indoor multirotor suitable for control algorithm implementation in a structured environment. It consists of an open-source autopilot, sensors for indoor flight, off-the-shelf propulsion components and a frame. A series of experiments was conducted to show the process of parameter identification and the procedure for analysis and propulsion characterization. Additionally, we explore battery performance in terms of mass and specific energy. Experimental results show the identified and estimated propulsion parameters, through which blade element theory is verified.
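    A minimal version of the propulsion-characterization step: static thrust measurements are regressed against the blade-element/momentum form T = k_T·ω², which is linear in the thrust coefficient. The numbers below are synthetic stand-ins for test-stand data, and the 1.2 kg vehicle mass is an assumption for the usage example:

```python
import numpy as np

rng = np.random.default_rng(3)

# Static test-stand data: rotor speed (rad/s) vs thrust (N). Synthetic
# numbers standing in for real measurements.
omega = np.linspace(200.0, 900.0, 15)
k_true = 1.8e-5                      # N/(rad/s)^2, illustrative
thrust = k_true * omega ** 2 + rng.normal(0.0, 0.05, omega.size)

# T = k_T * omega^2 is linear in k_T, so a single least-squares ratio
# identifies the thrust coefficient.
k_est = float(np.sum(thrust * omega ** 2) / np.sum(omega ** 4))

# Predicted hover speed per rotor for a 1.2 kg quadrotor (assumed mass).
omega_hover = float(np.sqrt((1.2 * 9.81 / 4) / k_est))
```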

  18. A New Network-Based Approach for the Earthquake Early Warning

    NASA Astrophysics Data System (ADS)

    Alessandro, C.; Zollo, A.; Colombelli, S.; Elia, L.

    2017-12-01

    Here we propose a new method which allows issuing an early warning based upon the real-time mapping of the Potential Damage Zone (PDZ), i.e. the epicentral area where the peak ground velocity is expected to exceed the damaging or strong shaking levels, with no assumption about the earthquake rupture extent and spatial variability of ground motion. The system includes techniques for a refined estimation of the main source parameters (earthquake location and magnitude) and for an accurate prediction of the expected ground shaking level. The system processes the 3-component, real-time ground acceleration and velocity data streams at each station. For stations providing high quality data, the characteristic P-wave period (τc) and the P-wave displacement, velocity and acceleration amplitudes (Pd, Pv and Pa) are jointly measured on a progressively expanded P-wave time window. The evolutionary estimate of these parameters at stations around the source allows prediction of the geometry and extent of the PDZ, and also of the lower shaking intensity regions at larger epicentral distances. This is done by correlating the measured P-wave amplitude with the Peak Ground Velocity (PGV) and Instrumental Intensity (IMM) and by interpolating the measured and predicted P-wave amplitudes on a dense spatial grid, including the nodes of the accelerometer/velocimeter array deployed in the earthquake source area. Depending on the network density and spatial source coverage, this method naturally accounts for effects related to the earthquake rupture extent (e.g. source directivity) and spatial variability of strong ground motion related to crustal wave propagation and site amplification. We have tested this system by a retrospective analysis of three earthquakes: the 2016 central Italy Mw 6.5, the 2008 Iwate-Miyagi Mw 6.9 and the 2011 Tohoku Mw 9.0 events.
    Source parameter characterization is stable and reliable, and the intensity maps show extended-source effects consistent with kinematic fracture models of the events.
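    The two early-warning observables named above can be computed in a few lines. A sketch using the standard definitions of τc and Pd on a synthetic displacement window; the real pipeline's filtering and progressively expanding windows are omitted:

```python
import numpy as np

def tau_c_and_pd(u, dt):
    # Characteristic P-wave period and peak displacement of a window.
    # tau_c = 2*pi*sqrt(int u^2 dt / int v^2 dt), with v = du/dt obtained
    # here by numerical differentiation; Pd is the peak absolute
    # displacement. Sketch only: no filtering or window growth.
    v = np.gradient(u, dt)
    tau_c = 2.0 * np.pi * np.sqrt(np.sum(u ** 2) / np.sum(v ** 2))
    pd = float(np.max(np.abs(u)))
    return float(tau_c), pd

# Synthetic 1 Hz displacement pulse: tau_c should recover about 1 s.
dt = 0.005
t = np.arange(0.0, 3.0, dt)
u = 0.02 * np.sin(2.0 * np.pi * t)      # metres, illustrative
tau_c, pd = tau_c_and_pd(u, dt)
```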

  19. Zemax simulations describing collective effects in transition and diffraction radiation.

    PubMed

    Bisesto, F G; Castellano, M; Chiadroni, E; Cianchi, A

    2018-02-19

    Transition and diffraction radiation from charged particles is commonly used for diagnostic purposes in accelerator facilities, as well as for THz sources for spectroscopy applications. Therefore, an accurate analysis of the emission process and the transport optics is crucial to properly characterize the source and precisely retrieve beam parameters. In this regard, we have developed a new algorithm, based on Zemax, to simulate both transition and diffraction radiation as generated by relativistic electron bunches, thereby considering collective effects. In particular, unlike previous works, we take into account the electron beam's physical size and transverse momentum, reproducing effects visible on the produced radiation that are not observable in a single-electron analysis. The simulation results have been compared with two experiments, showing excellent agreement.

  20. Predicting ESI/MS Signal Change for Anions in Different Solvents.

    PubMed

    Kruve, Anneli; Kaupmees, Karl

    2017-05-02

    LC/ESI/MS is a technique widely used for qualitative and quantitative analysis in various fields. However, quantification is currently possible only for compounds for which standard substances are available, as the ionization efficiency of different compounds in the ESI source differs by orders of magnitude. In this paper we present an approach for quantitative LC/ESI/MS analysis without standard substances. This approach relies on accurately predicting the ionization efficiencies in the ESI source with a model that uses physicochemical parameters of the analytes. Furthermore, the model has been made transferable between different mobile phases and instrument setups by using a suitable set of calibration compounds. This approach has been validated in both flow injection and chromatographic mode with gradient elution.

  1. Probing jets from young embedded sources

    NASA Astrophysics Data System (ADS)

    Nisini, Brunella

    2017-08-01

    Jets are intimately related to the process of star formation and disc accretion. Our present knowledge of this key ingredient in protostars mostly relies on observations of optical jets from T Tauri stars, where the original circumstellar envelope has already been cleared out. However, to understand how jets are originally formed and how their properties evolve with time, detailed observations of young accreting protostars, i.e. the class 0/I sources, are mandatory. The study of class 0/I jets will be revolutionised by JWST, which is able to penetrate the dusty envelopes of protostars with unprecedented sensitivity and resolution. However, complementary information on parameters inferred from lines in different excitation regimes, for at least a representative sample of a few bright sources, is essential for a correct interpretation of the JWST results. Here we propose to observe four prototype bright jets from class 0/I sources with the WFC3 in narrow band filters in order to acquire high angular resolution images in the [OI] 6300 Å, [FeII] 1.25 µm and [FeII] 1.64 µm lines. These images will be used to: 1) provide accurate extinction maps of the jets, which will be an important archival reference for any future observation of these jets; 2) measure key parameters such as the mass flux, the iron abundance and the jet collimation in the hot gas component of the jets. This information will provide an invaluable reference frame for comparison with similar parameters measured by JWST in a different gas regime. In addition, these observations will allow us to confront the properties of class 0/I jets with those of the more evolved T Tauri stars.

  2. Yield and depth Estimation of Selected NTS Nuclear and SPE Chemical Explosions Using Source Equalization by modeling Local and Regional Seismograms (Invited)

    NASA Astrophysics Data System (ADS)

    Saikia, C. K.; Roman-nieves, J. I.; Woods, M. T.

    2013-12-01

    Source parameters of nuclear and chemical explosions are often estimated by matching either the corner frequency and spectral level of a single event or the spectral ratio when spectra from two events are available with known source parameters for one. In this study, we propose an alternative method in which waveforms from two or more events can be simultaneously equalized by setting the differential of the processed seismograms at one station from any two individual events to zero. The method involves convolving the equivalent Mueller-Murphy displacement source time function (MMDSTF) of one event with the seismogram of the second event and vice versa, and then computing their difference seismogram. The MMDSTF is computed at the elastic radius, including both near- and far-field terms. For this method to yield accurate source parameters, an inherent assumption is that the Green's functions for any paired events from the source to a receiver are the same. Within the frequency limit of the seismic data this is a reasonable assumption, as concluded from a comparison of Green's functions computed for flat-earth models at source depths ranging from 100 m to 1 km. Frequency domain analysis of the initial P wave is, however, sensitive to the depth phase interaction and, if tracked meticulously, can help estimate the event depth. We applied this method to the local waveforms recorded from the three SPE shots and precisely determined their yields. These high-frequency seismograms exhibit significant lateral path effects in spectrogram analysis and 3D numerical computations, but the source equalization technique is independent of any such variation as long as the instrument characteristics are well preserved. We are currently estimating the uncertainty in the derived source parameters assuming the yields of the SPE shots are unknown. We also collected regional waveforms from 95 NTS explosions at regional stations ALQ, ANMO, CMB, COR, JAS, LON, PAS, PFO and RSSD.
    We are currently employing a station-based analysis using the equalization technique to estimate the depths and yields of many events relative to those of the announced explosions, and to develop their relationship with the Mw and Mo of the NTS explosions.

  3. Approach to identifying pollutant source and matching flow field

    NASA Astrophysics Data System (ADS)

    Liping, Pang; Yu, Zhang; Hongquan, Qu; Tao, Hu; Wei, Wang

    2013-07-01

    Accidental pollution events often threaten people's health and lives, and it is necessary to identify a pollutant source rapidly so that prompt actions can be taken to prevent the spread of pollution. This identification, however, is a difficult inverse problem. This paper carries out some studies on this issue. An approach using single-sensor information with noise was developed to identify a sudden continuous emission of a trace pollutant source in a steady velocity field. This approach first compares the characteristic distance of the measured concentration sequence to multiple hypothetical measured concentration sequences at the sensor position, which are obtained from multiple hypotheses over the three source parameters. Source identification is then achieved by a global search for the optimal values, with the objective function being the maximum location probability. Considering the large computational load resulting from this global search, a local fine-mesh source search method based on a priori coarse-mesh location probabilities is further used to improve the efficiency of identification. The studies have shown that the flow field has a very important influence on source identification. Therefore, we also discuss the impact of non-matching flow fields with estimation deviation on identification. Based on this analysis, a method for matching an accurate flow field is presented to improve the accuracy of identification. In order to verify the practical application of the above methods, an experimental system simulating a sudden pollution process in a steady flow field was set up and experiments were conducted with a known diffusion coefficient. The studies showed that the three parameters of the pollutant source in the experiment (position, emission strength and initial emission time) can be estimated by using the combined flow-field matching and source identification method.
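    The multiple-hypothesis search over the three source parameters can be illustrated with a deliberately simplified forward model: a 1-D instantaneous release (in place of the paper's continuous emission) observed at one noisy sensor, with a coarse grid search for position, strength and release time:

```python
import numpy as np

rng = np.random.default_rng(4)
U, D = 0.5, 0.1      # flow speed (m/s) and diffusivity (m^2/s), assumed known

def conc(t, x_sensor, x0, mass, t0):
    # 1-D advection-diffusion solution for an instantaneous point release
    # (a deliberate simplification of the paper's continuous emission).
    tau = t - t0
    c = np.zeros_like(t)
    ok = tau > 0
    c[ok] = (mass / np.sqrt(4.0 * np.pi * D * tau[ok])
             * np.exp(-((x_sensor - x0) - U * tau[ok]) ** 2
                      / (4.0 * D * tau[ok])))
    return c

t = np.linspace(0.0, 60.0, 200)
x_sensor = 10.0
truth = (2.0, 1.0, 5.0)              # source position, strength, release time
obs = conc(t, x_sensor, *truth) + rng.normal(0.0, 0.002, t.size)

# Coarse grid search over the three source parameters, mirroring the
# multiple-hypothesis comparison at a single sensor.
candidates = [(x0, m, t0)
              for x0 in np.linspace(0.0, 5.0, 26)
              for m in np.linspace(0.5, 2.0, 16)
              for t0 in np.linspace(0.0, 10.0, 21)]
best = min(candidates, key=lambda g: np.sum((conc(t, x_sensor, *g) - obs) ** 2))
```

A fine-mesh refinement around `best` would mirror the paper's two-stage coarse/fine search.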

  4. System Identification and Verification of Rotorcraft UAVs

    NASA Astrophysics Data System (ADS)

    Carlton, Zachary M.

    The task of a controls engineer is to design and implement control logic. To complete this task, it helps tremendously to have an accurate model of the system to be controlled. Obtaining a very accurate system model is not trivial, as much time and money are usually associated with the development of such a model. A typical physics-based approach can require hundreds of hours of flight time. In an iterative process, the model is tuned in such a way that it accurately models the physical system's response. This process becomes even more complicated for unstable and highly non-linear systems such as the dynamics of rotorcraft. An alternative approach to solving this problem is to extract an accurate model by analyzing the frequency response of the system. This process involves recording the system's responses over a frequency range of input excitations. From these data, an accurate system model can then be deduced. Furthermore, it has been shown that with the use of the software package CIFER® (Comprehensive Identification from FrEquency Responses), this process can both greatly reduce the cost of modeling a dynamic system and produce very accurate results. The topic of this thesis is to apply CIFER® to a quadcopter to extract a system model for the hover flight condition. The quadcopter itself is comprised of off-the-shelf components, with a Pixhack flight controller board running open-source ArduPilot controller logic. In this thesis, both the closed- and open-loop systems are identified. The model is then compared to dissimilar flight data and verified in the time domain. Additionally, the ESC (Electronic Speed Controller) motor/rotor subsystem, which comprises all the vehicle's actuators, is also identified. This process required the development of a test bench environment, which included a GUI (Graphical User Interface), data pre- and post-processing, as well as augmentation of the flight controller source code.
    This code augmentation allowed proper data-logging rates for all needed parameters.
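    The frequency-response idea, stripped to one frequency point: excite a plant with a sine, discard the transient, and fit sine/cosine components to obtain the gain. The first-order plant below is a stand-in for flight data; CIFER® itself does far more (windowing, coherence weighting, MIMO conditioning):

```python
import numpy as np

def simulate_first_order(u, dt, K=2.0, tau=0.5):
    # Euler integration of tau*y' + y = K*u; stands in for recorded
    # flight-test response data.
    y = np.zeros_like(u)
    for k in range(1, u.size):
        y[k] = y[k - 1] + dt * (K * u[k - 1] - y[k - 1]) / tau
    return y

def measured_gain(f_hz, dt=0.001, T=20.0):
    # Single-frequency gain estimate: sine excitation, drop the first
    # half of the record (transient), then fit sin/cos components by
    # least squares.
    t = np.arange(0.0, T, dt)
    u = np.sin(2.0 * np.pi * f_hz * t)
    y = simulate_first_order(u, dt)
    keep = t > T / 2
    A = np.column_stack([np.sin(2.0 * np.pi * f_hz * t[keep]),
                         np.cos(2.0 * np.pi * f_hz * t[keep])])
    coef, *_ = np.linalg.lstsq(A, y[keep], rcond=None)
    return float(np.hypot(coef[0], coef[1]))

# First-order theory predicts |H(f)| = K / sqrt(1 + (2*pi*f*tau)^2).
g = measured_gain(0.3)
```

Sweeping `measured_gain` over a log-spaced frequency grid yields the Bode magnitude curve that a model is then fitted to.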

  5. An architecture for efficient gravitational wave parameter estimation with multimodal linear surrogate models

    NASA Astrophysics Data System (ADS)

    O'Shaughnessy, Richard; Blackman, Jonathan; Field, Scott E.

    2017-07-01

    The recent direct observation of gravitational waves has further emphasized the desire for fast, low-cost, and accurate methods to infer the parameters of gravitational wave sources. Due to expense in waveform generation and data handling, the cost of evaluating the likelihood function limits the computational performance of these calculations. Building on recently developed surrogate models and a novel parameter estimation pipeline, we show how to quickly generate the likelihood function as an analytic, closed-form expression. Using a straightforward variant of a production-scale parameter estimation code, we demonstrate our method using surrogate models of effective-one-body and numerical relativity waveforms. Our study is the first time these models have been used for parameter estimation and one of the first ever parameter estimation calculations with multi-modal numerical relativity waveforms, which include all ℓ ≤ 4 modes. Our grid-free method enables rapid parameter estimation for any waveform with a suitable reduced-order model. The methods described in this paper may also find use in other data analysis studies, such as vetting coincident events or the computation of the coalescing-compact-binary detection statistic.

  6. Simultaneous Estimation of Microphysical Parameters and Atmospheric State Variables With Radar Data and Ensemble Square-root Kalman Filter

    NASA Astrophysics Data System (ADS)

    Tong, M.; Xue, M.

    2006-12-01

    An important source of model error for convective-scale data assimilation and prediction is microphysical parameterization. This study investigates the possibility of estimating up to five fundamental microphysical parameters, which are closely involved in the definition of the drop size distributions of microphysical species in a commonly used single-moment ice microphysics scheme, using radar observations and the ensemble Kalman filter method. The five parameters include the intercept parameters for rain, snow and hail/graupel, and the bulk densities of hail/graupel and snow. Parameter sensitivity and identifiability are first examined. The ensemble square-root Kalman filter (EnSRF) is employed for simultaneous state and parameter estimation. Observing system simulation experiments are performed for a model-simulated supercell storm, in which the five microphysical parameters are estimated individually or in different combinations, starting from different initial guesses. When error exists in only one of the microphysical parameters, the parameter can be successfully estimated without exception. The estimation of multiple parameters is found to be less robust, with the end results of estimation being sensitive to the realization of the initial parameter perturbation. This is believed to be due to the reduced parameter identifiability and the existence of non-unique solutions. The results of state estimation are, however, always improved when simultaneous parameter estimation is performed, even when the estimated parameter values are not accurate.
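    The augmented-state idea behind ensemble Kalman parameter estimation can be shown on a scalar toy problem: an ensemble of parameter values is repeatedly updated from noisy observations of a model output. The sketch uses a perturbed-observation ensemble Kalman update rather than the serial square-root filter of the study, and a made-up exponential-decay model:

```python
import numpy as np

rng = np.random.default_rng(6)

theta_true = 0.5          # "true" decay parameter of the toy model
r_obs = 0.05              # observation noise standard deviation
theta_ens = rng.normal(1.0, 0.3, 200)   # initial parameter ensemble

for k in range(1, 31):
    t_k = 0.2 * k
    h = np.exp(-theta_ens * t_k)                 # per-member predicted obs
    y_obs = np.exp(-theta_true * t_k) + rng.normal(0.0, r_obs)
    # Ensemble Kalman update with perturbed observations (a simpler
    # relative of the serial EnSRF used in the study).
    gain = np.cov(theta_ens, h)[0, 1] / (h.var(ddof=1) + r_obs ** 2)
    y_pert = y_obs + rng.normal(0.0, r_obs, theta_ens.size)
    theta_ens = theta_ens + gain * (y_pert - h)

theta_hat = float(theta_ens.mean())
spread = float(theta_ens.std(ddof=1))
```

The ensemble mean drifts toward the true parameter while the spread contracts, the same qualitative behaviour reported for the single-parameter experiments.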

  7. Groundwater Pollution Source Identification using Linked ANN-Optimization Model

    NASA Astrophysics Data System (ADS)

    Ayaz, Md; Srivastava, Rajesh; Jain, Ashu

    2014-05-01

    Groundwater is the principal source of drinking water in several parts of the world. Contamination of groundwater has become a serious health and environmental problem today. Human activities, including industrial and agricultural activities, are generally responsible for this contamination. Identification of the groundwater pollution source is a major step in groundwater pollution remediation. Complete knowledge of the pollution source in terms of its source characteristics is essential to adopt an effective remediation strategy. A groundwater pollution source is said to be identified completely when the source characteristics - location, strength and release period - are known. Identification of an unknown groundwater pollution source is an ill-posed inverse problem. It becomes more difficult for real field conditions, when the lag time between the first reading at the observation well and the time at which the source becomes active is not known. We developed a linked ANN-Optimization model for complete identification of an unknown groundwater pollution source. The model comprises two parts: an optimization model and an ANN model. The decision variables of the linked ANN-Optimization model contain the source location and release period of the pollution source. An objective function is formulated using the spatial and temporal data of observed and simulated concentrations, and then minimized to identify the pollution source parameters. The formulation of the objective function requires the lag time, which is not known. An ANN model with one hidden layer is trained using the Levenberg-Marquardt algorithm to find the lag time. Different combinations of source locations and release periods are used as inputs and the lag time is obtained as the output. Performance of the proposed model is evaluated for two- and three-dimensional cases with error-free and erroneous data.
    Erroneous data were generated by adding uniformly distributed random error (error level 0-10%) to the analytically computed concentration values. The main advantage of the proposed model is that it requires only the upper half of the breakthrough curve and is capable of predicting source parameters when the lag time is not known. Linking the ANN model with the proposed optimization model reduces the dimensionality of the decision variables of the optimization model by one, and hence the complexity of the optimization model is reduced. The results show that the proposed linked ANN-Optimization model is able to predict the source parameters accurately for error-free data. The proposed model was run several times to obtain the mean, standard deviation and interval estimate of the predicted parameters for observations with random measurement errors. It was observed that the mean values predicted by the model were quite close to the exact values. An increasing trend was observed in the standard deviation of the predicted values with increasing level of measurement error. The model appears to be robust and may be efficiently utilized to solve the inverse pollution source identification problem.

  8. Estimation of viscoelastic parameters in Prony series from shear wave propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jung, Jae-Wook; Hong, Jung-Wuk, E-mail: j.hong@kaist.ac.kr, E-mail: jwhong@alum.mit.edu; Lee, Hyoung-Ki

    2016-06-21

    To acquire accurate ultrasonic images, the mechanical properties of the soft tissue must be estimated precisely. This study investigates and estimates the viscoelastic properties of tissue by analyzing shear waves generated through an acoustic radiation force. The shear waves are sourced from a localized pushing force acting for a certain duration, and the generated waves travel horizontally. The wave velocities depend on the mechanical properties of the tissue, such as the shear modulus and viscoelastic properties; therefore, we can inversely calculate the properties of the tissue through parametric studies.
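    A useful property of the Prony representation is that, once the relaxation times are fixed, the relaxation modulus G(t) = G∞ + Σᵢ gᵢ exp(−t/τᵢ) is linear in the remaining coefficients, so they can be recovered by ordinary least squares. The sketch below uses invented moduli and relaxation times, not tissue measurements, and fits synthetic data rather than shear-wave records.

```python
import numpy as np

# Synthetic relaxation data from a known two-term Prony series
# (illustrative values, not measured tissue properties).
taus = np.array([0.01, 0.1])            # fixed relaxation times [s]
g_true = np.array([2.0e3, 1.0e3])       # Prony coefficients [Pa]
g_inf_true = 3.0e3                      # long-time shear modulus [Pa]

t = np.linspace(0.0, 0.5, 200)
G = g_inf_true + (g_true * np.exp(-t[:, None] / taus)).sum(axis=1)

# With the relaxation times fixed, G(t) = G_inf + sum_i g_i exp(-t/tau_i)
# is linear in (G_inf, g_i), so least squares recovers the coefficients.
A = np.column_stack([np.ones_like(t), np.exp(-t[:, None] / taus)])
coef, *_ = np.linalg.lstsq(A, G, rcond=None)
g_inf_fit, g_fit = coef[0], coef[1:]
```

    In the paper the inverse problem runs through simulated wave propagation rather than a direct relaxation curve, but the linear-in-coefficients structure is the same reason parametric studies over a small set of Prony parameters are tractable.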

  9. Surrogate Modeling of High-Fidelity Fracture Simulations for Real-Time Residual Strength Predictions

    NASA Technical Reports Server (NTRS)

    Spear, Ashley D.; Priest, Amanda R.; Veilleux, Michael G.; Ingraffea, Anthony R.; Hochhalter, Jacob D.

    2011-01-01

    A surrogate model methodology is described for predicting in real time the residual strength of flight structures with discrete-source damage. Starting with design of experiment, an artificial neural network is developed that takes as input discrete-source damage parameters and outputs a prediction of the structural residual strength. Target residual strength values used to train the artificial neural network are derived from 3D finite element-based fracture simulations. A residual strength test of a metallic, integrally-stiffened panel is simulated to show that crack growth and residual strength are determined more accurately in discrete-source damage cases by using an elastic-plastic fracture framework rather than a linear-elastic fracture mechanics-based method. Improving accuracy of the residual strength training data would, in turn, improve accuracy of the surrogate model. When combined, the surrogate model methodology and high-fidelity fracture simulation framework provide useful tools for adaptive flight technology.
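    The surrogate idea — train a small neural network on (damage parameters → residual strength) pairs generated offline, then evaluate it in real time — can be sketched with a hand-rolled one-hidden-layer network. The "high-fidelity" target function below is an invented smooth stand-in, not a fracture simulation, and the network and training loop are a minimal illustration rather than the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the finite element fracture simulations that generate
# training targets: residual strength as a smooth function of two
# damage parameters (normalized crack length a and notch depth b).
def residual_strength(a, b):
    return 1.0 - 0.5 * a**1.5 - 0.2 * a * b

X = rng.uniform(0.0, 1.0, size=(200, 2))       # design of experiments
y = residual_strength(X[:, 0], X[:, 1])[:, None]

# Minimal one-hidden-layer network trained by batch gradient descent.
W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(5000):
    H = np.tanh(X @ W1 + b1)                   # hidden activations
    pred = H @ W2 + b2
    err = pred - y
    gW2 = H.T @ err / len(X); gb2 = err.mean(0)
    dH = (err @ W2.T) * (1 - H**2)             # backprop through tanh
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
```

    Once trained, a forward pass costs a handful of matrix products, which is what makes the surrogate usable during flight while the expensive fracture simulations stay offline.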

  10. Surrogate Modeling of High-Fidelity Fracture Simulations for Real-Time Residual Strength Predictions

    NASA Technical Reports Server (NTRS)

    Spear, Ashley D.; Priest, Amanda R.; Veilleux, Michael G.; Ingraffea, Anthony R.; Hochhalter, Jacob D.

    2011-01-01

    A surrogate model methodology is described for predicting, during flight, the residual strength of aircraft structures that sustain discrete-source damage. Starting with design of experiment, an artificial neural network is developed that takes as input discrete-source damage parameters and outputs a prediction of the structural residual strength. Target residual strength values used to train the artificial neural network are derived from 3D finite element-based fracture simulations. Two ductile fracture simulations are presented to show that crack growth and residual strength are determined more accurately in discrete-source damage cases by using an elastic-plastic fracture framework rather than a linear-elastic fracture mechanics-based method. Improving accuracy of the residual strength training data does, in turn, improve accuracy of the surrogate model. When combined, the surrogate model methodology and high-fidelity fracture simulation framework provide useful tools for adaptive flight technology.

  11. VizieR Online Data Catalog: Multiwavelength photometry of CDFS X-ray sources (Brusa+, 2009)

    NASA Astrophysics Data System (ADS)

    Brusa, M.; Fiore, F.; Santini, P.; Grazian, A.; Comastri, A.; Zamorani, G.; Hasinger, G.; Merloni, A.; Civano, F.; Fontana, A.; Mainieri, V.

    2010-03-01

    The co-evolution of host galaxies and the active black holes residing in their centres is one of the most important topics in modern observational cosmology. Here we present a study of the properties of obscured active galactic nuclei (AGN) detected in the CDFS 1 Ms observation and of their host galaxies. We limited the analysis to the MUSIC area, for which deep K-band observations obtained with ISAAC@VLT are available, ensuring accurate identifications of the counterparts of the X-ray sources as well as reliable determination of photometric redshifts and galaxy parameters, such as stellar masses and star formation rates. In particular, we: 1) refined the X-ray/infrared/optical association of 179 sources in the MUSIC area detected in the Chandra observation; 2) studied the host galaxies' observed and rest-frame colors and properties. (2 data files).

  12. Suitability of Organic Matter Surrogates to Predict Trihalomethane Formation in Drinking Water Sources

    PubMed Central

    Pifer, Ashley D.; Fairey, Julian L.

    2014-01-01

    Broadly applicable disinfection by-product (DBP) precursor surrogate parameters could be leveraged at drinking water treatment plants (DWTPs) to curb formation of regulated DBPs, such as trihalomethanes (THMs). In this study, dissolved organic carbon (DOC), ultraviolet absorbance at 254 nm (UV254), fluorescence excitation/emission wavelength pairs (IEx/Em), and the maximum fluorescence intensities (FMAX) of components from parallel factor (PARAFAC) analysis were evaluated as total THM formation potential (TTHMFP) precursor surrogate parameters. A diverse set of source waters from eleven DWTPs located within watersheds underlain by six different soil orders was coagulated with alum at pH 6, 7, and 8, resulting in 44 sample waters. DOC, UV254, IEx/Em, and FMAX values were measured to characterize dissolved organic matter in raw and treated waters, and THMs were quantified following formation potential tests with free chlorine. For the 44 sample waters, the linear TTHMFP correlation with UV254 was stronger (r2=0.89) than with I240/562 (r2=0.81, the strongest surrogate parameter from excitation/emission matrix peak picking), FMAX from a humic/fulvic acid-like PARAFAC component (r2=0.78), and DOC (r2=0.75). Results indicate that UV254 was the most accurate TTHMFP precursor surrogate parameter assessed for a diverse group of raw and alum-coagulated waters. PMID:24669183
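    The screening step here is plain linear regression: each candidate surrogate is regressed against TTHMFP and ranked by r². The numbers below are synthetic (an invented UV254-like absorbance and a linear response plus noise for 44 waters), used only to show the computation, not the study's data.

```python
import numpy as np

# Synthetic illustration of surrogate screening: generate UV254-like
# absorbances for 44 waters, a TTHMFP that tracks them linearly plus
# noise, and score the surrogate by its linear r^2 (all numbers invented).
rng = np.random.default_rng(1)
uv254 = rng.uniform(0.02, 0.3, 44)                 # absorbance [1/cm]
tthmfp = 900.0 * uv254 + rng.normal(0.0, 8.0, 44)  # [ug/L]

def r_squared(x, y):
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return 1.0 - resid.var() / y.var()

r2 = float(r_squared(uv254, tthmfp))
```

    Repeating this for each candidate (DOC, IEx/Em pairs, PARAFAC FMAX) and comparing the r² values reproduces the kind of ranking reported in the abstract.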

  13. Development of attenuation and diffraction corrections for linear and nonlinear Rayleigh surface waves radiating from a uniform line source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeong, Hyunjo, E-mail: hjjeong@wku.ac.kr; Cho, Sungjong; Zhang, Shuzeng

    2016-04-15

    In recent studies of nonlinear Rayleigh surface waves, harmonic generation measurements have been successfully employed to characterize material damage and microstructural changes, and have been found to be sensitive to early stages of the damage process. A nonlinearity parameter of Rayleigh surface waves was derived and is frequently measured to quantify the level of damage. The accurate measurement of the nonlinearity parameter generally requires making corrections for beam diffraction and medium attenuation. These effects are not generally known for nonlinear Rayleigh waves, and therefore were not properly considered in most previous studies. In this paper, the nonlinearity parameter for a Rayleigh surface wave is defined from the plane wave displacement solutions. We explicitly define the attenuation and diffraction corrections for fundamental and second harmonic Rayleigh wave beams radiated from a uniform line source. Attenuation corrections are obtained from the quasilinear theory of plane Rayleigh wave equations. To obtain closed-form expressions for diffraction corrections, multi-Gaussian beam (MGB) models are employed to represent the integral solutions derived from the quasilinear theory of the full two-dimensional wave equation without parabolic approximation. Diffraction corrections are presented for a couple of transmitter-receiver geometries, and the effects of making attenuation and diffraction corrections are examined through the simulation of nonlinearity parameter determination in a solid sample.
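    To see why an attenuation correction matters, consider the simpler quasilinear plane-wave case (an assumption here, standing in for the paper's Rayleigh-wave result): with attenuation coefficients α₁, α₂ for the fundamental and second harmonic, the second-harmonic amplitude grows as A₂(x) = (β k² A₁₀² / 8) · x · D(x), with D(x) = (e^(−α₂x) − e^(−2α₁x)) / ((2α₁ − α₂)x), and D → 1 in the lossless limit. All numerical values below are invented for illustration.

```python
import numpy as np

def attenuation_factor(x, a1, a2):
    # D(x) from the quasilinear plane-wave solution; D -> 1 as a1, a2 -> 0.
    return (np.exp(-a2 * x) - np.exp(-2 * a1 * x)) / ((2 * a1 - a2) * x)

def second_harmonic(beta, k, A1_0, x, a1, a2):
    return beta * k**2 * A1_0**2 / 8.0 * x * attenuation_factor(x, a1, a2)

def beta_from_measurement(A2, k, A1_0, x, a1, a2):
    # Invert the relation above: the attenuation correction undoes D(x).
    return 8.0 * A2 / (k**2 * A1_0**2 * x * attenuation_factor(x, a1, a2))

beta_true, k, A1_0, x = 10.0, 2000.0, 1e-8, 0.05     # invented values
A2 = second_harmonic(beta_true, k, A1_0, x, a1=4.0, a2=12.0)
beta_est = beta_from_measurement(A2, k, A1_0, x, a1=4.0, a2=12.0)
beta_naive = 8.0 * A2 / (k**2 * A1_0**2 * x)          # no correction
```

    The uncorrected estimate beta_naive is biased low by exactly the factor D(x), which is the plane-wave analogue of the bias the paper's attenuation corrections remove; the diffraction corrections from the MGB models play the corresponding role for beam spreading.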

  14. Prediction of broadband ground-motion time histories: Hybrid low/high-frequency method with correlated random source parameters

    USGS Publications Warehouse

    Liu, P.; Archuleta, R.J.; Hartzell, S.H.

    2006-01-01

    We present a new method for calculating broadband time histories of ground motion based on a hybrid low-frequency/high-frequency approach with correlated source parameters. Using a finite-difference method we calculate low-frequency synthetics (< ∼1 Hz) in a 3D velocity structure. We also compute broadband synthetics in a 1D velocity model using a frequency-wavenumber method. The low frequencies from the 3D calculation are combined with the high frequencies from the 1D calculation by using matched filtering at a crossover frequency of 1 Hz. The source description, common to both the 1D and 3D synthetics, is based on correlated random distributions for the slip amplitude, rupture velocity, and rise time on the fault. This source description allows for the specification of source parameters independent of any a priori inversion results. In our broadband modeling we include correlation between slip amplitude, rupture velocity, and rise time, as suggested by dynamic fault modeling. The method of using correlated random source parameters is flexible and can be easily modified to adjust to our changing understanding of earthquake ruptures. A realistic attenuation model is common to both the 3D and 1D calculations that form the low- and high-frequency components of the broadband synthetics. The value of Q is a function of the local shear-wave velocity. To produce more accurate high-frequency amplitudes and durations, the 1D synthetics are corrected with a randomized, frequency-dependent radiation pattern. The 1D synthetics are further corrected for local site and nonlinear soil effects by using a 1D nonlinear propagation code and generic velocity structure appropriate for the site’s National Earthquake Hazards Reduction Program (NEHRP) site classification. The entire procedure is validated by comparison with the 1994 Northridge, California, strong ground motion data set.
The bias and error found here for response spectral acceleration are similar to the best results that have been published by others for the Northridge rupture.
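    The combination step itself is simple in the frequency domain: keep the 3D synthetic below the crossover, the 1D synthetic above it, and sum. The sketch below uses two invented narrowband toy signals in place of actual synthetics, and a hard spectral split in place of the paper's matched filters.

```python
import numpy as np

dt = 0.01
t = np.arange(0.0, 40.0, dt)
lowfreq = np.sin(2 * np.pi * 0.3 * t)          # stands in for 3D FD synthetic
highfreq = 0.3 * np.sin(2 * np.pi * 5.0 * t)   # stands in for 1D f-k synthetic

def crossover_combine(low, high, dt, fc=1.0):
    # Hard split at the crossover frequency fc: low-pass one record,
    # high-pass the other, and sum in the frequency domain.
    freqs = np.fft.rfftfreq(len(low), dt)
    lp, hp = np.fft.rfft(low), np.fft.rfft(high)
    lp[freqs > fc] = 0.0                        # keep < fc from the 3D run
    hp[freqs <= fc] = 0.0                       # keep > fc from the 1D run
    return np.fft.irfft(lp + hp, n=len(low))

broadband = crossover_combine(lowfreq, highfreq, dt)
```

    Because the two toy signals are spectrally separated across the 1 Hz crossover, the combined record here is simply their sum; with real synthetics the matched filters control how the two bands blend near the crossover.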

  15. Localized N20 Component of Somatosensory Evoked Magnetic Fields in Frontoparietal Brain Tumor Patients Using Noise-Normalized Approaches.

    PubMed

    Elaina, Nor Safira; Malik, Aamir Saeed; Shams, Wafaa Khazaal; Badruddin, Nasreen; Abdullah, Jafri Malin; Reza, Mohammad Faruque

    2018-06-01

    We aimed to localize sensorimotor cortical activation in 10 patients with frontoparietal tumors using quantitative magnetoencephalography (MEG) with noise-normalized approaches. Somatosensory evoked magnetic fields (SEFs) were elicited in 10 patients with somatosensory tumors and in 10 control participants using electrical stimulation of the median nerve via the right and left wrists. We localized the N20m component of the SEFs using dynamic statistical parametric mapping (dSPM) and standardized low-resolution brain electromagnetic tomography (sLORETA) combined with 3D magnetic resonance imaging (MRI). The obtained coordinates were compared between groups. Finally, we statistically evaluated the N20m parameters across hemispheres using non-parametric statistical tests. The N20m sources were accurately localized to Brodmann area 3b in all members of the control group and in seven of the patients; in the remaining three patients, however, the sources were shifted to locations outside the primary somatosensory cortex (SI). Compared with the affected (tumor) hemispheres in the patient group, N20m amplitudes and the strengths of the current sources were significantly lower in the unaffected hemispheres and in both hemispheres of the control group. These results were consistent for both dSPM and sLORETA approaches. Tumors in the sensorimotor cortex lead to cortical functional reorganization and an increase in N20m amplitude and current-source strengths. Noise-normalized approaches for MEG analysis that are integrated with MRI show accurate and reliable localization of sensorimotor function.

  16. Estimating virus occurrence using Bayesian modeling in multiple drinking water systems of the United States.

    PubMed

    Varughese, Eunice A; Brinkman, Nichole E; Anneken, Emily M; Cashdollar, Jennifer L; Fout, G Shay; Furlong, Edward T; Kolpin, Dana W; Glassmeyer, Susan T; Keely, Scott P

    2018-04-01

    Drinking water treatment plants rely on purification of contaminated source waters to provide communities with potable water. One group of possible contaminants is enteric viruses. Measurement of viral quantities in environmental water systems is often performed using polymerase chain reaction (PCR) or quantitative PCR (qPCR). However, true values may be underestimated due to challenges involved in a multi-step viral concentration process and due to PCR inhibition. In this study, water samples were concentrated from 25 drinking water treatment plants (DWTPs) across the US to study the occurrence of enteric viruses in source water and removal after treatment. The five different types of viruses studied were adenovirus, norovirus GI, norovirus GII, enterovirus, and polyomavirus. Quantitative PCR was performed on all samples to determine presence or absence of these viruses in each sample. Ten DWTPs showed presence of one or more viruses in source water, with four DWTPs having treated drinking water testing positive. Furthermore, PCR inhibition was assessed for each sample using an exogenous amplification control, which indicated that all of the DWTP samples, including source and treated water samples, had some level of inhibition, confirming that inhibition plays an important role in PCR-based assessments of environmental samples. PCR inhibition measurements, viral recovery, and other assessments were incorporated into a Bayesian model to more accurately determine viral load in both source and treated water. Results of the Bayesian model indicated that viruses are present in source water and treated water. By using a Bayesian framework that incorporates inhibition, as well as many other parameters that affect viral detection, this study offers an approach for more accurately estimating the occurrence of viral pathogens in environmental waters. Published by Elsevier B.V.
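    The core idea — the observed qPCR signal underestimates the true concentration because of recovery losses and inhibition, and a Bayesian model folds those multipliers into the estimate — can be shown with a deliberately simplified grid posterior. The Poisson observation model, the fixed recovery and inhibition fractions, and all numbers below are invented stand-ins, not the paper's hierarchical model.

```python
import numpy as np

recovery = 0.20        # fraction surviving concentration steps (assumed known)
inhibition = 0.50      # fraction of signal surviving PCR inhibition
volume = 1.0           # assayed volume [L]
observed = 12          # detected genome copies (toy datum)

grid = np.linspace(1.0, 500.0, 2000)         # candidate true conc. [copies/L]
log_prior = np.zeros_like(grid)              # flat prior over the grid
lam = grid * volume * recovery * inhibition  # expected detected copies
log_like = observed * np.log(lam) - lam      # Poisson log-likelihood
                                             # (constant log(observed!) dropped)
log_post = log_prior + log_like
post = np.exp(log_post - log_post.max())
post /= post.sum()

posterior_mean = float((grid * post).sum())
```

    The posterior mean lands near observed/(recovery × inhibition) ≈ 120 copies/L, an order of magnitude above the raw count of 12, which is exactly the kind of underestimation the Bayesian framework is meant to correct.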

  17. Combining EEG and MEG for the Reconstruction of Epileptic Activity Using a Calibrated Realistic Volume Conductor Model

    PubMed Central

    Aydin, Ümit; Vorwerk, Johannes; Küpper, Philipp; Heers, Marcel; Kugel, Harald; Galka, Andreas; Hamid, Laith; Wellmer, Jörg; Kellinghaus, Christoph; Rampp, Stefan; Wolters, Carsten Hermann

    2014-01-01

    To increase the reliability for the non-invasive determination of the irritative zone in presurgical epilepsy diagnosis, we introduce here a new experimental and methodological source analysis pipeline that combines the complementary information in EEG and MEG, and apply it to data from a patient suffering from refractory focal epilepsy. Skull conductivity parameters in a six compartment finite element head model with brain anisotropy, constructed from individual MRI data, are estimated in a calibration procedure using somatosensory evoked potential (SEP) and field (SEF) data. These data are measured in a single run before acquisition of further runs of spontaneous epileptic activity. Our results show that even for single interictal spikes, volume conduction effects dominate over noise and need to be taken into account for accurate source analysis. While cerebrospinal fluid and brain anisotropy influence both modalities, only EEG is sensitive to skull conductivity, and conductivity calibration significantly reduces the difference in especially depth localization of both modalities, emphasizing its importance for combining EEG and MEG source analysis. On the other hand, localization differences which are due to the distinct sensitivity profiles of EEG and MEG persist. In the case of a moderate error in skull conductivity, combined source analysis results can still profit from the different sensitivity profiles of EEG and MEG to accurately determine location, orientation and strength of the underlying sources. Conversely, significant errors in skull modeling are reflected in EEG reconstruction errors and could reduce the goodness of fit to combined datasets. For combined EEG and MEG source analysis, we therefore recommend calibrating skull conductivity using additionally acquired SEP/SEF data. PMID:24671208

  18. Investigating the value of passive microwave observations for monitoring volcanic eruption source parameters

    NASA Astrophysics Data System (ADS)

    Montopoli, Mario; Cimini, Domenico; Marzano, Frank

    2016-04-01

    Volcanic eruptions inject both gas and solid particles into the atmosphere. Solid particles are made of mineral fragments of different sizes (from a few microns to meters), generally referred to as tephra. Tephra from volcanic eruptions has enormous impacts on social and economic activities through its effects on the environment, climate, public health, and air traffic. The size, density and shape of a particle determine its fall velocity and thus its residence time in the atmosphere. Larger particles tend to fall quickly in the proximity of the volcano, while smaller particles may remain suspended for several days and thus may be transported by winds for thousands of km. Thus, the impact of such hazards involves local- as well as large-scale effects. Local effects involve mostly the large-sized particles, while large-scale effects are caused by the transport of the finest ejected tephra (ash) through the atmosphere. Forecasts of ash paths in the atmosphere are routinely run after eruptions using dispersion models. These models make use of meteorological and volcanic source parameters. The former are usually available as output of numerical weather prediction models or large-scale reanalysis. Source parameters characterize the volcanic eruption near the vent; these are mainly the ash mass concentration along the vertical column and the top altitude of the volcanic plume, which is strictly related to the flux of the mass ejected at the emission source. These parameters should be known accurately and continuously; otherwise, strong assumptions are usually needed, leading to large uncertainty in the dispersion forecasts. However, direct observations during an eruption are typically dangerous and impractical. Thus, satellite remote sensing is often exploited to monitor volcanic emissions, using visible (VIS) and infrared (IR) channels available on both Low Earth Orbit (LEO) and Geostationary Earth Orbit (GEO) satellites.
VIS and IR satellite imagery are very useful for monitoring the dispersed fine-ash cloud, but tend to saturate near the source due to the strong optical extinction of ash cloud top layers. Conversely, observations at microwave (MW) channels from LEO satellites have been demonstrated to carry additional information near the volcano source due to their relatively lower opacity. This feature makes satellite MW complementary to IR radiometry for estimating source parameters close to the volcano emission, at the cost of coarser spatial resolution. The presentation shows the value of passive MW observations for the detection and quantitative retrieval of volcanic emission source parameters through the investigation of notable case studies, such as the eruptions of Grímsvötn (Iceland, May 2011) and Calbuco (Chile, April 2015), observed by the Special Sensor Microwave Imager/Sounder and the Advanced Technology Microwave Sounder.

  19. Effective atomic numbers and electron densities of some human tissues and dosimetric materials for mean energies of various radiation sources relevant to radiotherapy and medical applications

    NASA Astrophysics Data System (ADS)

    Kurudirek, Murat

    2014-09-01

    Effective atomic numbers, Zeff, and electron densities, neff, are convenient parameters used to characterise the radiation response of a multi-element material in many technical and medical applications. Accurate values of these physical parameters provide essential data in medical physics. In the present study, the effective atomic numbers and electron densities have been calculated for some human tissues and dosimetric materials such as Adipose Tissue (ICRU-44), Bone Cortical (ICRU-44), Brain Grey/White Matter (ICRU-44), Breast Tissue (ICRU-44), Lung Tissue (ICRU-44), Soft Tissue (ICRU-44), LiF TLD-100H, TLD-100, Water, Borosilicate Glass, PAG (Gel Dosimeter), Fricke (Gel Dosimeter) and OSL (Aluminium Oxide) using mean photon energies, Em, of various radiation sources. The radiation sources used are Pd-103, Tc-99, Ra-226, I-131, Ir-192, Co-60, 30 kVp, 40 kVp, 50 kVp (Intrabeam, Carl Zeiss Meditec) and 6 MV (Mohan-6 MV) sources. The Em values were then used to calculate Zeff and neff of the tissues and dosimetric materials for the various radiation sources. Different calculation methods for Zeff, such as the direct method, the interpolation method and the Auto-Zeff computer program, were used, and agreements and disagreements between these methods are presented and discussed. It has been observed that at higher Em values the agreement between the adopted methods is quite satisfactory (Dif. < 5%).
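    As a concrete illustration of the simplest route to Zeff, the classic power-law form weights each element's Z by its fractional electron content with an exponent of 2.94. This is a simplified stand-in for the methods compared in the paper (the direct method there uses mass attenuation coefficients at the relevant Em), offered only to make the quantity tangible.

```python
import numpy as np

def z_eff_power_law(elements, m=2.94):
    # elements: list of (Z, number_of_atoms) per formula unit.
    # Weight each Z^m by the element's fractional electron content,
    # then take the m-th root (classic power-law approximation).
    electrons = np.array([z * n for z, n in elements], dtype=float)
    alpha = electrons / electrons.sum()      # fractional electron content
    zs = np.array([z for z, _ in elements], dtype=float)
    return float((alpha * zs**m).sum() ** (1.0 / m))

z_eff_water = z_eff_power_law([(1, 2), (8, 1)])   # H2O
```

    For water this gives Zeff ≈ 7.42, the commonly quoted textbook value; energy-dependent methods such as those in the study refine this by evaluating the weighting at each source's mean photon energy.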

  20. A SPME-based method for rapidly and accurately measuring the characteristic parameter for DEHP emitted from PVC floorings.

    PubMed

    Cao, J; Zhang, X; Little, J C; Zhang, Y

    2017-03-01

    Semivolatile organic compounds (SVOCs) are present in many indoor materials. SVOC emissions can be characterized with a critical parameter, y0, the gas-phase SVOC concentration in equilibrium with the source material. To reduce the required time and improve the accuracy of existing methods for measuring y0, we developed a new method which uses solid-phase microextraction (SPME) to measure the concentration of an SVOC emitted by source material placed in a sealed chamber. Taking one typical indoor SVOC, di-(2-ethylhexyl) phthalate (DEHP), as the example, the experimental time was shortened from several days (even several months) to about 1 day, with relative errors of less than 5%. The measured y0 values agree well with the results obtained by independent methods. The saturated gas-phase concentration (ysat) of DEHP was also measured. Based on the Clausius-Clapeyron equation, a correlation that reveals the effects of temperature, the mass fraction of DEHP in the source material, and ysat on y0 was established. The proposed method together with the correlation should be useful in estimating and controlling human exposure to indoor DEHP. The applicability of the present approach for other SVOCs and other SVOC source materials requires further study. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
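    A Clausius-Clapeyron-type correlation implies ln(y0) = A − B/T, i.e. a straight line in ln(y0) versus 1/T, so measurements at a few temperatures fix the two constants. The sketch below fits that line to synthetic numbers (A, B and the temperatures are invented, not the paper's measured DEHP values).

```python
import numpy as np

# Synthetic y0 data following ln(y0) = A - B/T exactly (invented constants).
T = np.array([298.15, 303.15, 308.15, 313.15])     # temperatures [K]
A_true, B_true = 25.0, 9000.0
y0 = np.exp(A_true - B_true / T)                   # gas-phase conc., arbitrary units

# Linear fit of ln(y0) against 1/T: slope = -B, intercept = A.
coeffs = np.polyfit(1.0 / T, np.log(y0), 1)
B_fit, A_fit = -coeffs[0], coeffs[1]
```

    With real data the scatter about the line, and the dependence of A on the DEHP mass fraction in the source material, would carry the physical content of the correlation reported in the paper.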

  1. The Atmospheric Infrared Sounder- An Overview

    NASA Technical Reports Server (NTRS)

    Lambrigtsen, Bjorn; Fetzer, Eric; Lee, Sung-Yung; Irion, Fredrick; Hearty, Thomas; Gaiser, Steve; Pagano, Thomas; Aumann, Hartmut; Chahine, Moustafa

    2004-01-01

    The Atmospheric Infrared Sounder (AIRS) was launched in May 2002. Along with two companion microwave sensors, it forms the AIRS Sounding Suite. This system is the most advanced atmospheric sounding system to date, with measurement accuracies far surpassing those available on current weather satellites. The data products are calibrated radiances from all three sensors and a number of derived geophysical parameters, including vertical temperature and humidity profiles, surface temperature, cloud fraction, cloud top pressure, and profiles of ozone. These products are generated under cloudy as well as clear conditions. An ongoing calibration validation effort has confirmed that the system is very accurate and stable, and many of the geophysical parameters have been validated. AIRS is in some cases more accurate than any other source and can therefore be difficult to validate, but this offers interesting new research opportunities. The applications for the AIRS products range from numerical weather prediction to atmospheric research, where the AIRS water vapor products near the surface and in the mid to upper troposphere will make it possible to characterize and model phenomena that are key to both short-term atmospheric processes, such as weather patterns, and long-term processes, such as interannual cycles (e.g., El Niño) and climate change.

  2. Wideband RELAX and wideband CLEAN for aeroacoustic imaging

    NASA Astrophysics Data System (ADS)

    Wang, Yanwei; Li, Jian; Stoica, Petre; Sheplak, Mark; Nishida, Toshikazu

    2004-02-01

    Microphone arrays can be used for acoustic source localization and characterization in wind tunnel testing. In this paper, the wideband RELAX (WB-RELAX) and the wideband CLEAN (WB-CLEAN) algorithms are presented for aeroacoustic imaging using an acoustic array. WB-RELAX is a parametric approach that can be used efficiently for point source imaging without the sidelobe problems suffered by the delay-and-sum beamforming approaches. WB-CLEAN does not have sidelobe problems either, but it behaves more like a nonparametric approach and can be used for both point source and distributed source imaging. Moreover, neither of the algorithms suffers from the severe performance degradations encountered by the adaptive beamforming methods when the number of snapshots is small and/or the sources are highly correlated or coherent with each other. A two-step optimization procedure is used to implement the WB-RELAX and WB-CLEAN algorithms efficiently. The performance of WB-RELAX and WB-CLEAN is demonstrated by applying them to measured data obtained at the NASA Langley Quiet Flow Facility using a small aperture directional array (SADA). Somewhat surprisingly, using these approaches, not only were the parameters of the dominant source accurately determined, but a highly correlated multipath of the dominant source was also discovered.
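    The CLEAN family of algorithms mentioned here shares a simple core loop: find the peak of the beamformed ("dirty") map, subtract a scaled copy of the array's point-spread function centered there, and record the subtracted amplitude as a source component. The one-dimensional toy below illustrates that loop only; it is not the wideband WB-CLEAN formulation, and the Gaussian beam and source positions are invented.

```python
import numpy as np

n = 128
half = 16
psf = np.exp(-0.5 * (np.arange(-half, half + 1) / 2.0) ** 2)  # beam, peak 1
truth = np.zeros(n)
truth[30], truth[90] = 1.0, 0.6                # two point sources

dirty = np.convolve(truth, psf, mode="same")   # "dirty" beamformed map
residual = dirty.copy()
clean = np.zeros(n)
gain = 0.3                                     # CLEAN loop gain
for _ in range(60):
    i = int(np.argmax(residual))               # brightest residual pixel
    amp = gain * residual[i]                   # psf peak value is 1
    shifted = np.zeros(n)                      # beam centered at pixel i
    lo, hi = max(0, i - half), min(n, i + half + 1)
    shifted[lo:hi] = psf[lo - i + half : hi - i + half]
    residual -= amp * shifted
    clean[i] += amp
```

    After a few dozen iterations the recorded components recover the two source positions and strengths while the residual map decays toward zero; the loop gain below 1 is what keeps the subtraction stable.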

  3. Wideband RELAX and wideband CLEAN for aeroacoustic imaging.

    PubMed

    Wang, Yanwei; Li, Jian; Stoica, Petre; Sheplak, Mark; Nishida, Toshikazu

    2004-02-01

    Microphone arrays can be used for acoustic source localization and characterization in wind tunnel testing. In this paper, the wideband RELAX (WB-RELAX) and the wideband CLEAN (WB-CLEAN) algorithms are presented for aeroacoustic imaging using an acoustic array. WB-RELAX is a parametric approach that can be used efficiently for point source imaging without the sidelobe problems suffered by the delay-and-sum beamforming approaches. WB-CLEAN does not have sidelobe problems either, but it behaves more like a nonparametric approach and can be used for both point source and distributed source imaging. Moreover, neither of the algorithms suffers from the severe performance degradations encountered by the adaptive beamforming methods when the number of snapshots is small and/or the sources are highly correlated or coherent with each other. A two-step optimization procedure is used to implement the WB-RELAX and WB-CLEAN algorithms efficiently. The performance of WB-RELAX and WB-CLEAN is demonstrated by applying them to measured data obtained at the NASA Langley Quiet Flow Facility using a small aperture directional array (SADA). Somewhat surprisingly, using these approaches, not only were the parameters of the dominant source accurately determined, but a highly correlated multipath of the dominant source was also discovered.

  4. SHIELD: FITGALAXY -- A Software Package for Automatic Aperture Photometry of Extended Sources

    NASA Astrophysics Data System (ADS)

    Marshall, Melissa

    2013-01-01

    Determining the parameters of extended sources, such as galaxies, is a common but time-consuming task. Finding a photometric aperture that encompasses the majority of the flux of a source, and identifying and excluding contaminating objects, is often done by hand, a lengthy and difficult-to-reproduce process. To make extracting information from large data sets both quick and repeatable, I have developed a program called FITGALAXY, written in IDL. This program uses minimal user input to automatically fit an aperture to, and perform aperture and surface photometry on, an extended source. FITGALAXY also automatically traces the outlines of surface brightness thresholds and creates surface brightness profiles, which can then be used to determine the radial properties of a source. Finally, the program performs automatic masking of contaminating sources. Masks and apertures can be applied to multiple images (regardless of the WCS solution or plate scale) in order to accurately measure the same source at different wavelengths. I present the fluxes, as measured by the program, of a selection of galaxies from the Local Volume Legacy Survey. I then compare these results with the fluxes given by Dale et al. (2009) in order to assess the accuracy of FITGALAXY.
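    The two core operations — summing flux inside an elliptical aperture and excluding masked contaminating pixels — reduce to a few array operations on a pixel grid. The toy image, aperture sizes, and contaminant below are hypothetical and are not FITGALAXY itself (which is written in IDL); this is a Python sketch of the same measurement.

```python
import numpy as np

ny, nx = 100, 100
y, x = np.mgrid[0:ny, 0:nx]

# Toy image: an elliptical "galaxy" plus one contaminating point source.
image = np.exp(-(((x - 50) / 12.0) ** 2 + ((y - 50) / 8.0) ** 2))
image[40, 60] += 50.0

def aperture_flux(img, cx, cy, a, b, mask=None):
    # Sum pixel values inside an elliptical aperture, skipping masked pixels.
    inside = ((x - cx) / a) ** 2 + ((y - cy) / b) ** 2 <= 1.0
    if mask is not None:
        inside &= ~mask
    return float(img[inside].sum())

contaminant = np.zeros_like(image, dtype=bool)
contaminant[39:42, 59:62] = True              # mask around the point source

flux_unmasked = aperture_flux(image, 50, 50, 36, 24)
flux_masked = aperture_flux(image, 50, 50, 36, 24, mask=contaminant)
```

    Reusing the same boolean mask and aperture across images at different wavelengths is what makes multi-band measurements of the same source consistent, which is the point of FITGALAXY's mask/aperture reuse.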

  5. Quantification of uncertainties in the tsunami hazard for Cascadia using statistical emulation

    NASA Astrophysics Data System (ADS)

    Guillas, S.; Day, S. J.; Joakim, B.

    2016-12-01

    We present new high-resolution tsunami wave propagation and coastal inundation simulations for the Cascadia region in the Pacific Northwest. The coseismic representation in this analysis is novel, and more realistic than in previous studies, as we jointly parametrize multiple aspects of the seabed deformation. Due to the large computational cost of such simulators, statistical emulation is required in order to carry out uncertainty quantification tasks, as emulators efficiently approximate simulators. The emulator replaces the tsunami model VOLNA with a fast surrogate, so we are able to efficiently propagate uncertainties from the source characteristics to wave heights, in order to probabilistically assess tsunami hazard for Cascadia. We employ a new method for the design of the computer experiments in order to reduce the number of runs while maintaining good approximation properties of the emulator. Out of the initial nine parameters, mostly describing the geometry and time variation of the seabed deformation, we drop two parameters since these turn out not to influence the resulting tsunami waves at the coast. We model the impact of another parameter linearly, as its influence on the wave heights is identified as linear. We combine this screening approach with the sequential design algorithm MICE (Mutual Information for Computer Experiments), which adaptively selects the input values at which to run the computer simulator, in order to maximize the expected information gain (mutual information) over the input space. As a result, the emulation is made possible and accurate. Starting from distributions of the source parameters that encapsulate geophysical knowledge of the possible source characteristics, we derive distributions of the tsunami wave heights along the coastline.
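    The generic emulation idea — run the expensive simulator at a handful of design points, then replace it with a cheap statistical surrogate elsewhere — can be shown with a minimal Gaussian-process posterior mean in one dimension. The stand-in "simulator" and all values below are invented; this is not VOLNA, and the design here is a fixed grid rather than the sequential MICE design.

```python
import numpy as np

def simulator(x):                      # stand-in for an expensive solver
    return np.sin(3.0 * x) + x

X = np.linspace(0.0, 2.0, 10)          # design points (simulator "runs")
y = simulator(X)

def rbf(a, b, ell=0.4):
    # Squared-exponential covariance between two sets of 1D inputs.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

K = rbf(X, X) + 1e-8 * np.eye(len(X))  # jitter for numerical stability
alpha = np.linalg.solve(K, y)

def emulate(x_new):
    # GP posterior mean: cheap to evaluate anywhere in the input range.
    return rbf(np.atleast_1d(x_new), X) @ alpha

x_test = np.array([0.33, 1.17, 1.9])
max_err = float(np.max(np.abs(emulate(x_test) - simulator(x_test))))
```

    Sequential designs such as MICE choose where to place the next design point so that each expensive run is maximally informative, instead of spacing runs uniformly as in this sketch.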

  6. A Compressed Sensing-based Image Reconstruction Algorithm for Solar Flare X-Ray Observations

    NASA Astrophysics Data System (ADS)

    Felix, Simon; Bolzern, Roman; Battaglia, Marina

    2017-11-01

    One way of imaging X-ray emission from solar flares is to measure Fourier components of the spatial X-ray source distribution. We present a new compressed sensing-based algorithm named VIS_CS, which reconstructs the spatial distribution from such Fourier components. We demonstrate the application of the algorithm on synthetic and observed solar flare X-ray data from the Reuven Ramaty High Energy Solar Spectroscopic Imager satellite and compare its performance with existing algorithms. VIS_CS produces competitive results with accurate photometry and morphology, without requiring any algorithm- and X-ray-source-specific parameter tuning. Its robustness and performance make this algorithm ideally suited for the generation of quicklook images or large image cubes without user intervention, such as for imaging spectroscopy analysis.
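    The compressed-sensing step behind such algorithms is the recovery of a sparse source distribution from a subset of its Fourier components; a standard solver for this is ISTA (iterative soft-thresholding). The 1D toy below is only an illustration of that generic step, not the VIS_CS algorithm, and the source positions, sampling pattern, and regularization weight are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 128
x_true = np.zeros(n)
x_true[[10, 40, 90]] = [1.0, 0.7, 0.5]        # sparse "source map"

# Measure a random subset of Fourier components of the source.
F = np.fft.fft(np.eye(n)) / np.sqrt(n)        # unitary DFT matrix
rows = rng.choice(n, size=40, replace=False)
A = F[rows]
b = A @ x_true

# ISTA: gradient step on 0.5*||Ax - b||^2, then complex soft-thresholding.
lam = 0.01                                    # sparsity weight
x = np.zeros(n, dtype=complex)
for _ in range(500):
    z = x - A.conj().T @ (A @ x - b)          # step size 1 (rows orthonormal)
    mag = np.abs(z)
    x = z / np.maximum(mag, 1e-12) * np.maximum(mag - lam, 0.0)

recovered = set(int(i) for i in np.argsort(np.abs(x))[-3:])
```

    With 40 of 128 Fourier components the three sources are recovered at the right positions, with a small amplitude bias from the l1 penalty; 2D visibility data from a Fourier imager is the same problem with images in place of vectors.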

  7. A Compressed Sensing-based Image Reconstruction Algorithm for Solar Flare X-Ray Observations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Felix, Simon; Bolzern, Roman; Battaglia, Marina, E-mail: simon.felix@fhnw.ch, E-mail: roman.bolzern@fhnw.ch, E-mail: marina.battaglia@fhnw.ch

    One way of imaging X-ray emission from solar flares is to measure Fourier components of the spatial X-ray source distribution. We present a new compressed sensing-based algorithm named VIS_CS, which reconstructs the spatial distribution from such Fourier components. We demonstrate the application of the algorithm on synthetic and observed solar flare X-ray data from the Reuven Ramaty High Energy Solar Spectroscopic Imager satellite and compare its performance with existing algorithms. VIS_CS produces competitive results with accurate photometry and morphology, without requiring any algorithm- and X-ray-source-specific parameter tuning. Its robustness and performance make this algorithm ideally suited for the generation of quicklook images or large image cubes without user intervention, such as for imaging spectroscopy analysis.

  8. A modeling approach to predict acoustic nonlinear field generated by a transmitter with an aluminum lens.

    PubMed

    Fan, Tingbo; Liu, Zhenbo; Chen, Tao; Li, Faqi; Zhang, Dong

    2011-09-01

    In this work, the authors propose a modeling approach to compute the nonlinear acoustic field generated by a flat piston transmitter with an attached aluminum lens. In this approach, the geometrical parameters (radius and focal length) of a virtual source are initially determined by Snell's refraction law and then adjusted based on the Rayleigh integral result in the linear case. This virtual source is then used with the nonlinear spheroidal beam equation (SBE) model to predict the nonlinear acoustic field in the focal region. To examine the validity of this approach, the calculated nonlinear result is compared with those from the Westervelt and Khokhlov-Zabolotskaya-Kuznetsov (KZK) equations for a focal intensity of 7 kW/cm². Results indicate that this approach accurately describes the nonlinear acoustic field in the focal region while significantly reducing computation time compared with the Westervelt equation. It might also be applicable to the widely used concave focused transmitter with a large aperture angle.
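    The first step described above, fixing the virtual source geometry from Snell's law, can be illustrated with a paraxial ray trace through the curved lens surface. The sound speeds and lens radius below are nominal textbook values, not the authors' parameters.

```python
import numpy as np

C_AL = 6320.0   # nominal longitudinal sound speed in aluminium, m/s
C_W = 1482.0    # nominal sound speed in water, m/s

def axis_crossing(h, R, c_lens=C_AL, c_med=C_W):
    """Axial distance at which a refracted ray crosses the axis.

    A plane wave inside the lens hits a concave spherical surface of
    radius R at height h; Snell's law gives the refracted direction.
    """
    alpha = np.arcsin(h / R)                              # incidence angle vs. normal
    theta2 = np.arcsin(np.sin(alpha) * c_med / c_lens)    # Snell's law
    return h / np.tan(alpha - theta2)                     # ray converges to the axis

# Paraxial (thin-lens) focal length for comparison: f = R / (1 - c_med/c_lens)
```

Rays far from the axis cross closer to the lens than the paraxial focus (spherical aberration), which is one reason the virtual-source geometry is refined against the Rayleigh integral rather than taken from Snell's law alone.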

  9. Energy and time modelling of kerbside waste collection: Changes incurred when adding source separated food waste.

    PubMed

    Edwards, Joel; Othman, Maazuza; Burn, Stewart; Crossin, Enda

    2016-10-01

    The collection of source separated kerbside municipal food waste (SSFW) is being incentivised in Australia; however, such a collection is likely to increase the fuel and time a collection truck fleet requires, so waste managers need to determine whether the incentives outweigh the cost. With literature scarcely describing the magnitude of the increase, and with local parameters playing a crucial role in accurately modelling kerbside collection, this paper develops a new general mathematical model that predicts the energy and time requirements of a collection regime whilst incorporating the unique variables of different jurisdictions. The model, Municipal solid waste collect (MSW-Collect), is validated and shown to be more accurate at predicting fuel consumption and trucks required than other common collection models. When predicting changes incurred for five different SSFW collection scenarios, results show that SSFW scenarios require an increase in fuel ranging from 1.38% to 57.59%. There is also a need for additional trucks across most SSFW scenarios tested. All SSFW scenarios are ranked and analysed with respect to fuel consumption, and sensitivity analysis is conducted to test key assumptions. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.
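    A toy version of such an energy-and-time collection model might look as follows; every parameter value is hypothetical and the structure is far simpler than MSW-Collect, but it shows why adding a separate SSFW stream increases both fuel and truck trips.

```python
from math import ceil

def collection_estimate(households, set_out_rate, kg_per_lift,
                        truck_payload_kg, fuel_per_lift_l, fuel_per_km_l,
                        km_route, s_per_lift, depot_trip_km, depot_trip_s):
    """Toy fuel/time/trip estimate for one kerbside collection stream."""
    lifts = households * set_out_rate                     # bins actually presented
    loads = ceil(lifts * kg_per_lift / truck_payload_kg)  # trips needed to unload
    fuel = (lifts * fuel_per_lift_l
            + (km_route + loads * depot_trip_km) * fuel_per_km_l)
    time_h = (lifts * s_per_lift + loads * depot_trip_s) / 3600.0
    return fuel, time_h, loads
```

Because a separate SSFW stream repeats the stop-start lifting and adds unloading trips of its own, its fuel and time costs add on top of the existing garbage run rather than substituting for it.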

  10. Bowhead whale localization using asynchronous hydrophones in the Chukchi Sea.

    PubMed

    Warner, Graham A; Dosso, Stan E; Hannay, David E; Dettmer, Jan

    2016-07-01

    This paper estimates bowhead whale locations and uncertainties using non-linear Bayesian inversion of their modally-dispersed calls recorded on asynchronous recorders in the Chukchi Sea, Alaska. Bowhead calls were recorded on a cluster of 7 asynchronous ocean-bottom hydrophones that were separated by 0.5-9.2 km. A warping time-frequency analysis is used to extract relative mode arrival times as a function of frequency for nine frequency-modulated whale calls that dispersed in the shallow water environment. Each call was recorded on multiple hydrophones and the mode arrival times are inverted for: the whale location in the horizontal plane, source instantaneous frequency (IF), water sound-speed profile, seabed geoacoustic parameters, relative recorder clock drifts, and residual error standard deviations, all with estimated uncertainties. A simulation study shows that accurate prior environmental knowledge is not required for accurate localization as long as the inversion treats the environment as unknown. Joint inversion of multiple recorded calls is shown to substantially reduce uncertainties in location, source IF, and relative clock drift. Whale location uncertainties are estimated to be 30-160 m and relative clock drift uncertainties are 3-26 ms.
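    The key trick above is that modal dispersion measured on a single recorder is immune to clock offsets between recorders. A minimal sketch, assuming just two modes with known group speeds and a grid-search multilateration (far simpler than the paper's Bayesian inversion, and with invented geometry and speeds):

```python
import numpy as np

def ranges_from_dispersion(dt, v1, v2):
    """Range from the arrival-time difference between two modes.

    dt is measured on a single clock, so asynchronous recorder
    offsets cancel out of the difference.
    """
    return dt / (1.0 / v1 - 1.0 / v2)

def localize(hydrophones, ranges, grid):
    """Grid search for the source position minimizing range misfit."""
    costs = [np.sum((np.linalg.norm(hydrophones - p, axis=1) - ranges) ** 2)
             for p in grid]
    return grid[int(np.argmin(costs))]
```

Each hydrophone contributes a range circle, and the intersection of the circles fixes the whale position without any cross-recorder time synchronization.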

  11. A New Tool for CME Arrival Time Prediction using Machine Learning Algorithms: CAT-PUMA

    NASA Astrophysics Data System (ADS)

    Liu, Jiajia; Ye, Yudong; Shen, Chenglong; Wang, Yuming; Erdélyi, Robert

    2018-03-01

    Coronal mass ejections (CMEs) are arguably the most violent eruptions in the solar system. CMEs can cause severe disturbances in interplanetary space and can even affect human activities in many aspects, causing damage to infrastructure and loss of revenue. Fast and accurate prediction of CME arrival time is vital to minimize the disruption that CMEs may cause when interacting with geospace. In this paper, we propose a new approach for partial-/full halo CME Arrival Time Prediction Using Machine learning Algorithms (CAT-PUMA). Via detailed analysis of the CME features and solar-wind parameters, we build a prediction engine taking advantage of 182 previously observed geo-effective partial-/full halo CMEs and using algorithms of the Support Vector Machine. We demonstrate that CAT-PUMA is accurate and fast. In particular, predictions made after applying CAT-PUMA to a test set unknown to the engine show a mean absolute prediction error of ∼5.9 hr in the CME arrival time, with 54% of the predictions having absolute errors less than 5.9 hr. Comparisons with other models reveal that CAT-PUMA provides more accurate predictions for 77% of the events investigated and can be run very quickly, i.e., within minutes of providing the necessary input parameters of a CME. A practical guide containing the CAT-PUMA engine and the source code of two examples is available in the Appendix, allowing the community to perform their own predictions using CAT-PUMA.
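    The engine above is built on Support Vector Machine algorithms. As a stand-in, here is a minimal linear epsilon-insensitive support-vector regressor fitted by subgradient descent; the features, hyperparameters, and training data are all illustrative, not CAT-PUMA's.

```python
import numpy as np

def fit_linear_svr(X, y, eps=0.1, lam=1e-4, lr=0.05, n_iter=5000):
    """Linear epsilon-insensitive SVR fitted by subgradient descent.

    Minimizes lam/2 * ||w||^2 + mean(max(|y - (Xw + b)| - eps, 0)).
    """
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(n_iter):
        resid = X @ w + b - y
        # subgradient of the epsilon-insensitive loss: zero inside the tube
        s = np.where(np.abs(resid) > eps, np.sign(resid), 0.0)
        w -= lr * (lam * w + X.T @ s / n)
        b -= lr * s.mean()
    return w, b
```

Points inside the epsilon tube contribute nothing to the fit, which is the defining property that separates SVR from ordinary least squares.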

  12. The impact of 14-nm photomask uncertainties on computational lithography solutions

    NASA Astrophysics Data System (ADS)

    Sturtevant, John; Tejnil, Edita; Lin, Tim; Schultze, Steffen; Buck, Peter; Kalk, Franklin; Nakagawa, Kent; Ning, Guoxiang; Ackmann, Paul; Gans, Fritz; Buergel, Christian

    2013-04-01

    Computational lithography solutions rely upon accurate process models to faithfully represent the imaging system output for a defined set of process and design inputs. These models, which must balance accuracy demands with simulation runtime boundary conditions, rely upon the accurate representation of multiple parameters associated with the scanner and the photomask. While certain system input variables, such as scanner numerical aperture, can be empirically tuned to wafer CD data over a small range around the presumed set point, it can be dangerous to do so since CD errors can alias across multiple input variables. Therefore, many input variables for simulation are based upon designed or recipe-requested values or independent measurements. It is known, however, that certain measurement methodologies, while precise, can have significant inaccuracies. Additionally, there are known errors associated with the representation of certain system parameters. With shrinking total CD control budgets, appropriate accounting for all sources of error becomes more important, and the cumulative consequence of input errors to the computational lithography model can become significant. In this work, we examine, with a simulation sensitivity study, the impact of errors in the representation of photomask properties including CD bias, corner rounding, refractive index, thickness, and sidewall angle. The factors that are most critical to represent accurately in the model are cataloged. CD bias values are based on state-of-the-art mask manufacturing data, while the changes in the other variables are estimated, highlighting the need for improved metrology and awareness.
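    A simulation sensitivity study of this kind can be sketched as a one-at-a-time error budget. The sensitivity weights below are invented for illustration; only the list of photomask properties comes from the text.

```python
import numpy as np

# Hypothetical linearized model: wafer CD error (nm) as a weighted sum of
# photomask parameter deviations. Weights are illustrative, not measured.
SENSITIVITY = {  # d(CD)/d(param), nm per unit of each parameter
    "mask_cd_bias_nm": 0.25,
    "corner_rounding_nm": 0.04,
    "refractive_index": 35.0,   # per unit change of n
    "thickness_nm": 0.02,
    "sidewall_angle_deg": 0.6,
}

def cd_error_budget(deviations):
    """One-at-a-time contribution of each photomask error to wafer CD."""
    contrib = {k: SENSITIVITY[k] * deviations[k] for k in deviations}
    rss = np.sqrt(sum(v * v for v in contrib.values()))  # root-sum-square total
    return contrib, rss
```

Ranking the individual contributions against the root-sum-square total is what identifies which mask parameters most need accurate representation in the model.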

  13. Image Processing Diagnostics: Emphysema

    NASA Astrophysics Data System (ADS)

    McKenzie, Alex

    2009-10-01

    Currently the computerized tomography (CT) scan can detect emphysema sooner than traditional x-rays, but other tests are required to measure more accurately the amount of affected lung. CT scan images show clearly whether a patient has emphysema, but visual inspection alone cannot quantify the degree of the disease, as it appears merely as subtle, barely distinct dark spots on the lung. Our goal is to create a software plug-in that interfaces with existing open source medical imaging software to automate the process of accurately diagnosing and determining emphysema severity levels in patients. This will be accomplished by performing a number of statistical calculations using data taken from CT scan images of several patients representing a wide range of severity of the disease. These analyses include an examination of the deviation from a normal distribution curve to determine skewness, a commonly used statistical parameter. Our preliminary results show that this method of assessment appears to be more accurate and robust than currently utilized methods, which involve looking at percentages of radiodensities in air passages of the lung.
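    The skewness statistic mentioned above is straightforward to compute from a voxel-density histogram; the sketch below uses synthetic data rather than CT values.

```python
import numpy as np

def skewness(x):
    """Sample skewness: third central moment over the cubed standard deviation."""
    x = np.asarray(x, dtype=float)
    m = x.mean()
    s = x.std()
    return np.mean((x - m) ** 3) / s ** 3
```

A symmetric density histogram gives skewness near zero, while a long tail of low-attenuation (emphysematous) voxels pulls the statistic away from zero, which is what makes it a candidate severity marker.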

  14. Limitations of current dosimetry for intracavitary accelerated partial breast irradiation with high dose rate iridium-192 and electronic brachytherapy sources

    NASA Astrophysics Data System (ADS)

    Raffi, Julie A.

    Intracavitary accelerated partial breast irradiation (APBI) is a method of treating early stage breast cancer using a high dose rate (HDR) brachytherapy source positioned within the lumpectomy cavity. An expandable applicator stretches the surrounding tissue into a roughly spherical or elliptical shape and the dose is prescribed to 1 cm beyond the edge of the cavity. Currently, dosimetry for these treatments is most often performed using the American Association of Physicists in Medicine Task Group No. 43 (TG-43) formalism. The TG-43 dose-rate equation determines the dose delivered to a homogeneous water medium by scaling the measured source strength with standardized parameters that describe the radial and angular features of the dose distribution. Since TG-43 parameters for each source model are measured or calculated in a homogeneous water medium, the dosimetric effects of the patient's dimensions and composition are not accounted for. Therefore, the accuracy of TG-43 calculations for intracavitary APBI is limited by the presence of inhomogeneities in and around the target volume. Specifically, the breast is smaller than the phantoms used to determine TG-43 parameters and is surrounded by air, ribs, and lung tissue. Also, the composition of the breast tissue itself can affect the dose distribution. This dissertation is focused on investigating the limitations of TG-43 dosimetry for intracavitary APBI for two HDR brachytherapy sources: the VariSource™ VS2000 192Ir source and the Axxent™ miniature x-ray source. The dose for various conditions was determined using thermoluminescent dosimeters (TLDs) and Monte Carlo (MC) calculations. Accurate measurements and calculations were achieved through the implementation of new measurement and simulation techniques and a novel breast phantom was developed to enable anthropomorphic phantom measurements.
Measured and calculated doses for phantom and patient geometries were compared with TG-43 calculated doses to illustrate the limitations of TG-43 dosimetry for intracavitary APBI. TG-43 dose calculations overestimate the dose for regions approaching the lung and breast surface and underestimate the dose for regions in and beyond less-attenuating media such as lung tissue, and for lower energies, breast tissue as well.
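    The TG-43 dose-rate equation referred to above has a simple point-source form, D(r) = S_K · Λ · (r0/r)² · g(r), with the anisotropy factor omitted here for brevity. The sketch below is that textbook formula with illustrative numbers, not the dissertation's measured data.

```python
def tg43_point_dose_rate(sk, dose_rate_const, r, g_r, r0=1.0):
    """TG-43 dose rate under the point-source approximation.

    sk              : air-kerma strength S_K (U = cGy cm^2 / h)
    dose_rate_const : dose-rate constant Lambda (cGy / (h U))
    r               : distance from the source (cm)
    g_r             : radial dose function g(r), dimensionless with g(r0) = 1
    r0              : reference distance, conventionally 1 cm
    """
    return sk * dose_rate_const * (r0 / r) ** 2 * g_r
```

Because every factor is tabulated in homogeneous water, the formula cannot respond to nearby lung, ribs, or the breast surface, which is exactly the limitation the dissertation quantifies.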

  15. Properties of the Binary Black Hole Merger GW150914

    NASA Astrophysics Data System (ADS)

    Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwal, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Ajith, P.; Allen, B.; Allocca, A.; Altin, P. A.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Araya, M. C.; Arceneaux, C. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Babak, S.; Bacon, P.; Bader, M. K. M.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; Barclay, S. E.; Barish, B. C.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Bazzan, M.; Behnke, B.; Bejger, M.; Bell, A. S.; Bell, C. J.; Berger, B. K.; Bergman, J.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Birney, R.; Birnholtz, O.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, S.; Bock, O.; Bodiya, T. P.; Boer, M.; Bogaert, G.; Bogan, C.; Bohe, A.; Bojtos, P.; Bond, C.; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brown, N. M.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Calderón Bustillo, J.; Callister, T.; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, J.; Capano, C. 
D.; Capocasa, E.; Carbognani, F.; Caride, S.; Casanueva Diaz, J.; Casentini, C.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. B.; Carbon Baiardi, L.; Cerretani, G.; Cesarini, E.; Chakraborty, R.; Chalermsongsak, T.; Chamberlin, S. J.; Chan, M.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Chen, H. Y.; Chen, Y.; Cheng, C.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, S.; Chung, S.; Ciani, G.; Clara, F.; Clark, J. A.; Cleva, F.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Collette, C. G.; Cominsky, L.; Constancio, M.; Conte, A.; Conti, L.; Cook, D.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, C. A.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J.-P.; Countryman, S. T.; Couvares, P.; Cowan, E. E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; Craig, K.; Creighton, J. D. E.; Cripe, J.; Crowder, S. G.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Darman, N. S.; Dattilo, V.; Dave, I.; Daveloza, H. P.; Davier, M.; Davies, G. S.; Daw, E. J.; Day, R.; DeBra, D.; Debreczeni, G.; Degallaix, J.; De Laurentis, M.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dereli, H.; Dergachev, V.; De Rosa, R.; DeRosa, R. T.; DeSalvo, R.; Devine, C.; Dhurandhar, S.; Díaz, M. C.; Di Fiore, L.; Di Giovanni, M.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Virgilio, A.; Dojcinoski, G.; Dolique, V.; Donovan, F.; Dooley, K. L.; Doravari, S.; Douglas, R.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Ducrot, M.; Dwyer, S. E.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H.-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Engels, W.; Essick, R. C.; Etienne, Z.; Etzel, T.; Evans, M.; Evans, T. M.; Everett, R.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.; Fang, Q.; Farinon, S.; Farr, B.; Farr, W. M.; Fauchon-Jones, E.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. 
M.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. P.; Flaminio, R.; Fletcher, M.; Fournier, J.-D.; Franco, S.; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fricke, T. T.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H. A. G.; Gaebel, S. M.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. G.; Garufi, F.; Gatto, A.; Gaur, G.; Gehrels, N.; Gemme, G.; Gendre, B.; Genin, E.; Gennai, A.; George, J.; Gergely, L.; Germain, V.; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.; Glaefke, A.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gonzalez Castro, J. M.; Gopakumar, A.; Gordon, N. A.; Gorodetsky, M. L.; Gossan, S. E.; Gosselin, M.; Gouaty, R.; Graef, C.; Graff, P. B.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Hacker, J. J.; Hall, B. R.; Hall, E. D.; Hammond, G.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hanson, J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Healy, J.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hodge, K. A.; Hofman, D.; Hollitt, S. E.; Holt, K.; Holz, D. E.; Hopkins, P.; Hosken, D. J.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huang, S.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Idrisy, A.; Indik, N.; Ingram, D. R.; Inta, R.; Isa, H. N.; Isac, J.-M.; Isi, M.; Islas, G.; Isogai, T.; Iyer, B. R.; Izumi, K.; Jacqmin, T.; Jang, H.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jiménez-Forteza, F.; Johnson, W. W.; Johnson-McDaniel, N. K.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; K, Haris; Kalaghatgi, C. 
V.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Karki, S.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kawazoe, F.; Kéfélian, F.; Kehl, M. S.; Keitel, D.; Kelley, D. B.; Kells, W.; Kennedy, R.; Key, J. S.; Khalaidovski, A.; Khalili, F. Y.; Khan, I.; Khan, S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, C.; Kim, J.; Kim, K.; Kim, Nam-Gyu; Kim, Namjun; Kim, Y.-M.; King, E. J.; King, P. J.; Kinzel, D. L.; Kissel, J. S.; Kleybolte, L.; Klimenko, S.; Koehlenbeck, S. M.; Kokeyama, K.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Kringel, V.; Krishnan, B.; Królak, A.; Krueger, C.; Kuehn, G.; Kumar, P.; Kuo, L.; Kutynia, A.; Lackey, B. D.; Landry, M.; Lange, J.; Lantz, B.; Lasky, P. D.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lebigot, E. O.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Lee, K.; Lenon, A.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levin, Y.; Levine, B. M.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Lockerbie, N. A.; Logue, J.; Lombardi, A. L.; London, L. T.; Lord, J. E.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lousto, C. O.; Lovelace, G.; Lück, H.; Lundgren, A. P.; Luo, J.; Lynch, R.; Ma, Y.; MacDonald, T.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magaña-Sandoval, F.; Magee, R. M.; Mageswaran, M.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandel, I.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martin, R. M.; Martynov, D. V.; Marx, J. N.; Mason, K.; Masserot, A.; Massinger, T. J.; Masso-Reid, M.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; Mazzolo, G.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McWilliams, S. T.; Meacher, D.; Meadors, G. 
D.; Meidam, J.; Melatos, A.; Mendell, G.; Mendoza-Gandara, D.; Mercer, R. A.; Merilh, E.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; Mikhailov, E. E.; Milano, L.; Miller, J.; Millhouse, M.; Minenkov, Y.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B. C.; Moore, C. J.; Moraru, D.; Moreno, G.; Morriss, S. R.; Mossavi, K.; Mours, B.; Mow-Lowry, C. M.; Mueller, C. L.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Murphy, D. J.; Murray, P. G.; Mytidis, A.; Nardecchia, I.; Naticchioni, L.; Nayak, R. K.; Necula, V.; Nedkova, K.; Nelemans, G.; Neri, M.; Neunzert, A.; Newton, G.; Nguyen, T. T.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Nocera, F.; Nolting, D.; Normandin, M. E.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; O'Dell, J.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Oliver, M.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; O'Shaughnessy, R.; Ottaway, D. J.; Ottens, R. S.; Overmier, H.; Owen, B. J.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pan, Y.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Paris, H. R.; Parker, W.; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Patrick, Z.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perreca, A.; Pfeiffer, H. P.; Phelps, M.; Piccinni, O.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poggiani, R.; Popolizio, P.; Post, A.; Powell, J.; Prasad, J.; Predoi, V.; Premachandra, S. S.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prodi, G. 
A.; Prokhorov, L.; Puncken, O.; Punturo, M.; Puppo, P.; Pürrer, M.; Qi, H.; Qin, J.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raja, S.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Read, J.; Reed, C. M.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. D.; Ricci, F.; Riles, K.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, R.; Romanov, G.; Romie, J. H.; Rosińska, D.; Röver, C.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Sachdev, S.; Sadecki, T.; Sadeghian, L.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sanchez, E. J.; Sandberg, V.; Sandeen, B.; Sanders, J. R.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Sauter, O.; Savage, R. L.; Sawadsky, A.; Schale, P.; Schilling, R.; Schmidt, J.; Schmidt, P.; Schnabel, R.; Schofield, R. M. S.; Schönbeck, A.; Schreiber, E.; Schuette, D.; Schutz, B. F.; Scott, J.; Scott, S. M.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Serna, G.; Setyawati, Y.; Sevigny, A.; Shaddock, D. A.; Shah, S.; Shahriar, M. S.; Shaltev, M.; Shao, Z.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sigg, D.; Silva, A. D.; Simakov, D.; Singer, A.; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, J. R.; Smith, N. D.; Smith, R. J. E.; Son, E. J.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stevenson, S. P.; Stone, R.; Strain, K. A.; Straniero, N.; Stratta, G.; Strauss, N. A.; Strigin, S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sutton, P. J.; Swinkels, B. L.; Szczepańczyk, M. J.; Tacca, M.; Talukder, D.; Tanner, D. B.; Tápai, M.; Tarabrin, S. P.; Taracchini, A.; Taylor, R.; Theeg, T.; Thirugnanasambandam, M. P.; Thomas, E. 
G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Tomlinson, C.; Tonelli, M.; Torres, C. V.; Torrie, C. I.; Töyrä, D.; Travasso, F.; Traylor, G.; Trifirò, D.; Tringali, M. C.; Trozzo, L.; Tse, M.; Turconi, M.; Tuyenbayev, D.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlbruch, H.; Vajente, G.; Valdes, G.; van Bakel, N.; van Beuzekom, M.; van den Brand, J. F. J.; Van Den Broeck, C.; Vander-Hyde, D. C.; van der Schaaf, L.; van der Sluys, M. V.; van Heijningen, J. V.; Vañó-Viñuales, A.; van Veggel, A. A.; Vardaro, M.; Vass, S.; Vasúth, M.; Vaulin, R.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Verkindt, D.; Vetrano, F.; Viceré, A.; Vinciguerra, S.; Vine, D. J.; Vinet, J.-Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; Wade, L. E.; Wade, M.; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, M.; Wang, X.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Welborn, T.; Wen, L.; Weßels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; White, D. J.; Whiting, B. F.; Williams, R. D.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Worden, J.; Wright, J. L.; Wu, G.; Yablon, J.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yap, M. J.; Yu, H.; Yvert, M.; ZadroŻny, A.; Zangrando, L.; Zanolin, M.; Zendri, J.-P.; Zevin, M.; Zhang, F.; Zhang, L.; Zhang, M.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, X. J.; Zucker, M. E.; Zuraw, S. E.; Zweizig, J.; Boyle, M.; Brügamin, B.; Campanelli, M.; Clark, M.; Hamberger, D.; Kidder, L. E.; Kinsey, M.; Laguna, P.; Ossokine, S.; Scheel, M. A.; Szilagyi, B.; Teukolsky, S.; Zlochower, Y.; LIGO Scientific Collaboration; Virgo Collaboration

    2016-06-01

    On September 14, 2015, the Laser Interferometer Gravitational-Wave Observatory (LIGO) detected a gravitational-wave transient (GW150914); we characterize the properties of the source and its parameters. The data around the time of the event were analyzed coherently across the LIGO network using a suite of accurate waveform models that describe gravitational waves from a compact binary system in general relativity. GW150914 was produced by a nearly equal mass binary black hole of masses 36(+5/-4) M⊙ and 29(+4/-4) M⊙; for each parameter we report the median value and the range of the 90% credible interval. The dimensionless spin magnitude of the more massive black hole is bound to be <0.7 (at 90% probability). The luminosity distance to the source is 410(+160/-180) Mpc, corresponding to a redshift 0.09(+0.03/-0.04) assuming standard cosmology. The source location is constrained to an annulus section of 610 deg², primarily in the southern hemisphere. The binary merges into a black hole of mass 62(+4/-4) M⊙ and spin 0.67(+0.05/-0.07). This black hole is significantly more massive than any other inferred from electromagnetic observations in the stellar-mass regime.
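    The reporting convention used above, a median with a 90% credible interval, is easy to reproduce from posterior samples. The samples below are a toy Gaussian posterior, not LIGO data.

```python
import numpy as np

def median_and_90ci(samples):
    """Median and asymmetric 90% credible interval from posterior samples."""
    lo, med, hi = np.percentile(samples, [5.0, 50.0, 95.0])
    return med, med - lo, hi - med   # central value, minus error, plus error

rng = np.random.default_rng(1)
mass_samples = rng.normal(36.0, 3.0, 100000)  # toy posterior for illustration
```

The quoted plus/minus errors are simply the distances from the median to the 5th and 95th percentiles, so they need not be symmetric for skewed posteriors.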

  16. Properties of the Binary Black Hole Merger GW150914

    NASA Technical Reports Server (NTRS)

    Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Camp, J. B.

    2016-01-01

    On September 14, 2015, the Laser Interferometer Gravitational-Wave Observatory (LIGO) detected a gravitational-wave transient (GW150914); we characterize the properties of the source and its parameters. The data around the time of the event were analyzed coherently across the LIGO network using a suite of accurate waveform models that describe gravitational waves from a compact binary system in general relativity. GW150914 was produced by a nearly equal mass binary black hole of masses 36(+5/-4) solar mass and 29(+4/-4) solar mass; for each parameter we report the median value and the range of the 90% credible interval. The dimensionless spin magnitude of the more massive black hole is bound to be less than 0.7 (at 90% probability). The luminosity distance to the source is 410(+160/-180) Mpc, corresponding to a redshift 0.09(+0.03/-0.04) assuming standard cosmology. The source location is constrained to an annulus section of 610 sq deg, primarily in the southern hemisphere. The binary merges into a black hole of mass 62(+4/-4) solar mass and spin 0.67(+0.05/-0.07). This black hole is significantly more massive than any other inferred from electromagnetic observations in the stellar-mass regime.

  17. Properties of the Binary Black Hole Merger GW150914.

    PubMed

    Abbott, B P; Abbott, R; Abbott, T D; Abernathy, M R; Acernese, F; Ackley, K; Adams, C; Adams, T; Addesso, P; Adhikari, R X; Adya, V B; Affeldt, C; Agathos, M; Agatsuma, K; Aggarwal, N; Aguiar, O D; Aiello, L; Ain, A; Ajith, P; Allen, B; Allocca, A; Altin, P A; Anderson, S B; Anderson, W G; Arai, K; Araya, M C; Arceneaux, C C; Areeda, J S; Arnaud, N; Arun, K G; Ascenzi, S; Ashton, G; Ast, M; Aston, S M; Astone, P; Aufmuth, P; Aulbert, C; Babak, S; Bacon, P; Bader, M K M; Baker, P T; Baldaccini, F; Ballardin, G; Ballmer, S W; Barayoga, J C; Barclay, S E; Barish, B C; Barker, D; Barone, F; Barr, B; Barsotti, L; Barsuglia, M; Barta, D; Bartlett, J; Bartos, I; Bassiri, R; Basti, A; Batch, J C; Baune, C; Bavigadda, V; Bazzan, M; Behnke, B; Bejger, M; Bell, A S; Bell, C J; Berger, B K; Bergman, J; Bergmann, G; Berry, C P L; Bersanetti, D; Bertolini, A; Betzwieser, J; Bhagwat, S; Bhandare, R; Bilenko, I A; Billingsley, G; Birch, J; Birney, R; Birnholtz, O; Biscans, S; Bisht, A; Bitossi, M; Biwer, C; Bizouard, M A; Blackburn, J K; Blair, C D; Blair, D G; Blair, R M; Bloemen, S; Bock, O; Bodiya, T P; Boer, M; Bogaert, G; Bogan, C; Bohe, A; Bojtos, P; Bond, C; Bondu, F; Bonnand, R; Boom, B A; Bork, R; Boschi, V; Bose, S; Bouffanais, Y; Bozzi, A; Bradaschia, C; Brady, P R; Braginsky, V B; Branchesi, M; Brau, J E; Briant, T; Brillet, A; Brinkmann, M; Brisson, V; Brockill, P; Brooks, A F; Brown, D A; Brown, D D; Brown, N M; Buchanan, C C; Buikema, A; Bulik, T; Bulten, H J; Buonanno, A; Buskulic, D; Buy, C; Byer, R L; Cadonati, L; Cagnoli, G; Cahillane, C; Calderón Bustillo, J; Callister, T; Calloni, E; Camp, J B; Cannon, K C; Cao, J; Capano, C D; Capocasa, E; Carbognani, F; Caride, S; Casanueva Diaz, J; Casentini, C; Caudill, S; Cavaglià, M; Cavalier, F; Cavalieri, R; Cella, G; Cepeda, C B; Cerboni Baiardi, L; Cerretani, G; Cesarini, E; Chakraborty, R; Chalermsongsak, T; Chamberlin, S J; Chan, M; Chao, S; Charlton, P; Chassande-Mottin, E; Chen, H Y; Chen, Y; Cheng, C; 
Chincarini, A; Chiummo, A; Cho, H S; Cho, M; Chow, J H; Christensen, N; Chu, Q; Chua, S; Chung, S; Ciani, G; Clara, F; Clark, J A; Cleva, F; Coccia, E; Cohadon, P-F; Colla, A; Collette, C G; Cominsky, L; Constancio, M; Conte, A; Conti, L; Cook, D; Corbitt, T R; Cornish, N; Corsi, A; Cortese, S; Costa, C A; Coughlin, M W; Coughlin, S B; Coulon, J-P; Countryman, S T; Couvares, P; Cowan, E E; Coward, D M; Cowart, M J; Coyne, D C; Coyne, R; Craig, K; Creighton, J D E; Cripe, J; Crowder, S G; Cumming, A; Cunningham, L; Cuoco, E; Dal Canton, T; Danilishin, S L; D'Antonio, S; Danzmann, K; Darman, N S; Dattilo, V; Dave, I; Daveloza, H P; Davier, M; Davies, G S; Daw, E J; Day, R; DeBra, D; Debreczeni, G; Degallaix, J; De Laurentis, M; Deléglise, S; Del Pozzo, W; Denker, T; Dent, T; Dereli, H; Dergachev, V; De Rosa, R; DeRosa, R T; DeSalvo, R; Devine, C; Dhurandhar, S; Díaz, M C; Di Fiore, L; Di Giovanni, M; Di Lieto, A; Di Pace, S; Di Palma, I; Di Virgilio, A; Dojcinoski, G; Dolique, V; Donovan, F; Dooley, K L; Doravari, S; Douglas, R; Downes, T P; Drago, M; Drever, R W P; Driggers, J C; Du, Z; Ducrot, M; Dwyer, S E; Edo, T B; Edwards, M C; Effler, A; Eggenstein, H-B; Ehrens, P; Eichholz, J; Eikenberry, S S; Engels, W; Essick, R C; Etienne, Z; Etzel, T; Evans, M; Evans, T M; Everett, R; Factourovich, M; Fafone, V; Fair, H; Fairhurst, S; Fan, X; Fang, Q; Farinon, S; Farr, B; Farr, W M; Fauchon-Jones, E; Favata, M; Fays, M; Fehrmann, H; Fejer, M M; Ferrante, I; Ferreira, E C; Ferrini, F; Fidecaro, F; Fiori, I; Fiorucci, D; Fisher, R P; Flaminio, R; Fletcher, M; Fournier, J-D; Franco, S; Frasca, S; Frasconi, F; Frei, Z; Freise, A; Frey, R; Frey, V; Fricke, T T; Fritschel, P; Frolov, V V; Fulda, P; Fyffe, M; Gabbard, H A G; Gaebel, S M; Gair, J R; Gammaitoni, L; Gaonkar, S G; Garufi, F; Gatto, A; Gaur, G; Gehrels, N; Gemme, G; Gendre, B; Genin, E; Gennai, A; George, J; Gergely, L; Germain, V; Ghosh, Archisman; Ghosh, S; Giaime, J A; Giardina, K D; Giazotto, A; Gill, K; Glaefke, 
A; Goetz, E; Goetz, R; Gondan, L; González, G; Gonzalez Castro, J M; Gopakumar, A; Gordon, N A; Gorodetsky, M L; Gossan, S E; Gosselin, M; Gouaty, R; Graef, C; Graff, P B; Granata, M; Grant, A; Gras, S; Gray, C; Greco, G; Green, A C; Groot, P; Grote, H; Grunewald, S; Guidi, G M; Guo, X; Gupta, A; Gupta, M K; Gushwa, K E; Gustafson, E K; Gustafson, R; Hacker, J J; Hall, B R; Hall, E D; Hammond, G; Haney, M; Hanke, M M; Hanks, J; Hanna, C; Hannam, M D; Hanson, J; Hardwick, T; Harms, J; Harry, G M; Harry, I W; Hart, M J; Hartman, M T; Haster, C-J; Haughian, K; Healy, J; Heidmann, A; Heintze, M C; Heitmann, H; Hello, P; Hemming, G; Hendry, M; Heng, I S; Hennig, J; Heptonstall, A W; Heurs, M; Hild, S; Hoak, D; Hodge, K A; Hofman, D; Hollitt, S E; Holt, K; Holz, D E; Hopkins, P; Hosken, D J; Hough, J; Houston, E A; Howell, E J; Hu, Y M; Huang, S; Huerta, E A; Huet, D; Hughey, B; Husa, S; Huttner, S H; Huynh-Dinh, T; Idrisy, A; Indik, N; Ingram, D R; Inta, R; Isa, H N; Isac, J-M; Isi, M; Islas, G; Isogai, T; Iyer, B R; Izumi, K; Jacqmin, T; Jang, H; Jani, K; Jaranowski, P; Jawahar, S; Jiménez-Forteza, F; Johnson, W W; Johnson-McDaniel, N K; Jones, D I; Jones, R; Jonker, R J G; Ju, L; K, Haris; Kalaghatgi, C V; Kalogera, V; Kandhasamy, S; Kang, G; Kanner, J B; Karki, S; Kasprzack, M; Katsavounidis, E; Katzman, W; Kaufer, S; Kaur, T; Kawabe, K; Kawazoe, F; Kéfélian, F; Kehl, M S; Keitel, D; Kelley, D B; Kells, W; Kennedy, R; Key, J S; Khalaidovski, A; Khalili, F Y; Khan, I; Khan, S; Khan, Z; Khazanov, E A; Kijbunchoo, N; Kim, C; Kim, J; Kim, K; Kim, Nam-Gyu; Kim, Namjun; Kim, Y-M; King, E J; King, P J; Kinzel, D L; Kissel, J S; Kleybolte, L; Klimenko, S; Koehlenbeck, S M; Kokeyama, K; Koley, S; Kondrashov, V; Kontos, A; Korobko, M; Korth, W Z; Kowalska, I; Kozak, D B; Kringel, V; Krishnan, B; Królak, A; Krueger, C; Kuehn, G; Kumar, P; Kuo, L; Kutynia, A; Lackey, B D; Landry, M; Lange, J; Lantz, B; Lasky, P D; Lazzarini, A; Lazzaro, C; Leaci, P; Leavey, S; Lebigot, E O; Lee, 
C H; Lee, H K; Lee, H M; Lee, K; Lenon, A; Leonardi, M; Leong, J R; Leroy, N; Letendre, N; Levin, Y; Levine, B M; Li, T G F; Libson, A; Littenberg, T B; Lockerbie, N A; Logue, J; Lombardi, A L; London, L T; Lord, J E; Lorenzini, M; Loriette, V; Lormand, M; Losurdo, G; Lough, J D; Lousto, C O; Lovelace, G; Lück, H; Lundgren, A P; Luo, J; Lynch, R; Ma, Y; MacDonald, T; Machenschalk, B; MacInnis, M; Macleod, D M; Magaña-Sandoval, F; Magee, R M; Mageswaran, M; Majorana, E; Maksimovic, I; Malvezzi, V; Man, N; Mandel, I; Mandic, V; Mangano, V; Mansell, G L; Manske, M; Mantovani, M; Marchesoni, F; Marion, F; Márka, S; Márka, Z; Markosyan, A S; Maros, E; Martelli, F; Martellini, L; Martin, I W; Martin, R M; Martynov, D V; Marx, J N; Mason, K; Masserot, A; Massinger, T J; Masso-Reid, M; Matichard, F; Matone, L; Mavalvala, N; Mazumder, N; Mazzolo, G; McCarthy, R; McClelland, D E; McCormick, S; McGuire, S C; McIntyre, G; McIver, J; McManus, D J; McWilliams, S T; Meacher, D; Meadors, G D; Meidam, J; Melatos, A; Mendell, G; Mendoza-Gandara, D; Mercer, R A; Merilh, E; Merzougui, M; Meshkov, S; Messenger, C; Messick, C; Meyers, P M; Mezzani, F; Miao, H; Michel, C; Middleton, H; Mikhailov, E E; Milano, L; Miller, J; Millhouse, M; Minenkov, Y; Ming, J; Mirshekari, S; Mishra, C; Mitra, S; Mitrofanov, V P; Mitselmakher, G; Mittleman, R; Moggi, A; Mohan, M; Mohapatra, S R P; Montani, M; Moore, B C; Moore, C J; Moraru, D; Moreno, G; Morriss, S R; Mossavi, K; Mours, B; Mow-Lowry, C M; Mueller, C L; Mueller, G; Muir, A W; Mukherjee, Arunava; Mukherjee, D; Mukherjee, S; Mukund, N; Mullavey, A; Munch, J; Murphy, D J; Murray, P G; Mytidis, A; Nardecchia, I; Naticchioni, L; Nayak, R K; Necula, V; Nedkova, K; Nelemans, G; Neri, M; Neunzert, A; Newton, G; Nguyen, T T; Nielsen, A B; Nissanke, S; Nitz, A; Nocera, F; Nolting, D; Normandin, M E; Nuttall, L K; Oberling, J; Ochsner, E; O'Dell, J; Oelker, E; Ogin, G H; Oh, J J; Oh, S H; Ohme, F; Oliver, M; Oppermann, P; Oram, Richard J; O'Reilly, B; 
O'Shaughnessy, R; Ottaway, D J; Ottens, R S; Overmier, H; Owen, B J; Pai, A; Pai, S A; Palamos, J R; Palashov, O; Palomba, C; Pal-Singh, A; Pan, H; Pan, Y; Pankow, C; Pannarale, F; Pant, B C; Paoletti, F; Paoli, A; Papa, M A; Paris, H R; Parker, W; Pascucci, D; Pasqualetti, A; Passaquieti, R; Passuello, D; Patricelli, B; Patrick, Z; Pearlstone, B L; Pedraza, M; Pedurand, R; Pekowsky, L; Pele, A; Penn, S; Perreca, A; Pfeiffer, H P; Phelps, M; Piccinni, O; Pichot, M; Piergiovanni, F; Pierro, V; Pillant, G; Pinard, L; Pinto, I M; Pitkin, M; Poggiani, R; Popolizio, P; Post, A; Powell, J; Prasad, J; Predoi, V; Premachandra, S S; Prestegard, T; Price, L R; Prijatelj, M; Principe, M; Privitera, S; Prodi, G A; Prokhorov, L; Puncken, O; Punturo, M; Puppo, P; Pürrer, M; Qi, H; Qin, J; Quetschke, V; Quintero, E A; Quitzow-James, R; Raab, F J; Rabeling, D S; Radkins, H; Raffai, P; Raja, S; Rakhmanov, M; Rapagnani, P; Raymond, V; Razzano, M; Re, V; Read, J; Reed, C M; Regimbau, T; Rei, L; Reid, S; Reitze, D H; Rew, H; Reyes, S D; Ricci, F; Riles, K; Robertson, N A; Robie, R; Robinet, F; Rocchi, A; Rolland, L; Rollins, J G; Roma, V J; Romano, R; Romanov, G; Romie, J H; Rosińska, D; Röver, C; Rowan, S; Rüdiger, A; Ruggi, P; Ryan, K; Sachdev, S; Sadecki, T; Sadeghian, L; Salconi, L; Saleem, M; Salemi, F; Samajdar, A; Sammut, L; Sanchez, E J; Sandberg, V; Sandeen, B; Sanders, J R; Sassolas, B; Sathyaprakash, B S; Saulson, P R; Sauter, O; Savage, R L; Sawadsky, A; Schale, P; Schilling, R; Schmidt, J; Schmidt, P; Schnabel, R; Schofield, R M S; Schönbeck, A; Schreiber, E; Schuette, D; Schutz, B F; Scott, J; Scott, S M; Sellers, D; Sengupta, A S; Sentenac, D; Sequino, V; Sergeev, A; Serna, G; Setyawati, Y; Sevigny, A; Shaddock, D A; Shah, S; Shahriar, M S; Shaltev, M; Shao, Z; Shapiro, B; Shawhan, P; Sheperd, A; Shoemaker, D H; Shoemaker, D M; Siellez, K; Siemens, X; Sigg, D; Silva, A D; Simakov, D; Singer, A; Singer, L P; Singh, A; Singh, R; Singhal, A; Sintes, A M; Slagmolen, B J J; 
Smith, J R; Smith, N D; Smith, R J E; Son, E J; Sorazu, B; Sorrentino, F; Souradeep, T; Srivastava, A K; Staley, A; Steinke, M; Steinlechner, J; Steinlechner, S; Steinmeyer, D; Stephens, B C; Stevenson, S P; Stone, R; Strain, K A; Straniero, N; Stratta, G; Strauss, N A; Strigin, S; Sturani, R; Stuver, A L; Summerscales, T Z; Sun, L; Sutton, P J; Swinkels, B L; Szczepańczyk, M J; Tacca, M; Talukder, D; Tanner, D B; Tápai, M; Tarabrin, S P; Taracchini, A; Taylor, R; Theeg, T; Thirugnanasambandam, M P; Thomas, E G; Thomas, M; Thomas, P; Thorne, K A; Thorne, K S; Thrane, E; Tiwari, S; Tiwari, V; Tokmakov, K V; Tomlinson, C; Tonelli, M; Torres, C V; Torrie, C I; Töyrä, D; Travasso, F; Traylor, G; Trifirò, D; Tringali, M C; Trozzo, L; Tse, M; Turconi, M; Tuyenbayev, D; Ugolini, D; Unnikrishnan, C S; Urban, A L; Usman, S A; Vahlbruch, H; Vajente, G; Valdes, G; van Bakel, N; van Beuzekom, M; van den Brand, J F J; Van Den Broeck, C; Vander-Hyde, D C; van der Schaaf, L; van der Sluys, M V; van Heijningen, J V; Vañó-Viñuales, A; van Veggel, A A; Vardaro, M; Vass, S; Vasúth, M; Vaulin, R; Vecchio, A; Vedovato, G; Veitch, J; Veitch, P J; Venkateswara, K; Verkindt, D; Vetrano, F; Viceré, A; Vinciguerra, S; Vine, D J; Vinet, J-Y; Vitale, S; Vo, T; Vocca, H; Vorvick, C; Voss, D; Vousden, W D; Vyatchanin, S P; Wade, A R; Wade, L E; Wade, M; Walker, M; Wallace, L; Walsh, S; Wang, G; Wang, H; Wang, M; Wang, X; Wang, Y; Ward, R L; Warner, J; Was, M; Weaver, B; Wei, L-W; Weinert, M; Weinstein, A J; Weiss, R; Welborn, T; Wen, L; Weßels, P; Westphal, T; Wette, K; Whelan, J T; White, D J; Whiting, B F; Williams, R D; Williamson, A R; Willis, J L; Willke, B; Wimmer, M H; Winkler, W; Wipf, C C; Wittel, H; Woan, G; Worden, J; Wright, J L; Wu, G; Yablon, J; Yam, W; Yamamoto, H; Yancey, C C; Yap, M J; Yu, H; Yvert, M; Zadrożny, A; Zangrando, L; Zanolin, M; Zendri, J-P; Zevin, M; Zhang, F; Zhang, L; Zhang, M; Zhang, Y; Zhao, C; Zhou, M; Zhou, Z; Zhu, X J; Zucker, M E; Zuraw, S E; Zweizig, J; 
Boyle, M; Brügmann, B; Campanelli, M; Clark, M; Hamberger, D; Kidder, L E; Kinsey, M; Laguna, P; Ossokine, S; Scheel, M A; Szilagyi, B; Teukolsky, S; Zlochower, Y

    2016-06-17

    On September 14, 2015, the Laser Interferometer Gravitational-Wave Observatory (LIGO) detected a gravitational-wave transient (GW150914); we characterize the properties of the source and its parameters. The data around the time of the event were analyzed coherently across the LIGO network using a suite of accurate waveform models that describe gravitational waves from a compact binary system in general relativity. GW150914 was produced by a nearly equal mass binary black hole of masses 36_{-4}^{+5}M_{⊙} and 29_{-4}^{+4}M_{⊙}; for each parameter we report the median value and the range of the 90% credible interval. The dimensionless spin magnitude of the more massive black hole is bound to be <0.7 (at 90% probability). The luminosity distance to the source is 410_{-180}^{+160}  Mpc, corresponding to a redshift 0.09_{-0.04}^{+0.03} assuming standard cosmology. The source location is constrained to an annulus section of 610  deg^{2}, primarily in the southern hemisphere. The binary merges into a black hole of mass 62_{-4}^{+4}M_{⊙} and spin 0.67_{-0.07}^{+0.05}. This black hole is significantly more massive than any other inferred from electromagnetic observations in the stellar-mass regime.
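The parameter summaries quoted above (a median plus a 90% credible interval) are computed from posterior samples. A minimal sketch of that reduction, using synthetic samples rather than actual LIGO posteriors (all values below are illustrative only):

```python
import random

def credible_interval(samples, mass=0.90):
    """Median and equal-tailed credible interval containing `mass` probability."""
    s = sorted(samples)
    n = len(s)
    lo = s[int((1.0 - mass) / 2.0 * n)]
    hi = s[min(n - 1, int((1.0 + mass) / 2.0 * n))]
    return lo, s[n // 2], hi

random.seed(0)
# synthetic posterior, roughly Gaussian around 36 solar masses (illustrative)
post = [random.gauss(36.0, 3.0) for _ in range(10000)]
lo, med, hi = credible_interval(post)
print(f"m1 = {med:.1f} (+{hi - med:.1f} / -{med - lo:.1f}) Msun")
```

The equal-tailed convention used here (5th and 95th percentiles) is one common choice; highest-posterior-density intervals are another.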

  18. A fast and robust method for moment tensor and depth determination of shallow seismic events in CTBT related studies.

    NASA Astrophysics Data System (ADS)

    Baker, Ben; Stachnik, Joshua; Rozhkov, Mikhail

    2017-04-01

The International Data Centre is required to conduct expert technical analysis and special studies to improve event parameters and to assist States Parties in identifying the source of a specific event, according to the Protocol to the Comprehensive Nuclear-Test-Ban Treaty. Determining a seismic event's source mechanism and depth is closely related to these tasks. It is typically done through a linearized inversion of the waveforms for a complete set or subset of source parameters, or through a similarly defined grid search over precomputed Green's functions created for particular source models. In this presentation we demonstrate preliminary results obtained with the latter approach from an improved software design. In this development we aimed to be compliant with the different modes of the CTBT monitoring regime: to cover a wide range of source-receiver distances (regional to teleseismic), resolve shallow source depths, provide full moment tensor solutions based on body- and surface-wave recordings, be fast enough for both on-demand studies and automatic processing, properly incorporate observed waveforms and any a priori uncertainties, and accurately estimate a posteriori uncertainties. Posterior distributions of moment tensor parameters show narrow peaks where a significant number of reliable surface-wave observations are available. For earthquake examples, fault-orientation (strike, dip, and rake) posterior distributions are also consistent with published catalogues. Inclusion of observations on horizontal components will provide further constraints. In addition, the calculation of teleseismic P-wave Green's functions is improved through prior analysis to determine an appropriate attenuation parameter for each source-receiver path. The implemented HDF5-based pre-packaging of Green's functions allows much greater flexibility in utilizing different software packages and methods for computation.
Further additions will allow rapid use of Instaseis/AXISEM full-waveform synthetics from a pre-computed Green's function archive. Along with traditional post-processing analysis of waveform misfits through several objective functions and variance reduction, we follow a probabilistic approach to assess the robustness of the moment tensor solution. In the course of this project, full moment tensor and depth estimates are determined for the DPRK events and for shallow earthquakes using a new implementation of teleseismic P-wave waveform fitting. A full grid search over the entire moment tensor space is used to sample all possible solutions appropriately, employing the recent method of Tape & Tape (2012) to discretize the complete moment tensor space from a geometric perspective. Probabilistic uncertainty estimates on the moment tensor parameters provide a measure of the robustness of the solution.
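Variance reduction, one of the misfit measures mentioned in this record, compares an observed and a synthetic waveform sample by sample. A generic sketch (the function name and traces are hypothetical, not from the authors' software):

```python
def variance_reduction(observed, synthetic):
    """VR = 1 - sum((d - s)^2) / sum(d^2); VR = 1.0 means a perfect fit."""
    misfit = sum((d - s) ** 2 for d, s in zip(observed, synthetic))
    power = sum(d ** 2 for d in observed)
    return 1.0 - misfit / power

# toy traces standing in for an observed and a modelled waveform
obs = [0.0, 1.0, 0.5, -0.5, -1.0]
syn = [0.1, 0.9, 0.6, -0.4, -1.1]
vr = variance_reduction(obs, syn)
```

In a grid search over source parameters, each candidate solution's synthetic would be scored this way and the posterior built from the resulting misfits.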

  19. Exploration of DGVM Parameter Solution Space Using Simulated Annealing: Implications for Forecast Uncertainties

    NASA Astrophysics Data System (ADS)

    Wells, J. R.; Kim, J. B.

    2011-12-01

Parameters in dynamic global vegetation models (DGVMs) are thought to be weakly constrained and can be a significant source of errors and uncertainties. DGVMs use between 5 and 26 plant functional types (PFTs) to represent the average plant life form in each simulated plot, and each PFT typically has a dozen or more parameters that define the way it uses resources and responds to the simulated growing environment. Sensitivity analysis explores how varying parameters affects the output, but does not fully explore the parameter solution space. The solution space for DGVM parameter values is thought to be complex and non-linear, and multiple sets of acceptable parameters may exist. In published studies, PFT parameters are estimated from the literature, often from a single published value, and are then "tuned" using somewhat arbitrary, trial-and-error methods. BIOMAP is a new DGVM created by fusing the MAPSS biogeography model with Biome-BGC. It represents the vegetation of North America using 26 PFTs. We are using simulated annealing, a global search method, to systematically and objectively explore the solution space for the BIOMAP PFT and system parameters important for plant water use. We defined the boundaries of the solution space by obtaining maximum and minimum values from the published literature and, where those were not available, by using +/-20% of the current values. We used stratified random sampling to select a set of grid cells representing the vegetation of the conterminous USA. The simulated annealing algorithm is applied to the parameters for spin-up and for a transient run during the historical period 1961-1990. A set of parameter values is considered acceptable if the associated simulation run produces a modern potential vegetation distribution map that is as accurate as one produced by trial-and-error calibration.
We expect to confirm that the solution space is non-linear and complex and that multiple acceptable parameter sets exist. Further, we expect to demonstrate that the multiple parameter sets produce significantly divergent future forecasts in NEP, C storage, ET, and runoff, thereby identifying a highly important source of DGVM uncertainty.
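The core loop of simulated annealing described in this record can be sketched in a few lines; the snippet below minimizes a toy quadratic objective standing in for a model-calibration misfit (a generic textbook loop with illustrative settings, not the study's implementation):

```python
import math
import random

def anneal(objective, x0, step, t0=1.0, cooling=0.995, iters=5000, seed=1):
    """Minimise `objective` by simulated annealing with geometric cooling."""
    rng = random.Random(seed)
    x, fx = list(x0), objective(x0)
    best, fbest = list(x), fx
    t = t0
    for _ in range(iters):
        cand = [xi + rng.uniform(-step, step) for xi in x]  # random neighbour
        fc = objective(cand)
        # always accept improvements; accept uphill moves with prob exp(-dF/T)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling
    return best, fbest

# toy two-parameter objective with optimum at (3, 3)
best, fbest = anneal(lambda p: sum((v - 3.0) ** 2 for v in p),
                     [0.0, 0.0], step=0.5)
```

The ability to accept occasional uphill moves early, while the "temperature" is high, is what lets the search escape local minima in a complex, non-linear solution space.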

  20. BEYOND ELLIPSE(S): ACCURATELY MODELING THE ISOPHOTAL STRUCTURE OF GALAXIES WITH ISOFIT AND CMODEL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ciambur, B. C., E-mail: bciambur@swin.edu.au

    2015-09-10

This work introduces a new fitting formalism for isophotes that enables more accurate modeling of galaxies with non-elliptical shapes, such as disk galaxies viewed edge-on or galaxies with X-shaped/peanut bulges. Within this scheme, the angular parameter that defines quasi-elliptical isophotes is transformed from the commonly used, but inappropriate, polar coordinate to the “eccentric anomaly.” This provides a superior description of deviations from ellipticity, better capturing the true isophotal shape. Furthermore, this makes it possible to accurately recover both the surface brightness profile, using the correct azimuthally averaged isophote, and the two-dimensional model of any galaxy: the hitherto ubiquitous, but artificial, cross-like features in residual images are completely removed. The formalism has been implemented into the Image Reduction and Analysis Facility tasks Ellipse and Bmodel to create the new tasks “Isofit” and “Cmodel.” The new tools are demonstrated here with application to five galaxies, chosen to be representative case-studies for several areas where this technique makes it possible to gain new scientific insight. Specifically: properly quantifying boxy/disky isophotes via the fourth harmonic order in edge-on galaxies, quantifying X-shaped/peanut bulges, higher-order Fourier moments for modeling bars in disks, and complex isophote shapes. Higher order (n > 4) harmonics now become meaningful and may correlate with structural properties, as boxyness/diskyness is known to do. This work also illustrates how the accurate construction, and subtraction, of a model from a galaxy image facilitates the identification and recovery of over-lapping sources such as globular clusters and the optical counterparts of X-ray sources.

  1. Daniell method for power spectral density estimation in atomic force microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Labuda, Aleksander

An alternative method for power spectral density (PSD) estimation—the Daniell method—is revisited and compared to the most prevalent method used in the field of atomic force microscopy for quantifying cantilever thermal motion—the Bartlett method. Both methods are shown to underestimate the Q factor of a simple harmonic oscillator (SHO) by a predictable, and therefore correctable, amount in the absence of spurious deterministic noise sources. However, the Bartlett method is much more prone to spectral leakage which can obscure the thermal spectrum in the presence of deterministic noise. By the significant reduction in spectral leakage, the Daniell method leads to a more accurate representation of the true PSD and enables clear identification and rejection of deterministic noise peaks. This benefit is especially valuable for the development of automated PSD fitting algorithms for robust and accurate estimation of SHO parameters from a thermal spectrum.
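The Daniell estimator is simply a moving average over raw periodogram ordinates. A self-contained sketch (pure-Python DFT for brevity; a hypothetical illustration of the method, not the paper's implementation):

```python
import cmath
import math
import random

def periodogram(x):
    """Raw periodogram |X_k|^2 / N (a single realisation; high variance)."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) ** 2 / n
            for k in range(1, n // 2)]

def daniell(psd, m=4):
    """Daniell estimate: average 2m+1 neighbouring ordinates
    (the window shrinks at the edges)."""
    out = []
    for k in range(len(psd)):
        lo, hi = max(0, k - m), min(len(psd), k + m + 1)
        out.append(sum(psd[lo:hi]) / (hi - lo))
    return out

random.seed(2)
noise = [random.gauss(0.0, 1.0) for _ in range(128)]  # white "thermal" record
raw = periodogram(noise)
smooth = daniell(raw)
```

Averaging 2m+1 ordinates reduces the variance of each PSD estimate by roughly that factor, at the cost of frequency resolution, which is the trade-off an automated SHO-fitting routine would tune.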

  2. Radiative transfer of HCN: interpreting observations of hyperfine anomalies

    NASA Astrophysics Data System (ADS)

    Mullins, A. M.; Loughnane, R. M.; Redman, M. P.; Wiles, B.; Guegan, N.; Barrett, J.; Keto, E. R.

    2016-07-01

Molecules with hyperfine splitting of their rotational line spectra are useful probes of optical depth, via the relative line strengths of their hyperfine components. The hyperfine splitting is particularly advantageous in interpreting the physical conditions of the emitting gas because, with a second rotational transition, both gas density and temperature can be derived. For HCN, however, the relative strengths of the hyperfine lines are anomalous. They appear in ratios which can vary significantly from source to source and are inconsistent with local thermodynamic equilibrium (LTE). This is the HCN hyperfine anomaly, and it prevents the use of simple LTE models of HCN emission to derive reliable optical depths. In this paper, we demonstrate how to model HCN hyperfine line emission and derive accurate line ratios, spectral line shapes and optical depths. We show that by carrying out radiative transfer calculations over each hyperfine level individually, as opposed to summing them over each rotational level, the anomalous hyperfine emission emerges naturally. Doing this requires not only accurate radiative rates between hyperfine states, but also accurate collisional rates. We investigate the effects of different sets of hyperfine collisional rates, derived via the proportional method and through direct recoupling calculations. Through an extensive parameter sweep over typical low-mass star-forming conditions, we show the HCN line ratios to be highly sensitive to optical depth. We also reproduce an observed effect whereby the red-blue asymmetry of the hyperfine lines (an infall signature) switches sense within a single rotational transition.

  3. Using radiance predicted by the P3 approximation in a spherical geometry to predict tissue optical properties

    NASA Astrophysics Data System (ADS)

    Dickey, Dwayne J.; Moore, Ronald B.; Tulip, John

    2001-01-01

For photodynamic therapy (PDT) of solid tumors, such as prostatic carcinoma, to be effective, an accurate model to predict tissue parameters and light dose must be found. Presently, most analytical light dosimetry models are fluence based and are not clinically viable for tissue characterization. Other methods of predicting optical properties, such as Monte Carlo simulation, are accurate but far too time-consuming for clinical application. However, radiance predicted by the P3 approximation, an analytical solution to the transport equation, may be a viable and accurate alternative. The P3 approximation accurately predicts optical parameters in Intralipid/methylene-blue-based phantoms in a spherical geometry. The optical parameters furnished by the radiance, when introduced into the fluence predicted by both the P3 approximation and Grosjean theory, correlate well with experimental data. The P3 approximation also predicts the optical properties of prostate tissue, agreeing with documented optical parameters. The P3 approximation could be the clinical tool necessary to facilitate PDT of solid tumors because of the limited number of invasive measurements required and the speed with which accurate calculations can be performed.

  4. Misspecification in Latent Change Score Models: Consequences for Parameter Estimation, Model Evaluation, and Predicting Change.

    PubMed

    Clark, D Angus; Nuttall, Amy K; Bowles, Ryan P

    2018-01-01

Latent change score models (LCS) are conceptually powerful tools for analyzing longitudinal data (McArdle & Hamagami, 2001). However, applications of these models typically include constraints on key parameters over time. Although practically useful, strict invariance over time in these parameters is unlikely in real data. This study investigates the robustness of LCS when invariance over time is incorrectly imposed on key change-related parameters. Monte Carlo simulation methods were used to explore the impact of misspecification on parameter estimation, predicted trajectories of change, and model fit in the dual change score model, the foundational LCS. When constraints were incorrectly applied, several parameters, most notably the slope (i.e., constant change) factor mean and autoproportion coefficient, were severely and consistently biased, as were regression paths to the slope factor when external predictors of change were included. Standard fit indices indicated that the misspecified models fit well, partly because mean level trajectories over time were accurately captured. Loosening the constraints improved the accuracy of parameter estimates, but estimates were more unstable, and models frequently failed to converge. Results suggest that potentially common sources of misspecification in LCS can produce distorted impressions of developmental processes, and that identifying and rectifying the situation is a challenge.
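The dual change score model combines a constant-change component with a proportional-change component: each latent change is Δy_t = α·s + β·y_{t-1}. A minimal sketch of the implied mean trajectory (the model's core recursion only, not the study's simulation code; parameter values are illustrative):

```python
def dual_change_trajectory(y0, slope, beta, waves):
    """Mean trajectory of a dual change score model:
    delta_t = slope + beta * y_{t-1};  y_t = y_{t-1} + delta_t."""
    y = [y0]
    for _ in range(waves - 1):
        y.append(y[-1] + slope + beta * y[-1])
    return y

# constant change of 0.5 per wave plus 10% proportional change
traj = dual_change_trajectory(1.0, 0.5, 0.1, 3)
```

Incorrectly constraining `slope` or `beta` to be equal across waves, when they actually vary, is exactly the misspecification whose consequences the study quantifies.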

  5. Inverse modelling for real-time estimation of radiological consequences in the early stage of an accidental radioactivity release.

    PubMed

    Pecha, Petr; Šmídl, Václav

    2016-11-01

    A stepwise sequential assimilation algorithm is proposed based on an optimisation approach for recursive parameter estimation and tracking of radioactive plume propagation in the early stage of a radiation accident. Predictions of the radiological situation in each time step of the plume propagation are driven by an existing short-term meteorological forecast and the assimilation procedure manipulates the model parameters to match the observations incoming concurrently from the terrain. Mathematically, the task is a typical ill-posed inverse problem of estimating the parameters of the release. The proposed method is designated as a stepwise re-estimation of the source term release dynamics and an improvement of several input model parameters. It results in a more precise determination of the adversely affected areas in the terrain. The nonlinear least-squares regression methodology is applied for estimation of the unknowns. The fast and adequately accurate segmented Gaussian plume model (SGPM) is used in the first stage of direct (forward) modelling. The subsequent inverse procedure infers (re-estimates) the values of important model parameters from the actual observations. Accuracy and sensitivity of the proposed method for real-time forecasting of the accident propagation is studied. First, a twin experiment generating noiseless simulated "artificial" observations is studied to verify the minimisation algorithm. Second, the impact of the measurement noise on the re-estimated source release rate is examined. In addition, the presented method can be used as a proposal for more advanced statistical techniques using, e.g., importance sampling. Copyright © 2016 Elsevier Ltd. All rights reserved.
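For the special case in which the unknown is a single release rate and the dispersion model is linear in it, the least-squares re-estimation reduces to a one-line normal equation. The sketch below mirrors the abstract's noiseless "twin experiment"; the kernel values stand in for SGPM predictions and are hypothetical:

```python
def estimate_release_rate(observations, unit_kernel):
    """Least-squares source strength Q for the linear model c_i = Q * g_i,
    where g_i is the concentration predicted for a unit release."""
    num = sum(g * c for g, c in zip(unit_kernel, observations))
    den = sum(g * g for g in unit_kernel)
    return num / den

# twin experiment: true Q = 5.0, noiseless "artificial" observations
kernel = [0.2, 0.5, 1.0, 0.7, 0.3]
obs = [5.0 * g for g in kernel]
q_hat = estimate_release_rate(obs, kernel)
```

In the paper's full setting the problem is nonlinear (several model parameters are adjusted jointly), so an iterative nonlinear least-squares solver replaces this closed form, but the verification logic of the twin experiment is the same: the estimator must recover the known source term exactly when the observations are noiseless.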

  6. Automated Method for Estimating Nutation Time Constant Model Parameters for Spacecraft Spinning on Axis

    NASA Technical Reports Server (NTRS)

    2008-01-01

    Calculating an accurate nutation time constant (NTC), or nutation rate of growth, for a spinning upper stage is important for ensuring mission success. Spacecraft nutation, or wobble, is caused by energy dissipation anywhere in the system. Propellant slosh in the spacecraft fuel tanks is the primary source for this dissipation and, if it is in a state of resonance, the NTC can become short enough to violate mission constraints. The Spinning Slosh Test Rig (SSTR) is a forced-motion spin table where fluid dynamic effects in full-scale fuel tanks can be tested in order to obtain key parameters used to calculate the NTC. We accomplish this by independently varying nutation frequency versus the spin rate and measuring force and torque responses on the tank. This method was used to predict parameters for the Genesis, Contour, and Stereo missions, whose tanks were mounted outboard from the spin axis. These parameters are incorporated into a mathematical model that uses mechanical analogs, such as pendulums and rotors, to simulate the force and torque resonances associated with fluid slosh.

  7. Analytical model of diffuse reflectance spectrum of skin tissue

    NASA Astrophysics Data System (ADS)

    Lisenko, S. A.; Kugeiko, M. M.; Firago, V. A.; Sobchuk, A. N.

    2014-01-01

    We have derived simple analytical expressions that enable highly accurate calculation of diffusely reflected light signals of skin in the spectral range from 450 to 800 nm at a distance from the region of delivery of exciting radiation. The expressions, taking into account the dependence of the detected signals on the refractive index, transport scattering coefficient, absorption coefficient and anisotropy factor of the medium, have been obtained in the approximation of a two-layer medium model (epidermis and dermis) for the same parameters of light scattering but different absorption coefficients of layers. Numerical experiments on the retrieval of the skin biophysical parameters from the diffuse reflectance spectra simulated by the Monte Carlo method show that commercially available fibre-optic spectrophotometers with a fixed distance between the radiation source and detector can reliably determine the concentration of bilirubin, oxy- and deoxyhaemoglobin in the dermis tissues and the tissue structure parameter characterising the size of its effective scatterers. We present the examples of quantitative analysis of the experimental data, confirming the correctness of estimates of biophysical parameters of skin using the obtained analytical expressions.

  8. Dynamic imaging model and parameter optimization for a star tracker.

    PubMed

    Yan, Jinyun; Jiang, Jie; Zhang, Guangjun

    2016-03-21

    Under dynamic conditions, star spots move across the image plane of a star tracker and form a smeared star image. This smearing effect increases errors in star position estimation and degrades attitude accuracy. First, an analytical energy distribution model of a smeared star spot is established based on a line segment spread function because the dynamic imaging process of a star tracker is equivalent to the static imaging process of linear light sources. The proposed model, which has a clear physical meaning, explicitly reflects the key parameters of the imaging process, including incident flux, exposure time, velocity of a star spot in an image plane, and Gaussian radius. Furthermore, an analytical expression of the centroiding error of the smeared star spot is derived using the proposed model. An accurate and comprehensive evaluation of centroiding accuracy is obtained based on the expression. Moreover, analytical solutions of the optimal parameters are derived to achieve the best performance in centroid estimation. Finally, we perform numerical simulations and a night sky experiment to validate the correctness of the dynamic imaging model, the centroiding error expression, and the optimal parameters.
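Centroiding a star spot, smeared or not, is an intensity-weighted mean over pixels; the smearing matters because it spreads the energy and changes how noise maps into centroid error. A generic sketch of the centroid step only (not the authors' analytical model):

```python
def centroid(image):
    """Intensity-weighted centroid (x, y) of a 2-D pixel grid."""
    total = sum(sum(row) for row in image)
    cx = sum(x * v for row in image for x, v in enumerate(row)) / total
    cy = sum(y * v for y, row in enumerate(image) for v in row) / total
    return cx, cy

# a spot smeared along x: equal energy in columns 1 and 2
spot = [[0.0, 1.0, 1.0, 0.0],
        [0.0, 2.0, 2.0, 0.0],
        [0.0, 1.0, 1.0, 0.0]]
cx, cy = centroid(spot)
```

The paper's contribution is an analytical error model for this estimate as a function of incident flux, exposure time, spot velocity, and Gaussian radius, which this sketch does not attempt to reproduce.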

  9. Ecological prognosis near intensive acoustic sources

    NASA Astrophysics Data System (ADS)

    Kostarev, Stanislav A.; Makhortykh, Sergey A.; Rybak, Samuil A.

    2002-11-01

The problem of wave-field excitation in the ground by a quasiperiodic source, placed on the ground surface or at some depth in the soil, is investigated. The ecological situation in this case is in many respects determined by the quality of the forecast of the induced vibrations and noise. In the present work, the distributed source is modeled by a set of statistically linked compact sources on the surface or in the ground. Changes of the medium's parameters along an axis and horizontal heterogeneity of the environment are taken into account. Both analytical and numerical approaches are developed; the latter are included in the software package VibraCalc, which calculates the distribution of the elastic wave field in the ground from quasilinear sources. Accurate evaluation of vibration levels in buildings from high-intensity underground sources is achieved by modeling wave propagation in dissipative inhomogeneous elastic media. The model takes into account both bulk (longitudinal and shear) and surface Rayleigh waves. To verify the approach, a series of measurements was carried out near the experimental section of the monorail road designed in Moscow. Both calculation and measurement results are presented in the paper.


  11. GADRAS-DRF 18.6 User's Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Horne, Steve M.; Thoreson, Greg G.; Theisen, Lisa A.

    2016-05-01

    The Gamma Detector Response and Analysis Software–Detector Response Function (GADRAS-DRF) application computes the response of gamma-ray and neutron detectors to incoming radiation. This manual provides step-by-step procedures to acquaint new users with the application. Its capabilities include characterization of detector response parameters, plotting and viewing measured and computed spectra, analyzing spectra to identify isotopes, and estimating source energy distributions from measured spectra. GADRAS-DRF computes detector responses quickly and accurately, giving users usable results in a matter of seconds or minutes.

  12. Bayesian Integration of Information in Hippocampal Place Cells

    PubMed Central

    Madl, Tamas; Franklin, Stan; Chen, Ke; Montaldi, Daniela; Trappl, Robert

    2014-01-01

    Accurate spatial localization requires a mechanism that corrects for errors, which might arise from inaccurate sensory information or neuronal noise. In this paper, we propose that hippocampal place cells might implement such an error correction mechanism by integrating different sources of information in an approximately Bayes-optimal fashion. We compare the predictions of our model with physiological data from rats. Our results suggest that useful predictions regarding the firing fields of place cells can be made from a single underlying principle, Bayesian cue integration, and that such predictions are possible using a remarkably small number of model parameters. PMID:24603429
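
    The Bayes-optimal cue integration the authors propose can be illustrated with a minimal sketch (a toy product-of-Gaussians fusion, not the paper's place-cell model; all cue values below are hypothetical): for independent Gaussian cues, the posterior mean is a precision-weighted average, so the fused estimate is pulled toward the more reliable cue and its variance shrinks below either input.

```python
import numpy as np

def fuse_gaussian_cues(means, variances):
    # Bayes-optimal fusion of independent Gaussian cues:
    # precision-weighted mean, combined (smaller) variance.
    means = np.asarray(means, dtype=float)
    precisions = 1.0 / np.asarray(variances, dtype=float)
    var_post = 1.0 / precisions.sum()
    mu_post = var_post * (precisions * means).sum()
    return mu_post, var_post

# a visual cue at 10 cm (variance 4) and a self-motion cue at 14 cm (variance 1)
mu, var = fuse_gaussian_cues([10.0, 14.0], [4.0, 1.0])  # → (13.2, 0.8)
```

    The fused estimate (13.2) lies four times closer to the more precise cue, and the posterior variance (0.8) is smaller than either input variance, which is the error-correcting behavior the abstract attributes to place cells.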

  13. Millimeter- and submillimeter-wave characterization of various fabrics.

    PubMed

    Dunayevskiy, Ilya; Bortnik, Bartosz; Geary, Kevin; Lombardo, Russell; Jack, Michael; Fetterman, Harold

    2007-08-20

    Transmission measurements of 14 fabrics are presented in the millimeter-wave and submillimeter-wave electromagnetic regions from 130 GHz to 1.2 THz. Three independent sources and experimental set-ups were used to obtain accurate results over a wide spectral range. Reflectivity, a useful parameter for imaging applications, was also measured for a subset of samples in the submillimeter-wave regime along with polarization sensitivity of the transmitted beam and transmission through doubled layers. All of the measurements were performed in free space. Details of these experimental set-ups along with their respective challenges are presented.

  14. Robust estimation procedure in panel data model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shariff, Nurul Sima Mohamad; Hamzah, Nor Aishah

    2014-06-19

    Panel data modeling has received great attention in econometric research recently, owing to the availability of data sources and the interest in studying cross sections of individuals observed over time. However, problems may arise in modeling the panel in the presence of cross-sectional dependence and outliers. Although a few methods take cross-sectional dependence in the panel into consideration, they may provide inconsistent parameter estimates and inferences when outliers occur in the panel. As such, an alternative method that is robust to outliers and cross-sectional dependence is introduced in this paper. The properties and construction of the confidence interval for the parameter estimates are also considered. The robustness of the procedure is investigated and comparisons are made to the existing method via simulation studies. Our results show that the robust approach is able to produce accurate and reliable parameter estimates under the conditions considered.

  15. ADVANTG An Automated Variance Reduction Parameter Generator, Rev. 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mosher, Scott W.; Johnson, Seth R.; Bevill, Aaron M.

    2015-08-01

    The primary objective of ADVANTG is to reduce both the user effort and the computational time required to obtain accurate and precise tally estimates across a broad range of challenging transport applications. ADVANTG has been applied to simulations of real-world radiation shielding, detection, and neutron activation problems. Examples of shielding applications include material damage and dose rate analyses of the Oak Ridge National Laboratory (ORNL) Spallation Neutron Source and High Flux Isotope Reactor (Risner and Blakeman 2013) and the ITER Tokamak (Ibrahim et al. 2011). ADVANTG has been applied to a suite of radiation detection, safeguards, and special nuclear material movement detection test problems (Shaver et al. 2011). ADVANTG has also been used in the prediction of activation rates within light water reactor facilities (Pantelias and Mosher 2013). In these projects, ADVANTG was demonstrated to significantly increase the tally figure of merit (FOM) relative to an analog MCNP simulation. The ADVANTG-generated parameters were also shown to be more effective than manually generated geometry splitting parameters.

  16. The fundamental parameter method applied to X-ray fluorescence analysis with synchrotron radiation

    NASA Astrophysics Data System (ADS)

    Pantenburg, F. J.; Beier, T.; Hennrich, F.; Mommsen, H.

    1992-05-01

    Quantitative X-ray fluorescence analysis applying the fundamental parameter method is usually restricted to monochromatic excitation sources. It is shown here that such analyses can also be performed with a white synchrotron radiation spectrum. To determine absolute elemental concentration values it is necessary to know the spectral distribution of this spectrum. A newly designed and tested experimental setup, which uses the synchrotron radiation emitted from electrons in a bending magnet of ELSA (the electron stretcher accelerator of the University of Bonn), is presented. The determination of the exciting spectrum, described by the given electron beam parameters, is limited by uncertainties in the vertical electron beam size and divergence. We describe a method which allows us to determine the relative and absolute spectral distributions needed for accurate analysis. First test measurements of different alloys and standards of known composition demonstrate that it is possible to determine exact concentration values in bulk and trace element analysis.

  17. Impact of signal scattering and parametric uncertainties on receiver operating characteristics

    NASA Astrophysics Data System (ADS)

    Wilson, D. Keith; Breton, Daniel J.; Hart, Carl R.; Pettit, Chris L.

    2017-05-01

    The receiver operating characteristic (ROC curve), which is a plot of the probability of detection as a function of the probability of false alarm, plays a key role in the classical analysis of detector performance. However, meaningful characterization of the ROC curve is challenging when practically important complications such as variations in source emissions, environmental impacts on the signal propagation, uncertainties in the sensor response, and multiple sources of interference are considered. In this paper, a relatively simple but realistic model for scattered signals is employed to explore how parametric uncertainties impact the ROC curve. In particular, we show that parametric uncertainties in the mean signal and noise power substantially raise the tails of the distributions; since receiver operation with a very low probability of false alarm and a high probability of detection is normally desired, these tails lead to severely degraded performance. Because full a priori knowledge of such parametric uncertainties is rarely available in practice, analyses must typically be based on a finite sample of environmental states, which only partially characterize the range of parameter variations. We show how this effect can lead to misleading assessments of system performance. For the cases considered, approximately 64 or more statistically independent samples of the uncertain parameters are needed to accurately predict the probabilities of detection and false alarm. A connection is also described between selection of suitable distributions for the uncertain parameters, and Bayesian adaptive methods for inferring the parameters.
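
    The finite-sample effect described above can be sketched with a toy threshold detector (not the authors' scattering model; the SNR distribution and threshold below are hypothetical): each environmental state draws a mean signal level, and the ROC operating point is a Monte Carlo average over those draws, so too few draws give an unstable estimate.

```python
import numpy as np
from math import erfc, sqrt

def q(x):
    # Gaussian tail probability Q(x)
    return 0.5 * erfc(x / sqrt(2.0))

def roc_point(threshold, snr_db_samples, noise_sigma=1.0):
    # Average probability of detection over sampled environmental states
    # (uncertain mean signal amplitude); in this toy model the
    # false-alarm rate depends only on the noise level.
    amps = noise_sigma * 10.0 ** (np.asarray(snr_db_samples) / 20.0)
    pd = float(np.mean([q((threshold - a) / noise_sigma) for a in amps]))
    pfa = q(threshold / noise_sigma)
    return pd, pfa

rng = np.random.default_rng(0)
few = rng.normal(10.0, 3.0, size=8)       # 8 sampled states (SNR in dB)
many = rng.normal(10.0, 3.0, size=1024)   # 1024 sampled states
pd_few, pfa = roc_point(4.0, few)
pd_many, _ = roc_point(4.0, many)
```

    With only 8 samples, the estimated probability of detection can differ appreciably from the 1024-sample estimate, which is the finite-sample pitfall the abstract quantifies (roughly 64 or more samples needed in the cases considered).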

  18. The evaluation of the average energy parameters for spectra of quasimonoenergetic neutrons produced in (p,n)-reactions on solid tritium targets

    NASA Astrophysics Data System (ADS)

    Sosnin, A. N.; Shorin, V. S.

    1989-10-01

    Fast neutron cross-section measurements using quasimonoenergetic (p,n) neutron sources require the determination of average neutron spectrum parameters such as the mean energy ⟨E⟩ and the variance D. In this paper a simple model is considered for determining the ⟨E⟩ and D values. The approach takes into account the actual layout of the solid tritium target and the irradiated sample. It is valid for targets with a thickness of less than 1 mg/cm². It has been shown that the first and second moments of the tritium distribution function, ⟨x⟩ and ⟨x²⟩, are connected by simple analytical expressions with the average characteristics of the neutron yield measured above the (p,n) reaction threshold energy. Our results are compared with accurate calculations for Sc-T targets.
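
    The averaged spectrum parameters in question are simply the first two moments of the neutron yield; a minimal sketch with a hypothetical tabulated spectrum (not Sc-T target data):

```python
import numpy as np

def spectrum_moments(energies, yields):
    # mean energy <E> and variance D of a tabulated spectrum
    e = np.asarray(energies, dtype=float)
    w = np.asarray(yields, dtype=float)
    w = w / w.sum()                      # normalize the yield to weights
    mean_e = float((w * e).sum())
    d = float((w * (e - mean_e) ** 2).sum())
    return mean_e, d

# toy quasimonoenergetic peak: energies in MeV, relative yields
mean_e, d = spectrum_moments([1.0, 1.1, 1.2], [1.0, 2.0, 1.0])  # → (1.1, 0.005)
```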

  19. Open-source software for collision detection in external beam radiation therapy

    NASA Astrophysics Data System (ADS)

    Suriyakumar, Vinith M.; Xu, Renee; Pinter, Csaba; Fichtinger, Gabor

    2017-03-01

    PURPOSE: Collision detection for external beam radiation therapy (RT) is important for eliminating the need for dry runs that aim to ensure patient safety. Commercial treatment planning systems (TPS) offer this feature but they are expensive and proprietary. Cobalt-60 RT machines are a viable solution to RT practice in low-budget scenarios. However, such clinics are hesitant to invest in these machines due to a lack of affordable treatment planning software. We propose the creation of an open-source room's eye view visualization module with automated collision detection as part of the development of an open-source TPS. METHODS: An openly accessible linac 3D geometry model is sliced into the different components of the treatment machine. The model's movements are based on the International Electrotechnical Commission standard. Automated collision detection is implemented between the treatment machine's components. RESULTS: The room's eye view module was built in C++ as part of SlicerRT, an RT research toolkit built on 3D Slicer. The module was tested using head and neck and prostate RT plans. These tests verified that the module accurately modeled the movements of the treatment machine and radiation beam. Automated collision detection was verified using tests where geometric parameters of the machine's components were changed, demonstrating accurate collision detection. CONCLUSION: Room's eye view visualization and automated collision detection are essential in a Cobalt-60 treatment planning system. Development of these features will advance the creation of an open-source TPS that will potentially help increase the feasibility of adopting Cobalt-60 RT.
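
    SlicerRT's collision detection operates on full 3D component meshes; the underlying geometric test can be sketched with a much cruder axis-aligned bounding-box check (all coordinates below are hypothetical, in cm):

```python
def aabb_collide(box_a, box_b):
    # Overlap test for two 3D axis-aligned bounding boxes, each given as
    # ((xmin, ymin, zmin), (xmax, ymax, zmax)): boxes collide iff their
    # intervals overlap on every axis.
    (a_lo, a_hi), (b_lo, b_hi) = box_a, box_b
    return all(a_lo[k] <= b_hi[k] and b_lo[k] <= a_hi[k] for k in range(3))

gantry_head = ((-20.0, -20.0, 90.0), (20.0, 20.0, 110.0))
couch_top = ((-25.0, -40.0, 85.0), (25.0, 40.0, 95.0))
colliding = aabb_collide(gantry_head, couch_top)  # → True (all axes overlap)
```

    In a real room's eye view module, such a test would run after applying the IEC rotations and translations to each component's bounding volume, with finer mesh-level checks reserved for candidate pairs.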

  20. Tuning of Kalman filter parameters via genetic algorithm for state-of-charge estimation in battery management system.

    PubMed

    Ting, T O; Man, Ka Lok; Lim, Eng Gee; Leach, Mark

    2014-01-01

    In this work, a state-space battery model is derived mathematically to estimate the state-of-charge (SoC) of a battery system. Subsequently, a Kalman filter (KF) is applied to predict the dynamical behavior of the battery model. Results show an accurate prediction, as the accumulated root-mean-square (RMS) error is very small. From this work, it is found that different sets of Q and R values (the KF's parameters) can be applied for better performance and hence lower RMS error. This is the motivation for the application of a metaheuristic algorithm. Hence, the result is further improved by applying a genetic algorithm (GA) to tune the Q and R parameters of the KF. In an online application, a GA can be applied to obtain the optimal parameters of the KF before its application to a real plant (system). This simply means that the instantaneous response of the KF is not affected by the time-consuming GA, as this approach is applied only once to obtain the optimal parameters. The relevant workable MATLAB source codes are given in the appendix to ease future work and analysis in this area.
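
    The offline tuning loop described above can be sketched as follows (a scalar random-walk filter and a bare-bones GA with mutation and elitism, not the MATLAB code from the paper's appendix; all noise levels are hypothetical): the GA searches (Q, R) for the pair minimizing RMS error against a reference trajectory, and the tuned filter then runs at full speed.

```python
import numpy as np

def kf_rms(q, r, zs, truth):
    # Scalar random-walk Kalman filter with process noise q and
    # measurement noise r; returns RMS error of the state estimates.
    x, p, est = 0.0, 1.0, []
    for z in zs:
        p += q                    # predict
        k = p / (p + r)           # Kalman gain
        x += k * (z - x)          # update
        p *= (1.0 - k)
        est.append(x)
    return float(np.sqrt(np.mean((np.asarray(est) - truth) ** 2)))

rng = np.random.default_rng(1)
truth = np.cumsum(rng.normal(0.0, 0.05, 200))   # slowly drifting SoC-like state
zs = truth + rng.normal(0.0, 0.5, 200)          # noisy measurements

# bare-bones GA: evaluate, keep the fittest, mutate around it
pop = rng.uniform(1e-4, 1.0, size=(20, 2))      # population of (q, r) pairs
for _ in range(30):
    scores = [kf_rms(q, r, zs, truth) for q, r in pop]
    best = pop[int(np.argmin(scores))]
    pop = np.abs(best + rng.normal(0.0, 0.05, size=(20, 2))) + 1e-6
    pop[0] = best                                # elitism: best survives
q_opt, r_opt = best
tuned_rms = kf_rms(q_opt, r_opt, zs, truth)
raw_rms = float(np.sqrt(np.mean((zs - truth) ** 2)))
```

    Because elitism carries the best (Q, R) pair forward, the tuned filter's RMS error never worsens across generations and ends up well below the raw measurement error.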

  1. Tuning of Kalman Filter Parameters via Genetic Algorithm for State-of-Charge Estimation in Battery Management System

    PubMed Central

    Ting, T. O.; Lim, Eng Gee

    2014-01-01

    In this work, a state-space battery model is derived mathematically to estimate the state-of-charge (SoC) of a battery system. Subsequently, a Kalman filter (KF) is applied to predict the dynamical behavior of the battery model. Results show an accurate prediction, as the accumulated root-mean-square (RMS) error is very small. From this work, it is found that different sets of Q and R values (the KF's parameters) can be applied for better performance and hence lower RMS error. This is the motivation for the application of a metaheuristic algorithm. Hence, the result is further improved by applying a genetic algorithm (GA) to tune the Q and R parameters of the KF. In an online application, a GA can be applied to obtain the optimal parameters of the KF before its application to a real plant (system). This simply means that the instantaneous response of the KF is not affected by the time-consuming GA, as this approach is applied only once to obtain the optimal parameters. The relevant workable MATLAB source codes are given in the appendix to ease future work and analysis in this area. PMID:25162041

  2. Sound source localization method in an environment with flow based on Amiet-IMACS

    NASA Astrophysics Data System (ADS)

    Wei, Long; Li, Min; Qin, Sheng; Fu, Qiang; Yang, Debin

    2017-05-01

    A sound source localization method is proposed to localize and analyze sound sources in an environment with airflow. It combines the improved mapping of acoustic correlated sources (IMACS) method with Amiet's method, and is called Amiet-IMACS. It can localize uncorrelated and correlated sound sources in airflow. To implement this approach, Amiet's method is used to correct the sound propagation path in 3D, which improves the accuracy of the array manifold matrix and decreases the position error of the localized source. Then, the mapping of acoustic correlated sources (MACS) method, a high-resolution sound source localization algorithm, is improved by self-adjusting the constraint parameter at each iteration to increase convergence speed. A sound source localization experiment using a pair of loudspeakers in an anechoic wind tunnel under different flow speeds was conducted. The experiment demonstrates the advantage of Amiet-IMACS in localizing the sound source position more accurately than IMACS alone in an environment with flow. Moreover, the aerodynamic noise produced by a NASA EPPLER 862 STRUT airfoil model in airflow with a velocity of 80 m/s is localized using the proposed method, further proving its effectiveness in a flow environment. Finally, the relationship between this airfoil model's source position and frequency, along with its generation mechanism, is determined and interpreted.

  3. Mineralogies and source regions of near-Earth asteroids

    NASA Astrophysics Data System (ADS)

    Dunn, Tasha L.; Burbine, Thomas H.; Bottke, William F.; Clark, John P.

    2013-01-01

    Near-Earth Asteroids (NEAs) offer insight into a size range of objects that are not easily observed in the main asteroid belt. Previous studies on the diversity of the NEA population have relied primarily on modeling and statistical analysis to determine asteroid compositions. Olivine and pyroxene, the dominant minerals in most asteroids, have characteristic absorption features in the visible and near-infrared (VISNIR) wavelengths that can be used to determine their compositions and abundances. However, formulas previously used for deriving compositions do not work very well for ordinary chondrite assemblages. Because two-thirds of NEAs have ordinary chondrite-like spectral parameters, it is essential to determine accurate mineralogies. Here we determine the band area ratios and Band I centers of 72 NEAs with visible and near-infrared spectra and use new calibrations to derive the mineralogies of 47 of these NEAs with ordinary chondrite-like spectral parameters. Our results indicate that the majority of NEAs have LL-chondrite mineralogies. This is consistent with results from previous studies but continues to be in conflict with the population of recovered ordinary chondrites, of which H chondrites are the most abundant. To look for potential correlations between asteroid size, composition, and source region, we use a dynamical model to determine the most probable source region of each NEA. Model results indicate that NEAs with LL chondrite mineralogies appear to be preferentially derived from the ν6 secular resonance. This supports the hypothesis that the Flora family, which lies near the ν6 resonance, is the source of the LL chondrites. With the exception of basaltic achondrites, NEAs with non-chondrite spectral parameters are slightly less likely to be derived from the ν6 resonance than NEAs with chondrite-like mineralogies.
The population of NEAs with H, L, and LL chondrite mineralogies does not appear to be influenced by size, which would suggest that ordinary chondrites are not preferentially sourced from meter-sized objects due to Yarkovsky effect.

  4. Using Satellite Observations to Evaluate the AeroCOM Volcanic Emissions Inventory and the Dispersal of Volcanic SO2 Clouds in MERRA

    NASA Technical Reports Server (NTRS)

    Hughes, Eric J.; Krotkov, Nickolay; da Silva, Arlindo; Colarco, Peter

    2015-01-01

    Simulation of volcanic emissions in climate models requires information that describes the eruption of the emissions into the atmosphere. While the total amount of gases and aerosols released from a volcanic eruption can be readily estimated from satellite observations, information about the source parameters, like injection altitude, eruption time and duration, is often not directly known. The AeroCOM volcanic emissions inventory provides estimates of eruption source parameters and has been used to initialize volcanic emissions in reanalysis projects like MERRA. The AeroCOM inventory provides an eruption's daily SO2 flux and plume-top altitude, yet an eruption can be very short lived, lasting only a few hours, and can emit clouds at multiple altitudes. Case studies comparing the satellite-observed dispersal of volcanic SO2 clouds to simulations in MERRA have shown mixed results. Some cases, such as Okmok (2008), show good agreement with observations; for other eruptions, such as Sierra Negra (2005), the observed initial SO2 mass is half of that in the simulations; in still other cases, such as Soufriere Hills (2006), the initial SO2 amount agrees with the observations but the dispersal rates differ markedly. In the aviation hazards community, deriving accurate source terms is crucial for monitoring and short-term (24-h) forecasting of volcanic clouds. Back-trajectory methods have been developed which use satellite observations and transport models to estimate the injection altitude, eruption time, and eruption duration of observed volcanic clouds. These methods can provide eruption timing estimates at a 2-hour temporal resolution and estimate the altitude and depth of a volcanic cloud. To better understand the differences between MERRA simulations and volcanic SO2 observations, back-trajectory methods are used to estimate the source term parameters for a few volcanic eruptions, and the results are compared to the corresponding entries in the AeroCOM volcanic emission inventory.
The nature of these mixed results is discussed with respect to the source term estimates.

  5. On concentrated solute sources in faulted aquifers

    NASA Astrophysics Data System (ADS)

    Robinson, N. I.; Werner, A. D.

    2017-06-01

    Finite aperture faults and fractures within aquifers (collectively called 'faults' hereafter) theoretically enable flowing water to move through them but with refractive displacement, both on entry and exit. When a 2D or 3D point source of solute concentration is located upstream of the fault, the plume emanating from the source, relative to one in a fault-free aquifer, is affected by the fault both before and after it. Previous attempts to analyze this situation using numerical methods faced challenges in overcoming the computational constraints that accompany the requisite fine mesh resolutions. To address these, an analytical solution of this problem is developed and interrogated using statistical evaluation of solute distributions. The method of solution is based on novel spatial integral representations of the source with axes rotated from the direction of uniform water flow and aligned with fault faces and normals. Numerical exemplification is given for the case of a 2D steady state source, using various parameter combinations. Statistical attributes of solute plumes show the relative impact of parameters, the most important being fault rotation, aperture and conductivity ratio. New general observations of fault-affected solute plumes are offered, including: (a) the plume's mode (i.e. peak concentration) on the downstream face of the fault is less displaced than the refracted groundwater flowline, but at some distance downstream of the fault these realign; (b) porosities have no influence in steady state calculations; (c) previous numerical modeling results of barrier faults show significant boundary effects. The current solution adds to available benchmark problems involving fractures, faults and layered aquifers, in which grid resolution effects are often barriers to accurate simulation.
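
    The refractive displacement described above follows the standard tangent law for flowlines crossing a hydraulic conductivity contrast; a small sketch (the angle and conductivity ratio below are hypothetical, not the paper's parameter sets):

```python
import math

def refracted_angle(theta_in_deg, k_in, k_out):
    # Tangent law for a groundwater flowline crossing an interface:
    # tan(theta_out) / tan(theta_in) = k_out / k_in,
    # with angles measured from the interface normal.
    t = math.tan(math.radians(theta_in_deg)) * k_out / k_in
    return math.degrees(math.atan(t))

# flowline at 30 degrees from the fault normal entering a fault
# 10x more conductive than the aquifer: it bends toward the fault plane
theta_out = refracted_angle(30.0, 1.0, 10.0)  # → ≈80.2 degrees
```

    The strong bend for even a modest conductivity ratio illustrates why the plume's path, and hence its mode on the downstream face, is so sensitive to fault rotation and the conductivity ratio.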

  6. Impact of Monoenergetic Photon Sources on Nonproliferation Applications Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Geddes, Cameron; Ludewigt, Bernhard; Valentine, John

    Near-monoenergetic photon sources (MPSs) have the potential to improve sensitivity at greatly reduced dose in existing applications and enable new capabilities in other applications, particularly where passive signatures do not penetrate or are insufficiently accurate. MPS advantages include the ability to select energy, energy spread, flux, and pulse structures to deliver only the photons needed for the application, while suppressing extraneous dose and background. Some MPSs also offer narrow angular divergence photon beams which can target dose and/or mitigate scattering contributions to image contrast degradation. Current bremsstrahlung photon sources (e.g., linacs and betatrons) produce photons over a broad range of energies, thus delivering unnecessary dose that in some cases also interferes with the signature to be detected and/or restricts operations. Current sources must be collimated (reducing flux) to generate narrow divergence beams. While MPSs can in principle resolve these issues, they remain at relatively low TRL status. Candidate MPS technologies for nonproliferation applications are now being developed, each of which has different properties (e.g. broad vs. narrow angular divergence). Within each technology, source parameters trade off against one another (e.g. flux vs. energy spread), representing a large operation space. This report describes a broad survey of potential applications, identification of high priority applications, and detailed simulations addressing those priority applications. Requirements were derived for each application, and analysis and simulations were conducted to define MPS parameters that deliver benefit. The results can inform targeting of MPS development to deliver strong impact relative to current systems.

  7. Accuracy of color prediction of anthraquinone dyes in methanol solution estimated from first principle quantum chemistry computations.

    PubMed

    Cysewski, Piotr; Jeliński, Tomasz

    2013-10-01

    The electronic spectra of four different anthraquinones (1,2-dihydroxyanthraquinone, 1-aminoanthraquinone, 2-aminoanthraquinone and 1-amino-2-methylanthraquinone) in methanol solution were measured and used as reference data for theoretical color prediction. The visible part of the spectrum was modeled within the TD-DFT framework with a broad range of DFT functionals. The convoluted theoretical spectra were validated against experimental data by a direct color comparison in terms of the CIE XYZ and CIE Lab tristimulus color models. It was found that the 6-31G** basis set provides the most accurate color prediction, and there is no need to extend the basis set since doing so does not improve the prediction of color. Although different functionals were found to give the most accurate color prediction for different anthraquinones, it is possible to apply the same DFT approach to the whole set of analyzed dyes. Three functionals in particular seem valuable, namely mPW1LYP, B1LYP and PBE0, owing to their very similar spectral predictions. The major source of discrepancies between theoretical and experimental spectra comes from the L values, representing lightness, and the a parameter, depicting the position on the green→magenta axis. Fortunately, the agreement between the computed and observed blue→yellow axis (parameter b) is very precise for the studied anthraquinone dyes in methanol solution. Despite these shortcomings, color prediction from first-principle quantum chemistry computations can yield quite satisfactory results, expressed in terms of color space parameters.
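
    The final comparison step can be sketched with the simplest color metric, the CIE76 ΔE*ab distance between two Lab triples (the coordinates below are hypothetical, not the measured dye values):

```python
import math

def delta_e_ab(lab1, lab2):
    # CIE76 color difference: Euclidean distance in CIE Lab space
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

measured = (52.0, 41.0, 30.0)    # hypothetical L*, a*, b* of a dye
predicted = (54.0, 39.5, 30.5)   # hypothetical TD-DFT-derived color
de = delta_e_ab(measured, predicted)  # → ≈2.55
```

    Since ΔE*ab values in the low single digits approach the limit of what an observer distinguishes, mismatches concentrated in L* and a*, as reported above, dominate the perceived error.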

  8. Toward real-time regional earthquake simulation of Taiwan earthquakes

    NASA Astrophysics Data System (ADS)

    Lee, S.; Liu, Q.; Tromp, J.; Komatitsch, D.; Liang, W.; Huang, B.

    2013-12-01

    We developed a Real-time Online earthquake Simulation system (ROS) to simulate regional earthquakes in Taiwan. The ROS uses a centroid moment tensor solution of seismic events from a Real-time Moment Tensor monitoring system (RMT), which provides all the point source parameters including the event origin time, hypocentral location, moment magnitude and focal mechanism within 2 minutes after the occurrence of an earthquake. Then, all of the source parameters are automatically forwarded to the ROS to perform an earthquake simulation, which is based on a spectral-element method (SEM). We have improved SEM mesh quality by introducing a thin high-resolution mesh layer near the surface to accommodate steep and rapidly varying topography. The mesh for the shallow sedimentary basin is adjusted to reflect its complex geometry and sharp lateral velocity contrasts. The grid resolution at the surface is about 545 m, which is sufficient to resolve topography and tomography data for simulations accurate up to 1.0 Hz. The ROS is also an infrastructural service, making online earthquake simulation feasible. Users can conduct their own earthquake simulation by providing a set of source parameters through the ROS webpage. For visualization, a ShakeMovie and ShakeMap are produced during the simulation. The time needed for one event is roughly 3 minutes for a 70 sec ground motion simulation. The ROS is operated online at the Institute of Earth Sciences, Academia Sinica (http://ros.earth.sinica.edu.tw/). Our long-term goal for the ROS system is to contribute to public earth science outreach and to realize seismic ground motion prediction in real-time.

  9. Consider the source: Children link the accuracy of text-based sources to the accuracy of the author.

    PubMed

    Vanderbilt, Kimberly E; Ochoa, Karlena D; Heilbrun, Jayd

    2018-05-06

    The present research investigated whether young children link the accuracy of text-based information to the accuracy of its author. Across three experiments, three- and four-year-olds (N = 231) received information about object labels from accurate and inaccurate sources who provided information both in text and verbally. Of primary interest was whether young children would selectively rely on information provided by more accurate sources, regardless of the form in which the information was communicated. Experiment 1 tested children's trust in text-based information (e.g., books) written by an author with a history of either accurate or inaccurate verbal testimony and found that children showed greater trust in books written by accurate authors. Experiment 2 replicated the findings of Experiment 1 and extended them by showing that children's selective trust in more accurate text-based sources was not dependent on experience trusting or distrusting the author's verbal testimony. Experiment 3 investigated this understanding in reverse by testing children's trust in verbal testimony communicated by an individual who had authored either accurate or inaccurate text-based information. Experiment 3 revealed that children showed greater trust in individuals who had authored accurate rather than inaccurate books. Experiment 3 also demonstrated that children used the accuracy of text-based sources to make inferences about the mental states of the authors. Taken together, these results suggest children do indeed link the reliability of text-based sources to the reliability of the author. Statement of Contribution Existing knowledge Children use sources' prior accuracy to predict future accuracy in face-to-face verbal interactions. Children who are just learning to read show increased trust in text-based (vs. verbal) information. It is unknown whether children consider authors' prior accuracy when judging the accuracy of text-based information.
New knowledge added by this article Preschool children track sources' accuracy across communication mediums - from verbal to text-based modalities and vice versa. Children link the reliability of text-based sources to the reliability of the author. © 2018 The British Psychological Society.

  10. SU-C-BRC-04: Efficient Dose Calculation Algorithm for FFF IMRT with a Simplified Bivariate Gaussian Source Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, F; Park, J; Barraclough, B

    2016-06-15

    Purpose: To develop an efficient and accurate independent dose calculation algorithm with a simplified analytical source model for the quality assurance and safe delivery of Flattening Filter Free (FFF)-IMRT on an Elekta Versa HD. Methods: The source model consisted of a point source and a 2D bivariate Gaussian source, respectively modeling the primary photons and the combined effect of head scatter, monitor chamber backscatter and collimator exchange effect. The in-air fluence was firstly calculated by back-projecting the edges of beam defining devices onto the source plane and integrating the visible source distribution. The effect of the rounded MLC leaf end, tongue-and-groove and interleaf transmission was taken into account in the back-projection. The in-air fluence was then modified with a fourth degree polynomial modeling the cone-shaped dose distribution of FFF beams. Planar dose distribution was obtained by convolving the in-air fluence with a dose deposition kernel (DDK) consisting of the sum of three 2D Gaussian functions. The parameters of the source model and the DDK were commissioned using measured in-air output factors (Sc) and cross beam profiles, respectively. A novel method was used to eliminate the volume averaging effect of ion chambers in determining the DDK. Planar dose distributions of five head-and-neck FFF-IMRT plans were calculated and compared against measurements performed with a 2D diode array (MapCHECK™) to validate the accuracy of the algorithm. Results: The proposed source model predicted Sc for both 6MV and 10MV with an accuracy better than 0.1%. With a stringent gamma criterion (2%/2mm/local difference), the passing rate of the FFF-IMRT dose calculation was 97.2±2.6%. Conclusion: The removal of the flattening filter represents a simplification of the head structure which allows the use of a simpler source model for very accurate dose calculation.
The proposed algorithm offers an effective way to ensure the safe delivery of FFF-IMRT.
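The convolution step described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the kernel weights, sigma values, and grid spacing are hypothetical placeholders rather than commissioned values.

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian2d(x, y, sigma):
    """Normalized 2D isotropic Gaussian evaluated on a grid."""
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)

def dose_deposition_kernel(grid_mm, weights, sigmas_mm):
    """DDK as a weighted sum of three 2D Gaussians (weights and sigmas are
    hypothetical stand-ins for commissioned parameters)."""
    x, y = np.meshgrid(grid_mm, grid_mm)
    k = sum(w * gaussian2d(x, y, s) for w, s in zip(weights, sigmas_mm))
    return k / k.sum()  # normalize so convolution preserves total fluence

# Toy in-air fluence: a 10x10 cm open field on a 1 mm grid
grid = np.arange(-100, 101, 1.0)
fluence = ((np.abs(grid)[:, None] <= 50) & (np.abs(grid)[None, :] <= 50)).astype(float)

ddk = dose_deposition_kernel(grid, weights=(0.8, 0.15, 0.05), sigmas_mm=(1.0, 5.0, 20.0))
planar_dose = fftconvolve(fluence, ddk, mode="same")
```

In practice the weights and sigmas would be fitted to measured cross-beam profiles during commissioning.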

  11. Multi-response calibration of a conceptual hydrological model in the semiarid catchment of Wadi al Arab, Jordan

    NASA Astrophysics Data System (ADS)

    Rödiger, T.; Geyer, S.; Mallast, U.; Merz, R.; Krause, P.; Fischer, C.; Siebert, C.

    2014-02-01

A key factor for sustainable management of groundwater systems is the accurate estimation of groundwater recharge. Hydrological models are common tools for such estimations and widely used. As such models need to be calibrated against measured values, the absence of adequate data can be problematic. We present a nested multi-response calibration approach for a semi-distributed hydrological model in the semi-arid catchment of Wadi al Arab in Jordan, with sparsely available runoff data. The basic idea of the calibration approach is to use diverse observations in a nested strategy, in which sub-parts of the model are calibrated to various observation data types in a consecutive manner. First, the available data sources have to be screened for their information content on processes, e.g. whether a data source contains information on mean values or on spatial or temporal variability, and whether it applies to the entire catchment or only to sub-catchments. In a second step, the information content has to be mapped to the relevant model components, which represent these processes. Then the data source is used to calibrate the respective subset of model parameters, while the remaining model parameters remain unchanged. This mapping is repeated for the other available data sources. In this study the gauged spring discharge (GSD) method, flash flood observations and data from the chloride mass balance (CMB) are used to derive plausible parameter ranges for the conceptual hydrological model J2000g. The water table fluctuation (WTF) method is used to validate the model. Results are compared with a benchmark simulation that uses a priori parameter values from the literature. The estimated recharge rates of the calibrated model deviate less than ±10% from the estimates derived from the WTF method. Larger differences are visible in the years with high uncertainties in rainfall input data.
The performance of the calibrated model during validation is better than that of the model with only a priori parameter values. The model with a priori parameter values from the literature tends to overestimate recharge rates by up to 30%, particularly in the wet winter of 1991/1992. An overestimation of groundwater recharge, and hence of available water resources, clearly endangers reliable water resource management in water-scarce regions. The proposed nested multi-response approach may help to better predict water resources despite data scarcity.
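The nested strategy of calibrating one parameter subset per data source, while holding the rest fixed, can be illustrated with a toy bucket model. The model, parameters, and synthetic observations below are hypothetical stand-ins for J2000g and the CMB/GSD data, not the study's setup.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
precip = rng.gamma(2.0, 5.0, size=365)       # synthetic daily rainfall (mm)

def recharge(precip, frac):
    """Rainfall above a fixed threshold becomes recharge, scaled by frac."""
    return np.maximum(precip - 3.0, 0.0) * frac

def spring_q(rech, k):
    """Linear reservoir: daily discharge Q[t] = k * storage."""
    q, s = np.empty_like(rech), 0.0
    for t, r in enumerate(rech):
        s += r
        q[t] = k * s
        s -= q[t]
    return q

# Synthetic "observations" generated from known true parameters
true_frac, true_k = 0.25, 0.05
obs_cmb_recharge = recharge(precip, true_frac).sum()        # CMB: annual recharge
obs_spring = spring_q(recharge(precip, true_frac), true_k)  # GSD: discharge series

# Step 1: calibrate the recharge fraction against the CMB estimate only
frac_hat = minimize_scalar(
    lambda f: (recharge(precip, f).sum() - obs_cmb_recharge) ** 2,
    bounds=(0.01, 1.0), method="bounded").x

# Step 2: with frac fixed, calibrate the recession constant against spring discharge
k_hat = minimize_scalar(
    lambda k: np.mean((spring_q(recharge(precip, frac_hat), k) - obs_spring) ** 2),
    bounds=(0.001, 0.5), method="bounded").x
```

Each data source constrains only the model component it carries information about, which is the essence of the nested approach.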

  12. Comparison of Model Prediction with Measurements of Galactic Background Noise at L-Band

    NASA Technical Reports Server (NTRS)

    LeVine, David M.; Abraham, Saji; Kerr, Yann H.; Wilson, Willam J.; Skou, Niels; Sobjaerg, S.

    2004-01-01

The spectral window at L-band (1.413 GHz) is important for passive remote sensing of surface parameters such as soil moisture and sea surface salinity that are needed to understand the hydrological cycle and ocean circulation. Radiation from celestial (mostly galactic) sources is strong in this window and an accurate accounting for this background radiation is often needed for calibration. Modern radio astronomy measurements in this spectral window have been converted into a brightness temperature map of the celestial sky at L-band suitable for use in correcting passive measurements. This paper presents a comparison of the background radiation predicted by this map with measurements made with several modern L-band remote sensing radiometers. The agreement validates the map and the procedure for locating the source of down-welling radiation.

  13. Nine-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Final Maps and Results

    NASA Technical Reports Server (NTRS)

Bennett, C. L.; Larson, D.; Weiland, J. L.; Jarosik, N.; Hinshaw, G.; Odegard, N.; Smith, K. M.; Hill, R. S.; Gold, B.; Halpern, M.; et al.

    2013-01-01

We present the final nine-year maps and basic results from the Wilkinson Microwave Anisotropy Probe (WMAP) mission. The full nine-year analysis of the time-ordered data provides updated characterizations and calibrations of the experiment. We also provide new nine-year full sky temperature maps that were processed to reduce the asymmetry of the effective beams. Temperature and polarization sky maps are examined to separate cosmic microwave background (CMB) anisotropy from foreground emission, and both types of signals are analyzed in detail. We provide new point source catalogs as well as new diffuse and point source foreground masks. An updated template-removal process is used for cosmological analysis; new foreground fits are performed, and new foreground-reduced maps are presented. We now implement an optimal C(exp -1) weighting to compute the temperature angular power spectrum. The WMAP mission has resulted in a highly constrained Lambda-CDM cosmological model with precise and accurate parameters in agreement with a host of other cosmological measurements. When WMAP data are combined with finer scale CMB, baryon acoustic oscillation, and Hubble constant measurements, we find that big bang nucleosynthesis is well supported and there is no compelling evidence for a non-standard number of neutrino species (N(sub eff) = 3.84 +/- 0.40). The model fit also implies that the age of the universe is t(sub 0) = 13.772 +/- 0.059 Gyr, and the fit Hubble constant is H(sub 0) = 69.32 +/- 0.80 km/s/Mpc. Inflation is also supported: the fluctuations are adiabatic, with Gaussian random phases; the detection of a deviation of the scalar spectral index from unity, reported earlier by the WMAP team, now has high statistical significance (n(sub s) = 0.9608 +/- 0.0080); and the universe is close to flat/Euclidean (Omega(sub k) = -0.0027 +0.0039/-0.0038).
Overall, the WMAP mission has resulted in a reduction of the cosmological parameter volume by a factor of 68,000 for the standard six-parameter Lambda-CDM model, based on CMB data alone. For a model including tensors, the allowed seven-parameter volume has been reduced by a factor of 117,000. Other cosmological observations are in accord with the CMB predictions, and the combined data reduce the cosmological parameter volume even further. With no significant anomalies and an adequate goodness of fit, the inflationary flat Lambda-CDM model and its precise and accurate parameters rooted in WMAP data stand as the standard model of cosmology.

  14. Single frequency stable VCSEL as a compact source for interferometry and vibrometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dudzik, Grzegorz; Rzepka, Janusz

    2010-05-28

Developing an innovative PS-DAVLL (Polarization Switching DAVLL) method of frequency stabilization, which uses a ferroelectric liquid crystal cell as a quarter-wave plate, a rubidium cell and a purpose-built ultra-stable current source, allowed us to obtain a frequency stability of 10^-9 (frequency reproducibility of 1.2·10^-8) and a reduction in the external dimensions of the laser source. The total power consumption is only 1.5 W. Because the stabilization method used in the frequency standard is insensitive to vibration, a semiconductor laser interferometer was built with a measuring range of over one meter, which can also be used in industry for the accurate measurement of displacements with an accuracy of 1 μm/m. Measurements of VCSEL laser parameters such as the narrow emission line (Δν_FWHM = 70 MHz) typical of this laser type and the stability of its linear polarization are important from the standpoint of its use in laser interferometry or vibrometry. The undoubted advantage of the constructed laser source is the lack of mode hopping during continuous operation of the VCSEL.

  15. Advanced Unstructured Grid Generation for Complex Aerodynamic Applications

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    2008-01-01

A new approach for distribution of grid points on the surface and in the volume has been developed and implemented in the NASA unstructured grid generation code VGRID. In addition to the point and line sources of prior work, the new approach utilizes surface and volume sources for automatic curvature-based grid sizing and convenient point distribution in the volume. A new exponential growth function produces smoother and more efficient grids and provides superior control over the distribution of grid points in the field. All types of sources support anisotropic grid stretching, which not only improves the grid economy but also provides more accurate solutions for certain aerodynamic applications. The new approach does not require a three-dimensional background grid as in the previous methods. Instead, it makes use of an efficient bounding-box auxiliary medium for storing grid parameters defined by surface sources. The new approach is less memory-intensive and more efficient computationally. The grids generated with the new method either eliminate the need for adaptive grid refinement for certain classes of problems or provide high-quality initial grids that would enhance the performance of many adaptation methods.
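The idea of an exponential (geometric) growth function for grid spacing can be sketched generically in one dimension. The function below is an assumption for illustration only, not VGRID's actual growth function.

```python
import numpy as np

def stretched_points(d0, growth, length):
    """Points along [0, length] whose spacing grows geometrically:
    d_i = d0 * growth**i, with the final point snapped to the far end."""
    pts, x, d = [0.0], 0.0, d0
    while x + d < length:
        x += d
        pts.append(x)
        d *= growth
    pts.append(length)
    return np.array(pts)

# Fine spacing near a source at x = 0, growing by 20% per step away from it
pts = stretched_points(d0=0.1, growth=1.2, length=10.0)
spacings = np.diff(pts)
```

A growth rate near 1 clusters points close to the source; larger rates coarsen the far field faster, which is the lever for grid economy.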

  16. Advanced Unstructured Grid Generation for Complex Aerodynamic Applications

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar

    2010-01-01

A new approach for distribution of grid points on the surface and in the volume has been developed. In addition to the point and line sources of prior work, the new approach utilizes surface and volume sources for automatic curvature-based grid sizing and convenient point distribution in the volume. A new exponential growth function produces smoother and more efficient grids and provides superior control over the distribution of grid points in the field. All types of sources support anisotropic grid stretching, which not only improves the grid economy but also provides more accurate solutions for certain aerodynamic applications. The new approach does not require a three-dimensional background grid as in the previous methods. Instead, it makes use of an efficient bounding-box auxiliary medium for storing grid parameters defined by surface sources. The new approach is less memory-intensive and more efficient computationally. The grids generated with the new method either eliminate the need for adaptive grid refinement for certain classes of problems or provide high-quality initial grids that would enhance the performance of many adaptation methods.

  17. Ionospheric current source modeling and global geomagnetic induction using ground geomagnetic observatory data

    USGS Publications Warehouse

    Sun, Jin; Kelbert, Anna; Egbert, G.D.

    2015-01-01

    Long-period global-scale electromagnetic induction studies of deep Earth conductivity are based almost exclusively on magnetovariational methods and require accurate models of external source spatial structure. We describe approaches to inverting for both the external sources and three-dimensional (3-D) conductivity variations and apply these methods to long-period (T≥1.2 days) geomagnetic observatory data. Our scheme involves three steps: (1) Observatory data from 60 years (only partly overlapping and with many large gaps) are reduced and merged into dominant spatial modes using a scheme based on frequency domain principal components. (2) Resulting modes are inverted for corresponding external source spatial structure, using a simplified conductivity model with radial variations overlain by a two-dimensional thin sheet. The source inversion is regularized using a physically based source covariance, generated through superposition of correlated tilted zonal (quasi-dipole) current loops, representing ionospheric source complexity smoothed by Earth rotation. Free parameters in the source covariance model are tuned by a leave-one-out cross-validation scheme. (3) The estimated data modes are inverted for 3-D Earth conductivity, assuming the source excitation estimated in step 2. Together, these developments constitute key components in a practical scheme for simultaneous inversion of the catalogue of historical and modern observatory data for external source spatial structure and 3-D Earth conductivity.
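Step (1), reducing multi-station series to dominant spatial modes by frequency-domain principal components, can be sketched with synthetic data. The station count, common source pattern, and noise level below are hypothetical, and real observatory data would additionally need gap handling.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sta, n_t = 12, 2048
t = np.arange(n_t)

# Synthetic observatory series: one shared source pattern plus sensor noise
pattern = rng.standard_normal(n_sta)     # spatial footprint of the common source
driver = np.cos(2 * np.pi * t / 64)      # common time variation
data = np.outer(pattern, driver) + 0.1 * rng.standard_normal((n_sta, n_t))

# Frequency-domain principal components: FFT each station, then SVD across stations
spectra = np.fft.rfft(data, axis=1)
U, s, Vh = np.linalg.svd(spectra, full_matrices=False)
dominant_mode = U[:, 0]                  # leading spatial mode across stations
```

The leading left-singular vector recovers the spatial footprint of the dominant source, and the singular-value spectrum indicates how many modes carry signal.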

  18. Improved mapping of radio sources from VLBI data by least-square fit

    NASA Technical Reports Server (NTRS)

    Rodemich, E. R.

    1985-01-01

A method is described for producing improved mapping of radio sources from Very Long Baseline Interferometry (VLBI) data. The method described is more direct than existing Fourier methods, is often more accurate, and runs at least as fast. The visibility data are modeled here, as in existing methods, as a function of the unknown brightness distribution and the unknown antenna gains and phases. These unknowns are chosen so that the resulting function values are as near as possible to the observed values. If the root-mean-square deviation is used to measure the closeness of this fit to the observed values, one is led to the problem of minimizing a certain function of all the unknown parameters. This minimization problem cannot be solved directly, but it can be attacked by iterative methods which are shown to converge automatically to the minimum with no user intervention. The resulting brightness distribution will furnish the best fit to the data among all brightness distributions of given resolution.
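The core idea, choosing gains and brightness so that model visibilities fit the observations by iterative minimization, can be sketched in the simplest possible setting: real gains and a single point source. This is a hypothetical toy, far simpler than full VLBI mapping.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)
n_ant = 6
true_g = 1.0 + 0.1 * rng.standard_normal(n_ant)   # real antenna gains
true_flux = 2.5                                    # point-source brightness
i_idx, j_idx = np.triu_indices(n_ant, k=1)         # all antenna baselines

# Synthetic observed visibilities with measurement noise
vis_obs = true_g[i_idx] * true_g[j_idx] * true_flux + 0.01 * rng.standard_normal(i_idx.size)

def residuals(p):
    """Misfit between model visibilities g_i * g_j * S and the data.
    Note: an overall gain/flux rescaling is degenerate; only the products
    g_i * g_j * S are constrained by the data."""
    g, flux = p[:n_ant], p[n_ant]
    return g[i_idx] * g[j_idx] * flux - vis_obs

# Iterative minimization from a neutral starting point, no user intervention
sol = least_squares(residuals, x0=np.r_[np.ones(n_ant), 1.0])
```

The fitted model reproduces the observed visibilities to within the noise even though the individual gains are only determined up to the rescaling degeneracy.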

  19. Spectral Indices of Faint Radio Sources

    NASA Astrophysics Data System (ADS)

    Gim, Hansung B.; Hales, Christopher A.; Momjian, Emmanuel; Yun, Min Su

    2015-01-01

The significant improvement in bandwidth and the resultant sensitivity offered by the Karl G. Jansky Very Large Array (VLA) allows us to explore the faint radio source population. Through the study of the radio continuum we can explore the spectral indices of these radio sources. Robust radio spectral indices are needed for accurate k-corrections, for example in the study of the radio-far-infrared (FIR) correlation. We present an analysis of measuring spectral indices using two different approaches. In the first, we use the standard wideband imaging algorithm in the data reduction package CASA. In the second, we use a traditional approach of imaging narrower bandwidths to derive the spectral indices. For these, we simulated data to match the observing parameter space of the CHILES Con Pol survey (Hales et al. 2014). We investigate the accuracy and precision of spectral index measurements as a function of signal-to-noise ratio, and explore the requirements to reliably probe possible evolution of the radio-FIR correlation in CHILES Con Pol.
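A two-point spectral index, the quantity underlying such k-corrections, follows directly from flux densities at two frequencies. The flux values below and the S ∝ ν^α sign convention are illustrative assumptions, not survey measurements.

```python
import numpy as np

def spectral_index(s1, nu1, s2, nu2):
    """Two-point spectral index alpha, using the S ∝ nu**alpha convention."""
    return np.log(s2 / s1) / np.log(nu2 / nu1)

# Hypothetical source measured at 1.0 and 1.4 GHz (flux densities in Jy)
alpha = spectral_index(s1=1.00, nu1=1.0e9, s2=0.78, nu2=1.4e9)

# Corresponding k-correction factor at redshift z, same convention
z = 1.0
k_corr = (1 + z) ** (-(1 + alpha))
```

A steep (more negative) index changes the k-correction substantially, which is why robust indices matter for the radio-FIR correlation.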

  20. Acoustic emission source localization based on distance domain signal representation

    NASA Astrophysics Data System (ADS)

    Gawronski, M.; Grabowski, K.; Russek, P.; Staszewski, W. J.; Uhl, T.; Packo, P.

    2016-04-01

Acoustic emission (AE) is a vital non-destructive testing technique and is widely used in industry for damage detection, localisation and characterization. The latter two aspects are particularly challenging, as AE data are typically noisy. What is more, the elastic waves generated by an AE event propagate through a structural path and are significantly distorted. This effect is particularly prominent in thin elastic plates, where the dispersion phenomenon results in severe localisation and characterization issues. Traditional Time Difference of Arrival (TDOA) localisation methods typically fail when signals are highly dispersive. Hence, algorithms capable of dispersion compensation are sought. This paper presents a method based on the Time-Distance Domain Transform for accurate AE event localisation. The source localisation is found through a minimization problem. The proposed technique focuses on transforming the time signal to the distance-domain response that would be recorded at the source. Only basic elastic material properties and the plate thickness are used in the approach, avoiding arbitrary parameter tuning.
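The localisation-by-minimization idea can be sketched in the simplest non-dispersive setting, with a constant wave speed and a hypothetical sensor layout. The paper's method additionally compensates dispersion via the Time-Distance Domain Transform, which this sketch omits.

```python
import numpy as np
from scipy.optimize import minimize

c = 5.0   # assumed constant wave speed (mm/us); a non-dispersive simplification
sensors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
true_src = np.array([30.0, 70.0])   # hypothetical AE event location (mm)

# Synthetic arrival-time differences relative to sensor 0
dist = np.linalg.norm(sensors - true_src, axis=1)
tdoa_obs = (dist[1:] - dist[0]) / c

def misfit(xy):
    """Squared mismatch between predicted and observed time differences."""
    d = np.linalg.norm(sensors - xy, axis=1)
    return np.sum(((d[1:] - d[0]) / c - tdoa_obs) ** 2)

# Source location recovered by minimizing the misfit from a central guess
sol = minimize(misfit, x0=np.array([50.0, 50.0]), method="Nelder-Mead")
```

With dispersive plate waves the constant speed c is no longer valid, which is exactly the failure mode that motivates the distance-domain formulation.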

  1. FRAGS: estimation of coding sequence substitution rates from fragmentary data

    PubMed Central

    Swart, Estienne C; Hide, Winston A; Seoighe, Cathal

    2004-01-01

Background Rates of substitution in protein-coding sequences can provide important insights into evolutionary processes that are of biomedical and theoretical interest. Increased availability of coding sequence data has enabled researchers to estimate more accurately the coding sequence divergence of pairs of organisms. However, the use of different data sources, alignment protocols and methods to estimate substitution rates leads to widely varying estimates of key parameters that define the coding sequence divergence of orthologous genes. Although complete genome sequence data are not available for all organisms, fragmentary sequence data can provide accurate estimates of substitution rates provided that an appropriate and consistent methodology is used and that differences in the estimates obtainable from different data sources are taken into account. Results We have developed FRAGS, an application framework that uses existing, freely available software components to construct in-frame alignments and estimate coding substitution rates from fragmentary sequence data. Coding sequence substitution estimates for human and chimpanzee sequences, generated by FRAGS, reveal that methodological differences can give rise to significantly different estimates of important substitution parameters. The estimated substitution rates were also used to infer upper bounds on the amount of sequencing error in the datasets that we have analysed. Conclusion We have developed a system that performs robust estimation of substitution rates for orthologous sequences from a pair of organisms. Our system can be used when fragmentary genomic or transcript data are available from one of the organisms and the other is a completely sequenced genome within the Ensembl database. As well as estimating substitution statistics, our system enables the user to manage and query alignment and substitution data. PMID:15005802

  2. PROFIT: Bayesian profile fitting of galaxy images

    NASA Astrophysics Data System (ADS)

    Robotham, A. S. G.; Taranu, D. S.; Tobar, R.; Moffett, A.; Driver, S. P.

    2017-04-01

    We present PROFIT, a new code for Bayesian two-dimensional photometric galaxy profile modelling. PROFIT consists of a low-level C++ library (libprofit), accessible via a command-line interface and documented API, along with high-level R (PROFIT) and PYTHON (PyProFit) interfaces (available at github.com/ICRAR/libprofit, github.com/ICRAR/ProFit, and github.com/ICRAR/pyprofit, respectively). R PROFIT is also available pre-built from CRAN; however, this version will be slightly behind the latest GitHub version. libprofit offers fast and accurate two-dimensional integration for a useful number of profiles, including Sérsic, Core-Sérsic, broken-exponential, Ferrer, Moffat, empirical King, point-source, and sky, with a simple mechanism for adding new profiles. We show detailed comparisons between libprofit and GALFIT. libprofit is both faster and more accurate than GALFIT at integrating the ubiquitous Sérsic profile for the most common values of the Sérsic index n (0.5 < n < 8). The high-level fitting code PROFIT is tested on a sample of galaxies with both SDSS and deeper KiDS imaging. We find good agreement in the fit parameters, with larger scatter in best-fitting parameters from fitting images from different sources (SDSS versus KiDS) than from using different codes (PROFIT versus GALFIT). A large suite of Monte Carlo-simulated images are used to assess prospects for automated bulge-disc decomposition with PROFIT on SDSS, KiDS, and future LSST imaging. We find that the biggest increases in fit quality come from moving from SDSS- to KiDS-quality data, with less significant gains moving from KiDS to LSST.
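The Sérsic profile that libprofit integrates has a simple radial form, shown below with the common Ciotti & Bertin approximation for the b_n constant. This is generic textbook code for illustration, not libprofit's implementation.

```python
import numpy as np

def sersic(r, i_e, r_e, n):
    """Sérsic surface-brightness profile I(r) = I_e * exp(-b_n * ((r/r_e)^(1/n) - 1)).
    b_n uses the Ciotti & Bertin asymptotic approximation (good for n >~ 0.5)."""
    b = 2.0 * n - 1.0 / 3.0 + 4.0 / (405.0 * n)
    return i_e * np.exp(-b * ((r / r_e) ** (1.0 / n) - 1.0))

r = np.linspace(0.1, 50.0, 500)
disk_like = sersic(r, i_e=1.0, r_e=10.0, n=1.0)    # exponential disc
bulge_like = sersic(r, i_e=1.0, r_e=10.0, n=4.0)   # de Vaucouleurs-like bulge
```

By construction I(r_e) = I_e for any n; larger n concentrates light in the centre while leaving proportionally brighter extended wings, which is what makes high-n profiles expensive to integrate accurately in 2D.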

  3. Dose rate calculations around 192Ir brachytherapy sources using a Sievert integration model

    NASA Astrophysics Data System (ADS)

    Karaiskos, P.; Angelopoulos, A.; Baras, P.; Rozaki-Mavrouli, H.; Sandilos, P.; Vlachos, L.; Sakelliou, L.

    2000-02-01

The classical Sievert integral method is a valuable tool for dose rate calculations around brachytherapy sources, combining simplicity with reasonable computational times. However, its accuracy in predicting dose rate anisotropy around 192Ir brachytherapy sources has been repeatedly put into question. In this work, we used a primary and scatter separation technique to improve an existing modification of the Sievert integral (Williamson's isotropic scatter model) that determines dose rate anisotropy around commercially available 192Ir brachytherapy sources. The proposed Sievert formalism provides increased accuracy while maintaining the simplicity and computational time efficiency of the Sievert integral method. To describe transmission within the materials encountered, the formalism makes use of narrow-beam attenuation coefficients which can be directly and easily calculated from the initially emitted 192Ir spectrum. The other numerical parameters required for its implementation, once calculated with the aid of our home-made Monte Carlo simulation code, can be used for any 192Ir source design. Calculations of dose rate and anisotropy functions with the proposed Sievert expression, around commonly used 192Ir high dose rate sources and other 192Ir elongated source designs, are in good agreement with corresponding accurate Monte Carlo results which have been reported by our group and other authors.
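A classical Sievert integral for a filtered line source can be evaluated numerically as below. The active length, effective attenuation coefficient, and wall thickness are illustrative placeholders, not a specific 192Ir source design, and the oblique-filtration geometry is deliberately simplified.

```python
import numpy as np

def sievert_dose_rate(x, y, L=3.5, mu=1.1, t_wall=0.05, n=2000):
    """Discretized Sievert integral for a line source of active length L (mm)
    centred at the origin along the x-axis, inside a capsule wall of
    thickness t_wall (mm) with effective attenuation coefficient mu (1/mm).
    Returns an unnormalized relative dose rate at point (x, y)."""
    xs = np.linspace(-L / 2, L / 2, n)         # source elements
    dx, dy = x - xs, y
    r2 = dx**2 + dy**2
    # oblique path length through the wall, longer toward the source axis
    sin_theta = np.abs(dy) / np.sqrt(r2)
    path = t_wall / np.maximum(sin_theta, 1e-6)
    return np.sum(np.exp(-mu * np.minimum(path, 10.0)) / r2) / n

d_transverse = sievert_dose_rate(0.0, 10.0)    # perpendicular to the source axis
d_oblique = sievert_dose_rate(9.8, 2.0)        # near the source axis, similar distance
```

The longer filtration path toward the source axis is what produces the dose rate anisotropy the abstract discusses, while the 1/r² factor reproduces inverse-square falloff on the transverse axis.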

  4. The impact of 14nm photomask variability and uncertainty on computational lithography solutions

    NASA Astrophysics Data System (ADS)

    Sturtevant, John; Tejnil, Edita; Buck, Peter D.; Schulze, Steffen; Kalk, Franklin; Nakagawa, Kent; Ning, Guoxiang; Ackmann, Paul; Gans, Fritz; Buergel, Christian

    2013-09-01

Computational lithography solutions rely upon accurate process models to faithfully represent the imaging system output for a defined set of process and design inputs. These models rely upon the accurate representation of multiple parameters associated with the scanner and the photomask. Many input variables for simulation are based upon designed or recipe-requested values or independent measurements. It is known, however, that certain measurement methodologies, while precise, can have significant inaccuracies. Additionally, there are known errors associated with the representation of certain system parameters. With shrinking total CD control budgets, appropriate accounting for all sources of error becomes more important, and the cumulative consequence of input errors to the computational lithography model can become significant. In this work, we examine via simulation the impact of errors in the representation of photomask properties including CD bias, corner rounding, refractive index, thickness, and sidewall angle. The factors that are most critical to represent accurately in the model are cataloged. CD bias values are based on state-of-the-art mask manufacturing data, while changes in the other variables are postulated, highlighting the need for improved metrology and communication between mask and OPC model experts. The simulations ignore the wafer photoresist model and show the sensitivity of predictions to various model inputs associated with the mask. It is shown that the wafer simulations are very dependent upon the 1D/2D representation of the mask and, for 3D, that the mask sidewall angle is a very sensitive factor influencing simulated wafer CD results.

  5. Detection algorithm for glass bottle mouth defect by continuous wavelet transform based on machine vision

    NASA Astrophysics Data System (ADS)

    Qian, Jinfang; Zhang, Changjiang

    2014-11-01

An efficient algorithm based on the continuous wavelet transform combined with prior knowledge, which can be used to detect defects of the glass bottle mouth, is proposed. Firstly, under the illumination of a ball integrating light source, a perfect glass bottle mouth image is obtained by a Japanese Computar camera through an IEEE-1394b interface. A single-threshold method based on the gray-level histogram is used to obtain the binary image of the glass bottle mouth. In order to efficiently suppress noise, a moving-average filter is employed to smooth the histogram of the original glass bottle mouth image. Then a continuous wavelet transform is performed to accurately determine the segmentation threshold. Mathematical morphology operations are used to get a normal binary bottle mouth mask. A glass bottle to be detected is moved to the detection zone by a conveyor belt. Both the bottle mouth image and its binary image are obtained by the above method. The binary image is multiplied with the normal bottle mask and a region of interest is obtained. Four parameters (number of connected regions, coordinates of the centroid position, diameter of the inner circle, and area of the annular region) can be computed from the region of interest. Glass bottle mouth detection rules are designed from these four parameters so as to accurately detect and identify the defect conditions of the glass bottle. Finally, glass bottles from the Coca-Cola Company are used to verify the proposed algorithm. The experimental results show that the proposed algorithm can accurately detect the defect conditions of the glass bottles with 98% detection accuracy.
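The histogram-smoothing and single-threshold steps can be sketched with NumPy on a synthetic bottle-mouth image. The moving-average window, grey-level ranges, and synthetic annulus are illustrative assumptions; the paper refines the threshold further with a continuous wavelet transform.

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic 8-bit image: dark annulus (glass rim) on a bright background
img = np.clip(rng.normal(200, 10, (256, 256)), 0, 255)
yy, xx = np.mgrid[:256, :256]
ring = (np.hypot(yy - 128, xx - 128) > 60) & (np.hypot(yy - 128, xx - 128) < 80)
img[ring] = np.clip(rng.normal(60, 10, ring.sum()), 0, 255)
img = img.astype(np.uint8)

# Smooth the grey-level histogram with a moving-average filter
hist = np.bincount(img.ravel(), minlength=256).astype(float)
smooth = np.convolve(hist, np.ones(9) / 9.0, mode="same")

# Single threshold at the deepest valley between the two histogram peaks
lo, hi = 60, 200
valley = lo + np.argmin(smooth[lo:hi])
binary = img < valley          # True on the dark annulus, False on background
```

From `binary`, region parameters such as the area of the annular region follow directly, e.g. `binary.sum()`.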

  6. Final Report (OO-ERD-056) MEDIOS: Modeling Earth Deformation Using Interferometric Observations from Space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vincent, P; Walter, B; Zucca, J

    2002-01-29

This final report summarizes the accomplishments of the 2-year LDRD-ER project ''MEDIOS: Modeling Earth Deformation using Interferometric Observations from Space'' (00-ERD-056), which began in FY00 and ended in FY01. The structure of this report consists of this summary part plus two separate journal papers, each having their own UCRL number, which document in more detail the major results in two (of three) major categories of this study. The two categories and their corresponding paper titles are (1) Seismic Hazard Mitigation (''Aseismic Creep Events along the Southern San Andreas Fault System''), and (2) Ground-based Nuclear Explosion Monitoring, or GNEM (''New Signatures of Underground Nuclear Tests Revealed by Satellite Radar Interferometry''). The third category is Energy Exploitation Applications; it does not have a separate journal article associated with it but is described briefly. The purpose of this project was to develop a capability within the Geophysics and Global Security Division to process and analyze InSAR data for the purposes of constructing more accurate ground deformation source models relevant to Hazards, Energy, and NAI applications. Once this was accomplished, an inversion tool was to be created that could be applied to many different types (sources) of surface deformation so that accurate source parameters could be determined for a variety of subsurface processes of interest to customers of the GGS Division. This new capability was desired to help attract new project funding for the division.

  7. Evaluation of Effective Sources in Uncertainty Measurements of Personal Dosimetry by a Harshaw TLD System

    PubMed Central

    Hosseini Pooya, SM; Orouji, T

    2014-01-01

Background: Accurate results for the individual doses reported by service providers in personal dosimetry are very important. There are national/international criteria for acceptable dosimetry system performance. Objective: In this research, the sources of uncertainty in a TLD-based personal dosimetry system are identified, measured and calculated. Method: These sources include: inhomogeneity of TLD sensitivity, variability of TLD readings due to limited sensitivity and background, energy dependence, directional dependence, non-linearity of the response, fading, dependence on ambient temperature/humidity, and calibration errors, all of which may affect the dose response. Some parameters which influence the above sources of uncertainty are studied for Harshaw TLD-100 card dosimeters as well as the hot-gas Harshaw 6600 TLD reader system. Results: The individual uncertainty of each source was measured to be less than 6.7% at the 68% confidence level. The total uncertainty was calculated to be 17.5% at the 95% confidence level. Conclusion: The TLD-100 personal dosimeters together with the Harshaw 6600 reader system show a total uncertainty less than the admissible value of 42% for personal dosimetry services. PMID:25505769
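Combining independent uncertainty components in quadrature and expanding to approximately 95% confidence follows the standard GUM-style recipe. The component values below are hypothetical placeholders, not the paper's measured components.

```python
import math

# Illustrative component standard uncertainties (as fractions, each at 1 sigma);
# hypothetical values, not the study's measurements
components = {
    "element_sensitivity": 0.067,
    "reader_background":   0.040,
    "energy_dependence":   0.030,
    "fading":              0.025,
    "calibration":         0.035,
}

# Combined standard uncertainty: root-sum-of-squares of independent components
u_combined = math.sqrt(sum(u**2 for u in components.values()))

# Expanded uncertainty with coverage factor k = 2 (~95% confidence)
u_expanded = 2.0 * u_combined
```

The quadrature sum is dominated by the largest component, which is why reducing the single worst source pays off most in a dosimetry uncertainty budget.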

  8. Study on Material Parameters Identification of Brain Tissue Considering Uncertainty of Friction Coefficient

    NASA Astrophysics Data System (ADS)

    Guan, Fengjiao; Zhang, Guanjun; Liu, Jie; Wang, Shujing; Luo, Xu; Zhu, Feng

    2017-10-01

Accurate material parameters are critical for constructing high-biofidelity finite element (FE) models. However, it is hard to obtain brain tissue parameters accurately because of the effects of irregular geometry and uncertain boundary conditions. Considering the complexity of material testing and the uncertainty of the friction coefficient, a computational inverse method for viscoelastic material parameter identification of brain tissue is presented based on the interval analysis method. Firstly, intervals are used to quantify the friction coefficient in the boundary condition. Then the inverse problem of material parameter identification under an uncertain friction coefficient is transformed into two types of deterministic inverse problem. Finally, an intelligent optimization algorithm is used to solve the two types of deterministic inverse problem quickly and accurately, and the range of material parameters can be easily acquired with no need for a large number of samples. The efficiency and convergence of this method are demonstrated by the material parameter identification of the thalamus. The proposed method provides a potentially effective tool for building high-biofidelity human finite element models in the study of traffic accident injury.
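The transformation of an interval-valued friction coefficient into deterministic inverse problems at the interval endpoints can be sketched with a toy forward model. The model, observation, and numbers below are entirely hypothetical, and a monotonic parameter dependence is assumed so that the endpoints bound the identified range.

```python
from scipy.optimize import minimize_scalar

def forward(E, mu):
    """Toy forward model: a response that depends on a stiffness-like
    parameter E and an uncertain friction coefficient mu (hypothetical)."""
    return E * (1.0 + 0.5 * mu)

obs_force = 12.0            # hypothetical measured response
mu_interval = (0.1, 0.3)    # friction known only as an interval

def identify(mu):
    """Deterministic inverse problem for one fixed friction coefficient."""
    res = minimize_scalar(lambda E: (forward(E, mu) - obs_force) ** 2,
                          bounds=(0.1, 100.0), method="bounded")
    return res.x

# Interval of identified parameters from the two endpoint problems
E_bounds = sorted(identify(mu) for mu in mu_interval)
```

Each endpoint problem is an ordinary deterministic identification, so any optimizer can be reused unchanged; the pair of solutions brackets the parameter range induced by the friction uncertainty.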

  9. Accuracy of self-reported survey data on assisted reproductive technology treatment parameters and reproductive history.

    PubMed

    Stern, Judy E; McLain, Alexander C; Buck Louis, Germaine M; Luke, Barbara; Yeung, Edwina H

    2016-08-01

It is unknown whether data obtained from maternal self-report on assisted reproductive technology treatment parameters and reproductive history are accurate enough for use in research studies. We evaluated the accuracy of self-reported assisted reproductive technology treatment and reproductive history from the Upstate KIDS study in comparison with clinical data reported to the Society for Assisted Reproductive Technology Clinic Outcome Reporting System. Upstate KIDS maternal questionnaire data from deliveries between 2008 and 2010 were linked to data reported to the Society for Assisted Reproductive Technology Clinic Outcome Reporting System. The 617 index deliveries were compared as to treatment type (frozen embryo transfer and donor egg or sperm) and use of intracytoplasmic sperm injection and assisted hatching. Use of injectable medications, self-report of assisted reproductive technology, or frozen embryo transfer prior to the index deliveries were also compared. We report agreement, in which both sources had yes or both no, and sensitivity of maternal report using the Society for Assisted Reproductive Technology Clinic Outcome Reporting System as the gold standard. Significance was determined using χ(2) at P < .05. Universal agreement was not reached on any parameter but was best for the treatment types of frozen embryo transfer (agreement, 96%; sensitivity, 93%) and use of donor eggs (agreement, 97%; sensitivity, 82%) or sperm (agreement, 98%; sensitivity, 82%). Use of intracytoplasmic sperm injection (agreement, 78%; sensitivity, 78%) and assisted hatching (agreement, 57%; sensitivity, 38%) agreed less well with self-reported use (P < .0001). In vitro fertilization (agreement, 82%) and frozen embryo transfer (agreement, 90%) prior to the index delivery were more consistently reported than was use of injectable medication (agreement, 76%) (P < .0001).
Women accurately report in vitro fertilization treatment but are less accurate about procedures handled in the laboratory (intracytoplasmic sperm injection or assisted hatching). Clinics might better communicate with patients on the use of these procedures, and researchers should use caution when using self-reported treatment data. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. A momentum source model for wire-wrapped rod bundles—Concept, validation, and application

    DOE PAGES

    Hu, Rui; Fanning, Thomas H.

    2013-06-19

    Large uncertainties still exist in the treatment of wire spacers and drag models for momentum transfer in current lumped-parameter models. Here, to improve the hydraulic modeling of wire-wrap spacers in a rod bundle, a three-dimensional momentum source model (MSM) has been developed to model the anisotropic flow without the need to resolve the geometric details of the wire-wraps. The MSM is examined in steady-state simulations of 7-pin and 37-pin bundles using the commercial CFD code STAR-CCM+. The calculated steady-state inter-subchannel cross-flow velocities match very well in comparisons between bare bundles with the MSM applied and wire-wrapped bundles with explicit geometry. The validity of the model is further verified by mesh and parameter sensitivity studies. Furthermore, the MSM is applied to a 61-pin EBR-II experimental subassembly for both steady-state and PLOF transient simulations. Reasonably accurate predictions of temperature, pressure, and fluid flow velocities have been achieved using the MSM for both steady-state and transient conditions. Significant computing resources are saved with the MSM since it can be used on a much coarser computational mesh.

  11. System calibration method for Fourier ptychographic microscopy

    NASA Astrophysics Data System (ADS)

    Pan, An; Zhang, Yan; Zhao, Tianyu; Wang, Zhaojun; Dan, Dan; Lei, Ming; Yao, Baoli

    2017-09-01

    Fourier ptychographic microscopy (FPM) is a recently proposed computational imaging technique with both high resolution and a wide field of view. In current FPM imaging platforms, systematic error sources include aberrations, light-emitting diode (LED) intensity fluctuation, parameter imperfections, and noise, all of which may severely corrupt the reconstruction results with similar artifacts. It is therefore difficult to distinguish the dominant error from these degraded reconstructions without any prior knowledge. In addition, the systematic error is generally a mixture of various error sources in real situations, and these cannot be separated due to their mutual restriction and conversion. To this end, we report a system calibration procedure, termed SC-FPM, to calibrate the mixed systematic errors simultaneously from an overall perspective, based on the simulated annealing algorithm, an LED intensity correction method, a nonlinear regression process, and an adaptive step-size strategy, which involves the evaluation of an error metric at each iteration step, followed by re-estimation of accurate parameters. The performance achieved in both simulations and experiments demonstrates that the proposed method outperforms other state-of-the-art algorithms. The reported system calibration scheme improves the robustness of FPM, relaxes the experimental conditions, and does not require any prior knowledge, which makes FPM more pragmatic.
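The adaptive step-size annealing idea the abstract describes can be sketched generically. This is a minimal, illustrative sketch only, not the authors' SC-FPM implementation: a quadratic error metric stands in for the FPM reconstruction error, and all constants (cooling rate, step shrinkage) are assumed for the example.

```python
import math
import random

# Generic simulated-annealing parameter search with a shrinking step size:
# perturb the parameter, accept improvements (or, probabilistically, worse
# moves to escape local minima), and track the best error seen so far.
def anneal(error, x0, step=1.0, t0=1.0, iters=500, seed=1):
    rng = random.Random(seed)
    x, e = x0, error(x0)
    x_best, e_best = x, e
    t = t0
    for _ in range(iters):
        cand = x + step * rng.uniform(-1.0, 1.0)
        e_cand = error(cand)
        # Metropolis acceptance rule
        if e_cand < e or rng.random() < math.exp(-(e_cand - e) / max(t, 1e-12)):
            x, e = cand, e_cand
        if e < e_best:
            x_best, e_best = x, e
        t *= 0.99        # cooling schedule
        step *= 0.995    # adaptive (shrinking) step size
    return x_best

# recover an assumed "true" system parameter of 0.7 from the toy error metric
best = anneal(lambda p: (p - 0.7) ** 2, x0=0.0)
```

In SC-FPM the error metric would instead compare measured and simulated low-resolution images at each iteration, and the state vector would hold the mixed system parameters (LED positions, aberration coefficients, intensities).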

  12. Green-Ampt infiltration parameters in riparian buffers

    Treesearch

    L.M. Stahr; D.E. Eisenhauer; M.J. Helmers; Mike G. Dosskey; T.G. Franti

    2004-01-01

    Riparian buffers can improve surface water quality by filtering contaminants from runoff before they enter streams. Infiltration is an important process in riparian buffers. Computer models are often used to assess the performance of riparian buffers. Accurate prediction of infiltration by these models is dependent upon accurate estimates of infiltration parameters....

  13. Identification of immiscible NAPL contaminant sources in aquifers by a modified two-level saturation based imperialist competitive algorithm.

    PubMed

    Ghafouri, H R; Mosharaf-Dehkordi, M; Afzalan, B

    2017-07-01

    A simulation-optimization model is proposed for identifying the characteristics of local immiscible NAPL contaminant sources inside aquifers. This model employs the UTCHEM 9.0 software as its simulator for solving the governing equations associated with multi-phase flow in porous media. As the optimization model, a novel two-level saturation-based Imperialist Competitive Algorithm (ICA) is proposed to estimate the parameters of contaminant sources. The first level consists of three parallel independent ICAs and acts as a pre-conditioner for the second level, which is a single modified ICA. The ICA in the second level is modified by dividing each country into a number of provinces (smaller parts). Like countries in the classical ICA, these provinces are optimized through the assimilation, competition, and revolution steps of the ICA. To increase the diversity of populations, a new approach, named the knock-the-base method, is proposed. The performance and accuracy of the simulation-optimization model are assessed by solving a set of two- and three-dimensional problems considering the effects of different parameters such as grid size, rock heterogeneity, and the designated monitoring networks. The obtained numerical results indicate that this simulation-optimization model provides accurate results in fewer iterations than a model employing the classical one-level ICA. Copyright © 2017 Elsevier B.V. All rights reserved.
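The assimilation and revolution operators at the core of any ICA variant can be sketched on a one-dimensional toy problem. This is a single-level, illustrative sketch only; the paper's algorithm is two-level, multi-dimensional, and uses UTCHEM as the forward simulator, so the cost function and all constants below are assumptions for the example.

```python
import random

def cost(x):
    # toy objective: squared distance from an assumed true source parameter
    return (x - 3.0) ** 2

def ica(n_countries=30, n_imperialists=3, iters=200, beta=1.6, seed=0):
    rng = random.Random(seed)
    countries = [rng.uniform(-10.0, 10.0) for _ in range(n_countries)]
    for _ in range(iters):
        countries.sort(key=cost)                 # strongest countries first
        imperialists = countries[:n_imperialists]
        for i in range(n_imperialists, n_countries):
            imp = rng.choice(imperialists)
            # assimilation: move each colony toward a chosen imperialist
            countries[i] += beta * rng.random() * (imp - countries[i])
            # revolution: occasional random restart preserves diversity
            if rng.random() < 0.05:
                countries[i] = rng.uniform(-10.0, 10.0)
    return min(countries, key=cost)

best = ica()
```

The paper's second-level modification would further split each candidate vector into "provinces" optimized by the same three operators.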

  14. PRECISION INTEGRATOR FOR MINUTE ELECTRIC CURRENTS

    DOEpatents

    Hemmendinger, A.; Helmer, R.J.

    1961-10-24

    An integrator is described for measuring the value of integrated minute electrical currents. The device consists of a source capacitor connected in series with the source of such currents, a second capacitor of accurately known capacitance, a source of accurately known and constant potential, and means, responsive to the potential developed across the source capacitor, for reversibly connecting the second capacitor in series with the source of known potential and with the source capacitor at a rate proportional to the potential across the source capacitor, thereby maintaining the magnitude of the potential across the source capacitor at approximately zero. (AEC)

  15. An adaptive Gaussian process-based method for efficient Bayesian experimental design in groundwater contaminant source identification problems: ADAPTIVE GAUSSIAN PROCESS-BASED INVERSION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jiangjiang; Li, Weixuan; Zeng, Lingzao

    Surrogate models are commonly used in Bayesian approaches such as Markov chain Monte Carlo (MCMC) to avoid repetitive CPU-demanding model evaluations. However, the approximation error of a surrogate may lead to biased estimation of the posterior distribution. This bias can be corrected by constructing a very accurate surrogate or by implementing MCMC in a two-stage manner. Since two-stage MCMC requires extra original model evaluations, the computational cost is still high. If measurement information is incorporated, a locally accurate approximation of the original model can be adaptively constructed at low computational cost. Based on this idea, we propose a Gaussian process (GP) surrogate-based Bayesian experimental design and parameter estimation approach for groundwater contaminant source identification problems. A major advantage of the GP surrogate is that it provides a convenient estimate of the approximation error, which can be incorporated in the Bayesian formula to avoid over-confident estimation of the posterior distribution. The proposed approach is tested with a numerical case study. Without sacrificing estimation accuracy, the new approach achieves about a 200-fold speed-up compared to our previous work using two-stage MCMC.
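The key trick the abstract mentions, folding the GP surrogate's own predictive variance into the likelihood so the posterior widens where the surrogate is less accurate, can be sketched in one dimension. Everything here (the toy forward model sin(θ), the kernel hyperparameters) is an illustrative assumption, not the paper's groundwater model.

```python
import numpy as np

def gp_fit(X, y, ell=1.0, sigma_f=1.0, noise=1e-6):
    # squared-exponential kernel GP regression via Cholesky factorization
    K = sigma_f**2 * np.exp(-0.5 * (X[:, None] - X[None, :])**2 / ell**2)
    L = np.linalg.cholesky(K + noise * np.eye(len(X)))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return X, L, alpha, ell, sigma_f

def gp_predict(model, Xs):
    X, L, alpha, ell, sigma_f = model
    Ks = sigma_f**2 * np.exp(-0.5 * (Xs[:, None] - X[None, :])**2 / ell**2)
    mu = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    s2 = sigma_f**2 - np.sum(v**2, axis=0)   # predictive (approximation) variance
    return mu, np.maximum(s2, 0.0)

def log_likelihood(theta, d_obs, model, noise_var):
    mu, s2 = gp_predict(model, np.atleast_1d(theta))
    total_var = noise_var + s2   # surrogate error inflates the variance
    return -0.5 * ((d_obs - mu)**2 / total_var + np.log(2 * np.pi * total_var)).sum()

# train the surrogate on a toy forward model f(theta) = sin(theta)
X = np.linspace(0.0, 3.0, 15)
model = gp_fit(X, np.sin(X))
```

An MCMC sampler would then evaluate `log_likelihood` instead of the expensive forward model, with the inflated variance guarding against over-confidence far from the training designs.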

  16. Critical bounds on noise and SNR for robust estimation of real-time brain activity from functional near infra-red spectroscopy.

    PubMed

    Aqil, Muhammad; Jeong, Myung Yung

    2018-04-24

    The robust characterization of real-time brain activity carries potential for many applications. However, the contamination of measured signals by various instrumental, environmental, and physiological sources of noise introduces a substantial amount of signal variance and, consequently, challenges real-time estimation of contributions from underlying neuronal sources. Functional near infra-red spectroscopy (fNIRS) is an emerging imaging modality whose real-time potential is yet to be fully explored. The objectives of the current study are to (i) validate a time-dependent linear model of hemodynamic responses in fNIRS, and (ii) test the robustness of this approach against measurement noise (instrumental and physiological) and mis-specification of the hemodynamic response basis functions (amplitude, latency, and duration). We propose a linear hemodynamic model with time-varying parameters, which are estimated (adapted and tracked) using a dynamic recursive least squares algorithm. Owing to the linear nature of the activation model, the problem of achieving robust convergence to an accurate estimate of the model parameters is recast as a problem of parameter error stability around the origin. We show that robust convergence of the proposed method is guaranteed in the presence of an acceptable degree of model misspecification, and we derive an upper bound on noise under which reliable parameters can still be inferred. We also derive a lower bound on the signal-to-noise ratio above which reliable parameters can still be inferred from a channel/voxel. Whilst here applied to fNIRS, the proposed methodology is applicable to other hemodynamic-based imaging technologies such as functional magnetic resonance imaging. Copyright © 2018 Elsevier Inc. All rights reserved.
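Recursive least squares with a forgetting factor, the class of estimator the abstract describes, can be sketched for a scalar time-varying parameter. The regression model below (y_t = θ_t·x_t + noise, with θ jumping mid-way) is a toy stand-in for the authors' fNIRS activation model, not their actual formulation.

```python
import numpy as np

def rls(x, y, lam=0.98, delta=100.0):
    """Scalar recursive least squares with forgetting factor lam."""
    theta, P = 0.0, delta   # parameter estimate and its (scalar) covariance
    est = []
    for xt, yt in zip(x, y):
        k = P * xt / (lam + xt * P * xt)   # gain
        theta += k * (yt - xt * theta)     # correct with prediction error
        P = (P - k * xt * P) / lam         # covariance update with forgetting
        est.append(theta)
    return np.array(est)

rng = np.random.default_rng(0)
t = np.arange(400)
true_theta = np.where(t < 200, 1.0, 2.0)          # parameter jumps mid-way
x = rng.normal(1.0, 0.3, size=t.size)             # regressor (design signal)
y = true_theta * x + rng.normal(0.0, 0.05, size=t.size)
est = rls(x, y)
```

The forgetting factor (here 0.98, an effective window of roughly 50 samples) is what lets the estimator track the time-varying parameter instead of averaging over the whole record.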

  17. Optimization of physiological parameter for macroscopic modeling of reacted singlet oxygen concentration in an in-vivo model

    NASA Astrophysics Data System (ADS)

    Wang, Ken Kang-Hsin; Busch, Theresa M.; Finlay, Jarod C.; Zhu, Timothy C.

    2009-02-01

    Singlet oxygen (1O2) is generally believed to be the major cytotoxic agent during photodynamic therapy (PDT), and the reaction between 1O2 and tumor cells defines the treatment efficacy. From a complete set of macroscopic kinetic equations describing the photochemical processes of PDT, we can express the reacted 1O2 concentration, [1O2]rx, in a form related to the time integration of the product of the 1O2 quantum yield and the PDT dose rate. The production of [1O2]rx involves physiological and photophysical parameters which need to be determined explicitly for the photosensitizer of interest. Once these parameters are determined, we expect the computed [1O2]rx to be an explicit dosimetric indicator for clinical PDT. Incorporating the diffusion equation governing light transport in a turbid medium, the spatially and temporally resolved [1O2]rx described by the macroscopic kinetic equations can be numerically calculated. A sudden drop in the calculated [1O2]rx with distance, following the decrease of the light fluence rate, is observed. This suggests a possible correlation between [1O2]rx and the necrosis boundary in tumors subject to PDT irradiation. In this study, we have theoretically examined the sensitivity of the physiological parameter under two clinically relevant conditions: (1) a collimated light source on a semi-infinite turbid medium and (2) a linear light source in a turbid medium. To accurately determine the parameter in a clinically relevant environment, the computed [1O2]rx is expected to be used to fit experimentally measured necrosis data obtained from an in vivo animal model.

  18. A more accurate method using MOVES (Motor Vehicle Emission Simulator) to estimate emission burden for regional-level analysis.

    PubMed

    Liu, Xiaobo

    2015-07-01

    The U.S. Environmental Protection Agency's (EPA) Motor Vehicle Emission Simulator (MOVES) is required by the EPA to replace MOBILE6 as the official on-road emission model. Incorporating annual vehicle miles traveled (VMT) by Highway Performance Monitoring System (HPMS) vehicle class, MOVES allocates VMT from HPMS to MOVES source (vehicle) types and calculates the emission burden by MOVES source type. However, the calculated running emission burden by MOVES source type may deviate from the actual emission burden because of the MOVES source population, specifically the population fraction by MOVES source type within each HPMS vehicle class. The deviation also results from the use of a universal set of parameters, i.e., the relative mileage accumulation rate (relativeMAR), packaged in the MOVES default database. This paper presents a novel approach that adjusts the relativeMAR to eliminate the impact of MOVES source population on running exhaust emissions while keeping start and evaporative emissions unchanged, for both MOVES2010b and MOVES2014. Results from MOVES runs using this approach indicated significant improvements in VMT distribution and emission burden estimation for each MOVES source type. The deviation of VMT by MOVES source type is reduced by this approach from 12% to less than 0.05% for MOVES2010b and from 50% to less than 0.2% for MOVES2014, except for MOVES source type 53, which retains about 30% deviation. The improvement in VMT distribution eliminates the emission burden deviation for each MOVES source type. For MOVES2010b, the deviation of emission burdens decreases from -12% for particulate matter less than 2.5 μm (PM2.5) and -9% for carbon monoxide (CO) to less than 0.002%. For MOVES2014, it drops from 80% for CO and 97% for PM2.5 to 0.006%. 
    This approach is developed to more accurately estimate total emission burdens using EPA's MOVES, both MOVES2010b and MOVES2014, by redistributing vehicle miles traveled (VMT) by Highway Performance Monitoring System (HPMS) class to MOVES source types on the basis of a comprehensive traffic study, i.e., local link-by-link VMT broken down into MOVES source types.
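The allocation step the abstract revolves around, splitting an HPMS class's VMT across MOVES source types in proportion to population times relativeMAR, can be sketched with hypothetical numbers. The source-type names, populations, and relativeMAR values below are illustrative assumptions, not MOVES default-database values.

```python
def allocate_vmt(total_vmt, population, relative_mar):
    """Split an HPMS class's annual VMT across source types in proportion
    to population x relative mileage accumulation rate (relativeMAR)."""
    weights = {st: population[st] * relative_mar[st] for st in population}
    w_sum = sum(weights.values())
    return {st: total_vmt * w / w_sum for st, w in weights.items()}

# hypothetical inputs for one HPMS vehicle class
population = {"passenger_car": 9_000, "passenger_truck": 1_000}
relative_mar = {"passenger_car": 1.0, "passenger_truck": 1.2}
vmt = allocate_vmt(100_000_000, population, relative_mar)
```

The paper's adjustment works on the relativeMAR values so that the resulting per-source-type VMT matches an externally known distribution regardless of the population fractions.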

  19. Real-Time Localization of Moving Dipole Sources for Tracking Multiple Free-Swimming Weakly Electric Fish

    PubMed Central

    Jun, James Jaeyoon; Longtin, André; Maler, Leonard

    2013-01-01

    In order to survive, animals must quickly and accurately locate prey, predators, and conspecifics using the signals they generate. The signal source location can be estimated using multiple detectors and the inverse relationship between the received signal intensity (RSI) and the distance, but the difficulty of source localization increases if there is an additional dependence on the orientation of the signal source. In such cases, the signal source can be approximated as an ideal dipole for simplification. Based on a theoretical model, the RSI can be directly predicted from a known dipole location; but estimating a dipole location from RSIs has no direct analytical solution. Here, we propose an efficient solution to the dipole localization problem by using a lookup table (LUT) to store RSIs predicted by our theoretically derived dipole model at many possible dipole positions and orientations. For a given set of RSIs measured at multiple detectors, our algorithm finds the dipole location having the closest matching normalized RSIs in the LUT, and further refines the location at higher resolution. Studying the natural behavior of weakly electric fish (WEF) requires efficiently computing their location and the temporal pattern of their electric signals over extended periods. Our dipole localization method was successfully applied to track single or multiple freely swimming WEF in shallow water in real time, as each fish can be closely approximated by an ideal current dipole in two dimensions. Our optimized search algorithm found the animals' positions, orientations, and tail-bending angles quickly and accurately under various conditions, without the need for calibrating individual-specific parameters. Our dipole localization method is directly applicable to studying the role of active sensing during spatial navigation, or social interactions between multiple WEF. 
Furthermore, our method could be extended to other application areas involving dipole source localization. PMID:23805244
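The LUT idea can be sketched in a toy 2-D geometry: precompute normalized detector intensities for many candidate dipole positions and orientations, then return the candidate whose normalized pattern best matches the measurement. The detector layout, the 1/r² cosine field model, and the grid resolution below are illustrative assumptions, not the authors' tank setup or calibrated model.

```python
import math

DETECTORS = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]  # assumed layout

def dipole_rsi(px, py, angle):
    """Ideal 2-D dipole: intensity falls as 1/r^2, modulated by the cosine of
    the angle between the dipole axis and the detector direction."""
    out = []
    for dx, dy in DETECTORS:
        rx, ry = dx - px, dy - py
        r = math.hypot(rx, ry) + 1e-9
        out.append(math.cos(angle - math.atan2(ry, rx)) / r**2)
    return out

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def build_lut(step=0.05, n_angles=36):
    lut, p = [], step
    while p < 1.0:
        q = step
        while q < 1.0:
            for k in range(n_angles):
                a = 2.0 * math.pi * k / n_angles
                lut.append(((p, q, a), normalize(dipole_rsi(p, q, a))))
            q += step
        p += step
    return lut

def localize(rsi, lut):
    """Nearest normalized-pattern match; a refinement pass would follow."""
    target = normalize(rsi)
    return min(lut, key=lambda e: sum((a - b)**2 for a, b in zip(e[1], target)))[0]

lut = build_lut()
est = localize(dipole_rsi(0.3, 0.7, math.pi / 3), lut)
```

Normalizing both the measured and stored patterns removes the unknown overall source strength, which is why the match is done on pattern shape rather than raw amplitudes.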

  20. Measurement and Validation of Bidirectional Reflectance of Space Shuttle and Space Station Materials for Computerized Lighting Models

    NASA Technical Reports Server (NTRS)

    Fletcher, Lauren E.; Aldridge, Ann M.; Wheelwright, Charles; Maida, James

    1997-01-01

    Task illumination has a major impact on human performance: What a person can perceive in his environment significantly affects his ability to perform tasks, especially in space's harsh environment. Training for lighting conditions in space has long depended on physical models and simulations to emulate the effect of lighting, but such tests are expensive and time-consuming. To evaluate lighting conditions not easily simulated on Earth, personnel at NASA Johnson Space Center's (JSC) Graphics Research and Analysis Facility (GRAF) have been developing computerized simulations of various illumination conditions using the ray-tracing program, Radiance, developed by Greg Ward at Lawrence Berkeley Laboratory. Because these computer simulations are only as accurate as the data used, accurate information about the reflectance properties of materials and light distributions is needed. JSC's Lighting Environment Test Facility (LETF) personnel gathered material reflectance properties for a large number of paints, metals, and cloths used in the Space Shuttle and Space Station programs, and processed these data into reflectance parameters needed for the computer simulations. They also gathered lamp distribution data for most of the light sources used, and validated the ability to accurately simulate lighting levels by comparing predictions with measurements for several ground-based tests. The result of this study is a database of material reflectance properties for a wide variety of materials, and lighting information for most of the standard light sources used in the Shuttle/Station programs. The combination of the Radiance program and GRAF's graphics capability form a validated computerized lighting simulation capability for NASA.

  1. An Improved Method for Seismic Event Depth and Moment Tensor Determination: CTBT Related Application

    NASA Astrophysics Data System (ADS)

    Stachnik, J.; Rozhkov, M.; Baker, B.

    2016-12-01

    According to the Protocol to the CTBT, the International Data Centre is required to conduct expert technical analysis and special studies to improve event parameters and assist States Parties in identifying the source of specific events. Determination of a seismic event's source mechanism and depth is part of these tasks. It is typically done through a strategic linearized inversion of the waveforms for a complete or subset of source parameters, or a similarly defined grid search through precomputed Green's functions created for particular source models. We show preliminary results using the latter approach, from an improved software design applied on a moderately powered computer. In this development we tried to be compliant with different modes of the CTBT monitoring regime: cover a wide range of source-receiver distances (regional to teleseismic), resolve shallow source depths, provide full moment tensor solutions based on body- and surface-wave recordings, be fast enough to satisfy both on-demand studies and automatic processing, properly incorporate observed waveforms and any a priori uncertainties, and accurately estimate a posteriori uncertainties. The implemented HDF5-based Green's function pre-packaging allows much greater flexibility in utilizing different software packages and methods for computation. Future additions will add rapid use of Instaseis/AXISEM full-waveform synthetics to the pre-computed Green's function archive. Along with traditional post-processing analysis of waveform misfits through several objective functions and variance reduction, we follow a probabilistic approach to assess the robustness of the moment tensor solution. In the course of this project, full moment tensor and depth estimates are determined for the DPRK 2009, 2013, and 2016 events and shallow earthquakes using a new implementation of waveform fitting of teleseismic P waves. A full grid search over the entire moment tensor space is used to appropriately sample all possible solutions. 
    A recent method by Tape & Tape (2012) to discretize the complete moment tensor space from a geometric perspective is used. Moment tensors for the DPRK events show isotropic percentages greater than 50%. Depth estimates for the DPRK events range from 1.0 to 1.4 km. Probabilistic uncertainty estimates on the moment tensor parameters provide robustness to the solution.

  2. A new lumped-parameter approach to simulating flow processes in unsaturated dual-porosity media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zimmerman, R.W.; Hadgu, T.; Bodvarsson, G.S.

    We have developed a new lumped-parameter dual-porosity approach to simulating unsaturated flow processes in fractured rocks. Fluid flow between the fracture network and the matrix blocks is described by a nonlinear equation that relates the imbibition rate to the local difference in liquid-phase pressure between the fractures and the matrix blocks. This equation is a generalization of the Warren-Root equation but, unlike the Warren-Root equation, is accurate in both the early- and late-time regimes. The fracture/matrix interflow equation has been incorporated into a computational module, compatible with the TOUGH simulator, to serve as a source/sink term for fracture elements. The new approach achieves accuracy comparable to simulations in which the matrix blocks are discretized, but typically requires an order of magnitude less computational time.
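The lumped-parameter idea, an exchange rate driven by the fracture-matrix pressure difference used as a source/sink term, can be sketched in its simplest (linear, classical Warren-Root) limit; the paper's actual equation adds a nonlinear correction for early-time accuracy. The shape factor and mobility values are illustrative only.

```python
def interflow(p_fracture, p_matrix, shape_factor=0.5, mobility=1.0):
    """Linear Warren-Root-type exchange: rate proportional to the local
    fracture-matrix pressure difference (arbitrary units)."""
    return shape_factor * mobility * (p_fracture - p_matrix)

# explicit Euler stepping of a matrix block equilibrating with a fracture
# network held at unit pressure
dt, p_m = 0.01, 0.0
history = []
for _ in range(1000):
    p_m += dt * interflow(1.0, p_m)
    history.append(p_m)
```

In a simulator such as TOUGH, this rate (with the nonlinear correction) would be evaluated per fracture element and added to its mass-balance equation as the source/sink term.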

  3. Visible Light Image-Based Method for Sugar Content Classification of Citrus

    PubMed Central

    Wang, Xuefeng; Wu, Chunyan; Hirafuji, Masayuki

    2016-01-01

    Visible light imaging of citrus fruit from Mie Prefecture of Japan was performed to determine whether an algorithm could be developed to predict the sugar content. This nondestructive classification showed that accurate segmentation of different images can be realized by a correlation analysis based on a threshold value of the coefficient of determination. There is an obvious correlation between the sugar content of citrus fruit and certain parameters of the color images. The selected image parameters were combined by an addition algorithm. The sugar content of citrus fruit can then be predicted by the dummy variable method. The results showed that small but orange citrus fruits often have a high sugar content. The study shows that it is possible to predict the sugar content of citrus fruit and to classify fruit by sugar content using light in the visible spectrum, without the need for an additional light source. PMID:26811935

  4. EEG source localization: Sensor density and head surface coverage.

    PubMed

    Song, Jasmine; Davey, Colin; Poulsen, Catherine; Luu, Phan; Turovets, Sergei; Anderson, Erik; Li, Kai; Tucker, Don

    2015-12-30

    The accuracy of EEG source localization depends on a sufficient sampling of the surface potential field, an accurate conducting volume estimation (head model), and a suitable and well-understood inverse technique. The goal of the present study is to examine the effect of sampling density and coverage on the ability to accurately localize sources, using common linear inverse weight techniques, at different depths. Several inverse methods are examined, using commonly used head conductivity values. Simulation studies were employed to examine the effect of spatial sampling of the potential field at the head surface, in terms of sensor density and coverage of the inferior and superior head regions. In addition, the effects of sensor density and coverage are investigated in the source localization of epileptiform EEG. Greater sensor density improves source localization accuracy. Moreover, across all sampling densities and inverse methods, adding samples on the inferior surface improves the accuracy of source estimates at all depths. More accurate source localization of EEG data can be achieved with high spatial sampling of the head surface electrodes. The most accurate source localization is obtained when the voltage surface is densely sampled over both the superior and inferior surfaces. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.

  5. Bayesian Modal Estimation of the Four-Parameter Item Response Model in Real, Realistic, and Idealized Data Sets.

    PubMed

    Waller, Niels G; Feuerstahler, Leah

    2017-01-01

    In this study, we explored item and person parameter recovery of the four-parameter model (4PM) in over 24,000 real, realistic, and idealized data sets. In the first analyses, we fit the 4PM and three alternative models to data from three Minnesota Multiphasic Personality Inventory-Adolescent form factor scales using Bayesian modal estimation (BME). Our results indicated that the 4PM fits these scales better than simpler item response theory (IRT) models. Next, using the parameter estimates from these real data analyses, we estimated 4PM item parameters in 6,000 realistic data sets to establish minimum sample size requirements for accurate item and person parameter recovery. Using a factorial design that crossed discrete levels of item parameters, sample size, and test length, we also fit the 4PM to an additional 18,000 idealized data sets to extend our parameter recovery findings. Our combined results demonstrated that 4PM item parameters and parameter functions (e.g., item response functions) can be accurately estimated using BME in moderate to large samples (N ⩾ 5,000) and person parameters can be accurately estimated in smaller samples (N ⩾ 1,000). In the supplemental files, we report annotated [Formula: see text] code that shows how to estimate 4PM item and person parameters in [Formula: see text] (Chalmers, 2012).
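The four-parameter logistic item response function the study estimates is standard: a (discrimination), b (difficulty), c (lower asymptote, "guessing"), and d (upper asymptote, "slipping"). The parameter values in the sketch are illustrative, not estimates from the paper.

```python
import math

def p_4pm(theta, a, b, c, d):
    """Probability of a keyed response under the 4PM for ability theta."""
    return c + (d - c) / (1.0 + math.exp(-a * (theta - b)))

# at theta == b the curve sits midway between the two asymptotes c and d
p_mid = p_4pm(theta=0.0, a=1.5, b=0.0, c=0.2, d=0.95)
```

Setting d = 1 recovers the 3PL model, and c = 0, d = 1 the 2PL, which is what makes the 4PM a strict generalization of the simpler IRT models compared in the study.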

  6. CHARMM Force-Fields with Modified Polyphosphate Parameters Allow Stable Simulation of the ATP-Bound Structure of Ca(2+)-ATPase.

    PubMed

    Komuro, Yasuaki; Re, Suyong; Kobayashi, Chigusa; Muneyuki, Eiro; Sugita, Yuji

    2014-09-09

    Adenosine triphosphate (ATP) is an indispensable energy source in cells. In a wide variety of biological phenomena like glycolysis, muscle contraction/relaxation, and active ion transport, chemical energy released from ATP hydrolysis is converted to mechanical forces to bring about large-scale conformational changes in proteins. Investigation of structure-function relationships in these proteins by molecular dynamics (MD) simulations requires modeling of ATP in solution and ATP bound to proteins with accurate force-field parameters. In this study, we derived new force-field parameters for the triphosphate moiety of ATP based on the high-precision quantum calculations of methyl triphosphate. We tested our new parameters on membrane-embedded sarcoplasmic reticulum Ca(2+)-ATPase and four soluble proteins. The ATP-bound structure of Ca(2+)-ATPase remains stable during MD simulations, contrary to the outcome in shorter simulations using original parameters. Similar results were obtained with the four ATP-bound soluble proteins. The new force-field parameters were also tested by investigating the range of conformations sampled during replica-exchange MD simulations of ATP in explicit water. Modified parameters allowed a much wider range of conformational sampling compared with the bias toward extended forms with original parameters. A diverse range of structures agrees with the broad distribution of ATP conformations in proteins deposited in the Protein Data Bank. These simulations suggest that the modified parameters will be useful in studies of ATP in solution and of the many ATP-utilizing proteins.

  7. Comment on ‘Information hidden in the velocity distribution of ions and the exact kinetic Bohm criterion’

    NASA Astrophysics Data System (ADS)

    Mustafaev, A. S.; Sukhomlinov, V. S.; Timofeev, N. A.

    2018-03-01

    This Comment addresses some mathematical inaccuracies made by the authors of the paper ‘Information hidden in the velocity distribution of ions and the exact kinetic Bohm criterion’ (Plasma Sources Science and Technology 26 055003). In the Comment, we show that the range of plasma parameters for which the authors' theoretical results are valid was defined incorrectly, and we give a more accurate definition of this range. As a result, we show that it is impossible to confirm or refute the feasibility of the kinetic Bohm criterion on the basis of the data in the cited paper.

  8. Improved optical axis determination accuracy for fiber-based polarization-sensitive optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Lu, Zenghai; Matcher, Stephen J.

    2013-03-01

    We report on a new calibration technique that permits the accurate extraction of the sample Jones matrix, and hence the fast-axis orientation, using fiber-based polarization-sensitive optical coherence tomography (PS-OCT) built entirely from non-polarization-maintaining fiber such as SMF-28. In this technique, two quarter waveplates are used to completely specify the parameters of the system fibers in the sample arm, so that the Jones matrix of the sample can be determined directly. The device was validated by measurements of a quarter waveplate and an equine tendon sample with a single-mode-fiber-based swept-source PS-OCT system.

  9. [The choice of color in fixed prosthetics: what steps should be followed for a reliable outcome?].

    PubMed

    Vanheusden, Alain; Mainjot, Amélie

    2004-01-01

    The creation of a perfectly-matched esthetic fixed restoration is undeniably one of the most difficult challenges in modern dentistry. The final outcome depends on several essential steps: the use of an appropriate light source, the accurate analysis and correct evaluation of patient's teeth parameters (morphology, colour, surface texture,...), the clear and precise transmission of this data to the laboratory and the sound interpretation of it by a dental technician who masters esthetic prosthetic techniques perfectly. The purpose of this paper was to give a reproducible clinical method to the practitioner in order to achieve a reliable dental colorimetric analysis.

  10. Performance Impact of Deflagration to Detonation Transition Enhancing Obstacles

    NASA Technical Reports Server (NTRS)

    Paxson, Daniel E.; Schauer, Frederick; Hopper, David

    2012-01-01

    A sub-model is developed to account for the drag and heat transfer enhancement resulting from deflagration-to-detonation transition (DDT) inducing obstacles commonly used in pulse detonation engines (PDEs). The sub-model is incorporated as a source term in a time-accurate, quasi-one-dimensional, CFD-based PDE simulation. The simulation and sub-model are then validated through comparison with a particular experiment in which limited DDT obstacle parameters were varied. The simulation is then used to examine the relative contributions of drag and heat transfer to the observed thrust reduction. It is found that heat transfer is far more significant than aerodynamic drag in this particular experiment.

  11. Black Hole growth and star formation activity in the CDFS

    NASA Astrophysics Data System (ADS)

    Brusa, Marcella; Fiore, Fabrizio

    2010-07-01

    We present a study of the properties of obscured Active Galactic Nuclei (AGN) detected in the CDFS 1 Ms observation and of their host galaxies. We limited the analysis to the MUSIC area, for which deep K-band observations obtained with ISAAC at the VLT are available, ensuring accurate identifications of the counterparts of the X-ray sources as well as reliable determination of photometric redshifts and galaxy parameters, such as stellar masses and star formation rates. Among other findings, the X-ray selected AGN fraction increases with stellar mass, up to a value of 30% at z > 1 and M* > 3×10^11 M⊙.

  12. Parameterization of spectra

    NASA Technical Reports Server (NTRS)

    Cornish, C. R.

    1983-01-01

    Following reception and analog-to-digital (A/D) conversion, atmospheric radar backscatter echoes need to be processed to obtain the desired information about atmospheric processes and to eliminate or minimize contaminating contributions from other sources. Various signal processing techniques have been implemented at mesosphere-stratosphere-troposphere (MST) radar facilities to estimate parameters of interest from received spectra. Such estimation techniques need to be both accurate and efficient enough to be within the capabilities of the particular data-processing system. The various techniques used to parameterize the spectra of received signals are reviewed herein. Noise estimation, electromagnetic interference, data smoothing, correlation, and the Doppler effect are among the specific points addressed.
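
    One widely used way to parameterize such spectra is through their low-order moments (echo power, mean Doppler velocity, spectral width). A minimal sketch, assuming a synthetic Gaussian echo on a flat noise floor and a median-based noise estimate (all values illustrative):

```python
import numpy as np

# Estimate the first three spectral moments (power, mean Doppler velocity,
# spectral width) from a synthetic radar Doppler spectrum; the values are
# illustrative, not from any particular MST facility.
v = np.linspace(-20.0, 20.0, 401)      # radial velocity axis (m/s)
dv = v[1] - v[0]
spec = 0.05 + 2.0 * np.exp(-0.5 * ((v - 3.0) / 1.5) ** 2)   # echo + noise floor

# Robust noise-floor estimate (median), subtracted before taking moments.
signal = np.clip(spec - np.median(spec), 0.0, None)

p0 = signal.sum() * dv                                 # 0th moment: power
v_mean = (v * signal).sum() * dv / p0                  # 1st moment: velocity
width = np.sqrt(((v - v_mean) ** 2 * signal).sum() * dv / p0)  # 2nd: width

print(round(float(v_mean), 2), round(float(width), 2))
```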

  13. Determining the Stellar Initial Mass by Means of the 17O/18O Ratio on the AGB

    NASA Astrophysics Data System (ADS)

    De Nutte, Rutger; Decin, Leen; Olofsson, Hans; de Koter, Alex; Karakas, Amanda; Lombaert, Robin; Milam, Stefanie; Ramstedt, Sofia; Stancliffe, Richard; Homan, Ward; Van de Sande, Marie

    2016-07-01

    This poster presents newly obtained circumstellar 12C17O and 12C18O line observations, from which the line intensities are related directly to the 17O/18O surface abundance ratio, for a sample of nine AGB stars covering the three spectral types. These ratios are evaluated in relation to a fundamental stellar evolution parameter: the stellar initial mass. The 17O/18O ratio is shown to function as an effective method of determining the initial stellar mass. Through comparison with predictions by stellar evolution models, accurate initial mass estimates are calculated for all nine sources.

  14. Uncertainty analysis of scintillometers methods in measuring sensible heat fluxes of forest ecosystem

    NASA Astrophysics Data System (ADS)

    Zheng, N.

    2017-12-01

    Sensible heat flux (H) is one of the driving factors of surface turbulent motion and energy exchange, so it is particularly important to measure it accurately at the regional scale. However, owing to the heterogeneity of the underlying surface, the hydrothermal regime, and varying weather conditions, it is difficult to estimate representative fluxes at the kilometer scale. Since the 1980s, the scintillometer, which exploits the turbulence effect of light propagating through the atmosphere, has developed into an effective and widely used instrument for deriving heat fluxes at the regional scale. The parameter obtained directly by the scintillometer is the structure parameter of the refractive index of air, based on fluctuations of light intensity. Combined with parameters such as the temperature structure parameter, zero-plane displacement, surface roughness, wind velocity, air temperature, and other meteorological data, heat fluxes can be derived. These additional parameters increase the uncertainty of the flux because of differences between the actual features of turbulent motion and the applicable conditions of turbulence theory. Most previous studies have focused on constant-flux layers above the roughness sublayer over homogeneous, flat underlying surfaces under suitable weather conditions, so the criteria and modified forms of the key parameters have been treated as invariant. In this study, we conduct measurements over the hilly area of northern China under different plant covers, such as cork oak, cedar-black, and locust. Based on key research on the thresholds and modified forms of saturation under different turbulence intensities, modified forms of the Bowen ratio under different drying-and-wetting conditions, and the universal function for the temperature structure parameter under different atmospheric stabilities, the dominant sources of uncertainty will be determined.
    This study is significant for revealing the mechanisms of uncertainty and quantifying their influence. It can provide a theoretical basis and technical support for accurately measuring sensible heat fluxes of forest ecosystems with the scintillometer method, and a foundation for further study of the role of forest ecosystems in energy balance and climate change.
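
    The first processing step named above, converting the measured refractive-index structure parameter into the temperature structure parameter, can be sketched as follows. The pressure, temperature, and Cn2 values are assumed for illustration, and humidity effects are neglected:

```python
# Convert the scintillometer's directly measured refractive-index structure
# parameter Cn2 into the temperature structure parameter CT2, neglecting
# humidity effects; P, T, and Cn2 below are assumed illustrative values.
P = 1000.0       # air pressure (hPa)
T = 293.0        # air temperature (K)
Cn2 = 1.0e-15    # refractive-index structure parameter (m^(-2/3))

# At optical wavelengths, n - 1 is approximately 77.6e-6 * P / T (P in hPa),
# so CT2 = Cn2 * (T^2 / (77.6e-6 * P))^2.
CT2 = Cn2 * (T ** 2 / (77.6e-6 * P)) ** 2
print(f"CT2 = {CT2:.2e} K^2 m^(-2/3)")
```

    Deriving H from CT2 then requires the Monin-Obukhov similarity iteration, which is where the stability functions and Bowen-ratio corrections discussed in the abstract enter.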

  15. Effects of noise levels and call types on the source levels of killer whale calls.

    PubMed

    Holt, Marla M; Noren, Dawn P; Emmons, Candice K

    2011-11-01

    Accurate parameter estimates relevant to the vocal behavior of marine mammals are needed to assess potential effects of anthropogenic sound exposure, including how masking noise reduces the active space of sounds used for communication. Information about how these animals modify their vocal behavior in response to noise exposure is also needed for such assessment. Prior studies have reported variations in the source levels of killer whale sounds, and a more recent study reported that killer whales compensate for vessel masking noise by increasing their call amplitude. The objectives of the current study were to investigate the source levels of a variety of call types in southern resident killer whales while also considering background noise level as a likely factor related to call source level variability. The source levels of 763 discrete calls, along with corresponding background noise, were measured over three summer field seasons in the waters surrounding the San Juan Islands, WA. Both noise level and call type significantly affected call source levels (1-40 kHz band, range of 135.0-175.7 dB rms re 1 μPa at 1 m). These factors should be considered in models that predict how anthropogenic masking noise reduces vocal communication space in marine mammals.
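
    As a minimal illustration of how source levels are back-calculated from received levels, the sketch below assumes simple spherical spreading (transmission loss of 20 log10 of range); field studies use more careful propagation models, and the numbers are invented:

```python
import math

# Back-calculate a call's source level from its received level, assuming
# spherical spreading (transmission loss = 20*log10(range)). This is a
# simplification; the received level and range below are invented.
def source_level(received_db, range_m):
    """Source level (dB re 1 uPa at 1 m) from received level and range."""
    return received_db + 20.0 * math.log10(range_m)

rl = 110.0    # received level (dB rms re 1 uPa)
r = 1000.0    # whale-to-hydrophone range (m)
print(source_level(rl, r))   # 110 dB received at 1 km -> 170 dB at 1 m
```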

  16. Improving land surface parameter retrieval by integrating plant traits priors in the MULTIPLY data assimilation platform

    NASA Astrophysics Data System (ADS)

    Corbin, A. E.; Timmermans, J.; Hauser, L.; Bodegom, P. V.; Soudzilovskaia, N. A.

    2017-12-01

    There is a growing demand for accurate land surface parameterization from remote sensing (RS) observations. This demand has not been satisfied, because most estimation schemes 1) apply a single-sensor, single-scale approach, and 2) require specific key variables to be `guessed', given the observational information required to accurately retrieve the parameters of interest. Consequently, many schemes assume specific variables to be constant or absent, leading to additional uncertainty. To address this, the MULTIscale SENTINEL land surface information retrieval Platform (MULTIPLY) was created. MULTIPLY couples a variety of RS sources with Radiative Transfer Models (RTMs) over varying spectral ranges, using data assimilation to estimate geophysical parameters. In addition, MULTIPLY uses prior information about the land surface to constrain the retrieval problem. This research aims to improve the retrieval of plant biophysical parameters through the use of priors on biophysical parameters/plant traits. Of particular interest are traits (physical, morphological, or chemical) affecting the individual performance and fitness of species. Plant traits that can be retrieved via RS and RTMs include leaf pigments, leaf water, LAI, phenols, C/N, etc. In-situ data for such traits were collected for a meta-analysis from databases such as TRY, Ecosis, and individual collaborators, with particular focus on chlorophyll, carotenoids, anthocyanins, phenols, leaf water, and LAI. ANOVA statistics were generated for each trait according to species, plant functional group (evergreens, grasses, etc.), and the trait itself. Traits were then compared using covariance matrices. Using these as priors, MULTIPLY is used to retrieve several plant traits at two validation sites, in the Netherlands (Speulderbos) and in Finland (Sodankylä).
    Initial comparisons show significantly improved results over non-prior-based retrievals.
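
    Building trait priors from pooled samples, per-trait means plus a covariance matrix linking traits, can be sketched as follows, with synthetic stand-in numbers rather than TRY or Ecosis data:

```python
import numpy as np

# Per-trait prior means and a trait covariance matrix from pooled samples;
# the numbers are synthetic stand-ins, not data from TRY or Ecosis.
rng = np.random.default_rng(1)
n = 200
chlorophyll = rng.normal(40.0, 8.0, n)                 # ug/cm^2
lai = 0.05 * chlorophyll + rng.normal(2.0, 0.4, n)     # correlated with chl.
leaf_water = rng.normal(0.012, 0.003, n)               # g/cm^2

traits = np.vstack([chlorophyll, lai, leaf_water])
prior_mean = traits.mean(axis=1)
prior_cov = np.cov(traits)       # 3x3; off-diagonals encode trait covariation

print(prior_mean.round(3))
print(prior_cov.round(3))
```

    The off-diagonal covariance terms are what let a prior on one well-constrained trait tighten the retrieval of a correlated, poorly constrained one.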

  17. A partial exponential lumped parameter model to evaluate groundwater age distributions and nitrate trends in long-screened wells

    USGS Publications Warehouse

    Jurgens, Bryant; Böhlke, John Karl; Kauffman, Leon J.; Belitz, Kenneth; Esser, Bradley K.

    2016-01-01

    A partial exponential lumped parameter model (PEM) was derived to determine age distributions and nitrate trends in long-screened production wells. The PEM can simulate age distributions for wells screened over any finite interval of an aquifer that has an exponential distribution of age with depth. The PEM has three parameters – the ratios of the saturated thickness to the depths of the top and bottom of the screen, and the mean age – but these can be reduced to one parameter (mean age) by using well construction information and estimates of the saturated thickness. The PEM was tested with data from 30 production wells in a heterogeneous alluvial fan aquifer in California, USA. Well construction data were used to guide parameterization of a PEM for each well, and mean age was calibrated to measured environmental tracer data (3H, 3He, CFC-113, and 14C). Results were compared to age distributions generated for individual wells using advective particle tracking models (PTMs). Age distributions from PTMs were more complex than PEM distributions, but PEMs provided better fits to tracer data, partly because the PTMs did not simulate 14C accurately in wells that captured varying amounts of old groundwater recharged at lower rates prior to groundwater development and irrigation. Nitrate trends were simulated independently of the calibration process, and the PEM provided good fits for at least 11 of 24 wells. This work shows that the PEM, and lumped parameter models (LPMs) in general, can often identify critical features of the age distributions in wells that are needed to explain observed tracer data and nonpoint source contaminant trends, even in systems where aquifer heterogeneity and water use complicate the distribution of age. While accurate PTMs are preferable for understanding and predicting aquifer-scale responses to water use and contaminant transport, LPMs can be sensitive to local conditions near individual wells that may be inaccurately represented or missing in an aquifer-scale flow model.
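
    The PEM idea, an exponential age-depth profile sampled only over the screened interval, can be sketched numerically. The turnover time and screen fractions below are illustrative assumptions, not values from the California wells:

```python
import numpy as np

# Numerical sketch of a partial exponential model (PEM): an aquifer with an
# exponential age-depth profile, sampled only over the screened interval.
# tau and the screen fractions are illustrative assumptions.
tau = 50.0                 # aquifer turnover time / mean age (years)
z_top, z_bot = 0.4, 0.9    # screen top/bottom as fractions of saturated thickness

# Exponential-model age-depth relation: t(z) = tau * ln(1 / (1 - z)),
# where z is the depth fraction below the water table.
z = np.linspace(z_top, z_bot, 100_000)
ages = tau * np.log(1.0 / (1.0 - z))

mean_age = ages.mean()     # mean age of water pumped from the screened interval
print(round(float(mean_age), 1))
```

    Because the screen excludes the youngest (shallow) water, the pumped mean age exceeds the whole-aquifer mean age, which is the feature that lets well construction data constrain the model.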

  18. Environmental risk assessment of water quality in harbor areas: a new methodology applied to European ports.

    PubMed

    Gómez, Aina G; Ondiviela, Bárbara; Puente, Araceli; Juanes, José A

    2015-05-15

    This work presents a standard and unified procedure for assessment of environmental risks at the contaminant source level in port aquatic systems. Using this method, port managers and local authorities will be able to hierarchically classify environmental hazards and proceed with the most suitable management actions. This procedure combines rigorously selected parameters and indicators to estimate the environmental risk of each contaminant source based on its probability, consequences and vulnerability. The spatio-temporal variability of multiple stressors (agents) and receptors (endpoints) is taken into account to provide accurate estimations for application of precisely defined measures. The developed methodology is tested on a wide range of different scenarios via application in six European ports. The validation process confirms its usefulness, versatility and adaptability as a management tool for port water quality in Europe and worldwide. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. How and Why to Do VLBI on GPS

    NASA Technical Reports Server (NTRS)

    Dickey, J. M.

    2010-01-01

    In order to establish the position of the center of mass of the Earth in the International Celestial Reference Frame, observations of the Global Positioning Satellite (GPS) constellation using the IVS network are important. With a good frame-tie between the coordinates of the IVS telescopes and nearby GPS receivers, plus a common local oscillator reference signal, it should be possible to observe and record simultaneously signals from the astrometric calibration sources and the GPS satellites. The standard IVS solution would give the atmospheric delay and clock offsets to use in analysis of the GPS data. Correlation of the GPS signals would then give accurate orbital parameters of the satellites in the ICRF reference frame, i.e., relative to the positions of the astrometric sources. This is particularly needed to determine motion of the center of mass of the Earth along the rotation axis.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brewer, Brendon J.; Foreman-Mackey, Daniel; Hogg, David W., E-mail: bj.brewer@auckland.ac.nz

    We present and implement a probabilistic (Bayesian) method for producing catalogs from images of stellar fields. The method is capable of inferring the number of sources N in the image and can also handle the challenges introduced by noise, overlapping sources, and an unknown point-spread function. The luminosity function of the stars can also be inferred, even when the precise luminosity of each star is uncertain, via the use of a hierarchical Bayesian model. The computational feasibility of the method is demonstrated on two simulated images with different numbers of stars. We find that our method successfully recovers the input parameter values along with principled uncertainties even when the field is crowded. We also compare our results with those obtained from the SExtractor software. While the two approaches largely agree about the fluxes of the bright stars, the Bayesian approach provides more accurate inferences about the faint stars and the number of stars, particularly in the crowded case.

  1. Impact of random discrete dopant in extension induced fluctuation in gate-source/drain underlap FinFET

    NASA Astrophysics Data System (ADS)

    Wang, Yijiao; Huang, Peng; Xin, Zheng; Zeng, Lang; Liu, Xiaoyan; Du, Gang; Kang, Jinfeng

    2014-01-01

    In this work, three-dimensional technology computer-aided design (TCAD) simulations are performed to investigate the impact of random discrete dopants (RDD), including extension-induced fluctuation, in 14 nm silicon-on-insulator (SOI) gate-source/drain (G-S/D) underlap fin field effect transistors (FinFETs). To fully understand the RDD impact in the extension, the RDD effect is evaluated in the channel and extension separately and together. The statistical variability of FinFET performance parameters, including threshold voltage (Vth), subthreshold slope (SS), drain-induced barrier lowering (DIBL), drive current (Ion), and leakage current (Ioff), is analyzed. The results indicate that RDD in the extension can lead to substantial variability, especially for SS, DIBL, and Ion, and should be taken into account together with RDD in the channel to obtain an accurate estimate of random dopant fluctuation (RDF) effects. Meanwhile, a higher doping concentration in the extension region is suggested from the perspective of overall variability control.

  2. A rapid compatibility analysis of potential offshore sand sources for beaches of the Santa Barbara Littoral Cell

    USGS Publications Warehouse

    Mustain, N.; Griggs, G.; Barnard, P.L.

    2007-01-01

    The beaches of the Santa Barbara Littoral Cell, which are narrow as a result of natural and/or anthropogenic factors, may benefit from nourishment. Sand compatibility is fundamental to beach nourishment success, and grain size is the parameter most often used to evaluate equivalence. Only after understanding which sand sizes naturally compose beaches in a specific cell, especially the smallest size that remains on the beach, can the potential compatibility of source areas, such as offshore borrow sites, be accurately assessed. This study examines sediments on the beach and in the nearshore (5-20 m depth) for the entire Santa Barbara Littoral Cell east of Point Conception. A digital bed-sediment camera (the Eyeball) and a spatial autocorrelation technique were used to determine sediment grain size. Here we report on whether nearshore sediments are comparable and compatible with beach sands of the Santa Barbara Littoral Cell. © 2007 ASCE.

  3. Characterization of the new neutron imaging and materials science facility IMAT

    NASA Astrophysics Data System (ADS)

    Minniti, Triestino; Watanabe, Kenichi; Burca, Genoveva; Pooley, Daniel E.; Kockelmann, Winfried

    2018-04-01

    IMAT is a new cold neutron imaging and diffraction instrument located at the second target station of the pulsed neutron spallation source ISIS, UK. A broad range of materials science and materials testing areas will be covered by IMAT. We present the characterization of the imaging part, including the energy-selective and energy-dispersive imaging options, and provide the basic parameters of the radiography and tomography instrument. In particular, detailed studies of one- and two-dimensional neutron beam flux profiles, neutron flux as a function of neutron wavelength, spatially and energy-dependent beam uniformity, guide artifacts, divergence and spatial resolution, and neutron pulse widths are provided. An accurate characterization of the neutron beam at the sample position, located 56 m from the source, is required to optimize the collection of radiographic and tomographic data sets, and in particular for performing energy-dispersive neutron imaging via time-of-flight methods.
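
    For a pulsed-source instrument, the neutron wavelength follows from the time of flight via the de Broglie relation. A small sketch for a 56 m flight path (flight times chosen for illustration):

```python
# Relate time of flight to neutron wavelength for an instrument 56 m from
# the source, via the de Broglie relation (constants in SI units).
H_PLANCK = 6.62607015e-34        # Planck constant (J*s)
M_NEUTRON = 1.67492749804e-27    # neutron mass (kg)
L = 56.0                         # moderator-to-sample flight path (m)

def wavelength_angstrom(tof_s):
    """Neutron wavelength (Angstrom) from time of flight (s)."""
    return H_PLANCK * tof_s / (M_NEUTRON * L) * 1e10

for tof_ms in (14.2, 28.3, 56.6):    # illustrative flight times
    print(f"{tof_ms} ms -> {wavelength_angstrom(tof_ms * 1e-3):.2f} A")
```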

  4. QMCPACK: an open source ab initio quantum Monte Carlo package for the electronic structure of atoms, molecules and solids

    NASA Astrophysics Data System (ADS)

    Kim, Jeongnim; Baczewski, Andrew D.; Beaudet, Todd D.; Benali, Anouar; Chandler Bennett, M.; Berrill, Mark A.; Blunt, Nick S.; Josué Landinez Borda, Edgar; Casula, Michele; Ceperley, David M.; Chiesa, Simone; Clark, Bryan K.; Clay, Raymond C., III; Delaney, Kris T.; Dewing, Mark; Esler, Kenneth P.; Hao, Hongxia; Heinonen, Olle; Kent, Paul R. C.; Krogel, Jaron T.; Kylänpää, Ilkka; Li, Ying Wai; Lopez, M. Graham; Luo, Ye; Malone, Fionn D.; Martin, Richard M.; Mathuriya, Amrita; McMinis, Jeremy; Melton, Cody A.; Mitas, Lubos; Morales, Miguel A.; Neuscamman, Eric; Parker, William D.; Pineda Flores, Sergio D.; Romero, Nichols A.; Rubenstein, Brenda M.; Shea, Jacqueline A. R.; Shin, Hyeondeok; Shulenburger, Luke; Tillack, Andreas F.; Townsend, Joshua P.; Tubman, Norm M.; Van Der Goetz, Brett; Vincent, Jordan E.; ChangMo Yang, D.; Yang, Yubo; Zhang, Shuai; Zhao, Luning

    2018-05-01

    QMCPACK is an open source quantum Monte Carlo package for ab initio electronic structure calculations. It supports calculations of metallic and insulating solids, molecules, atoms, and some model Hamiltonians. Implemented real space quantum Monte Carlo algorithms include variational, diffusion, and reptation Monte Carlo. QMCPACK uses Slater–Jastrow type trial wavefunctions in conjunction with a sophisticated optimizer capable of optimizing tens of thousands of parameters. The orbital space auxiliary-field quantum Monte Carlo method is also implemented, enabling cross validation between different highly accurate methods. The code is specifically optimized for calculations with large numbers of electrons on the latest high performance computing architectures, including multicore central processing unit and graphical processing unit systems. We detail the program’s capabilities, outline its structure, and give examples of its use in current research calculations. The package is available at http://qmcpack.org.
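
    The flavor of real-space variational Monte Carlo can be conveyed by a toy calculation for the hydrogen atom: a deliberately tiny illustration of the Metropolis sampling and local-energy averaging that production codes such as QMCPACK implement at scale.

```python
import numpy as np

# Toy variational Monte Carlo for the hydrogen atom (atomic units) with trial
# wavefunction psi = exp(-alpha * r). A classroom-scale sketch, not QMCPACK.
rng = np.random.default_rng(42)
alpha = 0.8
n_walkers, n_steps, step = 400, 2000, 0.5

r = rng.normal(scale=1.0, size=(n_walkers, 3))   # initial walker positions

def local_energy(r):
    """Local energy for psi = exp(-alpha*r): -alpha^2/2 + (alpha - 1)/r."""
    dist = np.linalg.norm(r, axis=1)
    return -0.5 * alpha**2 + (alpha - 1.0) / dist

energies = []
for _ in range(n_steps):
    trial = r + rng.normal(scale=step, size=r.shape)
    # Metropolis acceptance for |psi|^2 = exp(-2*alpha*r)
    log_ratio = -2.0 * alpha * (np.linalg.norm(trial, axis=1)
                                - np.linalg.norm(r, axis=1))
    accept = rng.random(n_walkers) < np.exp(np.minimum(log_ratio, 0.0))
    r[accept] = trial[accept]
    energies.append(local_energy(r).mean())

# Analytic expectation: E(alpha) = alpha^2/2 - alpha = -0.48 at alpha = 0.8
print(round(float(np.mean(energies[500:])), 3))
```

    Optimizing alpha (here done analytically: the minimum is alpha = 1, E = -0.5) is the variational step that QMCPACK's optimizer performs for trial wavefunctions with tens of thousands of parameters.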

  5. QMCPACK: an open source ab initio quantum Monte Carlo package for the electronic structure of atoms, molecules and solids.

    PubMed

    Kim, Jeongnim; Baczewski, Andrew T; Beaudet, Todd D; Benali, Anouar; Bennett, M Chandler; Berrill, Mark A; Blunt, Nick S; Borda, Edgar Josué Landinez; Casula, Michele; Ceperley, David M; Chiesa, Simone; Clark, Bryan K; Clay, Raymond C; Delaney, Kris T; Dewing, Mark; Esler, Kenneth P; Hao, Hongxia; Heinonen, Olle; Kent, Paul R C; Krogel, Jaron T; Kylänpää, Ilkka; Li, Ying Wai; Lopez, M Graham; Luo, Ye; Malone, Fionn D; Martin, Richard M; Mathuriya, Amrita; McMinis, Jeremy; Melton, Cody A; Mitas, Lubos; Morales, Miguel A; Neuscamman, Eric; Parker, William D; Pineda Flores, Sergio D; Romero, Nichols A; Rubenstein, Brenda M; Shea, Jacqueline A R; Shin, Hyeondeok; Shulenburger, Luke; Tillack, Andreas F; Townsend, Joshua P; Tubman, Norm M; Van Der Goetz, Brett; Vincent, Jordan E; Yang, D ChangMo; Yang, Yubo; Zhang, Shuai; Zhao, Luning

    2018-05-16

    QMCPACK is an open source quantum Monte Carlo package for ab initio electronic structure calculations. It supports calculations of metallic and insulating solids, molecules, atoms, and some model Hamiltonians. Implemented real space quantum Monte Carlo algorithms include variational, diffusion, and reptation Monte Carlo. QMCPACK uses Slater-Jastrow type trial wavefunctions in conjunction with a sophisticated optimizer capable of optimizing tens of thousands of parameters. The orbital space auxiliary-field quantum Monte Carlo method is also implemented, enabling cross validation between different highly accurate methods. The code is specifically optimized for calculations with large numbers of electrons on the latest high performance computing architectures, including multicore central processing unit and graphical processing unit systems. We detail the program's capabilities, outline its structure, and give examples of its use in current research calculations. The package is available at http://qmcpack.org.

  6. Bayesian parameter estimation of a k-ε model for accurate jet-in-crossflow simulations

    DOE PAGES

    Ray, Jaideep; Lefantzi, Sophia; Arunajatesan, Srinivasan; ...

    2016-05-31

    Reynolds-averaged Navier–Stokes models are not very accurate for high-Reynolds-number compressible jet-in-crossflow interactions. The inaccuracy arises from the use of inappropriate model parameters and from model-form errors in the Reynolds-averaged Navier–Stokes formulation. In this study, we pursue the hypothesis that Reynolds-averaged Navier–Stokes predictions can be significantly improved by using parameters inferred from experimental measurements of a supersonic jet interacting with a transonic crossflow.

  7. 3D Monte Carlo model with direct photon flux recording for optimal optogenetic light delivery

    NASA Astrophysics Data System (ADS)

    Shin, Younghoon; Kim, Dongmok; Lee, Jihoon; Kwon, Hyuk-Sang

    2017-02-01

    Configuring the light power emitted from the optical fiber is an essential first step in planning in-vivo optogenetic experiments. However, diffusion theory, which was adopted for optogenetic research, precluded accurate estimates of light intensity in the semi-diffusive region where the primary locus of the stimulation is located. We present a 3D Monte Carlo model that provides an accurate and direct solution for light distribution in this region. Our method directly records the photon trajectory in the separate volumetric grid planes for the near-source recording efficiency gain, and it incorporates a 3D brain mesh to support both homogeneous and heterogeneous brain tissue. We investigated the light emitted from optical fibers in brain tissue in 3D, and we applied the results to design optimal light delivery parameters for precise optogenetic manipulation by considering the fiber output power, wavelength, fiber-to-target distance, and the area of neural tissue activation.
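
    The direct-recording idea, depositing photon-packet weight into a volumetric grid as the random walk proceeds, can be sketched as follows. The optical coefficients, grid size, and isotropic phase function are toy assumptions, not brain-tissue values:

```python
import numpy as np

# Photon-packet random walk in a homogeneous medium, depositing absorbed
# weight directly into a volumetric grid (the "direct recording" idea).
# mu_a, mu_s, the grid, and isotropic scattering are toy assumptions.
rng = np.random.default_rng(7)
mu_a, mu_s = 0.5, 10.0            # absorption / scattering coefficients (1/mm)
mu_t = mu_a + mu_s
grid = np.zeros((40, 40, 40))     # 1 mm voxels
origin = np.array([20.0, 20.0, 20.0])    # fiber tip at the grid centre

for _ in range(2000):             # photon packets
    pos = origin.copy()
    direction = np.array([0.0, 0.0, 1.0])    # fiber pointing along +z
    weight = 1.0
    while weight > 1e-2:
        step = -np.log(rng.random()) / mu_t       # sampled free path (mm)
        pos = pos + step * direction
        ijk = np.floor(pos).astype(int)
        if np.any(ijk < 0) or np.any(ijk >= 40):
            break                                  # photon left the volume
        absorbed = weight * mu_a / mu_t
        grid[tuple(ijk)] += absorbed               # record deposited weight
        weight -= absorbed
        u = rng.normal(size=3)                     # isotropic re-scattering
        direction = u / np.linalg.norm(u)

# Deposited weight is concentrated near the fiber tip
print(round(float(grid.sum()), 1))
```

    Real tissue models replace the isotropic phase function with an anisotropic one and assign per-voxel optical properties from a brain mesh, but the recording step is the same.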

  8. Morphological characterization of coral reefs by combining lidar and MBES data: A case study from Yuanzhi Island, South China Sea

    NASA Astrophysics Data System (ADS)

    Zhang, Kai; Yang, Fanlin; Zhang, Hande; Su, Dianpeng; Li, QianQian

    2017-06-01

    The correlation between seafloor morphological features and biological complexity has been identified in numerous recent studies. This research focused on the potential for accurate characterization of coral reefs based on high-resolution bathymetry from multiple sources. A standard deviation (STD) based method for quantitatively characterizing terrain complexity was developed that includes robust estimation to correct for irregular bathymetry and a calibration for the depth-dependent variability of measurement noise. Airborne lidar and shipborne sonar bathymetry measurements from Yuanzhi Island, South China Sea, were merged to generate seamless high-resolution coverage of coral bathymetry from the shoreline to deep water. The new algorithm was applied to the Yuanzhi Island surveys to generate maps of quantitative terrain complexity, which were then compared to in situ video observations of coral abundance. The terrain complexity parameter is significantly correlated with seafloor coral abundance, demonstrating the potential for accurately and efficiently mapping coral abundance through seafloor surveys, including combinations of surveys using different sensors.
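
    A minimal version of an STD-type complexity map with robust estimation might look like the following sketch, which uses a synthetic bathymetry grid and the median absolute deviation (MAD) as the robust spread estimator; the paper's actual robust-estimation and calibration details are not reproduced:

```python
import numpy as np

# Minimal STD-type terrain-complexity map with a robust spread estimator
# (median absolute deviation), so isolated depth spikes do not inflate the
# complexity value. The bathymetry grid below is synthetic and illustrative.
rng = np.random.default_rng(3)
ny, nx, win = 60, 60, 5
bathy = -20.0 + 0.01 * np.arange(nx) + rng.normal(0.0, 0.05, (ny, nx))
bathy[20:40, 20:40] += rng.normal(0.0, 0.5, (20, 20))   # a "rough" patch

half = win // 2
complexity = np.zeros((ny, nx))
for i in range(half, ny - half):
    for j in range(half, nx - half):
        w = bathy[i - half:i + half + 1, j - half:j + half + 1]
        resid = w - np.median(w)                          # crude local detrend
        complexity[i, j] = 1.4826 * np.median(np.abs(resid))  # MAD scaled ~ sigma

print(round(float(complexity[30, 30]), 3), round(float(complexity[10, 10]), 3))
```

    The rough patch stands out against the smooth background, while the MAD keeps single-ping spikes from masquerading as reef structure.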

  9. Toward real-time regional earthquake simulation II: Real-time Online earthquake Simulation (ROS) of Taiwan earthquakes

    NASA Astrophysics Data System (ADS)

    Lee, Shiann-Jong; Liu, Qinya; Tromp, Jeroen; Komatitsch, Dimitri; Liang, Wen-Tzong; Huang, Bor-Shouh

    2014-06-01

    We developed a Real-time Online earthquake Simulation system (ROS) to simulate regional earthquakes in Taiwan. The ROS uses centroid moment tensor solutions of seismic events from a Real-time Moment Tensor monitoring system (RMT), which provides all the point source parameters, including the event origin time, hypocentral location, moment magnitude, and focal mechanism, within 2 min of the occurrence of an earthquake. All of the source parameters are then automatically forwarded to the ROS to perform an earthquake simulation based on a spectral-element method (SEM). A new island-wide, high-resolution SEM mesh model is developed for the whole of Taiwan in this study. We have improved SEM mesh quality by introducing a thin high-resolution mesh layer near the surface to accommodate steep and rapidly varying topography. The mesh for the shallow sedimentary basin is adjusted to reflect its complex geometry and sharp lateral velocity contrasts. The grid resolution at the surface is about 545 m, which is sufficient to resolve the topography and tomography data for simulations accurate up to 1.0 Hz. The ROS is also an infrastructural service, making online earthquake simulation feasible. Users can conduct their own earthquake simulations by providing a set of source parameters through the ROS webpage. For visualization, a ShakeMovie and ShakeMap are produced during the simulation. The time needed for one event is roughly 3 min for a 70 s ground motion simulation. The ROS is operated online at the Institute of Earth Sciences, Academia Sinica (http://ros.earth.sinica.edu.tw/). Our long-term goal for the ROS system is to contribute to public earth science outreach and to realize seismic ground motion prediction in real time.

  10. Improved image quality in pinhole SPECT by accurate modeling of the point spread function in low magnification systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pino, Francisco; Roé, Nuria; Aguiar, Pablo, E-mail: pablo.aguiar.fernandez@sergas.es

    2015-02-15

    Purpose: Single photon emission computed tomography (SPECT) has become an important noninvasive imaging technique in small-animal research. Due to the high resolution required in small-animal SPECT systems, the spatially variant system response needs to be included in the reconstruction algorithm. Accurate modeling of the system response should result in a major improvement in the quality of reconstructed images. The aim of this study was to quantitatively assess the impact that accurate modeling of the spatially variant collimator/detector response has on image-quality parameters, using a low magnification SPECT system equipped with a pinhole collimator and a small gamma camera. Methods: Three methods were used to model the point spread function (PSF). For the first, only the geometrical pinhole aperture was included in the PSF. For the second, the septal penetration through the pinhole collimator was added. In the third method, the measured intrinsic detector response was incorporated. Tomographic spatial resolution was evaluated and contrast, recovery coefficients, contrast-to-noise ratio, and noise were quantified using a custom-built NEMA NU 4-2008 image-quality phantom. Results: A high correlation was found between the experimental data corresponding to the intrinsic detector response and the fitted values obtained by means of an asymmetric Gaussian distribution. For all PSF models, resolution improved as the distance from the point source to the center of the field of view increased and when the acquisition radius diminished. An improvement in resolution was observed after a minimum of five iterations when the PSF modeling included more corrections. Contrast, recovery coefficients, and contrast-to-noise ratio were better for the same level of noise in the image when more accurate models were included. Ring-type artifacts were observed when the number of iterations exceeded 12.
    Conclusions: Accurate modeling of the PSF improves resolution, contrast, and recovery coefficients in the reconstructed images. To avoid the appearance of ring-type artifacts, the number of iterations should be limited. In low magnification systems, the intrinsic detector PSF plays a major role in the improvement of image-quality parameters.
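
    The conclusion about low magnification systems can be made concrete with the standard pinhole resolution budget, in which the geometric and intrinsic-detector contributions add in quadrature in object space. The numbers below are illustrative, not the system's actual geometry:

```python
import math

# Object-space resolution budget for a pinhole system: geometric (aperture)
# and intrinsic-detector contributions combined in quadrature. The numbers
# are illustrative, not the geometry of the system in the study.
def system_resolution(d_eff, focal_len, source_dist, r_intrinsic):
    """Return (total, geometric, detector) resolution in object space (mm)."""
    M = focal_len / source_dist              # magnification
    r_geo = d_eff * (1.0 + 1.0 / M)          # pinhole aperture blur
    r_det = r_intrinsic / M                  # intrinsic PSF projected back
    return math.hypot(r_geo, r_det), r_geo, r_det

# Low magnification (M = 0.5): the intrinsic detector term dominates.
total, geo, det = system_resolution(d_eff=1.0, focal_len=50.0,
                                    source_dist=100.0, r_intrinsic=2.0)
print(f"total {total:.2f} mm: geometric {geo:.2f}, detector {det:.2f}")
```

    Because the intrinsic PSF is divided by the magnification, at M < 1 it is amplified in object space, which is why modeling the measured detector response mattered most here.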

  11. Variational estimation of process parameters in a simplified atmospheric general circulation model

    NASA Astrophysics Data System (ADS)

    Lv, Guokun; Koehl, Armin; Stammer, Detlef

    2016-04-01

    Parameterizations are used to simulate the effects of unresolved sub-grid-scale processes in current state-of-the-art climate models. The values of the process parameters, which determine the model's climatology, are usually adjusted manually to reduce the difference between the model mean state and the observed climatology. This process requires detailed knowledge of the model and its parameterizations. In this work, a variational method was used to estimate process parameters in the Planet Simulator (PlaSim). The adjoint code was generated using automatic differentiation of the source code. Some hydrological processes were switched off to remove the influence of zero-order discontinuities. In addition, the nonlinearity of the model limits the feasible assimilation window to about 1 day, which is too short to tune the model's climatology. To extend the feasible assimilation window, nudging terms for all state variables were added to the model's equations, which essentially suppress all unstable directions. In identical twin experiments, we found that the feasible assimilation window could be extended to over 1 year and accurate parameters could be retrieved. Although the nudging terms transform into a damping of the adjoint variables and therefore tend to erase the information of the data over time, assimilating climatological information is shown to provide sufficient information on the parameters. Moreover, the mechanism of this regularization is discussed.

  12. MIA-Clustering: a novel method for segmentation of paleontological material.

    PubMed

    Dunmore, Christopher J; Wollny, Gert; Skinner, Matthew M

    2018-01-01

    Paleontological research increasingly uses high-resolution micro-computed tomography (μCT) to study the inner architecture of modern and fossil bone material to answer important questions regarding vertebrate evolution. This non-destructive method allows for the measurement of otherwise inaccessible morphology. Digital measurement is predicated on the accurate segmentation of modern or fossilized bone from other structures imaged in μCT scans, as errors in segmentation can result in inaccurate calculations of structural parameters. Several approaches to image segmentation have been proposed with varying degrees of automation, ranging from completely manual segmentation to the selection of input parameters required for computational algorithms. Many of these segmentation algorithms provide speed and reproducibility at the cost of the flexibility that manual segmentation provides. In particular, the segmentation of modern and fossil bone in the presence of materials such as desiccated soft tissue, soil matrix or precipitated crystalline material can be difficult. Here we present a free, open-source segmentation application capable of segmenting modern and fossil bone that also reduces subjective user decisions to a minimum. We compare the effectiveness of this algorithm with another leading method by using both to measure the parameters of a reference object of known dimensions, as well as to segment an example problematic fossil scan. The results demonstrate that the medical image analysis-clustering method produces accurate segmentations and offers more flexibility than those of equivalent precision. Its free availability, flexibility to deal with non-bone inclusions and limited need for user input give it broad applicability in anthropological, anatomical, and paleontological contexts.
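    The MIA-clustering pipeline itself is not reproduced here; as a minimal sketch of intensity-based clustering for segmentation, the snippet below runs a plain k-means on a synthetic three-phase "slice" (the air/matrix/bone intensities and noise level are invented):

```python
import numpy as np

rng = np.random.default_rng(3)
# synthetic "uCT slice": three intensity classes (air, matrix, bone) + noise
labels_true = rng.integers(0, 3, size=(64, 64))
means_true = np.array([10.0, 120.0, 240.0])
img = means_true[labels_true] + rng.normal(0.0, 8.0, (64, 64))

# plain 1-D k-means on voxel intensities (stand-in for the clustering step)
centers = np.array([0.0, 100.0, 255.0])      # rough initial guesses
for _ in range(20):
    # assign each voxel to its nearest class center, then update the centers
    assign = np.argmin(np.abs(img[..., None] - centers), axis=-1)
    centers = np.array([img[assign == k].mean() for k in range(3)])
```

Real scans add partial-volume voxels and non-bone inclusions, which is exactly where the paper's method earns its flexibility over a plain intensity clustering like this.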

  13. The fast and accurate 3D-face scanning technology based on laser triangle sensors

    NASA Astrophysics Data System (ADS)

    Wang, Jinjiang; Chang, Tianyu; Ge, Baozhen; Tian, Qingguo; Chen, Yang; Kong, Bin

    2013-08-01

    A laser triangle scanning method and the structure of a 3D-face measurement system are introduced. In the presented system, a line laser source was selected as the optical indicator so that one line is scanned at a time. A CCD image sensor was used to capture the image of the laser line modulated by the human face. The system parameters were obtained by calibration: the lens parameters of the imaging part were calibrated with a machine-vision method, and the triangulation structure parameters were calibrated with finely spaced parallel wires. The CCD imaging part and the line laser indicator were mounted on a linear motor carriage that scans the laser line from the top of the head to the neck. Because the nose protrudes and the eyes are recessed, a single CCD image sensor cannot capture the complete image of the laser line; in this system, two CCD image sensors were set symmetrically on the two sides of the laser indicator. In effect, this structure comprises two laser triangulation measurement units. Another novel design choice is that three laser indicators were arranged to reduce the scanning time, since it is difficult for a person to stay still for a long time. The 3D data were calculated after scanning, and further data processing includes 3D coordinate refinement, mesh calculation, and surface rendering. Experiments show that this system has a simple structure, high scanning speed, and good accuracy. The scanning range covers the whole head of an adult, and the typical resolution is 0.5 mm.
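    The triangulation geometry can be sketched with a single-camera model; the focal length, baseline, and laser tilt below are invented, not the paper's calibrated values:

```python
import numpy as np

F_PIX = 1600.0              # focal length in pixels (assumed)
BASELINE = 150.0            # camera-laser baseline in mm (assumed)
ALPHA = np.deg2rad(20.0)    # laser plane tilt toward the camera axis (assumed)

def depth_from_pixel(u):
    """Triangulated depth z (mm): the camera ray x = u*z/f meets the
    laser plane x = b - z*tan(alpha), giving z = b / (u/f + tan(alpha))."""
    return BASELINE / (u / F_PIX + np.tan(ALPHA))

# forward check: project a known depth to a pixel, then recover it
z_true = 350.0
u = F_PIX * (BASELINE / z_true - np.tan(ALPHA))
z_back = depth_from_pixel(u)
```

With these assumed numbers, one pixel of laser-line displacement corresponds to z²/(f·b) ≈ 0.51 mm of depth at 350 mm, the same order as the 0.5 mm resolution quoted above.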

  14. Estimating the Effective System Dead Time Parameter for Correlated Neutron Counting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Croft, Stephen; Cleveland, Steve; Favalli, Andrea

    Neutron time correlation analysis is one of the main technical nuclear safeguards techniques used to verify declarations of, or to independently assay, special nuclear materials. Quantitative information is generally extracted from the neutron-event pulse train, collected from moderated assemblies of 3He proportional counters, in the form of correlated count rates that are derived from event-triggered coincidence gates. These count rates, most commonly referred to as singles, doubles and triples rates etc., when extracted using shift-register autocorrelation logic, are related to the reduced factorial moments of the time correlated clusters of neutrons emerging from the measurement items. Correcting these various rates for dead time losses has received considerable attention recently. The dead time losses for the higher moments in particular, and especially for large mass (high rate and highly multiplying) items, can be significant. Consequently, even in thoughtfully designed systems, accurate dead time treatments are needed if biased mass determinations are to be avoided. In support of this effort, in this paper we discuss a new approach to experimentally estimate the effective system dead time of neutron coincidence counting systems. It involves counting a random neutron source (e.g. AmLi is a good approximation to a source without correlated emission) and relating the second and higher moments of the neutron number distribution recorded in random triggered interrogation coincidence gates to the effective value of the dead time parameter. We develop the theoretical basis of the method and apply it to the Oak Ridge Large Volume Active Well Coincidence Counter using sealed AmLi radionuclide neutron sources and standard multiplicity shift register electronics.
The method is simple to apply compared to the predominant present approach, which involves using a set of 252Cf sources spanning a wide range of emission rates; it gives excellent precision in a conveniently short time, and it yields consistent results as a function of the order of the moment used to extract the dead time parameter. This latter observation is reassuring in that it suggests the assumptions underpinning the theoretical analysis are fit for practical application purposes. However, we found that the effective dead time parameter obtained is not constant, as might be expected for a parameter that in the dead time model is characteristic of the detector system, but rather varies systematically with gate width.
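    The multiplicity shift-register analysis is not reproduced here; as a sketch of the core idea — dead time suppresses the second moment of counts from a random (AmLi-like) source — the snippet simulates a non-paralyzable dead time and recovers it from the Fano factor of gate counts, using the renewal-theory result F = 1/(1 + λτ)²:

```python
import numpy as np

rng = np.random.default_rng(0)
LAM = 1.0e5      # true source event rate (1/s), AmLi-like random source
TAU = 2.0e-6     # non-paralyzable dead time to be recovered (s)
GATE = 1.0e-2    # gate width (s), long compared with TAU

# For a Poisson source with non-paralyzable dead time, the recorded
# inter-event times are exactly TAU + Exp(1/LAM) (memorylessness).
n_events = 4_000_000
times = np.cumsum(TAU + rng.exponential(1.0 / LAM, n_events))

# bin the recorded events into consecutive fixed-width gates
n_gates = int(times[-1] // GATE)
counts, _ = np.histogram(times, bins=GATE * np.arange(n_gates + 1))

# Renewal theory gives an asymptotic Fano factor F = 1/(1 + LAM*TAU)**2
# for the gate counts, so the dead time follows from the second moment.
fano = counts.var() / counts.mean()
tau_est = (1.0 / np.sqrt(fano) - 1.0) / LAM
```

With a correlated (multiplying) source the counts would instead be over-dispersed, which is why the abstract's uncorrelated AmLi source is essential to isolating the dead time effect.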


  16. Asymptotic solutions for the case of nearly symmetric gravitational lens systems

    NASA Astrophysics Data System (ADS)

    Wertz, O.; Pelgrims, V.; Surdej, J.

    2012-08-01

    Gravitational lensing provides a powerful tool to determine the Hubble parameter H0 from the measurement of the time delay Δt between two lensed images of a background variable source. Nevertheless, knowledge of the deflector mass distribution constitutes a hurdle. In the present work, we propose solutions for the case of nearly symmetric gravitational lens systems. For the case of a small misalignment between the source, the deflector and the observer, we first consider power-law (ɛ) axially symmetric models for which we derive an analytical relation between the amplification ratio and the source position which is independent of the power-law slope ɛ. From this relation, we deduce an expression for H0 that is likewise irrespective of the value of ɛ. Secondly, we consider the power-law axially symmetric lens models with an external large-scale gravitational field, the shear γ, resulting in the so-called ɛ-γ models, for which we deduce simple first-order equations linking the model parameters and the lensed image positions, the latter being observable quantities. We also deduce simple relations between H0 and observable quantities only. From these equations, we may estimate the value of the Hubble parameter in a robust way. Nevertheless, comparison between the ɛ-γ and singular isothermal ellipsoid (SIE) models leads to the conclusion that these models remain most often distinct. Therefore, even for the case of a small misalignment, use of the first-order equations and precise astrometric measurements of the positions of the lensed images with respect to the centre of the deflector enables one to discriminate between these two families of models. Finally, we confront the models with numerical simulations to evaluate the intrinsic error of the first-order expressions used when deriving the model parameters under the assumption of a quasi-alignment between the source, the deflector and the observer.
From these same simulations, we estimate for the case of the ɛ-γ family of models that the standard deviation affecting H0 is ? which merely reflects the adopted astrometric uncertainties on the relative image positions, typically ? arcsec. In conclusion, we stress the importance of getting very accurate measurements of the relative positions of the multiple lensed images and of the time delays for the case of nearly symmetric gravitational lens systems, in order to derive robust and precise values of the Hubble parameter.
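    The ɛ-γ first-order machinery is not reproduced here; as a minimal sketch of how a measured delay constrains H0, the snippet uses a singular isothermal sphere in flat ΛCDM, where every distance, and hence the delay, scales as 1/H0 (the redshifts and image radii are invented):

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458       # speed of light (km/s)
MPC_KM = 3.0857e19        # km per Mpc
OM = 0.3                  # flat LCDM matter density (assumed)

def ang_dist(z1, z2, h0):
    """Angular-diameter distance between z1 and z2 (Mpc), flat LCDM."""
    dc, _ = quad(lambda z: 1.0 / np.sqrt(OM*(1+z)**3 + 1.0 - OM), z1, z2)
    return (C_KM_S / h0) * dc / (1.0 + z2)

def sis_delay_days(th1, th2, zl, zs, h0):
    """SIS time delay (days) between images at radii th1 > th2 (arcsec)."""
    rad = np.pi / (180.0 * 3600.0)
    d_dt = (1+zl) * ang_dist(0, zl, h0) * ang_dist(0, zs, h0) / ang_dist(zl, zs, h0)
    return d_dt * MPC_KM / C_KM_S * 0.5 * ((th1*rad)**2 - (th2*rad)**2) / 86400.0

# the delay scales as 1/H0, so a measured delay rescales a fiducial model:
dt_obs = sis_delay_days(1.2, 0.8, 0.5, 2.0, 72.0)   # synthetic "measurement"
h0_est = 70.0 * sis_delay_days(1.2, 0.8, 0.5, 2.0, 70.0) / dt_obs
```

The rescaling recovers the input H0 exactly here only because the lens model is assumed perfect; the paper's point is precisely that the mass-model family (ɛ-γ versus SIE) and the astrometry dominate the real error budget.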

  17. Estimating the effective system dead time parameter for correlated neutron counting

    NASA Astrophysics Data System (ADS)

    Croft, Stephen; Cleveland, Steve; Favalli, Andrea; McElroy, Robert D.; Simone, Angela T.

    2017-11-01

    Neutron time correlation analysis is one of the main technical nuclear safeguards techniques used to verify declarations of, or to independently assay, special nuclear materials. Quantitative information is generally extracted from the neutron-event pulse train, collected from moderated assemblies of 3He proportional counters, in the form of correlated count rates that are derived from event-triggered coincidence gates. These count rates, most commonly referred to as singles, doubles and triples rates etc., when extracted using shift-register autocorrelation logic, are related to the reduced factorial moments of the time correlated clusters of neutrons emerging from the measurement items. Correcting these various rates for dead time losses has received considerable attention recently. The dead time losses for the higher moments in particular, and especially for large mass (high rate and highly multiplying) items, can be significant. Consequently, even in thoughtfully designed systems, accurate dead time treatments are needed if biased mass determinations are to be avoided. In support of this effort, in this paper we discuss a new approach to experimentally estimate the effective system dead time of neutron coincidence counting systems. It involves counting a random neutron source (e.g. AmLi is a good approximation to a source without correlated emission) and relating the second and higher moments of the neutron number distribution recorded in random triggered interrogation coincidence gates to the effective value of dead time parameter. We develop the theoretical basis of the method and apply it to the Oak Ridge Large Volume Active Well Coincidence Counter using sealed AmLi radionuclide neutron sources and standard multiplicity shift register electronics. 
The method is simple to apply compared to the predominant present approach, which involves using a set of 252Cf sources spanning a wide range of emission rates; it gives excellent precision in a conveniently short time, and it yields consistent results as a function of the order of the moment used to extract the dead time parameter. This latter observation is reassuring in that it suggests the assumptions underpinning the theoretical analysis are fit for practical application purposes. However, we found that the effective dead time parameter obtained is not constant, as might be expected for a parameter that in the dead time model is characteristic of the detector system, but rather varies systematically with gate width.

  18. Development of mine explosion ground truth smart sensors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, Steven R.; Harben, Phillip E.; Jarpe, Steve

    Accurate seismo-acoustic source location is one of the fundamental aspects of nuclear explosion monitoring. Critical to improved location is the compilation of ground truth data sets for which origin time and location are accurately known. Substantial efforts by the National Laboratories and other seismic monitoring groups have been undertaken to acquire and develop ground truth catalogs that form the basis of location efforts (e.g. Sweeney, 1998; Bergmann et al., 2009; Waldhauser and Richards, 2004). In particular, more GT1 (Ground Truth 1 km) events are required to improve three-dimensional velocity models that are currently under development. Mine seismicity can form the basis of accurate ground truth datasets. Although the location of mining explosions can often be accurately determined using array methods (e.g. Harris, 1991) and from overhead observations (e.g. MacCarthy et al., 2008), accurate origin time estimation can be difficult. Occasionally, mine operators will share shot time, location, explosion size and even shot configuration, but this is rarely done, especially in foreign countries. Additionally, shot times provided by mine operators are often inaccurate. An inexpensive ground truth event detector that could be mailed to a contact, placed in close proximity (< 5 km) to mining regions or earthquake aftershock regions, and that automatically transmits back ground-truth parameters would greatly aid in the development of ground truth datasets that could be used to improve nuclear explosion monitoring capabilities. We are developing an inexpensive, compact, lightweight smart sensor unit (or units) that could be used in the development of ground truth datasets for the purpose of improving nuclear explosion monitoring capabilities.
The units must be easy to deploy, be able to operate autonomously for a significant period of time (> 6 months) and inexpensive enough to be discarded after useful operations have expired (although this may not be part of our business plan). Key parameters to be automatically determined are event origin time (within 0.1 sec), location (within 1 km) and size (within 0.3 magnitude units) without any human intervention. The key parameter ground truth information from explosions greater than magnitude 2.5 will be transmitted to a recording and transmitting site. Because we have identified a limited bandwidth, inexpensive two-way satellite communication (ORBCOMM), we have devised the concept of an accompanying Ground-Truth Processing Center that would enable calibration and ground-truth accuracy to improve over the duration of a deployment.
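    The smart-sensor algorithms are not specified in the abstract; one standard minimal ingredient for automatic origin-time picking is an STA/LTA trigger, sketched here on synthetic data (the window lengths, threshold, and synthetic onset are illustrative, not the sensor's actual design):

```python
import numpy as np

def sta_lta(x, nsta=50, nlta=1000):
    """Short-term over long-term average of signal power (classic trigger)."""
    c = np.cumsum(x.astype(float) ** 2)
    ratio = np.zeros(len(x))
    for i in range(nlta, len(x)):
        sta = (c[i] - c[i - nsta]) / nsta     # short window mean power
        lta = (c[i] - c[i - nlta]) / nlta     # long window mean power
        ratio[i] = sta / (lta + 1e-12)
    return ratio

rng = np.random.default_rng(4)
trace = rng.normal(0.0, 1.0, 5000)
trace[3000:] += 10.0 * np.sin(0.3 * np.arange(2000))   # synthetic P onset

ratio = sta_lta(trace)
onset = int(np.argmax(ratio > 5.0))   # first sample where the trigger fires
```

A fielded unit would refine this raw pick against a calibrated velocity model to reach the 0.1 s origin-time goal quoted above.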

  19. Convergence in parameters and predictions using computational experimental design.

    PubMed

    Hagen, David R; White, Jacob K; Tidor, Bruce

    2013-08-06

    Typically, biological models fitted to experimental data suffer from significant parameter uncertainty, which can lead to inaccurate or uncertain predictions. One school of thought holds that accurate estimation of the true parameters of a biological system is inherently problematic. Recent work, however, suggests that optimal experimental design techniques can select sets of experiments whose members probe complementary aspects of a biochemical network that together can account for its full behaviour. Here, we implemented an experimental design approach for selecting sets of experiments that constrain parameter uncertainty. We demonstrated with a model of the epidermal growth factor-nerve growth factor pathway that, after synthetically performing a handful of optimal experiments, the uncertainty in all 48 parameters converged below 10 per cent. Furthermore, the fitted parameters converged to their true values with a small error consistent with the residual uncertainty. When untested experimental conditions were simulated with the fitted models, the predicted species concentrations converged to their true values with errors that were consistent with the residual uncertainty. This paper suggests that accurate parameter estimation is achievable with complementary experiments specifically designed for the task, and that the resulting parametrized models are capable of accurate predictions.
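    The EGF-NGF pathway model is far richer than this; the toy below illustrates the underlying principle, greedily selecting the experiment that most shrinks total parameter variance (A-optimal scoring of the Fisher matrix) for a linear model with an invented candidate pool:

```python
import numpy as np

rng = np.random.default_rng(1)
n_params, n_candidates = 4, 50
X = rng.normal(size=(n_candidates, n_params))   # candidate experiment designs
SIGMA2 = 1.0                                    # measurement noise variance

def total_variance(F):
    """A-optimality score: summed parameter variances, trace(F^-1)."""
    return np.trace(np.linalg.inv(F))

F = 1e-6 * np.eye(n_params)     # weak prior keeps F invertible at the start
history = []
for _ in range(8):
    # score each candidate by the uncertainty left after adding it
    scores = [total_variance(F + np.outer(x, x) / SIGMA2) for x in X]
    best = int(np.argmin(scores))
    F += np.outer(X[best], X[best]) / SIGMA2
    history.append(total_variance(F))

param_sigmas = np.sqrt(np.diag(np.linalg.inv(F)))
```

Each greedily chosen rank-one update can only grow the Fisher matrix, so the summed variance is monotonically driven down — the linear analogue of the paper's observation that a handful of complementary experiments collapses the uncertainty in all parameters.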

  20. Towards a comprehensive knowledge of the open cluster Haffner 9

    NASA Astrophysics Data System (ADS)

    Piatti, Andrés E.

    2017-03-01

    We turn our attention to Haffner 9, a Milky Way open cluster whose previous fundamental parameter estimates are far from being in agreement. In order to provide accurate estimates, we present high-quality Washington CT1 and Johnson BVI photometry of the cluster field. We took particular care in statistically cleaning the colour-magnitude diagrams (CMDs) from field star contamination, which was found to be a common source of the discordant fundamental parameter estimates in previous works. The resulting cluster CMD fiducial features were confirmed by a proper motion membership analysis. Haffner 9 is a moderately young object (age ∼350 Myr), located in the Perseus arm at a heliocentric distance of ∼3.2 kpc, with a lower limit for its present mass of ∼160 M⊙ and a nearly solar metal content. The combination of the cluster structural and fundamental parameters suggests that it is in an advanced stage of internal dynamical evolution, possibly in the phase typical of clusters with mass segregation in their core regions. However, the cluster still keeps its mass function close to Salpeter's law.

  1. Impact of Next-to-Leading Order Contributions to Cosmic Microwave Background Lensing.

    PubMed

    Marozzi, Giovanni; Fanizza, Giuseppe; Di Dio, Enea; Durrer, Ruth

    2017-05-26

    In this Letter we study the impact on cosmological parameter estimation, from present and future surveys, due to lensing corrections on cosmic microwave background temperature and polarization anisotropies beyond leading order. In particular, we show how post-Born corrections, large-scale structure effects, and the correction due to the change in the polarization direction between the emission at the source and the detection at the observer are non-negligible in the determination of the polarization spectra. They have to be taken into account for an accurate estimation of cosmological parameters sensitive to or even based on these spectra. We study in detail the impact of higher order lensing on the determination of the tensor-to-scalar ratio r and on the estimation of the effective number of relativistic species N_{eff}. We find that neglecting higher order lensing terms can lead to misinterpreting these corrections as a primordial tensor-to-scalar ratio of about O(10^{-3}). Furthermore, it leads to a shift of the parameter N_{eff} by nearly 2σ at the level of accuracy targeted by future S4 surveys.

  2. Applicability of source scaling relations for crustal earthquakes to estimation of the ground motions of the 2016 Kumamoto earthquake

    NASA Astrophysics Data System (ADS)

    Irikura, Kojiro; Miyakoshi, Ken; Kamae, Katsuhiro; Yoshida, Kunikazu; Somei, Kazuhiro; Kurahashi, Susumu; Miyake, Hiroe

    2017-01-01

    A two-stage scaling relationship of the source parameters for crustal earthquakes in Japan has previously been constructed, in which source parameters obtained from the results of waveform inversion of strong motion data are combined with parameters estimated based on geological and geomorphological surveys. A three-stage scaling relationship was subsequently developed to extend scaling to crustal earthquakes with magnitudes greater than Mw 7.4. The effectiveness of these scaling relationships was then examined based on the results of waveform inversion of 18 recent crustal earthquakes (Mw 5.4-6.9) that occurred in Japan since the 1995 Hyogo-ken Nanbu earthquake. The 2016 Kumamoto earthquake, with Mw 7.0, was one of the largest earthquakes to occur since dense and accurate strong motion observation networks, such as K-NET and KiK-net, were deployed after the 1995 Hyogo-ken Nanbu earthquake. We examined the applicability of the scaling relationships of the source parameters of crustal earthquakes in Japan to the 2016 Kumamoto earthquake. The rupture area and asperity area were determined based on slip distributions obtained from waveform inversion of the 2016 Kumamoto earthquake observations. We found that the relationship between the rupture area and the seismic moment for the 2016 Kumamoto earthquake follows the second-stage scaling within one standard deviation (σ = 0.14). The ratio of the asperity area to the rupture area for the 2016 Kumamoto earthquake is nearly the same as ratios previously obtained for crustal earthquakes. Furthermore, we simulated the ground motions of this earthquake using a characterized source model consisting of strong motion generation areas (SMGAs) based on the empirical Green's function (EGF) method. The locations and areas of the SMGAs were determined through comparison between the synthetic ground motions and observed motions. The sizes of the SMGAs were nearly coincident with the asperities with large slip.
The synthetic ground motions obtained using the EGF method agree well with the observed motions in terms of acceleration, velocity, and displacement within the frequency range of 0.3-10 Hz. These findings indicate that the 2016 Kumamoto earthquake is a standard event that follows the scaling relationship of crustal earthquakes in Japan.

  3. Development of a golden beam data set for the commissioning of a proton double-scattering system in a pencil-beam dose calculation algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slopsema, R. L., E-mail: rslopsema@floridaproton.org; Flampouri, S.; Yeung, D.

    2014-09-15

    Purpose: The purpose of this investigation is to determine if a single set of beam data, described by a minimal set of equations and fitting variables, can be used to commission different installations of a proton double-scattering system in a commercial pencil-beam dose calculation algorithm. Methods: The beam model parameters required to commission the pencil-beam dose calculation algorithm (virtual and effective SAD, effective source size, and pristine-peak energy spread) are determined for a commercial double-scattering system. These parameters are measured in a first room and parameterized as function of proton energy and nozzle settings by fitting four analytical equations to the measured data. The combination of these equations and fitting values constitutes the golden beam data (GBD). To determine the variation in dose delivery between installations, the same dosimetric properties are measured in two additional rooms at the same facility, as well as in a single room at another facility. The difference between the room-specific measurements and the GBD is evaluated against tolerances that guarantee the 3D dose distribution in each of the rooms matches the GBD-based dose distribution within clinically reasonable limits. The pencil-beam treatment-planning algorithm is commissioned with the GBD. The three-dimensional dose distribution in water is evaluated in the four treatment rooms and compared to the treatment-planning calculated dose distribution. Results: The virtual and effective SAD measurements fall between 226 and 257 cm. The effective source size varies between 2.4 and 6.2 cm for the large-field options, and 1.0 and 2.0 cm for the small-field options. The pristine-peak energy spread decreases from 1.05% at the lowest range to 0.6% at the highest. The virtual SAD as well as the effective source size can be accurately described by a linear relationship as function of the inverse of the residual energy.
An additional linear correction term as function of RM-step thickness is required for accurate parameterization of the effective SAD. The GBD energy spread is given by a linear function of the exponential of the beam energy. Except for a few outliers, the measured parameters match the GBD within the specified tolerances in all of the four rooms investigated. For a SOBP field with a range of 15 g/cm² and an air gap of 25 cm, the maximum difference in the 80%–20% lateral penumbra between the GBD-commissioned treatment-planning system and measurements in any of the four rooms is 0.5 mm. Conclusions: The beam model parameters of the double-scattering system can be parameterized with a limited set of equations and parameters. This GBD closely matches the measured dosimetric properties in four different rooms.
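    The linear dependence of the virtual SAD on the inverse residual energy can be sketched with a straight-line fit; the calibration numbers below are invented stand-ins, not the paper's golden beam data:

```python
import numpy as np

# hypothetical room measurements: residual energy (MeV) vs virtual SAD (cm)
e_res = np.array([70.0, 100.0, 130.0, 160.0, 190.0, 220.0])
rng = np.random.default_rng(5)
vsad = 230.0 + 1500.0 / e_res + rng.normal(0.0, 0.2, e_res.size)

# GBD-style parameterization: vsad = a + b / E_res, fitted as a line in 1/E
b, a = np.polyfit(1.0 / e_res, vsad, 1)   # polyfit returns [slope, intercept]

predicted = a + b / e_res
rms = np.sqrt(np.mean((predicted - vsad) ** 2))
```

Commissioning a new room then reduces to checking that its measured points fall within tolerance of this fitted curve rather than refitting from scratch.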

  4. Efficient Third Harmonic Generation for Wind Lidar Applications

    NASA Technical Reports Server (NTRS)

    Mordaunt, David W.; Cheung, Eric C.; Ho, James G.; Palese, Stephen P.

    1998-01-01

    The characterization of atmospheric winds on a global basis is a key parameter required for accurate weather prediction. The use of a space based lidar system for remote measurement of wind speed would provide detailed and highly accurate data for future weather prediction models. This paper reports the demonstration of efficient third harmonic conversion of a 1 micrometer laser to provide an ultraviolet (UV) source suitable for a wind lidar system based on atmospheric molecular scattering. Although infrared based lidars using aerosol scattering have been demonstrated to provide accurate wind measurement, a UV based system using molecular or Rayleigh scattering will provide accurate global wind measurements, even in those areas of the atmosphere where the aerosol density is too low to yield good infrared backscatter signals. The overall objective of this work is to demonstrate the maturity of the laser technology and its suitability for a near term flight aboard the space shuttle. The laser source is based on diode-pumped solid-state laser technology which has been extensively demonstrated at TRW in a variety of programs and internal development efforts. The pump laser used for the third harmonic demonstration is a breadboard system, designated the Laser for Risk Reduction Experiments (LARRE), which has been operating regularly for over 5 years. The laser technology has been further refined in an engineering model designated as the Compact Advanced Pulsed Solid-State Laser (CAPSSL), in which the laser head was packaged into an 8 x 8 x 18 inch volume with a weight of approximately 61 pounds. The CAPSSL system is a ruggedized configuration suitable for typical military applications. The LARRE and CAPSSL systems are based on Nd:YAG with an output wavelength of 1064 nm. 
The current work proves the viability of converting the Nd:YAG fundamental to the third harmonic wavelength at 355 nm for use in a direct detection wind lidar based on atmospheric Rayleigh scattering.
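    The 1064 nm to 355 nm conversion is sum-frequency mixing of the fundamental with its own second harmonic; since photon energies (inverse wavelengths) add, the arithmetic is:

```python
fund = 1064.0                            # Nd:YAG fundamental (nm)
second = fund / 2.0                      # second harmonic (SHG): 532 nm
# sum-frequency generation of 1064 nm + 532 nm: 1/l3 = 1/l1 + 1/l2
third = 1.0 / (1.0 / fund + 1.0 / second)   # -> 1064/3 = 354.7 nm
```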

  5. Accuracy of telephone reference service in health sciences libraries.

    PubMed Central

    Paskoff, B M

    1991-01-01

    Six factual queries were unobtrusively telephoned to fifty-one U.S. academic health sciences and hospital libraries. The majority of the queries (63.4%) were answered accurately. Referrals to another library or information source were made for 25.2% of the queries. Eleven answers (3.6%) were inaccurate, and no answer was provided for 7.8% of the queries. There was a correlation between the number of accurate answers provided and the presence of at least one staff member with a master's degree in library and information science. The correlation between employing a librarian certified by the Medical Library Association (MLA) and providing accurate answers was significant. The majority of referrals were to specific sources. If these "helpful referrals" are counted with accurate answers as correct responses, they total 76.8% of the answers. In a follow-up survey, five libraries stated that they did not provide accurate answers because they did not own an appropriate source. Staff-related problems were given as reasons for other than accurate answers by two of the libraries, while eight indicated that library policy prevented them from providing answers to the public. PMID:2039904

  6. Earthquake source parameters determined using the SAFOD Pilot Hole vertical seismic array

    NASA Astrophysics Data System (ADS)

    Imanishi, K.; Ellsworth, W. L.; Prejean, S. G.

    2003-12-01

    We determined source parameters of microearthquakes occurring at Parkfield, CA, using the SAFOD Pilot Hole vertical seismic array. The array consists of 32 stations with 3-component 15 Hz geophones at 40 meter spacing (856 to 2096 m depth). The site is about 1.8 km southwest of a segment of the San Andreas fault characterized by a combination of aseismic creep and repeating microearthquakes. We analyzed seismograms recorded at sample rates of 1 kHz or 2 kHz. Spectra have high signal-to-noise ratios at frequencies up to 300-400 Hz, showing these data include information on source processes of microearthquakes. By comparing spectra and waveforms at different levels of the array, we observe how attenuation and scattering in the shallow crust affect high-frequency waves. We estimated the spectral level (Ω0), corner frequency (fc) and path-averaged attenuation (Q) at each level of the array by fitting an omega-squared model to displacement spectra. While the spectral level changes smoothly with depth, there is significant scatter in fc and Q due to the strong trade-off between these parameters. Because we expect source parameters to vary systematically with depth, we impose a smoothness constraint on Q, Ω0 and fc as a function of depth. For some of the nearby events, take-off angles to the different levels of the array span a significant part of the focal sphere. Therefore corner frequencies should also change with depth. We smooth measurements using a linear first-difference operator that links Q, Ω0 and fc at one level to the levels above and below, and use Akaike's Bayesian Information Criterion (ABIC) to weight the smoothing operators. We applied this approach to events with high signal-to-noise ratios. For the results with the minimum ABIC, fc does not scatter and Q decreases with decreasing depth. Seismic moments were determined from the spectral level and range from 10^9 to 10^12 N m.
Source radii were estimated from the corner frequency using the circular crack model of Sato and Hirasawa (1973). Estimated values of static stress drop were roughly 1 MPa and do not vary with seismic moment. Q values from all earthquakes were averaged at each level of the array. Average Qp and Qs range from 250 to 350 and from 300 to 400 between the top and bottom of the array, respectively. Increasing Q values as a function of depth explain well the observed decrease in high-frequency content as waves propagate toward the surface. Thus, by jointly analyzing the entire vertical array we can both accurately determine source parameters of microearthquakes and make reliable Q estimates while suppressing the trade-off between fc and Q.
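
    The spectral-fitting step described above can be sketched as follows. This is a minimal illustration, not the authors' smoothed multi-level ABIC inversion: a plain grid search over (fc, Q) with the spectral level Ω0 solved in closed form for each trial pair, applied to a synthetic noiseless spectrum. All parameter values are hypothetical.

```python
import math

def omega_squared_model(f, omega0, fc, q, travel_time):
    """Displacement amplitude spectrum: omega-squared source shape times
    whole-path attenuation exp(-pi*f*t/Q)."""
    return omega0 / (1.0 + (f / fc) ** 2) * math.exp(-math.pi * f * travel_time / q)

def fit_spectrum(freqs, amps, travel_time, fc_grid, q_grid):
    """Grid search over (fc, Q); the best log(Omega0) for each trial pair
    follows in closed form from the log-domain least-squares misfit."""
    best = None
    for fc in fc_grid:
        for q in q_grid:
            shape = [omega_squared_model(f, 1.0, fc, q, travel_time) for f in freqs]
            logs = [math.log(a / s) for a, s in zip(amps, shape)]
            log_omega0 = sum(logs) / len(logs)
            misfit = sum((l - log_omega0) ** 2 for l in logs)
            if best is None or misfit < best[0]:
                best = (misfit, math.exp(log_omega0), fc, q)
    _, omega0, fc, q = best
    return omega0, fc, q
```

A real application would weight the misfit by the noise spectrum and, as in the paper, tie the per-level estimates together with a smoothness constraint to suppress the fc-Q trade-off.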

  7. Influence of conservative corrections on parameter estimation for extreme-mass-ratio inspirals

    NASA Astrophysics Data System (ADS)

    Huerta, E. A.; Gair, Jonathan R.

    2009-04-01

    We present an improved numerical kludge waveform model for circular, equatorial extreme-mass-ratio inspirals (EMRIs). The model is based on true Kerr geodesics, augmented by radiative self-force corrections derived from perturbative calculations, and in this paper for the first time we include conservative self-force corrections that we derive by comparison to post-Newtonian results. We present results of a Monte Carlo simulation of parameter estimation errors computed using the Fisher matrix and also assess the theoretical errors that would arise from omitting the conservative correction terms we include here. We present results for three different types of system, namely, the inspirals of black holes, neutron stars, or white dwarfs into a supermassive black hole (SMBH). The analysis shows that for a typical source (a 10 M⊙ compact object captured by a 10^6 M⊙ SMBH at a signal-to-noise ratio of 30) we expect to determine the two masses to within a fractional error of ~10^-4, measure the spin parameter q to ~10^-4.5, and determine the location of the source on the sky and the spin orientation to within 10^-3 steradians. We show that, for this kludge model, omitting the conservative corrections leads to a small error over much of the parameter space, i.e., the ratio R of the theoretical model error to the Fisher matrix error is R<1 for all ten parameters in the model. For the few systems with larger errors, typically R<3, and hence the conservative corrections can be marginally ignored. In addition, we use our model and first-order self-force results for Schwarzschild black holes to estimate the error that arises from omitting the second-order radiative piece of the self-force. This indicates that it may not be necessary to go beyond first order to recover accurate parameter estimates.
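
    The Fisher-matrix error estimate used above can be illustrated generically. The sketch below is a toy under stated assumptions (a simple sinusoid in white noise rather than an EMRI waveform): it builds the Fisher matrix from finite-difference waveform derivatives and reads off 1-sigma errors from the diagonal of its inverse (the Cramer-Rao bound).

```python
import numpy as np

def fisher_errors(model, theta, t, sigma, eps=1e-6):
    """1-sigma parameter errors from the Fisher matrix, via central finite
    differences. model(theta, t) -> waveform samples; sigma is the per-sample
    noise standard deviation (white noise assumed)."""
    theta = np.asarray(theta, dtype=float)
    derivs = []
    for i in range(len(theta)):
        dp = theta.copy(); dp[i] += eps * max(1.0, abs(theta[i]))
        dm = theta.copy(); dm[i] -= eps * max(1.0, abs(theta[i]))
        derivs.append((model(dp, t) - model(dm, t)) / (dp[i] - dm[i]))
    d = np.vstack(derivs)           # (n_params, n_samples) derivative matrix
    gamma = d @ d.T / sigma ** 2    # Fisher information matrix
    cov = np.linalg.inv(gamma)      # Cramer-Rao covariance bound
    return np.sqrt(np.diag(cov))
```

Because the Fisher matrix scales as 1/sigma^2, the predicted errors scale linearly with the noise level, which is a useful sanity check on any implementation.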

  8. A Robust Sound Source Localization Approach for Microphone Array with Model Errors

    NASA Astrophysics Data System (ADS)

    Xiao, Hua; Shao, Huai-Zong; Peng, Qi-Cong

    In this paper, a robust sound source localization approach is proposed. The approach retains good performance even when model errors exist. Compared with previous work in this field, the contributions of this paper are as follows. First, an improved broad-band, near-field array model is proposed. It takes array gain and phase perturbations into account, is based on the actual positions of the elements, and can be used with arbitrary planar array geometries. Second, a subspace model-errors estimation algorithm and a Weighted 2-Dimension Multiple Signal Classification (W2D-MUSIC) algorithm are proposed. The subspace model-errors estimation algorithm estimates the unknown parameters of the array model, i.e., the gain and phase perturbations and the positions of the elements, with high accuracy; its performance improves as the SNR or the number of snapshots increases. The W2D-MUSIC algorithm, based on the improved array model, is applied to locate sound sources. Together, these two algorithms constitute the robust sound source localization approach. The more accurate steering vectors they provide can also be used for further processing such as adaptive beamforming. Numerical examples confirm the effectiveness of the proposed approach.
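
    The MUSIC principle underlying W2D-MUSIC can be sketched in its simplest form. The code below is a simplification to the narrowband, far-field case with an ideal uniform linear array (the paper's model is broadband, near-field, and includes gain/phase/position perturbations): the pseudospectrum peaks where the steering vector is orthogonal to the noise subspace.

```python
import numpy as np

def music_spectrum(R, steering, n_sources, angles_deg):
    """Narrowband MUSIC pseudospectrum from the array covariance R.
    steering(theta_deg) -> array manifold vector for the assumed geometry."""
    _, v = np.linalg.eigh(R)                  # eigenvalues ascending
    noise = v[:, : R.shape[0] - n_sources]    # noise-subspace eigenvectors
    spec = []
    for th in angles_deg:
        a = steering(th)
        proj = noise.conj().T @ a             # projection onto noise subspace
        p = float(np.real(proj.conj() @ proj))
        spec.append(1.0 / max(p, 1e-30))      # floor avoids division by zero
    return np.array(spec)
```

With a noiseless rank-one covariance, the peak sits exactly at the source direction; model errors of the kind the paper addresses blunt and shift this peak, which is what the subspace error-estimation step is meant to repair.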

  9. Searching for continuous gravitational wave sources in binary systems

    NASA Astrophysics Data System (ADS)

    Dhurandhar, Sanjeev V.; Vecchio, Alberto

    2001-06-01

    We consider the problem of searching for continuous gravitational wave (CW) sources orbiting a companion object. This issue is of particular interest because low-mass X-ray binaries (LMXBs), and among them Sco X-1, the brightest X-ray source in the sky, might be marginally detectable with ~2 y of coherent observation time by the Earth-based laser interferometers expected to come on line by 2002, and clearly observable by the second generation of detectors. Moreover, several radio pulsars, which could be deemed CW sources, are found to orbit a companion star or planet, and the LIGO-VIRGO-GEO600 network plans to continuously monitor such systems. We estimate the computational costs for a search launched over the additional five parameters describing generic elliptical orbits (up to e ≲ 0.8) using matched filtering techniques. These techniques provide the optimal signal-to-noise ratio and also a very clear and transparent theoretical framework. Since matched filtering will be implemented in the final and most computationally expensive stage of the hierarchical strategies, the theoretical framework provided here can be used to determine the computational costs. In order to disentangle the computational burden involved in the orbital motion of the CW source from the other source parameters (position in the sky and spin-down) and reduce the complexity of the analysis, we assume that the source is monochromatic (there is no intrinsic change in its frequency) and that its location in the sky is exactly known. The orbital elements, on the other hand, are either assumed to be completely unknown or only partly known. We provide ready-to-use analytical expressions for the number of templates required to carry out the searches in the astrophysically relevant regions of the parameter space, and for how the computational cost scales with the ranges of the parameters. 
We also determine the critical accuracy to which a particular parameter must be known so that no search over it is needed; we provide rigorous statements, based on the geometrical formulation of data analysis, concerning the size of the parameter space for which a particular neutron star is a one-filter target. This result is formulated in a completely general form, independent of the particular kind of source, and can be applied to any class of signals whose waveform can be accurately predicted. We apply our theoretical analysis to Sco X-1 and to the 44 neutron stars with binary companions listed in the most recent version of the radio pulsar catalog. For up to ~3 h of coherent integration time, Sco X-1 will need at most a few templates; for 1 week of integration time the number of templates rapidly rises to ≈5×10^6. This is due to the rather poor measurements available today of the projected semi-major axis and the orbital phase of the neutron star. If, however, the same search is to be carried out with only a few filters, then more refined measurements of the orbital parameters are called for: an improvement of about three orders of magnitude in accuracy is required. Further, we show that the five neutron stars (radio pulsars) for which the upper limits on the signal strength are highest require no more than a few templates each and can be targeted very cheaply in terms of CPU time. Blind searches over the parameter space of orbital elements are, in general, completely unaffordable for present or near-future dedicated computational resources when the coherent integration time is of the order of the orbital period or longer. For wide binary systems, when the observation covers only a fraction of one orbit, the computational burden is reduced enormously and becomes affordable for a significant region of the parameter space.
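
    The template-count scaling behind such estimates can be illustrated with the standard hypercubic-lattice formula, in which the per-dimension template spacing is set by the maximal mismatch. The numbers below are purely illustrative and are not the paper's actual metric for elliptical orbits.

```python
import math

def n_templates(metric_volume, dim, mismatch):
    """Hypercubic template-bank count: per-dimension spacing dl = 2*sqrt(m/D)
    for maximal mismatch m in D dimensions; metric_volume is the proper
    volume of the parameter space under the mismatch metric."""
    dl = 2.0 * math.sqrt(mismatch / dim)
    return metric_volume / dl ** dim
```

The count grows steeply as the allowed mismatch shrinks or the dimensionality rises, which is why refining the measured orbital elements by a few orders of magnitude can turn a multi-million-template search into a one-filter target.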

  10. Research Advances on Radiation Transfer Modeling and Inversion for Multi-scale Land Surface Remote Sensing

    NASA Astrophysics Data System (ADS)

    Liu, Q.; Li, J.; Du, Y.; Wen, J.; Zhong, B.; Wang, K.

    2011-12-01

    As remote sensing data accumulate, generating highly accurate and consistent land surface parameter products from multi-source remote observations is a significant challenge, and radiation transfer modeling and inversion methodology form its theoretical basis. In this paper, recent research advances and unresolved issues are presented. First, after a general overview, recent advances in multi-scale remote sensing radiation transfer modeling are presented, including leaf spectrum models, vegetation canopy BRDF models, directional thermal infrared emission models, rugged mountain area radiation models, and kernel-driven models. Then, new methodologies for land surface parameter inversion based on multi-source remote sensing data are proposed, taking land surface albedo, leaf area index, temperature/emissivity, and surface net radiation as examples. A new synthetic land surface parameter quantitative remote sensing product generation system is suggested, and the software system prototype will be demonstrated. Finally, multi-scale field experiment campaigns, such as those in Gansu and Beijing, China, are introduced briefly. Ground-based, tower-based, and airborne multi-angular measurement systems have been built to measure directional reflectance, emission and scattering characteristics in the visible, near-infrared, thermal infrared and microwave bands for model validation and calibration. A pixel-scale "true value" measurement strategy has been designed to obtain ground "true values" of LST, albedo, LAI, soil moisture and ET at the 1-km² scale for remote sensing product validation.

  11. Exploring NASA OMI Level 2 Data With Visualization

    NASA Technical Reports Server (NTRS)

    Wei, Jennifer; Yang, Wenli; Johnson, James; Zhao, Peisheng; Gerasimov, Irina; Pham, Long; Vicente, Gilberto

    2014-01-01

    Satellite data products are important for a wide variety of applications that can bring far-reaching benefits to the science community and the broader society. These benefits can best be achieved if the satellite data are well utilized and interpreted, e.g., as model inputs or for monitoring extreme events (such as volcano eruptions and dust storms). Unfortunately, this is not always the case, despite the abundance and relative maturity of numerous satellite data products provided by NASA and other organizations. Such obstacles may be avoided by allowing users to visualize satellite data as "images", with accurate pixel-level (Level-2) information, including pixel coverage area delineation and science-team-recommended quality screening for individual geophysical parameters. We present a prototype service from the Goddard Earth Sciences Data and Information Services Center (GES DISC) supporting Aura OMI Level-2 data with GIS-like capabilities. Functionality includes selecting data sources (e.g., multiple parameters under the same scene, like NO2 and SO2, or the same parameter with different aggregation methods, like NO2 in the OMNO2G and OMNO2D products), user-defined area-of-interest and temporal extents, zooming, panning, overlaying, sliding, and data subsetting, reformatting, and reprojection. The system will allow any user-defined portal interface (front end) to connect to our back-end server with OGC standard-compliant Web Mapping Service (WMS) and Web Coverage Service (WCS) calls. This back-end service should greatly enhance expandability by integrating additional outside data/map sources.

  13. Microearthquake sequences along the Irpinia normal fault system in Southern Apennines, Italy

    NASA Astrophysics Data System (ADS)

    Orefice, Antonella; Festa, Gaetano; Alfredo Stabile, Tony; Vassallo, Maurizio; Zollo, Aldo

    2013-04-01

    Microearthquakes reflect a continuous readjustment of tectonic structures, such as faults, under the action of local and regional stress fields. Low-magnitude seismicity in the vicinity of active fault zones may provide insights into the mechanics of fault systems during the inter-seismic period and shed light on the role of fluids and other physical parameters in promoting or inhibiting the nucleation of larger events in the same area. Here we analyzed several earthquake sequences concentrated in very limited regions along the 1980 Irpinia earthquake fault zone (Southern Italy), a complex system characterized by a normal stress regime and monitored by the dense, multi-component, high-dynamic-range seismic network ISNet (Irpinia Seismic Network). For a single sequence, the May 2008 Laviano swarm, we performed accurate absolute and relative locations and estimated source parameters and scaling laws, which we compared with standard stress drops computed for the area. Additionally, from EGF deconvolution, we computed a slip model for the mainshock and investigated the space-time evolution of the events in the sequence to reveal possible interactions among earthquakes. Through massive cross-correlation analysis, scanning the continuous recordings with master events, we also built a catalog of repeating earthquakes and recognized several co-located sequences. For these events, we analyzed the statistical properties, locations and source parameters and their space-time evolution, with the aim of inferring the processes that control the occurrence and size of microearthquakes in a swarm.
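
    The master-event scanning step can be sketched as a sliding normalized cross-correlation. This toy version (synthetic waveform, no bandpass filtering, no detection-threshold logic) simply returns the best correlation coefficient and its sample lag; a real repeating-earthquake scan would declare a detection above some threshold, e.g. 0.9.

```python
import numpy as np

def scan_master(master, trace):
    """Slide a master-event waveform along a continuous trace; return the best
    normalized cross-correlation coefficient and its sample lag."""
    m = master - master.mean()
    m = m / np.linalg.norm(m)                  # unit-norm, zero-mean template
    n = len(master)
    best_cc, best_lag = -1.0, -1
    for lag in range(len(trace) - n + 1):
        w = trace[lag:lag + n] - trace[lag:lag + n].mean()
        norm = np.linalg.norm(w)
        if norm == 0.0:                        # skip dead (all-zero) windows
            continue
        cc = float(m @ w) / norm
        if cc > best_cc:
            best_cc, best_lag = cc, lag
    return best_cc, best_lag
```

Note that the normalization makes the coefficient insensitive to amplitude, so a small repeat of a larger master event still correlates near 1.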

  14. Towards a Numerical Description of Volcano Aeroacoustic Source Processes using Lattice Boltzmann Strategies

    NASA Astrophysics Data System (ADS)

    Brogi, F.; Malaspinas, O.; Bonadonna, C.; Chopard, B.; Ripepe, M.

    2015-12-01

    Low frequency (< 20 Hz) acoustic measurements have great potential for the real-time characterization of volcanic plume source parameters. Using classical source theory, acoustic data can be related to the exit velocity of the volcanic jet and to the mass eruption rate, given the geometric constraint of the vent and the mixture density. However, the application of classical acoustic source models to explosive volcanic eruptions has proven challenging, and a better knowledge of the link between the acoustic radiation and the actual volcanic fluid dynamics is required. New insights into this subject could be gained from realistic aeroacoustic numerical simulations of a volcanic jet. Lattice Boltzmann strategies (LBS) provide the opportunity to develop an accurate, computationally fast, 3D physical model of a volcanic jet. In the field of aeroacoustics, dedicated LBS have been shown to have the low-dissipation properties needed for capturing weak acoustic pressure fluctuations. However, due to the large disparity in magnitude between the flow and the acoustic disturbances, even weak spurious noise sources in simulations can ruin the accuracy of the acoustic predictions. Waves reflected from artificial boundaries placed around the flow region can significantly influence the flow field and overwhelm the acoustic field of interest. In addition, for highly multiscale turbulent flows such as volcanic plumes, the number of grid points needed to resolve the smallest scales may become intractable, while the most complicated physics happens only in small portions of the computational domain. The implementation of grid refinement in our model allows us to insert locally finer grids only where they are actually needed and to increase the size of the computational domain for more realistic simulations. 3D LBS model simulations for turbulent jet aeroacoustics have been accurately validated. 
Both mean flow and acoustic results are in good agreement with theory and experimental data available in the literature.

  15. Energy dissipation in the blade tip region of an axial fan

    NASA Astrophysics Data System (ADS)

    Bizjan, B.; Milavec, M.; Širok, B.; Trenc, F.; Hočevar, M.

    2016-11-01

    A study of velocity and pressure fluctuations in the tip clearance flow of an axial fan is presented in this paper. Two different rotor blade tip designs were investigated: a standard one with straight blade tips and a modified one with swept-back tip winglets. Comparison of integral sound parameters indicates a significant noise level reduction for the modified blade tip design. To study the underlying mechanisms of energy conversion and noise generation, a novel experimental method based on simultaneous measurements of local flow velocity and pressure was also developed and is presented here. The method is based on phase-space analysis using attractors, which enables more accurate identification and determination of local flow structures and turbulent flow properties. The specific gap flow energy derived from the pressure and velocity time series was introduced as an additional attractor parameter to assess the flow energy distribution and dissipation within the phase space, and thus to determine the characteristic sources of the fan's acoustic emission. The attractors reveal a more efficient conversion of pressure to kinetic flow energy for the modified (tip winglet) fan blade design, along with a reduction in emitted noise levels. The findings of the attractor analysis are in good agreement with the integral fan characteristics (efficiency and noise level), while offering a much more accurate and detailed representation of gap flow phenomena.

  16. Experimental Measurement of the Static Coefficient of Friction at the Ti-Ti Taper Connection in Total Hip Arthroplasty.

    PubMed

    Bitter, T; Khan, I; Marriott, T; Schreurs, B W; Verdonschot, N; Janssen, D

    2016-03-01

    The modular taper junction in total hip replacements has been implicated as a possible source of wear. The finite-element (FE) method can be used to study the wear potential at the taper junction. For such simulations it is important to implement representative contact parameters in order to achieve accurate results. One of the main parameters in FE simulations is the coefficient of friction. However, in the current literature there is quite a wide spread in coefficient of friction values (0.15–0.8), which has a significant effect on the outcome of the FE simulations. Therefore, to obtain more accurate results, one should use a coefficient of friction determined for the specific material couple being analyzed. In this study, the static coefficient of friction was determined for two types of titanium-on-titanium stem-adaptor couples, using actual cut-outs of the final implants, to ensure that the coefficient of friction was determined consistently for the actual implant material and surface finish characteristics. Two types of tapers were examined, Biomet type-1 and 12/14, where type-1 has a polished surface finish and 12/14 is a microgrooved system. We found static coefficients of friction of 0.19 and 0.29 for the 12/14 and type-1 stem-adaptor couples, respectively.

  17. Vfold: a web server for RNA structure and folding thermodynamics prediction.

    PubMed

    Xu, Xiaojun; Zhao, Peinan; Chen, Shi-Jie

    2014-01-01

    The ever-increasing discovery of non-coding RNAs leads to unprecedented demand for accurate modeling of RNA folding, including the prediction of two-dimensional (base-pair) and three-dimensional all-atom structures and of folding stabilities. Accurate modeling of RNA structure and stability has far-reaching impact on our understanding of RNA functions in human health and our ability to design RNA-based therapeutic strategies. The Vfold server offers a web interface to predict (a) RNA two-dimensional structure from the nucleotide sequence, (b) three-dimensional structure from the two-dimensional structure and the sequence, and (c) folding thermodynamics (heat capacity melting curve) from the sequence. To predict the two-dimensional structure (base pairs), the server generates an ensemble of structures, including loop structures with different intra-loop mismatches, and evaluates the free energies using experimental parameters for the base stacks and loop entropy parameters given by a coarse-grained RNA folding model (the Vfold model). To predict the three-dimensional structure, the server assembles motif scaffolds using structure templates extracted from known PDB structures and refines the structure using all-atom energy minimization. The Vfold-based web server provides a user-friendly tool for the prediction of RNA structure and stability. The web server and the source codes are freely accessible for public use at http://rna.physics.missouri.edu.
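
    As a toy stand-in for the two-dimensional (base-pair) prediction step, here is the classic Nussinov maximum-base-pairing dynamic program. Vfold scores loops with experimentally derived free energies and model-based loop entropies rather than a simple pair count, so this only illustrates the recursion structure shared by such folding algorithms.

```python
def nussinov(seq, min_loop=3):
    """Maximum base-pair count via the Nussinov dynamic program.
    Pairs: Watson-Crick plus G-U wobble; hairpin loops must enclose more
    than min_loop unpaired bases."""
    pair = {('A', 'U'), ('U', 'A'), ('G', 'C'), ('C', 'G'), ('G', 'U'), ('U', 'G')}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            best = dp[i][j - 1]                  # base j left unpaired
            for k in range(i, j - min_loop):     # pair (k, j), loop constraint
                if (seq[k], seq[j]) in pair:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + dp[k + 1][j - 1] + 1)
            dp[i][j] = best
    return dp[0][n - 1]
```

For the hairpin GGGAAACCC the recursion pairs the three G-C stems around the AAA loop; energy-based models like Vfold would instead minimize free energy over the same recursion skeleton.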

  18. Transient analysis of intercalation electrodes for parameter estimation

    NASA Astrophysics Data System (ADS)

    Devan, Sheba

    An essential part of integrating batteries as power sources in any application, be it a large-scale automotive application or a small-scale portable one, is an efficient Battery Management System (BMS). The combination of a battery with a microprocessor-based BMS (a "smart battery") helps prolong the life of the battery by keeping it in the optimal operating regime and provides accurate information about the battery to the end user. The main purposes of a BMS are cell protection, monitoring and control, and communication between different components. These purposes are fulfilled by tracking changes in the parameters of the intercalation electrodes in the batteries. Consequently, BMS functions should be prompt, which requires a time-efficient methodology for extracting the parameters. The traditional transient techniques applied so far may not be suitable, for reasons such as the inability to apply them while the battery is under operation, long experimental times, etc. The primary aim of this research work is to design a fast, accurate and reliable technique that can be used to extract parameter values of the intercalation electrodes. A methodology based on time-domain analysis of the short-time response to a sinusoidal input perturbation is demonstrated using a porous electrode model for an intercalation electrode. It is shown that the parameters associated with the interfacial processes occurring in the electrode can be determined rapidly, within a few milliseconds, by measuring the response in the transient region. The short-time analysis in the time domain is then extended to a single-particle model that involves bulk diffusion in the solid phase in addition to interfacial processes. A systematic procedure for sequential parameter estimation using sensitivity analysis is described. 
Further, the short time response and the input perturbation are transformed into the frequency domain using Fast Fourier Transform (FFT) to generate impedance spectra to derive immediate qualitative information regarding the nature of the system. The short time analysis technique gives the ability to perform both time domain and frequency domain analysis using data measured within short durations.
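
    The transformation of input and response records into an impedance value can be sketched with an FFT ratio at the drive frequency. The circuit values below are hypothetical, and the record length is chosen to contain an integer number of drive cycles so that spectral leakage vanishes; real short-time records would need windowing.

```python
import numpy as np

def impedance_at(f_drive, i_t, v_t, dt):
    """Complex impedance at the drive frequency: ratio of the voltage and
    current FFT coefficients at the bin nearest f_drive."""
    freqs = np.fft.rfftfreq(len(i_t), dt)
    k = int(np.argmin(np.abs(freqs - f_drive)))
    return np.fft.rfft(v_t)[k] / np.fft.rfft(i_t)[k]
```

Because both records share the same drive bin, the ratio directly yields magnitude and phase, i.e. the complex impedance, without any explicit phase-unwrapping step.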

  19. Measurement of erosion in helicon plasma thrusters using the VASIMR® VX-CR device

    NASA Astrophysics Data System (ADS)

    Del Valle Gamboa, Juan Ignacio; Castro-Nieto, Jose; Squire, Jared; Carter, Mark; Chang-Diaz, Franklin

    2015-09-01

    The helicon plasma source is one of the principal stages of the high-power VASIMR® electric propulsion system. The VASIMR® VX-CR experiment focuses solely on this stage, exploring the erosion and long-term operation effects of the VASIMR helicon source. We report on the design and operational parameters of the VX-CR experiment, and the development of modeling tools and characterization techniques allowing the study of erosion phenomena in helicon plasma sources in general, and stand-alone helicon plasma thrusters (HPTs) in particular. A thorough understanding of the erosion phenomena within HPTs will enable better predictions of their behavior as well as more accurate estimations of their expected lifetime. We present a simplified model of the plasma-wall interactions within HPTs based on current models of the plasma density distributions in helicon discharges. Results from this modeling tool are used to predict the erosion within the plasma-facing components of the VX-CR device. Experimental techniques to measure actual erosion, including the use of coordinate-measuring machines and microscopy, will be discussed.

  20. Marine Controlled-Source Electromagnetic 2D Inversion for synthetic models.

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Li, Y.

    2016-12-01

    We present a 2D inversion algorithm for frequency-domain marine controlled-source electromagnetic (CSEM) data, based on the regularized Gauss-Newton approach. As the forward solver, our parallel adaptive finite element forward modeling program is employed. It is a self-adaptive, goal-oriented grid refinement algorithm in which a finite element analysis is performed on a sequence of refined meshes. The mesh refinement process is guided by a dual error estimate weighting that biases refinement towards elements that affect the solution at the EM receiver locations. With the use of the direct solver MUMPS, we can efficiently compute the electromagnetic fields for multiple sources as well as the parametric sensitivities. We also implement the parallel data-domain decomposition approach of Key and Ovall (2011), with the goal of computing accurate responses in parallel for complicated models and a full suite of data parameters typical of offshore CSEM surveys. All minimizations are carried out using the Gauss-Newton algorithm, and the model perturbations at each iteration step are obtained using the inexact conjugate gradient iteration method. Synthetic test inversions are presented.
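
    The regularized Gauss-Newton update is the core of such an inversion. Below is a minimal dense-matrix sketch under stated assumptions: the paper uses finite-element forward solves and inexact conjugate-gradient model updates, whereas this toy fits a two-parameter exponential model with a direct solve.

```python
import numpy as np

def gauss_newton(forward, jacobian, m0, d, lam=1e-3, n_iter=30):
    """Regularized Gauss-Newton: at each step solve
    (J^T J + lam*I) dm = J^T (d - F(m)) for the model update dm."""
    m = np.array(m0, dtype=float)
    for _ in range(n_iter):
        r = d - forward(m)                        # data residual
        J = jacobian(m)                           # sensitivity matrix
        dm = np.linalg.solve(J.T @ J + lam * np.eye(len(m)), J.T @ r)
        m = m + dm
    return m
```

In a full CSEM code the damping term is replaced by a roughness-penalty regularization and the linear solve by iterative methods, but the fixed point is the same: the update vanishes when the residual does.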

  1. Truncated RAP-MUSIC (TRAP-MUSIC) for MEG and EEG source localization.

    PubMed

    Mäkelä, Niko; Stenroos, Matti; Sarvas, Jukka; Ilmoniemi, Risto J

    2018-02-15

    Electrically active brain regions can be located by applying MUltiple SIgnal Classification (MUSIC) to magneto- or electroencephalographic (MEG; EEG) data. We introduce a new MUSIC method, called truncated recursively-applied-and-projected MUSIC (TRAP-MUSIC). It corrects a hidden deficiency of the conventional RAP-MUSIC algorithm that prevents accurate estimation of the true number of brain-signal sources. The correction is done by applying a sequential dimension reduction to the signal-subspace projection. We show that TRAP-MUSIC significantly improves the performance of MUSIC-type localization; in particular, it successfully and robustly locates active brain regions and estimates their number. We compare TRAP-MUSIC and RAP-MUSIC in simulations with varying key parameters, e.g., signal-to-noise ratio, correlation between source time courses, and the initial estimate of the signal-space dimension. In addition, we validate TRAP-MUSIC with measured MEG data. We suggest that with the proposed TRAP-MUSIC method, MUSIC-type localization could become more reliable and suitable for various online and offline MEG and EEG applications.

  2. Supersonic propulsion simulation by incorporating component models in the large perturbation inlet (LAPIN) computer code

    NASA Technical Reports Server (NTRS)

    Cole, Gary L.; Richard, Jacques C.

    1991-01-01

    An approach to simulating the internal flows of supersonic propulsion systems is presented. The approach is based on a fairly simple modification of the Large Perturbation Inlet (LAPIN) computer code. LAPIN uses a quasi-one dimensional, inviscid, unsteady formulation of the continuity, momentum, and energy equations. The equations are solved using a shock capturing, finite difference algorithm. The original code, developed for simulating supersonic inlets, includes engineering models of unstart/restart, bleed, bypass, and variable duct geometry, by means of source terms in the equations. The source terms also provide a mechanism for incorporating, with the inlet, propulsion system components such as compressor stages, combustors, and turbine stages. This requires each component to be distributed axially over a number of grid points. Because of the distributed nature of such components, this representation should be more accurate than a lumped parameter model. Components can be modeled by performance map(s), which in turn are used to compute the source terms. The general approach is described. Then, simulation of a compressor/fan stage is discussed to show the approach in detail.

  3. Characteristic Analysis of Air-gun Source Wavelet based on the Vertical Cable Data

    NASA Astrophysics Data System (ADS)

    Xing, L.

    2016-12-01

    Air guns are important sources for marine seismic exploration. Far-field wavelets of air gun arrays, a necessary input for pre-stack processing and source modeling, play an important role in marine seismic data processing and interpretation. When an air gun fires, it generates a series of air bubbles. As in onshore seismic exploration, the water behaves as a plastic fluid near the bubble; the farther the receiver is from the air gun, the more stable and more accurately represented the wavelet will be. In practice, hydrophones should be placed more than 100 m from the air gun, a requirement that traditional seismic cables cannot meet; vertical cables, however, provide a viable solution to this problem. This study uses a vertical cable to record wavelets from 38 air guns, with data collected offshore Southeast Qiong, where the water depth is over 1000 m. The wavelets measured using this technique coincide very well with the simulated wavelets and can therefore represent the true shape of the wavelets. This experiment fills a technology gap in China.

  4. An adaptive Bayesian inference algorithm to estimate the parameters of a hazardous atmospheric release

    NASA Astrophysics Data System (ADS)

    Rajaona, Harizo; Septier, François; Armand, Patrick; Delignon, Yves; Olry, Christophe; Albergel, Armand; Moussafir, Jacques

    2015-12-01

In the eventuality of an accidental or intentional atmospheric release, the reconstruction of the source term using measurements from a set of sensors is an important and challenging inverse problem. A rapid and accurate estimation of the source allows faster and more efficient action for first-response teams, in addition to providing better damage assessment. This paper presents a Bayesian probabilistic approach to estimate the location and the temporal emission profile of a pointwise source. The release rate is evaluated analytically by using a Gaussian assumption on its prior distribution, and is enhanced with a positivity constraint to improve the estimation. The source location is obtained by means of an advanced iterative Monte Carlo technique called Adaptive Multiple Importance Sampling (AMIS), which uses a recycling process at each iteration to accelerate its convergence. The proposed methodology is tested using synthetic and real concentration data in the framework of the Fusion Field Trials 2007 (FFT-07) experiment. The quality of the obtained results is comparable to that of the Markov chain Monte Carlo (MCMC) algorithm, a popular Bayesian method used for source estimation. Moreover, the adaptive processing of the AMIS provides a better sampling efficiency by reusing all the generated samples.
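The Bayesian sampling idea can be illustrated with plain self-normalized importance sampling on a toy problem; the sensors, the inverse-square forward model, and all numbers below are invented for illustration. AMIS differs in that it adapts the proposal at each iteration and recycles all past samples, which is what accelerates its convergence.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup (not the paper's dispersion model): three sensors
# measure a concentration that decays with distance from a point source.
sensors = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])

def forward(src):
    d = np.linalg.norm(sensors - src, axis=1)
    return 1.0 / (1.0 + d ** 2)

true_src = np.array([1.0, 2.0])
obs = forward(true_src) + rng.normal(0.0, 0.05, size=3)

# One broad Gaussian proposal; weights are the Gaussian likelihood of the
# observations, self-normalized to estimate the posterior mean location.
samples = rng.normal(0.0, 3.0, size=(20000, 2))
log_w = np.array([-0.5 * np.sum((obs - forward(s)) ** 2) / 0.05 ** 2
                  for s in samples])
w = np.exp(log_w - log_w.max())
w /= w.sum()
post_mean = (w[:, None] * samples).sum(axis=0)
print(post_mean.round(2))  # posterior-mean source location
```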

  5. Modeling of reverberant room responses for two-dimensional spatial sound field analysis and synthesis.

    PubMed

    Bai, Mingsian R; Li, Yi; Chiang, Yi-Hao

    2017-10-01

A unified framework is proposed for analysis and synthesis of two-dimensional spatial sound fields in reverberant environments. In the sound field analysis (SFA) phase, an unbaffled 24-element circular microphone array is utilized to encode the sound field based on the plane-wave decomposition. Depending on the sparsity of the sound sources, the SFA stage can be implemented in two manners. For sparse-source scenarios, a one-stage algorithm based on compressive sensing is utilized. Alternatively, a two-stage algorithm can be used, where the minimum power distortionless response beamformer is used to localize the sources and a Tikhonov regularization algorithm is used to extract the source amplitudes. In the sound field synthesis (SFS) phase, a 32-element rectangular loudspeaker array is employed to decode the target sound field using a pressure-matching technique. To establish the room response model, as required in the pressure-matching step of the SFS phase, an SFA technique for nonsparse-source scenarios is utilized. The choice of regularization parameters is vital to the reproduced sound field. In the SFS phase, three SFS approaches are compared in terms of localization performance and voice reproduction quality. Experimental results obtained in a reverberant room are presented and reveal that an accurate room response model is vital to immersive rendering of the reproduced sound field.
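The Tikhonov step of the two-stage SFA algorithm amounts to a regularized least-squares solve for the source amplitudes. A minimal sketch follows, with a random stand-in for the array transfer matrix (the true matrix depends on array geometry and the plane-wave model) and illustrative dimensions:

```python
import numpy as np

rng = np.random.default_rng(1)

# 24 microphones, 8 candidate source directions; G is a hypothetical
# complex transfer matrix from source amplitudes q to mic pressures p.
M, N = 24, 8
G = rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N))
q_true = np.zeros(N, complex)
q_true[2], q_true[5] = 1.0, 0.5j
p = G @ q_true + 0.01 * (rng.normal(size=M) + 1j * rng.normal(size=M))

# Tikhonov-regularized solution: q = (G^H G + lam I)^-1 G^H p.
lam = 1e-2                         # regularization parameter (choice is vital)
q = np.linalg.solve(G.conj().T @ G + lam * np.eye(N), G.conj().T @ p)
print(np.abs(q).round(2))          # amplitudes peak at the true sources
```

Larger `lam` suppresses noise amplification at the cost of biasing the amplitudes toward zero, which is why the regularization parameter matters so much for the reproduced field.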

  6. Digital breast tomosynthesis geometry calibration

    NASA Astrophysics Data System (ADS)

    Wang, Xinying; Mainprize, James G.; Kempston, Michael P.; Mawdsley, Gordon E.; Yaffe, Martin J.

    2007-03-01

    Digital Breast Tomosynthesis (DBT) is a 3D x-ray technique for imaging the breast. The x-ray tube, mounted on a gantry, moves in an arc over a limited angular range around the breast while 7-15 images are acquired over a period of a few seconds. A reconstruction algorithm is used to create a 3D volume dataset from the projection images. This procedure reduces the effects of tissue superposition, often responsible for degrading the quality of projection mammograms. This may help improve sensitivity of cancer detection, while reducing the number of false positive results. For DBT, images are acquired at a set of gantry rotation angles. The image reconstruction process requires several geometrical factors associated with image acquisition to be known accurately, however, vibration, encoder inaccuracy, the effects of gravity on the gantry arm and manufacturing tolerances can produce deviations from the desired acquisition geometry. Unlike cone-beam CT, in which a complete dataset is acquired (500+ projections over 180°), tomosynthesis reconstruction is challenging in that the angular range is narrow (typically from 20°-45°) and there are fewer projection images (~7-15). With such a limited dataset, reconstruction is very sensitive to geometric alignment. Uncertainties in factors such as detector tilt, gantry angle, focal spot location, source-detector distance and source-pivot distance can produce several artifacts in the reconstructed volume. To accurately and efficiently calculate the location and angles of orientation of critical components of the system in DBT geometry, a suitable phantom is required. We have designed a calibration phantom for tomosynthesis and developed software for accurate measurement of the geometric parameters of a DBT system. These have been tested both by simulation and experiment. We will present estimates of the precision available with this technique for a prototype DBT system.

  7. Study the effects of varying interference upon the optical properties of turbid samples using NIR spatial light modulation

    NASA Astrophysics Data System (ADS)

    Shaul, Oren; Fanrazi-Kahana, Michal; Meitav, Omri; Pinhasi, Gad A.; Abookasis, David

    2018-03-01

Optical properties of biological tissues are valuable diagnostic parameters which can provide necessary information regarding tissue state during disease pathogenesis and therapy. However, different sources of interference, such as temperature changes, may modify these properties, introducing confounding factors and artifacts into the data, consequently skewing their interpretation and misinforming clinical decision-making. In the current study, we apply spatial light modulation, a type of diffuse reflectance hyperspectral imaging technique, to monitor the variation in optical properties of highly scattering turbid media in the presence of varying levels of the following sources of interference: scattering concentration, temperature, and pressure. Spatial near-infrared (NIR) light modulation is a wide-field, non-contact emerging optical imaging platform capable of separating the effects of tissue scattering from those of absorption, thereby accurately estimating both parameters. With this technique, periodic NIR illumination patterns at alternately low and high spatial frequencies, at six discrete wavelengths between 690 and 970 nm, were sequentially projected upon the medium while a CCD camera collected the diffusely reflected light. Model-based data analysis is then performed off-line to recover the medium's optical properties. We conducted a series of experiments demonstrating the changes in absorption and reduced scattering coefficients of commercially available fresh milk and chicken breast tissue under different interference conditions. In addition, the refractive index was studied under increased pressure. This work demonstrates the utility of NIR spatial light modulation to detect the effects of varying sources of interference upon the optical properties of biological samples.

  8. Extensions to the integral line-beam method for gamma-ray skyshine analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shultis, J.K.; Faw, R.E.

    1995-08-01

A computationally simple method for estimating gamma-ray skyshine dose rates has been developed on the basis of the line-beam response function. Both Monte Carlo and point-kernel calculations that account for both annihilation and bremsstrahlung were used in the generation of line-beam response functions (LBRF) for gamma-ray energies between 10 and 100 MeV. The LBRF is approximated by a three-parameter formula. By combining results with those obtained in an earlier study for gamma energies below 10 MeV, LBRF values are readily and accurately evaluated for source energies between 0.02 and 100 MeV, for source-to-detector distances between 1 and 3000 m, and for beam angles as great as 180 degrees. Tables of the parameters for the approximate LBRF are presented. The new response functions are then applied to three simple skyshine geometries: an open silo geometry, an infinite wall, and a rectangular four-wall building. Results are compared to those of previous calculations and to benchmark measurements. A new approach is introduced to account for overhead shielding of the skyshine source and compared to the simplistic exponential-attenuation method used in earlier studies. The effect of the air-ground interface, usually neglected in gamma skyshine studies, is also examined and an empirical correction factor is introduced. Finally, a revised code based on the improved LBRF approximations and the treatment of the overhead shielding is presented, and results are shown for several benchmark problems.

  9. Determining Hypocentral Parameters for Local Earthquakes in 1-D Using a Genetic Algorithm and Two-point ray tracing

    NASA Astrophysics Data System (ADS)

    Kim, W.; Hahm, I.; Ahn, S. J.; Lim, D. H.

    2005-12-01

This paper introduces a powerful method for determining hypocentral parameters for local earthquakes in 1-D using a genetic algorithm (GA) and two-point ray tracing. Determining hypocentral parameters with existing algorithms is difficult, because these parameters can vary based on the initial velocity models. We developed a new method to solve this problem by applying a GA to an existing algorithm, HYPO-71 (Lee and Lahr, 1975). The original HYPO-71 algorithm was modified by applying two-point ray tracing and a weighting factor with respect to the takeoff angle at the source to reduce errors from the ray path and hypocenter depth. Artificial data, without error, were generated by computer using two-point ray tracing in a true model, in which the velocity structure and hypocentral parameters were known. The accuracy of the calculated results was easily determined by comparing calculated and actual values. We examined the accuracy of this method for several cases by changing the true and modeled layer numbers and thicknesses. The computational results show that this method determines nearly exact hypocentral parameters without depending on initial velocity models. Furthermore, accurate and nearly unique hypocentral parameters were obtained, although the number of modeled layers and thicknesses differed from those in the true model. Therefore, this method can be a useful tool for determining hypocentral parameters in regions where reliable local velocity values are unknown. This method also provides basic a priori information for 3-D studies. Keywords: hypocentral parameters, genetic algorithm (GA), two-point ray tracing
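A GA-based hypocenter search can be sketched in a uniform half-space; the paper instead uses a layered 1-D model with two-point ray tracing and a takeoff-angle weighting, and the station layout, velocity, and GA settings below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Four surface stations and a uniform P velocity (assumed, for illustration).
stations = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [10, 10, 0]], float)
v = 6.0                                     # km/s
true_h = np.array([4.0, 6.0, 8.0])          # hypocenter (x, y, z) in km
t_obs = np.linalg.norm(stations - true_h, axis=1) / v

def misfit(h):
    """Sum of squared travel-time residuals for a candidate hypocenter."""
    return np.sum((np.linalg.norm(stations - h, axis=1) / v - t_obs) ** 2)

# Simple GA: elitist selection, blend crossover, Gaussian mutation.
pop = rng.uniform([0, 0, 0], [10, 10, 15], size=(60, 3))
for gen in range(100):
    fit = np.array([misfit(h) for h in pop])
    elite = pop[np.argsort(fit)[:20]]                  # keep the best third
    parents = elite[rng.integers(0, 20, (40, 2))]
    alpha = rng.random((40, 1))
    children = alpha * parents[:, 0] + (1 - alpha) * parents[:, 1]
    children += rng.normal(0.0, 0.2, children.shape)   # mutation
    pop = np.vstack([elite, children])

best = pop[np.argmin([misfit(h) for h in pop])]
print(best.round(2))  # approaches the true hypocenter
```

Because the elite individuals are carried over unchanged, the best misfit is non-increasing across generations, mirroring the robustness to starting models that the abstract emphasizes.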

  10. Optimizing detection and analysis of slow waves in sleep EEG.

    PubMed

    Mensen, Armand; Riedner, Brady; Tononi, Giulio

    2016-12-01

Analysis of individual slow waves in EEG recordings during sleep provides both greater sensitivity and specificity compared to spectral power measures. However, parameters for detection and analysis have not been widely explored and validated. We present a new, open-source, Matlab-based toolbox for the automatic detection and analysis of slow waves, with adjustable parameter settings as well as manual correction and exploration of the results using a multi-faceted visualization tool. We explore a large search space of parameter settings for slow wave detection and measure their effects on a selection of outcome parameters. Every choice of parameter setting had some effect on at least one outcome parameter. In general, the largest effect sizes were found when choosing the EEG reference, the type of canonical waveform, and the amplitude thresholding. Previously published methods accurately detect large, global waves but are conservative and miss the detection of smaller-amplitude, local slow waves. The toolbox has additional benefits in terms of speed, user interface, and visualization options to compare and contrast slow waves. The exploration of parameter settings in the toolbox highlights the importance of careful selection of detection methods. The sensitivity and specificity of the automated detection can be improved by manually adding or deleting entire waves and/or specific channels using the toolbox visualization functions. The toolbox standardizes the detection procedure, sets the stage for reliable results and comparisons, and is easy to use without previous programming experience. Copyright © 2016 Elsevier B.V. All rights reserved.
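The core amplitude-threshold detection can be sketched as follows; the synthetic signal, sampling rate, and 40 µV threshold are illustrative, and the toolbox layers referencing choices, canonical-waveform criteria, and manual correction on top of logic like this:

```python
import numpy as np

# Synthetic 10 s "EEG" trace: a 0.8 Hz slow oscillation plus noise (in µV).
fs = 100.0
t = np.arange(0.0, 10.0, 1.0 / fs)
rng = np.random.default_rng(3)
eeg = -80.0 * np.sin(2 * np.pi * 0.8 * t) + 5.0 * rng.normal(size=t.size)

def detect_slow_waves(x, fs, amp_uv=40.0):
    """Flag negative half-waves between zero crossings whose trough
    exceeds the amplitude threshold; returns (start_s, end_s, trough)."""
    sign = np.signbit(x).astype(int)
    crossings = np.flatnonzero(np.diff(sign))      # indices of sign changes
    waves = []
    for a, b in zip(crossings[:-1], crossings[1:]):
        trough = x[a + 1:b + 1].min()
        if trough < -amp_uv:                       # negative half-wave only
            waves.append((a / fs, b / fs, trough))
    return waves

waves = detect_slow_waves(eeg, fs)
print(len(waves))
```

Raising `amp_uv` reproduces the conservative behavior the abstract describes: large global waves are kept while smaller, local waves are missed.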

  11. Dynamic Modelling under Uncertainty: The Case of Trypanosoma brucei Energy Metabolism

    PubMed Central

    Achcar, Fiona; Kerkhoven, Eduard J.; Bakker, Barbara M.; Barrett, Michael P.; Breitling, Rainer

    2012-01-01

Kinetic models of metabolism require detailed knowledge of kinetic parameters. However, due to measurement errors or lack of data, this knowledge is often uncertain. The model of glycolysis in the parasitic protozoan Trypanosoma brucei is a particularly well-analysed example of a quantitative metabolic model, but so far it has been studied with a fixed set of parameters only. Here we evaluate the effect of parameter uncertainty. In order to define probability distributions for each parameter, information about the experimental sources and confidence intervals for all parameters was collected. We created a wiki-based website dedicated to the detailed documentation of this information: the SilicoTryp wiki (http://silicotryp.ibls.gla.ac.uk/wiki/Glycolysis). Using information collected in the wiki, we then assigned probability distributions to all parameters of the model. This allowed us to sample sets of alternative models, accurately representing our degree of uncertainty. Some properties of the model, such as the repartition of the glycolytic flux between the glycerol- and pyruvate-producing branches, are robust to these uncertainties. However, our analysis also allowed us to identify fragilities of the model leading to the accumulation of 3-phosphoglycerate and/or pyruvate. The analysis of the control coefficients revealed the importance of taking into account the uncertainties about the parameters, as the ranking of the reactions can be greatly affected. This work will now form the basis for a comprehensive Bayesian analysis and extension of the model considering alternative topologies. PMID:22379410
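The sampling strategy can be illustrated on a single Michaelis-Menten rate law rather than the full glycolysis model; the distributions and numbers below are invented to show how parameter uncertainty propagates to a model output:

```python
import numpy as np

rng = np.random.default_rng(4)

def mm_flux(vmax, km, s=1.0):
    """Michaelis-Menten rate at substrate concentration s."""
    return vmax * s / (km + s)

# Assign log-normal distributions to each parameter (widths are assumed),
# then sample alternative model instances to propagate the uncertainty.
n = 10000
vmax = rng.lognormal(mean=np.log(10.0), sigma=0.2, size=n)  # fairly certain
km = rng.lognormal(mean=np.log(0.5), sigma=0.5, size=n)     # less certain
flux = mm_flux(vmax, km)

lo, hi = np.percentile(flux, [2.5, 97.5])
print(round(lo, 2), round(hi, 2))   # 95% uncertainty band on the flux
```

Repeating this for every output of interest (fluxes, metabolite levels, control coefficients) distinguishes robust model properties from fragile ones, as the abstract describes.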

  12. PULSED ION SOURCE

    DOEpatents

    Martina, E.F.

    1958-10-14

An improved pulsed ion source of the type in which the gas to be ionized is released within the source by momentary heating of an electrode occluded with the gas is presented. Other details of the ion source construction include an electron-emitting filament and a positive reference grid, between which an electron discharge is set up, and electrode means for withdrawing the ions from the source. Owing to the location of the gas source behind the electron discharge region, and the positioning of the vacuum exhaust system on the opposite side of the discharge, the released gas is drawn into the electron discharge and ionized in accurately controlled amounts. Consequently, the output pulses of the ion source may be accurately controlled.

  13. An In-Depth Cost Analysis for New Light-Duty Vehicle ...

    EPA Pesticide Factsheets

    Within the transportation sector, light-duty vehicles are the predominant source of greenhouse gas (GHG) emissions, principally exhaust CO2 and refrigerant leakage from vehicle air conditioners. EPA has contracted with FEV to estimate the costs of technologies that may be employed to reduce these emissions. The purpose of this work is to determine accurate costs for GHG-reducing technologies. This is of paramount importance in setting the appropriate GHG standards. EPA has contracted with FEV to perform this cost analysis through tearing down vehicles, engines and components, both with and without these technologies, and evaluating, part by part, the observed differences in size, weight, materials, machining steps, and other cost-affecting parameters.

  14. An evaluation of HEMT potential for millimeter-wave signal sources using interpolation and harmonic balance techniques

    NASA Technical Reports Server (NTRS)

    Kwon, Youngwoo; Pavlidis, Dimitris; Tutt, Marcel N.

    1991-01-01

A large-signal analysis method based on a harmonic balance technique and a 2-D cubic spline interpolation function has been developed and applied to the prediction of InP-based HEMT oscillator performance for frequencies extending up to the submillimeter-wave range. The large-signal analysis method uses a limited number of DC and small-signal S-parameter data and allows the accurate characterization of HEMT large-signal behavior. The method has been validated experimentally using load-pull measurements. Oscillation frequency, power performance, and load requirements are discussed, with an operation capability of 300 GHz predicted using state-of-the-art devices (fmax approximately equal to 450 GHz).

  15. Menstruation in adolescents: what do we know? And what do we do with the information?

    PubMed

    Adams Hillard, Paula J

    2014-12-01

The menstrual cycle has been recognized as a vital sign that gives information about the overall health of an adolescent or young adult female. Significant deviations from monthly cycles can signal disease or dysfunction. This review highlights the evidence-based parameters for normal puberty, menarche, cyclicity, and amount of bleeding. The review addresses sources of information available online, noting inaccuracies that appear in websites, even and especially those targeting adolescents. The review includes a call to action to provide accurate information about the menstrual cycle as a VITAL SIGN. Copyright © 2014 North American Society for Pediatric and Adolescent Gynecology. Published by Elsevier Inc. All rights reserved.

  16. Fan broadband interaction noise modeling using a low-order method

    NASA Astrophysics Data System (ADS)

    Grace, S. M.

    2015-06-01

A low-order method for simulating broadband interaction noise downstream of the fan stage in a turbofan engine is explored in this paper. The particular noise source of interest is the interaction of the fan rotor wake with the fan exit guide vanes (FEGVs). The vanes are modeled as flat plates, and the method utilizes strip theory relying on unsteady aerodynamic cascade theory at each strip. This paper shows predictions for 6 of the 9 cases from NASA's Source Diagnostic Test (SDT) and all 4 cases from the 2014 Fan Broadband Workshop Fundamental Case 2 (FC2). The turbulence in the rotor wake is taken from hot-wire data for the low-speed SDT cases and the FC2 cases. Additionally, four different computational simulations of the rotor wake flow for all of the SDT rotor speeds have been used to determine the rotor wake turbulence parameters. Comparisons between predictions based on the different inputs highlight a possible potential-field effect in the hot-wire data for the SDT, as well as the importance of accurately describing the turbulence length scale when using this model. The method produces accurate predictions of the spectral shape for all of the cases. It also predicts reasonably well the trends that can be examined with the included cases, such as vane geometry, vane count, turbulence level, and rotor speed.

  17. Modeling of surface dust concentration in snow cover at industrial area using neural networks and kriging

    NASA Astrophysics Data System (ADS)

    Sergeev, A. P.; Tarasov, D. A.; Buevich, A. G.; Shichkin, A. V.; Tyagunov, A. G.; Medvedev, A. N.

    2017-06-01

Modeling the spatial distribution of pollutants in urbanized territories is difficult, especially if there are multiple emission sources. When monitoring such territories, it is often impossible to arrange the necessary detailed sampling. Because of this, the usual methods of analysis and forecasting based on geostatistics are often less effective. Approaches based on artificial neural networks (ANNs) demonstrate the best results under these circumstances. This study compares two models based on ANNs, a multilayer perceptron (MLP) and generalized regression neural networks (GRNNs), with the baseline geostatistical method, kriging. Models of the spatial dust distribution in the snow cover around an existing copper quarry and in the area of emissions of a nickel factory were created. To assess the effectiveness of the models, three indices were used: the mean absolute error (MAE), the root-mean-square error (RMSE), and the relative root-mean-square error (RRMSE). Taking all indices into account, the GRNN model, which included the coordinates of the sampling points and the distance to the likely emission source as input parameters, proved to be the most accurate. Maps of spatial dust distribution in the snow cover were created for the study area. It has been shown that the models based on ANNs were more accurate than kriging, particularly in the context of a limited data set.
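The three comparison indices are standard and easy to state; note that RRMSE normalization conventions vary, and normalization by the observed mean is assumed here:

```python
import numpy as np

def mae(obs, pred):
    """Mean absolute error."""
    return np.mean(np.abs(obs - pred))

def rmse(obs, pred):
    """Root-mean-square error."""
    return np.sqrt(np.mean((obs - pred) ** 2))

def rrmse(obs, pred):
    """Relative RMSE: RMSE normalized by the mean of the observations."""
    return rmse(obs, pred) / np.mean(obs)

# Illustrative dust-concentration values at four sampling points.
obs = np.array([12.0, 15.0, 9.0, 20.0])
pred = np.array([11.0, 16.0, 10.0, 18.0])
print(mae(obs, pred), round(rmse(obs, pred), 3), round(rrmse(obs, pred), 3))
```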

  18. Stochastic Short-term High-resolution Prediction of Solar Irradiance and Photovoltaic Power Output

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Melin, Alexander M.; Olama, Mohammed M.; Dong, Jin

The increased penetration of solar photovoltaic (PV) energy sources into electric grids has increased the need for accurate modeling and prediction of solar irradiance and power production. Existing modeling and prediction techniques focus on long-term low-resolution prediction over minutes to years. This paper examines the stochastic modeling and short-term high-resolution prediction of solar irradiance and PV power output. We propose a stochastic state-space model to characterize the behaviors of solar irradiance and PV power output. This prediction model is suitable for the development of optimal power controllers for PV sources. A filter-based expectation-maximization and Kalman filtering mechanism is employed to estimate the parameters and states in the state-space model. The mechanism results in a finite-dimensional filter which only uses the first and second order statistics. The structure of the scheme contributes to a direct prediction of the solar irradiance and PV power output without any linearization process or simplifying assumptions of the signal's model. This enables the system to accurately predict small as well as large fluctuations of the solar signals. The mechanism is recursive, allowing the solar irradiance and PV power to be predicted online from measurements. The mechanism is tested using solar irradiance and PV power measurement data collected locally in our lab.
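The recursive prediction step can be sketched with a scalar Kalman filter on a synthetic mean-reverting signal; the paper's mechanism additionally estimates the model parameters online with a filter-based expectation-maximization step, which is omitted here, and all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

# Assumed scalar state-space model: x[k] = a x[k-1] + b + w, y[k] = x[k] + v.
a, b, q, r = 0.98, 12.0, 0.5, 2.0
x = np.zeros(500)
x[0] = 600.0                              # near the mean level b/(1-a)
for k in range(1, 500):
    x[k] = a * x[k - 1] + b + rng.normal(0.0, np.sqrt(q))
y = x + rng.normal(0.0, np.sqrt(r), 500)  # noisy irradiance-like measurements

# Online Kalman recursion: one-step-ahead forecast, then measurement update.
m, p = y[0], 1.0
preds = []
for k in range(1, 500):
    m_pred, p_pred = a * m + b, a * a * p + q      # predict
    preds.append(m_pred)                           # one-step-ahead forecast
    kgain = p_pred / (p_pred + r)                  # update with y[k]
    m = m_pred + kgain * (y[k] - m_pred)
    p = (1.0 - kgain) * p_pred

rms = np.sqrt(np.mean((np.array(preds) - x[1:]) ** 2))
print(round(rms, 2))   # forecast error, on the order of the noise levels
```

Because each update uses only the previous estimate and the new measurement, the filter runs online, which is the property the abstract highlights for real-time PV control.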

  19. System calibration method for Fourier ptychographic microscopy.

    PubMed

    Pan, An; Zhang, Yan; Zhao, Tianyu; Wang, Zhaojun; Dan, Dan; Lei, Ming; Yao, Baoli

    2017-09-01

Fourier ptychographic microscopy (FPM) is a recently proposed computational imaging technique with both high resolution and a wide field of view. In current FPM imaging platforms, systematic error sources include aberrations, light-emitting diode (LED) intensity fluctuation, parameter imperfections, and noise, all of which may severely corrupt the reconstruction results with similar artifacts. It is therefore unlikely that the dominating error could be distinguished from these degraded reconstructions without any prior knowledge. In addition, the systematic error is generally a mixture of various error sources in real situations, and it cannot be separated due to their mutual restriction and conversion. To this end, we report a system calibration procedure, termed SC-FPM, to calibrate the mixed systematic errors simultaneously from an overall perspective, based on the simulated annealing algorithm, the LED intensity correction method, the nonlinear regression process, and an adaptive step-size strategy, which involves the evaluation of an error metric at each iteration step, followed by the re-estimation of accurate parameters. The performance achieved both in simulations and in experiments demonstrates that the proposed method outperforms other state-of-the-art algorithms. The reported system calibration scheme improves the robustness of FPM, relaxes the experimental conditions, and does not require any prior knowledge, which makes FPM more pragmatic. (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  20. Utilization of GPS Tropospheric Delays for Climate Research

    NASA Astrophysics Data System (ADS)

    Suparta, Wayan

    2017-05-01

The tropospheric delay is one of the main error sources in Global Positioning System (GPS) positioning, and its impact plays a crucial role in near-real-time weather forecasting. Accessibility and accurate estimation of this parameter are essential for weather and climate research. Advances in GPS applications have allowed the measurement of zenith tropospheric delay (ZTD) in all weather conditions and on a global scale with fine temporal and spatial resolution. With the rapid advancement of GPS technology and informatics, and the development of research in the field of Earth and planetary sciences, GPS data have become available free of charge, and processing that once required sophisticated techniques is now supported by user-friendly tools. On the other hand, the ZTD parameter obtained from models or measurements needs to be converted into precipitable water vapor (PWV) to make it more useful as a component of weather forecasting and of the analysis of atmospheric hazards such as tropical storms, flash floods, landslides, pollution, and earthquakes, as well as for climate change studies. This paper addresses the determination of ZTD as a signal error or delay source during propagation from the satellite to a receiver on the ground, and as a key driving force behind atmospheric events. Some results in terms of ZTD and PWV are highlighted in this paper.
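The ZTD-to-PWV conversion is commonly done with the Saastamoinen hydrostatic delay and a Bevis-style dimensionless factor; the constants below are as commonly tabulated, and the mean vapor temperature Tm is assumed rather than modeled from surface temperature:

```python
import math

def zhd_saastamoinen(p_hpa, lat_rad, h_m):
    """Zenith hydrostatic delay [m] from surface pressure (Saastamoinen)."""
    return 0.0022768 * p_hpa / (1.0 - 0.00266 * math.cos(2.0 * lat_rad)
                                - 2.8e-7 * h_m)

def pwv_from_ztd(ztd_m, p_hpa, lat_rad, h_m, tm_k=275.0):
    """Convert zenith total delay [m] to precipitable water vapor [mm]."""
    zwd = ztd_m - zhd_saastamoinen(p_hpa, lat_rad, h_m)   # wet delay [m]
    k2p, k3 = 0.221, 3739.0        # refractivity constants in K/Pa, K^2/Pa
    rv, rho = 461.5, 1000.0        # J/(kg K), kg/m^3
    pi_factor = 1e6 / (rho * rv * (k3 / tm_k + k2p))      # roughly 0.15-0.16
    return pi_factor * zwd * 1000.0

# Illustrative low-latitude station: ZTD 2.45 m, 1010 hPa, 3 deg N, 50 m.
pwv = pwv_from_ztd(ztd_m=2.45, p_hpa=1010.0, lat_rad=math.radians(3.0), h_m=50.0)
print(round(pwv, 1))
```

The conversion factor of roughly 0.15 means a 1 mm error in ZWD maps to only about 0.15 mm of PWV, which is why accurate ZTD estimation is so valuable for water vapor monitoring.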

  1. Nanoscale MOS devices: device parameter fluctuations and low-frequency noise (Invited Paper)

    NASA Astrophysics Data System (ADS)

    Wong, Hei; Iwai, Hiroshi; Liou, J. J.

    2005-05-01

It is well known for conventional MOS transistors that the low-frequency noise, or flicker noise, is mainly contributed by trapping-detrapping events in the gate oxide and by mobility fluctuation in the surface channel. In nanoscale MOS transistors, the trapping-detrapping events become less important, because the large direct tunneling current through the ultrathin gate dielectric reduces the probability of trapping-detrapping and the level of leakage current fluctuation. Other noise sources become more significant in nanoscale devices. The source and drain resistance noises have a greater impact on the drain current noise. Significant contributions of the parasitic bipolar transistor noise in ultra-short channels and of channel mobility fluctuation to the channel noise are observed. The channel mobility fluctuation in nanoscale devices could be due to local composition fluctuation of the gate dielectric material, which gives rise to permittivity fluctuation along the channel and results in gigantic channel potential fluctuation. On the other hand, the statistical variations of the device parameters across the wafer make noise measurements less accurate, which will be a challenge for the applicability of analytical flicker noise models as a process or device evaluation tool for nanoscale devices. Some measures for circumventing these difficulties are proposed.

  2. Studies of HZE particle interactions and transport for space radiation protection purposes

    NASA Technical Reports Server (NTRS)

    Townsend, Lawrence W.; Wilson, John W.; Schimmerling, Walter; Wong, Mervyn

    1987-01-01

    The main emphasis is on developing general methods for accurately predicting high-energy heavy ion (HZE) particle interactions and transport for use by researchers in mission planning studies, in evaluating astronaut self-shielding factors, and in spacecraft shield design and optimization studies. The two research tasks are: (1) to develop computationally fast and accurate solutions to the Boltzmann (transport) equation; and (2) to develop accurate HZE interaction models, from fundamental physical considerations, for use as inputs into these transport codes. Accurate solutions to the HZE transport problem have been formulated through a combination of analytical and numerical techniques. In addition, theoretical models for the input interaction parameters are under development: stopping powers, nuclear absorption cross sections, and fragmentation parameters.

  3. Use of Numerical Groundwater Model and Analytical Empirical Orthogonal Function for Calibrating Spatiotemporal pattern of Pumpage, Recharge and Parameter

    NASA Astrophysics Data System (ADS)

    Huang, C. L.; Hsu, N. S.; Hsu, F. C.; Liu, H. J.

    2016-12-01

This study develops a novel methodology for the spatiotemporal groundwater calibration of mega-quantitative recharge and parameters by coupling a specialized numerical model with analytical empirical orthogonal function (EOF) analysis. The actual spatiotemporal patterns of groundwater pumpage are estimated by an originally developed back-propagation neural network-based response matrix combined with electricity consumption analysis. The spatiotemporal patterns of the recharge from surface water and the hydrogeological parameters (i.e., horizontal hydraulic conductivity and vertical leakance) are calibrated by EOF using the simulated error hydrographs of groundwater storage, in order to identify the multiple error sources and quantify the revised volume. The objective function of the optimization model minimizes the root-mean-square error of the simulated storage error percentage across multiple aquifers, subject to the mass balance of the groundwater budget and the governing equation in the transient state. The established method was applied to the groundwater system of the Chou-Shui River Alluvial Fan. The simulated period is from January 2012 to December 2014. The total numbers of hydraulic conductivity, vertical leakance, and surface-water recharge values across the four aquifers are 126, 96, and 1080, respectively. Results showed that the RMSE decreased dramatically during the calibration process and converged within six iterations, because of efficient filtering of the transmission induced by the estimated error and the recharge across the boundary. Moreover, with the calibrated budget variables and parameters, the average simulated groundwater-level error percentage for aquifer one is as small as 0.11%.
This shows that the developed methodology not only effectively detects the flow tendency and error sources in all aquifers, achieving accurate spatiotemporal calibration, but also captures the peaks and fluctuations of the groundwater level in the shallow aquifer.
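The EOF step can be sketched with a singular value decomposition of a synthetic (time × zone) matrix of storage-error hydrographs; the seasonal signal and the zone-5 error pattern below are invented to show how a leading spatial mode localizes a dominant error source:

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic error hydrographs: 36 months x 20 zones, with a seasonal error
# concentrated around zone 5 plus small noise.
t = np.arange(36)
pattern = np.exp(-((np.arange(20) - 5) ** 2) / 8.0)
errors = (np.outer(np.sin(2 * np.pi * t / 12), pattern)
          + 0.05 * rng.normal(size=(36, 20)))

anom = errors - errors.mean(axis=0)          # remove the temporal mean
u, s, vt = np.linalg.svd(anom, full_matrices=False)
var_frac = s ** 2 / np.sum(s ** 2)           # variance per EOF mode
dominant_zone = int(np.argmax(np.abs(vt[0])))
print(round(float(var_frac[0]), 2), dominant_zone)
```

The leading mode's spatial loading peaks at the zone carrying the error, which is the kind of diagnostic the calibration uses to target recharge and parameter corrections.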

  4. A methodology for reduced order modeling and calibration of the upper atmosphere

    NASA Astrophysics Data System (ADS)

    Mehta, Piyush M.; Linares, Richard

    2017-10-01

Atmospheric drag is the largest source of uncertainty in accurately predicting the orbits of satellites in low Earth orbit (LEO). Accurately predicting drag for objects that traverse LEO is critical to space situational awareness. Atmospheric models used for orbital drag calculations can be characterized as either empirical or physics-based (first-principles based). Empirical models are fast to evaluate but offer limited real-time predictive/forecasting ability, while physics-based models offer greater predictive/forecasting ability but require dedicated parallel computational resources. In addition, calibration with accurate data is required for either type of model. This paper presents a new methodology, based on proper orthogonal decomposition, toward the development of a quasi-physical, predictive, reduced order model that combines the speed of empirical models with the predictive/forecasting capabilities of physics-based models. The methodology is developed to reduce the high dimensionality of physics-based models while maintaining their capabilities. We develop the methodology using the Naval Research Lab's Mass Spectrometer Incoherent Scatter model and show that the diurnal and seasonal variations can be captured using a small number of modes and parameters. We also present calibration of the reduced order model using CHAMP and GRACE accelerometer-derived densities. Results show that the method performs well for modeling and calibration of the upper atmosphere.
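The proper-orthogonal-decomposition idea can be sketched on a synthetic field (not NRLMSIS output): snapshots are compressed to a few modes, and a low-dimensional coefficient vector reproduces the variation, which is what makes the reduced model both fast and calibratable:

```python
import numpy as np

# Synthetic snapshots: a traveling diurnal-like wave plus a second harmonic,
# sampled on a 50-point grid at 40 "times" (all values illustrative).
grid = np.linspace(0.0, 2.0 * np.pi, 50)
times = np.linspace(0.0, 1.0, 40)
snaps = np.array([np.sin(grid + 2 * np.pi * tt)
                  + 0.3 * np.sin(2 * grid) * np.cos(2 * np.pi * tt)
                  for tt in times])              # shape (40 snapshots, 50 grid)

# POD: subtract the mean field, take the SVD, and truncate to r modes.
mean = snaps.mean(axis=0)
u, s, vt = np.linalg.svd(snaps - mean, full_matrices=False)
r = 3
recon = mean + (u[:, :r] * s[:r]) @ vt[:r]
err = np.linalg.norm(recon - snaps) / np.linalg.norm(snaps)
print(r, round(float(err), 6))
```

Each snapshot is now represented by only `r` coefficients, and calibration against accelerometer-derived densities amounts to adjusting those few coefficients rather than the full high-dimensional state.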

  5. Importance analysis for Hudson River PCB transport and fate model parameters using robust sensitivity studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, S.; Toll, J.; Cothern, K.

    1995-12-31

    The authors have performed robust sensitivity studies of the physico-chemical Hudson River PCB model PCHEPM to identify the parameters and process uncertainties contributing the most to uncertainty in predictions of water column and sediment PCB concentrations over the period 1977-1991 in one segment of the lower Hudson River. The term "robust sensitivity studies" refers to the use of several sensitivity analysis techniques to obtain a more accurate depiction of the relative importance of different sources of uncertainty. Local sensitivity analysis provided data on the sensitivity of PCB concentration estimates to small perturbations in nominal parameter values. Range sensitivity analysis provided information about the magnitude of prediction uncertainty associated with each input uncertainty. Rank correlation analysis indicated which parameters had the most dominant influence on model predictions. Factorial analysis identified important interactions among model parameters. Finally, term analysis looked at the aggregate influence of combinations of parameters representing physico-chemical processes. The authors scored the results of the local and range sensitivity and rank correlation analyses. They considered parameters that scored high on two of the three analyses to be important contributors to PCB concentration prediction uncertainty, and treated them probabilistically in simulations. They also treated probabilistically parameters identified in the factorial analysis as interacting with important parameters. The authors used the term analysis to better understand how uncertain parameters were influencing the PCB concentration predictions. The importance analysis allowed the number of parameters to be modeled probabilistically to be reduced from 16 to 5. This reduced the computational complexity of Monte Carlo simulations and, more importantly, provided a more lucid depiction of prediction uncertainty and its causes.
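    The rank-correlation screening step can be sketched as follows. The "model" here is a toy surrogate, not PCHEPM, and the parameter names and ranges are hypothetical.

```python
import numpy as np
from scipy.stats import spearmanr

# Monte Carlo sample of three hypothetical model parameters.
rng = np.random.default_rng(1)
n = 2000
settling = rng.uniform(0.8, 1.5, n)     # settling velocity (toy units)
partition = rng.uniform(1e4, 1e6, n)    # partition coefficient (toy units)
burial = rng.uniform(0.01, 0.1, n)      # burial rate (toy units)

# Toy prediction of sediment PCB concentration: nonlinear in the partition
# coefficient, inversely sensitive to settling, barely sensitive to burial.
conc = np.log(partition) / (settling + 0.1) + 0.2 * burial

# Spearman rank correlation of each parameter with the prediction ranks
# their relative influence without assuming linearity.
rhos = {}
for name, x in [("settling", settling), ("partition", partition), ("burial", burial)]:
    rhos[name], _ = spearmanr(x, conc)
    print(f"{name:10s} rho = {rhos[name]:+.2f}")
```

    Parameters with near-zero rank correlation (burial here) are candidates for being held at nominal values, shrinking the probabilistic parameter set in the same spirit as the importance analysis above.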

  6. The circuit parameters measurement of the SABALAN-I plasma focus facility and comparison with Lee Model

    NASA Astrophysics Data System (ADS)

    Karimi, F. S.; Saviz, S.; Ghoranneviss, M.; Salem, M. K.; Aghamir, F. M.

    The circuit parameters are investigated in a Mather-type plasma focus device. The experiments are performed in the SABALAN-I plasma focus facility (2 kJ, 20 kV, 10 μF). A 12-turn Rogowski coil was built and used to measure the time derivative of the discharge current (dI/dt). A high-pressure test was performed in this work as an alternative to the short-circuit test for determining the machine circuit parameters and the calibration factor of the Rogowski coil. The operating parameters were calculated by two methods; the relative errors of the parameters determined by method I are very low in comparison to method II, so method I produces more accurate results. The high-pressure test assumes that there is no plasma motion, so that the circuit parameters may be estimated using R-L-C theory given that C0 is known. However, for a plasma focus it is found that there is significant motion even at the highest permissible pressure, so the circuit parameters estimated in this way are not accurate. The Lee model code is therefore used in short-circuit mode to generate a computed current trace that is fitted to the current waveform obtained by integrating the current-derivative signal from the Rogowski coil. The plasma dynamics are thereby accounted for in the estimation, and the static bank parameters are determined accurately.
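    The short-circuit R-L-C estimate mentioned above can be sketched in a few lines: with the bank capacitance C0 known, the measured ringing period and the ratio of successive same-sign current peaks give the static inductance and resistance under the lightly damped series-RLC approximation. The period and reversal ratio below are assumed illustrative values, not SABALAN-I measurements.

```python
import math

C0 = 10e-6          # bank capacitance, farads (10 uF, from the abstract)
T = 6.0e-6          # measured ringing period, seconds (assumed)
f = 0.6             # ratio of successive same-sign current peaks (assumed)

# Lightly damped approximation: omega_d ~ 1/sqrt(L*C0),
# so L0 ~ T^2 / (4 pi^2 C0).
L0 = T**2 / (4 * math.pi**2 * C0)

# The current envelope decays as exp(-R t / 2L); same-sign peaks are one
# period apart, so f = exp(-R0 T / 2 L0) and R0 = -2 L0 ln(f) / T.
R0 = -2.0 * L0 * math.log(f) / T

print(f"L0 = {L0 * 1e9:.1f} nH, R0 = {R0 * 1e3:.1f} mOhm")
```

    As the abstract notes, in a real plasma focus the residual plasma motion biases these closed-form values, which is why the Lee model fitting is preferred.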

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Newsom, R. K.; Sivaraman, C.; Shippert, T. R.

    Wind speed and direction, together with pressure, temperature, and relative humidity, are the most fundamental atmospheric state parameters. Accurate measurement of these parameters is crucial for numerical weather prediction. Vertically resolved wind measurements in the atmospheric boundary layer are particularly important for modeling pollutant and aerosol transport. Raw data from a scanning coherent Doppler lidar system can be processed to generate accurate height-resolved measurements of wind speed and direction in the atmospheric boundary layer.
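    One common way such radial velocities are turned into height-resolved wind is a velocity-azimuth-display (VAD) style fit, sketched below with synthetic data; this is an illustration of the general technique, not the actual processing algorithm used for this product.

```python
import numpy as np

el = np.deg2rad(60.0)                        # beam elevation angle (assumed)
az = np.deg2rad(np.arange(0.0, 360.0, 10.0)) # scan azimuths
u_true, v_true = 3.0, -4.0                   # eastward/northward wind (m/s)

# Forward model: radial (line-of-sight) velocity, vertical wind neglected.
vr = (u_true * np.sin(az) + v_true * np.cos(az)) * np.cos(el)
vr = vr + 0.1 * np.random.default_rng(2).standard_normal(az.size)

# Least-squares fit of u and v from the sinusoidal azimuth dependence.
A = np.column_stack([np.sin(az) * np.cos(el), np.cos(az) * np.cos(el)])
(u_est, v_est), *_ = np.linalg.lstsq(A, vr, rcond=None)

speed = np.hypot(u_est, v_est)
print(f"u={u_est:.2f} m/s  v={v_est:.2f} m/s  speed={speed:.2f} m/s")
```

    Repeating the fit for each range gate along the beam yields the height-resolved wind profile.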

  8. Reverse radiance: a fast accurate method for determining luminance

    NASA Astrophysics Data System (ADS)

    Moore, Kenneth E.; Rykowski, Ronald F.; Gangadhara, Sanjay

    2012-10-01

    Reverse ray tracing from a region of interest backward to the source has long been proposed as an efficient method of determining luminous flux. The idea is to trace rays only from where the final flux needs to be known back to the source, rather than tracing in the forward direction from the source outward to see where the light goes. Once the reverse ray reaches the source, the radiance the equivalent forward ray would have represented is determined and the resulting flux computed. Although reverse ray tracing is conceptually simple, the method critically depends upon an accurate source model in both the near and far field. An overly simplified source model, such as an ideal Lambertian surface, substantially detracts from the accuracy, and thus the benefit, of the method. This paper will introduce an improved method of reverse ray tracing that we call Reverse Radiance that avoids assumptions about the source properties. The new method uses measured data from a Source Imaging Goniometer (SIG) that simultaneously measures near and far field luminous data. Incorporating this data into a fast reverse ray tracing integration method yields fast, accurate data for a wide variety of illumination problems.

  9. An Assessment of Some Design Constraints on Heat Production of a 3D Conceptual EGS Model Using an Open-Source Geothermal Reservoir Simulation Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yidong Xia; Mitch Plummer; Robert Podgorney

    2016-02-01

    Performance of the heat production process over a 30-year period is assessed in a conceptual EGS model with a geothermal gradient of 65 K per km depth in the reservoir. Water is circulated through a pair of parallel wells connected by a set of single large wing fractures. The results indicate that the desirable output electric power rate and lifespan could be obtained under suitable material properties and system parameters. A sensitivity analysis on some design constraints and operation parameters indicates that 1) the fracture horizontal spacing has a profound effect on the long-term performance of heat production, 2) the downward deviation angle for the parallel doublet wells may help overcome the difficulty of vertical drilling to reach a favorable production temperature, and 3) the thermal energy production rate and lifespan depend closely on the water mass flow rate. The results also indicate that heat production can be improved when the horizontal fracture spacing, well deviation angle, and production flow rate are under reasonable conditions. To conduct the reservoir modeling and simulations, an open-source, finite element based, fully implicit, fully coupled hydrothermal code, namely FALCON, has been developed and used in this work. Compared with most other existing codes in this area, which are either closed-source or commercial, this new open-source code demonstrates a development strategy that aims to provide exceptional ease of user customization and multi-physics coupling. Test results have shown that the FALCON code is able to complete the long-term tests efficiently and accurately, thanks to the state-of-the-art nonlinear and linear solver algorithms implemented in the code.

  10. Identification of immiscible NAPL contaminant sources in aquifers by a modified two-level saturation based imperialist competitive algorithm

    NASA Astrophysics Data System (ADS)

    Ghafouri, H. R.; Mosharaf-Dehkordi, M.; Afzalan, B.

    2017-07-01

    A simulation-optimization model is proposed for identifying the characteristics of local immiscible NAPL contaminant sources inside aquifers. This model employs the UTCHEM 9.0 software as its simulator for solving the governing equations associated with multi-phase flow in porous media. As the optimization model, a novel two-level saturation-based Imperialist Competitive Algorithm (ICA) is proposed to estimate the parameters of contaminant sources. The first level consists of three parallel independent ICAs and serves as a pre-conditioner for the second level, which is a single modified ICA. The ICA in the second level is modified by dividing each country into a number of provinces (smaller parts). Similar to countries in the classical ICA, these provinces are optimized by the assimilation, competition, and revolution steps of the ICA. To increase the diversity of populations, a new approach named the knock-the-base method is proposed. The performance and accuracy of the simulation-optimization model are assessed by solving a set of two- and three-dimensional problems considering the effects of different parameters such as the grid size, rock heterogeneity, and designated monitoring networks. The obtained numerical results indicate that this simulation-optimization model provides accurate results in fewer iterations than a model employing the classical one-level ICA.
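    For readers unfamiliar with the baseline algorithm, here is a bare-bones sketch of a classical single-level ICA loop (assimilation plus revolution, with competition approximated by re-ranking each iteration) on a toy two-parameter misfit; this is not the paper's two-level, province-based variant, and all constants are illustrative.

```python
import numpy as np

def cost(p):
    # toy misfit surface with its minimum at (3, -2)
    return (p[..., 0] - 3.0) ** 2 + (p[..., 1] + 2.0) ** 2

rng = np.random.default_rng(6)
countries = rng.uniform(-10, 10, (50, 2))   # candidate parameter vectors
n_imp = 5                                   # number of imperialists
for _ in range(200):
    # rank the population; the best n_imp countries act as imperialists
    countries = countries[np.argsort(cost(countries))]
    imperialists, colonies = countries[:n_imp], countries[n_imp:]
    # assimilation: each colony moves a random fraction (up to beta = 2)
    # toward a randomly chosen imperialist
    owners = imperialists[rng.integers(0, n_imp, len(colonies))]
    step = rng.random((len(colonies), 2)) * 2.0
    colonies = colonies + step * (owners - colonies)
    # revolution: occasional random restarts keep population diversity
    kick = rng.random(len(colonies)) < 0.05
    colonies[kick] = rng.uniform(-10, 10, (kick.sum(), 2))
    countries = np.vstack([imperialists, colonies])

best = countries[np.argmin(cost(countries))]
print("best parameter estimate:", best)
```

    In the paper's variant, the second-level answer strings are further split into provinces that undergo these same steps, which is what accelerates convergence.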

  11. Parameter extraction using global particle swarm optimization approach and the influence of polymer processing temperature on the solar cell parameters

    NASA Astrophysics Data System (ADS)

    Kumar, S.; Singh, A.; Dhar, A.

    2017-08-01

    The accurate estimation of photovoltaic parameters is fundamental to gaining insight into the physical processes occurring inside a photovoltaic device and thereby optimizing its design, fabrication processes, and quality. A simulative approach to accurately determining the device parameters is crucial for cell array and module simulation in practical on-field applications. In this work, we have developed a global particle swarm optimization (GPSO) approach to estimate the different solar cell parameters, viz., ideality factor (η), short circuit current (Isc), open circuit voltage (Voc), shunt resistance (Rsh), and series resistance (Rs), with a wide search range of over ±100% for each model parameter. After validating the accuracy and global search power of the proposed approach with synthetic and noisy data, we applied the technique to extract the PV parameters of ZnO/PCDTBT based hybrid solar cells (HSCs) prepared under different annealing conditions. Further, we examine the variation of the extracted model parameters to unveil the physical processes occurring when different annealing temperatures are employed during device fabrication, and establish the role of improved charge transport in polymer films from independent FET measurements. The evolution of the surface morphology, optical absorption, and chemical composition of the PCDTBT co-polymer films as a function of processing temperature has also been captured in the study and correlated with the findings from the PV parameters extracted using the GPSO approach.
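    A hedged sketch of particle-swarm parameter extraction follows. For brevity the model is a simplified explicit diode equation (series and shunt resistances neglected) rather than the full model of the paper, the data are synthetic, and the inertia/acceleration constants are common textbook choices, not the authors' GPSO settings.

```python
import numpy as np

rng = np.random.default_rng(3)
Vt = 0.02585                                   # thermal voltage at ~300 K
V = np.linspace(0.0, 0.6, 30)

def model(p, V):
    iph, i0, eta = p                           # photocurrent, saturation current, ideality
    return iph - i0 * (np.exp(V / (eta * Vt)) - 1.0)

true_p = np.array([0.5, 1e-9, 1.5])            # synthetic "measured" device
I_meas = model(true_p, V)

def rmse(p):
    return np.sqrt(np.mean((model(p, V) - I_meas) ** 2))

# PSO over box bounds (a wide search range, in the spirit of the paper).
lo = np.array([0.1, 1e-10, 1.0])
hi = np.array([1.0, 1e-8, 2.0])
n_part, n_iter = 40, 200
x = rng.uniform(lo, hi, (n_part, 3))
vel = np.zeros_like(x)
pbest = x.copy()
pbest_f = np.array([rmse(p) for p in x])
gbest = pbest[pbest_f.argmin()].copy()

w, c1, c2 = 0.72, 1.49, 1.49                   # standard PSO constants
for _ in range(n_iter):
    r1, r2 = rng.random((2, n_part, 3))
    vel = w * vel + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + vel, lo, hi)
    f = np.array([rmse(p) for p in x])
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print("extracted [Iph, I0, eta]:", gbest, " rmse:", rmse(gbest))
```

    The same loop extends naturally to the five-parameter model by solving the implicit I-V equation inside `model`.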

  12. An R-Shiny Based Phenology Analysis System and Case Study Using a Digital Camera Dataset

    NASA Astrophysics Data System (ADS)

    Zhou, Y. K.

    2018-05-01

    Accurate extraction of vegetation phenology information plays an important role in exploring the effects of climate change on vegetation. Repeat photography from digital cameras is a useful and very large data source for phenological analysis, but processing and mining these data remains a major challenge: there is no single tool or universal solution for big-data processing and visualization in the field of phenology extraction. In this paper, we propose an R-Shiny based web application for extracting and analyzing vegetation phenological parameters. Its main functions include phenological site distribution visualization, ROI (Region of Interest) selection, vegetation index calculation and visualization, data filtering, growth trajectory fitting, and phenology parameter extraction. As an example, the long-term observational photography data from the Freemanwood site in 2013 are processed by this system. The results show that (1) the system is capable of analyzing large data volumes using a distributed framework, and (2) the combination of multiple parameter-extraction and growth-curve-fitting methods can effectively extract the key phenology parameters, although there are discrepancies between different method combinations in particular study areas. Vegetation with a single growth peak is well suited to fitting the growth trajectory with the double logistic model, while vegetation with multiple growth peaks is better handled with the spline method.
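    The double-logistic growth-curve fitting mentioned in the conclusions can be sketched as follows; the greenness series is synthetic and the parameterization is a common phenology convention, not necessarily the one used in the R-Shiny system.

```python
import numpy as np
from scipy.optimize import curve_fit

def double_logistic(t, base, amp, sos, r1, eos, r2):
    # baseline + amplitude * (spring rise - autumn fall)
    return base + amp * (1.0 / (1.0 + np.exp(-r1 * (t - sos)))
                         - 1.0 / (1.0 + np.exp(-r2 * (t - eos))))

# Synthetic green-chromatic-coordinate series over one year.
doy = np.arange(1, 366, 3.0)
true = (0.32, 0.10, 120.0, 0.10, 280.0, 0.08)   # illustrative values
gcc = double_logistic(doy, *true)
gcc = gcc + 0.003 * np.random.default_rng(4).standard_normal(doy.size)

# Nonlinear least-squares fit; sos/eos are the key phenology parameters.
p0 = (0.3, 0.08, 100.0, 0.05, 260.0, 0.05)
popt, _ = curve_fit(double_logistic, doy, gcc, p0=p0, maxfev=20000)
sos_est, eos_est = popt[2], popt[4]
print(f"start of season ~ day {sos_est:.0f}, end of season ~ day {eos_est:.0f}")
```

    For multi-peak vegetation this single rise-and-fall form misfits, which is why the abstract recommends spline methods in that case.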

  13. Underwater passive acoustic localization of Pacific walruses in the northeastern Chukchi Sea.

    PubMed

    Rideout, Brendan P; Dosso, Stan E; Hannay, David E

    2013-09-01

    This paper develops and applies a linearized Bayesian localization algorithm based on acoustic arrival times of marine mammal vocalizations at spatially-separated receivers which provides three-dimensional (3D) location estimates with rigorous uncertainty analysis. To properly account for uncertainty in receiver parameters (3D hydrophone locations and synchronization times) and environmental parameters (water depth and sound-speed correction), these quantities are treated as unknowns constrained by prior estimates and prior uncertainties. Unknown scaling factors on both the prior and arrival-time uncertainties are estimated by minimizing Akaike's Bayesian information criterion (a maximum entropy condition). Maximum a posteriori estimates for sound source locations and times, receiver parameters, and environmental parameters are calculated simultaneously using measurements of arrival times for direct and interface-reflected acoustic paths. Posterior uncertainties for all unknowns incorporate both arrival time and prior uncertainties. Monte Carlo simulation results demonstrate that, for the cases considered here, linearization errors are small and the lack of an accurate sound-speed profile does not cause significant biases in the estimated locations. A sequence of Pacific walrus vocalizations, recorded in the Chukchi Sea northwest of Alaska, is localized using this technique, yielding a track estimate and uncertainties with an estimated speed comparable to normal walrus swim speeds.
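    The core operation of the scheme, iterative linearized inversion of arrival times, can be sketched with a Gauss-Newton solver. For brevity the receiver positions, sound speed, and source depth are treated as known here (the paper instead assigns them priors and estimates their uncertainties); all coordinates and values are illustrative.

```python
import numpy as np

c = 1480.0                                     # sound speed, m/s (assumed)
z_src = 20.0                                   # source depth, m (assumed known)
rx = np.array([[0.0, 0.0, 10.0], [800.0, 0.0, 60.0],
               [0.0, 900.0, 35.0], [700.0, 850.0, 80.0]])
xy_true = np.array([300.0, 400.0])
t0_true = 1.0                                  # unknown emission time, s

def ranges(xy):
    src = np.array([xy[0], xy[1], z_src])
    return np.linalg.norm(rx - src, axis=1)

t_obs = t0_true + ranges(xy_true) / c          # synthetic arrival times

p = np.array([100.0, 100.0, 0.0])              # initial guess [x, y, t0]
for _ in range(15):
    d = ranges(p[:2])
    resid = t_obs - (p[2] + d / c)
    # Jacobian of predicted arrival times w.r.t. [x, y, t0]
    J = np.column_stack([(p[0] - rx[:, 0]) / (c * d),
                         (p[1] - rx[:, 1]) / (c * d),
                         np.ones(len(rx))])
    dp, *_ = np.linalg.lstsq(J, resid, rcond=None)
    p += dp

print(f"estimated x={p[0]:.1f} m, y={p[1]:.1f} m, t0={p[2]:.3f} s")
```

    In the full Bayesian treatment the same linearization is augmented with prior terms for the receiver and environmental parameters, and the posterior covariance follows from the final Jacobian.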

  14. Numerical framework for the modeling of electrokinetic flows

    NASA Astrophysics Data System (ADS)

    Deshpande, Manish; Ghaddar, Chahid; Gilbert, John R.; St. John, Pamela M.; Woudenberg, Timothy M.; Connell, Charles R.; Molho, Joshua; Herr, Amy; Mungal, Godfrey; Kenny, Thomas W.

    1998-09-01

    This paper presents a numerical framework for design-based analyses of electrokinetic flow in interconnects. Electrokinetic effects, which can be broadly divided into electrophoresis and electroosmosis, are of importance in providing a transport mechanism in microfluidic devices for both pumping and separation. Models for the electrokinetic effects can be derived and coupled to the fluid dynamic equations through appropriate source terms. In the design of practical microdevices, however, accurate coupling of the electrokinetic effects requires knowledge of several material and physical parameters, such as the diffusivity and the mobility of the solute in the solvent. Additionally, wall-based effects such as chemical binding sites may exist that affect the flow patterns. In this paper, we address some of these issues by describing a synergistic numerical/experimental process to extract the required parameters. Experiments were conducted to provide the numerical simulations with a mechanism to extract these parameters based on quantitative comparisons with each other. These parameters were then applied in predicting further experiments to validate the process. As part of this research, we have created NetFlow, a tool for micro-fluid analyses. The tool can be validated and applied in existing technologies by first creating test structures to extract representations of the physical phenomena in the device, and then applying them in the design analyses to predict correct behavior.

  15. Reaction Wheel Disturbance Model Extraction Software - RWDMES

    NASA Technical Reports Server (NTRS)

    Blaurock, Carl

    2009-01-01

    The RWDMES is a tool for modeling the disturbances imparted on spacecraft by spinning reaction wheels. Reaction wheels are usually the largest disturbance source on a precision pointing spacecraft, and can be the dominating source of pointing error. Accurate knowledge of the disturbance environment is critical to accurate prediction of the pointing performance. In the past, it has been difficult to extract an accurate wheel disturbance model since the forcing mechanisms are difficult to model physically, and the forcing amplitudes are filtered by the dynamics of the reaction wheel. RWDMES captures the wheel-induced disturbances using a hybrid physical/empirical model that is extracted directly from measured forcing data. The empirical models capture the tonal forces that occur at harmonics of the spin rate, and the broadband forces that arise from random effects. The empirical forcing functions are filtered by a physical model of the wheel structure that includes spin-rate-dependent moments (gyroscopic terms). The resulting hybrid model creates a highly accurate prediction of wheel-induced forces. It accounts for variation in disturbance frequency, as well as the shifts in structural amplification by the whirl modes, as the spin rate changes. This software provides a point-and-click environment for producing accurate models with minimal user effort. Where conventional approaches may take weeks to produce a model of variable quality, RWDMES can create a demonstrably high accuracy model in two hours. The software consists of a graphical user interface (GUI) that enables the user to specify all analysis parameters, to evaluate analysis results and to iteratively refine the model. Underlying algorithms automatically extract disturbance harmonics, initialize and tune harmonic models, and initialize and tune broadband noise models. 
The component steps are described in the RWDMES user's guide and include: converting time domain data to waterfall PSDs (power spectral densities); converting PSDs to order analysis data; extracting harmonics; initializing and simultaneously tuning a harmonic model and a wheel structural model; initializing and tuning a broadband model; and verifying the harmonic/broadband/structural model against the measurement data. Functional operation is through a MATLAB GUI that loads test data, performs the various analyses, plots evaluation data for assessment and refinement of analysis parameters, and exports the data to documentation or downstream analysis code. The harmonic models are defined as specified functions of frequency, typically speed-squared. The reaction wheel structural model is realized as mass, damping, and stiffness matrices (typically from a finite element analysis package) with the addition of a gyroscopic forcing matrix. The broadband noise model is realized as a set of speed-dependent filters. The tuning of the combined model is performed using nonlinear least squares techniques. RWDMES is implemented as a MATLAB toolbox comprising the Fit Manager for performing the model extraction, Data Manager for managing input data and output models, the Gyro Manager for modifying wheel structural models, and the Harmonic Editor for evaluating and tuning harmonic models. This software was validated using data from Goodrich E wheels, and from GSFC Lunar Reconnaissance Orbiter (LRO) wheels. The validation testing proved that RWDMES has the capability to extract accurate disturbance models from flight reaction wheels with minimal user effort.
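    The first listed step (time-domain data to waterfall PSDs) can be sketched with a standard Welch estimate; the forcing data below are synthetic tones plus noise, not Goodrich or LRO wheel measurements, and the sample rate is assumed.

```python
import numpy as np
from scipy.signal import welch

fs = 1000.0                        # sample rate in Hz (assumed)
rng = np.random.default_rng(5)
speeds_hz = [20.0, 30.0, 40.0]     # wheel spin rates forming the waterfall
peaks = []
for spin in speeds_hz:
    t = np.arange(0, 10, 1 / fs)
    # synthetic forcing: a tone at the spin frequency plus broadband noise
    force = np.sin(2 * np.pi * spin * t) + 0.1 * rng.standard_normal(t.size)
    f, pxx = welch(force, fs=fs, nperseg=2048)   # one PSD slice of the waterfall
    peaks.append(f[np.argmax(pxx)])
    print(f"spin {spin:.0f} Hz -> PSD peak at {peaks[-1]:.1f} Hz")
```

    Stacking the PSD slices against wheel speed exposes the harmonic ridges (disturbances at multiples of the spin rate) that the subsequent order analysis and harmonic-extraction steps operate on.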

  16. How to obtain accurate resist simulations in very low-k1 era?

    NASA Astrophysics Data System (ADS)

    Chiou, Tsann-Bim; Park, Chan-Ha; Choi, Jae-Seung; Min, Young-Hong; Hansen, Steve; Tseng, Shih-En; Chen, Alek C.; Yim, Donggyu

    2006-03-01

    A procedure for calibrating a resist model iteratively adjusts appropriate parameters until the simulations of the model match the experimental data. The tunable parameters may include the shape of the illuminator, the geometry and transmittance/phase of the mask, light source and scanner-related parameters that affect imaging quality, resist process control and most importantly the physical/chemical factors in the resist model. The resist model can be accurately calibrated by measuring critical dimensions (CD) of a focus-exposure matrix (FEM) and the technique has been demonstrated to be very successful in predicting lithographic performance. However, resist model calibration is more challenging in the low k1 (<0.3) regime because numerous uncertainties, such as mask and resist CD metrology errors, are becoming too large to be ignored. This study demonstrates a resist model calibration procedure for a 0.29 k1 process using a 6% halftone mask containing 2D brickwall patterns. The influence of different scanning electron microscopes (SEM) and their wafer metrology signal analysis algorithms on the accuracy of the resist model is evaluated. As an example of the metrology issue of the resist pattern, the treatment of a sidewall angle is demonstrated for the resist line ends where the contrast is relatively low. Additionally, the mask optical proximity correction (OPC) and corner rounding are considered in the calibration procedure that is based on captured SEM images. Accordingly, the average root-mean-square (RMS) error, which is the difference between simulated and experimental CDs, can be improved by considering the metrological issues. Moreover, a weighting method and a measured CD tolerance are proposed to handle the different CD variations of the various edge points of the wafer resist pattern. After the weighting method is implemented and the CD selection criteria applied, the RMS error can be further suppressed. 
Therefore, the resist CD and process window can be confidently evaluated using the accurately calibrated resist model. One of the examples simulates the sensitivity of the mask pattern error, which is helpful to specify the mask CD control.

  17. Neural network feedforward control of a closed-circuit wind tunnel

    NASA Astrophysics Data System (ADS)

    Sutcliffe, Peter

    Accurate control of wind-tunnel test conditions can be dramatically enhanced using feedforward control architectures which allow operating conditions to be maintained at a desired setpoint through the use of mathematical models as the primary source of prediction. However, as the desired accuracy of the feedforward prediction increases, the model complexity also increases, so that an ever increasing computational load is incurred. This drawback can be avoided by employing a neural network that is trained offline using the output of a high fidelity wind-tunnel mathematical model, so that the neural network can rapidly reproduce the predictions of the model with a greatly reduced computational overhead. A novel neural network database generation method, developed through the use of fractional factorial arrays, was employed such that a neural network can accurately predict wind-tunnel parameters across a wide range of operating conditions whilst trained upon a highly efficient database. The subsequent network was incorporated into a Neural Network Model Predictive Control (NNMPC) framework to allow an optimised output schedule capable of providing accurate control of the wind-tunnel operating parameters. Facilitation of an optimised path through the solution space is achieved through the use of a chaos optimisation algorithm such that a more globally optimum solution is likely to be found with less computational expense than the gradient descent method. The parameters associated with the NNMPC such as the control horizon are determined through the use of a Taguchi methodology enabling the minimum number of experiments to be carried out to determine the optimal combination. The resultant NNMPC scheme was employed upon the Hessert Low Speed Wind Tunnel at the University of Notre Dame to control the test-section temperature such that it follows a pre-determined reference trajectory during changes in the test-section velocity. 
Experimental testing revealed that the derived NNMPC controller provided an excellent level of control over the test-section temperature in adherence to a reference trajectory even when faced with unforeseen disturbances such as rapid changes in the operating environment.

  18. The determination of operational and support requirements and costs during the conceptual design of space systems

    NASA Technical Reports Server (NTRS)

    Ebeling, Charles; Beasley, Kenneth D.

    1992-01-01

    The first year of research to provide NASA support in predicting operational and support parameters and costs of proposed space systems is reported. Some of the specific research objectives were (1) to develop a methodology for deriving reliability and maintainability parameters and, based upon their estimates, determine the operational capability and support costs, and (2) to identify data sources and establish an initial data base to implement the methodology. Implementation of the methodology is accomplished through the development of a comprehensive computer model. While the model appears to work reasonably well when applied to aircraft systems, it was not accurate when used for space systems. The model is dynamic and should be updated as new data become available. It is particularly important to integrate the current aircraft data base with data obtained from the Space Shuttle and other space systems since subsystems unique to a space vehicle require data not available from aircraft. This research only addressed the major subsystems on the vehicle.

  19. Solving the relativistic inverse stellar problem through gravitational waves observation of binary neutron stars

    NASA Astrophysics Data System (ADS)

    Abdelsalhin, Tiziano; Maselli, Andrea; Ferrari, Valeria

    2018-04-01

    The LIGO/Virgo Collaboration has recently announced the direct detection of gravitational waves emitted in the coalescence of a neutron star binary. This discovery makes it possible, for the first time, to set new constraints on the behavior of matter at supranuclear density, complementary to those coming from astrophysical observations in the electromagnetic band. In this paper we demonstrate the feasibility of using gravitational signals to solve the relativistic inverse stellar problem, i.e., to reconstruct the parameters of the equation of state (EoS) from measurements of the stellar mass and tidal Love number. We perform Bayesian inference on mock data, based on different models of the star's internal composition, modeled through piecewise polytropes. Our analysis shows that the detection of a small number of sources by a network of advanced interferometers would allow accurate bounds to be put on the EoS parameters and a model selection to be performed among the realistic equations of state proposed in the literature.

  20. An information propagation model considering incomplete reading behavior in microblog

    NASA Astrophysics Data System (ADS)

    Su, Qiang; Huang, Jiajia; Zhao, Xiande

    2015-02-01

    Microblog is one of the most popular communication channels on the Internet, and has already become the third largest source of news and public opinions in China. Although researchers have studied information propagation in microblogs using epidemic models, previous studies have not considered the incomplete reading behavior of microblog users, and therefore these models cannot fit real situations well. In this paper, we propose an improved model, Microblog-Susceptible-Infected-Removed (Mb-SIR), for information propagation that explicitly considers the user's incomplete reading behavior. We also test the effectiveness of the model using real data from Sina Microblog. We demonstrate that the newly proposed model is more accurate in describing information propagation in microblog. In addition, we investigate the effects of the critical model parameters, e.g., reading rate, spreading rate, and removal rate, through numerical simulations. The simulation results show that, compared with the other parameters, the reading rate plays the most influential role in information propagation performance in microblog.
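    Since the abstract does not reproduce the Mb-SIR equations, the sketch below is a toy SIR variant in which the spreading rate is scaled by a reading rate, merely to illustrate why the reading rate can dominate the outcome; all parameter values are illustrative.

```python
# Hedged sketch: SIR with the infection term multiplied by the probability
# that a follower actually reads a post (read_rate).
def simulate(read_rate, beta=0.5, gamma=0.1, days=60, dt=0.01):
    """Forward-Euler integration of the toy reading-modified SIR model."""
    s, i, r = 0.999, 0.001, 0.0
    peak = i
    for _ in range(int(days / dt)):
        new_inf = beta * read_rate * s * i * dt   # spread only if post is read
        new_rem = gamma * i * dt                  # spreaders lose interest
        s -= new_inf
        i += new_inf - new_rem
        r += new_rem
        peak = max(peak, i)
    return peak, r

for rr in (0.2, 0.5, 1.0):
    peak, final = simulate(rr)
    print(f"read_rate={rr:.1f}  peak spreaders={peak:.3f}  total reached={final:.3f}")
```

    Because the reading rate multiplies the spreading term, it directly rescales the effective reproduction number, which is consistent with the abstract's finding that it is the most influential parameter.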

  1. Determination of Galactic Aberration from VLBI Measurements and Its Effect on VLBI Reference Frames and Earth Orientation Parameters.

    NASA Astrophysics Data System (ADS)

    MacMillan, D. S.

    2014-12-01

    Galactic aberration is due to the motion of the solar system barycenter around the galactic center. It results in a systematic pattern of apparent proper motion of radio sources observed by VLBI. This effect is not currently included in VLBI analysis. Estimates of the size of this effect indicate that it is important that this secular aberration drift be accounted for in order to maintain an accurate celestial reference frame and allow astrometry at the several-microarcsecond level. Future geodetic observing systems are being designed to be capable of producing a future terrestrial reference frame with an accuracy of 1 mm and stability of 0.1 mm/year. We evaluate the effect of galactic aberration on attaining these reference frame goals. This presentation will discuss 1) the estimation of galactic aberration from VLBI data and 2) the effect of aberration on the Terrestrial and Celestial Reference Frames and the Earth Orientation Parameters that connect these frames.

  2. Markets, Herding and Response to External Information.

    PubMed

    Carro, Adrián; Toral, Raúl; San Miguel, Maxi

    2015-01-01

    We focus on the influence of external sources of information upon financial markets. In particular, we develop a stochastic agent-based market model characterized by a certain herding behavior as well as allowing traders to be influenced by an external dynamic signal of information. This signal can be interpreted as a time-varying advertising, public perception or rumor, in favor or against one of two possible trading behaviors, thus breaking the symmetry of the system and acting as a continuously varying exogenous shock. As an illustration, we use a well-known German Indicator of Economic Sentiment as information input and compare our results with Germany's leading stock market index, the DAX, in order to calibrate some of the model parameters. We study the conditions for the ensemble of agents to more accurately follow the information input signal. The response of the system to the external information is maximal for an intermediate range of values of a market parameter, suggesting the existence of three different market regimes: amplification, precise assimilation and undervaluation of incoming information.

  3. A compact model and direct parameter extraction techniques for amorphous gallium-indium-zinc-oxide thin film transistors

    NASA Astrophysics Data System (ADS)

    Moldovan, Oana; Castro-Carranza, Alejandra; Cerdeira, Antonio; Estrada, Magali; Barquinha, Pedro; Martins, Rodrigo; Fortunato, Elvira; Miljakovic, Slobodan; Iñiguez, Benjamin

    2016-12-01

    An advanced compact and analytical drain current model for the amorphous gallium indium zinc oxide (GIZO) thin film transistors (TFTs) is proposed. Its output saturation behavior is improved by introducing a new asymptotic function. All model parameters were extracted using an adapted version of the Universal Method and Extraction Procedure (UMEM) applied for the first time for GIZO devices in a simple and direct form. We demonstrate the correct behavior of the model for negative VDS, a necessity for a complete compact model. In this way we prove the symmetry of source and drain electrodes and extend the range of applications to both signs of VDS. The model, in Verilog-A code, is implemented in Electronic Design Automation (EDA) tools, such as Smart Spice, and compared with measurements of TFTs. It describes accurately the experimental characteristics in the whole range of GIZO TFTs operation, making the model suitable for the design of circuits using these types of devices.

  4. Simulation study and guidelines to generate Laser-induced Surface Acoustic Waves for human skin feature detection

    NASA Astrophysics Data System (ADS)

    Li, Tingting; Fu, Xing; Chen, Kun; Dorantes-Gonzalez, Dante J.; Li, Yanning; Wu, Sen; Hu, Xiaotang

    2015-12-01

    Despite the sharply increasing number of people contracting skin cancer every year, limited attention has been given to the investigation of human skin tissues. In this regard, Laser-induced Surface Acoustic Wave (LSAW) technology, with its accurate, non-invasive and rapid testing characteristics, has recently shown promising results in biological and biomedical tissues. In order to improve the measurement accuracy and efficiency of detecting important features in highly opaque and soft surfaces such as human skin, this paper identifies the most important parameters of a pulse laser source and provides practical guidelines on the parameter ranges recommended for generating Surface Acoustic Waves (SAWs) for characterization purposes. Considering that melanoma is a serious type of skin cancer, we conducted finite element simulation-based research on the generation and propagation of surface waves in human skin containing a melanoma-like feature, and determined the best pulse laser parameter ranges, simulation mesh size and time step, working bandwidth, and minimal detectable melanoma size.

  5. Gain determination of optical active doped planar waveguides

    NASA Astrophysics Data System (ADS)

    Šmejcký, J.; Jeřábek, V.; Nekvindová, P.

    2017-12-01

    This paper summarizes the results of gain transmission characteristics measurements carried out on new ion-exchange Ag+ - Na+ optical Er3+ and Yb3+ doped active planar waveguides realized on silica-based glass substrates. The results were used to optimize the precursor concentration in the glass substrates. The gain measurements were performed by a time-domain method using a pulse generator, as well as by a broadband measurement method using a supercontinuum optical source in the wavelength domain. Both methods were compared and the results were graphically processed. It has been confirmed that the pulse method is useful as it provides a very accurate measurement of the gain versus pumping power characteristics at one wavelength. In the case of radiation spectral characteristics, our measurement exactly determined the wavelength bandwidth of maximum gain of the active waveguide. The spectral characteristics of the pumped and unpumped waveguides were compared. The gain parameters of the reported silica-based glasses can be compared with those of the phosphate-based glasses typically used in active optical devices.

  6. SEISMIC SOURCE SCALING AND DISCRIMINATION IN DIVERSE TECTONIC ENVIRONMENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abercrombie, R E; Mayeda, K; Walter, W R

    2008-07-08

    The objectives of this study are to improve low-magnitude (concentrating on M2.5-5) regional seismic discrimination by performing a thorough investigation of earthquake source scaling using diverse, high-quality datasets from varied tectonic regions. Local-to-regional high-frequency discrimination requires an estimate of how earthquakes scale with size. Walter and Taylor (2002) developed the MDAC (Magnitude and Distance Amplitude Corrections) method to empirically account for these effects through regional calibration. The accuracy of these corrections has a direct impact on our ability to identify clandestine explosions in the broad regional areas characterized by low seismicity. Unfortunately, our knowledge at small magnitudes (i.e., m{sub b} < {approx} 4.0) is poorly resolved, and source scaling remains a subject of on-going debate in the earthquake seismology community. Recently there have been a number of empirical studies suggesting that the scaling of micro-earthquakes is non-self-similar, yet there are an equal number of compelling studies that suggest otherwise. It is not clear whether different studies obtain different results because they analyze different earthquakes, or because they use different methods. Even in regions that are well studied, such as test sites or areas of high seismicity, we still rely on empirical scaling relations derived from studies of inter-plate regions half-way around the world. We investigate earthquake sources and scaling from different tectonic settings, comparing direct and coda wave analysis methods that both make use of empirical Green's function (EGF) earthquakes to remove path effects. Analysis of locally recorded, direct waves from events is intuitively the simplest way of obtaining accurate source parameters, as these waves have been least affected by travel through the earth.
But finding well-recorded earthquakes with 'perfect' EGF events for direct wave analysis is difficult, which limits the number of earthquakes that can be studied. We begin with closely located, well-correlated earthquakes. We use a multi-taper method to obtain time-domain source time functions by frequency division. We only accept an earthquake and EGF pair if they produce a clear, time-domain source pulse. We fit the spectral ratios and perform a grid search about the preferred parameters to ensure the fits are well constrained. We then model the spectral (amplitude) ratio to determine source parameters from both direct P and S waves. We analyze three clusters of aftershocks from the well-recorded sequence following the M5 Au Sable Forks, NY, earthquake to obtain some of the first accurate source parameters for small earthquakes in eastern North America. Each cluster contains an M{approx}2 event, and two contain M{approx}3 events, as well as smaller aftershocks. We find that the corner frequencies and stress drops are high (averaging 100 MPa), confirming previous work suggesting that intraplate continental earthquakes have higher stress drops than events at plate boundaries. We also demonstrate that a scaling breakdown suggested by earlier work is simply an artifact of their more band-limited data. We calculate radiated energy, and find that the ratio of energy to seismic moment is also high, around 10{sup -4}. We estimate source parameters for the M5 mainshock using similar methods, but our results are more uncertain because we do not have an EGF event that meets our preferred criteria. The stress drop and energy/moment ratio for the mainshock are slightly higher than for the aftershocks. Our improved and simplified coda wave analysis method uses spectral ratios (as for the direct waves) but relies on the averaging nature of the coda waves to use EGF events that do not meet the strict criteria of similarity required for the direct wave analysis.
We have applied the coda wave spectral ratio method to the 1999 Hector Mine mainshock (M{sub w} 7.0, Mojave Desert) and its larger aftershocks, and also to several sequences in Italy with M{approx}6 mainshocks. The Italian earthquakes have higher stress drops than the Hector Mine sequence, but lower than Au Sable Forks. These results show a departure from self-similarity, consistent with previous studies using similar regional datasets. The larger earthquakes have higher stress drops and energy/moment ratios. We perform a preliminary comparison of the two methods using the M5 Au Sable Forks earthquake. Both methods give very consistent results, and we are applying the comparison to further events.
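    The spectral-ratio fitting step common to both the direct-wave and coda methods can be sketched as follows, assuming the standard omega-squared (Brune) source model for both the target event and its EGF; this is a generic illustration, not the authors' code:

```python
import numpy as np

def brune_ratio(f, moment_ratio, fc1, fc2):
    """Ratio of two omega-squared (Brune) source spectra: target event
    (corner frequency fc1) over EGF event (corner frequency fc2 > fc1)."""
    return moment_ratio * (1.0 + (f / fc2) ** 2) / (1.0 + (f / fc1) ** 2)

def fit_spectral_ratio(f, ratio, moment_ratio, fc_grid):
    """Grid search over corner-frequency pairs (fc1 < fc2) minimizing the
    misfit of the log spectral ratio; the seismic moment ratio is assumed
    known here for simplicity."""
    best = (np.inf, None, None)
    for fc1 in fc_grid:
        for fc2 in fc_grid:
            if fc2 <= fc1:
                continue
            model = brune_ratio(f, moment_ratio, fc1, fc2)
            misfit = float(np.sum((np.log(ratio) - np.log(model)) ** 2))
            if misfit < best[0]:
                best = (misfit, fc1, fc2)
    return best  # (misfit, fc1, fc2)
```

The recovered corner frequency then feeds standard stress-drop and radiated-energy estimates; in practice the grid search is done jointly with the moment ratio and checked against the time-domain source pulse, as the abstract describes.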

  7. Modeling runoff and erosion risk in a small steep cultivated watershed using different data sources: from on-site measurements to farmers' perceptions

    NASA Astrophysics Data System (ADS)

    Auvet, B.; Lidon, B.; Kartiwa, B.; Le Bissonnais, Y.; Poussin, J.-C.

    2015-09-01

    This paper presents an approach to model runoff and erosion risk in a context of data scarcity, whereas the majority of available models require large quantities of physical data that are frequently not accessible. To overcome this problem, our approach uses different sources of data, particularly on agricultural practices (tillage and land cover) and farmers' perceptions of runoff and erosion. The model was developed on a small (5 ha) cultivated watershed characterized by extreme conditions (slopes of up to 55 %, extreme rainfall events) on the Merapi volcano in Indonesia. Runoff was modelled using two versions of STREAM. First, a lumped version was used to determine the global parameters of the watershed. Second, a distributed version used three parameters for the production of runoff (slope, land cover and roughness), a precise DEM, and the position of waterways for runoff distribution. This information was derived from field observations and interviews with farmers. Both surface runoff models accurately reproduced runoff at the outlet. However, the distributed model (Nash-Sutcliffe = 0.94) was more accurate than the adjusted lumped model (N-S = 0.85), especially for the smallest and biggest runoff events, and produced accurate spatial distribution of runoff production and concentration. Different types of erosion processes (landslides, linear inter-ridge erosion, linear erosion in main waterways) were modelled as a combination of a hazard map (the spatial distribution of runoff/infiltration volume provided by the distributed model), and a susceptibility map combining slope, land cover and tillage, derived from in situ observations and interviews with farmers. Each erosion risk map gives a spatial representation of the different erosion processes including risk intensities and frequencies that were validated by the farmers and by in situ observations. 
Maps of erosion risk confirmed the impact of the concentration of runoff, the high susceptibility of long steep slopes, and revealed the critical role of tillage direction. Calibrating and validating models using in situ measurements, observations and farmers' perceptions made it possible to represent runoff and erosion risk despite the initial scarcity of hydrological data. Even if the models mainly provided orders of magnitude and qualitative information, they significantly improved our understanding of the watershed dynamics. In addition, the information produced by such models is easy for farmers to use to manage runoff and erosion by using appropriate agricultural practices.
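    The Nash-Sutcliffe efficiency used above to score the two models is straightforward to compute; a minimal sketch:

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe model efficiency: 1 is a perfect fit, 0 means the
    model is no better than predicting the mean of the observations,
    and negative values mean it is worse."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot
```

On this scale the distributed model's 0.94 versus the lumped model's 0.85 reflects a substantially smaller residual sum of squares relative to the variance of the observed runoff.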

  8. MO-D-213-07: RadShield: Semi- Automated Calculation of Air Kerma Rate and Barrier Thickness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeLorenzo, M; Wu, D; Rutel, I

    2015-06-15

    Purpose: To develop the first Java-based semi-automated calculation program intended to aid professional radiation shielding design. Air-kerma rate and barrier thickness calculations are performed by implementing NCRP Report 147 formalism into a Graphical User Interface (GUI). The ultimate aim of this newly created software package is to reduce errors and improve radiographic and fluoroscopic room designs over manual approaches. Methods: Floor plans are first imported as images into the RadShield software program. These plans serve as templates for drawing barriers, occupied regions and x-ray tube locations. We have implemented sub-GUIs that allow occupancy factors, design goals, number of patients, primary beam directions, source-to-patient distances and workload distributions to be specified for regions and equipment. Once the user enters the above parameters, the program automatically calculates the air-kerma rate at sampled points beyond all barriers. For each sample point, a corresponding minimum barrier thickness is calculated to meet the design goal. RadShield allows control over preshielding, sample point location and material types. Results: A functional GUI package was developed and tested. Examination of sample walls and source distributions yields a maximum percent difference of less than 0.1% between hand-calculated air-kerma rates and RadShield. Conclusion: The initial results demonstrated that RadShield calculates air-kerma rates and required barrier thicknesses with reliable accuracy and can be used to make radiation shielding design more efficient and accurate. This newly developed approach differs from conventional calculation methods in that it finds air-kerma rates and thickness requirements for many points outside the barriers, stores the information and selects the largest value needed to comply with NCRP Report 147 design goals.
Floor plans, parameters, designs and reports can be saved and accessed later for modification and recalculation. We have confirmed that this software accurately calculates air-kerma rates and required barrier thicknesses for diagnostic radiography and fluoroscopic rooms.
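    The core per-point calculation that NCRP Report 147 formalism prescribes (and that a tool like RadShield automates) can be sketched as below. The Archer-model coefficients alpha, beta and gamma are barrier-material- and beam-quality-specific fits tabulated in the report; the values in the test below are arbitrary placeholders, not tabulated values:

```python
import math

def required_transmission(P, d, K1, N, T):
    """Required weekly barrier transmission, NCRP Report 147 style:
    P: shielding design goal (air kerma per week), d: distance (m),
    K1: unshielded air kerma per patient at 1 m, N: patients per week,
    T: occupancy factor of the protected area."""
    return P * d ** 2 / (K1 * N * T)

def archer_thickness(B, alpha, beta, gamma):
    """Invert the Archer transmission model,
      B(x) = ((1 + beta/alpha) * exp(alpha*gamma*x) - beta/alpha) ** (-1/gamma),
    for the barrier thickness x (in units of 1/alpha)."""
    r = beta / alpha
    return math.log((B ** -gamma + r) / (1.0 + r)) / (alpha * gamma)
```

For example, P = 0.02 mGy/wk at d = 3 m with K1 = 5 mGy, N = 100 patients/wk and T = 1 gives B = 3.6e-4; archer_thickness(B, alpha, beta, gamma) then returns the minimum thickness for the chosen material's fit coefficients, and a tool like RadShield repeats this at many sample points and keeps the largest thickness.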

  9. Influence of the volume and density functions within geometric models for estimating trunk inertial parameters.

    PubMed

    Wicke, Jason; Dumas, Genevieve A

    2010-02-01

    The geometric method combines a volume and a density function to estimate body segment parameters and has the best opportunity for developing the most accurate models. In the trunk, there are many different tissues that greatly differ in density (e.g., bone versus lung). Thus, the density function for the trunk must be particularly sensitive to capture this diversity, such that accurate inertial estimates are possible. Three different models were used to test this hypothesis by estimating trunk inertial parameters of 25 female and 24 male college-aged participants. The outcome of this study indicates that the inertial estimates for the upper and lower trunk are most sensitive to the volume function and not very sensitive to the density function. Although it appears that the uniform density function has a greater influence on inertial estimates in the lower trunk region than in the upper trunk region, this is likely due to the (overestimated) density value used. When geometric models are used to estimate body segment parameters, care must be taken in choosing a model that can accurately estimate segment volumes. Researchers wanting to develop accurate geometric models should focus on the volume function, especially in unique populations (e.g., pregnant or obese individuals).

  10. Dynamic Modeling from Flight Data with Unknown Time Skews

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    2016-01-01

    A method for estimating dynamic model parameters from flight data with unknown time skews is described and demonstrated. The method combines data reconstruction, nonlinear optimization, and equation-error parameter estimation in the frequency domain to accurately estimate both dynamic model parameters and the relative time skews in the data. Data from a nonlinear F-16 aircraft simulation with realistic noise, instrumentation errors, and arbitrary time skews were used to demonstrate the approach. The approach was further evaluated using flight data from a subscale jet transport aircraft, where the measured data were known to have relative time skews. Comparison of modeling results obtained from time-skewed and time-synchronized data showed that the method accurately estimates both dynamic model parameters and relative time skew parameters from flight data with unknown time skews.
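    The way a relative time skew appears in the frequency domain is easy to demonstrate: a pure delay tau contributes a phase of -2*pi*f*tau, so the skew can be recovered from a phase-slope fit. The sketch below illustrates only that idea; the paper's method jointly estimates skews and dynamic model parameters by nonlinear optimization:

```python
import numpy as np

def estimate_time_skew(x, y, dt):
    """Estimate the time skew of channel y relative to channel x from the
    phase slope of the cross-spectrum.  If y(t) = x(t - tau), then
    Y(f) = X(f) * exp(-2j*pi*f*tau), so angle(X * conj(Y)) = 2*pi*f*tau."""
    X = np.fft.rfft(x)
    Y = np.fft.rfft(y)
    f = np.fft.rfftfreq(len(x), dt)
    cross = X * np.conj(Y)
    # keep well-excited bins and skip DC before unwrapping the phase
    mask = (np.abs(X) > 1e-3 * np.abs(X).max()) & (f > 0)
    phase = np.unwrap(np.angle(cross[mask]))
    # least-squares slope of phase versus frequency gives 2*pi*tau
    return np.polyfit(f[mask], phase, 1)[0] / (2.0 * np.pi)
```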

  11. Multivariate analysis of ATR-FTIR spectra for assessment of oil shale organic geochemical properties

    USGS Publications Warehouse

    Washburn, Kathryn E.; Birdwell, Justin E.

    2013-01-01

    In this study, attenuated total reflectance (ATR) Fourier transform infrared spectroscopy (FTIR) was coupled with partial least squares regression (PLSR) analysis to relate spectral data to parameters from total organic carbon (TOC) analysis and programmed pyrolysis to assess the feasibility of developing predictive models to estimate important organic geochemical parameters. The advantage of ATR-FTIR over traditional analytical methods is that source rocks can be analyzed in the laboratory or field in seconds, facilitating more rapid and thorough screening than would be possible using other tools. ATR-FTIR spectra, TOC concentrations and Rock–Eval parameters were measured for a set of oil shales from deposits around the world and several pyrolyzed oil shale samples. PLSR models were developed to predict the measured geochemical parameters from infrared spectra. Application of the resulting models to a set of test spectra excluded from the training set generated accurate predictions of TOC and most Rock–Eval parameters. The critical region of the infrared spectrum for assessing S1, S2, Hydrogen Index and TOC consisted of aliphatic organic moieties (2800–3000 cm−1) and the models generated a better correlation with measured values of TOC and S2 than did integrated aliphatic peak areas. The results suggest that combining ATR-FTIR with PLSR is a reliable approach for estimating useful geochemical parameters of oil shales that is faster and requires less sample preparation than current screening methods.
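    The PLSR step can be illustrated with a minimal PLS1 (NIPALS) implementation for a single response such as TOC; this is a generic sketch, not the authors' implementation:

```python
import numpy as np

def pls1_fit(X, y, n_components):
    """Minimal PLS1 (NIPALS) regression for a single response variable.
    Returns (x_mean, y_mean, coef) with y_hat = y_mean + (X - x_mean) @ coef."""
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xc.T @ yc
        w = w / np.linalg.norm(w)          # weight vector
        t = Xc @ w                         # scores
        tt = t @ t
        p = Xc.T @ t / tt                  # X loadings
        qk = yc @ t / tt                   # y loading
        Xc = Xc - np.outer(t, p)           # deflate
        yc = yc - qk * t
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    coef = W @ np.linalg.solve(P.T @ W, q)
    return x_mean, y_mean, coef

def pls1_predict(model, X):
    x_mean, y_mean, coef = model
    return y_mean + (np.asarray(X, float) - x_mean) @ coef
```

In practice each column of X would hold an ATR-FTIR spectrum and y a measured geochemical parameter, with the number of components chosen by cross-validation on a held-out test set, as the abstract describes.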

  12. A Large-Scale, High-Resolution Hydrological Model Parameter Data Set for Climate Change Impact Assessment for the Conterminous US

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oubeidillah, Abdoul A; Kao, Shih-Chieh; Ashfaq, Moetasim

    2014-01-01

    To extend geographical coverage, refine spatial resolution, and improve modeling efficiency, a computation- and data-intensive effort was conducted to organize a comprehensive hydrologic dataset with post-calibrated model parameters for hydro-climate impact assessment. Several key inputs for hydrologic simulation including meteorologic forcings, soil, land class, vegetation, and elevation were collected from multiple best-available data sources and organized for 2107 hydrologic subbasins (8-digit hydrologic units, HUC8s) in the conterminous United States at a refined 1/24° (~4 km) spatial resolution. Using high-performance computing for intensive model calibration, a high-resolution parameter dataset was prepared for the macro-scale Variable Infiltration Capacity (VIC) hydrologic model. The VIC simulation was driven by DAYMET daily meteorological forcing and was calibrated against USGS WaterWatch monthly runoff observations for each HUC8. The results showed that this new parameter dataset may help reasonably simulate runoff at most US HUC8 subbasins. Based on this exhaustive calibration effort, it is now possible to accurately estimate the resources required for further model improvement across the entire conterminous United States. We anticipate that through this hydrologic parameter dataset, the repeated effort of fundamental data processing can be lessened, so that research efforts can emphasize the more challenging task of assessing climate change impacts. The pre-organized model parameter dataset will be provided to interested parties to support further hydro-climate impact assessment.

  13. s-wave scattering length of a Gaussian potential

    NASA Astrophysics Data System (ADS)

    Jeszenszki, Peter; Cherny, Alexander Yu.; Brand, Joachim

    2018-04-01

    We provide accurate expressions for the s-wave scattering length for a Gaussian potential well in one, two, and three spatial dimensions. The Gaussian potential is widely used as a pseudopotential in the theoretical description of ultracold-atomic gases, where the s-wave scattering length is a physically relevant parameter. We first describe a numerical procedure to compute the value of the s-wave scattering length from the parameters of the Gaussian, but find that its accuracy is limited in the vicinity of singularities that result from the formation of new bound states. We then derive simple analytical expressions that capture the correct asymptotic behavior of the s-wave scattering length near the bound states. Expressions that are increasingly accurate in wide parameter regimes are found by a hierarchy of approximations that capture an increasing number of bound states. The small number of numerical coefficients that enter these expressions is determined from accurate numerical calculations. The approximate formulas combine the advantages of the numerical and approximate expressions, yielding an accurate and simple description from the weakly to the strongly interacting limit.
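    The numerical procedure for the three-dimensional case can be sketched as follows, assuming the Gaussian well V(r) = -v0*exp(-(r/sigma)**2) and units in which hbar**2/(2m) = 1 (the paper's conventions may differ): integrate the zero-energy radial equation u'' = V(r)*u outward and read the scattering length off the linear asymptote u(r) ~ C*(r - a).

```python
import math

def scattering_length(v0, sigma=1.0, r_max=12.0, n_steps=24000):
    """3D s-wave scattering length of the Gaussian well
    V(r) = -v0 * exp(-(r/sigma)**2), in units with hbar**2/(2m) = 1
    (an assumed convention).  Integrates u'' = V(r) u at zero energy
    with u(0) = 0, u'(0) = 1 by RK4, then uses u(r) ~ C (r - a)."""
    def acc(r, u):
        return -v0 * math.exp(-(r / sigma) ** 2) * u
    h = r_max / n_steps
    r, u, up = 0.0, 0.0, 1.0
    for _ in range(n_steps):
        k1u, k1p = up, acc(r, u)
        k2u, k2p = up + 0.5 * h * k1p, acc(r + 0.5 * h, u + 0.5 * h * k1u)
        k3u, k3p = up + 0.5 * h * k2p, acc(r + 0.5 * h, u + 0.5 * h * k2u)
        k4u, k4p = up + h * k3p, acc(r + h, u + h * k3u)
        u += h * (k1u + 2 * k2u + 2 * k3u + k4u) / 6.0
        up += h * (k1p + 2 * k2p + 2 * k3p + k4p) / 6.0
        r += h
    return r - u / up
```

For a weak well the result approaches the Born value a ≈ -v0*sigma**3*sqrt(pi)/4, and |a| grows (still negative) as the well deepens toward the first bound state, near which the abstract's singularities appear.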

  14. Comparison of seismic waveform inversion results for the rupture history of a finite fault: application to the 1986 North Palm Springs, California, earthquake

    USGS Publications Warehouse

    Hartzell, S.

    1989-01-01

    The July 8, 1986, North Palm Springs earthquake is used as a basis for comparison of several different approaches to the solution for the rupture history of a finite fault. The inversion of different waveform data is considered: both teleseismic P waveforms and local strong ground motion records. Linear parametrizations for slip amplitude are compared with nonlinear parametrizations for both slip amplitude and rupture time. Inversions using both synthetic and empirical Green's functions are considered. In general, accurate Green's functions are more readily calculable for the teleseismic problem, where simple ray theory and flat-layered velocity structures are usually sufficient. However, uncertainties in the variation of t* with frequency are the main limit on the resolution of teleseismic inversions. A set of empirical Green's functions that are well recorded at teleseismic distances could avoid the uncertainties in attenuation. In the inversion of strong motion data, the accurate calculation of propagation path effects other than attenuation is the limiting factor in the resolution of source parameters.

  15. Model independent approach to the single photoelectron calibration of photomultiplier tubes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saldanha, R.; Grandi, L.; Guardincerri, Y.

    2017-08-01

    The accurate calibration of photomultiplier tubes is critical in a wide variety of applications in which it is necessary to know the absolute number of detected photons or precisely determine the resolution of the signal. Conventional calibration methods rely on fitting the photomultiplier response to a low intensity light source with analytical approximations to the single photoelectron distribution, often leading to biased estimates due to the inability to accurately model the full distribution, especially at low charge values. In this paper we present a simple statistical method to extract the relevant single photoelectron calibration parameters without making any assumptions about the underlying single photoelectron distribution. We illustrate the use of this method through the calibration of a Hamamatsu R11410 photomultiplier tube and study the accuracy and precision of the method using Monte Carlo simulations. The method is found to have significantly reduced bias compared to conventional methods and works under a wide range of light intensities, making it suitable for simultaneously calibrating large arrays of photomultiplier tubes.
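    The spirit of such a model-independent calibration can be illustrated with simple spectral moments, here assuming the Poisson occupancy (mean photoelectrons per trigger) is known independently; the paper's actual estimator is more complete, and this sketch is an illustration only:

```python
import numpy as np

def spe_moments(illuminated, dark, occupancy):
    """Moment-based single-photoelectron (SPE) calibration sketch.
    If each trigger records a baseline charge B plus N SPE charges with
    N ~ Poisson(occupancy), then
        E[S]   = E[B] + occupancy * mu1
        Var[S] = Var[B] + occupancy * (var1 + mu1**2),
    independent of the SPE charge shape; solve for mu1 and var1."""
    mu1 = (np.mean(illuminated) - np.mean(dark)) / occupancy
    var1 = (np.var(illuminated) - np.var(dark)) / occupancy - mu1 ** 2
    return mu1, var1
```

Because only means and variances enter, no analytical form (Gaussian or otherwise) is ever assumed for the SPE charge distribution, which is the key property the abstract emphasizes.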

  16. Estimates of Power Plant NOx Emissions and Lifetimes from OMI NO2 Satellite Retrievals

    NASA Technical Reports Server (NTRS)

    de Foy, Benjamin; Lu, Zifeng; Streets, David G.; Lamsal, Lok N.; Duncan, Bryan N.

    2015-01-01

    Isolated power plants with well characterized emissions serve as an ideal test case of methods to estimate emissions using satellite data. In this study we evaluate the Exponentially-Modified Gaussian (EMG) method and the box model method based on mass balance for estimating known NOx emissions from satellite retrievals made by the Ozone Monitoring Instrument (OMI). We consider 29 power plants in the USA which have large NOx plumes that do not overlap with other sources and which have emissions data from the Continuous Emission Monitoring System (CEMS). This enables us to identify constraints required by the methods, such as which wind data to use and how to calculate background values. We found that the lifetimes estimated by the methods are too short to be representative of the chemical lifetime. Instead, we introduce a separate lifetime parameter to account for the discrepancy between estimates using real data and those that theory would predict. In terms of emissions, the EMG method required averages from multiple years to give accurate results, whereas the box model method gave accurate results for individual ozone seasons.
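    The EMG line-density form at the heart of the first method can be written compactly. Fitting it to along-wind NO2 line densities yields an e-folding distance x0, and with wind speed w the effective lifetime is tau = x0/w and the emission rate is E = a*w/x0 for fitted burden a. A sketch of the function (parameter names are illustrative):

```python
import math

def emg_line_density(x, a, x0, mu, sigma, b):
    """Exponentially modified Gaussian along-wind line density:
    an exponential decay with e-folding distance x0, convolved with a
    Gaussian of width sigma centered at mu, plus a constant background b;
    a is the total plume burden."""
    z = (sigma / x0 - (x - mu) / sigma) / math.sqrt(2.0)
    return (a / (2.0 * x0)) * math.exp(
        sigma ** 2 / (2.0 * x0 ** 2) - (x - mu) / x0) * math.erfc(z) + b
```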

  17. Real-Time Three-Dimensional Cell Segmentation in Large-Scale Microscopy Data of Developing Embryos.

    PubMed

    Stegmaier, Johannes; Amat, Fernando; Lemon, William C; McDole, Katie; Wan, Yinan; Teodoro, George; Mikut, Ralf; Keller, Philipp J

    2016-01-25

    We present the Real-time Accurate Cell-shape Extractor (RACE), a high-throughput image analysis framework for automated three-dimensional cell segmentation in large-scale images. RACE is 55-330 times faster and 2-5 times more accurate than state-of-the-art methods. We demonstrate the generality of RACE by extracting cell-shape information from entire Drosophila, zebrafish, and mouse embryos imaged with confocal and light-sheet microscopes. Using RACE, we automatically reconstructed cellular-resolution tissue anisotropy maps across developing Drosophila embryos and quantified differences in cell-shape dynamics in wild-type and mutant embryos. We furthermore integrated RACE with our framework for automated cell lineaging and performed joint segmentation and cell tracking in entire Drosophila embryos. RACE processed these terabyte-sized datasets on a single computer within 1.4 days. RACE is easy to use, as it requires adjustment of only three parameters, takes full advantage of state-of-the-art multi-core processors and graphics cards, and is available as open-source software for Windows, Linux, and Mac OS.

  18. Cometary splitting - a source for the Jupiter family?

    NASA Astrophysics Data System (ADS)

    Pittich, E. M.; Rickman, H.

    1994-01-01

    The quest for the origin of the Jupiter family of comets includes investigating the possibility that a large fraction of this population originates from past splitting events. In particular, one suggested scenario, albeit less attractive on physical grounds, maintains that a giant comet breakup is a major source of short-period comets. By simulating such events and integrating the motions of the fictitious fragments in an accurate solar system model for the typical lifetime of Jupiter family comets, it is possible to check whether the outcome may or may not be compatible with the observed orbital distribution. In this paper we present such integrations for a few typical progenitor orbits and analyze the ensuing thermalization process with particular attention to the Tisserand parameters. It is found that the sets of fragments lose their memory of a common origin very rapidly so that, in general terms, it is difficult to use the random appearance of the observed orbital distribution as evidence against the giant comet splitting hypothesis.
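    The Tisserand parameter referred to above is a simple function of the orbital elements; with respect to Jupiter it is T_J = a_J/a + 2*cos(i)*sqrt((a/a_J)*(1 - e**2)), and Jupiter-family comets conventionally have 2 < T_J < 3. A minimal sketch (the a_J value is assumed):

```python
import math

def tisserand(a, e, inc_deg, a_perturber=5.204):
    """Tisserand parameter of an orbit (semimajor axis a in au,
    eccentricity e, inclination in degrees) with respect to a perturbing
    planet; the default a_perturber is Jupiter's ~5.204 au."""
    return (a_perturber / a
            + 2.0 * math.cos(math.radians(inc_deg))
            * math.sqrt(a / a_perturber * (1.0 - e ** 2)))
```

Because T_J is approximately conserved under Jupiter's perturbations, fragments of a common parent start with nearly equal values; the study's point is that subsequent evolution erases this shared signature surprisingly quickly.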

  19. QMCPACK: an open source ab initio quantum Monte Carlo package for the electronic structure of atoms, molecules and solids

    DOE PAGES

    Kim, Jeongnim; Baczewski, Andrew T.; Beaudet, Todd D.; ...

    2018-04-19

    QMCPACK is an open source quantum Monte Carlo package for ab initio electronic structure calculations. It supports calculations of metallic and insulating solids, molecules, atoms, and some model Hamiltonians. Implemented real space quantum Monte Carlo algorithms include variational, diffusion, and reptation Monte Carlo. QMCPACK uses Slater-Jastrow type trial wave functions in conjunction with a sophisticated optimizer capable of optimizing tens of thousands of parameters. The orbital space auxiliary field quantum Monte Carlo method is also implemented, enabling cross validation between different highly accurate methods. The code is specifically optimized for calculations with large numbers of electrons on the latest high performance computing architectures, including multicore central processing unit (CPU) and graphical processing unit (GPU) systems. We detail the program's capabilities, outline its structure, and give examples of its use in current research calculations. The package is available at http://www.qmcpack.org.

  20. In vivo quantitative bioluminescence tomography using heterogeneous and homogeneous mouse models.

    PubMed

    Liu, Junting; Wang, Yabin; Qu, Xiaochao; Li, Xiangsi; Ma, Xiaopeng; Han, Runqiang; Hu, Zhenhua; Chen, Xueli; Sun, Dongdong; Zhang, Rongqing; Chen, Duofang; Chen, Dan; Chen, Xiaoyuan; Liang, Jimin; Cao, Feng; Tian, Jie

    2010-06-07

    Bioluminescence tomography (BLT) is a new optical molecular imaging modality, which can monitor both physiological and pathological processes by using bioluminescent light-emitting probes in small living animals. In particular, this technology possesses great potential in drug development, early detection, and therapy monitoring in preclinical settings. In the present study, we developed a dual-modality BLT prototype system with a Micro-computed tomography (MicroCT) registration approach, and improved the quantitative reconstruction algorithm based on an adaptive hp finite element method (hp-FEM). Detailed comparisons of source reconstruction between the heterogeneous and homogeneous mouse models were performed. The models include mice with an implanted luminescence source and tumor-bearing mice with a firefly luciferase reporter gene. Our data suggest that reconstruction based on the heterogeneous mouse model is more accurate in localization and quantification than the homogeneous mouse model with appropriate optical parameters, and that BLT allows super-early tumor detection in vivo based on tomographic reconstruction of the heterogeneous mouse model signal.

  1. Modeling, Analysis, and Impedance Design of Battery Energy Stored Single-Phase Quasi-Z Source Photovoltaic Inverter System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xue, Yaosuo

    The battery energy stored quasi-Z-source (BES-qZS) based photovoltaic (PV) power generation system combines advantages of the qZS inverter and the battery energy storage system. However, the second harmonic (2ω) power ripple will degrade the system's performance and affect the system's design. An accurate model to analyze the 2ω ripple is therefore very important. The existing models did not consider the battery and assumed L1 = L2 and C1 = C2, which leads to a non-optimized design of the impedance parameters of the qZS network. This paper proposes a comprehensive model for the single-phase BES-qZS-PV inverter system, in which the battery is included and no restriction is placed on L1, L2, C1, and C2. A BES-qZS impedance design method based on the built model is proposed to mitigate the 2ω ripple. Simulation and experimental results verify the proposed 2ω ripple model and design method.

  2. The Fast Scattering Code (FSC): Validation Studies and Program Guidelines

    NASA Technical Reports Server (NTRS)

    Tinetti, Ana F.; Dunn, Mark H.

    2011-01-01

    The Fast Scattering Code (FSC) is a frequency domain noise prediction program developed at the NASA Langley Research Center (LaRC) to simulate the acoustic field produced by the interaction of known, time harmonic incident sound with bodies of arbitrary shape and surface impedance immersed in a potential flow. The code uses the equivalent source method (ESM) to solve an exterior 3-D Helmholtz boundary value problem (BVP) by expanding the scattered acoustic pressure field into a series of point sources distributed on a fictitious surface placed inside the actual scatterer. This work provides additional code validation studies and illustrates the range of code parameters that produce accurate results with minimal computational cost. Systematic noise prediction studies are presented in which monopole-generated incident sound is scattered by simple geometric shapes: spheres (acoustically hard and soft surfaces), oblate spheroids, a flat disk, and flat plates with various edge topologies. Comparisons of FSC simulations with analytical results and experimental data are presented.
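The core of the equivalent source method described above can be sketched in a few lines: expand the scattered field in free-space point sources placed on a fictitious interior surface, then solve a least-squares system that enforces the boundary condition at collocation points on the scatterer. The geometry, wavenumber, source counts, and sound-soft boundary below are illustrative assumptions, not the FSC's actual configuration.

```python
import numpy as np

def fibonacci_sphere(n, radius):
    """Roughly uniform points on a sphere of the given radius."""
    i = np.arange(n)
    phi = np.arccos(1.0 - 2.0 * (i + 0.5) / n)
    theta = np.pi * (1.0 + 5.0**0.5) * i
    return radius * np.column_stack(
        [np.sin(phi) * np.cos(theta), np.sin(phi) * np.sin(theta), np.cos(phi)])

def monopole(src, pts, k):
    """Free-space Helmholtz point source (Green's function) evaluated at pts."""
    r = np.linalg.norm(pts - src, axis=-1)
    return np.exp(1j * k * r) / (4.0 * np.pi * r)

k = 2.0                                    # wavenumber (ka = 2 for a unit sphere)
boundary = fibonacci_sphere(96, 1.0)       # collocation points on the scatterer
sources = fibonacci_sphere(32, 0.5)        # equivalent sources on a fictitious inner surface
incident = monopole(np.array([0.0, 0.0, 3.0]), boundary, k)

# Sound-soft boundary: total pressure vanishes, so the field radiated by the
# equivalent sources must cancel the incident field at the collocation points.
A = np.column_stack([monopole(s, boundary, k) for s in sources])
amplitudes, *_ = np.linalg.lstsq(A, -incident, rcond=None)
residual = np.linalg.norm(A @ amplitudes + incident) / np.linalg.norm(incident)
```

With the amplitudes in hand, the scattered pressure anywhere outside the body is just the same weighted sum of monopoles, which is what makes the method fast compared with surface-integral solvers.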

  3. Geolocation and Pointing Accuracy Analysis for the WindSat Sensor

    NASA Technical Reports Server (NTRS)

    Meissner, Thomas; Wentz, Frank J.; Purdy, William E.; Gaiser, Peter W.; Poe, Gene; Uliana, Enzo A.

    2006-01-01

    Geolocation and pointing accuracy analyses of the WindSat flight data are presented. The two topics were intertwined in the flight data analysis and are addressed together. WindSat has no unusual geolocation requirements relative to other sensors, but its beam pointing knowledge accuracy is especially critical to support accurate polarimetric radiometry. Pointing accuracy was improved and verified using geolocation analysis in conjunction with scan bias analysis. Two methods were needed to properly identify and differentiate between data time-tagging and pointing knowledge errors. Matchups comparing coastlines indicated in imagery data with their known geographic locations were used to identify geolocation errors. These coastline matchups showed possible pointing errors, with ambiguities as to the true source of the errors. Scan bias analysis of U, the third Stokes parameter, and of the vertical and horizontal polarizations provided measurements of pointing offsets, resolving the ambiguities in the coastline matchup analysis. Several geolocation and pointing bias sources were incrementally eliminated, resulting in pointing knowledge and geolocation accuracy that met all design requirements.

  4. Thermal Damage Analysis in Biological Tissues Under Optical Irradiation: Application to the Skin

    NASA Astrophysics Data System (ADS)

    Fanjul-Vélez, Félix; Ortega-Quijano, Noé; Solana-Quirós, José Ramón; Arce-Diego, José Luis

    2009-07-01

    The use of optical sources in medical practice is increasing. In this study, different approaches using thermo-optical principles that allow us to predict thermal damage in irradiated tissues are analyzed. Optical propagation is studied by means of the radiation transport theory (RTT) equation, solved via a Monte Carlo analysis. The data obtained are included in a bio-heat equation, solved via a numerical finite-difference approach. Optothermal properties are considered so that the model is accurate and reliable. The thermal distribution is calculated as a function of optical source parameters, mainly optical irradiance, wavelength, and exposure time. Two thermal damage models, the cumulative equivalent minutes (CEM) 43 °C approach and the Arrhenius analysis, are used. The former is appropriate when dealing with dosimetry considerations at constant temperature. The latter is adequate to predict thermal damage with arbitrary temperature time dependence. Both models are applied and compared for the particular application of skin thermotherapy irradiation.

  5. Statistics of concentrations due to single air pollution sources to be applied in numerical modelling of pollutant dispersion

    NASA Astrophysics Data System (ADS)

    Tumanov, Sergiu

    A test of goodness of fit based on rank statistics was applied to prove the applicability of the Eggenberger-Polya discrete probability law to hourly SO2 concentrations measured in the vicinity of single sources. To this end, the pollutant concentration was treated as an integer quantity, which is acceptable if the unit of measurement is properly chosen (in this case μg m⁻³) and if account is taken of the limited accuracy of the measurements. The results of the test being satisfactory, even in the range of upper quantiles, the Eggenberger-Polya law was used in association with numerical modelling to estimate statistical parameters, e.g. quantiles and cumulative probabilities of threshold concentrations being exceeded, at the grid points of a network covering the area of interest. This requires only accurate estimates of the means and variances of the concentration series, which can readily be obtained through routine air pollution dispersion modelling.
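The Eggenberger-Polya law is the negative binomial (Polya) distribution, so the workflow described above can be sketched directly: match its two parameters to a modeled mean and variance by the method of moments, then read off quantiles and exceedance probabilities. The concentration numbers below are hypothetical, and the sketch assumes SciPy's `nbinom` (which accepts a real-valued shape parameter).

```python
from scipy.stats import nbinom

def polya_from_moments(mean, var):
    """Negative binomial (Eggenberger-Polya) law matched to a mean and variance.

    Method of moments: var = mean + mean**2 / r, hence
    r = mean**2 / (var - mean) and p = r / (r + mean).
    Requires var > mean (overdispersed counts).
    """
    r = mean**2 / (var - mean)
    p = r / (r + mean)
    return nbinom(r, p)

# Hypothetical grid-point statistics from a dispersion model:
# mean and variance of hourly SO2 concentration in integer ug/m^3 units.
dist = polya_from_moments(40.0, 160.0)
q95 = dist.ppf(0.95)        # 95% quantile of the hourly concentration
p_exceed = dist.sf(80)      # probability that an 80 ug/m^3 threshold is exceeded
```

This mirrors the paper's point that once routine dispersion modelling supplies means and variances at each grid point, the full concentration statistics follow from the fitted law.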

  6. QMCPACK: an open source ab initio quantum Monte Carlo package for the electronic structure of atoms, molecules and solids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Jeongnim; Baczewski, Andrew T.; Beaudet, Todd D.

    QMCPACK is an open source quantum Monte Carlo package for ab initio electronic structure calculations. It supports calculations of metallic and insulating solids, molecules, atoms, and some model Hamiltonians. Implemented real-space quantum Monte Carlo algorithms include variational, diffusion, and reptation Monte Carlo. QMCPACK uses Slater-Jastrow type trial wave functions in conjunction with a sophisticated optimizer capable of optimizing tens of thousands of parameters. The orbital-space auxiliary-field quantum Monte Carlo method is also implemented, enabling cross validation between different highly accurate methods. The code is specifically optimized for calculations with large numbers of electrons on the latest high-performance computing architectures, including multicore central processing unit (CPU) and graphical processing unit (GPU) systems. We detail the program's capabilities, outline its structure, and give examples of its use in current research calculations. The package is available at http://www.qmcpack.org.

  7. Quantitative evaluation of software packages for single-molecule localization microscopy.

    PubMed

    Sage, Daniel; Kirshner, Hagai; Pengo, Thomas; Stuurman, Nico; Min, Junhong; Manley, Suliana; Unser, Michael

    2015-08-01

    The quality of super-resolution images obtained by single-molecule localization microscopy (SMLM) depends largely on the software used to detect and accurately localize point sources. In this work, we focus on the computational aspects of super-resolution microscopy and present a comprehensive evaluation of localization software packages. Our philosophy is to evaluate each package as a whole, thus maintaining the integrity of the software. We prepared synthetic data that represent three-dimensional structures modeled after biological components, taking excitation parameters, noise sources, point-spread functions and pixelation into account. We then asked developers to run their software on our data; most responded favorably, allowing us to present a broad picture of the methods available. We evaluated their results using quantitative and user-interpretable criteria: detection rate, accuracy, quality of image reconstruction, resolution, software usability and computational resources. These metrics reflect the various tradeoffs of SMLM software packages and help users to choose the software that fits their needs.

  8. Magnetoencephalography recording and analysis.

    PubMed

    Velmurugan, Jayabal; Sinha, Sanjib; Satishchandra, Parthasarathy

    2014-03-01

    Magnetoencephalography (MEG) non-invasively measures the magnetic field generated by the excitatory postsynaptic electrical activity of the apical dendritic pyramidal cells. Such a tiny magnetic field is measured with biomagnetometer sensors coupled to Superconducting Quantum Interference Devices (SQUIDs) inside a magnetically shielded room (MSR). Subjects are usually screened for the presence of ferromagnetic materials, and then the head position indicator coils, electroencephalography (EEG) electrodes (if measured simultaneously), and fiducials are digitized using a 3D digitizer, which aids in movement correction and in transferring the MEG data from head coordinates to device and voxel coordinates, thereby enabling more accurate co-registration and localization. MEG data pre-processing involves filtering the data for environmental and subject interferences, and artefact identification and rejection. Magnetic resonance imaging (MRI) is processed for correction and for identifying fiducials. After choosing and computing the appropriate head model (spherical or realistic; boundary/finite element model), the interictal/ictal epileptiform discharges are selected and modeled by an appropriate source modeling technique (clinically, the most commonly used is the single equivalent current dipole (ECD) model). The ECD source localization of the modeled interictal epileptiform discharge (IED) is considered physiologically valid or acceptable based on waveform morphology, isofield pattern, and dipole parameters (localization, dipole moment, confidence volume, goodness of fit). Thus, MEG source localization can aid clinicians in sublobar localization, lateralization, and grid placement by evoking the irritative/seizure onset zone. It also accurately localizes eloquent cortex, such as visual and language areas. MEG also aids in diagnosing and delineating multiple novel findings in other neuropsychiatric disorders, including Alzheimer's disease, Parkinsonism, traumatic brain injury, autistic disorders, and so on.

  9. Petermann I and II spot size: Accurate semi analytical description involving Nelder-Mead method of nonlinear unconstrained optimization and three parameter fundamental modal field

    NASA Astrophysics Data System (ADS)

    Roy Choudhury, Raja; Roy Choudhury, Arundhati; Kanti Ghose, Mrinal

    2013-01-01

    A semi-analytical model with three optimizing parameters and a novel non-Gaussian function as the fundamental modal field solution has been proposed to predict various propagation parameters of graded-index fibers accurately, with less computational burden than numerical methods. In our semi-analytical formulation, the core parameter U, whose objective function is usually uncertain, noisy, or even discontinuous, is optimized by the Nelder-Mead method of nonlinear unconstrained minimization, an efficient and compact direct search method that needs no derivative information. Three optimizing parameters are included in the formulation of the fundamental modal field of an optical fiber to make it more flexible and accurate than other available approximations. Employing a variational technique, the Petermann I and II spot sizes have been evaluated for triangular- and trapezoidal-index fibers with the proposed fundamental modal field. It has been demonstrated that the results of the proposed solution match the numerical results over a wide range of normalized frequencies. This approximation can also be used in the study of doped and nonlinear fiber amplifiers.
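The paper's actual modal-field objective is not reproduced here, but the kind of derivative-free Nelder-Mead minimization it relies on is a one-liner with SciPy. The sketch below uses the classic Rosenbrock function as a stand-in cost; only function values are evaluated, exactly the property that makes the method suitable for noisy or discontinuous objectives like the core parameter U.

```python
import numpy as np
from scipy.optimize import minimize

def objective(x):
    """Stand-in for a noisy, derivative-free cost (here the Rosenbrock function)."""
    return (1.0 - x[0])**2 + 100.0 * (x[1] - x[0]**2)**2

# Nelder-Mead is a direct search: it needs no gradients, only function values,
# so it tolerates objectives that are non-smooth or only available numerically.
result = minimize(objective, x0=[1.3, 0.7], method="Nelder-Mead",
                  options={"xatol": 1e-8, "fatol": 1e-8, "maxiter": 2000})
```

`result.x` converges to the minimizer `[1, 1]` without any derivative information being supplied.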

  10. Remote Sensing of Aerosol and their Radiative Properties from the MODIS Instrument on EOS-Terra Satellite: First Results and Evaluation

    NASA Technical Reports Server (NTRS)

    Kaufman, Yoram; Tanre, Didier; Remer, Lorraine; Holben, Brent; Lau, William K.-M. (Technical Monitor)

    2001-01-01

    The MODIS instrument was launched on the NASA Terra satellite in December 1999. Since last October, the sensor and the aerosol algorithm have reached maturity and provide global daily retrievals of aerosol optical thickness and properties. MODIS has 36 spectral channels in the visible to IR with resolution down to 250 m, which allows accurate cloud screening and multi-spectral aerosol retrievals. We derive the aerosol optical thickness over the ocean and most land areas, distinguishing between fine (mainly man-made) and coarse aerosol particles. The information is more precise over the ocean, where we also derive the effective radius and scattering asymmetry parameter of the aerosol. New methods to derive the aerosol single scattering albedo are also being developed. These measurements are used to track different aerosol sources, transport, and the radiative forcing at the top and bottom of the atmosphere. The AErosol RObotic NETwork of ground-based radiometers is used for global validation of the satellite-derived optical thickness, size parameters, and single scattering albedo, and to measure additional aerosol parameters that cannot be derived from space.

  11. Implementation and application of an interactive user-friendly validation software for RADIANCE

    NASA Astrophysics Data System (ADS)

    Sundaram, Anand; Boonn, William W.; Kim, Woojin; Cook, Tessa S.

    2012-02-01

    RADIANCE extracts CT dose parameters from dose sheets using optical character recognition and stores the data in a relational database. To facilitate validation of RADIANCE's performance, a simple user interface was initially implemented and about 300 records were evaluated. Here, we extend this interface to achieve a wider variety of functions and perform a larger-scale validation. The validator uses some data from the RADIANCE database to prepopulate quality-testing fields, such as correspondence between calculated and reported total dose-length product. The interface also displays relevant parameters from the DICOM headers. A total of 5,098 dose sheets were used to test the performance accuracy of RADIANCE in dose data extraction. Several search criteria were implemented. All records were searchable by accession number, study date, or dose parameters beyond chosen thresholds. Validated records were searchable according to additional criteria from validation inputs. An error rate of 0.303% was demonstrated in the validation. Dose monitoring is increasingly important and RADIANCE provides an open-source solution with a high level of accuracy. The RADIANCE validator has been updated to enable users to test the integrity of their installation and verify that their dose monitoring is accurate and effective.

  12. Monte Carlo simulation of electrothermal atomization on a desktop personal computer

    NASA Astrophysics Data System (ADS)

    Histen, Timothy E.; Güell, Oscar A.; Chavez, Iris A.; Holcombe, James A.

    1996-07-01

    Monte Carlo simulations have been applied to electrothermal atomization (ETA) using a tubular atomizer (e.g. a graphite furnace) because of the complexity of the geometry, heating, molecular interactions, etc. The intense computational time needed to model ETA accurately often limited its effective implementation to supercomputers. With the advent of more powerful desktop processors, this is no longer the case. A C-based program has been developed that runs under Windows™ or DOS. With this program, basic parameters such as furnace dimensions, sample placement, and furnace heating, as well as kinetic parameters such as activation energies for desorption and adsorption, can be varied to show the dependence of the absorbance profile on these parameters. Even data such as the time-dependent spatial distribution of analyte inside the furnace can be collected. The DOS version also permits input of external temperature-time data to allow comparison of simulated profiles with experimentally obtained absorbance data. The run-time versions are provided along with the source code. This article is an electronic publication in Spectrochimica Acta Electronica (SAE), the electronic section of Spectrochimica Acta Part B (SAB). The hardcopy text is accompanied by a diskette with a program (PC format), data files and text files.

  13. Application of a data assimilation method via an ensemble Kalman filter to reactive urea hydrolysis transport modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Juxiu Tong; Bill X. Hu; Hai Huang

    2014-03-01

    With the growing importance of water resources across the world, remediation of anthropogenic contamination involving reactive solute transport becomes even more important. A good understanding of reaction rate parameters, such as kinetic parameters, is the key to accurately predicting reactive solute transport processes and designing corresponding remediation schemes. In modeling reactive solute transport, it is very difficult to estimate chemical reaction rate parameters because of the complexity of chemical reaction processes and the limited available data. To obtain the reaction rate parameters for reactive urea hydrolysis transport modeling and more accurate predictions of the chemical concentrations, we developed a data assimilation method based on an ensemble Kalman filter (EnKF) to calibrate the reaction rate parameters for modeling urea hydrolysis transport in a synthetic one-dimensional laboratory-scale column and to update the model predictions. We applied a constrained EnKF method that imposes constraints on the updated reaction rate parameters and the predicted solute concentrations, based on their physical meanings, after the data assimilation calibration. From the study results we concluded that the data assimilation method via the EnKF efficiently improves the chemical reaction rate parameters and, at the same time, the solute concentration predictions. The more data we assimilated, the more accurate the reaction rate parameters and concentration predictions became. The filter divergence problem was also solved in this study.
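A minimal sketch of the constrained EnKF parameter update described above, with a toy first-order-decay forward model standing in for the urea hydrolysis chemistry (the rate, noise level, and ensemble size are all hypothetical): each ensemble member's parameter is shifted toward its own perturbed observation by a Kalman gain built from ensemble covariances, and a physical constraint (non-negative rates) is applied afterwards.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(k_rate, t=1.0, c0=1.0):
    """Toy forward model: first-order decay, concentration after time t."""
    return c0 * np.exp(-k_rate * t)

true_k, obs_sd = 0.8, 0.01
observed = forward(true_k) + rng.normal(0.0, obs_sd)   # synthetic measurement

n = 200
ens_k = rng.normal(0.5, 0.3, n)        # prior ensemble of rate parameters
ens_y = forward(ens_k)                 # ensemble of predicted concentrations

# EnKF update: gain = cross-covariance / (prediction variance + obs noise);
# each member is pulled toward its own perturbed copy of the observation.
gain = np.cov(ens_k, ens_y)[0, 1] / (np.var(ens_y, ddof=1) + obs_sd**2)
perturbed_obs = observed + rng.normal(0.0, obs_sd, n)
ens_k_post = ens_k + gain * (perturbed_obs - ens_y)

# Constrained EnKF flavor: project updated parameters back to physical range.
ens_k_post = np.clip(ens_k_post, 0.0, None)
```

After the update, the ensemble mean moves toward the true rate and the ensemble spread shrinks, which is exactly the behavior the paper exploits as more data are assimilated.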

  14. Estimating stellar effective temperatures and detected angular parameters using stochastic particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Zhang, Chuan-Xin; Yuan, Yuan; Zhang, Hao-Wei; Shuai, Yong; Tan, He-Ping

    2016-09-01

    Considering features of stellar spectral radiation and sky surveys, we established a computational model for stellar effective temperatures, detected angular parameters and gray rates. Using known stellar flux data in some bands, we estimated stellar effective temperatures and detected angular parameters using stochastic particle swarm optimization (SPSO). We first verified the reliability of SPSO and then determined reasonable parameters that produced highly accurate estimates under certain gray deviation levels. Finally, we calculated 177 860 stellar effective temperatures and detected angular parameters using data from the Midcourse Space Experiment (MSX) catalog. The derived stellar effective temperatures proved accurate when compared with known values from the literature. This research makes full use of catalog data and presents an original technique for studying stellar characteristics. It proposes a novel method for calculating stellar effective temperatures and detected angular parameters, and provides theoretical and practical data for studying stellar radiation in any band.

  15. Simple method for quick estimation of aquifer hydrogeological parameters

    NASA Astrophysics Data System (ADS)

    Ma, C.; Li, Y. Y.

    2017-08-01

    Development of simple and accurate methods to determine aquifer hydrogeological parameters is important for groundwater resources assessment and management. To address the problem of estimating aquifer parameters from unsteady pumping-test data, a fitting function for the Theis well function was proposed using a fitting optimization method, and a unitary linear regression equation was then established. The aquifer parameters can be obtained by solving for the coefficients of the regression equation. The application of the proposed method is illustrated using two published data sets. Error statistics and analysis of the pumping drawdowns show that the method proposed in this paper yields quick and accurate estimates of the aquifer parameters, and that it can reliably identify the aquifer parameters from long-distance observed drawdowns as well as early drawdowns. It is hoped that the proposed method will be helpful to practicing hydrogeologists and hydrologists.
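The paper's specific fitting function is not given, but the same idea of recovering transmissivity T and storativity S from a linear regression on drawdown data can be sketched with the classical Cooper-Jacob straight-line approximation to the Theis well function W(u) = exp1(u). All pumping-test values below are synthetic and hypothetical.

```python
import numpy as np
from scipy.special import exp1   # Theis well function: W(u) = exp1(u)

# Hypothetical test: pumping rate Q (m^3/s), transmissivity T (m^2/s),
# storativity S (-), and observation-well distance r (m).
Q, T_true, S_true, r = 0.01, 0.005, 1.0e-4, 10.0

t = np.logspace(2, 4, 30)                        # late times, so u << 0.01
u = r**2 * S_true / (4.0 * T_true * t)
drawdown = Q / (4.0 * np.pi * T_true) * exp1(u)  # synthetic Theis drawdowns

# Cooper-Jacob: s ~ [Q/(4*pi*T)] * ln(2.25*T*t/(r^2*S)) is linear in ln(t),
# so a unitary linear regression recovers both parameters.
slope, intercept = np.polyfit(np.log(t), drawdown, 1)
T_est = Q / (4.0 * np.pi * slope)                        # from the slope
S_est = 2.25 * T_est / (r**2 * np.exp(intercept / slope))  # from the intercept
```

The slope alone fixes T, and the intercept (through the same regression) fixes S, which mirrors the paper's strategy of reading the aquifer parameters directly off the regression coefficients.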

  16. Exploring NASA Satellite Data with High Resolution Visualization

    NASA Astrophysics Data System (ADS)

    Wei, J. C.; Yang, W.; Johnson, J. E.; Shen, S.; Zhao, P.; Gerasimov, I. V.; Vollmer, B.; Vicente, G. A.; Pham, L.

    2013-12-01

    Satellite data products are important for a wide variety of applications that can bring far-reaching benefits to the science community and the broader society. These benefits are best achieved when the satellite data are well utilized and interpreted, for example as model inputs or for interpreting extreme events (such as volcano eruptions or dust storms). Unfortunately, this is not always the case, despite the abundance and relative maturity of numerous satellite data products provided by NASA and other organizations. Such obstacles may be avoided by providing satellite data as 'images' with accurate pixel-level (Level 2) information, including pixel coverage area delineation and science-team-recommended quality screening for individual geophysical parameters. We will present a prototype service from the Goddard Earth Sciences Data and Information Services Center (GES DISC) supporting various visualization and data access capabilities for satellite Level 2 data (non-aggregated and un-gridded) at high spatial resolution. Functionality will include selecting data sources (e.g., multiple parameters under the same measurement, like NO2 and SO2 from the Ozone Monitoring Instrument (OMI), or the same parameter with different methods of aggregation, like NO2 in the OMNO2G and OMNO2D products), defining area-of-interest and temporal extents, zooming, panning, overlaying, sliding, and data subsetting and reformatting. The portal interface will connect to the backend services with OGC standard-compliant Web Mapping Service (WMS) and Web Coverage Service (WCS) calls. The interface will also be able to connect to other OGC WMS and WCS servers, which will greatly enhance its expandability to integrate additional outside data/map sources.

  17. [Measurement of atomic number of alkali vapor and pressure of buffer gas based on atomic absorption].

    PubMed

    Zheng, Hui-jie; Quan, Wei; Liu, Xiang; Chen, Yao; Lu, Ji-xi

    2015-02-01

    High-sensitivity magnetic measurements can be achieved by utilizing atomic spin manipulation in the spin-exchange-relaxation-free (SERF) regime, which uses an alkali cell as the sensing element. The atomic number density of the alkali vapor and the pressure of the buffer gas are among the most important parameters of the cell and require accurate measurement. A method has been proposed and developed to measure the atomic number density and the pressure based on absorption spectroscopy, by sweeping the absorption line and fitting the experimental data with a Lorentzian profile to obtain both parameters. Because Doppler broadening and pressure broadening are dominated mainly by the temperature of the cell and the pressure of the buffer gas, respectively, this work simulates the error between the peaks of the Lorentzian profile and the Voigt profile caused by both factors. The results indicate that the Doppler broadening contribution is insignificant, with an error of less than 0.015% at 313-513 K for a 4He density of 2 amg, and an error of 0.1% in the presence of 0.6-5 amg at 393 K. We conclude that Doppler broadening can be ignored under the above conditions and that the Lorentzian profile is suitable for fitting the absorption spectrum to obtain both parameters simultaneously. In addition, we discuss the resolution and the instability due to the light source, the wavelength, and the temperature of the cell. We find that the cell temperature, whose uncertainty is two orders of magnitude larger than the instability of the light source and the wavelength, is one of the main factors contributing to the error.
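The Lorentzian line fit at the heart of this method can be sketched with SciPy's `curve_fit`: sweep across the absorption line, then fit amplitude, center, and half-width. The detuning axis, line parameters, and noise level below are illustrative assumptions, not the paper's experimental values; in the real measurement the fitted width tracks the buffer-gas pressure and the fitted depth (with path length and cross-section) gives the alkali number density.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(nu, depth, nu0, hwhm):
    """Pressure-broadened absorption line: Lorentzian profile."""
    return depth * hwhm**2 / ((nu - nu0)**2 + hwhm**2)

rng = np.random.default_rng(1)
nu = np.linspace(-50.0, 50.0, 400)    # detuning axis, GHz (hypothetical sweep)
truth = (0.6, 2.0, 11.0)              # depth, line center, half-width at half max
data = lorentzian(nu, *truth) + rng.normal(0.0, 0.005, nu.size)

# Fit the swept absorption data; p0 is a rough initial guess.
popt, pcov = curve_fit(lorentzian, nu, data, p0=(0.5, 0.0, 8.0))
depth, nu0, hwhm = popt
```

The diagonal of `pcov` also gives the statistical uncertainty of each fitted parameter, which is useful when propagating the instability sources the paper discusses.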

  18. A portable nondestructive detection device of quality and nutritional parameters of meat using Vis/NIR spectroscopy

    NASA Astrophysics Data System (ADS)

    Wang, Wenxiu; Peng, Yankun; Wang, Fan; Sun, Hongwei

    2017-05-01

    Rising living standards have led consumers to pay more attention to the quality and nutrition of meat, so the development of a nondestructive detection device for quality and nutritional parameters is undoubtedly of commercial value. In this research, a portable device equipped with visible (Vis) and near-infrared (NIR) spectrometers, a tungsten halogen lamp, optical fiber, a ring light guide, and an embedded computer was developed to realize simultaneous, fast detection of color (L*, a*, b*), pH, total volatile basic nitrogen (TVB-N), intramuscular fat (IF), protein, and water content in pork. The wavelength ranges of the dual-band spectrometers were 400-1100 nm and 940-1650 nm, respectively, and the tungsten halogen lamp cooperated with the ring light guide to form a ring light source providing appropriate illumination intensity for the sample. Self-developed software controls the dual-band spectrometers, sets spectrometer parameters, acquires and processes the Vis/NIR spectra, and displays the prediction results in real time. To obtain a robust and accurate prediction model, fresh longissimus dorsi meat was purchased and stored in a refrigerator for 12 days to obtain pork samples with different degrees of freshness. In addition, pork from three different parts (longissimus dorsi, haunch, and lean meat) was collected for the determination of IF, protein, and water, so that the reference values spanned a wider distribution range. After acquisition of the Vis/NIR spectra, data from 400-1100 nm were pretreated with a Savitzky-Golay (S-G) filter and standard normal variate transform (SNVT), and data from 940-1650 nm were preprocessed with SNVT. Anomalous samples were eliminated by a Monte Carlo method based on model cluster analysis, and partial least squares regression (PLSR) models based on single bands (400-1100 nm or 940-1650 nm) and on the dual band were then established and compared. The results showed that the optimal models for each parameter had correlation coefficients in the prediction set of 0.9101, 0.9121, 0.8873, 0.9094, 0.9378, 0.9348, 0.9342, and 0.8882, respectively. This indicates that this innovative and practical device can be a promising technology for nondestructive, fast, and accurate detection of quality and nutritional parameters in meat.

  19. Can Consumers Trust Web-Based Information About Celiac Disease? Accuracy, Comprehensiveness, Transparency, and Readability of Information on the Internet

    PubMed Central

    McNally, Shawna L; Donohue, Michael C; Newton, Kimberly P; Ogletree, Sandra P; Conner, Kristen K; Ingegneri, Sarah E

    2012-01-01

    Background Celiac disease is an autoimmune disease that affects approximately 1% of the US population. Disease is characterized by damage to the small intestinal lining and malabsorption of nutrients. Celiac disease is activated in genetically susceptible individuals by dietary exposure to gluten in wheat and gluten-like proteins in rye and barley. Symptoms are diverse and include gastrointestinal and extraintestinal manifestations. Treatment requires strict adherence to a gluten-free diet. The Internet is a major source of health information about celiac disease. Nonetheless, information about celiac disease that is available on various websites often is questioned by patients and other health care professionals regarding its reliability and content. Objectives To determine the accuracy, comprehensiveness, transparency, and readability of information on 100 of the most widely accessed websites that provide information on celiac disease. Methods Using the search term celiac disease, we analyzed 100 of the top English-language websites published by academic, commercial, nonprofit, and other professional (nonacademic) sources for accuracy, comprehensiveness, transparency, and reading grade level. Each site was assessed independently by 3 reviewers. Website accuracy and comprehensiveness were probed independently using a set of objective core information about celiac disease. We used 19 general criteria to assess website transparency. Website readability was determined by the Flesch-Kincaid reading grade level. Results for each parameter were analyzed independently. In addition, we weighted and combined parameters to generate an overall score, termed website quality. Results We included 98 websites in the final analysis. Of these, 47 (48%) provided specific information about celiac disease that was less than 95% accurate (ie, the predetermined cut-off considered a minimum acceptable level of accuracy). 
Independent of whether the information posted was accurate, 51 of 98 (52%) websites contained less than 50% of the core celiac disease information that was considered important for inclusion on websites that provide general information about celiac disease. Academic websites were significantly less transparent (P = .005) than commercial websites in attributing authorship, timeliness of information, sources of information, and other important disclosures. The type of website publisher did not predict website accuracy, comprehensiveness, or overall website quality. Only 4 of 98 (4%) websites achieved an overall quality score of 80 or above, which a priori was set as the minimum score for a website to be judged trustworthy and reliable. Conclusions The information on many websites addressing celiac disease was not sufficiently accurate, comprehensive, and transparent, or presented at an appropriate reading grade level, to be considered sufficiently trustworthy and reliable for patients, health care providers, celiac disease support groups, and the general public. This has the potential to adversely affect decision making about important aspects of celiac disease, including its appropriate and proper diagnosis, treatment, and management. PMID:23611901

  20. Can consumers trust web-based information about celiac disease? Accuracy, comprehensiveness, transparency, and readability of information on the internet.

    PubMed

    McNally, Shawna L; Donohue, Michael C; Newton, Kimberly P; Ogletree, Sandra P; Conner, Kristen K; Ingegneri, Sarah E; Kagnoff, Martin F

    2012-04-04

    Celiac disease is an autoimmune disease that affects approximately 1% of the US population. Disease is characterized by damage to the small intestinal lining and malabsorption of nutrients. Celiac disease is activated in genetically susceptible individuals by dietary exposure to gluten in wheat and gluten-like proteins in rye and barley. Symptoms are diverse and include gastrointestinal and extraintestinal manifestations. Treatment requires strict adherence to a gluten-free diet. The Internet is a major source of health information about celiac disease. Nonetheless, information about celiac disease that is available on various websites often is questioned by patients and other health care professionals regarding its reliability and content. To determine the accuracy, comprehensiveness, transparency, and readability of information on 100 of the most widely accessed websites that provide information on celiac disease. Using the search term celiac disease, we analyzed 100 of the top English-language websites published by academic, commercial, nonprofit, and other professional (nonacademic) sources for accuracy, comprehensiveness, transparency, and reading grade level. Each site was assessed independently by 3 reviewers. Website accuracy and comprehensiveness were probed independently using a set of objective core information about celiac disease. We used 19 general criteria to assess website transparency. Website readability was determined by the Flesch-Kincaid reading grade level. Results for each parameter were analyzed independently. In addition, we weighted and combined parameters to generate an overall score, termed website quality. We included 98 websites in the final analysis. Of these, 47 (48%) provided specific information about celiac disease that was less than 95% accurate (ie, the predetermined cut-off considered a minimum acceptable level of accuracy). 
Independent of whether the information posted was accurate, 51 of 98 (52%) websites contained less than 50% of the core celiac disease information that was considered important for inclusion on websites that provide general information about celiac disease. Academic websites were significantly less transparent (P = .005) than commercial websites in attributing authorship, timeliness of information, sources of information, and other important disclosures. The type of website publisher did not predict website accuracy, comprehensiveness, or overall website quality. Only 4 of 98 (4%) websites achieved an overall quality score of 80 or above, which a priori was set as the minimum score for a website to be judged trustworthy and reliable. The information on many websites addressing celiac disease was not sufficiently accurate, comprehensive, or transparent, nor presented at an appropriate reading grade level, to be considered trustworthy and reliable for patients, health care providers, celiac disease support groups, and the general public. This has the potential to adversely affect decision making about important aspects of celiac disease, including its proper diagnosis, treatment, and management.
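The Flesch-Kincaid reading grade level used above is a fixed formula over word, sentence, and syllable counts. A minimal sketch, using a crude vowel-group syllable heuristic (an assumption; validated readability tools use dictionary-based syllable counters):

```python
import re

def count_syllables(word):
    # Crude heuristic: count vowel groups; subtract a silent final 'e'.
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def flesch_kincaid_grade(text):
    # FK grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / len(sentences) \
        + 11.8 * syllables / len(words) - 15.59
```

Scores around 7-8 correspond to the middle-school reading level commonly recommended for patient-facing health information.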

  1. Accurate acoustic power measurement for low-intensity focused ultrasound using focal axial vibration velocity

    NASA Astrophysics Data System (ADS)

    Tao, Chenyang; Guo, Gepu; Ma, Qingyu; Tu, Juan; Zhang, Dong; Hu, Jimin

    2017-07-01

    Low-intensity focused ultrasound is a form of therapy that can have reversible acoustothermal effects on biological tissue, depending on the exposure parameters. The acoustic power (AP) should be chosen with caution for the sake of safety. To recover the energy of counteracted radial vibrations at the focal point, an accurate AP measurement method using the focal axial vibration velocity (FAVV) is proposed in explicit formulae and is demonstrated experimentally using a laser vibrometer. The experimental APs for two transducers agree well with theoretical calculations and numerical simulations, showing that AP is proportional to the square of the FAVV, with a fixed power gain determined by the physical parameters of the transducers. The favorable results suggest that the FAVV can be used as a valuable parameter for non-contact AP measurement, providing a new strategy for accurate power control for low-intensity focused ultrasound in biomedical engineering.
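The reported proportionality can be written AP = G * v_f^2, with v_f the FAVV and G a power gain fixed by the physical parameters of the transducer. A trivial sketch (the gain value in the test is hypothetical, not taken from the paper):

```python
def acoustic_power(v_focal, gain):
    # AP = G * v^2: acoustic power scales with the square of the
    # focal axial vibration velocity, with a transducer-specific gain G.
    return gain * v_focal ** 2
```

Doubling the measured FAVV implies a fourfold increase in delivered power, which is why an accurate velocity measurement translates directly into accurate power control.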

  2. An automated method of tuning an attitude estimator

    NASA Technical Reports Server (NTRS)

    Mason, Paul A. C.; Mook, D. Joseph

    1995-01-01

Attitude determination is a major element of the operation and maintenance of a spacecraft. There are several existing methods of determining the attitude of a spacecraft. One of the most commonly used methods utilizes the Kalman filter to estimate the attitude of the spacecraft. Given an accurate model of a system and adequate observations, a Kalman filter can produce accurate estimates of the attitude. If the system model, filter parameters, or observations are inaccurate, the attitude estimates may be degraded. Therefore, it is advantageous to develop a method of automatically tuning the Kalman filter to produce accurate estimates. In this paper, a three-axis attitude determination Kalman filter, which uses only magnetometer measurements, is developed and tested using real data. The appropriate filter parameters are found via the Process Noise Covariance Estimator (PNCE). The PNCE provides an optimal criterion for determining the best filter parameters.
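The role of the parameters the PNCE tunes can be seen in a minimal scalar Kalman filter (a sketch of the general mechanism, not the paper's three-axis magnetometer filter): the process noise covariance Q and measurement noise covariance R jointly set the gain, and a poorly chosen Q degrades the estimate.

```python
def kalman_step(x, P, z, Q, R):
    # One predict/update cycle for a scalar random-walk state.
    P = P + Q                    # predict: covariance grows by process noise
    K = P / (P + R)              # Kalman gain balances model vs. measurement
    x = x + K * (z - x)          # update state with the innovation
    P = (1.0 - K) * P            # update covariance
    return x, P
```

Automatic tuning adjusts Q (and R) so that the innovations z - x are statistically consistent with the filter's predicted covariance.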

  3. Laser diode absorption spectroscopy for accurate CO(2) line parameters at 2 microm: consequences for space-based DIAL measurements and potential biases.

    PubMed

    Joly, Lilian; Marnas, Fabien; Gibert, Fabien; Bruneau, Didier; Grouiez, Bruno; Flamant, Pierre H; Durry, Georges; Dumelie, Nicolas; Parvitte, Bertrand; Zéninari, Virginie

    2009-10-10

Space-based active sensing of CO2 concentration is a very promising technique for the derivation of CO2 surface fluxes. There is a need for accurate spectroscopic parameters to enable accurate space-based measurements to address global climatic issues. New spectroscopic measurements using laser diode absorption spectroscopy are presented for the preselected R30 CO2 absorption line ((20^0 1)_III ← (00^0 0) band) and four others. The line strength, air-broadening halfwidth, and its temperature dependence have been investigated. The results exhibit significant improvement for the R30 CO2 absorption line: 0.4% on the line strength, 0.15% on the air-broadening coefficient, and 0.45% on its temperature dependence. Analyses of potential biases in space-based DIAL CO2 mixing ratio measurements associated with spectroscopic parameter uncertainties are presented.

  4. Parameter sensitivity analysis of a 1-D cold region lake model for land-surface schemes

    NASA Astrophysics Data System (ADS)

    Guerrero, José-Luis; Pernica, Patricia; Wheater, Howard; Mackay, Murray; Spence, Chris

    2017-12-01

Lakes might be sentinels of climate change, but the uncertainty in their main feedback to the atmosphere - heat-exchange fluxes - is often not considered within climate models. Additionally, these fluxes are seldom measured, hindering critical evaluation of model output. Analysis of the Canadian Small Lake Model (CSLM), a one-dimensional integral lake model, was performed to assess its ability to reproduce diurnal and seasonal variations in heat fluxes and the sensitivity of simulated fluxes to changes in model parameters, i.e., turbulent transport parameters and the light extinction coefficient (Kd). A C++ open-source software package, Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), was used to perform sensitivity analysis (SA) and identify the parameters that dominate model behavior. The generalized likelihood uncertainty estimation (GLUE) was applied to quantify the fluxes' uncertainty, comparing daily-averaged eddy-covariance observations to the output of CSLM. Seven qualitative and two quantitative SA methods were tested, and the posterior likelihoods of the modeled parameters, obtained from the GLUE analysis, were used to determine the dominant parameters and the uncertainty in the modeled fluxes. Despite the ubiquity of the equifinality issue - different parameter-value combinations yielding equivalent results - the result was unequivocal: Kd, a measure of how much light penetrates the lake, dominates sensible and latent heat fluxes, and the uncertainty in their estimates is strongly related to the accuracy with which Kd is determined. This is important since accurate and continuous measurements of Kd could reduce modeling uncertainty.
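The GLUE procedure weights "behavioral" parameter sets by a likelihood measure computed from model-observation mismatch. A toy sketch with a hypothetical inverse-error likelihood (the study's actual likelihood function may differ):

```python
def glue_weights(param_sets, simulate, observed, threshold):
    # Score each parameter set by an inverse-error likelihood, discard
    # non-behavioral sets below the threshold, and normalize the rest.
    likes = []
    for p in param_sets:
        err = sum((simulate(p, t) - o) ** 2 for t, o in enumerate(observed))
        likes.append(1.0 / (1.0 + err))
    keep = [(p, l) for p, l in zip(param_sets, likes) if l >= threshold]
    total = sum(l for _, l in keep)
    return [(p, l / total) for p, l in keep]
```

Weighted quantiles of the behavioral runs' predictions then give the flux uncertainty bounds.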

  5. 3D Reconstruction and Approximation of Vegetation Geometry for Modeling of Within-canopy Flows

    NASA Astrophysics Data System (ADS)

    Henderson, S. M.; Lynn, K.; Lienard, J.; Strigul, N.; Mullarney, J. C.; Norris, B. K.; Bryan, K. R.

    2016-02-01

Aquatic vegetation can shelter coastlines from waves and currents, sometimes resulting in accretion of fine sediments. We developed a photogrammetric technique for estimating the key geometric vegetation parameters that are required for modeling of within-canopy flows. Accurate estimates of vegetation geometry and density are essential to refine hydrodynamic models, but accurate, convenient, and time-efficient methodologies for measuring complex canopy geometries have been lacking. The novel approach presented here builds on recent progress in photogrammetry and computer vision. We analyzed the geometry of aerial mangrove roots, called pneumatophores, in Vietnam's Mekong River Delta. Although comparatively thin, pneumatophores are more numerous than mangrove trunks, and thus influence near-bed flow and sediment transport. Quadrats (1 m²) were placed at low tide among pneumatophores. Roots were counted and measured for height and diameter. Photos were taken from multiple angles around each quadrat. Relative camera locations and orientations were estimated from key features identified in multiple images using open-source software (VisualSfM). Next, a dense 3D point cloud was produced. Finally, algorithms were developed for automated estimation of pneumatophore geometry from the 3D point cloud. We found good agreement between hand-measured and photogrammetric estimates of key geometric parameters, including mean stem diameter, total number of stems, and frontal area density. These methods can reduce time spent measuring in the field, thereby enabling future studies to refine models of water flows and sediment transport within heterogeneous vegetation canopies.
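One way to automate stem-diameter estimation from the point cloud is to slice it horizontally and fit a circle to each stem's cross-section. The sketch below uses an algebraic (Kasa) least-squares circle fit; this particular algorithm is an assumption, since the abstract does not specify the method used.

```python
import numpy as np

def fit_stem_diameter(points):
    # Kasa fit: x^2 + y^2 = 2*cx*x + 2*cy*y + c is linear in (cx, cy, c).
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx ** 2 + cy ** 2)   # since c = r^2 - cx^2 - cy^2
    return 2 * r
```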

  6. Dark Energy Survey Year 1 Results: redshift distributions of the weak-lensing source galaxies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoyle, B.; Gruen, D.; Bernstein, G. M.

We describe the derivation and validation of redshift distribution estimates and their uncertainties for the galaxies used as weak lensing sources in the Dark Energy Survey (DES) Year 1 cosmological analyses. The Bayesian Photometric Redshift (BPZ) code is used to assign galaxies to four redshift bins between z=0.2 and 1.3, and to produce initial estimates of the lensing-weighted redshift distributions $n^i_{PZ}(z)$ for bin i. Accurate determination of cosmological parameters depends critically on knowledge of $n^i$ but is insensitive to bin assignments or redshift errors for individual galaxies. The cosmological analyses allow for shifts $n^i(z)=n^i_{PZ}(z-\Delta z^i)$ to correct the mean redshift of $n^i(z)$ for biases in $n^i_{\rm PZ}$. The $\Delta z^i$ are constrained by comparison of independently estimated 30-band photometric redshifts of galaxies in the COSMOS field to BPZ estimates made from the DES griz fluxes, for a sample matched in fluxes, pre-seeing size, and lensing weight to the DES weak-lensing sources. In companion papers, the $\Delta z^i$ are further constrained by the angular clustering of the source galaxies around red galaxies with secure photometric redshifts at 0.15
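The correction $n^i(z)=n^i_{PZ}(z-\Delta z^i)$ is a rigid translation of the bin's redshift distribution: every source redshift, and hence the distribution's mean, moves by the same $\Delta z^i$. A minimal sketch in terms of samples:

```python
def shift_nz(z_samples, delta_z):
    # n(z) = n_PZ(z - dz) as a distribution is equivalent to adding dz
    # to every sample drawn from n_PZ.
    return [z + delta_z for z in z_samples]

def mean_z(z_samples):
    return sum(z_samples) / len(z_samples)
```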

  7. Dark Energy Survey Year 1 Results: redshift distributions of the weak-lensing source galaxies

    DOE PAGES

    Hoyle, B.; Gruen, D.; Bernstein, G. M.; ...

    2018-04-18

We describe the derivation and validation of redshift distribution estimates and their uncertainties for the galaxies used as weak lensing sources in the Dark Energy Survey (DES) Year 1 cosmological analyses. The Bayesian Photometric Redshift (BPZ) code is used to assign galaxies to four redshift bins between z=0.2 and 1.3, and to produce initial estimates of the lensing-weighted redshift distributions $n^i_{PZ}(z)$ for bin i. Accurate determination of cosmological parameters depends critically on knowledge of $n^i$ but is insensitive to bin assignments or redshift errors for individual galaxies. The cosmological analyses allow for shifts $n^i(z)=n^i_{PZ}(z-\Delta z^i)$ to correct the mean redshift of $n^i(z)$ for biases in $n^i_{\rm PZ}$. The $\Delta z^i$ are constrained by comparison of independently estimated 30-band photometric redshifts of galaxies in the COSMOS field to BPZ estimates made from the DES griz fluxes, for a sample matched in fluxes, pre-seeing size, and lensing weight to the DES weak-lensing sources. In companion papers, the $\Delta z^i$ are further constrained by the angular clustering of the source galaxies around red galaxies with secure photometric redshifts at 0.15

  8. Dark Energy Survey Year 1 Results: Redshift distributions of the weak lensing source galaxies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoyle, B.; et al.

    2017-08-04

We describe the derivation and validation of redshift distribution estimates and their uncertainties for the galaxies used as weak lensing sources in the Dark Energy Survey (DES) Year 1 cosmological analyses. The Bayesian Photometric Redshift (BPZ) code is used to assign galaxies to four redshift bins between z=0.2 and 1.3, and to produce initial estimates of the lensing-weighted redshift distributions $n^i_{PZ}(z)$ for bin i. Accurate determination of cosmological parameters depends critically on knowledge of $n^i$ but is insensitive to bin assignments or redshift errors for individual galaxies. The cosmological analyses allow for shifts $n^i(z)=n^i_{PZ}(z-\Delta z^i)$ to correct the mean redshift of $n^i(z)$ for biases in $n^i_{\rm PZ}$. The $\Delta z^i$ are constrained by comparison of independently estimated 30-band photometric redshifts of galaxies in the COSMOS field to BPZ estimates made from the DES griz fluxes, for a sample matched in fluxes, pre-seeing size, and lensing weight to the DES weak-lensing sources. In companion papers, the $\Delta z^i$ are further constrained by the angular clustering of the source galaxies around red galaxies with secure photometric redshifts at 0.15

  9. Stripline split-ring resonator with integrated optogalvanic sample cell

    NASA Astrophysics Data System (ADS)

    Persson, Anders; Berglund, Martin; Thornell, Greger; Possnert, Göran; Salehpour, Mehran

    2014-04-01

Intracavity optogalvanic spectroscopy (ICOGS) has been proposed as a method for unambiguous detection of rare isotopes. Of particular interest is 14C: detection of extremely low concentrations, in the 1:10^15 range (14C:12C), is relevant to, e.g., radiocarbon dating and pharmaceutical sciences. However, recent reports show that ICOGS suffers from substantial problems with reproducibility. To qualify ICOGS as an analytical method, more stable and reliable plasma generation and signal detection are needed. In our proposed setup, critical parameters have been improved. We have utilized a stripline split-ring resonator microwave-induced microplasma source to excite and sustain the plasma. Such a microplasma source offers several advantages over conventional ICOGS plasma sources. For example, the stripline split-ring resonator concept employs separated plasma generation and signal detection, which enables sensitive detection at stable plasma conditions. The concept also permits in situ observation of the discharge conditions, which was found to improve reproducibility. Unique to the stripline split-ring resonator microplasma source in this study is that the optogalvanic sample cell has been embedded in the device itself. This integration enables improved temperature control and more stable and accurate signal detection. Significant improvements are demonstrated, including reproducibility, signal-to-noise ratio, and precision.

  10. Calibration of the Regional Crustal Waveguide and the Retrieval of Source Parameters Using Waveform Modeling

    NASA Astrophysics Data System (ADS)

    Saikia, C. K.; Woods, B. B.; Thio, H. K.

Regional crustal waveguide calibration is essential to the retrieval of source parameters and the location of smaller (M<4.8) seismic events. This path calibration of regional seismic phases is strongly dependent on the accuracy of hypocentral locations of calibration (or master) events. This information can be difficult to obtain, especially for smaller events. Generally, explosion or quarry blast generated travel-time data with known locations and origin times are useful for developing the path calibration parameters, but in many regions such data sets are scanty or do not exist. We present a method for regional path calibration that is independent of such data, i.e., one based on earthquakes, which is applicable for events down to Mw = 4 and which has successfully been applied in India, central Asia, the western Mediterranean, North Africa, Tibet and the former Soviet Union. These studies suggest that reliably determining depth is essential to establishing accurate epicentral location and origin time for events. We find that the error in source depth does not necessarily trade off only with the origin time for events with poor azimuthal coverage, but with the horizontal location as well, thus resulting in poor epicentral locations. For example, hypocenters for some events in central Asia were found to move from their fixed-depth locations by about 20 km. Such errors in location and depth will propagate into path calibration parameters, particularly with respect to travel times. The modeling of teleseismic depth phases (pP, sP) yields accurate depths for earthquakes down to magnitude Mw = 4.7. This Mw threshold can be lowered to 4 if regional seismograms are used in conjunction with a calibrated velocity structure model to determine depth, with the relative amplitude of the Pnl waves to the surface waves and the interaction of regional sPmP and pPmP phases being good indicators of event depths.
We also found that for deep events a seismic phase that follows an S-wave path to the surface and becomes critical, developing a head wave by S-to-P conversion, is also indicative of depth. The detailed characteristics of this phase are controlled by the crustal waveguide. The key to calibrating regionalized crustal velocity structure is to determine depths for a set of master events by applying the above methods and then by modeling characteristic features that are recorded on the regional waveforms. The regionalization scheme can also incorporate mixed-path crustal waveguide models for cases in which seismic waves traverse two or more distinctly different crustal structures. We also demonstrate that once depths are established, we need only two-station travel-time data to obtain reliable epicentral locations using a new adaptive grid-search technique, which yields locations similar to those determined using travel-time data from local seismic networks with better azimuthal coverage.
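The grid-search location idea can be illustrated with a plain (non-adaptive) search over trial epicenters, assuming straight rays in a constant-velocity medium and a known origin time; these simplifications, and the use of three stations for a unique minimum, are illustrative assumptions rather than the paper's method.

```python
import math

def locate_epicenter(stations, arrivals, v, grid, origin_time=0.0):
    # Pick the grid point minimizing summed squared travel-time residuals.
    best, best_cost = None, float("inf")
    for gx, gy in grid:
        cost = 0.0
        for (sx, sy), t_obs in zip(stations, arrivals):
            t_pred = origin_time + math.hypot(gx - sx, gy - sy) / v
            cost += (t_obs - t_pred) ** 2
        if cost < best_cost:
            best, best_cost = (gx, gy), cost
    return best
```

An adaptive version would refine the grid around the current minimum instead of searching at a single fixed resolution.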

  11. Ionospheric scintillation studies

    NASA Technical Reports Server (NTRS)

    Rino, C. L.; Freemouw, E. J.

    1973-01-01

    The diffracted field of a monochromatic plane wave was characterized by two complex correlation functions. For a Gaussian complex field, these quantities suffice to completely define the statistics of the field. Thus, one can in principle calculate the statistics of any measurable quantity in terms of the model parameters. The best data fits were achieved for intensity statistics derived under the Gaussian statistics hypothesis. The signal structure that achieved the best fit was nearly invariant with scintillation level and irregularity source (ionosphere or solar wind). It was characterized by the fact that more than 80% of the scattered signal power is in phase quadrature with the undeviated or coherent signal component. Thus, the Gaussian-statistics hypothesis is both convenient and accurate for channel modeling work.

  12. Modelling short channel mosfets for use in VLSI

    NASA Technical Reports Server (NTRS)

    Klafter, Alex; Pilorz, Stuart; Polosa, Rosa Loguercio; Ruddock, Guy; Smith, Andrew

    1986-01-01

In an investigation of metal oxide semiconductor field effect transistor (MOSFET) devices, a one-dimensional mathematical model of device dynamics was prepared, from which an accurate and computationally efficient drain current expression could be derived for subsequent parameter extraction. While a critical review revealed weaknesses in existing 1-D models (Pao-Sah, Pierret-Shields, Brews, and Van de Wiele), this new model in contrast was found to allow all the charge distributions to be continuous, to retain the inversion layer structure, and to include the contribution of current from the pinched-off part of the device. The model allows the source and drain to operate in different regimes. Numerical algorithms used for the evaluation of surface potentials in the various models are presented.

  13. One-step model of photoemission from single-crystal surfaces

    DOE PAGES

    Karkare, Siddharth; Wan, Weishi; Feng, Jun; ...

    2017-02-28

In our paper, we present a three-dimensional one-step photoemission model that can be used to calculate the quantum efficiency and momentum distributions of electrons photoemitted from ordered single-crystal surfaces close to the photoemission threshold. Using Ag(111) as an example, we also show that the model can not only calculate the quantum efficiency from the surface state accurately without using any ad hoc parameters, but also provides a theoretical quantitative explanation of the vectorial photoelectric effect. This model in conjunction with other band structure and wave function calculation techniques can be effectively used to screen single-crystal photoemitters for use as electron sources for particle accelerator and ultrafast electron diffraction applications.

  14. Global Precipitation Measurement: Methods, Datasets and Applications

    NASA Technical Reports Server (NTRS)

Tapiador, Francisco; Turk, Francis J.; Petersen, Walt; Hou, Arthur Y.; Garcia-Ortega, Eduardo; Machado, Luiz A. T.; Angelis, Carlos F.; Salio, Paola; Kidd, Chris; Huffman, George J.

    2011-01-01

    This paper reviews the many aspects of precipitation measurement that are relevant to providing an accurate global assessment of this important environmental parameter. Methods discussed include ground data, satellite estimates and numerical models. First, the methods for measuring, estimating, and modeling precipitation are discussed. Then, the most relevant datasets gathering precipitation information from those three sources are presented. The third part of the paper illustrates a number of the many applications of those measurements and databases. The aim of the paper is to organize the many links and feedbacks between precipitation measurement, estimation and modeling, indicating the uncertainties and limitations of each technique in order to identify areas requiring further attention, and to show the limits within which datasets can be used.

  15. Thermal radiation and mass transfer effects on unsteady MHD free convection flow past a vertical oscillating plate

    NASA Astrophysics Data System (ADS)

    Rana, B. M. Jewel; Ahmed, Rubel; Ahmmed, S. F.

    2017-06-01

Unsteady MHD free convection flow past a vertical porous plate in a porous medium with radiation, diffusion-thermo and thermal-diffusion effects, and a heat source is analyzed. The governing non-linear partial differential equations are transformed into dimensionless form using non-dimensional quantities. The resultant dimensionless equations are then solved numerically by applying an efficient, accurate and conditionally stable explicit finite difference scheme, implemented in Compaq Visual Fortran. A stability and convergence analysis has been carried out, and the velocity, temperature, concentration, skin friction, Nusselt number, Sherwood number, streamlines and isotherms are examined. Finally, the effects of the various parameters are presented graphically and discussed qualitatively.
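The conditional stability of an explicit finite-difference scheme can be illustrated on the 1-D heat equation, a far simpler system than the coupled MHD equations above: the forward-time central-space scheme is stable only when r = alpha*dt/dx^2 <= 1/2.

```python
def heat_explicit(u, alpha, dx, dt, steps):
    # Forward-time central-space update with fixed boundary values.
    r = alpha * dt / dx ** 2
    assert r <= 0.5, "explicit scheme unstable for r > 0.5"
    for _ in range(steps):
        interior = [u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1])
                    for i in range(1, len(u) - 1)]
        u = [u[0]] + interior + [u[-1]]
    return u
```

Violating the stability bound makes round-off errors grow without limit, which is why such schemes are called conditionally stable.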

  16. libprofit: Image creation from luminosity profiles

    NASA Astrophysics Data System (ADS)

    Robotham, A. S. G.; Taranu, D.; Tobar, R.

    2016-12-01

    libprofit is a C++ library for image creation based on different luminosity profiles. It offers fast and accurate two-dimensional integration for a useful number of profiles, including Sersic, Core-Sersic, broken-exponential, Ferrer, Moffat, empirical King, point-source and sky, with a simple mechanism for adding new profiles. libprofit provides a utility to read the model and profile parameters from the command-line and generate the corresponding image. It can output the resulting image as text values, a binary stream, or as a simple FITS file. It also provides a shared library exposing an API that can be used by any third-party application. R and Python interfaces are available: ProFit (ascl:1612.004) and PyProfit (ascl:1612.005).
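As an example of the profiles listed, the Sersic law has a simple closed form; the sketch below evaluates it at a single radius using the common b_n ~ 2n - 1/3 approximation (an assumption for illustration; libprofit itself integrates profiles accurately over pixels):

```python
import math

def sersic(r, I_e, r_e, n):
    # I(r) = I_e * exp(-b_n * ((r / r_e)**(1/n) - 1)), where b_n is chosen
    # so that r_e encloses half the total light; b_n ~ 2n - 1/3 for n >~ 1.
    b_n = 2.0 * n - 1.0 / 3.0
    return I_e * math.exp(-b_n * ((r / r_e) ** (1.0 / n) - 1.0))
```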

  17. Towards Online Multiresolution Community Detection in Large-Scale Networks

    PubMed Central

    Huang, Jianbin; Sun, Heli; Liu, Yaguang; Song, Qinbao; Weninger, Tim

    2011-01-01

The investigation of community structure in networks has aroused great interest in multiple disciplines. One of the challenges is to find local communities from a starting vertex in a network without global information about the entire network. The accuracy of many existing methods depends on a priori assumptions about network properties and on predefined parameters. In this paper, we introduce a new quality function of local community and present a fast local expansion algorithm for uncovering communities in large-scale networks. The proposed algorithm can detect multiresolution communities from a source vertex or communities covering the whole network. Experimental results show that the proposed algorithm is efficient and well-behaved in both real-world and synthetic networks. PMID:21887325
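A local expansion algorithm of the general kind described can be sketched as greedy growth from a seed vertex, here with a simple internal-edge-fraction quality function (an illustrative choice, not the authors' quality function): repeatedly add the neighboring vertex that most improves the quality, and stop when no addition helps.

```python
def local_community(adj, seed):
    # Greedy local expansion using only the neighborhood of the community.
    comm = {seed}

    def quality(c):
        internal = boundary = 0
        for u in c:
            for v in adj[u]:
                if v in c:
                    internal += 1   # each internal edge counted twice
                else:
                    boundary += 1
        total = internal / 2 + boundary
        return 0.0 if total == 0 else (internal / 2) / total

    while True:
        frontier = {v for u in comm for v in adj[u]} - comm
        best, best_q = None, quality(comm)
        for v in frontier:
            q = quality(comm | {v})
            if q > best_q:
                best, best_q = v, q
        if best is None:
            return comm
        comm.add(best)
```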

  18. Non-LTE line formation in a magnetic field. I. Noncoherent scattering and true absorption

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Domke, H.; Staude, J.

    1973-08-01

The formation of a Zeeman multiplet by noncoherent scattering and true absorption in a Milne-Eddington atmosphere is considered, assuming a homogeneous magnetic field and complete depolarization of the atomic line levels. The transfer equation for the Stokes parameters is transformed into a scalar integral equation of the Wiener-Hopf type, which is solved by Sobolev's method in closed form. The influence of the magnetic field on the mean scattering number in an infinite medium is discussed. The solution of the line formation problem is obtained for a Planckian source function. This solution may be simplified by making the "finite field approximation", which should be sufficiently accurate for practical purposes.

  19. Empirical calibration of the near-infrared Ca II triplet - III. Fitting functions

    NASA Astrophysics Data System (ADS)

    Cenarro, A. J.; Gorgas, J.; Cardiel, N.; Vazdekis, A.; Peletier, R. F.

    2002-02-01

    Using a near-infrared stellar library of 706 stars with a wide coverage of atmospheric parameters, we study the behaviour of the CaII triplet strength in terms of effective temperature, surface gravity and metallicity. Empirical fitting functions for recently defined line-strength indices, namely CaT*, CaT and PaT, are provided. These functions can be easily implemented into stellar population models to provide accurate predictions for integrated CaII strengths. We also present a thorough study of the various error sources and their relation to the residuals of the derived fitting functions. Finally, the derived functional forms and the behaviour of the predicted CaII are compared with those of previous works in the field.
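Fitting functions of this type express a line-strength index as a polynomial in the atmospheric parameters, typically via theta = 5040/Teff, log g and [Fe/H], with coefficients obtained by least squares over the stellar library. A minimal first-order sketch (an assumption for illustration; the published CaT*, CaT and PaT functions are higher order with careful term selection):

```python
import numpy as np

def fit_index(teff, logg, feh, index):
    # index ~ c0 + c1*theta + c2*logg + c3*[Fe/H], theta = 5040/Teff,
    # solved by linear least squares over the calibration stars.
    theta = 5040.0 / np.asarray(teff)
    X = np.column_stack([np.ones_like(theta), theta, logg, feh])
    coeffs, *_ = np.linalg.lstsq(X, index, rcond=None)
    return coeffs
```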

  20. Adaptive super twisting vibration control of a flexible spacecraft with state rate estimation

    NASA Astrophysics Data System (ADS)

    Malekzadeh, Maryam; Karimpour, Hossein

    2018-05-01

The robust attitude and vibration control of a flexible spacecraft trying to perform accurate maneuvers in spite of various sources of uncertainty is addressed here. Difficulties in achieving precise and stable pointing arise from noisy onboard sensors, parameter indeterminacy, external disturbances, and un-modeled or hidden dynamics interactions. Based on high-order sliding-mode methods, the non-minimum-phase nature of the problem is dealt with through output redefinition. An adaptive super-twisting algorithm (ASTA) is incorporated with its observer counterpart on the system under consideration to obtain reliable attitude and vibration control in the presence of sensor noise and momentum coupling. The closed-loop efficiency is verified through simulations under various indeterminate situations and compared with other methods.

  1. Carbon Dioxide Line Shapes for Atmospheric Remote Sensing

    NASA Astrophysics Data System (ADS)

    Predoi-Cross, Adriana; Ibrahim, Amr; Wismath, Alice; Teillet, Philippe M.; Devi, V. Malathy; Benner, D. Chris; Billinghurst, Brant

    2010-02-01

    We present a detailed spectroscopic study of carbon dioxide in support of atmospheric remote sensing. We have studied two weak absorption bands near the strong ν2 band that is used to derive atmospheric temperature profiles. We have analyzed our laboratory spectra recorded with the synchrotron and globar sources with spectral line profiles that reproduce the absorption features with high accuracy. The Q-branch transitions exhibited asymmetric line shape due to weak line-mixing. For these weak transitions, we have retrieved accurate experimental line strengths, self- and air-broadening, self- and air-induced shift coefficients and weak line mixing parameters. The experimental precision is sufficient to reveal inherent variations of the width and shift coefficients according to transition quantum numbers.

  2. Comparative study of surrogate models for groundwater contamination source identification at DNAPL-contaminated sites

    NASA Astrophysics Data System (ADS)

    Hou, Zeyu; Lu, Wenxi

    2018-05-01

Knowledge of groundwater contamination sources is critical for effectively protecting groundwater resources, estimating risks, mitigating disaster, and designing remediation strategies. Many methods for groundwater contamination source identification (GCSI) have been developed in recent years, including the simulation-optimization technique. This study proposes utilizing a support vector regression (SVR) model and a kernel extreme learning machine (KELM) model to broaden the range of available surrogate models. The surrogate model is key because it replaces the simulation model, reducing the huge computational burden of iterations in the simulation-optimization technique when solving GCSI problems, especially GCSI problems for aquifers contaminated by dense nonaqueous phase liquids (DNAPLs). A comparative study between the Kriging, SVR, and KELM models is reported. Additionally, there is analysis of the influence of parameter optimization and the structure of the training sample dataset on the approximation accuracy of the surrogate model. It was found that the KELM model was the most accurate surrogate model, and its performance was significantly improved after parameter optimization. The approximation accuracy of the surrogate model to the simulation model did not always improve with increasing numbers of training samples. Using the appropriate number of training samples was critical for improving the performance of the surrogate model and avoiding unnecessary computational workload. It was concluded that the KELM model developed in this work could reasonably predict system responses under given operation conditions. Replacing the simulation model with a KELM model considerably reduced the computational burden of the simulation-optimization process while maintaining high computation accuracy.
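With a kernel function (e.g., RBF), an extreme learning machine's output weights have the closed form alpha = (K + I/C)^-1 y, i.e., ridge-regularized kernel regression. A minimal sketch with hypothetical hyperparameters, standing in for a trained KELM surrogate of a simulation model:

```python
import numpy as np

def kelm_train(X, y, gamma=1.0, C=1e3):
    # Kernel ELM training: solve (K + I/C) alpha = y for an RBF kernel K.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * d2)
    return np.linalg.solve(K + np.eye(len(X)) / C, y)

def kelm_predict(X_train, alpha, X_new, gamma=1.0):
    # Prediction is a kernel-weighted sum over the training points.
    d2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2) @ alpha
```

In a simulation-optimization loop, `kelm_predict` replaces each expensive call to the groundwater simulation model.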

  3. Radiative Transfer in a Translucent Cloud Illuminated by an Extended Background Source

    NASA Astrophysics Data System (ADS)

    Biganzoli, Davide; Potenza, Marco A. C.; Robberto, Massimo

    2017-05-01

    We discuss the radiative transfer theory for translucent clouds illuminated by an extended background source. First, we derive a rigorous solution based on the assumption that multiple scatterings produce an isotropic flux. Then we derive a more manageable analytic approximation showing that it nicely matches the results of the rigorous approach. To validate our model, we compare our predictions with accurate laboratory measurements for various types of well-characterized grains, including purely dielectric and strongly absorbing materials representative of astronomical icy and metallic grains, respectively, finding excellent agreement without the need to add free parameters. We use our model to explore the behavior of an astrophysical cloud illuminated by a diffuse source with dust grains having parameters typical of the classic ISM grains of Draine & Lee and protoplanetary disks, with an application to the dark silhouette disk 114-426 in Orion Nebula. We find that the scattering term modifies the transmitted radiation, both in terms of intensity (extinction) and shape (reddening) of the spectral distribution. In particular, for small optical thickness, our results show that scattering makes reddening almost negligible at visible wavelengths. Once the optical thickness increases enough and the probability of scattering events becomes close to or larger than 1, reddening becomes present but is appreciably modified with respect to the standard expression for line-of-sight absorption. Moreover, variations of the grain refractive index, in particular the amount of absorption, also play an important role in changing the shape of the spectral transmission curve, with dielectric grains showing the minimum amount of reddening.

  4. IN-SYNC. IV. The Young Stellar Population in the Orion A Molecular Cloud

    NASA Astrophysics Data System (ADS)

    Da Rio, Nicola; Tan, Jonathan C.; Covey, Kevin R.; Cottaar, Michiel; Foster, Jonathan B.; Cullen, Nicholas C.; Tobin, John J.; Kim, Jinyoung S.; Meyer, Michael R.; Nidever, David L.; Stassun, Keivan G.; Chojnowski, S. Drew; Flaherty, Kevin M.; Majewski, Steve; Skrutskie, Michael F.; Zasowski, Gail; Pan, Kaike

    2016-02-01

    We present the results of the Sloan Digital Sky Survey APOGEE INfrared Spectroscopy of Young Nebulous Clusters program (IN-SYNC) survey of the Orion A molecular cloud. This survey obtained high-resolution near-infrared spectroscopy of about 2700 young pre-main-sequence stars over a ∼6° field of view. We have measured accurate stellar parameters (Teff, log g, v sin i) and extinctions and placed the sources in the Hertzsprung-Russell diagram (HRD). We have also extracted radial velocities for the kinematic characterization of the population. We compare our measurements with literature results to assess the performance and accuracy of the survey. Source extinction shows evidence for dust grains that are larger than those in the diffuse interstellar medium: we estimate an average RV = 5.5 in the region. Importantly, we find a clear correlation between HRD-inferred ages and spectroscopic surface-gravity-inferred ages, and between extinction and disk presence; this strongly suggests a real spread of ages larger than a few Myr. Focusing on the young population around NGC 1980/ι Ori, which has previously been suggested to be a separate, foreground, older cluster, we confirm its older (∼5 Myr) age and low AV, but considering that its radial velocity distribution is indistinguishable from that of Orion A's population, we suggest that NGC 1980 is part of Orion A's star formation activity. Based on their stellar parameters and kinematic properties, we identify 383 new candidate members of Orion A, most of which are diskless sources in areas of the region poorly covered by previous works.

  5. [Comparison between administrative and clinical databases in the evaluation of cardiac surgery performance].

    PubMed

    Rosato, Stefano; D'Errigo, Paola; Badoni, Gabriella; Fusco, Danilo; Perucci, Carlo A; Seccareccia, Fulvia

    2008-08-01

    The availability of two contemporary sources of information about coronary artery bypass graft (CABG) interventions allowed us 1) to verify the feasibility of performing outcome evaluation studies using administrative data sources, and 2) to compare hospital performance obtained using the CABG Project clinical database with that derived from current administrative data. Interventions recorded in the CABG Project were linked to the hospital discharge record (HDR) administrative database. Only the linked records (46% of the total CABG Project) were considered for subsequent analyses. A new selected population, "clinical card-HDR", was then defined. Two independent risk-adjustment models were applied, each using information derived from one of the two sources. HDR information was then supplemented with some patient preoperative conditions from the CABG clinical database. The two models were compared in terms of their fit to the data, and the hospital performances that each model identified as significantly different from the mean were compared. In only 4 of the 13 hospitals considered did the results obtained using the HDR model not completely overlap with those obtained using the CABG model. When comparing the statistical parameters of the HDR model and of the HDR model plus patient preoperative conditions, the latter showed the better fit to the data. In this "clinical card-HDR" population, hospital performance assessment obtained using information from the clinical database is similar to that derived from current administrative data. However, when risk-adjustment models built on administrative databases are supplemented with a few clinical variables, their statistical parameters improve and hospital performance assessment becomes more accurate.

  6. The phylogeny of quasars and the ontogeny of their central black holes

    NASA Astrophysics Data System (ADS)

    Fraix-Burnet, Didier; Marziani, Paola; D'Onofrio, Mauro; Dultzin, Deborah

    2017-02-01

    The connection between multifrequency quasar observational and physical parameters related to accretion processes is still open to debate. Over the last 20 years, Eigenvector 1-based approaches developed since the early papers by Boroson and Green (1992) and Sulentic et al. (2000b) have proven to be a remarkably powerful tool to investigate this issue, and have led to the definition of a quasar "main sequence". In this paper we perform a cladistic analysis on two samples of 215 and 85 low-z quasars (z ~ 0.7) which were studied in several previous works and which offer a satisfactory coverage of the Eigenvector 1-derived main sequence. The data encompass accurate measurements of observational parameters that represent key aspects associated with the structural diversity of quasars. Cladistics is able to group sources radiating at higher Eddington ratios, as well as to separate radio-quiet (RQ) and radio-loud (RL) quasars. The analysis suggests a black hole mass threshold for powerful radio emission and also properly distinguishes core-dominated and lobe-dominated quasars, in accordance with the basic tenet of RL unification schemes. Considering that black hole mass provides a sort of "arrow of time" of nuclear activity, a phylogenetic interpretation becomes possible if cladistic trees are rooted on black hole mass: the ontogeny of black holes is represented by their monotonic increase in mass. More massive radio-quiet Population B sources at low z then become a more evolved counterpart of Population A, i.e., of the wind-dominated sources to which the "local" Narrow-Line Seyfert 1s belong.

  7. An accurate halo model for fitting non-linear cosmological power spectra and baryonic feedback models

    NASA Astrophysics Data System (ADS)

    Mead, A. J.; Peacock, J. A.; Heymans, C.; Joudaki, S.; Heavens, A. F.

    2015-12-01

    We present an optimized variant of the halo model, designed to produce accurate matter power spectra well into the non-linear regime for a wide range of cosmological models. To do this, we introduce physically motivated free parameters into the halo-model formalism and fit these to data from high-resolution N-body simulations. For a variety of Λ cold dark matter (ΛCDM) and wCDM models, the halo-model power is accurate to ≃ 5 per cent for k ≤ 10 h Mpc⁻¹ and z ≤ 2. An advantage of our new halo model is that it can be adapted to account for the effects of baryonic feedback on the power spectrum. We demonstrate this by fitting the halo model to power spectra from the OWLS (OverWhelmingly Large Simulations) hydrodynamical simulation suite via parameters that govern halo internal structure. We are able to fit all feedback models investigated at the 5 per cent level using only two free parameters, and we place limits on the range of these halo parameters for feedback models investigated by the OWLS simulations. Accurate predictions to high k are vital for weak-lensing surveys, and these halo parameters could be considered nuisance parameters to marginalize over in future analyses to mitigate uncertainty regarding the details of feedback. Finally, we investigate how lensing observables predicted by our model compare to those from simulations and from HALOFIT for a range of k-cuts and feedback models and quantify the angular scales at which these effects become important. Code to calculate power spectra from the model presented in this paper can be found at https://github.com/alexander-mead/hmcode.

  8. Parameter and input data uncertainty estimation for the assessment of water resources in two sub-basins of the Limpopo River Basin

    NASA Astrophysics Data System (ADS)

    Oosthuizen, Nadia; Hughes, Denis A.; Kapangaziwiri, Evison; Mwenge Kahinda, Jean-Marc; Mvandaba, Vuyelwa

    2018-05-01

    The demand for water resources is rapidly growing, placing more strain on access to water and its management. In order to manage water resources appropriately, there is a need to accurately quantify the available resources. Unfortunately, the data required for such assessments are frequently far from sufficient in terms of availability and quality, especially in southern Africa. In this study, the uncertainty related to the estimation of water resources of two sub-basins of the Limpopo River Basin - the Mogalakwena in South Africa and the Shashe shared between Botswana and Zimbabwe - is assessed. Input data and model parameters are significant sources of uncertainty that should be quantified. In southern Africa, water use data are among the most unreliable model inputs because available databases generally contain only licensed information, while actual use is generally unknown. The study assesses how these uncertainties impact the estimation of the surface water resources of the sub-basins. Data on farm reservoirs and irrigated areas from various sources were collected and used to run the model. Many farm dams and large irrigation areas are located in the upper parts of the Mogalakwena sub-basin. Results indicate that although the water use uncertainty is small, the medium to low flows are clearly impacted. The simulated mean monthly flows at the outlet of the Mogalakwena sub-basin were between 22.62 and 24.68 Mm3 per month when incorporating only the uncertainty related to the main physical runoff-generating parameters. The range of total predictive uncertainty increased to between 22.15 and 24.99 Mm3 when water use data, such as small farm dams, large reservoirs, and irrigation, were included. For the Shashe sub-basin, incorporating only the uncertainty related to the main runoff parameters resulted in mean monthly flows between 11.66 and 14.54 Mm3; this range changed to between 11.66 and 17.72 Mm3 after the uncertainty in water use information was added.

  9. Deriving stellar parameters with the SME software package

    NASA Astrophysics Data System (ADS)

    Piskunov, N.

    2017-09-01

    Photometry and spectroscopy are complementary tools for deriving accurate stellar parameters. Here I present SME, one of the popular packages for stellar spectroscopy, with emphasis on the latest developments and on error assessment for the derived parameters.

  10. SEISMIC SOURCE SCALING AND DISCRIMINATION IN DIVERSE TECTONIC ENVIRONMENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abercrombie, R E; Mayeda, K; Walter, W R

    2007-07-10

    The objectives of this study are to improve low-magnitude regional seismic discrimination by performing a thorough investigation of earthquake source scaling using diverse, high-quality datasets from varied tectonic regions. Local-to-regional high-frequency discrimination requires an estimate of how earthquakes scale with size. Walter and Taylor (2002) developed the MDAC (Magnitude and Distance Amplitude Corrections) method to empirically account for these effects through regional calibration. The accuracy of these corrections has a direct impact on our ability to identify clandestine explosions in the broad regional areas characterized by low seismicity. Unfortunately, our knowledge of source scaling at small magnitudes (i.e., m_b < ~4.0) is poorly resolved. It is not clear whether different studies obtain contradictory results because they analyze different earthquakes, or because they use different methods. Even in regions that are well studied, such as test sites or areas of high seismicity, we still rely on empirical scaling relations derived from studies of inter-plate regions half-way around the world. We investigate earthquake sources and scaling in different tectonic settings, comparing direct-wave and coda-wave analysis methods. We begin by developing and improving the two methods, and in future years we will apply both to each set of earthquakes. Analysis of locally recorded, direct waves from events is intuitively the simplest way of obtaining accurate source parameters, as these waves have been least affected by travel through the earth. But only a limited number of earthquakes are recorded locally by sufficient stations to give good azimuthal coverage and have very closely located smaller earthquakes that can be used as an empirical Green's function (EGF) to remove path effects.
In contrast, coda waves average radiation from all directions, so single-station records should be adequate, and previous work suggests that the requirements for the EGF event are much less stringent. We can study more earthquakes using the coda-wave methods, while using direct-wave methods for the best-recorded subset of events so as to investigate any differences between the results of the two approaches. Finding 'perfect' EGF events for direct-wave analysis is difficult, as is ascertaining the quality of a particular EGF event. We develop a multi-taper method to obtain time-domain source-time functions by frequency division. If an earthquake and EGF event pair produce a clear, time-domain source pulse, then we accept the EGF event. We then model the spectral (amplitude) ratio to determine source parameters from both direct P and S waves. We use the well-recorded sequence of aftershocks of the M5 Au Sable Forks, NY, earthquake to test the method and also to obtain some of the first accurate source parameters for small earthquakes in eastern North America. We find that the stress drops are high, confirming previous work suggesting that intraplate continental earthquakes have higher stress drops than events at plate boundaries. We simplify and improve the coda-wave analysis method by calculating spectral ratios between different-sized earthquakes. We first compare spectral ratio performance between local and near-regional S and coda waves in the San Francisco Bay region for moderate-sized events. The average spectral ratio standard deviations using coda are ~0.05 to 0.12, roughly a factor of 3 smaller than for direct S waves over 0.2 < f < 15.0 Hz. Also, direct-wave analysis requires collocated pairs of earthquakes, whereas the event pairs (Green's function and target events) can be separated by ~25 km for coda amplitudes without any appreciable degradation.
We then apply the coda spectral ratio method to the 1999 Hector Mine mainshock (Mw 7.0, Mojave Desert) and its larger aftershocks. We observe a clear departure from self-similarity, consistent with previous studies using similar regional datasets.
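The spectral-ratio logic can be illustrated with the standard omega-squared (Brune-type) source model: the ratio of a target event's spectrum to that of a smaller EGF event flattens to the moment ratio at low frequency and to the moment ratio scaled by the squared corner-frequency ratio at high frequency. The moments and corner frequencies below are arbitrary illustrative values, not results from this study:

```python
import numpy as np

def brune(f, M0, fc):
    # omega-squared source displacement spectrum
    return M0 / (1.0 + (f / fc) ** 2)

f = np.logspace(-1, 2, 300)                   # 0.1-100 Hz
target = brune(f, M0=1e17, fc=1.0)            # larger (target) event
egf = brune(f, M0=1e14, fc=10.0)              # empirical Green's function event
ratio = target / egf

low_plateau = ratio[0]       # ~ moment ratio
high_plateau = ratio[-1]     # ~ moment ratio * (fc_target / fc_egf)^2
```

Fitting this two-plateau shape to an observed coda or direct-wave spectral ratio constrains both corner frequencies, and hence the stress drops, while path and site terms cancel in the ratio.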

  11. Location of acoustic emission sources generated by air flow

    PubMed

    Kosel; Grabec; Muzic

    2000-03-01

    The location of continuous acoustic emission sources is a difficult problem in non-destructive testing. This article describes one-dimensional location of continuous acoustic emission sources using an intelligent locator, which solves the location problem by learning from examples. To verify whether continuous acoustic emission caused by leakage air flow can be located accurately by the intelligent locator, an experiment on a thin aluminum band was performed. Results show that an accurate location can be determined by combining a cross-correlation function with an appropriate bandpass filter. With this combination, both discrete and continuous acoustic emission sources can be located, using discrete acoustic emission sources for locator learning.
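For the one-dimensional case, the cross-correlation approach amounts to estimating the differential arrival time between two sensors and converting it to a position: with sensors at 0 and L and wave speed v, the delay is dt = (L - 2x)/v. The waveform, sampling rate, and wave speed below are hypothetical, not the experiment's values:

```python
import numpy as np

fs = 1.0e6            # sampling rate [Hz]
v = 5000.0            # assumed wave speed in the band [m/s]
L = 1.0               # sensor separation [m]
x_true = 0.3          # source position [m], to be recovered

n = 4096
t = np.arange(n) / fs

def pulse(t0):
    # band-limited burst arriving at time t0
    env = np.exp(-((t - t0 - 2e-4) ** 2) / (2 * (2e-5) ** 2))
    return env * np.sin(2 * np.pi * 5e4 * (t - t0))

s1 = pulse(x_true / v)        # sensor at x = 0
s2 = pulse((L - x_true) / v)  # sensor at x = L

xc = np.correlate(s2, s1, mode="full")
lag = np.argmax(xc) - (n - 1)      # samples by which s2 lags s1
dt = lag / fs                      # differential arrival time
x_est = (L - v * dt) / 2.0
```

In practice a bandpass filter is applied before correlating, as the abstract notes; for noisy continuous emission the correlation peak broadens, which is where the learning-based locator helps.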

  12. Stochastic inversion of cross-borehole radar data from metalliferous vein detection

    NASA Astrophysics Data System (ADS)

    Zeng, Zhaofa; Huai, Nan; Li, Jing; Zhao, Xueyu; Liu, Cai; Hu, Yingsa; Zhang, Ling; Hu, Zuzhi; Yang, Hui

    2017-12-01

    In the exploration and evaluation of metalliferous veins with a cross-borehole radar system, traditional linear inversion methods (least squares inversion, LSQR) recover only indirect parameters (permittivity, resistivity, or velocity) with which to estimate the target structure; they cannot accurately reflect the geological properties of the vein media. In order to obtain the intrinsic geological parameters and internal distribution, in this paper we build a metalliferous vein model based on stochastic effective medium theory and carry out stochastic inversion and parameter estimation based on a Monte Carlo sampling algorithm. Compared with conventional LSQR, the stochastic inversion yields higher-resolution permittivity and velocity models of the target body, allowing more accurate estimation of the distribution characteristics of the anomaly and of the target's internal parameters. This provides a new approach for evaluating the properties of complex target media.
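A minimal Metropolis sampler conveys the flavor of Monte Carlo parameter estimation, though the actual inversion is far more elaborate; the toy forward model (a single travel-time-like datum), starting model, and noise level are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)

def forward(m):
    # hypothetical forward model: travel time ~ thickness / velocity
    return m[0] / m[1]

m_true = np.array([10.0, 2.0])                 # thickness, velocity
sigma = 0.05                                   # observational noise level
d_obs = forward(m_true) + rng.normal(0.0, sigma)

def log_like(m):
    return -0.5 * ((forward(m) - d_obs) / sigma) ** 2

m = np.array([8.0, 1.5])                       # starting model
ll = log_like(m)
chain = []
for _ in range(20000):
    prop = m + rng.normal(0.0, [0.2, 0.05])    # random-walk proposal
    if (prop > 0).all():                       # positivity prior
        llp = log_like(prop)
        if np.log(rng.random()) < llp - ll:    # Metropolis acceptance
            m, ll = prop, llp
    chain.append(m.copy())

post = np.array(chain[5000:])                  # discard burn-in
pred = post[:, 0] / post[:, 1]                 # posterior predicted data
```

The posterior concentrates along the ridge of models that fit d_obs; histograms of the chain quantify the parameter uncertainty that a least-squares point estimate hides.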

  13. Do Skilled Elementary Teachers Hold Scientific Conceptions and Can They Accurately Predict the Type and Source of Students' Preconceptions of Electric Circuits?

    ERIC Educational Resources Information Center

    Lin, Jing-Wen

    2016-01-01

    Holding scientific conceptions and having the ability to accurately predict students' preconceptions are a prerequisite for science teachers to design appropriate constructivist-oriented learning experiences. This study explored the types and sources of students' preconceptions of electric circuits. First, 438 grade 3 (9 years old) students were…

  14. Phase contrast imaging simulation and measurements using polychromatic sources with small source-object distances

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Golosio, Bruno; Carpinelli, Massimo; Masala, Giovanni Luca

    Phase contrast imaging is a technique widely used in synchrotron facilities for nondestructive analysis. Such technique can also be implemented through microfocus x-ray tube systems. Recently, a relatively new type of compact, quasimonochromatic x-ray sources based on Compton backscattering has been proposed for phase contrast imaging applications. In order to plan a phase contrast imaging system setup, to evaluate the system performance and to choose the experimental parameters that optimize the image quality, it is important to have reliable software for phase contrast imaging simulation. Several software tools have been developed and tested against experimental measurements at synchrotron facilities devoted to phase contrast imaging. However, many approximations that are valid in such conditions (e.g., large source-object distance, small transverse size of the object, plane wave approximation, monochromatic beam, and Gaussian-shaped source focal spot) are not generally suitable for x-ray tubes and other compact systems. In this work we describe a general method for the simulation of phase contrast imaging using polychromatic sources based on a spherical wave description of the beam and on a double-Gaussian model of the source focal spot, we discuss the validity of some possible approximations, and we test the simulations against experimental measurements using a microfocus x-ray tube on three types of polymers (nylon, poly-ethylene-terephthalate, and poly-methyl-methacrylate) at varying source-object distance. It will be shown that, as long as all experimental conditions are described accurately in the simulations, the described method yields results that are in good agreement with experimental measurements.

  15. REVERBERATION AND PHOTOIONIZATION ESTIMATES OF THE BROAD-LINE REGION RADIUS IN LOW-z QUASARS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Negrete, C. Alenka; Dultzin, Deborah; Marziani, Paola

    2013-07-01

    Black hole mass estimation in quasars, especially at high redshift, involves the use of single-epoch spectra with signal-to-noise ratio and resolution that permit accurate measurement of the width of a broad line assumed to be a reliable virial estimator. Coupled with an estimate of the radius of the broad-line region (BLR), this yields the black hole mass M_BH. The radius of the BLR may be inferred from an extrapolation of the correlation between source luminosity and reverberation-derived r_BLR measures (the so-called Kaspi relation involving about 60 low-z sources). We are exploring a different method for estimating r_BLR directly from inferred physical conditions in the BLR of each source. We report here on a comparison of r_BLR estimates that come from our method and from reverberation mapping. Our "photoionization" method employs diagnostic line intensity ratios in the rest-frame range 1400-2000 Å (Al III λ1860/Si III] λ1892, C IV λ1549/Al III λ1860) that enable derivation of the product of density and ionization parameter, with the BLR distance derived from the definition of the ionization parameter. We find good agreement between our estimates of the density, ionization parameter, and r_BLR and those from reverberation mapping. We suggest empirical corrections to improve the agreement between individual photoionization-derived r_BLR values and those obtained from reverberation mapping. The results in this paper can be exploited to estimate M_BH for large samples of high-z quasars using an appropriate virial broadening estimator. We show that the widths of the UV intermediate emission lines are consistent with the width of Hβ, thereby providing a reliable virial broadening estimator that can be measured in large samples of high-z quasars.
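The distance estimate follows directly from the definition of the ionization parameter, U = Q(H) / (4 pi r^2 n_H c), so r_BLR = sqrt(Q(H) / (4 pi c n_H U)). The ionizing photon rate and density-ionization product below are fiducial, hypothetical values chosen only to show the arithmetic:

```python
import math

c = 2.998e10                     # speed of light [cm/s]
CM_PER_LIGHT_DAY = c * 86400.0

def r_blr_cm(Q_H, nH_U):
    # invert U = Q(H) / (4 pi r^2 n_H c) for r
    return math.sqrt(Q_H / (4.0 * math.pi * c * nH_U))

Q_H = 1e56                       # ionizing photons per second (hypothetical)
nH_U = 10 ** 9.6                 # n_H * U product [cm^-3] (hypothetical)

r = r_blr_cm(Q_H, nH_U)
r_light_days = r / CM_PER_LIGHT_DAY
```

With these inputs the BLR radius comes out near a hundred light-days, the order expected for luminous quasars; in the method above, the diagnostic line ratios serve to fix n_H * U for each source.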

  16. Bayesian resolution of TEM, CSEM and MT soundings: a comparative study

    NASA Astrophysics Data System (ADS)

    Blatter, D. B.; Ray, A.; Key, K.

    2017-12-01

    We examine the resolution of three electromagnetic exploration methods commonly used to map the electrical conductivity of the shallow crust - the magnetotelluric (MT) method, the controlled-source electromagnetic (CSEM) method and the transient electromagnetic (TEM) method. TEM and CSEM utilize an artificial source of EM energy, while MT makes use of natural variations in the Earth's electromagnetic field. For a given geological setting and acquisition parameters, each of these methods will have a different resolution due to differences in the source field polarization and the frequency range of the measurements. For example, the MT and TEM methods primarily rely on induced horizontal currents and are most sensitive to conductive layers while the CSEM method generates vertical loops of current and is more sensitive to resistive features. Our study seeks to provide a robust resolution comparison that can help inform exploration geophysicists about which technique is best suited for a particular target. While it is possible to understand and describe a difference in resolution qualitatively, it remains challenging to fully describe it quantitatively using optimization based approaches. Part of the difficulty here stems from the standard electromagnetic inversion toolkit, which makes heavy use of regularization (often in the form of smoothing) to constrain the non-uniqueness inherent in the inverse problem. This regularization makes it difficult to accurately estimate the uncertainty in estimated model parameters - and therefore obscures their true resolution. To overcome this difficulty, we compare the resolution of CSEM, airborne TEM, and MT data quantitatively using a Bayesian trans-dimensional Markov chain Monte Carlo (McMC) inversion scheme. Noisy synthetic data for this study are computed from various representative 1D test models: a conductive anomaly under a conductive/resistive overburden; and a resistive anomaly under a conductive/resistive overburden. 
In addition to obtaining the full posterior probability density function of the model parameters, we develop a metric to more directly compare the resolution of each method as a function of depth.

  17. Effects of LiDAR point density, sampling size and height threshold on estimation accuracy of crop biophysical parameters.

    PubMed

    Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong

    2016-05-30

    Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimates of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters, but the estimation accuracy is affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R² = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m²), showing that LiDAR data can accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicate that LiDAR point density has an important effect on estimation accuracy; however, high point density did not always produce highly accurate estimates, and reduced point density could still deliver reasonable results. Furthermore, sampling size and height threshold were additional key factors affecting estimation accuracy, so optimal values of both should be determined to improve it. Our results also imply that a higher LiDAR point density, a larger sampling size and a higher height threshold are required to obtain accurate corn LAI estimates compared with height and biomass estimates. In general, these results provide valuable guidance for LiDAR data acquisition and for the estimation of vegetation biophysical parameters from LiDAR data.

  18. Improved Overpressure Recording and Modeling for Near-Surface Explosion Forensics

    NASA Astrophysics Data System (ADS)

    Kim, K.; Schnurr, J.; Garces, M. A.; Rodgers, A. J.

    2017-12-01

    The accurate recording and analysis of air-blast acoustic waveforms is a key component of the forensic analysis of explosive events. Smartphone apps can enhance traditional technologies by providing scalable, cost-effective, ubiquitous sensor solutions for monitoring blasts, undeclared activities, and inaccessible facilities. During a series of near-surface chemical high-explosive tests, iPhone 6 devices running the RedVox infrasound recorder app were co-located with high-fidelity Hyperion overpressure sensors, allowing direct comparison of the resolution and frequency content of the devices. Data from the traditional sensors are used to characterize blast signatures and to determine relative iPhone microphone amplitude and phase responses. A Wiener-filter-based source deconvolution method, using a parameterized source function estimated from the traditional overpressure sensor data, is applied to estimate system responses. In addition, progress on a new parameterized air-blast model is presented. The model is based on the analysis of a large set of overpressure waveforms from several surface explosion test series. An appropriate functional form, with parameters determined empirically from modern air-blast and acoustic data, will allow better parameterization of signals and improved characterization of explosive sources.

  19. Time-distance domain transformation for Acoustic Emission source localization in thin metallic plates.

    PubMed

    Grabowski, Krzysztof; Gawronski, Mateusz; Baran, Ireneusz; Spychalski, Wojciech; Staszewski, Wieslaw J; Uhl, Tadeusz; Kundu, Tribikram; Packo, Pawel

    2016-05-01

    Acoustic Emission used in Non-Destructive Testing is focused on the analysis of elastic waves propagating in mechanical structures. The information carried by the generated acoustic waves, recorded by a set of transducers, allows the integrity of these structures to be determined. Material properties and geometry clearly have a strong impact on the result. In this paper a method for Acoustic Emission source localization in thin plates is presented. The approach is based on the Time-Distance Domain Transform, a wavenumber-frequency mapping technique for precise event localization. The major advantage of the technique is dispersion compensation through phase-shifting of the investigated waveforms to acquire the most accurate output, allowing source-sensor distance estimation using a single transducer. The accuracy and robustness of this process are also investigated, including the influence of the Young's modulus value and of numerical parameters on damage detection. By merging the Time-Distance Domain Transform with an optimal distance selection technique, an identification-localization algorithm is achieved. The method is investigated analytically, numerically and experimentally; the latter involves both laboratory and large-scale industrial tests.

  20. Photometric Redshift Calibration Strategy for WFIRST Cosmology

    NASA Astrophysics Data System (ADS)

    Hemmati, Shoubaneh; WFIRST, WFIRST-HLS-COSMOLOGY

    2018-01-01

    In order for WFIRST and other Stage IV dark energy experiments (e.g., LSST, Euclid) to infer cosmological parameters not limited by systematic errors, accurate redshift measurements are needed. This accuracy can only be met by using spectroscopic subsamples to calibrate the full sample. In this poster, we employ the machine learning, SOM-based spectroscopic sampling technique developed in Masters et al. 2015, which uses the empirical color-redshift relation among galaxies to find the minimum set of spectra required for the WFIRST weak lensing calibration. We use galaxies from the CANDELS survey to build the LSST+WFIRST lensing analog sample of ~36k objects and train the LSST+WFIRST SOM. We show that 26% of the WFIRST lensing sample consists of sources fainter than the Euclid depth in the optical, 91% of which live in color cells already occupied by brighter galaxies. We demonstrate the similarity between faint and bright galaxies as well as the feasibility of redshift measurements at different brightness levels. However, 4% of SOM cells are occupied only by faint galaxies, for which we recommend extra spectroscopy of ~200 new sources. Acquiring the spectra of these sources will enable the comprehensive calibration of the WFIRST color-redshift relation.

  1. Sonoelastographic imaging of interference patterns for estimation of the shear velocity of homogeneous biomaterials

    NASA Astrophysics Data System (ADS)

    Wu, Zhe; Taylor, Lawrence S.; Rubens, Deborah J.; Parker, Kevin J.

    2004-03-01

    The shear wave velocity is one of a few important parameters that characterize the mechanical properties of bio-materials. In this paper, two noninvasive methods are proposed to measure the shear velocity by inspecting the shear wave interference patterns. In one method, two shear wave sources are placed on the opposite two sides of a sample, driven by the identical sinusoidal signals. The shear waves from the two sources interact to create interference patterns, which are visualized by the vibration sonoelastography technique. The spacing between the pattern bands equals half of the shear wavelength. The shear velocity can be obtained by taking the product of the wavelength and the frequency. An alternative method is to drive the two vibration sources at slightly different frequencies. In this case, the interference patterns no longer remain stationary. It is proved that the apparent velocity of the moving patterns is proportional to the shear velocity in the medium. Since the apparent velocity of the patterns can be measured by analysing the video sequence, the shear velocity can be obtained thereafter. These approaches are validated by a conventional shear wave time-of-flight approach, and they are accurate within 4% on various homogeneous tissue-mimicking phantoms.
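In the equal-frequency configuration the computation is immediate: the interference bands are spaced by half a shear wavelength, so v_s = 2 x spacing x f. The numbers below are hypothetical readings, not measurements from the paper:

```python
def shear_velocity(band_spacing_m, freq_hz):
    # interference bands are spaced by half the shear wavelength
    wavelength = 2.0 * band_spacing_m
    return wavelength * freq_hz

# hypothetical reading: 2.5 mm band spacing at 400 Hz excitation
v_s = shear_velocity(2.5e-3, 400.0)   # 2.0 m/s
```

In the detuned variant, the pattern crawls at an apparent speed proportional to the shear velocity (scaled by the small frequency offset), so tracking the moving bands in the video sequence yields v_s without measuring band spacing directly.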

  2. Determination of Spatial Distribution of Air Pollution by Dye Laser Measurement of Differential Absorption of Elastic Backscatter

    NASA Technical Reports Server (NTRS)

    Ahmed, S. A.; Gergely, J. S.

    1973-01-01

    This paper presents the results of an analytical study of a lidar system which uses tunable organic dye lasers to accurately determine the spatial distribution of molecular air pollutants. Experimental work to date on simultaneous multiwavelength dye laser sources for this system is also described. Basically, the scheme determines the concentration of air pollutants by measuring the differential absorption of an (at least) two-wavelength lidar signal elastically backscattered by the atmosphere. Only relative measurements of the backscattered intensity at each of the two wavelengths, one on and one off the resonance absorption of the pollutant in question, are required. The various parameters of the scheme are examined and the component elements required for a system of this type are discussed, with emphasis on the dye laser source. Potential advantages of simultaneous multiwavelength outputs are described, and the use of correlation spectroscopy in this context is examined. Comparisons are also made with the use of infrared probing wavelengths and sources instead of dye lasers. Estimates of the sensitivity and accuracy of a practical dye laser system of this type, made for specific pollutants, show it to have inherent advantages over other schemes for determining pollutant spatial distribution.
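
    The core of the scheme is the standard two-wavelength differential-absorption (DIAL) relation, in which only ratios of the on- and off-resonance returns at two ranges appear, so no absolute receiver calibration is needed. A sketch with hypothetical values (the cross-section and powers below are illustrative, not from the paper):

```python
import math

def dial_concentration(p_on_r1, p_on_r2, p_off_r1, p_off_r2,
                       delta_sigma_m2, delta_r_m):
    """Mean pollutant number density (m^-3) in the range cell [R1, R2] from
    the standard two-wavelength DIAL equation; only power *ratios* enter."""
    ratio = (p_off_r2 * p_on_r1) / (p_on_r2 * p_off_r1)
    return math.log(ratio) / (2.0 * delta_sigma_m2 * delta_r_m)

# Hypothetical numbers: differential absorption cross-section 1e-22 m^2,
# 100 m range cell, relative backscatter powers at the two ranges.
n = dial_concentration(1.0, 0.6, 1.0, 0.9, 1e-22, 100.0)  # ~2e19 m^-3
```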

  3. EUV laser produced and induced plasmas for nanolithography

    NASA Astrophysics Data System (ADS)

    Sizyuk, Tatyana; Hassanein, Ahmed

    2017-10-01

    Laser-produced plasma EUV sources are being extensively studied for the development of new technology for computer chip production. Challenging tasks include optimizing EUV source efficiency, producing a powerful source within the 2% bandwidth around 13.5 nm required for high-volume manufacturing (HVM), and increasing the lifetime of the collecting optics. Mass-limited targets, such as small droplets, reduce contamination of the chamber environment and damage to the mirror surface. However, reducing the droplet size limits the EUV power output. Our analysis established the target parameters and chamber conditions required to achieve 500 W EUV output for HVM. The HEIGHTS package was used to simulate laser-produced plasma evolution, starting from laser interaction with the solid target through the development and expansion of the vapor/plasma plume, with accurate optical data calculation, especially in the narrow EUV region. Detailed 3D modeling of the mixed environment, including the evolution and interplay of the plasma produced by lasers from the Sn target and the plasma produced by in-band and out-of-band EUV radiation in the ambient gas used for protecting and cleaning the collecting optics, allowed predicting conditions in the entire LPP system. The effect of these conditions on EUV photon absorption and collection was analyzed. This work is supported by the National Science Foundation, PIRE project.

  4. Accurate determination of the charge transfer efficiency of photoanodes for solar water splitting.

    PubMed

    Klotz, Dino; Grave, Daniel A; Rothschild, Avner

    2017-08-09

    The oxygen evolution reaction (OER) at the surface of semiconductor photoanodes is critical for photoelectrochemical water splitting. This reaction involves photo-generated holes that oxidize water via charge transfer at the photoanode/electrolyte interface. However, a certain fraction of the holes that reach the surface recombine with electrons from the conduction band, giving rise to the surface recombination loss. The charge transfer efficiency, ηt, defined as the ratio between the flux of holes that contribute to the water oxidation reaction and the total flux of holes that reach the surface, is an important parameter that helps to distinguish between bulk and surface recombination losses. However, accurate determination of ηt by conventional voltammetry measurements is complicated because only the total current is measured and it is difficult to discern between different contributions to the current. Chopped light measurement (CLM) and hole scavenger measurement (HSM) techniques are widely employed to determine ηt, but they often lead to errors resulting from instrumental as well as fundamental limitations. Intensity modulated photocurrent spectroscopy (IMPS) is better suited for accurate determination of ηt because it provides direct information on both the total photocurrent and the surface recombination current. However, careful analysis of IMPS measurements at different light intensities is required to account for nonlinear effects. This work compares the ηt values obtained by these methods using heteroepitaxial thin-film hematite photoanodes as a case study. We show that a wide spread of ηt values is obtained by different analysis methods, and even within the same method different values may be obtained depending on instrumental and experimental conditions such as the light source and light intensity.
    Statistical analysis of the results obtained for our model hematite photoanode shows good correlation between different methods for measurements carried out with the same light source, light intensity and potential. However, there is a considerable spread in the results obtained by different methods. For accurate determination of ηt, we recommend IMPS measurements in operando with a bias light intensity such that the irradiance is as close as possible to the AM1.5 Global solar spectrum.
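
    For orientation, the simple chopped-light estimate of ηt, the ratio of the steady-state photocurrent to the instantaneous spike at light-on, can be sketched as follows; the transient below is synthetic, and the abstract's point is precisely that this CLM-style estimate can be biased relative to IMPS.

```python
import numpy as np

def eta_t_from_transient(j_on, settle=0.9):
    """Crude charge-transfer efficiency from a chopped-light photocurrent
    transient: steady-state plateau over the instantaneous light-on spike.
    This is the simple CLM estimate whose limitations the abstract notes."""
    j_inst = j_on.max()                           # spike ~ total hole flux
    j_ss = j_on[int(settle * j_on.size):].mean()  # plateau ~ transferred flux
    return j_ss / j_inst

# Synthetic light-on transient: a spike decaying to a plateau (hypothetical)
t = np.linspace(0.0, 1.0, 1000)
j = 0.4 + 0.6 * np.exp(-t / 0.05)
eta = eta_t_from_transient(j)  # ~0.4 for this synthetic transient
```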

  5. The Rupture Characteristic of 1999 Izmit Sequence Using IRIS Data

    NASA Astrophysics Data System (ADS)

    Konca, A. O.; Helmberger, D. V.; Ji, C.; Tan, Y.

    2003-12-01

    The standard source studies use teleseismic data (30° to 90°) to analyze earthquakes. Therefore, only a limited portion of the focal sphere is involved in source determinations. Furthermore, the locations and origin times of events remain incompatible with local determinations. Here, we attempt to resolve such issues by using IRIS data at all distances, leading to more accurate and detailed rupture properties and accurate relative locations. The 1999 Izmit earthquake sequence is chosen to test our method. The challenge of using data outside the conventional teleseismic distance range is that the arrival times and waveforms are affected more by the Earth structure. We overcome this difficulty by calibrating the path effects for the mainshock using the simpler aftershocks. Therefore, it is crucial to determine the source parameters of the aftershocks. We constructed a Green's function library from a regionalized 1-D model and performed a grid search to establish the depth and fault parameters based on waveform matching for the Pnl waves between the synthetics and data, allowing the synthetics at each station to shift separately to account for the path effect. Our results show that the earthquake depth was around 7 km, rather than the 19 km reported by the local observatory (Kandilli) or the 15 km of the Harvard CMT solution. The best focal mechanism has a strike of 263°, a dip of 65°, and a rake of 180°, which is very close to the Harvard CMT solution. The waveform fits of this aftershock are then used as a criterion to select useful source-station paths. A path with a cross-correlation value above 90% between data and synthetics is defined as a "good path" and can be used for studying the Izmit and Duzce earthquakes. We find that the stations in Central Europe and some of the Greek Islands are "good paths", while the stations in Northeast Africa and Italy cannot be used.
    The time shifts that give the best cross-correlation values are used to calibrate the picks of the Izmit and Duzce events, which provides an objective way to pick arrival times. In contrast, our preliminary inversions using teleseismic data for the Duzce and Izmit events show that hand-picked P and S arrival times of the same station from two very close events are not always consistent. Since the arrival-time picks govern the inferred rupture pattern and rupture velocity, our methodology brings a more objective approach to picking travel times. Finally, we will invert for the source history of the Duzce and Izmit earthquakes with the regional data and compare with the inversion results using teleseismic data. Moreover, predictions of the teleseismic data, using the solution from the inversion of regional phases, will be presented.
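
    The "good path" criterion can be sketched as a normalized cross-correlation maximized over time shifts; the waveforms below are synthetic stand-ins for the observed data and Pnl synthetics.

```python
import numpy as np

def max_norm_xcorr(data, synth):
    """Maximum normalized cross-correlation between an observed and a
    synthetic waveform over all time shifts; the maximizing shift is
    what calibrates the path in the study."""
    d = (data - data.mean()) / (data.std() * data.size)
    s = (synth - synth.mean()) / synth.std()
    return np.correlate(d, s, mode="full").max()

# Hypothetical pair: the "synthetic" is a time-shifted, noisy copy of the data
rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 500, endpoint=False)
data = np.sin(2 * np.pi * 0.5 * t)
synth = np.roll(data, 20) + 0.05 * rng.normal(size=t.size)

cc = max_norm_xcorr(data, synth)
good_path = cc > 0.9  # the abstract's "good path" cut
```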

  6. Development of Partially-Coherent Wavefront Propagation Simulation Methods for 3rd and 4th Generation Synchrotron Radiation Sources.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chubar O.; Berman, L; Chu, Y.S.

    2012-04-04

    Partially-coherent wavefront propagation calculations have proven to be feasible and very beneficial in the design of beamlines for 3rd and 4th generation Synchrotron Radiation (SR) sources. These types of calculations use the framework of classical electrodynamics for the description, at the same accuracy level, of the emission by relativistic electrons moving in magnetic fields of accelerators, and of the propagation of the emitted radiation wavefronts through beamline optical elements. This enables accurate prediction of performance characteristics for beamlines exploiting high SR brightness and/or high spectral flux. Detailed analysis of the degree of coherence of the radiation, offered by the partially-coherent wavefront propagation method, is of paramount importance for modern storage-ring based SR sources, which, thanks to extremely small sub-nanometer-level electron beam emittances, produce substantial portions of coherent flux in the X-ray spectral range. We describe the general approach to partially-coherent SR wavefront propagation simulations and present examples of such simulations performed using the 'Synchrotron Radiation Workshop' (SRW) code for the parameters of hard X-ray undulator based beamlines at the National Synchrotron Light Source II (NSLS-II), Brookhaven National Laboratory. These examples illustrate general characteristics of partially-coherent undulator radiation beams in low-emittance SR sources, and demonstrate advantages of applying high-accuracy physical-optics simulations to the optimization and performance prediction of X-ray optical beamlines in these new sources.
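
    The physical-optics kernel at the heart of such codes is free-space propagation of a sampled wavefront. A minimal angular-spectrum sketch in NumPy is shown below; the grid size, wavelength, and drift length are hypothetical, and SRW itself does far more, including summing over electron-beam samples to build up partial coherence.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a sampled 2-D complex field a distance z in free space
    using the angular-spectrum method (a basic physical-optics kernel of
    wavefront-propagation codes); evanescent components are suppressed."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    fxx, fyy = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * fxx) ** 2 - (wavelength * fyy) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

# Hypothetical hard X-ray example: 1 A wavelength, 10 um grid step,
# 100 um Gaussian beam waist, 10 m drift.
n, dx, wl = 256, 10e-6, 1e-10
x = (np.arange(n) - n // 2) * dx
xx, yy = np.meshgrid(x, x)
field = np.exp(-(xx**2 + yy**2) / (2 * (100e-6) ** 2)).astype(complex)
out = angular_spectrum_propagate(field, wl, dx, 10.0)

# Free-space propagation is unitary: total intensity is conserved.
power_in = (np.abs(field) ** 2).sum()
power_out = (np.abs(out) ** 2).sum()
```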

  7. New approach for point pollution source identification in rivers based on the backward probability method.

    PubMed

    Wang, Jiabiao; Zhao, Jianshi; Lei, Xiaohui; Wang, Hao

    2018-06-13

    Pollution risk from the discharge of industrial waste or accidental spills during transportation poses a considerable threat to the security of rivers. The ability to quickly identify the pollution source is extremely important to enable emergency disposal of pollutants. This study proposes a new approach for point source identification of sudden water pollution in rivers, which aims to determine where (source location), when (release time) and how much pollutant (released mass) was introduced into the river. Based on the backward probability method (BPM) and the linear regression model (LR), the proposed LR-BPM converts the ill-posed problem of source identification into an optimization model, which is solved using a Differential Evolution Algorithm (DEA). The decoupled released-mass parameter does not depend on prior information, which improves the identification efficiency. A hypothetical case study with different numbers of pollution sources was conducted to test the proposed approach, and the largest relative errors for identified location, release time, and released mass in all tests were not greater than 10%. Uncertainty in the LR-BPM is mainly due to model equifinality, but averaging the results of repeated tests greatly reduces errors. Furthermore, increasing the number of gauging sections further improves identification results. A real-world case study examines the applicability of the LR-BPM in practice, where it is demonstrated to be more accurate and time-saving than two existing approaches, Bayesian-MCMC and basic DEA. Copyright © 2018 Elsevier Ltd. All rights reserved.
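
    The DEA optimization stage can be sketched on a standard one-dimensional advection-dispersion solution for an instantaneous point source; this is not the paper's LR-BPM formulation, and the river parameters and gauge data below are hypothetical.

```python
import numpy as np
from scipy.optimize import differential_evolution

U, D, A = 0.5, 5.0, 20.0  # hypothetical flow speed (m/s), dispersion, cross-section

def concentration(x, t, x0, t0, mass):
    """1-D advection-dispersion solution for an instantaneous point source."""
    dt = t - t0
    if dt <= 0:
        return 0.0
    return (mass / (A * np.sqrt(4 * np.pi * D * dt))
            * np.exp(-(x - x0 - U * dt) ** 2 / (4 * D * dt)))

# Synthetic observations at two downstream gauges (true source: x0=0, t0=0, M=1000)
obs = [(2000.0, tt, concentration(2000.0, tt, 0.0, 0.0, 1000.0))
       for tt in np.arange(3000.0, 6000.0, 300.0)]
obs += [(4000.0, tt, concentration(4000.0, tt, 0.0, 0.0, 1000.0))
        for tt in np.arange(7000.0, 10000.0, 300.0)]

def misfit(p):
    x0, t0, mass = p
    return sum((concentration(x, t, x0, t0, mass) - c) ** 2 for x, t, c in obs)

res = differential_evolution(
    misfit, bounds=[(-1000, 1000), (-600, 600), (100, 5000)],
    seed=0, maxiter=200, tol=1e-8)
x0_est, t0_est, m_est = res.x  # should land near (0, 0, 1000)
```

With two gauges the location/time trade-off along the advection characteristic is broken by the dispersion term, which is what makes the three parameters jointly identifiable.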

  8. Bayesian approach to analyzing holograms of colloidal particles.

    PubMed

    Dimiduk, Thomas G; Manoharan, Vinothan N

    2016-10-17

    We demonstrate a Bayesian approach to tracking and characterizing colloidal particles from in-line digital holograms. We model the formation of the hologram using Lorenz-Mie theory. We then use a tempered Markov-chain Monte Carlo method to sample the posterior probability distributions of the model parameters: particle position, size, and refractive index. Compared to least-squares fitting, our approach allows us to more easily incorporate prior information about the parameters and to obtain more accurate uncertainties, which are critical for both particle tracking and characterization experiments. Our approach also eliminates the need to supply accurate initial guesses for the parameters, so it requires little tuning.
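
    The approach can be caricatured with a one-parameter toy: a Metropolis sampler drawing the posterior of a particle's height z from a synthetic fringe pattern. The forward model below is an illustrative chirp, not Lorenz-Mie theory, and the chain is initialized near a least-squares-style estimate, as one would in practice.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for the Lorenz-Mie forward model: a radial fringe pattern
# whose spacing encodes the particle height z (purely illustrative).
wavelength = 0.66            # um
r = np.linspace(0.0, 20.0, 200)  # um

def model(z):
    return 1.0 + 0.3 * np.cos(np.pi * r**2 / (wavelength * z))

z_true, sigma = 50.0, 0.02
data = model(z_true) + sigma * rng.normal(size=r.size)

def log_post(z):
    if not 10.0 < z < 200.0:  # flat prior on a plausible range
        return -np.inf
    return -0.5 * np.sum((data - model(z)) ** 2) / sigma**2

# Metropolis random walk over z, started near a rough initial estimate
z, lp = 50.5, log_post(50.5)
samples = []
for _ in range(20000):
    zp = z + 0.05 * rng.normal()
    lpp = log_post(zp)
    if np.log(rng.random()) < lpp - lp:
        z, lp = zp, lpp
    samples.append(z)

post = np.array(samples[5000:])       # discard burn-in
z_mean, z_std = post.mean(), post.std()  # posterior mean and uncertainty
```

The posterior standard deviation is the uncertainty that, per the abstract, least-squares fitting tends to estimate less reliably.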

  9. Measuring Parameters of Massive Black Hole Binaries with Partially Aligned Spins

    NASA Technical Reports Server (NTRS)

    Lang, Ryan N.; Hughes, Scott A.; Cornish, Neil J.

    2011-01-01

    The future space-based gravitational wave detector LISA will be able to measure parameters of coalescing massive black hole binaries, often to extremely high accuracy. Previous work has demonstrated that the black hole spins can have a strong impact on the accuracy of parameter measurement. Relativistic spin-induced precession modulates the waveform in a manner which can break degeneracies between parameters, in principle significantly improving how well they are measured. Recent studies have indicated, however, that spin precession may be weak for an important subset of astrophysical binary black holes: those in which the spins are aligned due to interactions with gas. In this paper, we examine how well a binary's parameters can be measured when its spins are partially aligned and compare results using waveforms that include higher post-Newtonian harmonics to those that are truncated at leading quadrupole order. We find that the weakened precession can substantially degrade parameter estimation, particularly for the "extrinsic" parameters sky position and distance. Absent higher harmonics, LISA typically localizes the sky position of a nearly aligned binary about an order of magnitude less accurately than one for which the spin orientations are random. Our knowledge of a source's sky position will thus be worst for the gas-rich systems which are most likely to produce electromagnetic counterparts. Fortunately, higher harmonics of the waveform can make up for this degradation. By including harmonics beyond the quadrupole in our waveform model, we find that the accuracy with which most of the binary's parameters are measured can be substantially improved. In some cases, the improvement is such that they are measured almost as well as when the binary spins are randomly aligned.

  10. Comparative Analysis of Methods of Evaluating the Lower Ionosphere Parameters by Tweek Atmospherics

    NASA Astrophysics Data System (ADS)

    Krivonos, A. P.; Shvets, A. V.

    2016-12-01

    Purpose: A comparative analysis of the phase and frequency methods for determining the effective Earth-ionosphere waveguide heights for the basic and higher types of normal waves (modes) and the distance to the source of radiation - lightning - has been made by analyzing pulse signals in the ELF-VLF range - tweek-atmospherics (tweeks). Design/methodology/approach: To test the methods in computer simulations, the tweek waveforms were synthesized for an Earth-ionosphere waveguide model with an exponential conductivity profile of the lower ionosphere. The calculations were made for a 20-40 dB signal/noise ratio. Findings: The error of the frequency method in determining the effective height of the waveguide for different waveguide modes was less than 0.5%. The error of the phase method in determining the effective height of the waveguide was less than 0.8%. Errors in determining the distance to the lightning were less than 1% for the phase method and less than 5% for the frequency method for source ranges of 1000-3000 km. Conclusions: The analysis results have shown that the accuracy of the frequency and phase methods is practically the same within distances of 1000-3000 km. For distances less than 1000 km, the phase method gives a more accurate evaluation of the range, so a combination of the two methods can be used to improve estimates of the tweek's propagation path parameters.
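
    The frequency method rests on the waveguide cutoff relation f_n = n·c/(2h), so the effective height follows directly from an observed mode cutoff; the 1.7 kHz value below is an illustrative nighttime figure, not from the paper.

```python
C = 299_792_458.0  # speed of light, m/s

def effective_height_km(cutoff_hz: float, mode: int = 1) -> float:
    """Effective Earth-ionosphere waveguide height from the observed
    cutoff frequency of waveguide mode n: f_n = n * c / (2 * h)."""
    return mode * C / (2.0 * cutoff_hz) / 1000.0

# A first-mode cutoff near 1.7 kHz gives a height of roughly 88 km
h = effective_height_km(1700.0)
```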

  11. Transport of Cryptosporidium, Giardia, Source-specific Indicator Organisms, and Standard Water Quality Constituents During Storm Events

    NASA Astrophysics Data System (ADS)

    Sturdevant-Rees, P. L.; Bourdeau, D.; Baker, R.; Long, S. C.; Barten, P. K.

    2004-05-01

    Microbial and water-quality measurements are collected during storm events under a variety of meteorological and land-use conditions in order to 1) identify the risk of Cryptosporidium oocysts, Giardia cysts and other constituents, including microbial indicator organisms, entering surface waters from various land uses during periods of surface runoff; 2) optimize storm sampling procedures for these parameters; and 3) optimize strategies for accurate determination of constituent loads. The investigation is focused on four isolated land uses: forested with free-ranging wildlife, beaver-influenced forested with free-ranging wildlife, residential/commercial, and dairy farm grazing/pastureland, using an upstream and downstream sampling strategy. Traditional water-quality analyses include pH, temperature, turbidity, conductivity, total suspended solids, total phosphorus, total Kjeldahl nitrogen, and ammonia nitrogen, as well as Giardia cysts and Cryptosporidium oocysts. Total coliforms and fecal coliforms are measured as industry-standard microbial analyses. Sorbitol-fermenting Bifidobacteria, Rhodococcus coprophilus, Clostridium perfringens spores, and somatic and F-specific coliphages are measured at select sites as potential alternative source-specific indicator organisms. Upon completion of the project, the final database will consist of wet-weather transport data for a set of parameters during twenty-four distinct storm events in addition to monthly baseline data. A subset of the results to date will be presented, with focus placed on demonstrating the impact of beaver on constituent loadings over a variety of hydrologic and meteorological conditions.

  12. Multi-parameter Nonlinear Gain Correction of X-ray Transition Edge Sensors for the X-ray Integral Field Unit

    NASA Astrophysics Data System (ADS)

    Cucchetti, E.; Eckart, M. E.; Peille, P.; Porter, F. S.; Pajot, F.; Pointecouteau, E.

    2018-04-01

    With its array of 3840 Transition Edge Sensors (TESs), the Athena X-ray Integral Field Unit (X-IFU) will provide spatially resolved high-resolution spectroscopy (2.5 eV up to 7 keV) from 0.2 to 12 keV, with an absolute energy scale accuracy of 0.4 eV. Slight changes in the TES operating environment can cause significant variations in its energy response function, which may result in systematic errors in the absolute energy scale. We plan to monitor such changes at pixel level via onboard X-ray calibration sources and correct the energy scale accordingly using a linear or quadratic interpolation of gain curves obtained during ground calibration. However, this may not be sufficient to meet the 0.4 eV accuracy required for the X-IFU. In this contribution, we introduce a new two-parameter gain correction technique, based on both the pulse-height estimate of a fiducial line and the baseline value of the pixels. Using gain functions that simulate ground calibration data, we show that this technique can accurately correct deviations in detector gain due to changes in TES operating conditions such as heat sink temperature, bias voltage, thermal radiation loading and linear amplifier gain. We also address potential optimisations of the onboard calibration source and compare the performance of this new technique with those previously used.
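
    The two-parameter correction amounts to a lookup in a ground-calibration table indexed by the fiducial pulse-height and the pixel baseline; a sketch using bilinear interpolation over a synthetic table follows. The grid values and coefficients are hypothetical, not X-IFU calibration data.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical ground-calibration table: energy-scale correction (eV)
# tabulated on a grid of fiducial pulse-height and baseline values.
pulse_height = np.linspace(0.95, 1.05, 11)  # relative to nominal
baseline = np.linspace(-0.02, 0.02, 9)      # relative drift
ph, bl = np.meshgrid(pulse_height, baseline, indexing="ij")
correction_ev = 400.0 * (ph - 1.0) + 150.0 * bl  # synthetic linear table

gain_correction = RegularGridInterpolator((pulse_height, baseline),
                                          correction_ev)

# In flight: look up the correction for the currently measured pair
corr = float(gain_correction([[1.012, 0.004]])[0])  # 5.4 eV for this table
```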

  13. Evaluation of confidence intervals for a steady-state leaky aquifer model

    USGS Publications Warehouse

    Christensen, S.; Cooley, R.L.

    1999-01-01

    The fact that dependent variables of groundwater models are generally nonlinear functions of model parameters is shown to be a potentially significant factor in calculating accurate confidence intervals for both model parameters and functions of the parameters, such as the values of dependent variables calculated by the model. The Lagrangian method of Vecchia and Cooley [Vecchia, A.V. and Cooley, R.L., Water Resources Research, 1987, 23(7), 1237-1250] was used to calculate nonlinear Scheffe-type confidence intervals for the parameters and the simulated heads of a steady-state groundwater flow model covering 450 km2 of a leaky aquifer. The nonlinear confidence intervals are compared to corresponding linear intervals. As suggested by the significant nonlinearity of the regression model, linear confidence intervals are often not accurate. The commonly made assumption that widths of linear confidence intervals always underestimate the actual (nonlinear) widths was not correct. Results show that nonlinear effects can cause the nonlinear intervals to be asymmetric and either larger or smaller than the linear approximations. Prior information on transmissivities helps reduce the size of the confidence intervals, with the most notable effects occurring for the parameters on which there is prior information and for head values in parameter zones for which there is prior information on the parameters.
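
    The linear-versus-nonlinear contrast can be reproduced on a toy nonlinear regression: a Wald (linearized) interval from the local Jacobian versus a likelihood-ratio interval traced over a parameter grid. The model and noise level are illustrative stand-ins, not the groundwater model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy nonlinear model y = exp(-k x) + noise, with known noise sigma
x = np.linspace(0.0, 4.0, 15)
k_true, sigma = 0.8, 0.05
y = np.exp(-k_true * x) + sigma * rng.normal(size=x.size)

def sse(k):
    return np.sum((y - np.exp(-k * x)) ** 2)

ks = np.linspace(0.3, 1.5, 2001)
sses = np.array([sse(k) for k in ks])
k_hat = ks[sses.argmin()]  # grid-search least-squares estimate

# Linear (Wald) 95% interval from the local Jacobian at the estimate
J = -x * np.exp(-k_hat * x)  # d(model)/dk
var_k = sigma**2 / np.sum(J**2)
lin = (k_hat - 1.96 * np.sqrt(var_k), k_hat + 1.96 * np.sqrt(var_k))

# Nonlinear interval: all k whose SSE stays within the chi-square cut
cut = sses.min() + 3.84 * sigma**2  # 95% likelihood-ratio threshold
inside = ks[sses <= cut]
nonlin = (inside.min(), inside.max())
```

For strongly nonlinear models the second interval can be asymmetric about the estimate and wider or narrower than the first, echoing the abstract's finding.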

  14. A new powerful parameterization tool for managing groundwater resources and predicting land subsidence in Las Vegas Valley

    NASA Astrophysics Data System (ADS)

    Zhang, M.; Nunes, V. D.; Burbey, T. J.; Borggaard, J.

    2012-12-01

    More than 1.5 m of subsidence has been observed in Las Vegas Valley since 1935 as a result of groundwater pumping that commenced in 1905 (Bell, 2002). The compaction of the aquifer system has led to several large subsidence bowls and deleterious earth fissures. The highly heterogeneous aquifer system with its variably thick interbeds makes predicting the magnitude and location of subsidence extremely difficult. Several numerical groundwater flow models of the Las Vegas basin have been developed previously; however, none of them has been able to accurately simulate the observed subsidence patterns or magnitudes because of inadequate parameterization. To better manage groundwater resources and predict future subsidence, we have updated and developed a more accurate groundwater management model for Las Vegas Valley by developing a new adjoint parameter estimation (APE) package that is used in conjunction with UCODE along with MODFLOW and the SUB (subsidence) and HFB (horizontal flow barrier) packages. The APE package is used with UCODE to automatically identify suitable parameter zonations and inversely calculate parameter values from hydraulic head and subsidence measurements, which are highly sensitive to both elastic (Ske) and inelastic (Skv) storage coefficients. With the advent of InSAR (interferometric synthetic aperture radar), distributed spatial and temporal subsidence measurements can be obtained, which greatly enhance the accuracy of parameter estimation. This automated process removes user bias and provides a far more accurate and robust parameter zonation distribution. The outcome of this work is the most accurate and powerful tool to date for managing groundwater resources in Las Vegas Valley.

  15. Analysis of several methods and inertial sensors locations to assess gait parameters in able-bodied subjects.

    PubMed

    Ben Mansour, Khaireddine; Rezzoug, Nasser; Gorce, Philippe

    2015-10-01

    The purpose of this paper was to determine which types of inertial sensors and which advocated locations should be used for reliable and accurate gait event detection and temporal parameter assessment in normal adults. In addition, we aimed to remove the ambiguity found in the literature in the definition of the initial contact (IC) from the lumbar accelerometer. Acceleration and angular velocity data were gathered from the lumbar region and the distal edge of each shank. These data were evaluated against an instrumented treadmill and an optoelectronic system during five treadmill speed sessions. The lumbar accelerometer showed that the peak of the anteroposterior component was the most accurate for IC detection. Similarly, the valley that followed the peak of the vertical component was the most precise for terminal contact (TC) detection. Results based on ANOVA and Tukey tests showed that the set of inertial methods was suitable for temporal gait assessment and gait event detection in able-bodied subjects. For gait event detection, an exception was found with the shank accelerometer. The tool was suitable for temporal parameter assessment, despite the high root mean square error in the detection of IC (RMSEIC) and TC (RMSETC). The shank gyroscope was found to be as accurate as the kinematic method, since the statistical tests revealed no significant difference between the two techniques for the RMSE of all gait events and temporal parameters. The lumbar and shank accelerometers were the most accurate alternatives to the shank gyroscope for gait event detection and temporal parameter assessment, respectively. Copyright © 2015. Published by Elsevier B.V.
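
    The IC-detection rule the study favors, peaks of the anteroposterior lumbar acceleration, can be sketched with a simple peak picker; the signal below is a synthetic stand-in, and the sampling rate, step frequency, and thresholds are assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

fs = 100.0  # Hz, hypothetical sampling rate
t = np.arange(0.0, 10.0, 1.0 / fs)

# Synthetic anteroposterior lumbar acceleration: one dominant peak per step
step_freq = 1.8  # steps per second
ap_acc = (np.sin(2 * np.pi * step_freq * t)
          + 0.1 * np.random.default_rng(4).normal(size=t.size))

# Initial contact (IC) events: peaks of the anteroposterior component,
# with a refractory distance to avoid double-counting within a step
ic_idx, _ = find_peaks(ap_acc, height=0.5, distance=int(0.4 * fs))

# A temporal gait parameter: step time from successive IC events
step_times = np.diff(ic_idx) / fs  # ~0.56 s here
```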

  16. Hyoid bone development: An assessment of optimal CT scanner parameters and 3D volume rendering techniques

    PubMed Central

    Cotter, Meghan M.; Whyms, Brian J.; Kelly, Michael P.; Doherty, Benjamin M.; Gentry, Lindell R.; Bersu, Edward T.; Vorperian, Houri K.

    2015-01-01

    The hyoid bone anchors and supports the vocal tract. Its complex shape is best studied in three dimensions, but it is difficult to capture on computed tomography (CT) images and three-dimensional volume renderings. The goal of this study was to determine the optimal CT scanning and rendering parameters to accurately measure the growth and developmental anatomy of the hyoid and to determine whether it is feasible and necessary to use these parameters in the measurement of hyoids from in vivo CT scans. Direct linear and volumetric measurements of skeletonized hyoid bone specimens were compared to corresponding CT images to determine the most accurate scanning parameters and three-dimensional rendering techniques. A pilot study was undertaken using in vivo scans from a retrospective CT database to determine feasibility of quantifying hyoid growth. Scanning parameters and rendering technique affected accuracy of measurements. Most linear CT measurements were within 10% of direct measurements; however, volume was overestimated when CT scans were acquired with a slice thickness greater than 1.25 mm. Slice-by-slice thresholding of hyoid images decreased volume overestimation. The pilot study revealed that the linear measurements tested correlate with age. A fine-tuned rendering approach applied to small slice thickness CT scans produces the most accurate measurements of hyoid bones. However, linear measurements can be accurately assessed from in vivo CT scans at a larger slice thickness. Such findings imply that investigation into the growth and development of the hyoid bone, and the vocal tract as a whole, can now be performed using these techniques. PMID:25810349

  17. Hyoid Bone Development: An Assessment Of Optimal CT Scanner Parameters and Three-Dimensional Volume Rendering Techniques.

    PubMed

    Cotter, Meghan M; Whyms, Brian J; Kelly, Michael P; Doherty, Benjamin M; Gentry, Lindell R; Bersu, Edward T; Vorperian, Houri K

    2015-08-01

    The hyoid bone anchors and supports the vocal tract. Its complex shape is best studied in three dimensions, but it is difficult to capture on computed tomography (CT) images and three-dimensional volume renderings. The goal of this study was to determine the optimal CT scanning and rendering parameters to accurately measure the growth and developmental anatomy of the hyoid and to determine whether it is feasible and necessary to use these parameters in the measurement of hyoids from in vivo CT scans. Direct linear and volumetric measurements of skeletonized hyoid bone specimens were compared with corresponding CT images to determine the most accurate scanning parameters and three-dimensional rendering techniques. A pilot study was undertaken using in vivo scans from a retrospective CT database to determine feasibility of quantifying hyoid growth. Scanning parameters and rendering technique affected accuracy of measurements. Most linear CT measurements were within 10% of direct measurements; however, volume was overestimated when CT scans were acquired with a slice thickness greater than 1.25 mm. Slice-by-slice thresholding of hyoid images decreased volume overestimation. The pilot study revealed that the linear measurements tested correlate with age. A fine-tuned rendering approach applied to small slice thickness CT scans produces the most accurate measurements of hyoid bones. However, linear measurements can be accurately assessed from in vivo CT scans at a larger slice thickness. Such findings imply that investigation into the growth and development of the hyoid bone, and the vocal tract as a whole, can now be performed using these techniques. © 2015 Wiley Periodicals, Inc.

  18. Validation of Bayesian analysis of compartmental kinetic models in medical imaging.

    PubMed

    Sitek, Arkadiusz; Li, Quanzheng; El Fakhri, Georges; Alpert, Nathaniel M

    2016-10-01

    Kinetic compartmental analysis is frequently used to compute physiologically relevant quantitative values from time series of images. In this paper, a new approach based on Bayesian analysis to obtain information about these parameters is presented and validated. The closed form of the posterior distribution of kinetic parameters is derived with a hierarchical prior to model the standard deviation of normally distributed noise. Markov chain Monte Carlo methods are used for numerical estimation of the posterior distribution. Computer simulations of the kinetics of F18-fluorodeoxyglucose (FDG) are used to demonstrate drawing statistical inferences about kinetic parameters and to validate the theory and implementation. Additionally, point estimates of kinetic parameters and the covariance of those estimates are determined using the classical nonlinear least squares approach. Posteriors obtained using the methods proposed in this work are accurate, as no significant deviation from the expected shape of the posterior was found (one-sided P>0.08). It is demonstrated that the results obtained by the standard nonlinear least-squares methods fail to provide accurate estimation of uncertainty for the same data set (P<0.0001). The results of this work validate the new methods using computer simulations of FDG kinetics. The results show that in situations where the classical approach fails to estimate uncertainty accurately, Bayesian estimation provides accurate information about the uncertainties in the parameters. Although a particular example of FDG kinetics was used in the paper, the methods can be extended to different pharmaceuticals and imaging modalities. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
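
    The contrast the paper draws can be illustrated with a one-tissue compartment model whose posterior is evaluated on a parameter grid (a simple alternative to the paper's MCMC): the normalized posterior carries full uncertainty information, unlike the local covariance of a least-squares fit. The model, input function, and noise level below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0.0, 60.0, 61)   # minutes
cp = t * np.exp(-t / 10.0)       # hypothetical plasma input function

def tissue_curve(k1, k2):
    """One-tissue compartment model: C_t = K1 * exp(-k2 t) convolved with C_p."""
    dt = t[1] - t[0]
    return k1 * np.convolve(np.exp(-k2 * t), cp)[: t.size] * dt

k1_true, k2_true, sigma = 0.1, 0.15, 0.05
data = tissue_curve(k1_true, k2_true) + sigma * rng.normal(size=t.size)

# Posterior on a grid (flat priors, known noise sigma)
k1g = np.linspace(0.05, 0.15, 101)
k2g = np.linspace(0.05, 0.30, 101)
logp = np.array([[-0.5 * np.sum((data - tissue_curve(a, b)) ** 2) / sigma**2
                  for b in k2g] for a in k1g])
post = np.exp(logp - logp.max())
post /= post.sum()

# Posterior means from the marginals; the marginal spreads give the
# uncertainty that a local least-squares covariance may misstate.
k1_mean = (post.sum(axis=1) * k1g).sum()
k2_mean = (post.sum(axis=0) * k2g).sum()
```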

  19. Quantifying Transmission Heterogeneity Using Both Pathogen Phylogenies and Incidence Time Series

    PubMed Central

    Li, Lucy M.; Grassly, Nicholas C.; Fraser, Christophe

    2017-01-01

    Heterogeneity in individual-level transmissibility can be quantified by the dispersion parameter k of the offspring distribution. Quantifying heterogeneity is important because it affects other parameter estimates, modulates the degree of unpredictability of an epidemic, and needs to be accounted for in models of infection control. Aggregated data such as incidence time series are often not sufficiently informative to estimate k. Incorporating phylogenetic analysis can help to estimate k concurrently with other epidemiological parameters. We have developed an inference framework that uses particle Markov chain Monte Carlo to estimate k and other epidemiological parameters using both incidence time series and the pathogen phylogeny. Using the framework to fit a modified compartmental transmission model that includes the parameter k to simulated data, we found that more accurate and less biased estimates of the reproductive number were obtained by combining epidemiological and phylogenetic analyses. However, k was most accurately estimated using the pathogen phylogeny alone. Accurately estimating k was necessary for unbiased estimates of the reproductive number, but it did not affect the accuracy of reporting probability and epidemic start date estimates. We further demonstrated that inference was possible in the presence of phylogenetic uncertainty by sampling from the posterior distribution of phylogenies. Finally, we used the inference framework to estimate transmission parameters from epidemiological and genetic data collected during a poliovirus outbreak. Despite the large degree of phylogenetic uncertainty, we demonstrated that incorporating phylogenetic data in parameter inference improved the accuracy and precision of estimates. PMID:28981709
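
    As a minimal numerical illustration of what the dispersion parameter k encodes (not the paper's particle MCMC framework), one can simulate a negative-binomial offspring distribution and recover k by the method of moments; R0, k, and the sample size below are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Offspring distribution: negative binomial with mean R0 and dispersion k
# (invented values). numpy's parameterization: n = k, p = k / (k + R0).
R0, k = 2.0, 0.5
offspring = rng.negative_binomial(k, k / (k + R0), size=50000)

# Method-of-moments recovery: var = R0 + R0**2 / k  =>  k = m**2 / (v - m)
m, v = offspring.mean(), offspring.var()
R0_hat = m
k_hat = m**2 / (v - m)
```

    Smaller k means more heterogeneity: most individuals infect no one while a few infect many, which is why k shapes both unpredictability and control strategy.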

  20. FECAL POLLUTION, PUBLIC HEALTH AND MICROBIAL SOURCE TRACKING

    EPA Science Inventory

    Microbial source tracking (MST) seeks to provide information about sources of fecal water contamination. Without knowledge of sources, it is difficult to accurately model risk assessments, choose effective remediation strategies, or bring chronically polluted waters into complian...

  1. Evaluating radiative transfer schemes treatment of vegetation canopy architecture in land surface models

    NASA Astrophysics Data System (ADS)

    Braghiere, Renato; Quaife, Tristan; Black, Emily

    2016-04-01

    Incoming shortwave radiation is the primary source of energy driving the majority of the Earth's climate system. The partitioning of shortwave radiation by vegetation into absorbed, reflected, and transmitted terms is important for most biogeophysical processes, including leaf temperature changes and photosynthesis, and it is currently calculated by most land surface schemes (LSS) of climate and/or numerical weather prediction models. The most commonly used radiative transfer scheme in LSS is the two-stream approximation; however, it does not explicitly account for vegetation architectural effects on shortwave radiation partitioning. Detailed three-dimensional (3D) canopy radiative transfer schemes have been developed, but they are too computationally expensive for large-scale studies over long time periods. Using a straightforward one-dimensional (1D) parameterisation proposed by Pinty et al. (2006), we modified a two-stream radiative transfer scheme to include a simple function of Sun zenith angle, the so-called "structure factor", which does not require an explicit description of the complex phenomena arising from heterogeneous vegetation architecture, yet guarantees radiative-balance simulations consistent with 3D representations. To evaluate the ability of the proposed parameterisation to accurately represent the radiative balance of more complex 3D schemes, we compared the modified two-stream approximation with the "structure factor" parameterisation against state-of-the-art 3D radiative transfer schemes, following a set of virtual scenarios described in the RAMI4PILPS experiment. These experiments evaluate the radiative balance of several models under perfectly controlled conditions, eliminating the uncertainties that arise from incomplete or erroneous knowledge of the structural, spectral, and illumination-related canopy characteristics typical of model comparisons with in-situ observations. The structure factor parameters were obtained for each canopy structure by inversion against the direct and diffuse fractions of absorbed photosynthetically active radiation (fAPAR) and the PAR albedo. Overall, the modified two-stream approximation showed consistently good agreement with the RAMI4PILPS reference values under direct and diffuse illumination conditions. It is an efficient and accurate tool for deriving PAR absorptance and reflectance in scenarios with different canopy densities, leaf densities, and soil background albedos, particularly for brighter (e.g., snow-covered) backgrounds. The major difficulty in applying it to the real world is acquiring the parameterisation's parameters from in-situ observations. Deriving the parameters from Digital Hemispherical Photographs (DHP) is highly promising at the forest-stand scale: DHP provide a permanent record and are a valuable source of information on the position, size, density, and distribution of canopy gaps. The modified two-stream approximation parameters were derived from gap probability data extracted from DHP obtained in a woody savannah in California, USA. Values of fAPAR and PAR albedo were evaluated against a tree-based vegetation canopy model, MAESPA, which used airborne LiDAR data to define individual-tree locations and to extract structural information such as tree height and crown diameter. The parameterisation improved the performance of the two-stream approximation, enabling it to achieve results comparable to complex 3D model calculations under observed conditions.
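
    A minimal sketch of the structure-factor idea follows, under the strong simplifying assumption that direct (Beer-Lambert) transmission with a spherical leaf angle distribution stands in for the full two-stream scheme; all numbers are illustrative:

```python
import numpy as np

# A scalar "structure factor" zeta rescales the true LAI to an effective
# LAI so a 1D scheme reproduces the direct transmission of a clumped 3D
# canopy. G = 0.5 is the spherical leaf projection coefficient.
def direct_transmission(lai, sza_deg, zeta=1.0, G=0.5):
    mu = np.cos(np.radians(sza_deg))
    return np.exp(-zeta * G * lai / mu)

lai, sza = 3.0, 30.0
t_homog = direct_transmission(lai, sza)             # homogeneous canopy
t_clump = direct_transmission(lai, sza, zeta=0.6)   # clumped canopy

# Inverting zeta from an "observed" gap fraction, analogous to the
# inversion against fAPAR/albedo or gap-probability data in the abstract:
mu = np.cos(np.radians(sza))
zeta_hat = -mu * np.log(t_clump) / (0.5 * lai)
```

    A clumped canopy (zeta < 1) transmits more direct light than a homogeneous canopy with the same true LAI, which is the 3D effect the parameterisation captures.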

  2. Relationship between the Prediction Accuracy of Tsunami Inundation and Relative Distribution of Tsunami Source and Observation Arrays: A Case Study in Tokyo Bay

    NASA Astrophysics Data System (ADS)

    Takagawa, T.

    2017-12-01

    A rapid and precise tsunami forecast based on offshore monitoring is attracting attention as a means to reduce human losses from devastating tsunami inundation. We developed a forecast method that combines hierarchical Bayesian inversion against a pre-computed database with rapid post-computation of tsunami inundation. The method was applied to Tokyo Bay to evaluate the efficiency of observation arrays against three tsunamigenic earthquakes: a scenario earthquake at the Nankai trough and the two historical earthquakes of Genroku (1703) and Enpo (1677). In general, a rich observation array near the tsunami source improves both the accuracy and the rapidness of tsunami forecasts. To examine the effect of observation time length, we used four data lengths of 5, 10, 20, and 45 minutes after earthquake occurrence. Prediction accuracy was evaluated against the simulated tsunami inundation areas around Tokyo Bay for the target earthquakes. The shortest time length that yields an accurate prediction varied with the target earthquake; here, an accurate prediction is one in which the simulated values fall within the 95% credible intervals of the prediction. For the Enpo earthquake, 5 minutes of observation is enough for an accurate prediction in Tokyo Bay, whereas 10 and 45 minutes are needed for the Nankai trough and Genroku earthquakes, respectively. The difference in the shortest time length for accurate prediction is strongly related to the relative distance between the tsunami source and the observation arrays. In the Enpo case, offshore tsunami observation points are densely distributed even within the source region, so an accurate prediction can be achieved within 5 minutes; such rapid, precise prediction is useful for early warnings. Even in the worst case of Genroku, where fewer observation points are available near the source, an accurate prediction can be obtained within 45 minutes. This information can be useful for grasping the outline of the hazard at an early stage of the response.
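
    In its simplest linear-Gaussian form, inversion against a pre-computed database reduces to a closed-form posterior with credible intervals of the kind used above. The sketch below is a generic illustration with invented sizes and noise levels, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Generic linear-Gaussian inversion sketch: each column of G is the
# waveform a unit source would produce at the offshore gauges, m is the
# source amplitude vector, d the observed records (all values invented).
n_obs, n_src = 200, 5
G = rng.normal(size=(n_obs, n_src))
m_true = np.array([1.0, 0.0, 2.0, 0.5, 0.0])
sigma, tau = 0.1, 10.0                       # noise std, prior std
d = G @ m_true + rng.normal(0.0, sigma, n_obs)

# Closed-form Gaussian posterior mean and covariance
P = np.linalg.inv(G.T @ G / sigma**2 + np.eye(n_src) / tau**2)
m_post = P @ (G.T @ d) / sigma**2
ci95 = 1.96 * np.sqrt(np.diag(P))

# "Accurate" in the abstract's sense: truth falls inside the 95% CI
inside = np.abs(m_post - m_true) <= ci95
```

    More observations (a richer or closer array, or a longer record) shrink P and hence the credible intervals, which is the mechanism behind the array-dependence reported above.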

  3. SoundCompass: A Distributed MEMS Microphone Array-Based Sensor for Sound Source Localization

    PubMed Central

    Tiete, Jelmer; Domínguez, Federico; da Silva, Bruno; Segers, Laurent; Steenhaut, Kris; Touhafi, Abdellah

    2014-01-01

    Sound source localization is a well-researched subject with applications ranging from localizing sniper fire in urban battlefields to cataloging wildlife in rural areas. One critical application is the localization of noise pollution sources in urban environments, due to an increasing body of evidence linking noise pollution to adverse effects on human health. Current noise mapping techniques often fail to accurately identify noise pollution sources, because they rely on the interpolation of a limited number of scattered sound sensors. Aiming to produce accurate noise pollution maps, we developed the SoundCompass, a low-cost sound sensor capable of measuring local noise levels and sound field directionality. Our first prototype is composed of a sensor array of 52 microelectromechanical system (MEMS) microphones, an inertial measurement unit and a low-power field-programmable gate array (FPGA). This article presents the SoundCompass's hardware and firmware design together with a data fusion technique that exploits the sensing capabilities of the SoundCompass in a wireless sensor network to localize noise pollution sources. Live tests produced a sound source localization accuracy of a few centimeters in a 25 m² anechoic chamber, while simulation results accurately located up to five broadband sound sources in a 10,000 m² open field. PMID:24463431
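
    A toy version of array-based sound source localization (a narrowband delay-and-sum beamformer steered over candidate bearings) can illustrate the principle; the array geometry, frequency, and noise level are invented and far simpler than the SoundCompass firmware:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy far-field delay-and-sum beamformer on a circular microphone array
c, fs, f0 = 343.0, 48000.0, 2000.0          # m/s, sample rate, tone freq
n_mics, radius = 8, 0.1
ang = 2.0 * np.pi * np.arange(n_mics) / n_mics
mic_xy = radius * np.c_[np.cos(ang), np.sin(ang)]

true_bearing = np.radians(60.0)
u = np.array([np.cos(true_bearing), np.sin(true_bearing)])
t = np.arange(1024) / fs
# Each mic hears the tone with a geometry-dependent delay, plus noise
sig = np.stack([np.sin(2 * np.pi * f0 * (t - mic_xy[m] @ u / c))
                for m in range(n_mics)])
sig += 0.1 * rng.normal(size=sig.shape)

def steered_power(bearing):
    v = np.array([np.cos(bearing), np.sin(bearing)])
    delays = mic_xy @ v / c
    S = np.fft.rfft(sig, axis=1)
    freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
    # Compensate each mic's delay with a phase shift, then sum coherently
    y = np.fft.irfft((S * np.exp(2j * np.pi * freqs * delays[:, None]))
                     .sum(axis=0), t.size)
    return float(np.mean(y * y))

grid = np.radians(np.arange(0.0, 360.0, 2.0))
est = grid[int(np.argmax([steered_power(b) for b in grid]))]
```

    The actual device computes sound-field directionality in firmware over many more microphones; this sketch only illustrates the steering principle.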

  4. Evaluation of deep moonquake source parameters: Implication for fault characteristics and thermal state

    NASA Astrophysics Data System (ADS)

    Kawamura, Taichi; Lognonné, Philippe; Nishikawa, Yasuhiro; Tanaka, Satoshi

    2017-07-01

    While deep moonquakes are seismic events commonly observed on the Moon, their source mechanism is still unexplained. The two main issues are poorly constrained source parameters and incompatibilities between the thermal profiles suggested by many studies and the apparent need for brittle properties at these depths. In this study, we reinvestigated the deep moonquake data to reestimate their source parameters and uncover the characteristics of deep moonquake faults that differ from those of terrestrial faults. We first improve the estimation of source parameters through spectral analysis using "new" broadband seismic records made by combining those of the Apollo long- and short-period seismometers. We use the broader frequency band of the combined spectra to estimate corner frequencies and the DC levels of the spectra, which are important parameters for constraining the source parameters. We further use these spectral features to estimate seismic moments and stress drops for more than 100 deep moonquake events from three different source regions. This analysis reveals that deep moonquake faults are extremely smooth compared to terrestrial faults. Second, we reevaluate the brittle-ductile transition temperature that is consistent with the obtained source parameters. We show that the source parameters imply that tidal stress is the main source of the stress glut causing deep moonquakes and that the large tidal strain rate raises the brittle-ductile transition temperature. A higher transition temperature opens a new possibility of constructing a thermal model that is consistent with deep moonquake occurrence and pressure conditions, thereby improving our understanding of the deep moonquake source mechanism.
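
    Estimating a corner frequency and spectral DC level, as described above, is commonly done by fitting an omega-square source model to the displacement spectrum. The grid-search sketch below uses synthetic data and illustrative values, not the Apollo records:

```python
import numpy as np

rng = np.random.default_rng(4)

# Fit |S(f)| = Omega0 / (1 + (f/fc)**gamma), gamma = 2 (omega-square
# model), to a synthetic source spectrum by grid search over fc.
f = np.logspace(-1, 1.3, 200)                    # 0.1 - 20 Hz
Omega0_true, fc_true, gamma = 1e-4, 2.5, 2.0
spec = Omega0_true / (1 + (f / fc_true)**gamma)
spec *= np.exp(rng.normal(0.0, 0.1, f.size))     # multiplicative noise

best = None
for fc in np.linspace(0.5, 10.0, 400):
    shape = 1.0 / (1 + (f / fc)**gamma)
    # Log-domain least squares gives Omega0 in closed form for each fc
    logO = np.mean(np.log(spec) - np.log(shape))
    misfit = np.sum((np.log(spec) - logO - np.log(shape))**2)
    if best is None or misfit < best[0]:
        best = (misfit, np.exp(logO), fc)
_, Omega0_hat, fc_hat = best
```

    The fitted DC level is then converted to seismic moment via the standard relation M0 ∝ ρβ³RΩ0 once density, shear velocity, and source distance are known; stress drop follows from M0 and the corner frequency.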

  5. Investigation of the Surface Stress in SiC and Diamond Nanocrystals by In-situ High Pressure Powder Diffraction Technique

    NASA Technical Reports Server (NTRS)

    Palosz, B.; Stelmakh, S.; Grzanka, E.; Gierlotka, S.; Zhao, Y.; Palosz, W.

    2003-01-01

    The real atomic structure of nanocrystals determines key properties of these materials. A serious experimental problem lies in obtaining sufficiently accurate measurements of the structural parameters of the crystals, since very small crystals constitute a two-phase rather than a uniform crystallographic system. As a result, the elastic properties of nanograins may be expected to reflect the dual nature of their structure, with a corresponding set of different elastic-property parameters. We studied those properties by the in-situ high-pressure powder diffraction technique. For nanocrystalline materials, even single-phase ones, such measurements are particularly difficult to make, since determination of the lattice parameters of very small crystals presents a challenge due to inherent limitations of the standard elaboration of powder diffractograms. In this investigation we used our methodology of structural analysis, the 'apparent lattice parameter' (alp) concept. This methodology allowed us to avoid the traps of standard powder diffraction evaluation techniques when they are applied to nanocrystals. The experiments were performed on nanocrystalline SiC and GaN powders using synchrotron sources. We applied both hydrostatic and isostatic pressures in the range of up to 40 GPa. The elastic properties of the samples were examined based on measurements of the change of the lattice parameters with pressure. The results show a dual nature of the mechanical properties (compressibilities) of the materials, indicating a complex core-shell structure of the grains.

  6. An improved estimator for the hydration of fat-free mass from in vivo measurements subject to additive technical errors.

    PubMed

    Kinnamon, Daniel D; Lipsitz, Stuart R; Ludwig, David A; Lipshultz, Steven E; Miller, Tracie L

    2010-04-01

    The hydration of fat-free mass, or hydration fraction (HF), is often defined as a constant body composition parameter in a two-compartment model and then estimated from in vivo measurements. We showed that the widely used estimator for the HF parameter in this model, the mean of the ratios of measured total body water (TBW) to fat-free mass (FFM) in individual subjects, can be inaccurate in the presence of additive technical errors. We then proposed a new instrumental variables estimator that accurately estimates the HF parameter in the presence of such errors. In Monte Carlo simulations, the mean of the ratios of TBW to FFM was an inaccurate estimator of the HF parameter, and inferences based on it had actual type I error rates more than 13 times the nominal 0.05 level under certain conditions. The instrumental variables estimator was accurate and maintained an actual type I error rate close to the nominal level in all simulations. When estimating and performing inference on the HF parameter, the proposed instrumental variables estimator should yield accurate estimates and correct inferences in the presence of additive technical errors, but the mean of the ratios of TBW to FFM in individual subjects may not.
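
    The bias the authors describe, and why an instrumental-variables estimator avoids it, can be reproduced in a short simulation. The instrument, sample size, and error magnitudes below are invented for illustration and are not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulation sketch (invented numbers): true model TBW = HF * FFM, with
# additive technical errors on both measurements. Z is an instrument
# correlated with true FFM but independent of the measurement errors.
HF = 0.732
n = 100000
Z = rng.uniform(30.0, 70.0, n)               # e.g. a body-size proxy
FFM = Z + rng.normal(0.0, 2.0, n)            # true fat-free mass, kg
TBW = HF * FFM                               # true total body water, kg
ffm_meas = FFM + rng.normal(0.0, 5.0, n)     # additive technical errors
tbw_meas = TBW + rng.normal(0.0, 2.0, n)

mean_of_ratios = float(np.mean(tbw_meas / ffm_meas))            # biased
iv_est = np.cov(Z, tbw_meas)[0, 1] / np.cov(Z, ffm_meas)[0, 1]  # consistent
```

    Error in the denominator inflates the mean of individual ratios, while the instrumental-variables slope is unaffected because Z is uncorrelated with both error terms.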

  7. Under-sampling trajectory design for compressed sensing based DCE-MRI.

    PubMed

    Liu, Duan-duan; Liang, Dong; Zhang, Na; Liu, Xin; Zhang, Yuan-ting

    2013-01-01

    Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) needs high temporal and spatial resolution to accurately estimate quantitative parameters and characterize tumor vasculature. Compressed sensing (CS) has the potential to achieve both. However, the randomness of a CS under-sampling trajectory designed with the traditional variable-density (VD) scheme may translate into uncertainty in kinetic parameter estimation when high reduction factors are used. Accurate parameter estimation with the VD scheme therefore usually requires multiple adjustments of the parameters of the probability density function (PDF), and multiple reconstructions even with a fixed PDF, which is impractical for DCE-MRI. In this paper, we study an under-sampling trajectory design that is robust both to changes in the PDF parameters and to the randomness of a fixed PDF. The strategy is to adaptively segment k-space into low- and high-frequency domains and to apply the VD scheme only in the high-frequency domain. Simulation results demonstrate high accuracy and robustness compared to the VD design.
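
    The two-region strategy can be sketched as a 1D sampling-mask generator; the matrix size, reduction factor, core width, and the PDF's polynomial power are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)

# 1D sketch of the two-region trajectory: fully sample a low-frequency
# core of k-space, apply variable-density (VD) random sampling only to
# the high frequencies.
N, R, center = 256, 4, 16         # lines, reduction factor, core width
k = np.arange(N) - N // 2
mask = np.zeros(N, dtype=bool)
mask[np.abs(k) <= center // 2] = True        # fully sampled core

outer = ~mask
pdf = (1.0 - np.abs(k[outer]) / np.abs(k).max()) ** 3   # decaying PDF
pdf /= pdf.sum()
n_extra = N // R - int(mask.sum())
picks = rng.choice(np.where(outer)[0], size=n_extra, replace=False, p=pdf)
mask[picks] = True                           # final sampling pattern
```

    Because the core is deterministic, repeated draws perturb only high-frequency content, which is what makes the design robust to the randomness of a fixed PDF.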

  8. Kalman filter data assimilation: targeting observations and parameter estimation.

    PubMed

    Bellsky, Thomas; Kostelich, Eric J; Mahalov, Alex

    2014-06-01

    This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation.
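
    A toy version of the paper's first, analytical point (observing where the forecast variance is largest reduces state estimation error the most) can be written in a few lines. A diagonal forecast covariance is assumed so the comparison is exact, and all sizes and variances are invented:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy illustration of observation targeting in a single Kalman analysis
# step, with a diagonal forecast error covariance.
n = 20
P = np.diag(rng.uniform(0.5, 5.0, n))   # forecast error covariance
r = 0.1                                  # observation error variance

def analysis_trace(P, idx, r):
    """Trace of the analysis covariance after observing component idx."""
    H = np.zeros((1, P.shape[0])); H[0, idx] = 1.0
    K = P @ H.T / (H @ P @ H.T + r)      # Kalman gain
    return np.trace((np.eye(P.shape[0]) - K @ H) @ P)

targeted = analysis_trace(P, int(np.argmax(np.diag(P))), r)
random_avg = np.mean([analysis_trace(P, i, r) for i in range(n)])
```

    Observing the largest-variance component shrinks the total error the most; in the LETKF setting the ensemble spread plays the role of the diagonal of P.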

  9. Kalman filter data assimilation: Targeting observations and parameter estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bellsky, Thomas, E-mail: bellskyt@asu.edu; Kostelich, Eric J.; Mahalov, Alex

    2014-06-15

    This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation.

  10. Flowing Hot or Cold: User-Friendly Computational Models of Terrestrial and Planetary Lava Channels and Lakes

    NASA Astrophysics Data System (ADS)

    Sakimoto, S. E. H.

    2016-12-01

    Planetary volcanism has redefined what is considered volcanism. "Magma" may now be anything from the molten rock familiar at terrestrial volcanoes to the cryovolcanic ammonia-water mixes erupted on outer solar system moons. Even with unfamiliar compositions and source mechanisms, however, we find familiar landforms such as volcanic channels, lakes, flows, and domes, and thus a multitude of possibilities for modeling. As on Earth, these landforms lend themselves to analysis for estimating storage, eruption, and/or flow rates. This has potential pitfalls, as extending the simplified analytic models we often use for terrestrial features into unfamiliar parameter space might yield misleading results. Our most commonly used tools for estimating flow and cooling have tended to lag significantly behind the state of the art; the easiest methods to use are neither realistic nor accurate, while the more realistic and accurate computational methods are not simple to use. Since the latter tend to be expensive and to require a significant learning curve, there is a need for a user-friendly approach that still takes advantage of their accuracy. One method is to use the computational package to generate a server-based tool that allows less computationally inclined users to obtain accurate results over their range of input parameters for a given problem geometry. A second method is to use the computational package to generate a polynomial empirical solution for each class of flow geometry that can be solved fairly easily by anyone with a spreadsheet. In this study, we demonstrate both approaches for several channel flow and lava lake geometries with terrestrial and extraterrestrial examples and compare their results. Specifically, we model cooling rectangular channel flow of a yield-strength material, with applications to Mauna Loa, Kilauea, Venus, and Mars.
This approach also shows promise with model applications to lava lakes, magma flow through cracks, and volcanic dome formation.
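
    For the channel-flow case, a standard Bingham (yield-strength) sheet-flow flux relation illustrates the kind of closed-form building block such user-friendly tools wrap; the material properties below are illustrative, roughly basaltic values, not the study's inputs:

```python
import math

# Flux per unit width of a Bingham (yield-strength) fluid sheet flowing
# down an incline, using the standard plug-flow result.
def bingham_flux(h, slope_deg, rho=2700.0, g=9.81, mu=1e3, tau_y=2e3):
    """Volumetric flux per unit width (m^2/s); 0 below the yield depth."""
    s = rho * g * math.sin(math.radians(slope_deg))
    h0 = tau_y / s          # depth where basal stress equals yield stress
    if h <= h0:
        return 0.0          # stress nowhere exceeds yield: no flow
    x = h0 / h
    return (s * h**3) / (3.0 * mu) * (1.0 - 1.5 * x + 0.5 * x**3)

q_newtonian = bingham_flux(2.0, 5.0, tau_y=0.0)   # no yield strength
q_bingham = bingham_flux(2.0, 5.0)                # plug flow reduces flux
```

    The yield strength both creates a non-flowing plug near the surface and imposes a minimum depth for any flow at all, two behaviors a purely Newtonian estimate misses.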

  11. Spectral estimation of received phase in the presence of amplitude scintillation

    NASA Technical Reports Server (NTRS)

    Vilnrotter, V. A.; Brown, D. H.; Hurd, W. J.

    1988-01-01

    A technique is demonstrated for obtaining the spectral parameters of the received carrier phase in the presence of carrier amplitude scintillation, by means of a digital phase-locked loop. Since the random amplitude fluctuations generate time-varying loop characteristics, straightforward processing of the phase detector output does not provide accurate results. The method developed here performs a time-varying inverse filtering operation on the corrupted observables, thus recovering the original phase process and enabling accurate estimation of its underlying parameters.

  12. Fundamental Parameters of Main-Sequence Stars in an Instant with Machine Learning

    NASA Astrophysics Data System (ADS)

    Bellinger, Earl P.; Angelou, George C.; Hekker, Saskia; Basu, Sarbani; Ball, Warrick H.; Guggenberger, Elisabeth

    2016-10-01

    Owing to the remarkable photometric precision of space observatories like Kepler, stellar and planetary systems beyond our own are now being characterized en masse for the first time. These characterizations are pivotal for endeavors such as searching for Earth-like planets and solar twins, understanding the mechanisms that govern stellar evolution, and tracing the dynamics of our Galaxy. The volume of data that is becoming available, however, brings with it the need to process this information accurately and rapidly. While existing methods can constrain fundamental stellar parameters such as ages, masses, and radii from these observations, they require substantial computational effort to do so. We develop a method based on machine learning for rapidly estimating fundamental parameters of main-sequence solar-like stars from classical and asteroseismic observations. We first demonstrate this method on a hare-and-hound exercise and then apply it to the Sun, 16 Cyg A and B, and 34 planet-hosting candidates that have been observed by the Kepler spacecraft. We find that our estimates and their associated uncertainties are comparable to the results of other methods, but with the additional benefit of being able to explore many more stellar parameters while using much less computation time. We furthermore use this method to present evidence for an empirical diffusion-mass relation. Our method is open source and freely available for the community to use.
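
    The speed-for-accuracy trade described above can be illustrated with a toy emulator: train a regressor on a grid of synthetic "stellar models", then evaluate new stars instantly. A distance-weighted k-nearest-neighbour regressor and an invented smooth mapping stand in for the paper's method and model grid:

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy emulator: a grid of synthetic "models" maps scaled observables to
# a fundamental parameter; new stars are then evaluated instantly.
n_train = 5000
X = rng.uniform(0.0, 1.0, size=(n_train, 3))          # scaled observables
y = 0.8 + 0.5*X[:, 0] - 0.3*X[:, 1]**2 + 0.2*X[:, 0]*X[:, 2]   # "mass"

def knn_predict(x, k=15):
    d = np.linalg.norm(X - x, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-9)               # inverse-distance weights
    return float(np.sum(w * y[idx]) / np.sum(w))

x_new = np.array([0.5, 0.5, 0.5])
mass_hat = knn_predict(x_new)
mass_true = 0.8 + 0.5*0.5 - 0.3*0.25 + 0.2*0.5*0.5   # = 1.025
```

    The expensive step (generating the training grid) happens once, after which each star costs only a lookup, which is the source of the reported speed-up.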

  13. Improvement and performance evaluation of the perturbation source method for an exact Monte Carlo perturbation calculation in fixed source problems

    NASA Astrophysics Data System (ADS)

    Sakamoto, Hiroki; Yamamoto, Toshihiro

    2017-09-01

    This paper presents an improvement and performance evaluation of the "perturbation source method", one of the Monte Carlo perturbation techniques. The formerly proposed perturbation source method was first-order accurate, although the method can easily be extended to an exact perturbation method. A transport equation for calculating the exact flux difference caused by a perturbation is solved. A perturbation particle representing a flux difference is explicitly transported in the perturbed system, instead of in the unperturbed system. The source term of the transport equation is defined by the unperturbed flux and the cross-section (or optical parameter) changes. The unperturbed flux is provided by an "on-the-fly" technique during the course of the ordinary fixed-source calculation for the unperturbed system. A set of perturbation particles is started at the collision point in the perturbed region and tracked until death. For a perturbation in a smaller portion of the whole domain, the efficiency of the perturbation source method can be improved by using a virtual scattering coefficient or cross section in the perturbed region to force collisions. Performance is evaluated by comparing the proposed method to other Monte Carlo perturbation methods. Numerical tests performed for particle transport in a two-dimensional geometry reveal that the perturbation source method is less effective than the correlated sampling method for a perturbation in a larger portion of the whole domain. However, for a perturbation in a smaller portion, the perturbation source method outperforms the correlated sampling method. The efficiency depends strongly on the adjustment of the new virtual scattering coefficient or cross section.

  14. Information spreading by a combination of MEG source estimation and multivariate pattern classification.

    PubMed

    Sato, Masashi; Yamashita, Okito; Sato, Masa-Aki; Miyawaki, Yoichi

    2018-01-01

    To understand information representation in human brain activity, it is important to investigate its fine spatial patterns at high temporal resolution. One possible approach is to use source estimation of magnetoencephalography (MEG) signals. Previous studies have mainly quantified accuracy of this technique according to positional deviations and dispersion of estimated sources, but it remains unclear how accurately MEG source estimation restores information content represented by spatial patterns of brain activity. In this study, using simulated MEG signals representing artificial experimental conditions, we performed MEG source estimation and multivariate pattern analysis to examine whether MEG source estimation can restore information content represented by patterns of cortical current in source brain areas. Classification analysis revealed that the corresponding artificial experimental conditions were predicted accurately from patterns of cortical current estimated in the source brain areas. However, accurate predictions were also possible from brain areas whose original sources were not defined. Searchlight decoding further revealed that this unexpected prediction was possible across wide brain areas beyond the original source locations, indicating that information contained in the original sources can spread through MEG source estimation. This phenomenon of "information spreading" may easily lead to false-positive interpretations when MEG source estimation and classification analysis are combined to identify brain areas that represent target information. Real MEG data analyses also showed that presented stimuli were able to be predicted in the higher visual cortex at the same latency as in the primary visual cortex, also suggesting that information spreading took place. These results indicate that careful inspection is necessary to avoid false-positive interpretations when MEG source estimation and multivariate pattern analysis are combined.

  15. Information spreading by a combination of MEG source estimation and multivariate pattern classification

    PubMed Central

    Sato, Masashi; Yamashita, Okito; Sato, Masa-aki

    2018-01-01

    To understand information representation in human brain activity, it is important to investigate its fine spatial patterns at high temporal resolution. One possible approach is to use source estimation of magnetoencephalography (MEG) signals. Previous studies have mainly quantified accuracy of this technique according to positional deviations and dispersion of estimated sources, but it remains unclear how accurately MEG source estimation restores information content represented by spatial patterns of brain activity. In this study, using simulated MEG signals representing artificial experimental conditions, we performed MEG source estimation and multivariate pattern analysis to examine whether MEG source estimation can restore information content represented by patterns of cortical current in source brain areas. Classification analysis revealed that the corresponding artificial experimental conditions were predicted accurately from patterns of cortical current estimated in the source brain areas. However, accurate predictions were also possible from brain areas whose original sources were not defined. Searchlight decoding further revealed that this unexpected prediction was possible across wide brain areas beyond the original source locations, indicating that information contained in the original sources can spread through MEG source estimation. This phenomenon of “information spreading” may easily lead to false-positive interpretations when MEG source estimation and classification analysis are combined to identify brain areas that represent target information. Real MEG data analyses also showed that presented stimuli were able to be predicted in the higher visual cortex at the same latency as in the primary visual cortex, also suggesting that information spreading took place. These results indicate that careful inspection is necessary to avoid false-positive interpretations when MEG source estimation and multivariate pattern analysis are combined. PMID:29912968

  16. Severe Weather Environments in Atmospheric Reanalyses

    NASA Astrophysics Data System (ADS)

    King, A. T.; Kennedy, A. D.

    2017-12-01

    Atmospheric reanalyses combine historical observational data using a fixed assimilation scheme to achieve a dynamically coherent representation of the atmosphere. How well these reanalyses represent severe weather environments via proxies is poorly defined. To quantify the performance of reanalyses, a database of proximity soundings near severe storms from the Rapid Update Cycle 2 (RUC-2) model will be compared to a suite of reanalyses including the North American Regional Reanalysis (NARR), the European Interim Reanalysis (ERA-Interim), the Modern-Era Retrospective analysis for Research and Applications, Version 2 (MERRA-2), the Japanese 55-year Reanalysis (JRA-55), the 20th Century Reanalysis (20CR), and the Climate Forecast System Reanalysis (CFSR). A variety of severe weather parameters will be calculated from these soundings, including convective available potential energy (CAPE), storm relative helicity (SRH), the supercell composite parameter (SCP), and the significant tornado parameter (STP). The soundings will be generated using SHARPpy, an open-source Python module for calculating severe weather parameters. Preliminary results indicate that NARR and JRA-55 are significantly more skilled at reproducing severe weather environments than the other reanalyses; the primary difference between these two reanalyses and the remaining ones is a significant negative bias in thermodynamic parameters. To facilitate climatological studies, the scope of the work will be expanded to compute these parameters for the entire domain and duration of selected reanalyses. Preliminary results from this effort will be presented and compared to observations at select locations. The dataset will be made publicly available to the larger scientific community, and details of the product will be provided.
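
    Of the parameters listed, CAPE is the simplest to sketch numerically: integrate parcel buoyancy over the positive area of a sounding. This is a hand-rolled illustration with invented, idealized profiles, not the SHARPpy API:

```python
import numpy as np

# CAPE = g * integral of (T_parcel - T_env) / T_env dz over the region
# where the parcel is warmer than its environment.
g = 9.81
z = np.arange(0.0, 12000.0, 100.0)           # height, m
T_env = 300.0 - 6.5e-3 * z                   # 6.5 K/km environmental lapse
# Hypothetical parcel: dry adiabatic (9.8 K/km) to 1 km, then a crude
# 5 K/km "moist adiabatic" ascent above (illustrative numbers only).
T_par = np.where(z < 1000.0,
                 302.0 - 9.8e-3 * z,
                 292.2 - 5.0e-3 * (z - 1000.0))
buoy = g * (T_par - T_env) / T_env           # buoyant acceleration, m/s^2
pos = np.clip(buoy, 0.0, None)               # keep only the positive area
cape = float(np.sum(0.5 * (pos[1:] + pos[:-1]) * np.diff(z)))   # J/kg
```

    SHARPpy's parcel routines add moisture, virtual-temperature corrections, and parcel-choice details that this sketch omits, which is why a common tool matters when comparing reanalyses.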

  17. Simultaneous head tissue conductivity and EEG source location estimation.

    PubMed

    Akalin Acar, Zeynep; Acar, Can E; Makeig, Scott

    2016-01-01

    Accurate electroencephalographic (EEG) source localization requires an electrical head model incorporating accurate geometries and conductivity values for the major head tissues. While consistent conductivity values have been reported for scalp, brain, and cerebrospinal fluid, measured brain-to-skull conductivity ratio (BSCR) estimates have varied between 8 and 80, likely reflecting both inter-subject and measurement method differences. In simulations, mis-estimation of skull conductivity can produce source localization errors as large as 3 cm. Here, we describe an iterative gradient-based approach to Simultaneous tissue Conductivity And source Location Estimation (SCALE). The scalp projection maps used by SCALE are obtained from near-dipolar effective EEG sources found by adequate independent component analysis (ICA) decomposition of sufficient high-density EEG data. We applied SCALE to simulated scalp projections of 15 cm²-scale cortical patch sources in an MR image-based electrical head model with simulated BSCR of 30. Initialized either with a BSCR of 80 or 20, SCALE estimated BSCR as 32.6. In Adaptive Mixture ICA (AMICA) decompositions of (45-min, 128-channel) EEG data from two young adults we identified sets of 13 independent components having near-dipolar scalp maps compatible with a single cortical source patch. Again initialized with either BSCR 80 or 25, SCALE gave BSCR estimates of 34 and 54 for the two subjects respectively. The ability to accurately estimate skull conductivity non-invasively from any well-recorded EEG data in combination with a stable and non-invasively acquired MR imaging-derived electrical head model could remove a critical barrier to using EEG as a sub-cm²-scale accurate 3-D functional cortical imaging modality. Copyright © 2015 Elsevier Inc. All rights reserved.

  18. Locating People Diagnosed With HIV for Public Health Action: Utility of HIV Case Surveillance and Other Data Sources.

    PubMed

    Padilla, Mabel; Mattson, Christine L; Scheer, Susan; Udeagu, Chi-Chi N; Buskin, Susan E; Hughes, Alison J; Jaenicke, Thomas; Wohl, Amy Rock; Prejean, Joseph; Wei, Stanley C

    Human immunodeficiency virus (HIV) case surveillance and other health care databases are increasingly being used for public health action, which has the potential to optimize the health outcomes of people living with HIV (PLWH). However, often PLWH cannot be located based on the contact information available in these data sources. We assessed the accuracy of contact information for PLWH in HIV case surveillance and additional data sources and whether time since diagnosis was associated with accurate contact information in HIV case surveillance and successful contact. The Case Surveillance-Based Sampling (CSBS) project was a pilot HIV surveillance system that selected a random population-based sample of people diagnosed with HIV from HIV case surveillance registries in 5 state and metropolitan areas. From November 2012 through June 2014, CSBS staff members attempted to locate and interview 1800 sampled people and used 22 data sources to search for contact information. Among 1063 contacted PLWH, HIV case surveillance data provided accurate telephone number, address, or HIV care facility information for 239 (22%), 412 (39%), and 827 (78%) sampled people, respectively. CSBS staff members used additional data sources, such as support services and commercial people-search databases, to locate and contact PLWH with insufficient contact information in HIV case surveillance. PLWH diagnosed <1 year ago were more likely to have accurate contact information in HIV case surveillance than were PLWH diagnosed ≥1 year ago (P = .002), and the benefit from using additional data sources was greater for PLWH with more longstanding HIV infection (P < .001). When HIV case surveillance cannot provide accurate contact information, health departments can prioritize searching additional data sources, especially for people with more longstanding HIV infection.

  19. Simultaneous head tissue conductivity and EEG source location estimation

    PubMed Central

    Acar, Can E.; Makeig, Scott

    2015-01-01

    Accurate electroencephalographic (EEG) source localization requires an electrical head model incorporating accurate geometries and conductivity values for the major head tissues. While consistent conductivity values have been reported for scalp, brain, and cerebrospinal fluid, measured brain-to-skull conductivity ratio (BSCR) estimates have varied between 8 and 80, likely reflecting both inter-subject and measurement method differences. In simulations, mis-estimation of skull conductivity can produce source localization errors as large as 3 cm. Here, we describe an iterative gradient-based approach to Simultaneous tissue Conductivity And source Location Estimation (SCALE). The scalp projection maps used by SCALE are obtained from near-dipolar effective EEG sources found by adequate independent component analysis (ICA) decomposition of sufficient high-density EEG data. We applied SCALE to simulated scalp projections of 15 cm2-scale cortical patch sources in an MR image-based electrical head model with simulated BSCR of 30. Initialized either with a BSCR of 80 or 20, SCALE estimated BSCR as 32.6. In Adaptive Mixture ICA (AMICA) decompositions of (45-min, 128-channel) EEG data from two young adults we identified sets of 13 independent components having near-dipolar scalp maps compatible with a single cortical source patch. Again initialized with either BSCR 80 or 25, SCALE gave BSCR estimates of 34 and 54 for the two subjects respectively. The ability to accurately estimate skull conductivity non-invasively from any well-recorded EEG data in combination with a stable and non-invasively acquired MR imaging-derived electrical head model could remove a critical barrier to using EEG as a sub-cm2-scale accurate 3-D functional cortical imaging modality. PMID:26302675

  20. A low-cost three-dimensional laser surface scanning approach for defining body segment parameters.

    PubMed

    Pandis, Petros; Bull, Anthony Mj

    2017-11-01

    Body segment parameters are used in many different applications in ergonomics as well as in dynamic modelling of the musculoskeletal system. Body segment parameters can be defined using different methods, including techniques that involve time-consuming manual measurements of the human body, used in conjunction with models or equations. In this study, a scanning technique for measuring subject-specific body segment parameters in an easy, fast, accurate and low-cost way was developed and validated. The scanner can obtain the body segment parameters in a single scanning operation, which takes between 8 and 10 s. The results obtained with the system show a standard deviation of 2.5% in volumetric measurements of the upper limb of a mannequin and 3.1% difference between scanning volume and actual volume. Finally, the maximum mean error for the moment of inertia by scanning a standard-sized homogeneous object was 2.2%. This study shows that a low-cost system can provide quick and accurate subject-specific body segment parameter estimates.

  1. Chaos synchronization and Nelder-Mead search for parameter estimation in nonlinear pharmacological systems: Estimating tumor antigenicity in a model of immunotherapy.

    PubMed

    Pillai, Nikhil; Craig, Morgan; Dokoumetzidis, Aristeidis; Schwartz, Sorell L; Bies, Robert; Freedman, Immanuel

    2018-06-19

    In mathematical pharmacology, models are constructed to provide a robust method for optimizing treatment. The predictive capability of pharmacological models depends heavily on the ability to track the system and to accurately determine parameters with reference to the sensitivity of projected outcomes. To closely track chaotic systems, one may choose to apply chaos synchronization. An advantageous byproduct of this methodology is the ability to quantify model parameters. In this paper, we illustrate the use of chaos synchronization combined with Nelder-Mead search to estimate parameters of the well-known Kirschner-Panetta model of IL-2 immunotherapy from noisy data. Chaos synchronization with Nelder-Mead search is shown to provide more accurate and reliable estimates than Nelder-Mead search based on an extended least squares (ELS) objective function. Our results underline the strength of this approach to parameter estimation and provide a broader framework of parameter identification for nonlinear models in pharmacology. Copyright © 2018 Elsevier Ltd. All rights reserved.
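    A minimal sketch of the synchronization-based estimation idea, with a chaotic logistic map standing in for the Kirschner-Panetta model and a simple coarse-to-fine grid scan standing in for the Nelder-Mead search (all names and values here are illustrative, not the authors' code):

```python
def observe(r, x0=0.3, n=200):
    """Trajectory of the chaotic logistic map x_{n+1} = r x_n (1 - x_n)."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

def sync_error(r_trial, data, k=0.5):
    """One-step prediction error of a slave map partially driven
    (synchronized) toward the observed master trajectory."""
    y, err = data[0], 0.0
    for x_next in data[1:]:
        y_pred = r_trial * y * (1.0 - y)
        err += (x_next - y_pred) ** 2
        y = y_pred + k * (x_next - y_pred)  # synchronization coupling
    return err

def estimate_r(data, lo=3.5, hi=4.0, points=51, rounds=4):
    """Coarse-to-fine grid scan on the synchronization error (a simple
    stand-in for the paper's Nelder-Mead search)."""
    best = lo
    for _ in range(rounds):
        step = (hi - lo) / (points - 1)
        grid = [lo + i * step for i in range(points)]
        best = min(grid, key=lambda r: sync_error(r, data))
        lo, hi = best - step, best + step
    return best
```

    Because the coupling keeps the slave trajectory close to the data, the error surface is smooth in the trial parameter even though the dynamics are chaotic, which is what makes direct-search optimization viable.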

  2. Source encoding in multi-parameter full waveform inversion

    NASA Astrophysics Data System (ADS)

    Matharu, Gian; Sacchi, Mauricio D.

    2018-04-01

    Source encoding techniques alleviate the computational burden of sequential-source full waveform inversion (FWI) by considering multiple sources simultaneously rather than independently. The reduced data volume requires fewer forward/adjoint simulations per non-linear iteration. Applications of source-encoded full waveform inversion (SEFWI) have thus far focused on mono-parameter acoustic inversion. We extend SEFWI to the multi-parameter case with applications presented for elastic isotropic inversion. Estimating multiple parameters can be challenging, as perturbations in different parameters can prompt similar responses in the data. We investigate the relationship between source encoding and parameter trade-off by examining the multi-parameter source-encoded Hessian. Probing of the Hessian demonstrates the convergence of the expected source-encoded Hessian to that of conventional FWI. The convergence implies that the parameter trade-off in SEFWI is comparable to that observed in FWI. A series of synthetic inversions are conducted to establish the feasibility of source-encoded multi-parameter FWI. We demonstrate that SEFWI requires fewer overall simulations than FWI to achieve a target model error for a range of first-order optimization methods. An inversion for spatially inconsistent P- (α) and S-wave (β) velocity models corroborates the expectation of comparable parameter trade-off in SEFWI and FWI. The final example demonstrates a shortcoming of SEFWI when confronted with time-windowing in data-driven inversion schemes. The limitation is a consequence of the implicit fixed-spread acquisition assumption in SEFWI. Alternative objective functions, namely the normalized cross-correlation and L1 waveform misfit, do not enable SEFWI to overcome this limitation.
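    The encoding step itself is simple: sum randomly sign-encoded sources into one simultaneous "supershot". A hypothetical sketch (not the authors' implementation; the adjoint side would apply the same codes):

```python
import random

def encode_supershot(source_wavelets, seed=0):
    """Sum randomly sign-encoded source wavelets (assumed equal length)
    into a single simultaneous 'supershot'. Returns the codes so the
    adjoint simulation can reuse the same encoding."""
    rng = random.Random(seed)
    codes = [rng.choice((-1.0, 1.0)) for _ in source_wavelets]
    nt = len(source_wavelets[0])
    supershot = [sum(c * w[t] for c, w in zip(codes, source_wavelets))
                 for t in range(nt)]
    return codes, supershot
```

    One forward simulation of the supershot then replaces one simulation per source, which is the cost saving the abstract refers to; redrawing the codes each iteration suppresses cross-talk between sources on average.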

  3. Accuracy of quantum sensors measuring yield photon flux and photosynthetic photon flux

    NASA Technical Reports Server (NTRS)

    Barnes, C.; Tibbitts, T.; Sager, J.; Deitzer, G.; Bubenheim, D.; Koerner, G.; Bugbee, B.; Knott, W. M. (Principal Investigator)

    1993-01-01

    Photosynthesis is fundamentally driven by photon flux rather than energy flux, but not all absorbed photons yield equal amounts of photosynthesis. Thus, two measures of photosynthetically active radiation have emerged: photosynthetic photon flux (PPF), which values all photons from 400 to 700 nm equally, and yield photon flux (YPF), which weights photons in the range from 360 to 760 nm according to plant photosynthetic response. We selected seven common radiation sources and measured YPF and PPF from each source with a spectroradiometer. We then compared these measurements with measurements from three quantum sensors designed to measure YPF, and from six quantum sensors designed to measure PPF. There were few differences among sensors within a group (usually <5%), but YPF values from sensors were consistently lower (3% to 20%) than YPF values calculated from spectroradiometric measurements. Quantum sensor measurements of PPF also were consistently lower than PPF values calculated from spectroradiometric measurements, but the differences were <7% for all sources, except red-light-emitting diodes. The sensors were most accurate for broad-band sources and least accurate for narrow-band sources. According to spectroradiometric measurements, YPF sensors were significantly less accurate (>9% difference) than PPF sensors under metal halide, high-pressure sodium, and low-pressure sodium lamps. Both sensor types were inaccurate (>18% error) under red-light-emitting diodes. Because both YPF and PPF sensors are imperfect integrators, and because spectroradiometers can measure photosynthetically active radiation much more accurately, researchers should consider developing calibration factors from spectroradiometric data for some specific radiation sources to improve the accuracy of integrating sensors.
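    As a worked illustration of the PPF definition (equal weight for every photon from 400 to 700 nm), assuming a spectrum sampled at 1 nm steps; this is a generic conversion, not the calibration procedure of the paper:

```python
H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
N_A = 6.022e23  # Avogadro constant, 1/mol

def ppf(wavelengths_nm, irradiance_w_m2_nm, step_nm=1.0):
    """Photosynthetic photon flux (umol m-2 s-1): count every photon
    between 400 and 700 nm equally, converting band energy to photon
    number via E = h*c/lambda."""
    total_photons = 0.0
    for lam, e in zip(wavelengths_nm, irradiance_w_m2_nm):
        if 400.0 <= lam <= 700.0:
            total_photons += e * step_nm * (lam * 1e-9) / (H * C)
    return total_photons / N_A * 1e6
```

    For example, 1 W m-2 nm-1 in a single 1 nm band at 600 nm corresponds to roughly 5 umol m-2 s-1; YPF would instead multiply each band by a plant-response weighting before summing.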

  4. Parameter Stability of the Functional–Structural Plant Model GREENLAB as Affected by Variation within Populations, among Seasons and among Growth Stages

    PubMed Central

    Ma, Yuntao; Li, Baoguo; Zhan, Zhigang; Guo, Yan; Luquet, Delphine; de Reffye, Philippe; Dingkuhn, Michael

    2007-01-01

    Background and Aims It is increasingly accepted that crop models, if they are to simulate genotype-specific behaviour accurately, should simulate the morphogenetic process generating plant architecture. A functional–structural plant model, GREENLAB, was previously presented and validated for maize. The model is based on a recursive mathematical process, with parameters whose values cannot be measured directly and need to be optimized statistically. This study aims at evaluating the stability of GREENLAB parameters in response to three types of phenotype variability: (1) among individuals from a common population; (2) among populations subjected to different environments (seasons); and (3) among different development stages of the same plants. Methods Five field experiments were conducted in the course of 4 years on irrigated fields near Beijing, China. Detailed observations were conducted throughout the seasons on the dimensions and fresh biomass of all above-ground plant organs for each metamer. Growth stage-specific target files were assembled from the data for GREENLAB parameter optimization. Optimization was conducted for specific developmental stages or the entire growth cycle, for individual plants (replicates), and for different seasons. Parameter stability was evaluated by comparing their CV with that of phenotype observation for the different sources of variability. A reduced data set was developed for easier model parameterization using one season, and validated for the four other seasons. Key Results and Conclusions The analysis of parameter stability among plants sharing the same environment and among populations grown in different environments indicated that the model explains some of the inter-seasonal variability of phenotype (parameters varied less than the phenotype itself), but not inter-plant variability (parameter and phenotype variability were similar). 
Parameter variability among developmental stages was small, indicating that parameter values were largely development-stage independent. The authors suggest that the high level of parameter stability observed in GREENLAB can be used to conduct comparisons among genotypes and, ultimately, genetic analyses. PMID:17158141

  5. A methodological approach to a realistic evaluation of skin absorbed doses during manipulation of radioactive sources by means of GAMOS Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Italiano, Antonio; Amato, Ernesto; Auditore, Lucrezia; Baldari, Sergio

    2018-05-01

    The accurate evaluation of the radiation burden associated with radiation absorbed doses to the skin of the extremities during the manipulation of radioactive sources is a critical issue in operational radiological protection, deserving the most accurate calculation approaches available. Monte Carlo simulation of radiation transport and interaction is the gold standard for the calculation of dose distributions in complex geometries and in the presence of extended spectra from multiple radiation sources. We propose the use of Monte Carlo simulations in GAMOS in order to accurately estimate the dose to the extremities during manipulation of radioactive sources. We report the results of these simulations for 90Y, 131I, 18F and 111In nuclides in water solutions enclosed in glass or plastic receptacles, such as vials or syringes. Skin equivalent doses at 70 μm of depth and dose-depth profiles are reported for different configurations, highlighting the importance of adopting a realistic geometrical configuration in order to obtain accurate dosimetric estimates. Owing to the ease of implementing GAMOS simulations, case-specific geometries and nuclides can be adopted, and results can be obtained in under ten minutes of computation time on a common workstation.

  6. Polarization effects on hard target calibration of lidar systems

    NASA Technical Reports Server (NTRS)

    Kavaya, Michael J.

    1987-01-01

    The theory of hard target calibration of lidar backscatter data, including laboratory measurements of the pertinent target reflectance parameters, is extended to include the effects of polarization of the transmitted and received laser radiation. The bidirectional reflectance-distribution function model of reflectance is expanded to a 4 x 4 matrix allowing Mueller matrix and Stokes vector calculus to be employed. Target reflectance parameters for calibration of lidar backscatter data are derived for various lidar system polarization configurations from integrating sphere and monostatic reflectometer measurements. It is found that correct modeling of polarization effects is mandatory for accurate calibration of hard target reflectance parameters and, therefore, for accurate calibration of lidar backscatter data.
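    The Mueller/Stokes calculus referred to above is plain 4 × 4 matrix algebra acting on a Stokes vector (I, Q, U, V); a minimal sketch, using an ideal horizontal linear polarizer as the example element (a textbook matrix, not one of the paper's measured reflectance matrices):

```python
def apply_mueller(M, S):
    """Propagate a Stokes vector S = (I, Q, U, V) through a 4x4
    Mueller matrix M."""
    return [sum(M[i][j] * S[j] for j in range(4)) for i in range(4)]

# Mueller matrix of an ideal horizontal linear polarizer
H_POLARIZER = [[0.5, 0.5, 0.0, 0.0],
               [0.5, 0.5, 0.0, 0.0],
               [0.0, 0.0, 0.0, 0.0],
               [0.0, 0.0, 0.0, 0.0]]
```

    Unpolarized light (1, 0, 0, 0) emerges as (0.5, 0.5, 0, 0): half the intensity, fully linearly polarized. A lidar's transmit-target-receive chain is then a product of such matrices applied to the transmitted Stokes vector.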

  7. Class Enumeration and Parameter Recovery of Growth Mixture Modeling and Second-Order Growth Mixture Modeling in the Presence of Measurement Noninvariance between Latent Classes

    PubMed Central

    Kim, Eun Sook; Wang, Yan

    2017-01-01

    Population heterogeneity in growth trajectories can be detected with growth mixture modeling (GMM). It is common that researchers compute composite scores of repeated measures and use them as multiple indicators of growth factors (baseline performance and growth), assuming measurement invariance between latent classes. Considering that the assumption of measurement invariance does not always hold, we investigate the impact of measurement noninvariance on class enumeration and parameter recovery in GMM through a Monte Carlo simulation study (Study 1). In Study 2, we examine the class enumeration and parameter recovery of second-order growth mixture modeling (SOGMM), which incorporates measurement models at the first-order level. Thus, SOGMM estimates growth trajectory parameters with reliable sources of variance, that is, the common factor variance of repeated measures, and allows heterogeneity in measurement parameters between latent classes. The class enumeration rates are examined with information criteria such as AIC, BIC, sample-size adjusted BIC, and hierarchical BIC under various simulation conditions. The results of Study 1 showed that the parameter estimates of baseline performance and growth factor means were biased to the degree of measurement noninvariance even when the correct number of latent classes was extracted. In Study 2, the class enumeration accuracy of SOGMM depended on information criteria, class separation, and sample size. The estimates of baseline performance and growth factor mean differences between classes were generally unbiased, but the size of measurement noninvariance was underestimated. Overall, SOGMM is advantageous in that it yields unbiased estimates of growth trajectory parameters and more accurate class enumeration compared to GMM by incorporating measurement models. PMID:28928691

  8. Waveform Retrieval and Phase Identification for Seismic Data from the CASS Experiment

    NASA Astrophysics Data System (ADS)

    Li, Zhiwei; You, Qingyu; Ni, Sidao; Hao, Tianyao; Wang, Hongti; Zhuang, Cantao

    2013-05-01

    The minimal damage to the deployment site and the high repeatability of the Controlled Accurate Seismic Source (CASS) show its potential for investigating seismic wave velocities in the Earth's crust. However, the difficulty of retrieving impulsive seismic waveforms from the CASS data and identifying the seismic phases has substantially limited its wider application. For example, identification of the seismic phases and accurate measurement of travel times are essential for resolving the spatial distribution of seismic velocities in the crust. It remains a challenging task to estimate accurate travel times of different seismic phases from the CASS data, which feature extended wave trains, unlike waveforms from impulsive events such as earthquakes or explosive sources. In this study, we introduce a time-frequency analysis method to process the CASS data, and try to retrieve the seismic waveforms and identify the major seismic phases traveling through the crust. We adopt the Wigner-Ville Distribution (WVD) approach, which has been used in signal detection and parameter estimation for linear frequency modulation (LFM) signals and offers the best time-frequency concentration. The Wigner-Hough transform (WHT) is applied to retrieve the impulsive waveforms from multi-component LFM signals, which comprise seismic phases with different arrival times. We processed the seismic data of the 40-ton CASS from the field experiment around the Xinfengjiang reservoir with the WVD and WHT methods. The results demonstrate that these methods are effective in waveform retrieval and phase identification, especially for high-frequency seismic phases such as PmP and SmS, which have strong amplitudes at large epicentral distances of 80-120 km. Further studies are still needed to improve the accuracy of travel time estimation, so as to further promote the applicability of the CASS for imaging seismic velocity structure.
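    The WVD evaluates, at each time sample, a DFT of the instantaneous autocorrelation; a bare-bones sketch (the authors' processing chain, including the Wigner-Hough transform, is considerably more elaborate):

```python
import cmath

def wigner_ville(x):
    """Discrete pseudo-WVD of a complex (analytic) signal: for each
    time index n, DFT the instantaneous autocorrelation
    x[n+m] * conj(x[n-m]) over the available lags m."""
    N = len(x)
    W = []
    for n in range(N):
        L = min(n, N - 1 - n)
        r = [x[n + m] * x[n - m].conjugate() for m in range(-L, L + 1)]
        M = len(r)
        row = [abs(sum(r[i] * cmath.exp(-2j * cmath.pi * k * i / M)
                       for i in range(M))) for k in range(M)]
        W.append(row)
    return W
```

    For a pure tone at normalized frequency f0, the autocorrelation oscillates at 2*f0, so each time slice peaks at the bin nearest 2*f0*M; for an LFM chirp the peak ridge tracks twice the instantaneous frequency, which is what makes the WVD well suited to chirp-like CASS sweeps.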

  9. Effective Radius of Ice Cloud Particle Populations Derived from Aircraft Probes

    NASA Technical Reports Server (NTRS)

    Heymsfield, Andrew J.; Schmitt, Carl; Bansemer, Aaron; vanZadelhoff, Gerd-Jan; McGill, Matthew J.; Twohy, Cynthia

    2005-01-01

    The effective radius (r_e) is a crucial variable in representing the radiative properties of cloud layers in general circulation models. This parameter is proportional to the condensed water content (CWC) divided by the extinction (sigma). For ice cloud layers, parameterizations for r_e have been developed from aircraft in-situ measurements 1) indirectly, using data obtained from particle spectrometer probes and assumptions or observations about particle shape and mass to get the ice water content (IWC) and area to get sigma, and more recently 2) from probes that measure IWC and sigma directly. This study compares [IWC/sigma] derived from the two methods using data sets acquired from comparable instruments on two aircraft, one sampling clouds at mid-levels and the other at upper levels during the CRYSTAL-FACE field program in Florida in 2002. The sigma and IWC derived by each method are compared and evaluated in different ways for each aircraft data set. Direct measurements of sigma exceed those derived indirectly by a factor of two to two and a half. The IWC probes, relying on ice sublimation, appear to measure accurately except when the IWC is high or the particles are too large to sublimate completely during the short transit time through the probe. The IWC values estimated from the particle probes are accurate when direct measurements are available to provide constraints and useful information in high-IWC/large-particle situations. Because of the discrepancy in sigma estimates between the direct and indirect approaches, there is a factor of 2 to 3 difference in [IWC/sigma] between them. Although there are significant uncertainties involved in its use, comparisons with several independent data sources suggest that the indirect method is the more accurate of the two approaches. However, experiments are needed to resolve the source of the discrepancy in sigma.
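    The proportionality stated above is often written with the constant 3/(2·ρ_ice); this is a common convention, not necessarily the paper's exact definition:

```python
RHO_ICE = 917.0  # bulk ice density, kg m-3

def effective_radius(iwc, sigma):
    """Effective radius (m) from ice water content (kg m-3) and volume
    extinction coefficient (m-1), via r_e = 3*IWC/(2*rho_ice*sigma)."""
    return 3.0 * iwc / (2.0 * RHO_ICE * sigma)
```

    Written this way, the factor of 2 to 2.5 disagreement in sigma between the direct and indirect methods propagates directly into a factor of 2 to 2.5 disagreement in the retrieved r_e.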

  10. Tracking antibiotic resistance gene pollution from different sources using machine-learning classification.

    PubMed

    Li, Li-Guan; Yin, Xiaole; Zhang, Tong

    2018-05-24

    Antimicrobial resistance (AMR) has become a worldwide public health concern. Widespread AMR pollution poses a major challenge to accurately disentangling source-sink relationships, further confounded by point and non-point sources as well as endogenous and exogenous cross-reactivity under complicated environmental conditions. Because they cannot identify source-sink relationships within a quantitative framework, traditional antibiotic resistance gene (ARG) signature-based source-tracking methods are hardly a practical solution. By combining broad-spectrum ARG profiling with the machine-learning classification tool SourceTracker, here we present a novel way to address the question in the era of high-throughput sequencing. Its potential for extensive application was first validated using 656 global-scale samples covering diverse environmental types (e.g., human/animal gut, wastewater, soil, ocean) and broad geographical regions (e.g., China, USA, Europe, Peru). Its potential and limitations in source prediction, as well as the effect of parameter adjustment, were then rigorously evaluated using artificial configurations with representative source proportions. When applying SourceTracker in region-specific analysis, excellent performance was achieved by ARG profiles in two sample types with obviously different source compositions, i.e., the influent and effluent of a wastewater treatment plant. Two environmental metagenomic datasets spanning a gradient of anthropogenic interference further supported its potential for practical application. To complement general-profile-based source tracking in distinguishing continuous gradient pollution, a few generalist and specialist indicator ARGs across ecotypes were identified in this study. We demonstrate for the first time that the developed source-tracking platform, when coupled with proper experimental design and efficient metagenomic analysis tools, will have significant implications for assessing AMR pollution. Based on the predicted source contributions, risk ranking of different sources of ARG dissemination becomes possible, paving the way for establishing priorities in mitigating ARG spread and designing effective control strategies.
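    SourceTracker is a Bayesian classifier, but the underlying source-sink model treats a sink profile as a mixture of source profiles; a deliberately naive two-source least-squares sketch of that mixing idea (not SourceTracker's algorithm):

```python
def mixing_proportion(sink, src_a, src_b):
    """Least-squares estimate of the proportion p of src_a in a sink
    ARG profile modeled as p*src_a + (1-p)*src_b, clipped to [0, 1].
    All profiles are equal-length abundance vectors."""
    num = sum((s - b) * (a - b) for s, a, b in zip(sink, src_a, src_b))
    den = sum((a - b) ** 2 for a, b in zip(src_a, src_b))
    p = num / den
    return max(0.0, min(1.0, p))
```

    SourceTracker generalizes this to many sources plus an "unknown" component and quantifies uncertainty via Gibbs sampling, which is why it is preferred over simple regression in practice.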

  11. Microseismic imaging using Geometric-mean Reverse-Time Migration in Hydraulic Fracturing Monitoring

    NASA Astrophysics Data System (ADS)

    Yin, J.; Ng, R.; Nakata, N.

    2017-12-01

    Unconventional oil and gas exploration techniques such as hydraulic fracturing are associated with microseismic events related to the generation and development of fractures. For example, hydraulic fracturing, which is common in Southern Oklahoma, produces earthquakes greater than magnitude 2.0. Finding accurate locations and mechanisms for these events provides important information on local stress conditions, fracture distribution, hazard assessment, and economic impact. Accurate source locations are also important for separating fracking-induced from wastewater-disposal-induced seismicity. Here, we implement a wavefield-based imaging method called Geometric-mean Reverse-Time Migration (GmRTM), which locates microseismic events accurately by back-projecting the recorded wavefields. We apply GmRTM to microseismic data collected during hydraulic fracturing to image microseismic source locations and, potentially, fractures. Assuming an accurate velocity model, GmRTM can improve the spatial resolution of source locations compared with HypoDD or P/S travel-time-based methods. We will discuss the results from GmRTM and HypoDD using this field dataset and synthetic data.
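    The geometric-mean imaging condition can be caricatured in one dimension: multiply, rather than sum, receiver amplitudes at each candidate's predicted arrival time, so a location lights up only if every receiver agrees. A hypothetical travel-time-only sketch (GmRTM itself back-propagates full wavefields through a velocity model):

```python
def gm_image(candidates, receivers, records, dt, velocity):
    """Geometric mean, over receivers, of the recorded amplitude at each
    candidate location's predicted arrival time (1-D, straight rays)."""
    image = []
    for c in candidates:
        prod = 1.0
        for r, rec in zip(receivers, records):
            t = abs(c - r) / velocity
            idx = min(int(round(t / dt)), len(rec) - 1)
            prod *= max(rec[idx], 1e-12)  # floor keeps the product positive
        image.append(prod ** (1.0 / len(receivers)))
    return image
```

    Compared with a summation (cross-correlation) imaging condition, the product sharply suppresses locations supported by only a subset of receivers, which is the source of GmRTM's resolution advantage.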

  12. Estimation of Temporal Gait Parameters Using a Human Body Electrostatic Sensing-Based Method.

    PubMed

    Li, Mengxuan; Li, Pengfei; Tian, Shanshan; Tang, Kai; Chen, Xi

    2018-05-28

    Accurate estimation of gait parameters is essential for obtaining quantitative information on motor deficits in Parkinson's disease and other neurodegenerative diseases, which helps determine disease progression and therapeutic interventions. Due to the demand for high accuracy, unobtrusive measurement methods such as optical motion capture systems, foot pressure plates, and other systems have been commonly used in clinical environments. However, the high cost of existing lab-based methods greatly hinders their wider usage, especially in developing countries. In this study, we present a low-cost, noncontact, and accurate method for estimating temporal gait parameters by sensing and analyzing the electrostatic field generated by human foot stepping. The proposed method achieved an average 97% accuracy in gait phase detection and was further validated by comparison with a foot pressure system in 10 healthy subjects. The two sets of results were compared using the Pearson correlation coefficient r and showed excellent consistency (r = 0.99, p < 0.05). The repeatability of the proposed method was assessed between days using intraclass correlation coefficients (ICC), and showed good test-retest reliability (ICC = 0.87, p < 0.01). The proposed method could be an affordable and accurate tool for measuring temporal gait parameters in hospital laboratories and in patients' home environments.
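    The consistency statistic quoted above is the ordinary Pearson correlation; for two equal-length series of per-stride timings (one per measurement system) it can be computed as:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5
```

    Note that Pearson r measures linear association, not agreement: two systems with a constant offset still score r = 1, which is why the study additionally reports ICC for test-retest reliability.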

  13. Monitoring the injured brain: registered, patient specific atlas models to improve accuracy of recovered brain saturation values

    NASA Astrophysics Data System (ADS)

    Clancy, Michael; Belli, Antonio; Davies, David; Lucas, Samuel J. E.; Su, Zhangjie; Dehghani, Hamid

    2015-07-01

    The subject of superficial contamination and signal origins remains a widely debated topic in the field of Near Infrared Spectroscopy (NIRS), yet the concept of using the technology to monitor an injured brain, in a clinical setting, poses additional challenges concerning the quantitative accuracy of recovered parameters. Using high-density diffuse optical tomography probes, quantitatively accurate parameters from different layers (skin, bone and brain) can be recovered from subject-specific reconstruction models. This study assesses the use of registered atlas models for situations where subject-specific models are not available. Data simulated from subject-specific models were reconstructed using the eight registered atlas models, implementing a regional (layered) parameter recovery in NIRFAST. A 3-region recovery based on the atlas model yielded recovered brain saturation values that were accurate to within 4.6% (percentage error) of the simulated values, validating the technique. The recovered saturations in the superficial regions were not quantitatively accurate. These findings highlight differences in superficial (skin and bone) layer thickness between the subject and atlas models. This layer-thickness mismatch was propagated through the reconstruction process, decreasing the parameter accuracy.

  14. Novel Method for Incorporating Model Uncertainties into Gravitational Wave Parameter Estimates

    NASA Astrophysics Data System (ADS)

    Moore, Christopher J.; Gair, Jonathan R.

    2014-12-01

    Posterior distributions on parameters computed from experimental data using Bayesian techniques are only as accurate as the models used to construct them. In many applications, these models are incomplete, which both reduces the prospects of detection and leads to a systematic error in the parameter estimates. In the analysis of data from gravitational wave detectors, for example, accurate waveform templates can be computed using numerical methods, but the prohibitive cost of these simulations means this can only be done for a small handful of parameters. In this Letter, a novel method to fold model uncertainties into data analysis is proposed; the waveform uncertainty is analytically marginalized over using a prior distribution constructed via Gaussian process regression to interpolate the waveform difference from a small training set of accurate templates. The method is well motivated, easy to implement, and no more computationally expensive than standard techniques. The new method is shown to perform extremely well when applied to a toy problem. While we use the application to gravitational wave data analysis to motivate and illustrate the technique, it can be applied in any context where model uncertainties exist.
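    The interpolation step can be sketched as ordinary GP regression with an RBF kernel: fit the waveform difference at a few training parameter values, then evaluate the posterior mean elsewhere (a generic sketch; the kernel and hyperparameters are placeholders, not the authors' choices):

```python
import math

def _rbf(a, b, ell):
    """Squared-exponential (RBF) covariance between parameter values."""
    return math.exp(-0.5 * ((a - b) / ell) ** 2)

def _solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting
    (adequate for the small systems used here)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gp_mean(train_x, train_y, query_x, ell=1.0, noise=1e-8):
    """Posterior mean of a zero-mean GP: interpolates the training
    targets (e.g. waveform differences at simulated parameter values)
    and smoothly fills in between them."""
    K = [[_rbf(a, b, ell) + (noise if i == j else 0.0)
          for j, b in enumerate(train_x)] for i, a in enumerate(train_x)]
    alpha = _solve(K, list(train_y))
    return sum(_rbf(query_x, a, ell) * w for a, w in zip(train_x, alpha))
```

    The GP posterior variance (not computed here) is what supplies the prior width over which the waveform uncertainty is analytically marginalized.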

  15. Benefits of a clinical data warehouse with data mining tools to collect data for a radiotherapy trial

    PubMed Central

    Roelofs, Erik; Persoon, Lucas; Nijsten, Sebastiaan; Wiessler, Wolfgang; Dekker, André; Lambin, Philippe

    2016-01-01

    Introduction: Collecting trial data in a medical environment is at present mostly performed manually; it is therefore time-consuming, prone to errors, and often incomplete given the complexity of the data involved. Faster and more accurate methods are needed to improve data quality and to shorten collection times where information is scattered over multiple data sources. The purpose of this study is to investigate the possible benefit of modern data warehouse technology in the radiation oncology field. Material and methods: In this study, a Computer Aided Theragnostics (CAT) data warehouse combined with automated tools for feature extraction was benchmarked against the regular manual data-collection processes. Two sets of clinical parameters were compiled for non-small cell lung cancer (NSCLC) and rectal cancer, using 27 patients per disease. Data collection times and inconsistencies were compared between the manual and the automated extraction method. Results: The average time per case to collect the NSCLC data was 10.4 ± 2.1 min manually and 4.3 ± 1.1 min when using the automated method (p < 0.001). For rectal cancer, these times were 13.5 ± 4.1 and 6.8 ± 2.4 min, respectively (p < 0.001). In 3.2% of the data collected for NSCLC and 5.3% for rectal cancer, there was a discrepancy between the manual and the automated method. Conclusions: Aggregating multiple data sources in a data warehouse, combined with tools for extracting the relevant parameters, reduces data collection times and offers the ability to improve data quality. The initial investment in digitizing the data is expected to be compensated by the flexibility of the subsequent data analysis. Furthermore, successive investigations can easily select trial candidates and extract new parameters from the existing databases. PMID:23394741

  16. Benefits of a clinical data warehouse with data mining tools to collect data for a radiotherapy trial.

    PubMed

    Roelofs, Erik; Persoon, Lucas; Nijsten, Sebastiaan; Wiessler, Wolfgang; Dekker, André; Lambin, Philippe

    2013-07-01

    Collecting trial data in a medical environment is at present mostly performed manually; it is therefore time-consuming, prone to errors, and often incomplete given the complexity of the data involved. Faster and more accurate methods are needed to improve data quality and to shorten collection times where information is scattered over multiple data sources. The purpose of this study is to investigate the possible benefit of modern data warehouse technology in the radiation oncology field. In this study, a Computer Aided Theragnostics (CAT) data warehouse combined with automated tools for feature extraction was benchmarked against the regular manual data-collection processes. Two sets of clinical parameters were compiled for non-small cell lung cancer (NSCLC) and rectal cancer, using 27 patients per disease. Data collection times and inconsistencies were compared between the manual and the automated extraction method. The average time per case to collect the NSCLC data was 10.4 ± 2.1 min manually and 4.3 ± 1.1 min when using the automated method (p < 0.001). For rectal cancer, these times were 13.5 ± 4.1 and 6.8 ± 2.4 min, respectively (p < 0.001). In 3.2% of the data collected for NSCLC and 5.3% for rectal cancer, there was a discrepancy between the manual and the automated method. Aggregating multiple data sources in a data warehouse, combined with tools for extracting the relevant parameters, reduces data collection times and offers the ability to improve data quality. The initial investment in digitizing the data is expected to be compensated by the flexibility of the subsequent data analysis. Furthermore, successive investigations can easily select trial candidates and extract new parameters from the existing databases. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
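    The benchmark in this study reduces to comparing per-case collection times between the two methods. A minimal sketch of that style of summary, using synthetic per-case times rather than the trial's raw data:

```python
import statistics

# Sketch: summarizing per-case collection times for a manual vs an
# automated extraction method, in the style of the study's comparison.
# The per-case times below are synthetic, not from the trial.

manual = [9.8, 11.2, 10.5, 8.9, 12.0, 10.1]     # minutes per case
automated = [4.1, 4.6, 3.9, 4.4, 4.8, 4.0]      # minutes per case

def summarize(times):
    return statistics.mean(times), statistics.stdev(times)

m_mean, m_sd = summarize(manual)
a_mean, a_sd = summarize(automated)
savings = [m - a for m, a in zip(manual, automated)]

print(f"manual:    {m_mean:.1f} \u00b1 {m_sd:.1f} min")
print(f"automated: {a_mean:.1f} \u00b1 {a_sd:.1f} min")
print(f"mean time saved per case: {statistics.mean(savings):.1f} min")
```

    A paired significance test (as reported in the abstract with p < 0.001) would then be run on the per-case differences.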

  17. Biological reduction of chlorinated solvents: Batch-scale geochemical modeling

    NASA Astrophysics Data System (ADS)

    Kouznetsova, Irina; Mao, Xiaomin; Robinson, Clare; Barry, D. A.; Gerhard, Jason I.; McCarty, Perry L.

    2010-09-01

    Simulation of biodegradation of chlorinated solvents in dense non-aqueous phase liquid (DNAPL) source zones requires a model that accounts for the complexity of processes involved and that is consistent with available laboratory studies. This paper describes such a comprehensive modeling framework that includes microbially mediated degradation processes, microbial population growth and decay, geochemical reactions, as well as interphase mass transfer processes such as DNAPL dissolution, gas formation and mineral precipitation/dissolution. All these processes can be in equilibrium or kinetically controlled. A batch modeling example is presented in which the degradation of trichloroethene (TCE) and its byproducts and concomitant reactions (e.g., electron donor fermentation, sulfate reduction, pH buffering by calcite dissolution) were simulated. Local and global sensitivity analysis techniques were applied to delineate the dominant model parameters and processes. Sensitivity analysis indicated that accurate values for parameters related to dichloroethene (DCE) and vinyl chloride (VC) degradation (i.e., DCE and VC maximum utilization rates, yield due to DCE utilization, decay rate for DCE/VC dechlorinators) are important for prediction of the overall dechlorination time. These parameters influence the maximum growth rate of the DCE- and VC-dechlorinating microorganisms and, thus, the time required for a small initial population to reach a sufficient concentration to significantly affect the overall rate of dechlorination. Self-inhibition of chlorinated ethenes at high concentrations and natural buffering provided by the sediment were also shown to significantly influence the dechlorination time. Furthermore, the analysis indicated that the rates of the competing, nonchlorinated electron-accepting processes relative to the dechlorination kinetics also affect the overall dechlorination time. Results demonstrated that the model developed is a flexible research tool that is able to provide valuable insight into the fundamental processes and their complex interactions during bioremediation of chlorinated ethenes in DNAPL source zones.
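    The role played by the maximum utilization rates, yields, and decay rates discussed above can be seen in even a drastically simplified batch model. The sketch below integrates sequential reductive dechlorination (TCE to DCE to VC to ethene) with Monod kinetics and growth of a single dechlorinating population; the rate constants are illustrative assumptions, far simpler than the full geochemical framework of the paper.

```python
# Sketch: minimal batch model of sequential reductive dechlorination
# (TCE -> DCE -> VC -> ethene) with Monod kinetics and biomass growth.
# All parameter values are hypothetical placeholders.

def step(state, dt, kmax=0.5, Ks=0.1, yield_=0.05, decay=0.01):
    tce, dce, vc, eth, X = state
    r_tce = kmax * X * tce / (Ks + tce)   # TCE -> DCE
    r_dce = kmax * X * dce / (Ks + dce)   # DCE -> VC
    r_vc  = kmax * X * vc  / (Ks + vc)    # VC  -> ethene
    growth = yield_ * (r_tce + r_dce + r_vc) - decay * X
    return (tce - dt * r_tce,
            dce + dt * (r_tce - r_dce),
            vc  + dt * (r_dce - r_vc),
            eth + dt * r_vc,
            X   + dt * growth)

state = (1.0, 0.0, 0.0, 0.0, 0.01)  # mM chlorinated ethenes, small initial biomass
for _ in range(20000):              # explicit Euler integration
    state = step(state, dt=0.01)
print(state)  # final (TCE, DCE, VC, ethene, biomass); total ethene carbon is conserved
```

    Even in this toy version, the dechlorination time is controlled by how long the small initial biomass takes to grow, mirroring the sensitivity-analysis finding above.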

  18. Exploiting Aura OMI Level 2 Data with High Resolution Visualization

    NASA Astrophysics Data System (ADS)

    Wei, J. C.; Yang, W.; Johnson, J. E.; Zhao, P.; Gerasimov, I. V.; Pham, L.; Vicente, G. A.; Shen, S.

    2014-12-01

    Satellite data products are important for a wide variety of applications that can bring far-reaching benefits to the science community and the broader society. These benefits are best achieved when the satellite data are well utilized and interpreted, for example as model inputs or for the interpretation of extreme events (such as volcanic eruptions or dust storms) from satellite observations. Unfortunately, this is not always the case, despite the abundance and relative maturity of numerous satellite data products provided by NASA and other organizations. One way to help users better understand satellite data is to provide the data along with 'images', including accurate pixel-level (Level 2) information, pixel coverage area delineation, and science-team-recommended quality screening for individual geophysical parameters. The Goddard Earth Sciences Data and Information Services Center (GES DISC) strives to best support the user community for NASA Earth science data, for example through Software-as-a-Service (SaaS). Here, we present a new visualization tool that helps users exploit Aura Ozone Monitoring Instrument (OMI) Level 2 data. This new visualization service utilizes Open Geospatial Consortium (OGC) standard-compliant Web Map Service (WMS) and Web Coverage Service (WCS) calls in the backend infrastructure. The service allows users to select data sources (e.g., multiple parameters under the same measurement, such as NO2 and SO2 from OMI Level 2, or the same parameter with different methods of aggregation, such as NO2 in the OMNO2G and OMNO2D products), define areas of interest and temporal extents, zoom, pan, overlay, slide, and subset and reformat the data. The interface will also be able to connect to other OGC WMS and WCS servers, greatly enhancing its ability to integrate additional outside data/map sources such as the Global Imagery Browse Services (GIBS).
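    The backend calls described in this record follow the standard OGC WMS request pattern. A minimal sketch of constructing such a GetMap request; the endpoint URL and layer name are hypothetical, while the query parameter names are standard WMS 1.1.1.

```python
from urllib.parse import urlencode

# Sketch: building an OGC WMS GetMap request of the kind a
# visualization backend would issue. Endpoint and layer are invented;
# the parameter keys follow the WMS 1.1.1 specification.

base_url = "https://example.gov/wms"   # hypothetical server
params = {
    "SERVICE": "WMS",
    "VERSION": "1.1.1",
    "REQUEST": "GetMap",
    "LAYERS": "OMI_NO2_ColumnAmount",  # hypothetical layer name
    "SRS": "EPSG:4326",
    "BBOX": "-125,25,-65,50",          # area of interest (lon/lat)
    "TIME": "2014-06-01",              # temporal extent
    "WIDTH": 1024,
    "HEIGHT": 512,
    "FORMAT": "image/png",
}
request_url = base_url + "?" + urlencode(params)
print(request_url)
```

    Zooming and panning in such a client amount to re-issuing the request with a new BBOX; temporal sliding changes TIME.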

  19. Exploratory Study for Continuous-time Parameter Estimation of Ankle Dynamics

    NASA Technical Reports Server (NTRS)

    Kukreja, Sunil L.; Boyle, Richard D.

    2014-01-01

    Recently, a parallel pathway model to describe ankle dynamics was proposed. This model provides a relationship between ankle angle and net ankle torque as the sum of a linear and a nonlinear contribution. A technique to identify parameters of this model in discrete time has been developed. However, these parameters are a nonlinear combination of the continuous-time physiology, making insight into the underlying physiology impossible. The stable and accurate estimation of continuous-time parameters is critical for accurate disease modeling, clinical diagnosis, robotic control strategies, development of optimal exercise protocols for long-term space exploration, sports medicine, etc. This paper explores the development of a system identification technique to estimate the continuous-time parameters of ankle dynamics. The effectiveness of this approach is assessed via simulation of a continuous-time model of ankle dynamics with typical parameters found in clinical studies. The results show that although this technique improves estimates, it does not provide robust estimates of continuous-time parameters of ankle dynamics. We therefore conclude that alternative modeling strategies and more advanced estimation techniques should be considered in future work.
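    The central difficulty named in this record, that discrete-time parameters are a nonlinear combination of the continuous-time physiology, is visible even for a first-order linear system. A minimal sketch under that simplifying assumption (not the parallel pathway model itself): the discrete pole is exp(-a*T), so the continuous parameter must be recovered through a logarithm, and noise propagates nonlinearly through that map.

```python
import math

# Sketch: recovering a continuous-time parameter from a discrete-time
# estimate for the first-order system x' = -a*x sampled at interval T.
# All numeric values are illustrative.

a_true = 3.0        # continuous-time parameter (hypothetical)
T = 0.01            # sampling interval in seconds

# Exact discretization: x[k+1] = a_d * x[k], where a_d = exp(-a*T)
# is a nonlinear function of the underlying physiology.
a_d = math.exp(-a_true * T)

# Simulate noiseless samples and identify a_d by one-step regression
x = [1.0]
for _ in range(100):
    x.append(a_d * x[-1])
a_d_hat = (sum(x[k + 1] * x[k] for k in range(100))
           / sum(x[k] ** 2 for k in range(100)))

# Map the discrete estimate back to continuous time
a_hat = -math.log(a_d_hat) / T
print(a_hat)  # recovers a_true exactly in this noiseless case
```

    With measurement noise, the log map amplifies estimation error as T shrinks, which is one reason robust continuous-time estimation is hard.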

  20. A dispersive treatment of decays

    NASA Astrophysics Data System (ADS)

    Stoffer, Peter; Colangelo, Gilberto; Passemar, Emilie

    2017-01-01

    These decays have several features of interest: they allow an accurate measurement of ππ-scattering lengths; the decay is the best source for the determination of some low-energy constants of chiral perturbation theory (χPT); and one form factor of the decay is connected to the chiral anomaly. We present the results of our dispersive analysis of these decays, which provides a resummation of ππ- and Kπ-rescattering effects. The free parameters of the dispersion relation are fitted to the data of the high-statistics experiments E865 and NA48/2. By matching to χPT at NLO and NNLO, we determine the relevant low-energy constants. In contrast to a pure chiral treatment, the dispersion relation describes the observed curvature of one of the form factors, which we understand as an effect of rescattering beyond NNLO.
