Sample records for accurate source parameters

  1. Accurate estimation of seismic source parameters of induced seismicity by a combined approach of generalized inversion and genetic algorithm: Application to The Geysers geothermal area, California

    NASA Astrophysics Data System (ADS)

    Picozzi, M.; Oth, A.; Parolai, S.; Bindi, D.; De Landro, G.; Amoroso, O.

    2017-05-01

    The accurate determination of stress drop, seismic efficiency, and how source parameters scale with earthquake size is an important issue for seismic hazard assessment of induced seismicity. We propose an improved nonparametric, data-driven strategy suitable for monitoring induced seismicity, which combines the generalized inversion technique with genetic algorithms. In the first step of the analysis, the generalized inversion technique allows for an effective correction of waveforms for attenuation and site contributions. The retrieved source spectra are then inverted by a nonlinear, sensitivity-driven inversion scheme that allows accurate estimation of source parameters. We investigate the earthquake source characteristics of 633 induced earthquakes (Mw 2-3.8) recorded at The Geysers geothermal field (California) by a dense seismic network (32 stations, more than 17,000 velocity records). We find non-self-similar behavior, empirical source spectra that require an ω^γ source model with γ > 2 to be well fit, and small radiation efficiency ηSW. All these findings suggest different dynamic rupture processes for smaller and larger earthquakes, and that the proportion of high-frequency energy radiation and the amount of energy required to overcome friction or to create new fracture surfaces changes with earthquake size. Furthermore, we observe two distinct families of events with peculiar source parameters: one suggests the reactivation of deep structures linked to the regional tectonics, while the other supports the idea of an important role of steeply dipping faults in fluid pressure diffusion.
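    The two-step workflow in this record — correcting spectra for path and site effects, then fitting an ω^γ source model to recover the corner frequency and stress drop — can be sketched as follows. All numeric values are synthetic and the Brune-type stress-drop relation is an assumed stand-in; this is not the authors' implementation (which uses a genetic algorithm rather than local least squares).

```python
import numpy as np
from scipy.optimize import curve_fit

def source_spectrum(f, omega0, fc, gamma):
    """Generalized omega-gamma displacement source spectrum."""
    return omega0 / (1.0 + (f / fc) ** gamma)

# Synthetic "observed" source spectrum (values assumed for illustration)
f = np.logspace(-1, 2, 200)
rng = np.random.default_rng(0)
obs = source_spectrum(f, 1e-3, 4.0, 2.5) * rng.lognormal(0.0, 0.05, f.size)

# Non-linear fit for plateau, corner frequency, and high-frequency fall-off
(omega0, fc, gamma), _ = curve_fit(source_spectrum, f, obs, p0=[1e-3, 1.0, 2.0])

# Brune-type stress drop from the corner frequency (assumed M0 and beta)
beta = 3500.0                     # shear-wave speed, m/s (assumed)
M0 = 1.0e13                       # seismic moment, N*m (assumed)
r = 0.37 * beta / fc              # source radius, m (Brune 1970 constant)
stress_drop = 7.0 * M0 / (16.0 * r ** 3)   # Pa
```

    A fitted γ noticeably above 2, as recovered here, is the spectral signature the record associates with non-self-similar rupture.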

  2. Earthquake source parameters determined by the SAFOD Pilot Hole seismic array

    USGS Publications Warehouse

    Imanishi, K.; Ellsworth, W.L.; Prejean, S.G.

    2004-01-01

    We estimate the source parameters of microearthquakes by jointly analyzing seismograms recorded by the 32-level, three-component seismic array installed in the SAFOD Pilot Hole. We applied an inversion procedure to displacement amplitude spectra to estimate Q and the spectral parameters of the omega-square model (spectral level and corner frequency). Because we expect the spectral parameters and Q to vary slowly with depth in the well, we impose a smoothness constraint on those parameters as a function of depth using a linear first-difference operator. This method correctly resolves corner frequency and Q, which leads to a more accurate estimation of source parameters than can be obtained from single sensors. The stress drop of one example of the SAFOD target repeating earthquake falls in the range of typical tectonic earthquakes. Copyright 2004 by the American Geophysical Union.

  3. Automated source term and wind parameter estimation for atmospheric transport and dispersion applications

    NASA Astrophysics Data System (ADS)

    Bieringer, Paul E.; Rodriguez, Luna M.; Vandenberghe, Francois; Hurst, Jonathan G.; Bieberbach, George; Sykes, Ian; Hannan, John R.; Zaragoza, Jake; Fry, Richard N.

    2015-12-01

    Accurate simulations of the atmospheric transport and dispersion (AT&D) of hazardous airborne materials rely heavily on the source term parameters necessary to characterize the initial release and on the meteorological conditions that drive the downwind dispersion. In many cases the source parameters are not known and consequently are based on rudimentary assumptions. This is particularly true of accidental releases and of the intentional releases associated with terrorist incidents. When available, meteorological observations are often not representative of the conditions at the location of the release, and the use of these non-representative meteorological conditions can result in significant errors in the hazard assessments downwind of the sensors, even when the other source parameters are accurately characterized. Here, we describe a computationally efficient methodology to characterize both the release source parameters and the low-level winds (e.g., winds near the surface) required to produce a refined downwind hazard. This methodology, known as the Variational Iterative Refinement Source Term Estimation (STE) Algorithm (VIRSA), consists of a combination of modeling systems. These systems include a back-trajectory based source inversion method, a forward Gaussian puff dispersion model, and a variational refinement algorithm that uses both a simple forward AT&D model that is a surrogate for the more complex Gaussian puff model and a formal adjoint of this surrogate model. The back-trajectory based method is used to calculate a "first guess" source estimate based on the available observations of the airborne contaminant plume and atmospheric conditions. The variational refinement algorithm is then used to iteratively refine the first-guess STE parameters and meteorological variables. The algorithm has been evaluated across a wide range of scenarios of varying complexity. It has been shown to improve the source parameters for location by several hundred percent (normalized by the
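    The pairing of a simple forward dispersion surrogate with refinement of a first-guess source estimate can be illustrated with a toy crosswind-integrated Gaussian plume and a brute-force search. All geometry, wind, and source values below are assumed; this is a stand-in sketch, not VIRSA:

```python
import numpy as np

def plume(q, y0, xs, ys, u=3.0):
    """Toy crosswind-integrated Gaussian plume surrogate: concentration at
    sensors (xs, ys) from a source of strength q at crosswind offset y0."""
    sig = 0.08 * xs                           # plume spread grows downwind
    return q / (np.sqrt(2 * np.pi) * u * sig) * np.exp(-(ys - y0) ** 2 / (2 * sig ** 2))

# Sensor layout and synthetic observations from an assumed true source
xs = np.array([200.0, 400.0, 600.0, 800.0])   # downwind distances, m
ys = np.array([-50.0, 0.0, 30.0, -20.0])      # crosswind offsets, m
c_obs = plume(5.0, 10.0, xs, ys)              # truth: q = 5, y0 = 10

# Refinement of a "first guess": brute-force search over strength and offset
grid = ((np.sum((plume(q, y0, xs, ys) - c_obs) ** 2), q, y0)
        for q in np.linspace(1, 10, 50) for y0 in np.linspace(-40, 40, 50))
_, q_hat, y0_hat = min(grid)
```

    The real system replaces this grid search with a variational (adjoint-based) iteration, which scales to many more unknowns than brute force can.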

  4. Determination of Destress Blasting Effectiveness Using Seismic Source Parameters

    NASA Astrophysics Data System (ADS)

    Wojtecki, Łukasz; Mendecki, Maciej J.; Zuberek, Wacław M.

    2017-12-01

    Underground mining of coal seams in the Upper Silesian Coal Basin is currently performed under difficult geological and mining conditions. The mining depth, dislocations (faults and folds), and mining remnants are the factors most responsible for the rockburst hazard. This hazard can be minimized by active rockburst prevention, in which destress blasting plays an important role. Destress blasting in coal seams aims to relieve local stress concentrations; the blasts are usually performed from the longwall face to decrease the stress level ahead of the longwall. An accurate estimation of the effectiveness of active rockburst prevention is important when mining under disadvantageous geological and mining conditions, which increase the risk of rockburst. Seismic source parameters characterize the focus of a tremor and may therefore be useful in estimating the effects of destress blasting. The investigated destress blasts were performed in coal seam no. 507 during its longwall mining in one of the coal mines in the Upper Silesian Coal Basin, under difficult geological and mining conditions. The seismic source parameters of the provoked tremors were calculated. The presented preliminary investigations enable a rapid estimation of destress blasting effectiveness using seismic source parameters, but further analysis under other geological and mining conditions and with other blasting parameters is required.

  5. A simplified approach to characterizing a kilovoltage source spectrum for accurate dose computation.

    PubMed

    Poirier, Yannick; Kouznetsov, Alexei; Tambasco, Mauro

    2012-06-01

    %. The HVL and kVp are sufficient for characterizing a kV x-ray source spectrum for accurate dose computation. As these parameters can be easily and accurately measured, they provide a clinically feasible approach to characterizing a kV energy spectrum for patient-specific x-ray dose computations. Furthermore, these results provide experimental validation of our novel hybrid dose computation algorithm. © 2012 American Association of Physicists in Medicine.
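    The role of the HVL as a spectrum descriptor can be shown numerically: given a discrete photon spectrum and attenuation coefficients, the first half-value layer in aluminum is the thickness that halves the transmitted, fluence-weighted intensity. All values below are rough assumed numbers, not measured beam data:

```python
import numpy as np

energies = np.array([20.0, 40.0, 60.0, 80.0])   # keV bins (assumed)
fluence = np.array([0.1, 0.4, 0.35, 0.15])      # relative fluence (assumed)
mu_al = np.array([9.3, 1.54, 0.75, 0.55])       # approx. Al linear atten., 1/cm

def transmitted(t_cm):
    """Fraction of total fluence transmitted through t_cm of aluminum."""
    return np.sum(fluence * np.exp(-mu_al * t_cm)) / np.sum(fluence)

# Bisect on thickness until transmission is one half (transmitted is monotone)
lo, hi = 0.0, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if transmitted(mid) > 0.5 else (lo, mid)
hvl_cm = 0.5 * (lo + hi)
```

    Because beam hardening makes the transmitted spectrum non-exponential, the HVL and kVp together pin down the spectrum far better than either alone, which is the point the record makes.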

  6. Sensitivity of structural response to ground motion source and site parameters.

    USGS Publications Warehouse

    Safak, Erdal; Brebbia, C.A.; Cakmak, A.S.; Abdel Ghaffar, A.M.

    1985-01-01

    Designing structures to withstand earthquakes requires an accurate estimation of the expected ground motion. While engineers use the peak ground acceleration (PGA) to model strong ground motion, seismologists use physical characteristics of the source and the rupture mechanism, such as fault length, stress drop, shear wave velocity, seismic moment, distance, and attenuation. This study presents a method for calculating response spectra from seismological models using random vibration theory. It then investigates the effect of various source and site parameters on peak response. Calculations are based on a nonstationary stochastic ground motion model, which can incorporate all the parameters in both the frequency and time domains. The estimation of the peak response accounts for the effects of the non-stationarity, bandwidth, and peak correlations of the response.

  7. Examining ERP correlates of recognition memory: Evidence of accurate source recognition without recollection

    PubMed Central

    Addante, Richard J.; Ranganath, Charan; Yonelinas, Andrew P.

    2012-01-01

    Recollection is typically associated with high recognition confidence and accurate source memory. However, subjects sometimes make accurate source memory judgments even for items that are not confidently recognized, and it is not known whether these responses are based on recollection or some other memory process. In the current study, we measured event-related potentials (ERPs) while subjects made item and source memory confidence judgments in order to determine whether recollection supported accurate source recognition responses for items that were not confidently recognized. In line with previous studies, we found that recognition memory was associated with two ERP effects: an early-onsetting FN400 effect and a later parietal old-new effect [Late Positive Component (LPC)], which have been associated with familiarity and recollection, respectively. The FN400 increased gradually with item recognition confidence, whereas the LPC was only observed for highly confident recognition responses. The LPC was also related to source accuracy, but only for items that had received a high-confidence item recognition response; accurate source judgments to items that were less confidently recognized did not exhibit the typical ERP correlate of recollection or familiarity, but rather showed a late, broadly distributed negative ERP difference. The results indicate that accurate source judgments of episodic context can occur even when recollection fails. PMID:22548808

  8. Machine Learning of Parameters for Accurate Semiempirical Quantum Chemical Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dral, Pavlo O.; von Lilienfeld, O. Anatole; Thiel, Walter

    2015-05-12

    We investigate possible improvements in the accuracy of semiempirical quantum chemistry (SQC) methods through the use of machine learning (ML) models for the parameters. For a given class of compounds, ML techniques require sufficiently large training sets to develop ML models that can be used for adapting SQC parameters to reflect changes in molecular composition and geometry. The ML-SQC approach allows the automatic tuning of SQC parameters for individual molecules, thereby improving the accuracy without deteriorating transferability to molecules with molecular descriptors very different from those in the training set. The performance of this approach is demonstrated for the semiempirical OM2 method using a set of 6095 constitutional isomers of C7H10O2, for which accurate ab initio atomization enthalpies are available. The ML-OM2 results show improved average accuracy and a much reduced error range compared with those of standard OM2 results, with mean absolute errors in atomization enthalpies dropping from 6.3 to 1.7 kcal/mol. They are also found to be superior to the results from specific OM2 reparameterizations (rOM2) for the same set of isomers. The ML-SQC approach thus holds promise for fast and reasonably accurate high-throughput screening of materials and molecules.

  9. Machine learning of parameters for accurate semiempirical quantum chemical calculations

    DOE PAGES

    Dral, Pavlo O.; von Lilienfeld, O. Anatole; Thiel, Walter

    2015-04-14

    We investigate possible improvements in the accuracy of semiempirical quantum chemistry (SQC) methods through the use of machine learning (ML) models for the parameters. For a given class of compounds, ML techniques require sufficiently large training sets to develop ML models that can be used for adapting SQC parameters to reflect changes in molecular composition and geometry. The ML-SQC approach allows the automatic tuning of SQC parameters for individual molecules, thereby improving the accuracy without deteriorating transferability to molecules with molecular descriptors very different from those in the training set. The performance of this approach is demonstrated for the semiempirical OM2 method using a set of 6095 constitutional isomers of C7H10O2, for which accurate ab initio atomization enthalpies are available. The ML-OM2 results show improved average accuracy and a much reduced error range compared with those of standard OM2 results, with mean absolute errors in atomization enthalpies dropping from 6.3 to 1.7 kcal/mol. They are also found to be superior to the results from specific OM2 reparameterizations (rOM2) for the same set of isomers. The ML-SQC approach thus holds promise for fast and reasonably accurate high-throughput screening of materials and molecules.

  10. Source parameter inversion of compound earthquakes on GPU/CPU hybrid platform

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Ni, S.; Chen, W.

    2012-12-01

    The determination of earthquake source parameters is an essential problem in seismology. Accurate and timely determination of earthquake parameters (such as moment, depth, and the strike, dip, and rake of the fault planes) is significant both for rupture dynamics and for ground motion prediction or simulation. Rupture process studies, especially for moderate and large earthquakes, have become routine work for seismologists. Among these events, however, some behave unusually and intrigue seismologists. Such earthquakes usually consist of two similar-sized sub-events occurring within a very short time interval, such as the mb 4.5 earthquake of December 9, 2003, in Virginia. Studying these special events, including determining the source parameters of each sub-event, is helpful for understanding earthquake dynamics. However, the seismic signals of the two distinct sources are mixed together, which complicates the inversion. For common events, the Cut and Paste (CAP) method has proven effective for resolving source parameters; it jointly uses body waves and surface waves with independent time shifts and weights, and resolves fault orientation and focal depth with a grid search algorithm. Based on this method, we developed an algorithm (MUL_CAP) to simultaneously acquire the parameters of two distinct sub-events. Because the simultaneous inversion of both sub-events is very time consuming, we also developed a hybrid GPU-CPU version of CAP (HYBRID_CAP) to improve computational efficiency.
    Thanks to the advantages of GPUs for multidimensional storage and processing, we obtain excellent performance of the revised code on a GPU-CPU combined architecture, with speedup factors as high as 40x-90x compared to the classical CAP on a traditional CPU architecture. As a benchmark, we take synthetics as observations and invert for the source parameters of two given sub-events, and the inversion results are very consistent with the
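    The core difficulty MUL_CAP addresses — two sub-events whose signals overlap in one record — can be reduced to a one-dimensional toy problem: two wavelets with unknown delays and amplitudes, found by grid search with a least-squares amplitude solve at each grid node. (Real CAP searches strike, dip, rake, and depth against body- and surface-wave segments; everything below is a simplified illustration.)

```python
import numpy as np

def wavelet(t, t0):
    """Toy Gaussian wavelet centred at t0."""
    return np.exp(-((t - t0) / 0.5) ** 2)

t = np.linspace(0.0, 20.0, 2001)
obs = 1.0 * wavelet(t, 5.0) + 0.8 * wavelet(t, 7.5)   # assumed compound event

best = None
for t1 in np.arange(3.0, 10.0, 0.25):
    for t2 in np.arange(3.0, 10.0, 0.25):
        if t2 <= t1:
            continue
        # With delays fixed, sub-event amplitudes follow from least squares
        A = np.column_stack([wavelet(t, t1), wavelet(t, t2)])
        amp, *_ = np.linalg.lstsq(A, obs, rcond=None)
        misfit = float(np.sum((A @ amp - obs) ** 2))
        if best is None or misfit < best[0]:
            best = (misfit, t1, t2, amp)

_, t1_hat, t2_hat, amp_hat = best
```

    The nested grid search is what makes the full problem expensive, and it is also embarrassingly parallel, which is why a GPU port pays off.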

  11. Optimization of light source parameters in the photodynamic therapy of heterogeneous prostate

    NASA Astrophysics Data System (ADS)

    Li, Jun; Altschuler, Martin D.; Hahn, Stephen M.; Zhu, Timothy C.

    2008-08-01

    The three-dimensional (3D) heterogeneous distributions of optical properties in a patient prostate can now be measured in vivo. Such data can be used to obtain a more accurate light-fluence kernel. (For specified sources and points, the kernel gives the fluence delivered to a point by a source of unit strength.) In turn, the kernel can be used to solve the inverse problem that determines the source strengths needed to deliver a prescribed photodynamic therapy (PDT) dose (or light-fluence) distribution within the prostate (assuming uniform drug concentration). We have developed and tested computational procedures to use the new heterogeneous data to optimize delivered light-fluence. New problems arise, however, in quickly obtaining an accurate kernel following the insertion of interstitial light sources and data acquisition. (1) The light-fluence kernel must be calculated in 3D and separately for each light source, which increases kernel size. (2) An accurate kernel for light scattering in a heterogeneous medium requires ray tracing and volume partitioning, thus significant calculation time. To address these problems, two different kernels were examined and compared for speed of creation and accuracy of dose. Kernels derived more quickly involve simpler algorithms. Our goal is to achieve optimal dose planning with patient-specific heterogeneous optical data applied through accurate kernels, all within clinical times. The optimization process is restricted to accepting the given (interstitially inserted) sources, and determining the best source strengths with which to obtain a prescribed dose. The Cimmino feasibility algorithm is used for this purpose. The dose distribution and source weights obtained for each kernel are analyzed. In clinical use, optimization will also be performed prior to source insertion to obtain initial source positions, source lengths and source weights, but with the assumption of homogeneous optical properties. 
For this reason, we compare the
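    The Cimmino feasibility algorithm named in this record can be sketched for the source-weight problem: find non-negative source strengths whose delivered dose lies inside a prescribed window at every dose point. The kernel and bounds below are random toy values (not clinical data), constructed so that a feasible solution is known to exist:

```python
import numpy as np

rng = np.random.default_rng(2)
K = rng.uniform(0.5, 1.5, size=(40, 6))        # kernel: dose points x sources
w_true = np.ones(6)                             # a known-feasible weight vector
dose_rx = K @ w_true
lower, upper = dose_rx - 0.5, dose_rx + 0.5     # prescribed dose window

w = np.full(6, 2.0)                             # deliberately poor start
row_norm2 = np.einsum('ij,ij->i', K, K)
for _ in range(3000):
    dose = K @ w
    resid = np.clip(dose, lower, upper) - dose  # zero where constraints hold
    # Averaged (Cimmino) projection onto the violated half-spaces, then w >= 0
    w = np.maximum(w + 1.5 * K.T @ (resid / row_norm2) / K.shape[0], 0.0)

violation = np.max(np.abs(np.clip(K @ w, lower, upper) - K @ w))
```

    Because each constraint is projected independently and the steps are averaged, the iteration is simple to parallelize and degrades gracefully when the prescription is slightly infeasible, which is one reason Cimmino-type methods are used in treatment planning.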

  12. Local tsunamis and earthquake source parameters

    USGS Publications Warehouse

    Geist, Eric L.; Dmowska, Renata; Saltzman, Barry

    1999-01-01

    This chapter establishes the relationship among earthquake source parameters and the generation, propagation, and run-up of local tsunamis. In general terms, displacement of the seafloor during the earthquake rupture is modeled using the elastic dislocation theory for which the displacement field is dependent on the slip distribution, fault geometry, and the elastic response and properties of the medium. Specifically, nonlinear long-wave theory governs the propagation and run-up of tsunamis. A parametric study is devised to examine the relative importance of individual earthquake source parameters on local tsunamis, because the physics that describes tsunamis from generation through run-up is complex. Analysis of the source parameters of various tsunamigenic earthquakes have indicated that the details of the earthquake source, namely, nonuniform distribution of slip along the fault plane, have a significant effect on the local tsunami run-up. Numerical methods have been developed to address the realistic bathymetric and shoreline conditions. The accuracy of determining the run-up on shore is directly dependent on the source parameters of the earthquake, which provide the initial conditions used for the hydrodynamic models.

  13. A Parameter Identification Method for Helicopter Noise Source Identification and Physics-Based Semi-Empirical Modeling

    NASA Technical Reports Server (NTRS)

    Greenwood, Eric, II; Schmitz, Fredric H.

    2010-01-01

    A new physics-based parameter identification method for rotor harmonic noise sources is developed using an acoustic inverse simulation technique. This new method allows for the identification of individual rotor harmonic noise sources and allows them to be characterized in terms of their individual non-dimensional governing parameters. This new method is applied to both wind tunnel measurements and ground noise measurements of two-bladed rotors. The method is shown to match the parametric trends of main rotor Blade-Vortex Interaction (BVI) noise, allowing accurate estimates of BVI noise to be made for operating conditions based on a small number of measurements taken at different operating conditions.

  14. Effect of photon energy spectrum on dosimetric parameters of brachytherapy sources.

    PubMed

    Ghorbani, Mahdi; Mehrpouyan, Mohammad; Davenport, David; Ahmadi Moghaddas, Toktam

    2016-06-01

    The aim of this study is to quantify the influence of the photon energy spectrum of brachytherapy sources on task group No. 43 (TG-43) dosimetric parameters. Different photon spectra are used for a specific radionuclide in Monte Carlo simulations of brachytherapy sources. MCNPX code was used to simulate 125I, 103Pd, 169Yb, and 192Ir brachytherapy sources. Air kerma strength per activity, dose rate constant, radial dose function, and two-dimensional (2D) anisotropy functions were calculated, and isodose curves were plotted for three different photon energy spectra. The references for the photon energy spectra were published papers, the Lawrence Berkeley National Laboratory (LBNL), and the National Nuclear Data Center (NNDC). The data calculated with these photon energy spectra were compared. Dose rate constant values showed a maximum difference of 24.07% for the 103Pd source with different photon energy spectra. Radial dose function values based on different spectra were relatively the same. 2D anisotropy function values showed minor differences at most distances and angles. There was no detectable difference between the isodose contours. Dosimetric parameters obtained with different photon spectra were relatively the same; however, it is suggested that more accurate and updated photon energy spectra be used in Monte Carlo simulations. This would allow for the calculation of reliable dosimetric data for source modeling and calculation in brachytherapy treatment planning systems.
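    The TG-43 quantities compared in this record combine into a dose rate through the TG-43 formalism. The sketch below evaluates the point-source form with an interpolated radial dose function; all parameter values are illustrative placeholders, not consensus data for any real source:

```python
import numpy as np

# Sketch of the TG-43 point-source dose-rate formalism (illustrative values;
# real g(r) and anisotropy data come from consensus datasets).
S_K = 10.0          # air-kerma strength, U (assumed)
LAMBDA = 1.109      # dose-rate constant, cGy / (h * U) (illustrative)
r0 = 1.0            # reference distance, cm

r_tab = np.array([0.5, 1.0, 2.0, 3.0, 5.0])          # cm
g_tab = np.array([1.04, 1.00, 0.90, 0.79, 0.59])      # radial dose function

def dose_rate(r):
    """Dose rate on the transverse axis at distance r (cm)."""
    g = np.interp(r, r_tab, g_tab)
    geometry = (r0 / r) ** 2       # point-source geometry-function ratio
    return S_K * LAMBDA * geometry * g   # anisotropy factor omitted (~1 on axis)
```

    The record's finding that the dose rate constant is the most spectrum-sensitive quantity matters precisely because Λ multiplies everything in this expression.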

  15. Effect of photon energy spectrum on dosimetric parameters of brachytherapy sources

    PubMed Central

    Ghorbani, Mahdi; Davenport, David

    2016-01-01

    Abstract Aim The aim of this study is to quantify the influence of the photon energy spectrum of brachytherapy sources on task group No. 43 (TG-43) dosimetric parameters. Background Different photon spectra are used for a specific radionuclide in Monte Carlo simulations of brachytherapy sources. Materials and methods MCNPX code was used to simulate 125I, 103Pd, 169Yb, and 192Ir brachytherapy sources. Air kerma strength per activity, dose rate constant, radial dose function, and two-dimensional (2D) anisotropy functions were calculated, and isodose curves were plotted for three different photon energy spectra. The references for the photon energy spectra were published papers, the Lawrence Berkeley National Laboratory (LBNL), and the National Nuclear Data Center (NNDC). The data calculated with these photon energy spectra were compared. Results Dose rate constant values showed a maximum difference of 24.07% for the 103Pd source with different photon energy spectra. Radial dose function values based on different spectra were relatively the same. 2D anisotropy function values showed minor differences at most distances and angles. There was no detectable difference between the isodose contours. Conclusions Dosimetric parameters obtained with different photon spectra were relatively the same; however, it is suggested that more accurate and updated photon energy spectra be used in Monte Carlo simulations. This would allow for the calculation of reliable dosimetric data for source modeling and calculation in brachytherapy treatment planning systems. PMID:27247558

  16. Source encoding in multi-parameter full waveform inversion

    NASA Astrophysics Data System (ADS)

    Matharu, Gian; Sacchi, Mauricio D.

    2018-04-01

    Source encoding techniques alleviate the computational burden of sequential-source full waveform inversion (FWI) by considering multiple sources simultaneously rather than independently. The reduced data volume requires fewer forward/adjoint simulations per non-linear iteration. Applications of source-encoded full waveform inversion (SEFWI) have thus far focused on monoparameter acoustic inversion. We extend SEFWI to the multi-parameter case, with applications presented for elastic isotropic inversion. Estimating multiple parameters can be challenging, as perturbations in different parameters can prompt similar responses in the data. We investigate the relationship between source encoding and parameter trade-off by examining the multi-parameter source-encoded Hessian. Probing of the Hessian demonstrates the convergence of the expected source-encoded Hessian to that of conventional FWI. The convergence implies that the parameter trade-off in SEFWI is comparable to that observed in FWI. A series of synthetic inversions is conducted to establish the feasibility of source-encoded multi-parameter FWI. We demonstrate that SEFWI requires fewer overall simulations than FWI to achieve a target model error for a range of first-order optimization methods. An inversion for spatially inconsistent P-wave (α) and S-wave (β) velocity models corroborates the expectation of comparable parameter trade-off in SEFWI and FWI. The final example demonstrates a shortcoming of SEFWI when confronted with time-windowing in data-driven inversion schemes. The limitation is a consequence of the implicit fixed-spread acquisition assumption in SEFWI. Alternative objective functions, namely the normalized cross-correlation and L1 waveform misfit, do not enable SEFWI to overcome this limitation.
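    The core identity behind source encoding — that the gradient of a single randomly encoded "supershot" equals, in expectation over the codes, the sum of per-source gradients — can be checked directly on a toy linear forward operator. The random matrices below are stand-ins, not wave-equation solvers:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)
n_src, n_rec, n_par = 8, 16, 10
F = rng.normal(size=(n_src, n_rec, n_par))   # per-source linear forward ops
m_true = rng.normal(size=n_par)
data = F @ m_true                             # per-source records (n_src, n_rec)
m = np.zeros(n_par)                           # trial model

# Conventional gradient: one simulation per source, then sum
g_full = sum(F[s].T @ (F[s] @ m - data[s]) for s in range(n_src))

# Source-encoded gradient for one draw of +/-1 codes: a single supershot
codes = rng.choice([-1.0, 1.0], size=n_src)
F_enc = np.tensordot(codes, F, axes=1)        # encoded operator (n_rec, n_par)
g_enc = F_enc.T @ (F_enc @ m - codes @ data)

# Averaging over all 2^8 code vectors recovers g_full exactly,
# since E[c_s c_t] = delta_st kills the cross-talk terms.
g_avg = np.zeros(n_par)
for signs in product([-1.0, 1.0], repeat=n_src):
    c = np.asarray(signs)
    Fe = np.tensordot(c, F, axes=1)
    g_avg += Fe.T @ (Fe @ m - c @ data)
g_avg /= 2 ** n_src
```

    A single draw `g_enc` carries cross-talk noise; it is the expectation that matches, which is why the record studies the *expected* source-encoded Hessian.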

  17. Seismic source parameters of the induced seismicity at The Geysers geothermal area, California, by a generalized inversion approach

    NASA Astrophysics Data System (ADS)

    Picozzi, Matteo; Oth, Adrien; Parolai, Stefano; Bindi, Dino; De Landro, Grazia; Amoroso, Ortensia

    2017-04-01

    The accurate determination of stress drop, seismic efficiency and how source parameters scale with earthquake size is an important for seismic hazard assessment of induced seismicity. We propose an improved non-parametric, data-driven strategy suitable for monitoring induced seismicity, which combines the generalized inversion technique together with genetic algorithms. In the first step of the analysis the generalized inversion technique allows for an effective correction of waveforms for the attenuation and site contributions. Then, the retrieved source spectra are inverted by a non-linear sensitivity-driven inversion scheme that allows accurate estimation of source parameters. We therefore investigate the earthquake source characteristics of 633 induced earthquakes (ML 2-4.5) recorded at The Geysers geothermal field (California) by a dense seismic network (i.e., 32 stations of the Lawrence Berkeley National Laboratory Geysers/Calpine surface seismic network, more than 17.000 velocity records). We find for most of the events a non-selfsimilar behavior, empirical source spectra that requires ωγ source model with γ > 2 to be well fitted and small radiation efficiency ηSW. All these findings suggest different dynamic rupture processes for smaller and larger earthquakes, and that the proportion of high frequency energy radiation and the amount of energy required to overcome the friction or for the creation of new fractures surface changes with the earthquake size. Furthermore, we observe also two distinct families of events with peculiar source parameters that, in one case suggests the reactivation of deep structures linked to the regional tectonics, while in the other supports the idea of an important role of steeply dipping fault in the fluid pressure diffusion.

  18. Towards an accurate real-time locator of infrasonic sources

    NASA Astrophysics Data System (ADS)

    Pinsky, V.; Blom, P.; Polozov, A.; Marcillo, O.; Arrowsmith, S.; Hofstetter, A.

    2017-11-01

    Infrasonic signals propagate from an atmospheric source through media with stochastic, rapidly space-varying conditions. Hence their travel times, their amplitudes at sensor recordings, and even their manifestation in the so-called "shadow zones" are random. The traditional least-squares technique for locating infrasonic sources is therefore often not effective, and the problem of finding the best solution must be formulated in probabilistic terms. Recently, a series of papers has been published on the Bayesian Infrasonic Source Localization (BISL) method, based on the computation of the posterior probability density function (PPDF) of the source location as a convolution of the a priori probability distribution function (APDF) of the propagation model parameters with the likelihood function (LF) of the observations. The present study is devoted to the further development of BISL for higher accuracy and stability of the source location results and a lower computational load. We critically analyse previous algorithms and propose several new ones. First of all, we describe the general PPDF formulation and demonstrate that this relatively slow algorithm can be among the most accurate, provided an adequate APDF and LF are used. We then suggest using summation instead of integration in the general PPDF calculation for increased robustness, which leads to a 3D space-time optimization problem. Two different forms of APDF approximation are considered and applied to the PPDF calculation in our study. One of them, previously suggested but not yet properly used, is the so-called "celerity-range histograms" (CRHs). Another is the outcome of previous findings of linear mean travel time for the four first infrasonic phases in overlapping consecutive distance ranges. This stochastic model is extended here to the regional distance of 1000 km, and the APDF introduced is the probabilistic form of the junction between this travel time model and range-dependent probability
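    The PPDF idea can be reduced to a one-dimensional toy: a posterior over candidate source ranges built from Gaussian arrival-time likelihoods with a fixed mean celerity and a flat location prior. All numbers below are assumed, and a real BISL implementation uses celerity-range statistics rather than a single celerity:

```python
import numpy as np

ranges = np.linspace(0.0, 1000.0, 2001)          # candidate source ranges, km
celerity = 0.30                                   # km/s, assumed mean celerity
sigma_t = 20.0                                    # s, assumed travel-time scatter

stations = np.array([0.0, 400.0])                 # station positions, km
true_range = 650.0
t_obs = np.abs(true_range - stations) / celerity  # noise-free arrival times, s

# Log-posterior over candidate ranges: product of Gaussian likelihoods
logpost = np.zeros_like(ranges)
for x0, t in zip(stations, t_obs):
    t_pred = np.abs(ranges - x0) / celerity
    logpost += -0.5 * ((t_pred - t) / sigma_t) ** 2

post = np.exp(logpost - logpost.max())
post /= post.sum() * (ranges[1] - ranges[0])      # normalize to unit area
x_hat = ranges[np.argmax(post)]                   # maximum a posteriori range
```

    Replacing the single celerity with a celerity-range histogram turns each Gaussian factor into an empirical mixture, which is exactly the APDF refinement the record describes.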

  19. The influence of Monte Carlo source parameters on detector design and dose perturbation in small field dosimetry

    NASA Astrophysics Data System (ADS)

    Charles, P. H.; Crowe, S. B.; Kairn, T.; Knight, R.; Hill, B.; Kenny, J.; Langton, C. M.; Trapp, J. V.

    2014-03-01

    To obtain accurate Monte Carlo simulations of small radiation fields, it is important to model the initial source parameters (electron energy and spot size) accurately. However, recent studies have shown that small-field dosimetry correction factors are insensitive to these parameters. The aim of this work is to extend this concept and test whether these parameters affect dose perturbations in general, which is important for detector design and for calculating perturbation correction factors. The EGSnrc C++ user code cavity was used for all simulations. Varying amounts of air between 0 and 2 mm were deliberately introduced upstream of a diode, and the dose perturbation caused by the air was quantified. These simulations were then repeated using a range of initial electron energies (5.5 to 7.0 MeV) and electron spot sizes (0.7 to 2.2 FWHM). The resultant dose perturbations were large. For example, 2 mm of air caused a dose reduction of up to 31% when simulated with a 6 mm field size. However, these values did not vary by more than 2% when simulated across the full range of source parameters tested. If a detector is modified by the introduction of air, one can be confident that the response of the detector will be the same across all similar linear accelerators, and Monte Carlo modelling of each machine is not required.

  20. Numerically accurate computational techniques for optimal estimator analyses of multi-parameter models

    NASA Astrophysics Data System (ADS)

    Berger, Lukas; Kleinheinz, Konstantin; Attili, Antonio; Bisetti, Fabrizio; Pitsch, Heinz; Mueller, Michael E.

    2018-05-01

    Modelling unclosed terms in partial differential equations typically involves two steps: first, a set of known quantities needs to be specified as input parameters for a model, and second, a specific functional form needs to be defined to model the unclosed terms by the input parameters. Both steps involve a certain modelling error, with the former known as the irreducible error and the latter referred to as the functional error. Typically, only the total modelling error, which is the sum of the functional and irreducible errors, is assessed, but the concept of the optimal estimator enables the separate analysis of the total and the irreducible errors, yielding a systematic modelling error decomposition. In this work, attention is paid to the techniques required for the practical computation of irreducible errors. Typically, histograms are used for optimal estimator analyses, but this technique is found to add a non-negligible spurious contribution to the irreducible error if models with multiple input parameters are assessed. Thus, the error decomposition of an optimal estimator analysis becomes inaccurate, and misleading conclusions concerning modelling errors may be drawn. In this work, numerically accurate techniques for optimal estimator analyses are identified and a suitable evaluation of irreducible errors is presented. Four different computational techniques are considered: a histogram technique, artificial neural networks, multivariate adaptive regression splines, and an additive model based on a kernel method. For multiple input parameter models, only artificial neural networks and multivariate adaptive regression splines are found to yield satisfactorily accurate results. Beyond a certain number of input parameters, the assessment of models in an optimal estimator analysis even becomes practically infeasible if histograms are used. The optimal estimator analysis in this paper is applied to modelling the filtered soot intermittency in large eddy
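    The spurious contribution from histogram binning that this record warns about can be reproduced in a few lines: with a synthetic target whose true irreducible error is known, coarse binning visibly inflates the estimate of the conditional-mean residual while fine binning recovers it. (One-dimensional synthetic data; the paper's analyses involve multiple input parameters, where the effect is far worse.)

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(0.0, 1.0, 200_000)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.1, x.size)  # true irreducible variance: 0.01

def irreducible_error(n_bins):
    """Histogram (binning) estimate of E[(y - E[y|x])^2]."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.digitize(x, edges) - 1
    cond_mean = np.array([y[idx == b].mean() for b in range(n_bins)])
    return np.mean((y - cond_mean[idx]) ** 2)

irreducible = irreducible_error(100)        # fine bins: close to the true 0.01
irreducible_coarse = irreducible_error(10)  # coarse bins: inflated estimate
```

    In higher dimensions the number of bins needed for the fine-grained estimate grows exponentially, which is why the record turns to neural networks and regression splines as smooth conditional-mean estimators.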

  1. Lithospheric Models of the Middle East to Improve Seismic Source Parameter Determination/Event Location Accuracy

    DTIC Science & Technology

    2012-09-01

    State Award Nos. DE-AC52-07NA27344/24.2.3.2 and DOS_SIAA-11-AVC/NMA-1 ABSTRACT The Middle East is a tectonically complex and seismically...active region. The ability to accurately locate earthquakes and other seismic events in this region is complicated by tectonics, the uneven...and seismic source parameters show that this activity comes from tectonic events. This work is informed by continuous or event-based regional

  2. Improved centroid moment tensor analyses in the NIED AQUA (Accurate and QUick Analysis system for source parameters)

    NASA Astrophysics Data System (ADS)

    Kimura, H.; Asano, Y.; Matsumoto, T.

    2012-12-01

    The rapid determination of hypocentral parameters and their transmission to the public are valuable components of disaster mitigation. We have operated an automatic system for this purpose—termed the Accurate and QUick Analysis system for source parameters (AQUA)—since 2005 (Matsumura et al., 2006). In this system, the initial hypocenter, the moment tensor (MT), and the centroid moment tensor (CMT) solutions are automatically determined and posted on the NIED Hi-net Web site (www.hinet.bosai.go.jp). This paper describes improvements made to the AQUA to overcome limitations that became apparent after the 2011 Tohoku Earthquake (05:46:17 UTC, March 11, 2011). The improvements included the processing of NIED F-net velocity-type strong-motion records, because NIED F-net broadband seismographs saturate for great earthquakes such as the 2011 Tohoku Earthquake. These velocity-type strong-motion seismographs provide unsaturated records not only for the 2011 Tohoku Earthquake, but also at recording stations located close to the epicenters of M>7 earthquakes. We used 0.005-0.020 Hz records for M>7.5 earthquakes, in contrast to the 0.01-0.05 Hz records employed in the original system. The initial hypocenters determined from arrival times picked on seismograms recorded by NIED Hi-net stations can have large errors in magnitude and hypocenter location, especially for great earthquakes or earthquakes located far from the onland Hi-net network. In the AQUA, the size of the 2011 Tohoku Earthquake was initially underestimated as around M5 at the initial stage of rupture. Numerous aftershocks occurred at the outer rise east of the Japan trench, where a great earthquake is anticipated to occur. Hence, we modified the system to repeat the MT analyses assuming a larger size for all earthquakes whose magnitude was initially underestimated. We also broadened the search range of centroid depth for earthquakes located far from the onland Hi

  3. Data Mining for Efficient and Accurate Large Scale Retrieval of Geophysical Parameters

    NASA Astrophysics Data System (ADS)

    Obradovic, Z.; Vucetic, S.; Peng, K.; Han, B.

    2004-12-01

    Our effort is devoted to developing data mining technology for improving the efficiency and accuracy of geophysical parameter retrievals by learning a mapping from observation attributes to the corresponding parameters within the framework of classification and regression. We describe a method for efficient learning of neural network-based classification and regression models from high-volume data streams. The proposed procedure automatically learns a series of neural networks of different complexities on smaller data stream chunks and then properly combines them into an ensemble predictor through averaging. Based on the idea of progressive sampling, the proposed approach starts with a very simple network trained on a very small chunk and then gradually increases the model complexity and the chunk size until the learning performance no longer improves. Our empirical study on aerosol retrievals from data obtained with the MISR instrument mounted on the Terra satellite suggests that the proposed method is successful in learning complex concepts from large data streams with near-optimal computational effort. We also report on a method that complements deterministic retrievals by constructing accurate predictive algorithms and applying them to appropriately selected subsets of observed data. The method is based on developing more accurate predictors aimed at capturing global and local properties synthesized in a region. The procedure starts by learning the global properties of data sampled over the entire space, and continues by constructing specialized models on selected localized regions. The global and local models are integrated through an automated procedure that determines the optimal trade-off between the two components with the objective of minimizing the overall mean square error over a specific region. Our experimental results on MISR data showed that the combined model can increase the retrieval accuracy significantly. The preliminary results on various
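    The chunk-growing loop described above can be sketched as follows, with cheap polynomial fits standing in for the neural networks (the data, complexity schedule, and stopping rule are illustrative assumptions, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy "stream": the retrieval target is a smooth function of one attribute.
x = rng.uniform(-1, 1, 20_000)
y = 0.5 * x**3 - x + rng.normal(0, 0.05, x.size)
x_val, y_val = x[-2000:], y[-2000:]          # held-out validation set

models, chunk, degree, best = [], 500, 1, np.inf
while chunk <= 8000:
    # Fit a model of the current complexity on the current chunk
    # (polynomials stand in for the neural networks of the paper).
    models.append(np.polyfit(x[:chunk], y[:chunk], degree))
    # Ensemble prediction = average over all models trained so far.
    pred = np.mean([np.polyval(c, x_val) for c in models], axis=0)
    mse = np.mean((pred - y_val)**2)
    if mse >= best:    # learning performance no longer improves: stop
        break
    best = mse
    chunk *= 2         # grow the data chunk ...
    degree += 1        # ... and the model complexity together
```

The design point is that each chunk is processed once, so the total effort stays near-optimal even though the ensemble keeps improving while gains last.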

  4. Bayesian parameter estimation of a k-ε model for accurate jet-in-crossflow simulations

    DOE PAGES

    Ray, Jaideep; Lefantzi, Sophia; Arunajatesan, Srinivasan; ...

    2016-05-31

    Reynolds-averaged Navier–Stokes models are not very accurate for high-Reynolds-number compressible jet-in-crossflow interactions. The inaccuracy arises from the use of inappropriate model parameters and model-form errors in the Reynolds-averaged Navier–Stokes model. In this study, the hypothesis is pursued that Reynolds-averaged Navier–Stokes predictions can be significantly improved by using parameters inferred from experimental measurements of a supersonic jet interacting with a transonic crossflow.
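    As a hedged sketch of the inference step only (the actual study couples the k-ε parameters to RANS jet-in-crossflow simulations), a random-walk Metropolis sampler can calibrate a model constant against noisy measurements; the forward model and all numbers here are stand-ins:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical stand-in: calibrate a single constant c against noisy
# observations y = g(x; c_true) + noise, mimicking how a turbulence-model
# parameter is inferred from experimental measurements.
c_true, sigma = 1.5, 0.1
x = np.linspace(0, 1, 30)
y_obs = np.exp(-c_true * x) + rng.normal(0, sigma, x.size)

def log_post(c):
    """Log posterior: Gaussian likelihood with a flat prior on c > 0."""
    if c <= 0:
        return -np.inf
    r = y_obs - np.exp(-c * x)
    return -0.5 * np.sum(r**2) / sigma**2

# Random-walk Metropolis sampling of the posterior.
c, lp = 1.0, log_post(1.0)
samples = []
for _ in range(20_000):
    c_new = c + rng.normal(0, 0.1)
    lp_new = log_post(c_new)
    if np.log(rng.uniform()) < lp_new - lp:   # accept/reject step
        c, lp = c_new, lp_new
    samples.append(c)
post = np.array(samples[5000:])               # discard burn-in
```

The posterior mean and spread of `post` then quantify how well the data constrain the parameter, which is the quantity propagated into the improved predictions.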

  5. Volcano deformation source parameters estimated from InSAR: Sensitivities to uncertainties in seismic tomography

    USGS Publications Warehouse

    Masterlark, Timothy; Donovan, Theodore; Feigl, Kurt L.; Haney, Matt; Thurber, Clifford H.; Tung, Sui

    2016-01-01

    The eruption cycle of a volcano is controlled in part by the upward migration of magma. The characteristics of the magma flux produce a deformation signature at the Earth's surface. Inverse analyses use geodetic data to estimate strategic controlling parameters that describe the position and pressurization of a magma chamber at depth. The specific distribution of material properties controls how observed surface deformation translates to source parameter estimates. Seismic tomography models describe the spatial distributions of material properties that are necessary for accurate models of volcano deformation. This study investigates how uncertainties in seismic tomography models propagate into variations in the estimates of volcano deformation source parameters inverted from geodetic data. We conduct finite element model-based nonlinear inverse analyses of interferometric synthetic aperture radar (InSAR) data for Okmok volcano, Alaska, as an example. We then analyze the estimated parameters and their uncertainties to characterize the magma chamber. Analyses are performed separately for models simulating a pressurized chamber embedded in a homogeneous domain as well as for a domain having a heterogeneous distribution of material properties according to seismic tomography. The estimated depth of the source is sensitive to the distribution of material properties. The estimated depths for the homogeneous and heterogeneous domains are 2666 ± 42 and 3527 ± 56 m below mean sea level, respectively (99% confidence). A Monte Carlo analysis indicates that uncertainties of the seismic tomography cannot account for this discrepancy at the 99% confidence level. Accounting for the spatial distribution of elastic properties according to seismic tomography significantly improves the fit of the deformation model predictions and significantly influences estimates for parameters that describe the location of a pressurized magma chamber.

  6. Investigating the value of passive microwave observations for monitoring volcanic eruption source parameters

    NASA Astrophysics Data System (ADS)

    Montopoli, Mario; Cimini, Domenico; Marzano, Frank

    2016-04-01

    Volcanic eruptions inject both gas and solid particles into the atmosphere. The solid particles consist of mineral fragments of different sizes (from a few microns to meters), generally referred to as tephra. Tephra from volcanic eruptions has enormous impacts on social and economic activities through its effects on the environment, climate, public health, and air traffic. The size, density and shape of a particle determine its fall velocity and thus its residence time in the atmosphere. Larger particles tend to fall quickly in the proximity of the volcano, while smaller particles may remain suspended for several days and thus may be transported by winds for thousands of km. The impact of such hazards therefore involves local as well as large-scale effects. Local effects involve mostly the large-sized particles, while large-scale effects are caused by the transport of the finest ejected tephra (ash) through the atmosphere. Forecasts of ash paths in the atmosphere are routinely run after eruptions using dispersion models. These models make use of meteorological and volcanic source parameters. The former are usually available as output of numerical weather prediction models or large-scale reanalyses. Source parameters characterize the volcanic eruption near the vent; these are mainly the ash mass concentration along the vertical column and the top altitude of the volcanic plume, which is strictly related to the flux of the mass ejected at the emission source. These parameters should be known accurately and continuously; otherwise, strong hypotheses are needed, leading to large uncertainty in the dispersion forecasts. However, direct observations during an eruption are typically dangerous and impractical. Thus, satellite remote sensing is often exploited to monitor volcanic emissions, using visible (VIS) and infrared (IR) channels available on both Low Earth Orbit (LEO) and Geostationary Earth Orbit (GEO) satellites. 
VIS and IR satellite imagery are very useful to monitor

  7. Technical Note: Using experimentally determined proton spot scanning timing parameters to accurately model beam delivery time.

    PubMed

    Shen, Jiajian; Tryggestad, Erik; Younkin, James E; Keole, Sameer R; Furutani, Keith M; Kang, Yixiu; Herman, Michael G; Bues, Martin

    2017-10-01

    To accurately model the beam delivery time (BDT) for a synchrotron-based proton spot scanning system using experimentally determined beam parameters. A model to simulate the proton spot delivery sequences was constructed, and BDT was calculated by summing the times for layer switches, spot switches, and spot delivery. Test plans were designed to isolate and quantify the relevant beam parameters in the operation cycle of the proton beam therapy delivery system. These parameters included the layer switch time, magnet preparation and verification time, average beam scanning speeds in the x- and y-directions, proton spill rate, and maximum charge and maximum extraction time for each spill. The experimentally determined parameters, as well as the nominal values initially provided by the vendor, served as inputs to the model to predict BDTs for 602 clinical proton beam deliveries. The calculated BDTs (T_BDT) were compared with the BDTs recorded in the treatment delivery log files (T_Log): ∆t = T_Log − T_BDT. The experimentally determined average layer switch time for all 97 energies was 1.91 s (ranging from 1.9 to 2.0 s for beam energies from 71.3 to 228.8 MeV), the average magnet preparation and verification time was 1.93 ms, the average scanning speeds were 5.9 m/s in the x-direction and 19.3 m/s in the y-direction, the proton spill rate was 8.7 MU/s, and the maximum proton charge available for one acceleration was 2.0 ± 0.4 nC. Some of the measured parameters differed from the nominal values provided by the vendor. The calculated BDTs using experimentally determined parameters matched the recorded BDTs of the 602 beam deliveries (∆t = −0.49 ± 1.44 s), significantly more accurately than BDTs calculated using nominal timing parameters (∆t = −7.48 ± 6.97 s). An accurate model for BDT prediction was achieved by using the experimentally determined proton beam therapy delivery parameters, which may be useful in modeling the interplay effect and patient throughput. The model may
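    The summation model described above amounts to simple bookkeeping. The sketch below plugs in the quoted average values (layer switch time, spot switch time, spill rate) with a made-up three-layer plan, and ignores the spill charge limit and scan-speed terms of the full model:

```python
# Average values quoted in the abstract; the invented plan below is
# illustrative only, and the real model also accounts for the maximum
# spill charge, extraction time, and x/y scanning speeds.
LAYER_SWITCH_S = 1.91        # average energy-layer switch time [s]
SPOT_SWITCH_S = 0.00193      # magnet preparation + verification per spot [s]
SPILL_RATE_MU_PER_S = 8.7    # proton spill (dose delivery) rate [MU/s]

def beam_delivery_time(layers):
    """Estimate BDT for a plan given as a list of layers, each a list of
    spot weights in MU: sum of layer switches, spot switches, and the
    time to deliver each spot's monitor units."""
    t = 0.0
    for i, spots in enumerate(layers):
        if i > 0:
            t += LAYER_SWITCH_S                  # switching beam energy
        t += (len(spots) - 1) * SPOT_SWITCH_S    # moving between spots
        t += sum(spots) / SPILL_RATE_MU_PER_S    # delivering the dose
    return t

plan = [[0.5] * 100, [0.4] * 80, [0.3] * 60]     # hypothetical 3-layer plan
t = beam_delivery_time(plan)                     # ~15.8 s for this plan
```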

  8. Accurate estimation of motion blur parameters in noisy remote sensing image

    NASA Astrophysics Data System (ADS)

    Shi, Xueyan; Wang, Lin; Shao, Xiaopeng; Wang, Huilin; Tao, Zhong

    2015-05-01

    The relative motion between a remote sensing satellite sensor and objects on the ground is one of the most common causes of remote sensing image degradation. It seriously weakens image data interpretation and information extraction. In practice, the point spread function (PSF) must be estimated first for image restoration, and identifying the motion blur direction and length accurately is crucial for determining the PSF and restoring the image with precision. In general, the regular light-and-dark stripes in the spectrum can be used to obtain these parameters via the Radon transform. However, serious noise in actual remote sensing images often makes the stripes indistinct; the parameters then become difficult to calculate, and the error of the result is relatively large. In this paper, an improved motion blur parameter identification method for noisy remote sensing images is proposed to solve this problem. The spectral characteristics of noisy remote sensing images are analyzed first. An interactive image segmentation method based on graph theory, called GrabCut, is adopted to effectively extract the edge of the light center in the spectrum. The motion blur direction is estimated by applying the Radon transform to the segmentation result. In order to reduce random error, a method based on whole-column statistics is used when calculating the blur length. Finally, the Lucy-Richardson algorithm is applied to restore remote sensing images of the moon after estimating the blur parameters. The experimental results verify the effectiveness and robustness of our algorithm.
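    A one-dimensional toy of the spectral cue being exploited (the 2-D method additionally applies the Radon transform to find the stripe direction): uniform motion blur is a boxcar convolution, so the spectrum has zeros spaced 1/L apart, and the first dark stripe reveals the blur length L. All numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n, L = 1024, 16                      # signal length and true blur length
signal = rng.normal(size=n)          # stand-in for one image row
kernel = np.zeros(n)
kernel[:L] = 1.0 / L                 # boxcar kernel = uniform motion blur
blurred = np.fft.ifft(np.fft.fft(signal) * np.fft.fft(kernel)).real

spectrum = np.abs(np.fft.rfft(blurred))
# The boxcar's transform is a periodic sinc with its first zero at
# frequency 1/L, i.e. at FFT bin n/L; locate that first dark "stripe".
first_zero = 1 + int(np.argmin(spectrum[1:100]))
L_est = n // first_zero              # recovered blur length
```

In a real noisy image the stripe minima are blurred out, which is why the paper segments the spectrum (GrabCut) and uses whole-column statistics instead of a single minimum.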

  9. Estimation of Source and Attenuation Parameters from Ground Motion Observations for Induced Seismicity in Alberta

    NASA Astrophysics Data System (ADS)

    Novakovic, M.; Atkinson, G. M.

    2015-12-01

    We use a generalized inversion to solve for site response, regional source and attenuation parameters, in order to define a region-specific ground-motion prediction equation (GMPE) from ground motion observations in Alberta, following the method of Atkinson et al. (2015 BSSA). The database is compiled from over 200 small to moderate seismic events (M 1 to 4.2) recorded at ~50 regional stations (distances from 30 to 500 km), over the last few years; almost all of the events have been identified as being induced by oil and gas activity. We remove magnitude scaling and geometric spreading functions from observed ground motions and invert for stress parameter, regional attenuation and site amplification. Resolving these parameters allows for the derivation of a regionally-calibrated GMPE that can be used to accurately predict amplitudes across the region in real time, which is useful for ground-motion-based alerting systems and traffic light protocols. The derived GMPE has further applications for the evaluation of hazards from induced seismicity.
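    Schematically, once magnitude scaling and geometric spreading are removed, the remaining residuals can be inverted for anelastic attenuation and site terms by linear least squares. The sketch below uses synthetic numbers and omits the stress-parameter part of the actual inversion:

```python
import numpy as np

rng = np.random.default_rng(4)
n_ev, n_st = 50, 20
R = rng.uniform(30, 500, (n_ev, n_st))      # hypocentral distances [km]
c_true = 0.004                              # anelastic attenuation [1/km]
site_true = rng.normal(0, 0.2, n_st)        # log site amplifications
# Residual log amplitudes after removing magnitude scaling and
# geometric spreading (synthetic, with observational scatter).
lnA = -c_true * R + site_true + rng.normal(0, 0.1, R.shape)

# Design matrix: one distance column plus one dummy column per station,
# so the site terms absorb the overall amplitude level.
rows = R.size
G = np.zeros((rows, 1 + n_st))
G[:, 0] = -R.ravel()
G[np.arange(rows), 1 + np.tile(np.arange(n_st), n_ev)] = 1.0
m, *_ = np.linalg.lstsq(G, lnA.ravel(), rcond=None)
c_est = m[0]                                # recovered attenuation
```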

  10. The Exponent of High-frequency Source Spectral Falloff and Contribution to Source Parameter Estimates

    NASA Astrophysics Data System (ADS)

    Kiuchi, R.; Mori, J. J.

    2015-12-01

    As a way to understand the characteristics of the earthquake source, studies of source parameters (such as radiated energy and stress drop) and their scaling are important. In order to estimate source parameters reliably, we must often assume an appropriate source spectrum model, and the omega-square model is most frequently used. In this model, the spectrum is flat at lower frequencies and the falloff is proportional to the angular frequency squared. However, some studies (e.g. Allmann and Shearer, 2009; Yagi et al., 2012) reported that the exponent of the high-frequency falloff is other than -2. Therefore, in this study we estimate source parameters using a spectral model in which the falloff exponent is not fixed. We analyze the mainshock and larger aftershocks of the 2008 Iwate-Miyagi Nairiku earthquake. First, we calculate P wave and SH wave spectra using empirical Green's functions (EGF) to remove the path effects (such as attenuation) and site effects. For the EGF event, we select a smaller earthquake that is highly correlated with the target event. In order to obtain stable results, we calculate the spectral ratios using a multitaper spectrum analysis (Prieto et al., 2009) and then take a geometric mean over multiple stations. Finally, using the obtained spectral ratios, we perform a grid search to determine the high-frequency falloffs as well as the corner frequencies of both events. Our results indicate that the high-frequency falloff exponent is often less than 2.0. We do not observe any regional, focal mechanism, or depth dependence of the falloff exponent. In addition, our estimated corner frequencies and falloff exponents are consistent between the P wave and SH wave analyses. In our presentation, we show differences in the source parameters estimated using a fixed omega-square model and a model allowing variable high-frequency falloff.
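    The grid search described above can be sketched for a generalized source model with a free falloff exponent γ, where the target-to-EGF spectral ratio is R(f) = (M1/M2)·(1+(f/fc2)^γ)/(1+(f/fc1)^γ), with fc1 and fc2 the corner frequencies of the target and EGF events. Values and grids below are illustrative, and noise-free synthetic "data" stand in for the multitaper ratios:

```python
import numpy as np

f = np.logspace(-1, 1.5, 200)                     # frequency band [Hz]

def ratio(f, fc1, fc2, g, m=100.0):
    """Spectral ratio of two sources sharing falloff exponent g; m is
    the moment ratio M1/M2 (fixed here for simplicity)."""
    return m * (1 + (f / fc2)**g) / (1 + (f / fc1)**g)

obs = ratio(f, fc1=0.8, fc2=8.0, g=1.7)           # synthetic "data"

# Brute-force grid search over the falloff exponent and both corners,
# minimizing the log-spectral misfit.
best = None
for g in np.arange(1.2, 2.6, 0.1):
    for fc1 in np.logspace(-0.5, 0.5, 30):
        for fc2 in np.logspace(0.5, 1.5, 30):
            misfit = np.sum((np.log(obs) - np.log(ratio(f, fc1, fc2, g)))**2)
            if best is None or misfit < best[0]:
                best = (misfit, g, fc1, fc2)
_, g_est, fc1_est, fc2_est = best
```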

  11. Volcanic eruption source parameters from active and passive microwave sensors

    NASA Astrophysics Data System (ADS)

    Montopoli, Mario; Marzano, Frank S.; Cimini, Domenico; Mereu, Luigi

    2016-04-01

    It is well known in the volcanology community that precise information on the source parameters characterising an eruption is of predominant interest for the initialization of Volcanic Transport and Dispersion Models (VTDM). The source parameters of main interest are the top altitude of the volcanic plume; the flux of the mass ejected at the emission source, which is strictly related to the cloud top altitude; the distribution of volcanic mass concentration along the vertical column; and the duration of the eruption and the erupted volume. Usually, the combination of a-posteriori field and numerical studies allows constraining the eruption source parameters for a given volcanic event, thus making possible the forecast of ash dispersion and deposition from future volcanic eruptions. So far, remote sensors working at visible and infrared channels (cameras and radiometers) have mainly been used to detect, track and provide estimates of the concentration and prevailing size of the particles propagating within ash clouds up to several thousand kilometres from the source, as well as to check, a posteriori, the accuracy of the VTDM outputs, thus testing the initial choice made for the source parameters. Acoustic waves (infrasound) and microwave fixed-scan radar (Voldorad) have also been used to infer source parameters. In this work we focus on the role of sensors operating at microwave wavelengths as complementary tools for real-time estimation of source parameters. Microwaves benefit from operability during night and day and a relatively negligible sensitivity to the presence of (non-precipitating) weather clouds, at the cost of limited coverage and coarser spatial resolution compared with infrared sensors. 
Thanks to the aforementioned advantages, the products from microwave sensors are expected to be sensitive mostly to the whole path traversed through the tephra cloud, making microwaves particularly

  12. Photochemical parameters of atmospheric source gases: accurate determination of OH reaction rate constants over atmospheric temperatures, UV and IR absorption spectra

    NASA Astrophysics Data System (ADS)

    Orkin, V. L.; Khamaganov, V. G.; Martynova, L. E.; Kurylo, M. J.

    2012-12-01

    The emissions of halogenated (Cl, Br containing) organics of both natural and anthropogenic origin contribute to the balance of and changes in the stratospheric ozone concentration. The associated chemical cycles are initiated by the photochemical decomposition of the portion of source gases that reaches the stratosphere. Reactions with hydroxyl radicals and photolysis are the main processes dictating the compound lifetime in the troposphere and release of active halogen in the stratosphere for a majority of halogen source gases. Therefore, the accuracy of photochemical data is of primary importance for the purpose of comprehensive atmospheric modeling and for simplified kinetic estimations of global impacts on the atmosphere, such as in ozone depletion (i.e., the Ozone Depletion Potential, ODP) and climate change (i.e., the Global Warming Potential, GWP). The sources of critically evaluated photochemical data for atmospheric modeling, NASA/JPL Publications and IUPAC Publications, recommend uncertainties within 10%-60% for the majority of OH reaction rate constants with only a few cases where uncertainties lie at the low end of this range. These uncertainties can be somewhat conservative because evaluations are based on the data from various laboratories obtained during the last few decades. Nevertheless, even the authors of the original experimental works rarely estimate the total combined uncertainties of the published OH reaction rate constants to be less than ca. 10%. Thus, uncertainties in the photochemical properties of potential and current atmospheric trace gases obtained under controlled laboratory conditions still may constitute a major source of uncertainty in estimating the compound's environmental impact. One of the purposes of the presentation is to illustrate the potential for obtaining accurate laboratory measurements of the OH reaction rate constant over the temperature range of atmospheric interest. 
A detailed inventory of accountable sources of

  13. Earthquake source parameters determined using the SAFOD Pilot Hole vertical seismic array

    NASA Astrophysics Data System (ADS)

    Imanishi, K.; Ellsworth, W. L.; Prejean, S. G.

    2003-12-01

    model of Sato and Hirasawa (1973). Estimated values of static stress drop were roughly 1 MPa and do not vary with seismic moment. Q values from all earthquakes were averaged at each level of the array. Average Qp and Qs range from 250 to 350 and from 300 to 400 between the top and bottom of the array, respectively. Increasing Q values as a function of depth explain well the observed decrease in high-frequency content as waves propagate toward the surface. Thus, by jointly analyzing the entire vertical array we can both accurately determine source parameters of microearthquakes and make reliable Q estimates while suppressing the trade-off between fc and Q.

  14. Evaluation of deep moonquake source parameters: Implication for fault characteristics and thermal state

    NASA Astrophysics Data System (ADS)

    Kawamura, Taichi; Lognonné, Philippe; Nishikawa, Yasuhiro; Tanaka, Satoshi

    2017-07-01

    While deep moonquakes are seismic events commonly observed on the Moon, their source mechanism is still unexplained. The two main issues are poorly constrained source parameters and incompatibilities between the thermal profiles suggested by many studies and the apparent need for brittle properties at these depths. In this study, we reinvestigated the deep moonquake data to reestimate their source parameters and uncover the characteristics of deep moonquake faults that differ from those on Earth. We first improve the estimation of source parameters through spectral analysis using "new" broadband seismic records made by combining those of the Apollo long- and short-period seismometers. We use the broader frequency band of the combined spectra to estimate corner frequencies and DC values of the spectra, which are important parameters for constraining the source parameters. We further use the spectral features to estimate seismic moments and stress drops for more than 100 deep moonquake events from three different source regions. This study revealed that deep moonquake faults are extremely smooth compared to terrestrial faults. Second, we reevaluate the brittle-ductile transition temperature that is consistent with the obtained source parameters. We show that the source parameters imply that tidal stress is the main source of the stress glut causing deep moonquakes and that the large strain rate from tides makes the brittle-ductile transition temperature higher. Higher transition temperatures open a new possibility to construct a thermal model that is consistent with deep moonquake occurrence and pressure conditions and thereby improve our understanding of the deep moonquake source mechanism.

  15. A rapid and accurate method, ventilated chamber C-history method, of measuring the emission characteristic parameters of formaldehyde/VOCs in building materials.

    PubMed

    Huang, Shaodan; Xiong, Jianyin; Zhang, Yinping

    2013-10-15

    The indoor pollution caused by formaldehyde and volatile organic compounds (VOCs) emitted from building materials has an adverse effect on people's health. It is necessary to understand and control the behavior of the emission sources. Based on a detailed mass transfer analysis of the emission process in a ventilated chamber, this paper proposes a novel method of measuring the three emission characteristic parameters, i.e., the initial emittable concentration, the diffusion coefficient and the partition coefficient. A linear correlation between the logarithm of the dimensionless concentration and time is derived; the three parameters can then be calculated from the intercept and slope of the correlation. In contrast to the closed-chamber C-history method, the test is performed under ventilated conditions, so some commonly used measurement instruments (e.g., GC/MS, HPLC) can be applied. Compared with other methods, the present method measures the three parameters rapidly and accurately, with experimental times of less than 12 h and R² ranging from 0.96 to 0.99 for the cases studied. An independent experiment was carried out to validate the developed method, and good agreement was observed between the simulations based on the determined parameters and the experiments. The present method should prove useful for quick characterization of formaldehyde/VOC emissions from indoor materials. Copyright © 2013 Elsevier B.V. All rights reserved.
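    The fitting step reduces to ordinary least squares on the logarithm of the dimensionless concentration versus time; the intercept and slope then yield the three parameters through the paper's mass-transfer relations. The data below are invented for illustration:

```python
import math

# Hypothetical measurements: time [h] and dimensionless concentration
# (invented to follow an exponential decay, as the derived correlation
# ln(C_dimless) = a + b*t predicts).
t = [1.0, 2.0, 4.0, 6.0, 8.0, 10.0, 12.0]
C = [0.82, 0.67, 0.45, 0.30, 0.20, 0.135, 0.09]

# Ordinary least squares of ln(C) against t.
ln_c = [math.log(c) for c in C]
n = len(t)
tbar = sum(t) / n
ybar = sum(ln_c) / n
b = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, ln_c)) / \
    sum((ti - tbar) ** 2 for ti in t)          # slope
a = ybar - b * tbar                            # intercept
# Goodness of fit, analogous to the R^2 of 0.96-0.99 reported above.
r2 = 1 - sum((yi - (a + b * ti)) ** 2 for ti, yi in zip(t, ln_c)) / \
        sum((yi - ybar) ** 2 for yi in ln_c)
```

In the actual method, `a` and `b` are mapped back to the initial emittable concentration, diffusion coefficient, and partition coefficient through the ventilated-chamber mass-transfer solution.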

  16. Estimating winter wheat phenological parameters: Implications for crop modeling

    USDA-ARS?s Scientific Manuscript database

    Crop parameters, such as the timing of developmental events, are critical for accurate simulation results in crop simulation models, yet uncertainty often exists in determining the parameters. Factors contributing to the uncertainty include: a) sources of variation within a plant (i.e., within diffe...

  17. Numerical investigation and electro-acoustic modeling of measurement methods for the in-duct acoustical source parameters.

    PubMed

    Jang, Seung-Ho; Ih, Jeong-Guon

    2003-02-01

    It is known that the direct method yields different results from the indirect (or load) method in measuring the in-duct acoustic source parameters of fluid machines. The load method usually comes up with a negative source resistance, although a fairly accurate prediction of radiated noise can be obtained from any method. This study is focused on the effect of the time-varying nature of fluid machines on the output results of two typical measurement methods. For this purpose, a simplified fluid machine consisting of a reservoir, a valve, and an exhaust pipe is considered as representing a typical periodic, time-varying system and the measurement situations are simulated by using the method of characteristics. The equivalent circuits for such simulations are also analyzed by considering the system as having a linear time-varying source. It is found that the results from the load method are quite sensitive to the change of cylinder pressure or valve profile, in contrast to those from the direct method. In the load method, the source admittance turns out to be predominantly dependent on the valve admittance at the calculation frequency as well as the valve and load admittances at other frequencies. In the direct method, however, the source resistance is always positive and the source admittance depends mainly upon the zeroth order of valve admittance.

  18. Joint Inversion of Earthquake Source Parameters with local and teleseismic body waves

    NASA Astrophysics Data System (ADS)

    Chen, W.; Ni, S.; Wang, Z.

    2011-12-01

    In the classical source parameter inversion algorithm of CAP (the Cut and Paste method of Zhao and Helmberger), waveform data at near distances (typically less than 500 km) are partitioned into Pnl and surface waves to account for uncertainties in the crustal models and the different amplitude weights of body and surface waves. The classical CAP algorithm has proven effective for resolving source parameters (focal mechanism, depth and moment) for earthquakes well recorded on a relatively dense seismic network. For regions covered by sparse stations, however, it is challenging to achieve precise source parameters. In such cases, a moderate earthquake of ~M6 is usually recorded on only one or two local stations with epicentral distances less than 500 km. Fortunately, an earthquake of ~M6 can be well recorded on global seismic networks. Since the ray paths for teleseismic and local body waves sample different portions of the focal sphere, combining teleseismic and local body wave data helps constrain the source parameters better. Here we present a new CAP method (CAPjoint), which exploits both teleseismic body waveforms (P and SH waves) and local waveforms (Pnl, Rayleigh and Love waves) to determine source parameters. For an earthquake in Nevada that is well recorded by a dense local network (USArray stations), we compare the results from CAPjoint with those from the traditional CAP method involving only local waveforms, and use bootstrapping statistics to show that the results derived by CAPjoint are stable and reliable. Even with only one local station included in the joint inversion, the accuracy of source parameters such as moment and strike is much improved.

  19. Line shape parameters of the 22-GHz water line for accurate modeling in atmospheric applications

    NASA Astrophysics Data System (ADS)

    Koshelev, M. A.; Golubiatnikov, G. Yu.; Vilkov, I. N.; Tretyakov, M. Yu.

    2018-01-01

    The paper concerns refining the parameters of one of the major atmospheric diagnostic lines of water vapor, at 22 GHz. Two high-resolution microwave spectrometers based on different principles of operation, together covering the pressure range from a few milliTorr up to a few Torr, were used. Special efforts were made to minimize possible sources of systematic measurement error. Satisfactory self-consistency of the obtained data was achieved, ensuring the reliability of the obtained parameters. Collisional broadening and shifting parameters of the line in pure water vapor and in its mixture with air were determined at room temperature. A comparative analysis of the obtained parameters against previous data is given. The impact of the speed-dependence effect on the line shape was also evaluated.

  20. Estimation of hyper-parameters for a hierarchical model of combined cortical and extra-brain current sources in the MEG inverse problem.

    PubMed

    Morishige, Ken-ichi; Yoshioka, Taku; Kawawaki, Dai; Hiroe, Nobuo; Sato, Masa-aki; Kawato, Mitsuo

    2014-11-01

    One of the major obstacles in estimating cortical currents from MEG signals is the disturbance caused by magnetic artifacts derived from extra-cortical current sources such as heartbeats and eye movements. To remove the effect of such extra-brain sources, we improved the hybrid hierarchical variational Bayesian method (hyVBED) proposed by Fujiwara et al. (NeuroImage, 2009). hyVBED simultaneously estimates cortical and extra-brain source currents by placing dipoles on cortical surfaces as well as extra-brain sources. This method requires EOG data for an EOG forward model that describes the relationship between eye dipoles and electric potentials. In contrast, our improved approach requires no EOG and less a priori knowledge about the current variance of extra-brain sources. We propose a new method, "extra-dipole," that optimally selects hyper-parameter values regarding current variances of the cortical surface and extra-brain source dipoles. With the selected parameter values, the cortical and extra-brain dipole currents were accurately estimated from the simulated MEG data. The performance of this method was demonstrated to be better than conventional approaches, such as principal component analysis and independent component analysis, which use only statistical properties of MEG signals. Furthermore, we applied our proposed method to measured MEG data during covert pursuit of a smoothly moving target and confirmed its effectiveness. Copyright © 2014 Elsevier Inc. All rights reserved.

  1. Prediction of broadband ground-motion time histories: Hybrid low/high-frequency method with correlated random source parameters

    USGS Publications Warehouse

    Liu, P.; Archuleta, R.J.; Hartzell, S.H.

    2006-01-01

    We present a new method for calculating broadband time histories of ground motion based on a hybrid low-frequency/high-frequency approach with correlated source parameters. Using a finite-difference method we calculate low-frequency synthetics (< ∼1 Hz) in a 3D velocity structure. We also compute broadband synthetics in a 1D velocity model using a frequency-wavenumber method. The low frequencies from the 3D calculation are combined with the high frequencies from the 1D calculation by matched filtering at a crossover frequency of 1 Hz. The source description, common to both the 1D and 3D synthetics, is based on correlated random distributions for the slip amplitude, rupture velocity, and rise time on the fault. This source description allows source parameters to be specified independently of any a priori inversion results. In our broadband modeling we include correlation between slip amplitude, rupture velocity, and rise time, as suggested by dynamic fault modeling. The method of using correlated random source parameters is flexible and can easily be modified as our understanding of earthquake ruptures evolves. A realistic attenuation model is common to both the 3D and 1D calculations that form the low- and high-frequency components of the broadband synthetics. The value of Q is a function of the local shear-wave velocity. To produce more accurate high-frequency amplitudes and durations, the 1D synthetics are corrected with a randomized, frequency-dependent radiation pattern. The 1D synthetics are further corrected for local site and nonlinear soil effects by using a 1D nonlinear propagation code and a generic velocity structure appropriate for the site’s National Earthquake Hazards Reduction Program (NEHRP) site classification. The entire procedure is validated by comparison with the 1994 Northridge, California, strong ground motion data set. The bias and error found here for response spectral acceleration are similar to the best results
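
    The crossover merge described above can be sketched with complementary zero-phase filters; this is an illustrative reconstruction, not the authors' code, and the Butterworth filter order is an assumption:

    ```python
    import numpy as np
    from scipy import signal

    def combine_broadband(lf_syn, hf_syn, dt, fc=1.0, order=4):
        """Merge low-frequency (3D) and high-frequency (1D) synthetics with
        complementary Butterworth filters at the crossover frequency fc (Hz)."""
        nyq = 0.5 / dt
        b_lo, a_lo = signal.butter(order, fc / nyq, btype="low")
        b_hi, a_hi = signal.butter(order, fc / nyq, btype="high")
        # Zero-phase filtering avoids relative time shifts between the two bands.
        low = signal.filtfilt(b_lo, a_lo, lf_syn)
        high = signal.filtfilt(b_hi, a_hi, hf_syn)
        return low + high

    # Toy check: a 0.2 Hz "3D" synthetic merged with a broadband "1D" synthetic
    # should reproduce the full two-component signal.
    dt = 0.01
    t = np.arange(0.0, 40.0, dt)
    lf = np.sin(2 * np.pi * 0.2 * t)   # below the 1 Hz crossover
    hf = np.sin(2 * np.pi * 5.0 * t)   # above the 1 Hz crossover
    bb = combine_broadband(lf, lf + hf, dt)
    ```

    In the toy check the low-frequency input carries only the sub-crossover component while the broadband input carries both, mimicking the 3D and 1D synthetics respectively.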

  2. A Supervised Statistical Learning Approach for Accurate Legionella pneumophila Source Attribution during Outbreaks

    PubMed Central

    Buultjens, Andrew H.; Chua, Kyra Y. L.; Baines, Sarah L.; Kwong, Jason; Gao, Wei; Cutcher, Zoe; Adcock, Stuart; Ballard, Susan; Schultz, Mark B.; Tomita, Takehiro; Subasinghe, Nela; Carter, Glen P.; Pidot, Sacha J.; Franklin, Lucinda; Seemann, Torsten; Gonçalves Da Silva, Anders

    2017-01-01

    ABSTRACT Public health agencies are increasingly relying on genomics during Legionnaires' disease investigations. However, the causative bacterium (Legionella pneumophila) has an unusual population structure, with extreme temporal and spatial genome sequence conservation. Furthermore, Legionnaires' disease outbreaks can be caused by multiple L. pneumophila genotypes in a single source. These factors can confound cluster identification using standard phylogenomic methods. Here, we show that a statistical learning approach based on L. pneumophila core genome single nucleotide polymorphism (SNP) comparisons eliminates ambiguity for defining outbreak clusters and accurately predicts exposure sources for clinical cases. We illustrate the performance of our method by genome comparisons of 234 L. pneumophila isolates obtained from patients and cooling towers in Melbourne, Australia, between 1994 and 2014. This collection included one of the largest reported Legionnaires' disease outbreaks, which involved 125 cases at an aquarium. Using only sequence data from L. pneumophila cooling tower isolates and including all core genome variation, we built a multivariate model using discriminant analysis of principal components (DAPC) to find cooling tower-specific genomic signatures and then used it to predict the origin of clinical isolates. Model assignments were 93% congruent with epidemiological data, including the aquarium Legionnaires' disease outbreak and three other unrelated outbreak investigations. We applied the same approach to a recently described investigation of Legionnaires' disease within a UK hospital and observed a model predictive ability of 86%. We have developed a promising means to breach L. pneumophila genetic diversity extremes and provide objective source attribution data for outbreak investigations. 
IMPORTANCE Microbial outbreak investigations are moving to a paradigm where whole-genome sequencing and phylogenetic trees are used to support epidemiological

  3. TRIPPy: Python-based Trailed Source Photometry

    NASA Astrophysics Data System (ADS)

    Fraser, Wesley C.; Alexandersen, Mike; Schwamb, Megan E.; Marsset, Michael E.; Pike, Rosemary E.; Kavelaars, JJ; Bannister, Michele T.; Benecchi, Susan; Delsanti, Audrey

    2016-05-01

    TRIPPy (TRailed Image Photometry in Python) uses a pill-shaped aperture, a rectangle capped by two semicircular ends and described by three parameters (trail length, angle, and radius), to improve photometry of moving sources over that achieved with circular apertures. It can generate accurate model and trailed point-spread functions from stationary background sources in sidereally tracked images. Appropriate aperture correction provides accurate, unbiased flux measurement. TRIPPy requires numpy, scipy, matplotlib, Astropy (ascl:1304.002), and stsci.numdisplay; emcee (ascl:1303.002) and SExtractor (ascl:1010.064) are optional.
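
    As a sketch of the pill geometry only (not TRIPPy's actual API), the aperture area reduces to a rectangle of width twice the radius plus one full circle contributed by the two semicircular end caps:

    ```python
    import math

    def pill_area(length, radius):
        """Area of a 'pill' aperture: a rectangle (trail length x 2*radius)
        capped by two semicircular ends of the same radius."""
        return 2.0 * radius * length + math.pi * radius ** 2

    # A zero-length trail degenerates to an ordinary circular aperture,
    # which is why the pill generalizes circular-aperture photometry.
    assert abs(pill_area(0.0, 3.0) - math.pi * 9.0) < 1e-12
    ```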

  4. An adaptive regularization parameter choice strategy for multispectral bioluminescence tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng Jinchao; Qin Chenghu; Jia Kebin

    2011-11-15

    were used rather than monochromatic data. Furthermore, the study conducted using an adaptive regularization parameter demonstrated our ability to accurately localize the bioluminescent source. With the adaptively estimated regularization parameter, the reconstructed center position of the source was (20.37, 31.05, 12.95) mm, and the distance to the real source was 0.63 mm. The results of the dual-source experiments further showed that our algorithm could localize the bioluminescent sources accurately. The authors then presented experimental evidence that the proposed algorithm is computationally more efficient than the heuristic method. The effectiveness of the new algorithm was also confirmed by comparison with the L-curve method. Furthermore, various initial guesses of the regularization parameter were used to illustrate the convergence of our algorithm. Finally, an in vivo mouse experiment further illustrates the effectiveness of the proposed algorithm. Conclusions: Using numerical, physical phantom, and in vivo examples, we demonstrated that the bioluminescent sources could be reconstructed accurately with automatically determined regularization parameters. The proposed algorithm outperformed both the heuristic regularization parameter choice method and the L-curve method in terms of computational speed and localization error.

  5. Radiation Parameters of High Dose Rate Iridium -192 Sources

    NASA Astrophysics Data System (ADS)

    Podgorsak, Matthew B.

    A lack of physical data for high dose rate (HDR) Ir-192 sources has necessitated the use of basic radiation parameters measured with low dose rate (LDR) Ir-192 seeds and ribbons in HDR dosimetry calculations. A rigorous examination of the radiation parameters of several HDR Ir-192 sources has shown that this extension of physical data from LDR to HDR Ir-192 may be inaccurate. Uncertainty in any of the basic radiation parameters used in dosimetry calculations compromises the accuracy of the calculated dose distribution and the subsequent dose delivery. Dose errors of up to 0.3%, 6%, and 2% can result from the use of currently accepted values for the half-life, exposure rate constant, and dose buildup effect, respectively. Since an accuracy of 5% in the delivered dose is essential to prevent severe complications or tumor regrowth, the use of basic physical constants with uncertainties approaching 6% is unacceptable. A systematic evaluation of the pertinent radiation parameters contributes to a reduction in the overall uncertainty in HDR Ir-192 dose delivery. Moreover, the results of the studies described in this thesis contribute significantly to the establishment of standardized numerical values to be used in HDR Ir-192 dosimetry calculations.
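
    To see how a half-life uncertainty of the kind discussed above propagates into dose delivery, the decay-correction error can be sketched as follows; the 0.3 d half-life offset and the 30-day interval are illustrative numbers, not values from the thesis:

    ```python
    import math

    def activity_error(t_days, half_life_true, half_life_assumed):
        """Fractional error in the decay-corrected source activity after
        t_days when an inaccurate half-life is used in the correction."""
        lam_true = math.log(2.0) / half_life_true
        lam_assumed = math.log(2.0) / half_life_assumed
        # Ratio of the true to the assumed decay factor, minus one.
        return math.exp((lam_true - lam_assumed) * t_days) - 1.0

    # Illustrative: assuming a 0.3 d error on the ~73.8 d Ir-192 half-life,
    # 30 days after calibration the inferred activity (and hence the
    # calculated dose rate) is off by roughly a tenth of a percent.
    err = activity_error(30.0, 73.83, 74.13)
    ```

    The same relation shows why the thesis's quoted uncertainties in the exposure rate constant (up to 6%) dominate the half-life contribution (up to 0.3%).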

  6. PyVCI: A flexible open-source code for calculating accurate molecular infrared spectra

    NASA Astrophysics Data System (ADS)

    Sibaev, Marat; Crittenden, Deborah L.

    2016-06-01

    The PyVCI program package is a general purpose open-source code for simulating accurate molecular spectra, based upon force field expansions of the potential energy surface in normal mode coordinates. It includes harmonic normal coordinate analysis and vibrational configuration interaction (VCI) algorithms, implemented primarily in Python for accessibility but with time-consuming routines written in C. Coriolis coupling terms may be optionally included in the vibrational Hamiltonian. Non-negligible VCI matrix elements are stored in sparse matrix format to alleviate the diagonalization problem. CPU and memory requirements may be further controlled by algorithmic choices and/or numerical screening procedures, and recommended values are established by benchmarking using a test set of 44 molecules for which accurate analytical potential energy surfaces are available. Force fields in normal mode coordinates are obtained from the PyPES library of high quality analytical potential energy surfaces (to 6th order) or by numerical differentiation of analytic second derivatives generated using the GAMESS quantum chemical program package (to 4th order).

  7. Disambiguating past events: Accurate source memory for time and context depends on different retrieval processes.

    PubMed

    Persson, Bjorn M; Ainge, James A; O'Connor, Akira R

    2016-07-01

    Current animal models of episodic memory are usually based on demonstrating integrated memory for what happened, where it happened, and when an event took place. These models aim to capture the testable features of the definition of human episodic memory which stresses the temporal component of the memory as a unique piece of source information that allows us to disambiguate one memory from another. Recently though, it has been suggested that a more accurate model of human episodic memory would include contextual rather than temporal source information, as humans' memory for time is relatively poor. Here, two experiments were carried out investigating human memory for temporal and contextual source information, along with the underlying dual process retrieval processes, using an immersive virtual environment paired with a 'Remember-Know' memory task. Experiment 1 (n=28) showed that contextual information could only be retrieved accurately using recollection, while temporal information could be retrieved using either recollection or familiarity. Experiment 2 (n=24), which used a more difficult task, resulting in reduced item recognition rates and therefore less potential for contamination by ceiling effects, replicated the pattern of results from Experiment 1. Dual process theory predicts that it should only be possible to retrieve source context from an event using recollection, and our results are consistent with this prediction. That temporal information can be retrieved using familiarity alone suggests that it may be incorrect to view temporal context as analogous to other typically used source contexts. This latter finding supports the alternative proposal that time since presentation may simply be reflected in the strength of memory trace at retrieval - a measure ideally suited to trace strength interrogation using familiarity, as is typically conceptualised within the dual process framework. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. Analysis of glottal source parameters in Parkinsonian speech.

    PubMed

    Hanratty, Jane; Deegan, Catherine; Walsh, Mary; Kirkpatrick, Barry

    2016-08-01

    Diagnosis and monitoring of Parkinson's disease presents a number of challenges, as there is no definitive biomarker despite the broad range of symptoms. Research is ongoing to produce objective measures that can either diagnose Parkinson's or act as a decision support tool. Recent research on speech-based measures has demonstrated promising results. This study aims to investigate the characteristics of the glottal source signal in Parkinsonian speech. An experiment is conducted in which a selection of glottal parameters are tested for their ability to discriminate between healthy and Parkinsonian speech. Results for each glottal parameter are presented for a database of 50 healthy speakers and a database of 16 speakers with Parkinsonian speech symptoms. Receiver operating characteristic (ROC) curves were employed to analyse the results, and the area under the ROC curve (AUC) was used to quantify the performance of each glottal parameter. The results indicate that glottal parameters can be used to discriminate between healthy and Parkinsonian speech, although results varied for each parameter tested. For the task of separating healthy and Parkinsonian speech, 2 of the 7 glottal parameters tested produced AUC values of over 0.9.
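
    The AUC used here is equivalent to the Mann-Whitney U statistic; a minimal sketch (not the study's code) for scores drawn from the two speaker groups:

    ```python
    import numpy as np

    def auc(scores_pos, scores_neg):
        """Area under the ROC curve via the Mann-Whitney U statistic:
        the probability that a positive case scores above a negative
        case, with ties counting one half."""
        pos = np.asarray(scores_pos, dtype=float)
        neg = np.asarray(scores_neg, dtype=float)
        greater = (pos[:, None] > neg[None, :]).sum()
        ties = (pos[:, None] == neg[None, :]).sum()
        return (greater + 0.5 * ties) / (pos.size * neg.size)

    # Perfectly separated groups give AUC = 1; identical groups give 0.5.
    assert auc([3, 4, 5], [0, 1, 2]) == 1.0
    assert auc([1, 2], [1, 2]) == 0.5
    ```

    A glottal parameter with AUC > 0.9 thus ranks a Parkinsonian speaker above a healthy one more than 90% of the time.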

  9. A probabilistic approach for the estimation of earthquake source parameters from spectral inversion

    NASA Astrophysics Data System (ADS)

    Supino, M.; Festa, G.; Zollo, A.

    2017-12-01

    The amplitude spectrum of a seismic signal related to an earthquake source carries information about the size of the rupture and the moment, stress, and energy release. Furthermore, it can be used to characterize the Green's function of the medium crossed by the seismic waves. We describe the earthquake amplitude spectrum assuming a generalized Brune (1970) source model, with direct P and S waves propagating in a layered velocity model characterized by a frequency-independent Q attenuation factor. The observed displacement spectrum then depends on three source parameters: the seismic moment (through the low-frequency spectral level), the corner frequency (a proxy for the fault length), and the high-frequency decay parameter. These parameters are strongly correlated with each other and with the quality factor Q; a rigorous estimation of the associated uncertainties and parameter resolution is thus needed to obtain reliable estimates. In this work, the uncertainties are characterized by adopting a probabilistic approach to parameter estimation. Assuming an L2-norm-based misfit function, we perform a global exploration of the parameter space to find the absolute minimum of the cost function, and then explore the joint a posteriori probability density function around that minimum to extract the correlation matrix of the parameters. The global exploration relies on building a Markov chain in the parameter space and on combining deterministic minimization with random exploration of the space (the basin-hopping technique). The joint pdf is built from the misfit function using the maximum-likelihood principle and assuming a Gaussian-like distribution of the parameters. It is then computed on a grid centered at the global minimum of the cost function. Numerical integration of the pdf finally provides the mean, variance, and correlation matrix associated with the set of best-fit parameters describing the model. 
Synthetic tests are performed to
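
    A minimal sketch of this estimation scheme, assuming a generalized Brune spectral form and scipy's basin-hopping in place of the authors' implementation (the parameterization and starting point below are illustrative):

    ```python
    import numpy as np
    from scipy.optimize import basinhopping

    def brune_spectrum(f, omega0, fc, gamma):
        """Generalized Brune displacement amplitude spectrum: low-frequency
        plateau omega0, corner frequency fc, high-frequency decay gamma."""
        return omega0 / (1.0 + (f / fc) ** gamma)

    def misfit(params, f, obs):
        """L2 misfit in log-amplitude space; omega0 and fc are carried as
        log10 values so they stay positive during the search."""
        log_omega0, log_fc, gamma = params
        model = brune_spectrum(f, 10.0 ** log_omega0, 10.0 ** log_fc, gamma)
        return np.sum((np.log10(obs) - np.log10(model)) ** 2)

    # Synthetic test: recover known parameters from a noise-free spectrum.
    f = np.logspace(-1, 2, 200)
    obs = brune_spectrum(f, omega0=2.0e-6, fc=5.0, gamma=2.0)
    result = basinhopping(
        misfit, x0=[-7.0, 0.0, 1.5],
        minimizer_kwargs={"args": (f, obs), "method": "Nelder-Mead"},
        niter=20)
    log_omega0, log_fc, gamma = result.x
    ```

    The random hops play the role of the global exploration; each hop is followed by a deterministic local minimization, mirroring the combination described in the abstract.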

  10. How sensitive is earthquake ground motion to source parameters? Insights from a numerical study in the Mygdonian basin

    NASA Astrophysics Data System (ADS)

    Chaljub, Emmanuel; Maufroy, Emeline; deMartin, Florent; Hollender, Fabrice; Guyonnet-Benaize, Cédric; Manakou, Maria; Savvaidis, Alexandros; Kiratzi, Anastasia; Roumelioti, Zaferia; Theodoulidis, Nikos

    2014-05-01

    Understanding the origin of the variability of earthquake ground motion is critical for seismic hazard assessment. Here we present the results of a numerical analysis of the sensitivity of earthquake ground motion to seismic source parameters, focusing on the Mygdonian basin near Thessaloniki (Greece). We use an extended model of the basin (65 km [EW] x 50 km [NS]) developed during the Euroseistest Verification and Validation Project. The numerical simulations are performed with two independent codes, both implementing the Spectral Element Method. They rely on a robust, semi-automated mesh design strategy together with a simple homogenization procedure to define a smooth velocity model of the basin. Our simulations are accurate up to 4 Hz and include the effects of surface topography and intrinsic attenuation. Two kinds of simulations are performed: (1) direct simulations of the surface ground motion for real regional events with various back azimuths with respect to the center of the basin; and (2) reciprocity-based calculations in which the ground motion due to 980 different seismic sources is computed at a few stations in the basin. In the reciprocity-based calculations, we consider epicentral distances from 2.5 km to 40 km and source depths from 1 km to 15 km, and we span the range of possible back azimuths in 10 degree bins. We will present results showing (1) the sensitivity of ground motion parameters to the location and focal mechanism of the seismic sources; and (2) the variability of the amplification caused by site effects, as measured by standard spectral ratios, with respect to the source characteristics

  11. Shallow seismic source parameter determination using intermediate-period surface wave amplitude spectra

    NASA Astrophysics Data System (ADS)

    Fox, Benjamin D.; Selby, Neil D.; Heyburn, Ross; Woodhouse, John H.

    2012-09-01

    Estimating reliable depths for shallow seismic sources is important in both seismo-tectonic studies and seismic discrimination studies. Surface wave excitation is sensitive to source depth, especially at intermediate and short periods, owing to the approximately exponential decay of surface wave displacements with depth. A new method is presented here to retrieve earthquake source parameters from regional and teleseismic intermediate-period (100-15 s) fundamental-mode surface wave recordings. This method makes use of advances in mapping global dispersion, allowing higher-frequency surface wave recordings at regional and teleseismic distances to be used with more confidence than in previous studies and hence improving the resolution of depth estimates. Synthetic amplitude spectra are generated using surface wave theory combined with a great circle path approximation, and a grid of double-couple sources is compared with the data. Source parameters producing the best-fitting amplitude spectra are identified by minimizing the least-squares misfit in logarithmic amplitude space. The F-test is used to search the solution space for statistically acceptable parameters, and the ranges of these variables are used to place constraints on the best-fitting source. Estimates of focal mechanism, depth, and scalar seismic moment are determined for 20 small to moderate sized (4.3 ≤ Mw ≤ 6.4) earthquakes. These earthquakes are situated across a wide range of geographic and tectonic locations and describe a range of faulting styles over the depth range 4-29 km. For the larger earthquakes, comparisons with other studies are favourable; however, existing source determination procedures, such as the CMT technique, cannot be performed for the smaller events. By reducing the magnitude threshold at which robust source parameters can be determined, the accuracy, especially at shallow depths, of seismo-tectonic studies, seismic hazard assessments, and seismic discrimination investigations can

  12. Eruptive Source Parameters from Near-Source Gravity Waves Induced by Large Vulcanian eruptions

    NASA Astrophysics Data System (ADS)

    Barfucci, Giulia; Ripepe, Maurizio; De Angelis, Silvio; Lacanna, Giorgio; Marchetti, Emanuele

    2016-04-01

    The sudden ejection of hot material from a volcanic vent perturbs the atmosphere, generating a broad spectrum of pressure oscillations from acoustic infrasound (<10 Hz) to gravity waves (<0.03 Hz). However, observations of gravity waves excited by volcanic eruptions are still rare, mostly limited to large sub-plinian eruptions and frequently made at large distance from the source (>100 km). Atmospheric gravity waves are induced by perturbations of the hydrostatic equilibrium of the atmosphere and propagate within a medium with internal density stratification. They are initiated by mechanisms that displace the atmosphere, such as the injection of a volcanic ash plume during an eruption. We use gravity waves to infer eruptive source parameters, such as the mass eruption rate (MER) and the duration of the eruption, which may be used as inputs to volcanic ash transport and dispersion models. We present the analysis of near-field observations (<7 km) of atmospheric gravity waves, with frequencies of 0.97 and 1.15 mHz, recorded by a pressure sensor network during two explosions in July and December 2008 at Soufrière Hills Volcano, Montserrat. We show that the gravity waves at Soufrière Hills Volcano originate above the volcanic dome and propagate with apparent horizontal velocities of 8-10 m/s. Assuming a single-point mass injection source model, we constrain the source location at ~3.5 km a.s.l., above the vent, the duration of the gas thrust at <140 s, and MERs of 2.6 and 5.4 x 10^7 kg/s for the two eruptive events. The source duration and MER derived by modeling the gravity waves are fully compatible with other independent estimates from field observations. Our work strongly supports the use of gravity waves to model eruption source parameters; it can have a strong impact on our ability to monitor volcanic eruptions at large distance and may find future application in assessing the relative magnitude of volcanic explosions.

  13. Post-blasting seismicity in Rudna copper mine, Poland - source parameters analysis.

    NASA Astrophysics Data System (ADS)

    Caputa, Alicja; Rudziński, Łukasz; Talaga, Adam

    2017-04-01

    A major hazard in Polish copper mines is high seismicity and the corresponding rockbursts. Many methods are used to reduce the seismic hazard; among the most effective is preventive blasting in potentially hazardous mining panels. The method is expected to provoke small to moderate tremors (up to M2.0) and thereby reduce stress accumulation in the rock mass. This work presents an analysis of post-blasting events in the Rudna copper mine, Poland. Using full moment tensor (MT) inversion and seismic spectral analysis, we try to identify characteristic features of post-blasting seismic sources. Source parameters estimated for post-blasting events are compared with the parameters of non-provoked mining events that occurred in the vicinity of the provoked sources. Our studies show that the focal mechanisms of events which occurred after blasts have similar MT decompositions; namely, they are characterized by a relatively strong isotropic component compared with that of non-provoked events. Source parameters obtained from spectral analysis also show that provoked seismicity has a distinct source physics; among other indicators, this is visible in the S- to P-wave energy ratio, which is higher for non-provoked events. The comparison of all our results reveals three possible groups of sources: (a) events occurring just after blasts, (b) events occurring from 5 min to 24 h after blasts, and (c) non-provoked seismicity (more than 24 h after blasting). Acknowledgements: This work was supported within statutory activities No. 3841/E-41/S/2016 of the Ministry of Science and Higher Education of Poland.

  14. Multi-scale comparison of source parameter estimation using empirical Green's function approach

    NASA Astrophysics Data System (ADS)

    Chen, X.; Cheng, Y.

    2015-12-01

    Analysis of earthquake source parameters requires correction for path effects, site response, and instrument response. The empirical Green's function (EGF) method is one of the most effective ways of removing path effects and station responses, by taking the spectral ratio between a larger and a smaller event. The traditional EGF method requires identifying suitable event pairs and analyzing each event individually. This allows high-quality estimates for strictly selected events; however, the number of resolvable source parameters is limited, which challenges the interpretation of spatio-temporal coherency. On the other hand, methods that exploit the redundancy of event-station pairs have been proposed, which use stacking to obtain systematic source parameter estimates for a large number of events at once. This allows us to examine large numbers of events systematically, facilitating analysis of spatio-temporal patterns and scaling relationships. However, it is unclear how much resolution is sacrificed in the process. In addition to the empirical Green's function calculation, the choice of model parameters and fitting methods also introduces biases. Here, using two focused regional arrays, the OBS array in the Mendocino region and the borehole array in the Salton Sea geothermal field, I compare the results from large-scale stacking analysis, small-scale cluster analysis, and single event-pair analysis with different fitting methods across two very different tectonic environments, in order to quantify the consistencies and inconsistencies in source parameter estimates and the associated problems.
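
    The core of the spectral-ratio idea can be sketched as follows; the Brune omega-squared form and the simple grid search are illustrative choices, not the specific fitting methods compared in the abstract:

    ```python
    import numpy as np

    def ratio_model(f, moment_ratio, fc_big, fc_small):
        """Spectral ratio of a larger over a smaller colocated event,
        each with a Brune omega-squared spectrum. Path and site terms
        cancel in the ratio because both events share them."""
        return moment_ratio * (1.0 + (f / fc_small) ** 2) / (1.0 + (f / fc_big) ** 2)

    # Grid search for the larger event's corner frequency from a
    # synthetic, noise-free spectral ratio.
    f = np.logspace(-1, 2, 300)
    obs = ratio_model(f, moment_ratio=100.0, fc_big=2.0, fc_small=15.0)
    grid = np.linspace(0.5, 10.0, 96)
    errs = [np.sum((np.log10(obs) - np.log10(ratio_model(f, 100.0, fc, 15.0))) ** 2)
            for fc in grid]
    best_fc = grid[int(np.argmin(errs))]
    ```

    In practice the moment ratio and the EGF corner frequency must be solved for as well, which is where the choice of fitting method introduces the biases discussed above.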

  15. A Weibull statistics-based lignocellulose saccharification model and a built-in parameter accurately predict lignocellulose hydrolysis performance.

    PubMed

    Wang, Mingyu; Han, Lijuan; Liu, Shasha; Zhao, Xuebing; Yang, Jinghua; Loh, Soh Kheang; Sun, Xiaomin; Zhang, Chenxi; Fang, Xu

    2015-09-01

    Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to the Weibull distribution, we discovered that Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the type of substrate, enzyme, or saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and of experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and by analysis of the glucose production levels as the λ and n values change. In conclusion, the constructed Weibull statistics-based model can accurately predict lignocellulose hydrolysis behavior, and the λ parameter can be used to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and of the λ value in saccharification performance assessment are discussed. Copyright © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
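
    A hedged sketch of such a model (the paper's exact parameterization may differ): fitting a Weibull-type time course y(t) = y_max(1 - exp(-(t/λ)^n)) recovers λ as the characteristic time at which about 63.2% of the final yield is reached:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def weibull_saccharification(t, y_max, lam, n):
        """Weibull-type time course of lignocellulose hydrolysis:
        lam is the characteristic time, n the shape parameter."""
        return y_max * (1.0 - np.exp(-(t / lam) ** n))

    # Recover the parameters from a synthetic, noise-free time course.
    t = np.linspace(0.5, 96.0, 40)          # hours, illustrative
    y = weibull_saccharification(t, y_max=0.9, lam=12.0, n=0.8)
    popt, _ = curve_fit(weibull_saccharification, t, y, p0=[1.0, 10.0, 1.0])
    y_max_fit, lam_fit, n_fit = popt
    ```

    Comparing λ across substrates or enzyme loadings then gives a single scalar ranking of saccharification performance, which is how the abstract proposes using it.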

  16. Orbital Parameters for Two "IGR" Sources

    NASA Astrophysics Data System (ADS)

    Thompson, Thomas; Tomsick, J.; Rothschild, R.; in't Zand, J.; Walter, R.

    2006-09-01

    Using recent and archival Rossi X-ray Timing Explorer observations of the heavily absorbed X-ray pulsars IGR J17252-3616 (hereafter J17252) and IGR J16393-4643 (hereafter J16393), we carried out a pulse timing analysis to determine the orbital parameters of the two binary systems. We find that both INTEGRAL sources are high-mass X-ray binary (HMXB) systems. The orbital solution for J17252 has a projected semi-major axis of 101 ± 3 lt-s and a period of 9.7403 ± 0.0004 days, implying a mass function of 11.7 ± 1.2 M_sun. The orbital solution for J16393, on the other hand, is not unambiguously determined, owing to weaker and less consistent pulsations. The most likely orbital solution has a projected semi-major axis of 43 ± 2 lt-s and an orbital period of 3.6875 ± 0.0006 days, yielding a mass function of 6.5 ± 1.1 M_sun. The orbits of both sources are consistent with circular, with e < 0.2-0.25 at the 90% confidence level. The orbital and pulse periods of each source place the systems in the region of the Corbet diagram populated by supergiant wind accretors. J17252 is an eclipsing binary system and provides an exciting opportunity to obtain a neutron star mass measurement.
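
    The quoted mass functions follow from the standard relation f(M) = 4π²(a_x sin i)³ / (G P²); a quick check with the J17252 solution (constants rounded, so the result is approximate):

    ```python
    import math

    G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
    C = 2.998e8        # speed of light, m/s
    M_SUN = 1.989e30   # solar mass, kg

    def mass_function(ax_lts, period_days):
        """Binary mass function f(M) = 4*pi^2 * (a_x sin i)^3 / (G * P^2),
        in solar masses, with the projected semi-major axis a_x sin i
        given in light-seconds and the period in days."""
        a = ax_lts * C
        p = period_days * 86400.0
        return 4.0 * math.pi ** 2 * a ** 3 / (G * p ** 2) / M_SUN

    # The J17252 solution (101 lt-s, 9.7403 d) reproduces ~11.7 M_sun.
    f_m = mass_function(101.0, 9.7403)
    ```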

  17. Optimization of tissue physical parameters for accurate temperature estimation from finite-element simulation of radiofrequency ablation.

    PubMed

    Subramanian, Swetha; Mast, T Douglas

    2015-10-07

    Computational finite element models are commonly used for the simulation of radiofrequency ablation (RFA) treatments. However, the accuracy of these simulations is limited by the lack of precise knowledge of tissue parameters. In this technical note, an inverse solver based on the unscented Kalman filter (UKF) is proposed to optimize values for specific heat, thermal conductivity, and electrical conductivity resulting in accurately simulated temperature elevations. A total of 15 RFA treatments were performed on ex vivo bovine liver tissue. For each RFA treatment, 15 finite-element simulations were performed using a set of deterministically chosen tissue parameters to estimate the mean and variance of the resulting tissue ablation. The UKF was implemented as an inverse solver to recover the specific heat, thermal conductivity, and electrical conductivity corresponding to the measured area of the ablated tissue region, as determined from gross tissue histology. These tissue parameters were then employed in the finite element model to simulate the position- and time-dependent tissue temperature. Results show good agreement between simulated and measured temperature.

  18. An almost-parameter-free harmony search algorithm for groundwater pollution source identification.

    PubMed

    Jiang, Simin; Zhang, Yali; Wang, Pei; Zheng, Maohui

    2013-01-01

    The spatiotemporal characterization of unknown sources of groundwater pollution is frequently encountered in environmental problems. This study adopts a simulation-optimization approach that combines a contaminant transport simulation model with a heuristic harmony search algorithm to identify unknown pollution sources. In the proposed methodology, an almost-parameter-free harmony search algorithm is developed. The performance of this methodology is evaluated on an illustrative groundwater pollution source identification problem, and the identified results indicate that the proposed almost-parameter-free harmony search algorithm-based optimization model can give satisfactory estimations, even when the irregular geometry, erroneous monitoring data, and prior information shortage of potential locations are considered.
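
    A minimal harmony search sketch in the spirit of the "almost parameter-free" variant; the adaptation rules for HMCR, PAR, and bandwidth below are illustrative assumptions, not the paper's algorithm:

    ```python
    import random

    def harmony_search(objective, bounds, hms=20, iters=2000, seed=42):
        """Minimal harmony search minimizer. The usual hand-tuned parameters
        (HMCR, PAR, bandwidth) are derived from the iteration count and the
        variable bounds, echoing the 'almost parameter-free' idea."""
        rng = random.Random(seed)
        dim = len(bounds)
        memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
        scores = [objective(h) for h in memory]
        for it in range(iters):
            hmcr = 0.7 + 0.25 * it / iters   # rely on memory more over time
            par = 0.45 - 0.3 * it / iters    # pitch-adjust less over time
            bw = [(hi - lo) * 0.1 * (1.0 - it / iters) for lo, hi in bounds]
            new = []
            for d, (lo, hi) in enumerate(bounds):
                if rng.random() < hmcr:
                    x = memory[rng.randrange(hms)][d]
                    if rng.random() < par:
                        x += rng.uniform(-bw[d], bw[d])
                else:
                    x = rng.uniform(lo, hi)
                new.append(min(hi, max(lo, x)))
            score = objective(new)
            worst = max(range(hms), key=lambda i: scores[i])
            if score < scores[worst]:
                memory[worst], scores[worst] = new, score
        best = min(range(hms), key=lambda i: scores[i])
        return memory[best], scores[best]

    # Sanity check on a smooth 2-D bowl with its minimum at (1, -2); a real
    # application would wrap a contaminant transport simulation instead.
    sol, val = harmony_search(lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2,
                              [(-5.0, 5.0), (-5.0, 5.0)])
    ```

    In the simulation-optimization setting of the abstract, the objective would measure the mismatch between simulated and monitored concentrations for a candidate source configuration.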

  19. High-Resolution Source Parameter and Site Characteristics Using Near-Field Recordings - Decoding the Trade-off Problems Between Site and Source

    NASA Astrophysics Data System (ADS)

    Chen, X.; Abercrombie, R. E.; Pennington, C.

    2017-12-01

    Recorded seismic waveforms include contributions from earthquake source properties and propagation effects, leading to long-standing trade-off problems between site/path effects and source effects. With near-field recordings, the path effect is relatively small, so the trade-off problem can be simplified to one between source and site effects (the latter commonly referred to as the "kappa value"). This problem is especially significant for small earthquakes, whose corner frequencies lie in ranges similar to typical kappa values, so direct spectrum fitting often leads to systematic biases correlated with corner frequency and magnitude. In response to the significantly increased seismicity rate in Oklahoma, several local networks have been deployed following major earthquakes: the Prague, Pawnee and Fairview earthquakes. Each network provides dense observations within 20 km of the fault zone, recording tens of thousands of aftershocks between M1 and M3. Using near-field recordings in the Prague area, we apply a stacking approach to separate path/site and source effects. The resulting source parameters are consistent with parameters derived from ground motion and spectral ratio methods in other studies, and they exhibit spatial coherence within the fault zone for different fault patches. We apply these source parameter constraints in an analysis of kappa values for stations within 20 km of the fault zone. The resulting kappa values show significantly reduced variability compared to those from direct spectral fitting without constraints on the source spectrum, and they are not biased by earthquake magnitude. With these improvements, we plan to apply the stacking analysis to other local arrays to analyze source properties and site characteristics. For selected individual earthquakes, we will also use individual-pair empirical Green's function (EGF) analysis to validate the source parameter estimates.

  20. Impact of various operating modes on performance and emission parameters of small heat source

    NASA Astrophysics Data System (ADS)

    Vician, Peter; Holubčík, Michal; Palacka, Matej; Jandačka, Jozef

    2016-06-01

    This thesis deals with the measurement of the performance and emission parameters of a small biomass-fired heat source in each of its operating modes. A pellet boiler with an output of 18 kW was used as the heat source. The work includes the design of an experimental device for measuring the impact of changes in air supply, and a method for controlling the power and emission parameters of heat sources burning woody biomass. The work describes the main factors that affect the combustion process and analyzes the emission measurements at the heat source. The experimental results give the performance and emission values for the different operating modes of the boiler, which serve as a decisive factor in choosing the appropriate mode.

  1. Modeling Volcanic Eruption Parameters by Near-Source Internal Gravity Waves.

    PubMed

    Ripepe, M; Barfucci, G; De Angelis, S; Delle Donne, D; Lacanna, G; Marchetti, E

    2016-11-10

    Volcanic explosions release large amounts of hot gas and ash into the atmosphere, forming plumes that rise several kilometers above eruptive vents and can pose a serious risk to human health and aviation even several thousand kilometers from the volcanic source. However, even the most sophisticated models of atmospheric and eruptive plume dynamics require input parameters, such as the duration of the ejection phase and the total erupted mass, to constrain the quantity of ash dispersed in the atmosphere and to evaluate the related hazard efficiently. The sudden ejection of this large quantity of ash can perturb the equilibrium of the whole atmosphere, triggering oscillations well below the frequencies of acoustic waves, down to the much longer periods typical of gravity waves. We show that atmospheric gravity oscillations induced by volcanic eruptions and recorded by pressure sensors can be modeled as a compact source representing the rate of erupted volcanic mass. We demonstrate the feasibility of using gravity waves to derive eruption source parameters, such as the duration of the injection and the total erupted mass, with direct application in constraining plume and ash dispersal models.

  2. Modeling Volcanic Eruption Parameters by Near-Source Internal Gravity Waves

    PubMed Central

    Ripepe, M.; Barfucci, G.; De Angelis, S.; Delle Donne, D.; Lacanna, G.; Marchetti, E.

    2016-01-01

    Volcanic explosions release large amounts of hot gas and ash into the atmosphere, forming plumes that rise several kilometers above eruptive vents and can pose a serious risk to human health and aviation even several thousand kilometers from the volcanic source. However, even the most sophisticated models of atmospheric and eruptive plume dynamics require input parameters, such as the duration of the ejection phase and the total erupted mass, to constrain the quantity of ash dispersed in the atmosphere and to evaluate the related hazard efficiently. The sudden ejection of this large quantity of ash can perturb the equilibrium of the whole atmosphere, triggering oscillations well below the frequencies of acoustic waves, down to the much longer periods typical of gravity waves. We show that atmospheric gravity oscillations induced by volcanic eruptions and recorded by pressure sensors can be modeled as a compact source representing the rate of erupted volcanic mass. We demonstrate the feasibility of using gravity waves to derive eruption source parameters, such as the duration of the injection and the total erupted mass, with direct application in constraining plume and ash dispersal models. PMID:27830768

  3. Reconstructing gravitational wave source parameters via direct comparisons to numerical relativity I: Method

    NASA Astrophysics Data System (ADS)

    Lange, Jacob; O'Shaughnessy, Richard; Healy, James; Lousto, Carlos; Shoemaker, Deirdre; Lovelace, Geoffrey; Scheel, Mark; Ossokine, Serguei

    2016-03-01

    In this talk, we describe a procedure to reconstruct the parameters of sufficiently massive coalescing compact binaries via direct comparison with numerical relativity simulations. For sufficiently massive sources, existing numerical relativity simulations are long enough to cover the observationally accessible part of the signal. Due to the signal's brevity, the posterior parameter distribution it implies is broad, simple, and easily reconstructed from information gained by comparing to only the sparse sample of existing numerical relativity simulations. We describe how followup simulations can corroborate and improve our understanding of a detected source. Since our method can include all physics provided by full numerical relativity simulations of coalescing binaries, it provides a valuable complement to alternative techniques which employ approximations to reconstruct source parameters. Supported by NSF Grant PHY-1505629.

  4. MHODE: a local-homogeneity theory for improved source-parameter estimation of potential fields

    NASA Astrophysics Data System (ADS)

    Fedi, Maurizio; Florio, Giovanni; Paoletti, Valeria

    2015-08-01

    We describe a multihomogeneity theory for source-parameter estimation of potential fields. Similar to what happens for random source models, where the monofractal scaling law has been generalized into a multifractal law, we propose to generalize the homogeneity law into a multihomogeneity law. This allows a theoretically correct approach to studying real-world potential fields, which are inhomogeneous and so do not show scale invariance, except in the asymptotic regions (very near to or very far from their sources). Since the scaling properties of inhomogeneous fields change with the scale of observation, we show that they are better studied at a set of scales than at a single scale, and that a multihomogeneous model is needed to explain their complex scaling behaviour. To this end, we first introduce fractional-degree homogeneous fields and show that: (i) homogeneous potential fields may have fractional or integer degree; (ii) the source distributions for a fractional degree are not confined to a bounded region, similarly to some integer-degree models such as the infinite line mass; and (iii) differently from the integer-degree case, fractional-degree source distributions are no longer uniform density functions. Using this enlarged set of homogeneous fields, real-world anomaly fields are studied at different scales by a simple search, in any local window W, for the best homogeneous field of either integer or fractional degree, yielding a multiscale set of local homogeneity degrees and depth estimates that we call a multihomogeneous model. This defines a new source-parameter estimation technique (Multi-HOmogeneity Depth Estimation, MHODE) that permits retrieval of the parameters of complex sources. We test the method with inhomogeneous fields of finite sources, such as faults or cylinders, and show its effectiveness in a real-case example as well. These applications show the usefulness of the new concepts of multihomogeneity and fractional-degree homogeneity.
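
    The notion of a local, possibly fractional homogeneity degree can be illustrated with a toy two-scale estimate; the field models below are hypothetical stand-ins, not the paper's MHODE algorithm:

```python
import math

def local_homogeneity_degree(field, r1, r2):
    """Two-scale estimate of the (possibly fractional) homogeneity
    degree n, assuming the field behaves locally as f ~ r**(-n)."""
    return -(math.log(field(r2)) - math.log(field(r1))) / (math.log(r2) - math.log(r1))

point_source = lambda r: 1.0 / r ** 2        # integer degree 2
line_mass = lambda r: 1.0 / r                # integer degree 1 (infinite line)
mixed = lambda r: 1.0 / r ** 2 + 1.0 / r     # inhomogeneous: degree varies with scale

print(round(local_homogeneity_degree(point_source, 1.0, 2.0), 3))   # 2.0
print(round(local_homogeneity_degree(line_mass, 1.0, 2.0), 3))      # 1.0
print(round(local_homogeneity_degree(mixed, 0.1, 0.2), 3))          # near 2 close to the source
print(round(local_homogeneity_degree(mixed, 100.0, 200.0), 3))      # near 1 far away
```

    The mixed field shows the key behaviour: its apparent degree drifts with the scale of observation, which is why a single-scale homogeneity analysis is insufficient.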

  5. White LED compared with other light sources: age-dependent photobiological effects and parameters for evaluation.

    PubMed

    Rebec, Katja Malovrh; Klanjšek-Gunde, Marta; Bizjak, Grega; Kobav, Matej B

    2015-01-01

    Ergonomic science at work and living places should appraise the human factors involved in the photobiological effects of lighting. Thorough knowledge of this subject has been gained in the past; however, few attempts have been made to propose suitable evaluation parameters. The blue-light hazard and the influence on melatonin secretion in observers of different ages are considered in this paper, and parameters for their evaluation are proposed. The new parameters were applied to analyse the effects of white light-emitting diode (LED) light sources and to compare them with currently applied light sources. The photobiological effects of light sources with the same illuminance but different spectral power distributions were determined for healthy 4-76-year-old observers. The suitability of the new parameters is discussed. Correlated colour temperature, the only parameter currently used to assess photobiological effects, is evaluated and compared to the new parameters.

  6. Relating stick-slip friction experiments to earthquake source parameters

    USGS Publications Warehouse

    McGarr, Arthur F.

    2012-01-01

    Analytical results for parameters, such as static stress drop, for stick-slip friction experiments, with arbitrary input parameters, can be determined by solving an energy-balance equation. These results can then be related to a given earthquake based on its seismic moment and the maximum slip within its rupture zone, assuming that the rupture process entails the same physics as stick-slip friction. This analysis yields overshoots and ratios of apparent stress to static stress drop of about 0.25. The inferred earthquake source parameters static stress drop, apparent stress, slip rate, and radiated energy are robust inasmuch as they are largely independent of the experimental parameters used in their estimation. Instead, these earthquake parameters depend on C, the ratio of maximum slip to the cube root of the seismic moment. C is controlled by the normal stress applied to the rupture plane and the difference between the static and dynamic coefficients of friction. Estimating yield stress and seismic efficiency using the same procedure is only possible when the actual static and dynamic coefficients of friction are known within the earthquake rupture zone.
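
    The controlling ratio C defined above is straightforward to compute; a minimal sketch with illustrative numbers (and with the apparent-stress ratio of about 0.25 taken from the abstract):

```python
def slip_moment_ratio(max_slip_m, seismic_moment_nm):
    """C = maximum slip / (seismic moment)**(1/3): the ratio that
    controls the inferred earthquake source parameters here."""
    return max_slip_m / seismic_moment_nm ** (1.0 / 3.0)

m0, d_max = 1.0e15, 0.2            # hypothetical ~Mw 4 event with 0.2 m max slip
c = slip_moment_ratio(d_max, m0)
print(c)                           # ~2e-06

# The energy-balance analysis gives apparent stress ~0.25 x static stress drop:
static_stress_drop_pa = 3.0e6      # illustrative 3 MPa
apparent_stress_pa = 0.25 * static_stress_drop_pa
print(apparent_stress_pa)          # 750000.0
```
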

  7. Interpretation of Source Parameters from Total Gradient of Gravity and Magnetic Anomalies Caused by Thin Dyke using Nonlinear Global Optimization Technique

    NASA Astrophysics Data System (ADS)

    Biswas, A.

    2016-12-01

    An efficient approach to estimating model parameters from the total gradient of gravity and magnetic data, based on Very Fast Simulated Annealing (VFSA), is presented. This is the first application of VFSA to interpreting the total gradient of potential-field data, using a new formulation of the estimation problem for isolated causative sources embedded in the subsurface. The model parameters interpreted here are the amplitude coefficient (k), the exact origin of the causative source (x0), the depth (z0), and the shape factor (q). The VFSA optimization shows that all model parameters can be determined precisely when the shape factor is fixed. The model parameters estimated by the present strategy, most notably the shape and depth of the buried structures, were found to be in excellent agreement with the true parameters. The technique is also capable of avoiding highly noisy data points and improves the interpretation results. Histogram and cross-plot analyses likewise suggest that the interpretation lies within the estimated uncertainty. Inversion of noise-free and noisy synthetic data for single structures, and of field data, demonstrates the effectiveness of the approach. The procedure has been carefully and successfully applied to real field cases in the presence of mineral bodies (the Leona anomaly, Senegal, for gravity and the Pima copper deposit, USA, for magnetics). The present technique is highly applicable to mineral exploration for dyke-like ore bodies emplaced in the shallow and deeper subsurface. The computation time for the entire procedure is short.
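
    As a rough illustration of the annealing idea (not the authors' exact VFSA formulation), a simulated-annealing loop can recover the depth of an idealized buried source from a synthetic anomaly; the anomaly form, parameter values, and cooling schedule below are all assumptions for the sketch:

```python
import math, random

def anomaly(x, k=100.0, x0=0.0, z0=5.0, q=1.0):
    """Hypothetical total-gradient-style anomaly of an idealized buried
    source: amplitude coefficient k, origin x0, depth z0, shape factor q."""
    return k / ((x - x0) ** 2 + z0 ** 2) ** q

xs = [0.5 * i for i in range(-40, 41)]
obs = [anomaly(x) for x in xs]                  # noise-free synthetic data

def misfit(z):
    return sum((anomaly(x, z0=z) - o) ** 2 for x, o in zip(xs, obs))

random.seed(1)
z = 12.0                                        # deliberately poor starting depth
best_z, best_e = z, misfit(z)
for i in range(2000):
    temp = 1.0 / (1 + i)                        # fast cooling schedule
    trial = z + random.gauss(0.0, 1.0) * max(temp, 0.05)
    if trial <= 0.1:                            # keep the source below the surface
        continue
    d_e = misfit(trial) - misfit(z)
    if d_e < 0 or random.random() < math.exp(-d_e / max(temp, 1e-9)):
        z = trial
        if misfit(z) < best_e:
            best_z, best_e = z, misfit(z)
print(round(best_z, 2))                         # converges near the true depth 5.0
```

    In the paper's setting the search runs jointly over (k, x0, z0, q); fixing q, as here, is what makes the remaining parameters well determined.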

  8. Calculated and measured brachytherapy dosimetry parameters in water for the Xoft Axxent X-Ray Source: an electronic brachytherapy source.

    PubMed

    Rivard, Mark J; Davis, Stephen D; DeWerd, Larry A; Rusch, Thomas W; Axelrod, Steve

    2006-11-01

    A new x-ray source, the model S700 Axxent X-Ray Source (Source), has been developed by Xoft Inc. for electronic brachytherapy. Unlike brachytherapy sources containing radionuclides, this Source may be turned on and off at will and may be operated at variable currents and voltages to change the dose rate and penetration properties. The in-water dosimetry parameters for this electronic brachytherapy source have been determined from measurements and calculations at the 40, 45, and 50 kV settings. Monte Carlo simulations of radiation transport utilized the MCNP5 code and the EPDL97-based mcplib04 cross-section library. Inter-tube consistency was assessed for 20 different Sources, measured with a PTW 34013 ionization chamber. As the Source is intended to be used for a maximum of ten treatment fractions, tube stability was also assessed. Photon spectra were measured using a high-purity germanium (HPGe) detector and calculated using MCNP. Parameters used in the two-dimensional (2D) brachytherapy dosimetry formalism were determined. While the Source was characterized as a point due to the small anode size (< 1 mm), use of the one-dimensional (1D) brachytherapy dosimetry formalism is not recommended due to polar anisotropy; consequently, 1D brachytherapy dosimetry parameters were not sought. Calculated point-source model radial dose functions gP(5) were 0.20, 0.24, and 0.29 for the 40, 45, and 50 kV voltage settings, respectively. Measured point-source model radial dose functions were typically within 4% of calculated results. Calculated values of F(r, theta) for all operating voltages were within 15% of unity along the distal end (theta = 0 degrees), and ranged from F(1 cm, 160 degrees) = 0.2 to F(15 cm, 175 degrees) = 0.4 towards the catheter proximal end. For all three operating voltages using the PTW chamber, the measured dependence of output on azimuthal angle, psi, was typically within +/-3% on average for 0 degrees <= psi <= 360 degrees.

  9. Laser diode absorption spectroscopy for accurate CO(2) line parameters at 2 microm: consequences for space-based DIAL measurements and potential biases.

    PubMed

    Joly, Lilian; Marnas, Fabien; Gibert, Fabien; Bruneau, Didier; Grouiez, Bruno; Flamant, Pierre H; Durry, Georges; Dumelie, Nicolas; Parvitte, Bertrand; Zéninari, Virginie

    2009-10-10

    Space-based active sensing of CO(2) concentration is a very promising technique for the derivation of CO(2) surface fluxes. Accurate spectroscopic parameters are needed to enable accurate space-based measurements that address global climatic issues. New spectroscopic measurements using laser diode absorption spectroscopy are presented for the preselected R30 CO(2) absorption line ((20(0)1)(III)<--(000) band) and four others. The line strength, air-broadening halfwidth, and its temperature dependence have been investigated. The results exhibit significant improvement for the R30 CO(2) absorption line: 0.4% on the line strength, 0.15% on the air-broadening coefficient, and 0.45% on its temperature dependence. An analysis of the potential biases in space-based DIAL CO(2) mixing ratio measurements associated with spectroscopic parameter uncertainties is presented.
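
    The temperature dependence of the air-broadening halfwidth mentioned above follows the standard power law; a small sketch with illustrative coefficients (not the measured R30 values):

```python
def air_broadened_halfwidth(gamma_ref, n, temp_k, t_ref=296.0):
    """Air-broadened halfwidth at temperature T from its reference
    value, using the power law gamma(T) = gamma_ref * (T_ref / T)**n."""
    return gamma_ref * (t_ref / temp_k) ** n

# Illustrative values: the halfwidth grows as temperature drops along
# an atmospheric sounding path, so an error in n biases the retrieval.
print(round(air_broadened_halfwidth(0.07, 0.70, 220.0), 5))   # 0.08616
```

    The 0.45% uncertainty quoted above applies to the exponent n, which then propagates into the DIAL optical-depth calculation at each altitude.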

  10. Dependence of the source performance on plasma parameters at the BATMAN test facility

    NASA Astrophysics Data System (ADS)

    Wimmer, C.; Fantz, U.

    2015-04-01

    The investigation of the dependence of the source performance (high jH-, low je) for optimum Cs conditions on the plasma parameters at the BATMAN (Bavarian Test MAchine for Negative hydrogen ions) test facility is desirable in order to find key parameters for the operation of the source as well as to deepen the physical understanding. The most relevant source physics takes place in the extended boundary layer, which is the plasma layer with a thickness of several cm in front of the plasma grid: the production of H-, its transport through the plasma and its extraction, inevitably accompanied by the co-extraction of electrons. Hence, a link of the source performance with the plasma parameters in the extended boundary layer is expected. In order to characterize electron and negative hydrogen ion fluxes in the extended boundary layer, Cavity Ring-Down Spectroscopy and Langmuir probes have been applied for the measurement of the H- density and the determination of the plasma density, the plasma potential and the electron temperature, respectively. The plasma potential is of particular importance as it determines the sheath potential profile at the plasma grid: depending on the plasma grid bias relative to the plasma potential, a transition in the plasma sheath from an electron repelling to an electron attracting sheath takes place, influencing strongly the electron fraction of the bias current and thus the amount of co-extracted electrons. Dependencies of the source performance on the determined plasma parameters are presented for the comparison of two source pressures (0.6 Pa, 0.45 Pa) in hydrogen operation. The higher source pressure of 0.6 Pa is a standard point of operation at BATMAN with external magnets, whereas the lower pressure of 0.45 Pa is closer to the ITER requirements (p ≤ 0.3 Pa).

  11. A Global Sensitivity Analysis Method on Maximum Tsunami Wave Heights to Potential Seismic Source Parameters

    NASA Astrophysics Data System (ADS)

    Ren, Luchuan

    2015-04-01

    Luchuan Ren, Jianwei Tian, Mingli Hong (Institute of Disaster Prevention, Sanhe, Hebei Province, 065201, P.R. China). The uncertainties of the maximum tsunami wave heights in offshore areas derive partly from uncertainties in the potential seismic tsunami source parameters. A global sensitivity analysis method for the maximum tsunami wave heights with respect to the potential seismic source parameters is put forward in this paper. The tsunami wave heights are calculated with COMCOT (the Cornell Multi-grid Coupled Tsunami Model), on the assumption that an earthquake of magnitude Mw 8.0 occurred on the northern fault segment along the Manila Trench and triggered a tsunami in the South China Sea. We select the simulated maximum tsunami wave heights at specific offshore sites to verify the validity of the proposed method. To rank the importance of the uncertainties in the potential seismic source parameters (the earthquake magnitude, focal depth, strike angle, dip angle, slip angle, etc.) in generating uncertainties in the maximum tsunami wave heights, we chose the Morris method to analyze the sensitivity of the maximum tsunami wave heights to these parameters, and we give several qualitative descriptions of their nonlinear or linear effects on the maximum tsunami wave heights. We then quantitatively analyze the sensitivity of the maximum tsunami wave heights to these parameters, and the interaction effects among them, by means of the extended FAST method. The results show that the maximum tsunami wave heights are very sensitive to the earthquake magnitude, followed successively by the epicenter location, the strike angle, and the dip angle; the interaction effects between the sensitive parameters are very obvious at specific offshore sites, and there
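
    The Morris method ranks parameters by "elementary effects": one-at-a-time finite-difference sensitivities, normally averaged over many random base points. A minimal single-base-point sketch with a toy stand-in model (hypothetical, not COMCOT):

```python
def elementary_effects(f, x, delta):
    """Morris-style elementary effects: perturb one parameter at a
    time by delta and record the scaled change in the model output."""
    base = f(x)
    effects = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += delta
        effects.append((f(xp) - base) / delta)
    return effects

# Toy stand-in for a tsunami-height model: magnitude dominates, dip
# matters less, and magnitude and strike interact (made-up weights).
model = lambda p: 2.0 * p[0] + 0.3 * p[1] + 0.5 * p[0] * p[2]
ee = elementary_effects(model, [8.0, 30.0, 90.0], 0.1)
print([round(e, 3) for e in ee])               # [47.0, 0.3, 4.0]
```

    In a full Morris screening, the mean and standard deviation of the elementary effects over many base points separate linear effects from nonlinear or interacting ones, which is the qualitative ranking reported above.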

  12. A study on the seismic source parameters for earthquakes occurring in the southern Korean Peninsula

    NASA Astrophysics Data System (ADS)

    Rhee, H. M.; Sheen, D. H.

    2015-12-01

    We investigated the characteristics of the seismic source parameters of the southern part of the Korean Peninsula for 599 events with ML≥1.7 from 2001 to 2014. A large number of data were carefully selected by visual inspection in the time and frequency domains. The data set consists of 5,093 S-wave trains on three-component seismograms recorded at broadband seismograph stations operated by the Korea Meteorological Administration and the Korea Institute of Geoscience and Mineral Resources. The corner frequency, stress drop, and moment magnitude of each event were measured using the modified method of Jo and Baag (2001), based on the methods of Snoke (1987) and Andrews (1986). We found that this method could improve the stability of the estimation of source parameters from the S-wave displacement spectrum through an iterative process. We then compared the source parameters with those obtained in previous studies and investigated the source scaling relationship and the regional variations of source parameters in the southern Korean Peninsula.
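
    The corner frequency, moment magnitude, and stress drop mentioned above are linked through standard relations (Hanks & Kanamori moment magnitude; the Brune source model); a sketch with illustrative values, not this study's measurements:

```python
import math

def moment_magnitude(m0_nm):
    """Mw from seismic moment in N*m (Hanks & Kanamori, 1979)."""
    return (2.0 / 3.0) * (math.log10(m0_nm) - 9.1)

def brune_stress_drop(m0_nm, fc_hz, beta_ms=3500.0):
    """Static stress drop (Pa) from the Brune source model:
    radius r = 2.34 * beta / (2 * pi * fc), delta_sigma = 7 * M0 / (16 * r**3)."""
    r = 2.34 * beta_ms / (2.0 * math.pi * fc_hz)
    return 7.0 * m0_nm / (16.0 * r ** 3)

m0 = 7.9e13                                        # illustrative moment, ~Mw 3.2
print(round(moment_magnitude(m0), 2))              # 3.2
print(round(brune_stress_drop(m0, 5.0) / 1e6, 2))  # 1.95 (MPa)
```

    Because stress drop scales with the cube of the corner frequency, even modest instability in fc estimates translates into large stress-drop scatter, which is why the iterative spectral fitting above matters.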

  13. Petermann I and II spot size: Accurate semi analytical description involving Nelder-Mead method of nonlinear unconstrained optimization and three parameter fundamental modal field

    NASA Astrophysics Data System (ADS)

    Roy Choudhury, Raja; Roy Choudhury, Arundhati; Kanti Ghose, Mrinal

    2013-01-01

    A semi-analytical model with three optimizing parameters and a novel non-Gaussian function as the fundamental modal field solution has been proposed to arrive at an accurate prediction of various propagation parameters of graded-index fibers with less computational burden than numerical methods. In our semi-analytical formulation, the optimization of the core parameter U, which is usually uncertain, noisy, or even discontinuous, is carried out by the Nelder-Mead method of nonlinear unconstrained minimization, an efficient and compact direct-search method that does not need any derivative information. Three optimizing parameters are included in the formulation of the fundamental modal field of an optical fiber to make it more flexible and accurate than other available approximations. Employing a variational technique, the Petermann I and II spot sizes have been evaluated for triangular- and trapezoidal-index fibers with the proposed fundamental modal field. It has been demonstrated that the results of the proposed solution match the numerical results identically over a wide range of normalized frequencies. This approximation can also be used in the study of doped and nonlinear fiber amplifiers.

  14. High-frequency observations and source parameters of microearthquakes recorded at hard-rock sites

    USGS Publications Warehouse

    Cranswick, Edward; Wetmiller, Robert; Boatwright, John

    1985-01-01

    We have estimated the source parameters of 53 microearthquakes recorded in July 1983 which were aftershocks of the Miramichi, New Brunswick, earthquake that occurred on 9 January 1982. These events were recorded by local three-component digital seismographs at 400 sps/component from 2-Hz velocity transducers sited directly on glacially scoured crystalline basement outcrop. Hypocentral distances are typically less than 5 km, and the hypocenters and the seven digital seismograph stations established all lie essentially within the boundaries of a granitic pluton that encompasses the faults that ruptured during the main shock and major aftershocks. The P-wave velocity is typically 5 km/sec at the surface and at least 6 km/sec at depths greater than about 1 km. The events have S-wave corner frequencies in the band 10 to 40 Hz, and the calculated Brune model seismic moments range from 10^15 to 10^18 dyne-cm. The corresponding stress drops are generally less than 1.0 bar, but there is considerable evidence that the seismic-source signals have been modified by propagation and/or site effects. The data indicate: (a) there is a velocity discontinuity at 0.5 km depth; (b) the top layer has strong scattering/attenuating properties; (c) some source-receiver paths differentiate the propagated signal; (d) there is a hard-rock-site P-wave “fmax” between 50 and 100 Hz; and (e) some hard-rock sites are characterized by P-wave resonance frequencies in the range 50 to 100 Hz. Comparison of this dataset with the January 1982 New Brunswick digital seismograms, which were recorded at sites underlain by several meters of low-velocity surface sediments, suggests that some of the hard-rock-site phenomena listed above can be explained in terms of a layer-over-a-half-space model. For microearthquakes, this result implies that the spectrally determined source dimension scales with the site dimension (the thickness of the layer). More generally, it emphasizes that it is very difficult to accurately observe
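
    The layer-over-a-half-space interpretation implies a quarter-wavelength resonance of the thin surface layer; a one-line sketch with hypothetical layer properties:

```python
def quarter_wavelength_resonance(v_ms, thickness_m):
    """Fundamental resonance of a low-velocity surface layer over a
    half-space: f0 = V / (4 * h)."""
    return v_ms / (4.0 * thickness_m)

# Illustrative numbers only: a thin weathered layer can place a P-wave
# resonance in the 50-100 Hz band observed at the hard-rock sites.
f0 = quarter_wavelength_resonance(2000.0, 7.0)
print(round(f0, 1))                 # 71.4
```
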

  15. REVIEW OF INDOOR EMISSION SOURCE MODELS: PART 2. PARAMETER ESTIMATION

    EPA Science Inventory

    This review consists of two sections. Part I provides an overview of 46 indoor emission source models. Part 2 (this paper) focuses on parameter estimation, a topic that is critical to modelers but has never been systematically discussed. A perfectly valid model may not be a usefu...

  16. Research on Matching Method of Power Supply Parameters for Dual Energy Source Electric Vehicles

    NASA Astrophysics Data System (ADS)

    Jiang, Q.; Luo, M. J.; Zhang, S. K.; Liao, M. W.

    2018-03-01

    A new type of power source is proposed, based on a traffic-signal matching method for a dual-energy-source power supply composed of batteries and supercapacitors. First, the power characteristics are analyzed as required to meet the excellent dynamic characteristics of an EV, the energy characteristics are studied as required to meet the mileage requirements, and the physical boundary characteristics are examined as required to meet the physical conditions of the power supply. Secondly, a parameter-matching design with the highest energy efficiency is adopted to select the optimal parameter group using the method of matching deviation. Finally, a simulation analysis of the vehicle is carried out in MATLAB/Simulink; the mileage and energy efficiency of the dual energy sources are analyzed for different parameter models, and the rationality of the matching method is verified.

  17. Calibration of the Regional Crustal Waveguide and the Retrieval of Source Parameters Using Waveform Modeling

    NASA Astrophysics Data System (ADS)

    Saikia, C. K.; Woods, B. B.; Thio, H. K.

    Regional crustal waveguide calibration is essential to the retrieval of source parameters and the location of smaller (M < 4.8) seismic events. This path calibration of regional seismic phases is strongly dependent on the accuracy of the hypocentral locations of calibration (or master) events. This information can be difficult to obtain, especially for smaller events. Generally, explosion- or quarry-blast-generated travel-time data with known locations and origin times are useful for developing the path calibration parameters, but in many regions such data sets are scanty or do not exist. We present a method that is useful for regional path calibration independent of such data, i.e., with earthquakes, which is applicable for events down to Mw = 4 and which has been applied successfully in India, central Asia, the western Mediterranean, North Africa, Tibet, and the former Soviet Union. These studies suggest that reliably determining depth is essential to establishing accurate epicentral locations and origin times for events. We find that the error in source depth does not necessarily trade off only with the origin time for events with poor azimuthal coverage, but with the horizontal location as well, thus resulting in poor epicentral locations. For example, hypocenters for some events in central Asia were found to move from their fixed-depth locations by about 20 km. Such errors in location and depth will propagate into path calibration parameters, particularly with respect to travel times. The modeling of teleseismic depth phases (pP, sP) yields accurate depths for earthquakes down to magnitude Mw = 4.7. This Mw threshold can be lowered to 4 if regional seismograms are used in conjunction with a calibrated velocity-structure model to determine depth, with the relative amplitude of the Pnl waves to the surface waves and the interaction of regional sPmP and pPmP phases being good indicators of event depths.
We also found that for deep events a seismic phase which follows an S

  18. The metallicity of M4: Accurate spectroscopic fundamental parameters for four giants

    NASA Technical Reports Server (NTRS)

    Drake, J. J.; Smith, V. V.; Suntzeff, N. B.

    1994-01-01

    High-quality spectra, covering the wavelength range 5480 to 7080 A, have been obtained for four giant stars in the intermediate-metallicity CN-bimodal globular cluster M4 (NGC 6121). We have employed a model atmosphere analysis that is entirely independent from cluster parameters, such as distance, age, and reddening, in order to derive accurate values for the stellar parameters effective temperature, surface gravity, and microturbulence, and for the abundance of iron relative to the Sun, (Fe/H), and of calcium, Ca/H, for each of the four stars. Detailed radiative transfer and statistical equilibrium calculations carried out for iron and calcium suggest that departures from local thermodynamic equilibrium are not significant for the purposes of our analysis. The spectroscopically derived effective temperatures for our program stars are hotter by about 200 K than existing photometric calibrations suggest. We conclude that this is due partly to the uncertain reddening of M4 and to the existing photometric temperature calibration for red giants being too cool by about 100 K. Comparison of our spectroscopic and existing photometric temperatures supports the prognosis of a significant east-west gradient in the reddening across M4. Our derived iron abundances are slightly higher than previous high-resolution studies suggested; the differences are most probably due to the different temperature scale and choice of microturbulent velocities adopted by earlier workers. The resulting value for the metallicity of M4 is (Fe/H)_M4 = -1.05 +/- 0.15. Based on this result, we suggest that metallicities derived in previous high-dispersion globular cluster abundance analyses could be too low by 0.2 to 0.3 dex. Our calcium abundances suggest an enhancement of calcium, an alpha element, over iron, relative to the Sun, in M4 of (Ca/H) = 0.23.

  19. Source parameters controlling the generation and propagation of potential local tsunamis along the cascadia margin

    USGS Publications Warehouse

    Geist, E.; Yoshioka, S.

    1996-01-01

    The largest uncertainty in assessing hazards from local tsunamis along the Cascadia margin is estimating the possible earthquake source parameters. We investigate which source parameters exert the largest influence on tsunami generation and determine how each parameter affects the amplitude of the local tsunami. The following source parameters were analyzed: (1) type of faulting characteristic of the Cascadia subduction zone, (2) amount of slip during rupture, (3) slip orientation, (4) duration of rupture, (5) physical properties of the accretionary wedge, and (6) influence of secondary faulting. The effect of each of these source parameters on the quasi-static displacement of the ocean floor is determined by using elastic three-dimensional, finite-element models. The propagation of the resulting tsunami is modeled both near the coastline using the two-dimensional (x-t) Peregrine equations that includes the effects of dispersion and near the source using the three-dimensional (x-y-t) linear long-wave equations. The source parameters that have the largest influence on local tsunami excitation are the shallowness of rupture and the amount of slip. In addition, the orientation of slip has a large effect on the directivity of the tsunami, especially for shallow dipping faults, which consequently has a direct influence on the length of coastline inundated by the tsunami. Duration of rupture, physical properties of the accretionary wedge, and secondary faulting all affect the excitation of tsunamis but to a lesser extent than the shallowness of rupture and the amount and orientation of slip. Assessment of the severity of the local tsunami hazard should take into account that relatively large tsunamis can be generated from anomalous 'tsunami earthquakes' that rupture within the accretionary wedge in comparison to interplate thrust earthquakes of similar magnitude. © 1996 Kluwer Academic Publishers.

  20. SU-F-T-54: Determination of the AAPM TG-43 Brachytherapy Dosimetry Parameters for A New Titanium-Encapsulated Yb-169 Source by Monte Carlo Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reynoso, F; Washington University School of Medicine, St. Louis, MO; Munro, J

    2016-06-15

Purpose: To determine the AAPM TG-43 brachytherapy dosimetry parameters of a new titanium-encapsulated Yb-169 source designed to maximize the dose enhancement during gold nanoparticle-aided radiation therapy (GNRT). Methods: An existing Monte Carlo (MC) model of the titanium-encapsulated Yb-169 source, described in the current investigators’ published MC optimization study, was modified based on the source manufacturer’s detailed specifications, resulting in an accurate model of the titanium-encapsulated Yb-169 source as actually manufactured. MC calculations were then performed using the MCNP5 code system and the modified source model, in order to obtain a complete set of the AAPM TG-43 parameters for the new Yb-169 source. Results: The MC-calculated dose rate constant for the new titanium-encapsulated Yb-169 source was 1.05 ± 0.03 cGy per hr U, indicating about a 10% decrease from the values reported for conventional stainless steel-encapsulated Yb-169 sources. The source anisotropy and radial dose function for the new source were found to be similar to those reported for the conventional Yb-169 sources. Conclusion: In this study, the AAPM TG-43 brachytherapy dosimetry parameters of a new titanium-encapsulated Yb-169 source were determined by MC calculations. The current results suggest that the use of titanium, instead of stainless steel, to encapsulate the Yb-169 core would not lead to any major change in the dosimetric characteristics of the source, while allowing more low-energy photons to be transmitted through the source filter, thereby leading to an increased dose enhancement during GNRT. This investigation was supported by DOD/PCRP grant W81XWH-12-1-0198.

  1. Preliminary Spreadsheet of Eruption Source Parameters for Volcanoes of the World

    USGS Publications Warehouse

    Mastin, Larry G.; Guffanti, Marianne; Ewert, John W.; Spiegel, Jessica

    2009-01-01

Volcanic eruptions that spew tephra into the atmosphere pose a hazard to jet aircraft. For this reason, the International Civil Aviation Organization (ICAO) has designated nine Volcanic Ash Advisory Centers (VAACs) around the world whose purpose is to track ash clouds from eruptions and notify aircraft so that they may avoid these ash clouds. During eruptions, VAACs and their collaborators run volcanic-ash transport-and-dispersion (VATD) models that forecast the location and movement of ash clouds. These models require as input parameters the plume height H, the mass-eruption rate, duration D, erupted volume V (in cubic kilometers of bubble-free or 'dense rock equivalent' [DRE] magma), and the mass fraction of erupted tephra with a particle size smaller than 63 µm (m63). Some parameters, such as mass-eruption rate and mass fraction of fine debris, are not obtainable by direct observation; others, such as plume height or duration, are obtainable from observations but may be unavailable in the early hours of an eruption when VATD models are being initiated. For this reason, ash-cloud modelers need to have at their disposal source parameters for a particular volcano that are based on its recent eruptive history and represent the most likely anticipated eruption. They also need source parameters that encompass the range of uncertainty in eruption size or characteristics. In spring of 2007, a workshop was held at the U.S. Geological Survey (USGS) Cascades Volcano Observatory to derive a protocol for assigning eruption source parameters to ash-cloud models during eruptions. The protocol derived from this effort was published by Mastin and others (in press), along with a world map displaying the assigned eruption type for each of the world's volcanoes. Their report, however, did not include the assigned eruption types in tabular form. Therefore, this Open-File Report presents that table in the form of an Excel spreadsheet. These assignments are preliminary and will
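Plume height is the parameter most often observed early in an eruption, and in this literature it is commonly tied to the unobservable mass-eruption rate through an empirical power law of the form H = 2.00 V^0.241 (H in km above the vent, V the DRE volumetric flow rate in m³/s). A minimal sketch of inverting such a fit; the coefficients and the 2500 kg/m³ DRE magma density are assumptions taken from that general literature, not values stated in this abstract:

```python
def dre_flow_rate(plume_height_km):
    """Invert the empirical fit H = 2.00 * V**0.241 for the
    dense-rock-equivalent (DRE) volumetric flow rate V in m^3/s."""
    return (plume_height_km / 2.00) ** (1.0 / 0.241)

def mass_eruption_rate(plume_height_km, dre_density=2500.0):
    """Mass-eruption rate in kg/s, assuming a nominal DRE magma density."""
    return dre_density * dre_flow_rate(plume_height_km)
```

For a 10-km plume this gives a flow rate of roughly 800 m³/s, i.e. a mass-eruption rate on the order of 10^6 kg/s; the strong nonlinearity of the fit is why small errors in observed plume height translate into large uncertainties in the modeled ash flux.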

  2. Influence of source parameters on the growth of metal nanoparticles by sputter-gas-aggregation

    NASA Astrophysics Data System (ADS)

    Khojasteh, Malak; Kresin, Vitaly V.

    2017-11-01

    We describe the production of size-selected manganese nanoclusters using a magnetron sputtering/aggregation source. Since nanoparticle production is sensitive to a range of overlapping operating parameters (in particular, the sputtering discharge power, the inert gas flow rates, and the aggregation length), we focus on a detailed map of the influence of each parameter on the average nanocluster size. In this way, it is possible to identify the main contribution of each parameter to the physical processes taking place within the source. The discharge power and argon flow supply the metal vapor, and argon also plays a crucial role in the formation of condensation nuclei via three-body collisions. However, the argon flow and the discharge power have a relatively weak effect on the average nanocluster size in the exiting beam. Here the defining role is played by the source residence time, governed by the helium supply (which raises the pressure and density of the gas column inside the source, resulting in more efficient transport of nanoparticles to the exit) and by the aggregation path length.

  3. Constructing Ebola transmission chains from West Africa and estimating model parameters using internet sources.

    PubMed

    Pettey, W B P; Carter, M E; Toth, D J A; Samore, M H; Gundlapalli, A V

    2017-07-01

    During the recent Ebola crisis in West Africa, individual person-level details of disease onset, transmissions, and outcomes such as survival or death were reported in online news media. We set out to document disease transmission chains for Ebola, with the goal of generating a timely account that could be used for surveillance, mathematical modeling, and public health decision-making. By accessing public web pages only, such as locally produced newspapers and blogs, we created a transmission chain involving two Ebola clusters in West Africa that compared favorably with other published transmission chains, and derived parameters for a mathematical model of Ebola disease transmission that were not statistically different from those derived from published sources. We present a protocol for responsibly gleaning epidemiological facts, transmission model parameters, and useful details from affected communities using mostly indigenously produced sources. After comparing our transmission parameters to published parameters, we discuss additional benefits of our method, such as gaining practical information about the affected community, its infrastructure, politics, and culture. We also briefly compare our method to similar efforts that used mostly non-indigenous online sources to generate epidemiological information.

  4. Parameter optimization in biased decoy-state quantum key distribution with both source errors and statistical fluctuations

    NASA Astrophysics Data System (ADS)

    Zhu, Jian-Rong; Li, Jian; Zhang, Chun-Mei; Wang, Qin

    2017-10-01

The decoy-state method has been widely used in commercial quantum key distribution (QKD) systems. In view of practical decoy-state QKD with both source errors and statistical fluctuations, we propose a universal model for full parameter optimization in biased decoy-state QKD with phase-randomized sources. We then adopt this model to carry out simulations for two widely used sources: the weak coherent source (WCS) and the heralded single-photon source (HSPS). Results show that full parameter optimization can significantly improve not only the secure transmission distance but also the final key generation rate. Moreover, when source errors and statistical fluctuations are taken into account, the performance of decoy-state QKD using an HSPS suffers less than that of decoy-state QKD using a WCS.

  5. An open-source job management framework for parameter-space exploration: OACIS

    NASA Astrophysics Data System (ADS)

    Murase, Y.; Uchitane, T.; Ito, N.

    2017-11-01

We present an open-source software framework for parameter-space exploration, named OACIS, which is useful for managing vast numbers of simulation jobs and results in a systematic way. Recent development of high-performance computers has enabled us to explore parameter spaces comprehensively; in such cases, however, manual management of the workflow is practically impossible. OACIS was developed to reduce the cost of these repetitive tasks by automating job submissions and data management. In this article, an overview of OACIS as well as a getting-started guide are presented.

  6. Rapid tsunami models and earthquake source parameters: Far-field and local applications

    USGS Publications Warehouse

    Geist, E.L.

    2005-01-01

    Rapid tsunami models have recently been developed to forecast far-field tsunami amplitudes from initial earthquake information (magnitude and hypocenter). Earthquake source parameters that directly affect tsunami generation as used in rapid tsunami models are examined, with particular attention to local versus far-field application of those models. First, validity of the assumption that the focal mechanism and type of faulting for tsunamigenic earthquakes is similar in a given region can be evaluated by measuring the seismic consistency of past events. Second, the assumption that slip occurs uniformly over an area of rupture will most often underestimate the amplitude and leading-wave steepness of the local tsunami. Third, sometimes large magnitude earthquakes will exhibit a high degree of spatial heterogeneity such that tsunami sources will be composed of distinct sub-events that can cause constructive and destructive interference in the wavefield away from the source. Using a stochastic source model, it is demonstrated that local tsunami amplitudes vary by as much as a factor of two or more, depending on the local bathymetry. If other earthquake source parameters such as focal depth or shear modulus are varied in addition to the slip distribution patterns, even greater uncertainty in local tsunami amplitude is expected for earthquakes of similar magnitude. Because of the short amount of time available to issue local warnings and because of the high degree of uncertainty associated with local, model-based forecasts as suggested by this study, direct wave height observations and a strong public education and preparedness program are critical for those regions near suspected tsunami sources.

  7. Modal parameter identification based on combining transmissibility functions and blind source separation techniques

    NASA Astrophysics Data System (ADS)

    Araújo, Iván Gómez; Sánchez, Jesús Antonio García; Andersen, Palle

    2018-05-01

    Transmissibility-based operational modal analysis is a recent and alternative approach used to identify the modal parameters of structures under operational conditions. This approach is advantageous compared with traditional operational modal analysis because it does not make any assumptions about the excitation spectrum (i.e., white noise with a flat spectrum). However, common methodologies do not include a procedure to extract closely spaced modes with low signal-to-noise ratios. This issue is relevant when considering that engineering structures generally have closely spaced modes and that their measured responses present high levels of noise. Therefore, to overcome these problems, a new combined method for modal parameter identification is proposed in this work. The proposed method combines blind source separation (BSS) techniques and transmissibility-based methods. Here, BSS techniques were used to recover source signals, and transmissibility-based methods were applied to estimate modal information from the recovered source signals. To achieve this combination, a new method to define a transmissibility function was proposed. The suggested transmissibility function is based on the relationship between the power spectral density (PSD) of mixed signals and the PSD of signals from a single source. The numerical responses of a truss structure with high levels of added noise and very closely spaced modes were processed using the proposed combined method to evaluate its ability to identify modal parameters in these conditions. Colored and white noise excitations were used for the numerical example. The proposed combined method was also used to evaluate the modal parameters of an experimental test on a structure containing closely spaced modes. The results showed that the proposed combined method is capable of identifying very closely spaced modes in the presence of noise and, thus, may be potentially applied to improve the identification of damping ratios.
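At its simplest, a transmissibility function is a frequency-wise ratio between two measured responses (the paper's proposed variant builds the ratio from PSDs of mixed signals versus single-source signals). A minimal single-bin sketch of the plain response-ratio form, with illustrative harmonic signals; this is a generic illustration, not the paper's method:

```python
import math

def amplitude_at(signal, k):
    """Magnitude of the k-th DFT bin of a real sampled signal,
    scaled so that a unit-amplitude sinusoid at bin k returns 1.0."""
    n = len(signal)
    re = sum(s * math.cos(2.0 * math.pi * k * i / n) for i, s in enumerate(signal))
    im = sum(s * math.sin(2.0 * math.pi * k * i / n) for i, s in enumerate(signal))
    return 2.0 * math.hypot(re, im) / n

def transmissibility(response_i, response_j, k):
    """|T_ij| at frequency bin k: ratio of the two response amplitudes."""
    return amplitude_at(response_i, k) / amplitude_at(response_j, k)
```

Two responses that differ by a factor at a given frequency yield that factor as the transmissibility there, independent of the (unknown) excitation amplitude; this independence from the excitation spectrum is what motivates transmissibility-based operational modal analysis.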

  8. Proton dissociation properties of arylphosphonates: Determination of accurate Hammett equation parameters.

    PubMed

    Dargó, Gergő; Bölcskei, Adrienn; Grün, Alajos; Béni, Szabolcs; Szántó, Zoltán; Lopata, Antal; Keglevich, György; Balogh, György T

    2017-09-05

Determination of the proton dissociation constants of several arylphosphonic acid derivatives was carried out to investigate the accuracy of the Hammett equations available for this family of compounds. Modern, accurate methods, such as differential potentiometric titration and NMR-pH titration, were used to measure the pKa values. We found our results significantly different from the pKa values reported before (pKa1: MAE = 0.16; pKa2: MAE = 0.59). Based on our newly measured pKa values, refined Hammett equations were determined that can be used for predicting highly accurate ionization constants of newly synthesized compounds (pKa1 = 1.70 − 0.894σ, pKa2 = 6.92 − 0.934σ). Copyright © 2017 Elsevier B.V. All rights reserved.
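The refined Hammett equations reported in the abstract are linear in the substituent constant σ, so prediction is a one-liner; a sketch using the abstract's fitted coefficients:

```python
def predict_pka1(sigma):
    """First dissociation constant from the refined Hammett fit
    pKa1 = 1.70 - 0.894*sigma reported in the abstract."""
    return 1.70 - 0.894 * sigma

def predict_pka2(sigma):
    """Second dissociation constant, pKa2 = 6.92 - 0.934*sigma."""
    return 6.92 - 0.934 * sigma
```

For the unsubstituted parent compound (σ = 0) the predictions are simply the intercepts, pKa1 = 1.70 and pKa2 = 6.92, and an electron-withdrawing substituent (σ > 0) lowers both values, i.e. makes the acid stronger.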

  9. High Frequency QRS ECG Accurately Detects Cardiomyopathy

    NASA Technical Reports Server (NTRS)

    Schlegel, Todd T.; Arenare, Brian; Poulin, Gregory; Moser, Daniel R.; Delgado, Reynolds

    2005-01-01

High frequency (HF, 150-250 Hz) analysis over the entire QRS interval of the ECG is more sensitive than conventional ECG for detecting myocardial ischemia. However, the accuracy of HF QRS ECG for detecting cardiomyopathy is unknown. We obtained simultaneous resting conventional and HF QRS 12-lead ECGs in 66 patients with cardiomyopathy (EF = 23.2 ± 6.1%, mean ± SD) and in 66 age- and gender-matched healthy controls using PC-based ECG software recently developed at NASA. The single most accurate ECG parameter for detecting cardiomyopathy was an HF QRS morphological score that takes into consideration the total number and severity of reduced amplitude zones (RAZs) present plus the clustering of RAZs together in contiguous leads. This RAZ score had an area under the receiver operator curve (ROC) of 0.91, and was 88% sensitive, 82% specific and 85% accurate for identifying cardiomyopathy at the optimum score cut-off of 140 points. Although conventional ECG parameters such as the QRS and QTc intervals were also significantly longer in patients than controls (P < 0.001, BBBs excluded), these conventional parameters were less accurate (area under the ROC = 0.77 for both) than HF QRS morphological parameters for identifying underlying cardiomyopathy. The total amplitude of the HF QRS complexes, as measured by summed root mean square voltages (RMSVs), also differed between patients and controls (33.8 ± 11.5 vs. 41.5 ± 13.6 mV, respectively, P < 0.003), but this parameter was even less accurate in distinguishing the two groups (area under ROC = 0.67) than the HF QRS morphologic and conventional ECG parameters. Diagnostic accuracy was optimal (86%) when the RAZ score from the HF QRS ECG and the QTc interval from the conventional ECG were used simultaneously with cut-offs of ≥ 40 points and ≥ 445 ms, respectively. In conclusion, 12-lead HF QRS ECG employing
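The sensitivity, specificity, and accuracy figures quoted above all follow from a confusion matrix evaluated at the chosen score cut-off; a generic sketch of that computation (the data below are illustrative, not the study's patient scores):

```python
def classification_metrics(scores, labels, cutoff):
    """Sensitivity, specificity, and accuracy of the rule
    'positive if score >= cutoff' (labels: 1 = diseased, 0 = healthy)."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= cutoff and y == 1)
    fn = sum(1 for s, y in zip(scores, labels) if s < cutoff and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s < cutoff and y == 0)
    fp = sum(1 for s, y in zip(scores, labels) if s >= cutoff and y == 0)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / len(labels)
    return sensitivity, specificity, accuracy
```

Sweeping the cut-off over its range and plotting sensitivity against (1 − specificity) traces the ROC curve whose area the abstract reports.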

  10. Accurate Simulation of Acoustic Emission Sources in Composite Plates

    NASA Technical Reports Server (NTRS)

    Prosser, W. H.; Gorman, M. R.

    1994-01-01

Acoustic emission (AE) signals propagate as the extensional and flexural plate modes in thin composite plates and plate-like geometries such as shells, pipes, and tubes. The relative amplitude of the two modes depends on the directionality of the source motion. For source motions with large out-of-plane components, such as delaminations or particle impact, the flexural or bending plate mode dominates the AE signal, with only a small extensional mode detected. A signal from such a source is well simulated with the standard pencil lead break (Hsu-Nielsen source) on the surface of the plate. For other sources, such as matrix cracking or fiber breakage, in which the source motion is primarily in-plane, the resulting AE signal has a large extensional mode component with little or no flexural mode observed. Signals from these types of sources can also be simulated with pencil lead breaks. However, the lead must be fractured on the edge of the plate to generate an in-plane source motion, rather than on the surface of the plate. In many applications, such as testing of pressure vessels and piping or aircraft structures, a free edge is either not available or not in a desired location for simulation of in-plane type sources. In this research, a method was developed which allows the simulation of AE signals with a predominant extensional mode component in composite plates requiring access to only the surface of the plate.

  11. Generating Accurate 3d Models of Architectural Heritage Structures Using Low-Cost Camera and Open Source Algorithms

    NASA Astrophysics Data System (ADS)

    Zacharek, M.; Delis, P.; Kedzierski, M.; Fryskowska, A.

    2017-05-01

These studies were conducted using a non-metric digital camera and dense image matching algorithms as non-contact methods of creating monument documentation. In order to process the imagery, several open-source software packages and algorithms for generating a dense point cloud from images were used: the OSM Bundler, the VisualSFM software, and the web application ARC3D. Images obtained for each of the investigated objects were processed using those applications, and then dense point clouds and textured 3D models were created. As a result of post-processing, the obtained models were filtered and scaled. The research showed that even using open-source software it is possible to obtain accurate 3D models of structures (with an accuracy of a few centimeters), but for the purpose of documentation and conservation of cultural and historical heritage, such accuracy can be insufficient.

  12. Comparing Parameter Estimation Techniques for an Electrical Power Transformer Oil Temperature Prediction Model

    NASA Technical Reports Server (NTRS)

    Morris, A. Terry

    1999-01-01

    This paper examines various sources of error in MIT's improved top oil temperature rise over ambient temperature model and estimation process. The sources of error are the current parameter estimation technique, quantization noise, and post-processing of the transformer data. Results from this paper will show that an output error parameter estimation technique should be selected to replace the current least squares estimation technique. The output error technique obtained accurate predictions of transformer behavior, revealed the best error covariance, obtained consistent parameter estimates, and provided for valid and sensible parameters. This paper will also show that the output error technique should be used to minimize errors attributed to post-processing (decimation) of the transformer data. Models used in this paper are validated using data from a large transformer in service.

  13. Modeling of the dolphin's clicking sound source: The influence of the critical parameters

    NASA Astrophysics Data System (ADS)

    Dubrovsky, N. A.; Gladilin, A.; Møhl, B.; Wahlberg, M.

    2004-07-01

Physical and mathematical models of the dolphin’s source of echolocation clicks have recently been proposed. The physical model includes a bottle of pressurized air connected to the atmosphere by an underwater rubber tube. A compressing rubber ring is placed on the underwater portion of the tube. The ring blocks the air jet passing through the tube from the bottle, and can be brought into self-oscillation by the jet. In the simplest case, the ring displacement follows a repeated triangular waveform. Because the acoustic pressure gradient is proportional to the second time derivative of the displacement, clicks arise at the bends of the displacement waveform. The mathematical model describes the dipole oscillations of a sphere “frozen” in the ring and calculates the waveform and the sound pressure of the generated clicks. The critical parameters of the mathematical model are the radius of the sphere and the peak value and duration of the triangular displacement curve. This model allows one to solve both the forward problem (deriving the properties of acoustic clicks from known source parameters) and the inverse problem (calculating the source parameters from acoustic data). Data from click records of Odontocetes were used to derive both the displacement waveforms and the size of the “frozen” sphere, or a structure functionally similar to it. The mathematical model predicts a maximum source level of up to 235 dB re 1 μPa at 1-m range when using a 5-cm radius of the “frozen” sphere and a 4-mm maximal displacement. The predicted sound pressure level is similar to that of the clicks produced by Odontocetes.
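The model's key claim, that the radiated pressure follows the second time derivative of the displacement so that a triangular displacement waveform produces clicks only at its bends, is easy to check numerically with a discrete second difference (the sampled waveform below is illustrative):

```python
def second_difference(x):
    """Discrete second time derivative of a sampled displacement."""
    return [x[i - 1] - 2 * x[i] + x[i + 1] for i in range(1, len(x) - 1)]

# A triangular displacement: linear rise, then linear fall.
displacement = [0, 1, 2, 3, 2, 1, 0]
pressure_like = second_difference(displacement)
```

The second difference vanishes on the linear segments and is non-zero only at the apex, i.e. an impulse ("click") is radiated at each bend of the triangle, exactly as the model predicts.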

  14. Model Parameter Variability for Enhanced Anaerobic Bioremediation of DNAPL Source Zones

    NASA Astrophysics Data System (ADS)

    Mao, X.; Gerhard, J. I.; Barry, D. A.

    2005-12-01

The objective of the Source Area Bioremediation (SABRE) project, an international collaboration of twelve companies, two government agencies and three research institutions, is to evaluate the performance of enhanced anaerobic bioremediation for the treatment of chlorinated ethene source areas containing dense non-aqueous phase liquids (DNAPL). This 4-year, $5.7 million research effort focuses on a pilot-scale demonstration of enhanced bioremediation at a trichloroethene (TCE) DNAPL field site in the United Kingdom, and includes a significant program of laboratory and modelling studies. Prior to field implementation, a large-scale, multi-laboratory microcosm study was performed to determine the optimal system properties to support dehalogenation of TCE in site soil and groundwater. This statistically based suite of experiments measured the influence of key variables (electron donor, nutrient addition, bioaugmentation, TCE concentration and sulphate concentration) in promoting the reductive dechlorination of TCE to ethene. In addition, a comprehensive biogeochemical numerical model was developed for simulating the anaerobic dehalogenation of chlorinated ethenes. An appropriate (reduced) version of this model was combined with a parameter estimation method based on fitting of the experimental results. Each of over 150 individual microcosm calibrations involved matching predicted and observed time-varying concentrations of all chlorinated compounds. This study focuses on an analysis of this suite of fitted model parameter values, including the statistical correlation between parameters typically employed in standard Michaelis-Menten-type rate descriptions (e.g., maximum dechlorination rates, half-saturation constants) and the key experimental variables. The analysis provides insight into the degree to which aqueous-phase TCE and cis-DCE inhibit dechlorination of less-chlorinated compounds. Overall, this work provides a database of the numerical
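The fitted parameters discussed above belong to Michaelis-Menten-type rate laws; a minimal sketch of such a rate with competitive inhibition, the kind of term that would express TCE or cis-DCE inhibiting dechlorination of less-chlorinated compounds (the functional form and parameter names are generic illustrations, not the SABRE model itself):

```python
def dechlorination_rate(s, vmax, ks, inhibitor=0.0, ki=float("inf")):
    """Michaelis-Menten rate for substrate concentration s (e.g. cis-DCE),
    with optional competitive inhibition by a more-chlorinated compound
    (e.g. TCE) at concentration `inhibitor` with inhibition constant ki.
    With the defaults (no inhibitor) this reduces to vmax*s/(ks + s)."""
    return vmax * s / (ks * (1.0 + inhibitor / ki) + s)
```

Fitting vmax, ks, and ki against time-series concentrations for each microcosm, then correlating the fitted values with the experimental variables, is the shape of the analysis the abstract describes.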

  15. Assessing the Uncertainties on Seismic Source Parameters: Towards Realistic Estimates of Moment Tensor Determinations

    NASA Astrophysics Data System (ADS)

    Magnoni, F.; Scognamiglio, L.; Tinti, E.; Casarotti, E.

    2014-12-01

The seismic moment tensor is one of the most important source parameters, defining the earthquake size and the style of the activated fault. Moment tensor catalogues are routinely used by geoscientists; however, few attempts have been made to assess the possible impact of moment magnitude uncertainties on their analyses. The 2012 May 20 Emilia mainshock is a representative event, since it is defined in the literature with moment magnitude (Mw) values spanning between 5.63 and 6.12. An uncertainty of ~0.5 units in magnitude leads to a controversial knowledge of the real size of the event. The uncertainty associated with this estimate could be critical for the inference of other seismological parameters, suggesting caution for seismic hazard assessment, Coulomb stress transfer determination, and other analyses where self-consistency is important. In this work, we focus on the variability of the moment tensor solution, highlighting the effects of four different velocity models, different types and ranges of filtering, and two different methodologies. Using a larger dataset, to better quantify the source parameter uncertainty, we also analyze the variability of the moment tensor solutions depending on the number, epicentral distance, and azimuth of the stations used. We emphasize that the estimate of seismic moment from moment tensor solutions, as well as the estimates of the other kinematic source parameters, cannot be considered absolute values and must be reported with their related uncertainties, in a reproducible framework characterized by disclosed assumptions and explicit processing workflows.

  16. Transition to a Source with Modified Physical Parameters by Energy Supply or Using an External Force

    NASA Astrophysics Data System (ADS)

    Kucherov, A. N.

    2017-11-01

We study the possibility of changing the physical parameters of a source/sink, i.e., the enthalpy, temperature, total pressure, maximum velocity, and minimum dimension, at a constant radial Mach number by energy or force action on the gas in a bounded zone. It is shown that the parameters can be controlled at subsonic, supersonic, and transonic (sonic in the limit) radial Mach numbers. In the updated source/sink, all versions of a vortex-source combination can be implemented: into a vacuum, out of a vacuum, into a submerged space, and out of a submerged space, partially or fully.

  17. OpenMC In Situ Source Convergence Detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aldrich, Garrett Allen; Dutta, Soumya; Woodring, Jonathan Lee

    2016-05-07

We designed and implemented an in situ version of particle source convergence detection for the OpenMC particle transport simulator. OpenMC is a Monte Carlo-based particle simulator for neutron criticality calculations. For the transport simulation to be accurate, source particles must converge on a spatial distribution. Typically, convergence is obtained by iterating the simulation for a user-settable, fixed number of steps, and it is assumed that convergence is achieved. We instead implement a method to detect convergence, using a stochastic oscillator to identify convergence of source particles based on their accumulated Shannon entropy. Using our in situ convergence detection, we are able to detect and begin tallying results for the full simulation once the proper source distribution has been confirmed. Our method ensures that the simulation is not started too early, by a user setting too optimistic parameters, or too late, by setting too conservative a parameter.
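The diagnostic rests on the Shannon entropy of the source distribution binned on a spatial mesh, which flattens out once the distribution stabilizes. A minimal sketch; the simple flatness test here is a simplified stand-in for the paper's stochastic-oscillator detector:

```python
import math

def shannon_entropy(bin_counts):
    """Shannon entropy (bits) of source particles binned on a spatial mesh."""
    total = sum(bin_counts)
    h = 0.0
    for c in bin_counts:
        if c > 0:
            p = c / total
            h -= p * math.log2(p)
    return h

def looks_converged(entropies, window=5, tol=1e-3):
    """Declare convergence when the batch-wise entropy has varied by
    less than tol over the last `window` batches."""
    if len(entropies) < window:
        return False
    tail = entropies[-window:]
    return max(tail) - min(tail) < tol
```

In a criticality run one would compute the entropy of each fission-source batch and only begin tallying once the series passes the convergence test, which is the "not too early, not too late" behavior the abstract describes.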

  18. Source parameters derived from seismic spectrum in the Jalisco block

    NASA Astrophysics Data System (ADS)

    Gutierrez, Q. J.; Escudero, C. R.; Nunez-Cornu, F. J.

    2012-12-01

The direct measurement of earthquake fault dimensions is a complicated task; a better approach is to use the seismic-wave spectrum, from which we can estimate the dimensions of the fault, the stress drop, and the seismic moment. The study area comprises the complex tectonic configuration of the Jalisco block and the subduction of the Rivera plate beneath the North American plate; as a consequence, some of the most harmful earthquakes and other related natural disasters occur in Jalisco. Accordingly, it is important to monitor the region and perform studies that help to understand the physics of the earthquake rupture mechanism in the area. The main purpose of this study is to estimate earthquake seismic source parameters. The data were recorded by the MARS (Mapping the Riviera Subduction Zone) and RESAJ networks. MARS had 51 stations installed in the Jalisco block, which is delimited by the Mesoamerican trench to the west, the Colima graben to the south, and the Tepic-Zacoalco rift to the north, and operated from January 1, 2006 until December 31, 2007. From this network 104 events were taken, with magnitudes between 3 and 6.5 mb. RESAJ has 10 stations within the state of Jalisco and has recorded continuously since October 2011. We first removed the trend, the mean, and the instrument response, then manually picked the S wave; the multitaper method was used to obtain the spectrum of this wave and thus estimate the corner frequency and the spectral level. We substituted these values into the equations of the Brune model to calculate the source parameters, obtaining source radii between 0.1 and 2 km and stress drops between 0.1 and 2 MPa.
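The Brune-model step can be sketched directly: the long-period spectral level fixes the seismic moment, the corner frequency fixes the source radius, and the two together give the stress drop. A minimal sketch using the standard Brune (1970) and Eshelby circular-crack relations; the density, shear-wave speed, hypocentral distance, radiation-pattern and free-surface factors below are generic illustrative values, not those of this study:

```python
import math

def brune_source_parameters(omega0, fc, rho=2700.0, beta=3500.0,
                            distance=10e3, radiation=0.55, free_surface=2.0):
    """Seismic moment (N*m), source radius (m), and stress drop (Pa)
    from the S-wave displacement spectral level omega0 (m*s) and
    corner frequency fc (Hz)."""
    # Moment from the low-frequency spectral level (Brune, 1970).
    m0 = 4.0 * math.pi * rho * beta**3 * distance * omega0 / (radiation * free_surface)
    # Brune source radius from the corner frequency.
    r = 2.34 * beta / (2.0 * math.pi * fc)
    # Eshelby circular-crack stress drop.
    stress_drop = 7.0 * m0 / (16.0 * r**3)
    return m0, r, stress_drop
```

With these illustrative values, a corner frequency near 1 Hz yields a source radius of roughly 1.3 km, consistent with the 0.1-2 km range reported in the abstract.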

  19. Application of the Approximate Bayesian Computation methods in the stochastic estimation of atmospheric contamination parameters for mobile sources

    NASA Astrophysics Data System (ADS)

    Kopka, Piotr; Wawrzynczak, Anna; Borysiewicz, Mieczyslaw

    2016-11-01

In this paper the Bayesian methodology known as Approximate Bayesian Computation (ABC) is applied to the problem of atmospheric contamination source identification. The algorithm's input data are the concentrations of the released substance arriving on-line from the distributed sensor network. This paper presents the Sequential ABC algorithm in detail and tests its efficiency in estimating the probabilistic distributions of the atmospheric release parameters of a mobile contamination source. The developed algorithms are tested using data from the Over-Land Atmospheric Diffusion (OLAD) field tracer experiment. The paper demonstrates estimation of seven parameters characterizing the contamination source, i.e.: the source starting position (x, y), the direction of motion of the source (d), its velocity (v), the release rate (q), the start time of the release (ts), and its duration (td). The newly arriving concentrations dynamically update the probability distributions of the search parameters. The atmospheric dispersion Second-order Closure Integrated PUFF (SCIPUFF) model is used as the forward model to predict the concentrations at the sensor locations.
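At its core, the ABC machinery reduces to: draw parameters from a prior, run the forward model, and keep the draws whose simulated concentrations fall within a tolerance of the observations. A toy rejection-ABC sketch with a single parameter (release rate only) and a trivial identity stand-in for the dispersion model; everything here is illustrative, and the paper's sequential refinement and SCIPUFF forward model are not reproduced:

```python
import random

def abc_rejection(observed, forward_model, prior_sample, eps, n_draws=20000):
    """Keep prior draws whose simulated observation is within eps of the data."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()
        if abs(forward_model(theta) - observed) <= eps:
            accepted.append(theta)
    return accepted

random.seed(0)
true_release_rate = 5.0
posterior = abc_rejection(
    observed=true_release_rate,
    forward_model=lambda q: q,           # stand-in for the dispersion model
    prior_sample=lambda: random.uniform(0.0, 10.0),
    eps=0.5,
)
```

The accepted sample approximates the posterior: its mean sits near the true release rate, and shrinking eps sharpens the approximation at the cost of fewer acceptances, which is precisely the trade-off the sequential variant manages.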

  20. Linking source region and ocean wave parameters with the observed primary microseismic noise

    NASA Astrophysics Data System (ADS)

    Juretzek, C.; Hadziioannou, C.

    2017-12-01

In previous studies, the contribution of Love waves to the primary microseismic noise field was found to be comparable to that of Rayleigh waves. However, so far only a few studies have analysed both wave types present in this microseismic noise band, which is known to be generated in shallow water, and the theoretical understanding has mainly evolved for Rayleigh waves only. Here, we study the relevance of different source region parameters to the observed primary microseismic noise levels of Love and Rayleigh waves simultaneously. By means of beamforming and correlation of seismic noise amplitudes with ocean wave heights in the period band between 12 and 15 s, we analysed how source areas of both wave types compare with each other around Europe. The generation effectiveness in different source regions was compared to ocean wave heights, peak ocean gravity wave propagation direction, and bathymetry. Observed Love wave noise amplitudes correlate with near-coastal ocean wave parameters comparably well to Rayleigh wave amplitudes. Some coastal regions serve as especially effective sources for one or the other wave type; these coincide not only with locations of high wave heights but also with complex bathymetry. Further, Rayleigh and Love wave noise amplitudes seem to depend equally on the local ocean wave heights, which is an indication of a coupled variation with swell height during the generation of both wave types. However, the wave-type ratio varies directionally. This observation likely hints at a spatially varying importance of different source mechanisms or structural influences. Further, the wave-type ratio is modulated depending on peak ocean wave propagation directions, which could indicate a variation of different source mechanism strengths but also hints at an imprint of an effective source radiation pattern. This emphasizes that the inclusion of both wave types may provide more constraints for understanding the acting generation mechanisms.

  1. Evaluation for relationship among source parameters of underground nuclear tests in Northern Korean Peninsula

    NASA Astrophysics Data System (ADS)

    Kim, G.; Che, I. Y.

    2017-12-01

    We evaluated the relationships among source parameters of underground nuclear tests in the northern Korean Peninsula using regional seismic data. Dense global and regional seismic networks were incorporated to measure locations and origin times precisely. Location analyses show that the distances among the event locations are tiny on a regional scale; these tiny location differences validate a linear-model assumption. We estimated source spectral ratios by excluding path effects, based on spectral ratios of the observed seismograms, and derived an empirical relationship between burial depths and yields based on theoretical source models.

  2. Do Skilled Elementary Teachers Hold Scientific Conceptions and Can They Accurately Predict the Type and Source of Students' Preconceptions of Electric Circuits?

    ERIC Educational Resources Information Center

    Lin, Jing-Wen

    2016-01-01

    Holding scientific conceptions and having the ability to accurately predict students' preconceptions are prerequisites for science teachers to design appropriate constructivist-oriented learning experiences. This study explored the types and sources of students' preconceptions of electric circuits. First, 438 grade 3 (9-year-old) students were…

  3. Earthquake Source Parameters Inferred from T-Wave Observations

    NASA Astrophysics Data System (ADS)

    Perrot, J.; Dziak, R.; Lau, T. A.; Matsumoto, H.; Goslin, J.

    2004-12-01

    The seismicity of the North Atlantic Ocean has been recorded by two networks of autonomous hydrophones moored within the SOFAR channel on the flanks of the Mid-Atlantic Ridge (MAR). In February 1999, a consortium of U.S. investigators (NSF and NOAA) deployed a 6-element hydrophone array for long-term monitoring of MAR seismicity between 15°N and 35°N, south of the Azores. In May 2002, an international collaboration of French, Portuguese, and U.S. researchers deployed a 6-element hydrophone array north of the Azores Plateau, from 40°N to 50°N. The northern network (referred to as SIRENA) was recovered in September 2003. The low attenuation properties of the SOFAR channel for earthquake T-wave propagation result in a detection threshold reduction from a magnitude completeness level (Mc) of ˜4.7 for MAR events recorded by the land-based seismic networks to Mc = 3.0 using hydrophone arrays. Detailed focal depth and mechanism information, however, remains elusive due to the complexities of seismo-acoustic propagation paths. Nonetheless, recent analyses (Dziak, 2001; Park and Odom, 2001) indicate that fault parameter information is contained within the T-wave signal packet. We investigate this relationship further by comparing an earthquake's T-wave duration and acoustic energy to seismic magnitude (NEIC) and radiation pattern (for events M>5) from the Harvard moment-tensor catalog. First results show that earthquake energy is well represented by the acoustic energy of the T-waves; however, T-wave codas are significantly influenced by acoustic propagation effects and do not allow a direct determination of the seismic magnitude of the earthquakes. Second, there appears to be a correlation between T-wave acoustic energy, azimuth from earthquake source to the hydrophone, and the radiation pattern of the earthquake's SH waves. These preliminary results indicate there is a relationship between the T-wave observations and earthquake source parameters, allowing for additional insights into T

  4. Dosimetric parameters of three new solid core I‐125 brachytherapy sources

    PubMed Central

    Solberg, Timothy D.; DeMarco, John J.; Hugo, Geoffrey; Wallace, Robert E.

    2002-01-01

    Monte Carlo calculations and TLD measurements have been performed for the purpose of characterizing the dosimetric properties of new commercially available brachytherapy sources. All sources tested consisted of a solid core, upon which a thin layer of 125I has been adsorbed, encased within a titanium housing. The PharmaSeed BT‐125 source manufactured by Syncor is available in silver or palladium core configurations, while the ADVANTAGE source from IsoAid is available with a silver core only. Dosimetric properties, including the dose rate constant, radial dose function, and anisotropy characteristics, were determined according to the TG‐43 protocol. Additionally, the geometry function was calculated exactly using Monte Carlo and compared with both the point and line source approximations. The 1999 NIST standard was followed in determining air kerma strength. Dose rate constants were calculated to be 0.955±0.005, 0.967±0.005, and 0.962±0.005 cGy h−1 U−1 for the PharmaSeed BT‐125‐1, BT‐125‐2, and ADVANTAGE sources, respectively. TLD measurements were in excellent agreement with Monte Carlo calculations. The radial dose function, g(r), calculated to a distance of 10 cm, and the anisotropy function F(r, θ), calculated for radii from 0.5 to 7.0 cm, were similar among all source configurations. Anisotropy constants, ϕ¯an, were calculated to be 0.941, 0.944, and 0.960 for the three sources, respectively. All dosimetric parameters were found to be in close agreement with previously published data for similar source configurations. The MCNP Monte Carlo code appears to be ideally suited to low-energy dosimetry applications. PACS number(s): 87.53.–j PMID:11958652
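
The point- and line-source geometry functions compared in the abstract can be sketched with the standard TG-43 formulas; this is an illustrative implementation, not the authors' code, and the active length used in the example is an assumed value:

```python
import numpy as np

def geometry_point(r):
    """Point-source approximation: G_P(r) = 1 / r^2 (r in cm)."""
    return 1.0 / r**2

def geometry_line(r, theta, L):
    """TG-43 line-source geometry function G_L(r, theta).

    The source lies on the z axis from -L/2 to +L/2; beta is the angle
    subtended by the active length at the calculation point, and theta
    is measured from the source axis (radians).
    """
    x, z = r * np.sin(theta), r * np.cos(theta)
    v1 = np.array([-x, -L / 2.0 - z])        # vector: point -> lower tip
    v2 = np.array([-x, +L / 2.0 - z])        # vector: point -> upper tip
    cosb = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    beta = np.arccos(np.clip(cosb, -1.0, 1.0))
    if np.isclose(np.sin(theta), 0.0):       # point on the source axis
        return 1.0 / (r**2 - L**2 / 4.0)
    return beta / (L * r * np.sin(theta))

# For distances much larger than an assumed active length L = 0.3 cm,
# the line- and point-source forms converge.
print(geometry_point(10.0), geometry_line(10.0, np.pi / 2, 0.3))
```

At r = 10 cm and θ = 90° the two forms agree to better than 0.01%, while at distances comparable to L the point approximation overestimates the geometry function, which is why the exact calculation matters near the seed.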

  5. Significance of accurate diffraction corrections for the second harmonic wave in determining the acoustic nonlinearity parameter

    NASA Astrophysics Data System (ADS)

    Jeong, Hyunjo; Zhang, Shuzeng; Barnard, Dan; Li, Xiongbing

    2015-09-01

    The accurate measurement of the acoustic nonlinearity parameter β for fluids or solids generally requires making corrections for diffraction effects due to the finite-size geometry of the transmitter and receiver. These effects are well known in linear acoustics, whereas those for second harmonic waves have not been well addressed and therefore were not properly considered in previous studies. In this work, we explicitly define the attenuation and diffraction corrections using the multi-Gaussian beam (MGB) equations, which were developed from the quasilinear solutions of the KZK equation. The effects of making these corrections are examined through the simulation of β determination in water. Diffraction corrections are found to have more significant effects than attenuation corrections, and the β values of water can be estimated experimentally with less than 5% error when the exact second harmonic diffraction corrections are used together with the negligible attenuation-correction effects that follow from the linear frequency dependence of the attenuation coefficients, α2 ≃ 2α1.

  6. DEEP WIDEBAND SINGLE POINTINGS AND MOSAICS IN RADIO INTERFEROMETRY: HOW ACCURATELY DO WE RECONSTRUCT INTENSITIES AND SPECTRAL INDICES OF FAINT SOURCES?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rau, U.; Bhatnagar, S.; Owen, F. N., E-mail: rurvashi@nrao.edu

    Many deep wideband wide-field radio interferometric surveys are being designed to accurately measure intensities, spectral indices, and polarization properties of faint source populations. In this paper, we compare various wideband imaging methods to evaluate the accuracy to which intensities and spectral indices of sources close to the confusion limit can be reconstructed. We simulated a wideband single-pointing (C-array, L-Band (1–2 GHz)) and 46-pointing mosaic (D-array, C-Band (4–8 GHz)) JVLA observation using a realistic brightness distribution ranging from 1 μJy to 100 mJy and time-, frequency-, polarization-, and direction-dependent instrumental effects. The main results from these comparisons are (a) errors in the reconstructed intensities and spectral indices are larger for weaker sources even in the absence of simulated noise, (b) errors are systematically lower for joint reconstruction methods (such as Multi-Term Multi-Frequency-Synthesis (MT-MFS)) along with A-Projection for accurate primary beam correction, and (c) use of MT-MFS for image reconstruction eliminates Clean-bias (which is present otherwise). Auxiliary tests include solutions for deficiencies of data partitioning methods (e.g., the use of masks to remove clean bias and hybrid methods to remove sidelobes from sources left un-deconvolved), the effect of sources not at pixel centers, and the consequences of various other numerical approximations within software implementations. This paper also demonstrates the level of detail at which such simulations must be done in order to reflect reality, enable one to systematically identify specific reasons for every trend that is observed, and to estimate scientifically defensible imaging performance metrics and the associated computational complexity of the algorithms/analysis procedures.

  7. Deep Wideband Single Pointings and Mosaics in Radio Interferometry: How Accurately Do We Reconstruct Intensities and Spectral Indices of Faint Sources?

    NASA Astrophysics Data System (ADS)

    Rau, U.; Bhatnagar, S.; Owen, F. N.

    2016-11-01

    Many deep wideband wide-field radio interferometric surveys are being designed to accurately measure intensities, spectral indices, and polarization properties of faint source populations. In this paper, we compare various wideband imaging methods to evaluate the accuracy to which intensities and spectral indices of sources close to the confusion limit can be reconstructed. We simulated a wideband single-pointing (C-array, L-Band (1-2 GHz)) and 46-pointing mosaic (D-array, C-Band (4-8 GHz)) JVLA observation using a realistic brightness distribution ranging from 1 μJy to 100 mJy and time-, frequency-, polarization-, and direction-dependent instrumental effects. The main results from these comparisons are (a) errors in the reconstructed intensities and spectral indices are larger for weaker sources even in the absence of simulated noise, (b) errors are systematically lower for joint reconstruction methods (such as Multi-Term Multi-Frequency-Synthesis (MT-MFS)) along with A-Projection for accurate primary beam correction, and (c) use of MT-MFS for image reconstruction eliminates Clean-bias (which is present otherwise). Auxiliary tests include solutions for deficiencies of data partitioning methods (e.g., the use of masks to remove clean bias and hybrid methods to remove sidelobes from sources left un-deconvolved), the effect of sources not at pixel centers, and the consequences of various other numerical approximations within software implementations. This paper also demonstrates the level of detail at which such simulations must be done in order to reflect reality, enable one to systematically identify specific reasons for every trend that is observed, and to estimate scientifically defensible imaging performance metrics and the associated computational complexity of the algorithms/analysis procedures. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.

  8. Coda Q Attenuation and Source Parameters Analysis in North East India Using Local Earthquakes

    NASA Astrophysics Data System (ADS)

    Mohapatra, A. K.; Mohanty, W. K.; Earthquake Seismology

    2010-12-01

    In the present study, the quality factor of coda waves (Qc) and the source parameters have been estimated for Northeastern India using digital data from ten local earthquakes recorded between April 2001 and November 2002. Earthquakes with magnitudes ranging from 3.8 to 4.9 have been taken into account. The time-domain coda decay method of a single back-scattering model is used to calculate frequency-dependent values of coda Q (Qc), whereas the source parameters, namely seismic moment (Mo), stress drop, source radius (r), radiant energy (Wo) and strain drop, are estimated from the displacement amplitude spectra of body waves using Brune's model. Qc was estimated at six central frequencies: 1.5 Hz, 3.0 Hz, 6.0 Hz, 9.0 Hz, 12.0 Hz, and 18.0 Hz. In the present work, the Qc values of local earthquakes are estimated to understand the attenuation characteristics, source parameters and tectonic activity of the region. Based on criteria of homogeneity in the geological characteristics and the constraints imposed by the distribution of available events, the study region has been divided into three zones: the Tibetan Plateau Zone (TPZ), the Bengal Alluvium and Arakan-Yuma Zone (BAZ), and the Shillong Plateau Zone (SPZ). Qc follows the power law Qc = Qo (f/fo)^n, where Qo is the quality factor at the reference frequency fo (1 Hz) and n is the frequency parameter, which varies from region to region. The mean values of Qc reveal a dependence on frequency, varying from 292.9 at 1.5 Hz to 4880.1 at 18 Hz. The average frequency-dependent relationship obtained for Northeastern India is Qc = 198 f^1.035, while the relationship varies from region to region, e.g., Tibetan Plateau Zone (TPZ): Qc = 226 f^1.11, Bengal Alluvium
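
The power law Qc = Qo (f/fo)^n can be recovered from Qc estimates by linear regression in log-log space. The sketch below uses synthetic values generated from the reported mean relation, not the study's actual measurements:

```python
import numpy as np

# Illustrative Qc values at the six centre frequencies used in the study,
# generated here from the reported mean relation Qc = 198 * f^1.035.
f = np.array([1.5, 3.0, 6.0, 9.0, 12.0, 18.0])   # Hz
qc = 198.0 * f**1.035

# Fit Qc = Q0 * f^n: taking logs gives log(Qc) = log(Q0) + n*log(f),
# a straight line whose slope is n and intercept is log(Q0).
n, log_q0 = np.polyfit(np.log(f), np.log(qc), 1)
q0 = np.exp(log_q0)
print(f"Q0 = {q0:.1f}, n = {n:.3f}")   # recovers Q0 ≈ 198, n ≈ 1.035
```

With real data the fitted n quantifies how strongly attenuation decreases with frequency, which is the regional discriminant discussed in the abstract.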

  9. Solar wind: Internal parameters driven by external source

    NASA Technical Reports Server (NTRS)

    Chertkov, A. D.

    1995-01-01

    A new concept for interpreting solar wind parameters is suggested. The doubling of a moving volume in the solar wind (with energy transfer across its surface comparable to its whole internal energy) is a more rapid process than pressure relaxation. The solar wind is thus unique from the point of view of the thermodynamics of irreversible processes. The presumptive source of solar wind creation, the induction electric field of solar origin, has very low entropy, so the state of interplanetary plasma must be very far from thermodynamic equilibrium. Plasma internal energy is contained mainly in non-degenerate forms (plasma waves, resonant plasma oscillations, electric currents). Microscopic oscillating electric fields in the solar wind plasma should be about 1 V/m. This allows one to describe the solar wind by simple dissipative MHD equations with a small effective mean free path (required for a hydrodynamical description) and a low electrical conductivity combined with a very large apparent thermal conductivity (required for the observed solar wind acceleration). These internal parameters are interrelated only through their origin: they are externally driven. Their relation can change during the interaction of solar wind plasma with an obstacle (planet, spacecraft). The proposed concept can be verified by special electric field measurements that do not ruin the primordial plasma state.

  10. Formation of manganese nanoclusters in a sputtering/aggregation source and the roles of individual operating parameters

    NASA Astrophysics Data System (ADS)

    Khojasteh, Malak; Kresin, Vitaly V.

    2016-12-01

    We describe the production of size-selected manganese nanoclusters using a dc magnetron sputtering/aggregation source. Since nanoparticle production is sensitive to a range of overlapping operating parameters (in particular, the sputtering discharge power, the inert-gas flow rates, and the aggregation length), we focus on a detailed map of the influence of each parameter on the average nanocluster size. In this way it is possible to identify the main contribution of each parameter to the physical processes taking place within the source. The discharge power and argon flow supply the atomic vapor, and argon also plays the crucial role in the formation of condensation nuclei via three-body collisions. However, neither the argon flow nor the discharge power has a strong effect on the average nanocluster size in the exiting beam. Here the defining role is played by the source residence time, which is governed by the helium supply and the aggregation path length. The sizes of the mass-selected nanoclusters were verified by atomic force microscopy of the deposited particles.

  11. Optical Riblet Sensor: Beam Parameter Requirements for the Probing Laser Source.

    PubMed

    Tschentscher, Juliane; Hochheim, Sven; Brüning, Hauke; Brune, Kai; Voit, Kay-Michael; Imlau, Mirco

    2016-03-30

    Beam parameters of a probing laser source in an optical riblet sensor are studied by considering the high demands on a sensor's precision and reliability for the determination of deviations in the geometrical shape of a riblet. Mandatory requirements, such as minimum intensity and light polarization, are obtained by means of detailed inspection of the optical response of the riblet using ray and wave optics; the impact of wavelength is studied. Novel measures for analyzing the riblet shape without the necessity of a measurement with a reference sample are derived; reference values for an ideal riblet structure obtained with the optical riblet sensor are given. The application of a low-cost, frequency-doubled Nd:YVO₄ laser pointer sufficient to serve as a reliable laser source in an appropriate optical riblet sensor is discussed.

  12. A new qualitative acoustic emission parameter based on Shannon's entropy for damage monitoring

    NASA Astrophysics Data System (ADS)

    Chai, Mengyu; Zhang, Zaoxiao; Duan, Quan

    2018-02-01

    An important objective of acoustic emission (AE) non-destructive monitoring is to accurately identify approaching critical damage and to avoid premature failure by means of the evolution of AE parameters. One major drawback of most parameters, such as count and rise time, is that they are strongly dependent on the threshold and other settings employed in the AE data acquisition system. This may hinder the correct reflection of the original waveform generated from AE sources and consequently make it difficult to accurately identify critical damage and early failure. In this investigation, a new qualitative AE parameter based on Shannon's entropy, the AE entropy, is proposed for damage monitoring. Since it derives from the uncertainty of the amplitude distribution of each AE waveform, it is independent of the threshold and other time-driven parameters and can characterize the original micro-structural deformations. A fatigue crack growth test on CrMoV steel and a three-point bending test on a ductile material are conducted to validate the feasibility and effectiveness of the proposed parameter. The results show that the new parameter, compared to AE amplitude, is more effective in discriminating the different damage stages and identifying the critical damage.
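
A minimal sketch of such an amplitude-distribution entropy is shown below. The histogram binning is an assumption on our part (the abstract does not specify how the amplitude distribution is discretized), and the signals are synthetic:

```python
import numpy as np

def ae_entropy(waveform, bins=64):
    """Shannon entropy (bits) of a waveform's amplitude distribution.

    Histogram the sampled amplitudes, normalize to probabilities p_i,
    and apply H = -sum(p_i * log2(p_i)). Every sample of the digitized
    waveform contributes, so no acquisition threshold is involved.
    """
    counts, _ = np.histogram(waveform, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]                    # drop empty bins (0*log 0 -> 0)
    return float(-(p * np.log2(p)).sum())

# Two synthetic test signals: a broadband burst-like record and a
# narrowband tone; their amplitude distributions differ in shape.
rng = np.random.default_rng(0)
burst = rng.normal(0.0, 1.0, 4096)
tone = np.sin(np.linspace(0.0, 40.0 * np.pi, 4096))
print(ae_entropy(burst), ae_entropy(tone))
```

Because H depends only on the shape of the amplitude distribution, two records digitized with different thresholds but the same underlying waveform yield the same value, which is the property the abstract highlights.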

  13. Near real-time estimation of the seismic source parameters in a compressed domain

    NASA Astrophysics Data System (ADS)

    Rodriguez, Ismael A. Vera

    Seismic events can be characterized by their origin time, location and moment tensor. Fast estimation of these source parameters is important in areas of geophysics such as earthquake seismology and the monitoring of seismic activity produced by volcanoes, mining operations and hydraulic injections in geothermal and oil and gas reservoirs. Most available monitoring systems estimate the source parameters in a sequential procedure: first determining origin time and location (e.g., epicentre, hypocentre or centroid of the stress glut density), and then using this information to initialize the evaluation of the moment tensor. A more efficient estimation of the source parameters requires a concurrent evaluation of the three variables. The main objective of the present thesis is to address the simultaneous estimation of the origin time, location and moment tensor of seismic events. The proposed method has the benefits of being 1) automatic, 2) continuous and, depending on the scale of application, 3) capable of providing results in real time or near real time. The inversion algorithm is based on theoretical results from sparse representation theory and compressive sensing. The feasibility of implementation is determined through the analysis of synthetic and real data examples. The numerical experiments focus on the microseismic monitoring of hydraulic fractures in oil and gas wells; however, an example using real earthquake data is also presented for validation. The thesis is complemented by a resolvability analysis of the moment tensor, targeting common monitoring geometries employed in hydraulic fracturing in oil wells. Additionally, an application of sparse representation theory to the denoising of one-component and three-component microseismicity records is presented, together with an algorithm for improved automatic time picking using non-linear inversion constraints.

  14. Comparison of TG-43 dosimetric parameters of brachytherapy sources obtained by three different versions of MCNP codes.

    PubMed

    Zaker, Neda; Zehtabian, Mehdi; Sina, Sedigheh; Koontz, Craig; Meigooni, Ali S

    2016-03-08

    Monte Carlo simulations are widely used for calculation of the dosimetric parameters of brachytherapy sources. MCNP4C2, MCNP5, MCNPX, EGS4, EGSnrc, PTRAN, and GEANT4 are among the most commonly used codes in this field. Each of these codes utilizes a cross-sectional library for the purpose of simulating different elements and materials with complex chemical compositions. The accuracies of the final outcomes of these simulations are very sensitive to the accuracies of the cross-sectional libraries. Several investigators have shown that inaccuracies of some of the cross section files have led to errors in 125I and 103Pd parameters. The purpose of this study is to compare the dosimetric parameters of sample brachytherapy sources, calculated with three different versions of the MCNP code - MCNP4C, MCNP5, and MCNPX. In these simulations for each source type, the source and phantom geometries, as well as the number of the photons, were kept identical, thus eliminating the possible uncertainties. The results of these investigations indicate that for low-energy sources such as 125I and 103Pd there are discrepancies in gL(r) values. Discrepancies up to 21.7% and 28% are observed between MCNP4C and other codes at a distance of 6 cm for 103Pd and 10 cm for 125I from the source, respectively. However, for higher energy sources, the discrepancies in gL(r) values are less than 1.1% for 192Ir and less than 1.2% for 137Cs between the three codes.

  15. Monte Carlo calculated TG-60 dosimetry parameters for the β- emitter 153Sm brachytherapy source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sadeghi, Mahdi; Taghdiri, Fatemeh; Hamed Hosseini, S.

    Purpose: The formalism recommended by Task Group 60 (TG-60) of the American Association of Physicists in Medicine (AAPM) is applicable to β sources. The radioactive, biocompatible and biodegradable 153Sm glass seed without encapsulation is a β- emitting radionuclide with a short half-life that delivers a high dose rate to the tumor in the millimeter range. This study presents the results of Monte Carlo calculations of the dosimetric parameters for the 153Sm brachytherapy source. Methods: Version 5 of the MCNP Monte Carlo radiation transport code was used to calculate two-dimensional dose distributions around the source. The dosimetric parameters of the AAPM TG-60 recommendations, including the reference dose rate, the radial dose function, the anisotropy function, and the one-dimensional anisotropy function, were obtained. Results: The dose rate at the reference point was estimated to be 9.21±0.6 cGy h−1 μCi−1. Due to the low-energy beta particles emitted from 153Sm sources, the dose fall-off profile is sharper than for other beta-emitter sources. The dosimetric parameters calculated in this study are compared to those of several beta- and photon-emitting seeds. Conclusions: The results show the advantage of the 153Sm source over the other sources because of the rapid dose fall-off of beta rays and the high dose rate at short distances from the seed. The results would be helpful in the development of radioactive implants using 153Sm seeds for brachytherapy treatment.

  16. Comparison of the Cut-and-Paste and Full Moment Tensor Methods for Estimating Earthquake Source Parameters

    NASA Astrophysics Data System (ADS)

    Templeton, D.; Rodgers, A.; Helmberger, D.; Dreger, D.

    2008-12-01

    Earthquake source parameters (seismic moment, focal mechanism and depth) are now routinely reported by various institutions and network operators. These parameters are important for seismotectonic and earthquake ground motion studies as well as the calibration of moment magnitude scales and model-based earthquake-explosion discrimination. Source parameters are often estimated from long-period three-component waveforms at regional distances using waveform modeling techniques with Green's functions computed for average plane-layered models. One widely used method is waveform inversion for the full moment tensor (Dreger and Helmberger, 1993). This method (TDMT) solves for the moment tensor elements by performing a linearized inversion in the time domain that minimizes the difference between the observed and synthetic waveforms. Errors in the seismic velocity structure inevitably arise, due either to differences in the true average plane-layered structure or to laterally varying structure. The TDMT method can account for errors in the velocity model by applying a single time shift at each station to the observed waveforms to best match the synthetics. Another method for estimating source parameters is the Cut-and-Paste (CAP) method. This method breaks the three-component regional waveforms into five windows: vertical and radial component Pnl; vertical and radial component Rayleigh wave; and transverse component Love waves. The CAP method performs a grid search over double-couple mechanisms and allows the synthetic waveforms for each phase (Pnl, Rayleigh and Love) to shift in time to account for errors in the Green's functions. Different filtering and weighting of the Pnl segment relative to the surface wave segments enhances sensitivity to source parameters; however, some bias may be introduced. This study will compare the TDMT and CAP methods in two different regions in order to better understand the advantages and limitations of each method. Firstly, we will consider the

  17. American College of Radiology-American Brachytherapy Society practice parameter for electronically generated low-energy radiation sources.

    PubMed

    Devlin, Phillip M; Gaspar, Laurie E; Buzurovic, Ivan; Demanes, D Jeffrey; Kasper, Michael E; Nag, Subir; Ouhib, Zoubir; Petit, Joshua H; Rosenthal, Seth A; Small, William; Wallner, Paul E; Hartford, Alan C

    This collaborative practice parameter technical standard was created by the American College of Radiology and the American Brachytherapy Society to guide the use of electronically generated low-energy radiation sources (ELSs), i.e., electronic X-ray sources with peak voltages up to 120 kVp used to deliver therapeutic radiation therapy. The parameter provides guidelines for utilizing ELSs, including patient selection and consent, treatment planning, and delivery processes, and reviews the published clinical data on ELS results in skin, breast, and other cancers. This technical standard recommends appropriate qualifications for the involved personnel and reviews the technical issues relating to equipment specifications as well as patient and personnel safety. Regarding educational programs, it is suggested that the training level for clinicians be equivalent to that for other radiation therapies, and that ELS treatments be delivered using the same standards of quality and safety as those in place for other forms of radiation therapy. Copyright © 2017 American Brachytherapy Society and American College of Radiology. Published by Elsevier Inc. All rights reserved.

  18. Accurate monoenergetic electron parameters of laser wakefield in a bubble model

    NASA Astrophysics Data System (ADS)

    Raheli, A.; Rahmatallahpur, S. H.

    2012-11-01

    A reliable analytical expression for the potential of plasma waves with phase velocities near the speed of light is derived. The presented spheroidal cavity model is more consistent than the previous spherical and ellipsoidal models, and it explains the mono-energetic electron trajectories more accurately, especially in the relativistic region. As a result, the quasi-mono-energetic electron output beam produced by the laser-plasma interaction can be more appropriately described with this model.

  19. The role of cognitive switching in head-up displays. [to determine pilot ability to accurately extract information from either of two sources

    NASA Technical Reports Server (NTRS)

    Fischer, E.

    1979-01-01

    The pilot's ability to accurately extract information from either one or both of two superimposed sources of information was determined. Static, aerial, color 35 mm slides of external runway environments and slides of corresponding static head-up display (HUD) symbology were used as the sources. A three channel tachistoscope was utilized to show either the HUD alone, the scene alone, or the two slides superimposed. Cognitive performance of the pilots was assessed by determining the percentage of correct answers given to two HUD related questions, two scene related questions, or one HUD and one scene related question.

  20. Reverse radiance: a fast accurate method for determining luminance

    NASA Astrophysics Data System (ADS)

    Moore, Kenneth E.; Rykowski, Ronald F.; Gangadhara, Sanjay

    2012-10-01

    Reverse ray tracing from a region of interest backward to the source has long been proposed as an efficient method of determining luminous flux. The idea is to trace rays only from where the final flux needs to be known back to the source, rather than tracing in the forward direction from the source outward to see where the light goes. Once the reverse ray reaches the source, the radiance the equivalent forward ray would have represented is determined and the resulting flux computed. Although reverse ray tracing is conceptually simple, the method critically depends upon an accurate source model in both the near and far field. An overly simplified source model, such as an ideal Lambertian surface, substantially detracts from the accuracy and thus the benefit of the method. This paper will introduce an improved method of reverse ray tracing that we call Reverse Radiance, which avoids assumptions about the source properties. The new method uses measured data from a Source Imaging Goniometer (SIG) that simultaneously measures near- and far-field luminous data. Incorporating this data into a fast reverse ray tracing integration method yields fast, accurate data for a wide variety of illumination problems.

  1. Global Source Parameters from Regional Spectral Ratios for Yield Transportability Studies

    NASA Astrophysics Data System (ADS)

    Phillips, W. S.; Fisk, M. D.; Stead, R. J.; Begnaud, M. L.; Rowe, C. A.

    2016-12-01

    We use source parameters such as moment, corner frequency and high frequency rolloff as constraints in amplitude tomography, ensuring that spectra of well-studied earthquakes are recovered using the ensuing attenuation and site term model. We correct explosion data for path and site effects using such models, which allows us to test transportability of yield estimation techniques based on our best source spectral estimates. To develop a background set of source parameters, we applied spectral ratio techniques to envelopes of a global set of regional distance recordings from over 180,000 crustal events. Corner frequencies and moment ratios were determined via inversion using all event pairs within predetermined clusters, shifting to absolute levels using independently determined regional and teleseismic moments. The moment and corner frequency results can be expressed as stress drop, which has considerable scatter, yet shows dramatic regional patterns. We observe high stress in subduction zones along S. America, S. Mexico, the Banda Sea, and associated with the Yakutat Block in Alaska. We also observe high stress at the Himalayan syntaxes, the Pamirs, eastern Iran, the Caspian, the Altai-Sayan, and the central African rift. Low stress is observed along mid ocean spreading centers, the Afar rift, patches of convergence zones such as Nicaragua, the Zagros, Tibet, and the Tien Shan, among others. Mine blasts appear as low stress events due to their low corners and steep rolloffs. Many of these anomalies have been noted by previous studies, and we plan to compare results directly. As mentioned, these results will be used to constrain tomographic imaging, but can also be used in model validation procedures similar to the use of ground truth in location problems, and, perhaps most importantly, figure heavily in quality control of local and regional distance amplitude measurements.
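
The spectral-ratio idea rests on the fact that, for closely co-located event pairs, path and site terms cancel, leaving the ratio of the two source spectra. Below is a sketch under an assumed ω² (Brune) source model; the moments and corner frequencies are hypothetical values chosen for illustration, not results from the study:

```python
import numpy as np

def brune_spectrum(f, m0, fc):
    """Brune (omega-squared) displacement source spectrum:
    flat at M0 below the corner frequency fc, falling as f^-2 above it."""
    return m0 / (1.0 + (f / fc) ** 2)

# Spectral ratio of a hypothetical co-located pair: the larger event has
# moment 1e17 N·m and corner 2 Hz, the smaller 1e15 N·m and corner 8 Hz.
f = np.logspace(-1, 2, 200)                        # 0.1 to 100 Hz
ratio = brune_spectrum(f, 1e17, 2.0) / brune_spectrum(f, 1e15, 8.0)

# The low-frequency plateau gives the moment ratio (here 100); the shape
# of the roll-off between the plateaus constrains both corner frequencies,
# which is what the pairwise inversion described above exploits.
print(ratio[0], ratio[-1])
```

Moment and corner frequency recovered this way can then be combined into a stress-drop estimate, which is the quantity mapped regionally in the abstract.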

  2. Optical Riblet Sensor: Beam Parameter Requirements for the Probing Laser Source

    PubMed Central

    Tschentscher, Juliane; Hochheim, Sven; Brüning, Hauke; Brune, Kai; Voit, Kay-Michael; Imlau, Mirco

    2016-01-01

Beam parameters of a probing laser source in an optical riblet sensor are studied in light of the high demands on the sensor's precision and reliability in determining deviations of a riblet's geometrical shape. Mandatory requirements, such as minimum intensity and light polarization, are obtained by means of a detailed inspection of the optical response of the riblet using ray and wave optics; the impact of wavelength is also studied. Novel measures for analyzing the riblet shape without requiring a measurement against a reference sample are derived; reference values for an ideal riblet structure obtained with the optical riblet sensor are given. The suitability of a low-cost, frequency-doubled Nd:YVO4 laser pointer as a reliable laser source in such an optical riblet sensor is discussed. PMID:27043567

  3. Comparison of TG‐43 dosimetric parameters of brachytherapy sources obtained by three different versions of MCNP codes

    PubMed Central

    Zaker, Neda; Sina, Sedigheh; Koontz, Craig; Meigooni, Ali S.

    2016-01-01

Monte Carlo simulations are widely used for calculating the dosimetric parameters of brachytherapy sources. MCNP4C2, MCNP5, MCNPX, EGS4, EGSnrc, PTRAN, and GEANT4 are among the most commonly used codes in this field. Each of these codes utilizes a cross-sectional library to simulate different elements and materials with complex chemical compositions. The accuracy of the final outcomes of these simulations is very sensitive to the accuracy of the cross-sectional libraries. Several investigators have shown that inaccuracies in some of the cross-section files have led to errors in ¹²⁵I and ¹⁰³Pd parameters. The purpose of this study is to compare the dosimetric parameters of sample brachytherapy sources calculated with three different versions of the MCNP code: MCNP4C, MCNP5, and MCNPX. In these simulations, for each source type the source and phantom geometries, as well as the number of photons, were kept identical, thus eliminating these possible sources of uncertainty. The results of these investigations indicate that for low-energy sources such as ¹²⁵I and ¹⁰³Pd there are discrepancies in gL(r) values. Discrepancies of up to 21.7% and 28% are observed between MCNP4C and the other codes at a distance of 6 cm for ¹⁰³Pd and 10 cm for ¹²⁵I, respectively. However, for higher-energy sources, the discrepancies in gL(r) values between the three codes are less than 1.1% for ¹⁹²Ir and less than 1.2% for ¹³⁷Cs. PACS number(s): 87.56.bg PMID:27074460
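The gL(r) values being compared are defined by the TG-43 formalism. A minimal sketch of the line-source geometry factor, the radial dose function it normalizes, and a percent-discrepancy helper; the source length and the sample dose-rate and gL values are hypothetical, chosen only to exercise the formulas:

```python
import math

def geometry_factor_line(r_cm, L_cm):
    """TG-43 line-source geometry factor G_L(r, theta0) at theta0 = 90 degrees:
    G_L = beta / (L * r), where beta is the angle subtended by the source."""
    beta = 2.0 * math.atan(L_cm / (2.0 * r_cm))
    return beta / (L_cm * r_cm)

def radial_dose_function(dose_rate_r, dose_rate_r0, r_cm, L_cm, r0_cm=1.0):
    """g_L(r) = [D(r)/D(r0)] * [G_L(r0)/G_L(r)], evaluated on the transverse axis."""
    return (dose_rate_r / dose_rate_r0) * (
        geometry_factor_line(r0_cm, L_cm) / geometry_factor_line(r_cm, L_cm))

def percent_discrepancy(g_code_a, g_code_b):
    """Relative difference (%) between two codes' g_L(r) estimates."""
    return 100.0 * abs(g_code_a - g_code_b) / g_code_b

gl_r0 = radial_dose_function(1.0, 1.0, 1.0, 0.3)  # equals 1 at r0 by construction
```

A discrepancy like the abstract's 21.7% corresponds, for example, to one code reporting gL = 0.783 where another reports 1.0 at the same distance.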

  4. Relationship between Dietary Fat Intake, Its Major Food Sources and Assisted Reproduction Parameters

    PubMed Central

    Kazemi, Ashraf; Ramezanzadeh, Fatemeh; Nasr-Esfahani, Mohammad Hosein

    2014-01-01

Background: High dietary fat consumption may alter oocyte and embryonic development. This prospective study was conducted to determine the relation between the level of dietary fat consumption, its food sources, and assisted reproduction parameters. Methods: A prospective study was conducted on 240 infertile women. During the assisted reproduction treatment cycle, fat consumption and its major food sources over the previous three months were identified. The number of retrieved oocytes, the number of metaphase II stage oocytes, fertilization rate, embryo quality, and clinical pregnancy rate were also determined. The data were analyzed using multiple regression, binary logistic regression, chi-square, and t-test; a p-value of less than 0.05 was considered significant. Results: Total fat intake, adjusted for age, body mass index, physical activity, and etiology of infertility, was positively associated with the number of retrieved oocytes and inversely associated with the rate of high-quality embryos. An inverse association was observed between sausage and turkey ham intake and the number of retrieved oocytes. Oil intake level also had an inverse association with good cleavage rate. Conclusion: The results revealed that higher levels of fat consumption tend to increase the number of retrieved oocytes and are adversely related to embryonic development. Among food sources of fat, vegetable oil, sausage, and turkey ham intake may adversely affect assisted reproduction parameters. PMID:25473630

  5. Achieving perceptually-accurate aural telepresence

    NASA Astrophysics Data System (ADS)

    Henderson, Paul D.

    Immersive multimedia requires not only realistic visual imagery but also a perceptually-accurate aural experience. A sound field may be presented simultaneously to a listener via a loudspeaker rendering system using the direct sound from acoustic sources as well as a simulation or "auralization" of room acoustics. Beginning with classical Wave-Field Synthesis (WFS), improvements are made to correct for asymmetries in loudspeaker array geometry. Presented is a new Spatially-Equalized WFS (SE-WFS) technique to maintain the energy-time balance of a simulated room by equalizing the reproduced spectrum at the listener for a distribution of possible source angles. Each reproduced source or reflection is filtered according to its incidence angle to the listener. An SE-WFS loudspeaker array of arbitrary geometry reproduces the sound field of a room with correct spectral and temporal balance, compared with classically-processed WFS systems. Localization accuracy of human listeners in SE-WFS sound fields is quantified by psychoacoustical testing. At a loudspeaker spacing of 0.17 m (equivalent to an aliasing cutoff frequency of 1 kHz), SE-WFS exhibits a localization blur of 3 degrees, nearly equal to real point sources. Increasing the loudspeaker spacing to 0.68 m (for a cutoff frequency of 170 Hz) results in a blur of less than 5 degrees. In contrast, stereophonic reproduction is less accurate with a blur of 7 degrees. The ventriloquist effect is psychometrically investigated to determine the effect of an intentional directional incongruence between audio and video stimuli. Subjects were presented with prerecorded full-spectrum speech and motion video of a talker's head as well as broadband noise bursts with a static image. The video image was displaced from the audio stimulus in azimuth by varying amounts, and the perceived auditory location measured. 
A strong bias was detectable for small angular discrepancies between audio and video stimuli for separations of less than 8
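The quoted link between loudspeaker spacing and aliasing cutoff can be checked against the common spatial-sampling estimate f_alias ≈ c / (2d). This simple formula reproduces the ~1 kHz figure for 0.17 m spacing; the stricter 170 Hz figure quoted for 0.68 m evidently reflects a different criterion in the study, so the helper below is only a rough cross-check:

```python
def wfs_aliasing_cutoff(spacing_m, c_ms=343.0):
    """Spatial-aliasing cutoff estimate for a linear loudspeaker array: f = c / (2 d)."""
    return c_ms / (2.0 * spacing_m)

f_017 = wfs_aliasing_cutoff(0.17)  # ~1 kHz, matching the abstract's figure
```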

  6. Model parameter estimations from residual gravity anomalies due to simple-shaped sources using Differential Evolution Algorithm

    NASA Astrophysics Data System (ADS)

    Ekinci, Yunus Levent; Balkaya, Çağlayan; Göktürkler, Gökhan; Turan, Seçil

    2016-06-01

An efficient approach to estimating model parameters from residual gravity data based on differential evolution (DE), a stochastic vector-based metaheuristic algorithm, is presented. We show the applicability and effectiveness of this algorithm on both synthetic and field anomalies. To our knowledge, this is the first attempt to apply DE to the estimation of parameters of residual gravity anomalies due to isolated causative sources embedded in the subsurface. The model parameters dealt with here are the amplitude coefficient (A), the depth and exact origin of the causative source (z0 and x0, respectively), and the shape factors (q and η). The error-energy maps generated for some parameter pairs successfully reveal the nature of the parameter estimation problem under consideration. Noise-free and noisy synthetic single gravity anomalies have been evaluated successfully via DE/best/1/bin, a widely used strategy in DE. Additionally, some complicated gravity anomalies caused by multiple source bodies have been considered, and the results obtained show the efficiency of the algorithm. Using the strategy applied in the synthetic examples, field anomalies observed in various mineral explorations, such as a chromite deposit (Camaguey district, Cuba), a manganese deposit (Nagpur, India), and a base metal sulphide deposit (Quebec, Canada), have then been considered to estimate the model parameters of the ore bodies. The applications show that the obtained results, such as the depths and shapes of the ore bodies, are quite consistent with those published in the literature. Uncertainty in the solutions obtained from the DE algorithm has also been investigated by a Metropolis-Hastings (M-H) sampling algorithm based on simulated annealing without a cooling schedule.
Based on the resulting histogram reconstructions of both synthetic and field data examples the algorithm has provided reliable parameter estimations being within the sampling limits of
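A minimal sketch of the DE/best/1/bin strategy named in this record, applied to a toy residual-gravity misfit for an idealized buried source. The forward model g(x) = A·z / ((x − x0)² + z²)^q, the bounds, and all parameter values are illustrative stand-ins, not the authors' implementation:

```python
import random

def forward_gravity(x, A, z, x0, q):
    """Toy residual-gravity anomaly of an idealized buried source."""
    return A * z / (((x - x0) ** 2 + z ** 2) ** q)

def misfit(params, xs, obs):
    A, z, x0, q = params
    return sum((forward_gravity(x, A, z, x0, q) - o) ** 2 for x, o in zip(xs, obs))

def de_best_1_bin(objective, bounds, pop_size=30, F=0.7, CR=0.9, gens=400, seed=7):
    """DE/best/1/bin: mutate around the current best, binomial crossover, greedy selection."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [objective(ind) for ind in pop]
    for _ in range(gens):
        best = pop[fit.index(min(fit))]
        for i in range(pop_size):
            r1, r2 = rng.sample([j for j in range(pop_size) if j != i], 2)
            j_rand = rng.randrange(dim)
            trial = list(pop[i])
            for j in range(dim):
                if rng.random() < CR or j == j_rand:
                    lo, hi = bounds[j]
                    trial[j] = min(max(best[j] + F * (pop[r1][j] - pop[r2][j]), lo), hi)
            f_trial = objective(trial)
            if f_trial <= fit[i]:  # greedy one-to-one selection
                pop[i], fit[i] = trial, f_trial
    i_best = fit.index(min(fit))
    return pop[i_best], fit[i_best]

# Synthetic anomaly from known parameters (A, z0, x0, q), then recover them with DE
xs = [x * 0.5 for x in range(-40, 41)]
true_params = (100.0, 5.0, 0.0, 1.5)
obs = [forward_gravity(x, *true_params) for x in xs]
bounds = [(1.0, 500.0), (0.5, 20.0), (-10.0, 10.0), (0.5, 3.0)]
best_params, best_misfit = de_best_1_bin(lambda p: misfit(p, xs, obs), bounds)
```

On this noise-free synthetic the misfit collapses toward zero and the horizontal origin x0 is tightly recovered; with field data the same loop would simply receive observed anomalies in place of `obs`.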

  7. New Theory for Tsunami Propagation and Estimation of Tsunami Source Parameters

    NASA Astrophysics Data System (ADS)

    Mindlin, I. M.

    2007-12-01

In numerical studies of tsunami propagation based on the shallow water equations, vertical accelerations and velocities within the sea water are neglected, so a tsunami is usually supposed to be produced by an initial free-surface displacement in the initially still sea. In the present work, a new theory for tsunami propagation across the deep sea is discussed that accounts for the vertical accelerations and velocities. The theory is based on the solutions for the water-surface displacement obtained in [Mindlin I.M. Integrodifferential equations in dynamics of a heavy layered liquid. Moscow: Nauka*Fizmatlit, 1996 (Russian)]. The solutions are valid when the horizontal dimensions of the initially disturbed area of the sea surface are much larger than the vertical displacement of the surface, which applies to earthquake tsunamis. It is shown that any tsunami is a combination of specific basic waves found analytically (not a superposition: the waves are nonlinear), and consequently the tsunami source (i.e., the initially disturbed body of water) can be described by the countable set of parameters involved in the combination. Thus the problem of theoretical reconstruction of a tsunami source is reduced to the problem of estimating these parameters. The tsunami source can be modelled approximately with a finite number of the parameters. A two-parameter model is discussed thoroughly. A method is developed for estimating the model's parameters using the arrival times of the tsunami at certain locations, the maximum wave heights obtained from tide gauge records at those locations, and the distances between the earthquake's epicentre and each of the locations. In order to evaluate the practical use of the theory, four tsunamis of different magnitude that occurred in Japan are considered.
For each of the tsunamis, the tsunami energy (E below), the duration of the tsunami source formation T, the maximum water elevation in the wave originating area H, mean radius of

  8. Advanced Method to Estimate Fuel Slosh Simulation Parameters

    NASA Technical Reports Server (NTRS)

    Schlee, Keith; Gangadharan, Sathya; Ristow, James; Sudermann, James; Walker, Charles; Hubert, Carl

    2005-01-01

    The nutation (wobble) of a spinning spacecraft in the presence of energy dissipation is a well-known problem in dynamics and is of particular concern for space missions. The nutation of a spacecraft spinning about its minor axis typically grows exponentially and the rate of growth is characterized by the Nutation Time Constant (NTC). For launch vehicles using spin-stabilized upper stages, fuel slosh in the spacecraft propellant tanks is usually the primary source of energy dissipation. For analytical prediction of the NTC this fuel slosh is commonly modeled using simple mechanical analogies such as pendulums or rigid rotors coupled to the spacecraft. Identifying model parameter values which adequately represent the sloshing dynamics is the most important step in obtaining an accurate NTC estimate. Analytic determination of the slosh model parameters has met with mixed success and is made even more difficult by the introduction of propellant management devices and elastomeric diaphragms. By subjecting full-sized fuel tanks with actual flight fuel loads to motion similar to that experienced in flight and measuring the forces experienced by the tanks these parameters can be determined experimentally. Currently, the identification of the model parameters is a laborious trial-and-error process in which the equations of motion for the mechanical analog are hand-derived, evaluated, and their results are compared with the experimental results. The proposed research is an effort to automate the process of identifying the parameters of the slosh model using a MATLAB/SimMechanics-based computer simulation of the experimental setup. Different parameter estimation and optimization approaches are evaluated and compared in order to arrive at a reliable and effective parameter identification process. To evaluate each parameter identification approach, a simple one-degree-of-freedom pendulum experiment is constructed and motion is induced using an electric motor. By applying the
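The parameter-identification loop described above can be illustrated with a deliberately simple 1-DOF pendulum analog: simulate the analog, compare against "measured" data, and pick the parameter value that minimizes the error. The dynamics, the damping value, and the grid search below stand in for the MATLAB/SimMechanics workflow and are purely illustrative:

```python
import math

def simulate_pendulum(damping, length_m=0.5, theta0=0.3, dt=0.001, steps=4000):
    """Semi-implicit Euler integration of a damped pendulum slosh analog."""
    g = 9.81
    theta, omega = theta0, 0.0
    trace = []
    for _ in range(steps):
        omega += (-(g / length_m) * math.sin(theta) - damping * omega) * dt
        theta += omega * dt
        trace.append(theta)
    return trace

def identify_damping(measured, candidates):
    """Pick the damping value whose simulated response best matches the measurement."""
    def sse(c):
        sim = simulate_pendulum(c)
        return sum((a - b) ** 2 for a, b in zip(sim, measured))
    return min(candidates, key=sse)

measured = simulate_pendulum(0.35)               # stand-in for experimental data
candidates = [i * 0.01 for i in range(0, 101)]   # grid of trial damping values
c_est = identify_damping(measured, candidates)
```

Real slosh identification replaces the grid search with an optimizer and the pendulum with a tank-specific analog, but the compare-and-update structure is the same.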

  9. A SPME-based method for rapidly and accurately measuring the characteristic parameter for DEHP emitted from PVC floorings.

    PubMed

    Cao, J; Zhang, X; Little, J C; Zhang, Y

    2017-03-01

Semivolatile organic compounds (SVOCs) are present in many indoor materials. SVOC emissions can be characterized by a critical parameter, y0, the gas-phase SVOC concentration in equilibrium with the source material. To reduce the required time and improve the accuracy of existing methods for measuring y0, we developed a new method which uses solid-phase microextraction (SPME) to measure the concentration of an SVOC emitted by source material placed in a sealed chamber. Taking one typical indoor SVOC, di-(2-ethylhexyl) phthalate (DEHP), as the example, the experimental time was shortened from several days (even several months) to about 1 day, with relative errors of less than 5%. The measured y0 values agree well with the results obtained by independent methods. The saturated gas-phase concentration (ysat) of DEHP was also measured. Based on the Clausius-Clapeyron equation, a correlation that reveals the effects of temperature, the mass fraction of DEHP in the source material, and ysat on y0 was established. The proposed method together with the correlation should be useful in estimating and controlling human exposure to indoor DEHP. The applicability of the present approach to other SVOCs and other SVOC source materials requires further study. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
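A sketch of the Clausius-Clapeyron step mentioned above: fitting ln(ysat) = A − B/T through two measurements and scaling by mass fraction. The two sample points and the simple proportional form y0 ≈ w·ysat are hypothetical, used only to show the shape of such a correlation:

```python
import math

def fit_clausius_clapeyron(t1_k, y1, t2_k, y2):
    """Fit ln(y_sat) = A - B/T through two (temperature K, concentration) points."""
    B = math.log(y1 / y2) / (1.0 / t2_k - 1.0 / t1_k)
    A = math.log(y1) + B / t1_k
    return A, B

def y_sat(t_k, A, B):
    """Saturated gas-phase concentration at temperature t_k."""
    return math.exp(A - B / t_k)

def y0_estimate(t_k, mass_fraction, A, B):
    """Hypothetical proportional form: y0 scales with the DEHP mass fraction."""
    return mass_fraction * y_sat(t_k, A, B)

# Hypothetical measurements (ug/m^3) at 25 and 35 degrees C
A, B = fit_clausius_clapeyron(298.0, 1.0, 308.0, 3.0)
```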

  10. Parameter estimation for compact binary coalescence signals with the first generation gravitational-wave detector network

    NASA Astrophysics Data System (ADS)

    Aasi, J.; Abadie, J.; Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M.; Accadia, T.; Acernese, F.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Ajith, P.; Allen, B.; Allocca, A.; Amador Ceron, E.; Amariutei, D.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Araya, M. C.; Ast, S.; Aston, S. M.; Astone, P.; Atkinson, D.; Aufmuth, P.; Aulbert, C.; Aylott, B. E.; Babak, S.; Baker, P.; Ballardin, G.; Ballmer, S.; Bao, Y.; Barayoga, J. C. B.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barton, M. A.; Bartos, I.; Bassiri, R.; Bastarrika, M.; Basti, A.; Batch, J.; Bauchrowitz, J.; Bauer, Th. S.; Bebronne, M.; Beck, D.; Behnke, B.; Bejger, M.; Beker, M. G.; Bell, A. S.; Bell, C.; Belopolski, I.; Benacquista, M.; Berliner, J. M.; Bertolini, A.; Betzwieser, J.; Beveridge, N.; Beyersdorf, P. T.; Bhadbade, T.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Biswas, R.; Bitossi, M.; Bizouard, M. A.; Black, E.; Blackburn, J. K.; Blackburn, L.; Blair, D.; Bland, B.; Blom, M.; Bock, O.; Bodiya, T. P.; Bogan, C.; Bond, C.; Bondarescu, R.; Bondu, F.; Bonelli, L.; Bonnand, R.; Bork, R.; Born, M.; Boschi, V.; Bose, S.; Bosi, L.; Bouhou, B.; Braccini, S.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Breyer, J.; Briant, T.; Bridges, D. O.; Brillet, A.; Brinkmann, M.; Brisson, V.; Britzger, M.; Brooks, A. F.; Brown, D. A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Burguet–Castell, J.; Buskulic, D.; Buy, C.; Byer, R. L.; Cadonati, L.; Cagnoli, G.; Calloni, E.; Camp, J. B.; Campsie, P.; Cannon, K.; Canuel, B.; Cao, J.; Capano, C. D.; Carbognani, F.; Carbone, L.; Caride, S.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C.; Cesarini, E.; Chalermsongsak, T.; Charlton, P.; Chassande-Mottin, E.; Chen, W.; Chen, X.; Chen, Y.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Chow, J.; Christensen, N.; Chua, S. S. Y.; Chung, C. T. Y.; Chung, S.; Ciani, G.; Clara, F.; Clark, D. 
E.; Clark, J. A.; Clayton, J. H.; Cleva, F.; Coccia, E.; Cohadon, P.-F.; Colacino, C. N.; Colla, A.; Colombini, M.; Conte, A.; Conte, R.; Cook, D.; Corbitt, T. R.; Cordier, M.; Cornish, N.; Corsi, A.; Costa, C. A.; Coughlin, M.; Coulon, J.-P.; Couvares, P.; Coward, D. M.; Cowart, M.; Coyne, D. C.; Creighton, J. D. E.; Creighton, T. D.; Cruise, A. M.; Cumming, A.; Cunningham, L.; Cuoco, E.; Cutler, R. M.; Dahl, K.; Damjanic, M.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Dattilo, V.; Daudert, B.; Daveloza, H.; Davier, M.; Daw, E. J.; Dayanga, T.; De Rosa, R.; DeBra, D.; Debreczeni, G.; Degallaix, J.; Del Pozzo, W.; Dent, T.; Dergachev, V.; DeRosa, R.; Dhurandhar, S.; Di Fiore, L.; Di Lieto, A.; Di Palma, I.; Di Paolo Emilio, M.; Di Virgilio, A.; Díaz, M.; Dietz, A.; Donovan, F.; Dooley, K. L.; Doravari, S.; Dorsher, S.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Dumas, J.-C.; Dwyer, S.; Eberle, T.; Edgar, M.; Edwards, M.; Effler, A.; Ehrens, P.; Endrőczi, G.; Engel, R.; Etzel, T.; Evans, K.; Evans, M.; Evans, T.; Factourovich, M.; Fafone, V.; Fairhurst, S.; Farr, B. F.; Farr, W. M.; Favata, M.; Fazi, D.; Fehrmann, H.; Feldbaum, D.; Feroz, F.; Ferrante, I.; Ferrini, F.; Fidecaro, F.; Finn, L. S.; Fiori, I.; Fisher, R. P.; Flaminio, R.; Foley, S.; Forsi, E.; Forte, L. A.; Fotopoulos, N.; Fournier, J.-D.; Franc, J.; Franco, S.; Frasca, S.; Frasconi, F.; Frede, M.; Frei, M. A.; Frei, Z.; Freise, A.; Frey, R.; Fricke, T. T.; Friedrich, D.; Fritschel, P.; Frolov, V. V.; Fujimoto, M.-K.; Fulda, P. J.; Fyffe, M.; Gair, J.; Galimberti, M.; Gammaitoni, L.; Garcia, J.; Garufi, F.; Gáspár, M. E.; Gelencser, G.; Gemme, G.; Genin, E.; Gennai, A.; Gergely, L. Á.; Ghosh, S.; Giaime, J. A.; Giampanis, S.; Giardina, K. D.; Giazotto, A.; Gil-Casanova, S.; Gill, C.; Gleason, J.; Goetz, E.; González, G.; Gorodetsky, M. L.; Goßler, S.; Gouaty, R.; Graef, C.; Graff, P. B.; Granata, M.; Grant, A.; Gray, C.; Greenhalgh, R. J. S.; Gretarsson, A. 
M.; Griffo, C.; Grote, H.; Grover, K.; Grunewald, S.; Guidi, G. M.; Guido, C.; Gupta, R.; Gustafson, E. K.; Gustafson, R.; Hallam, J. M.; Hammer, D.; Hammond, G.; Hanks, J.; Hanna, C.; Hanson, J.; Harms, J.; Harry, G. M.; Harry, I. W.; Harstad, E. D.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Hayama, K.; Hayau, J.-F.; Heefner, J.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M. A.; Heng, I. S.; Heptonstall, A. W.; Herrera, V.; Heurs, M.; Hewitson, M.; Hild, S.; Hoak, D.; Hodge, K. A.; Holt, K.; Holtrop, M.; Hong, T.; Hooper, S.; Hough, J.; Howell, E. J.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Ingram, D. R.; Inta, R.; Isogai, T.; Ivanov, A.; Izumi, K.; Jacobson, M.; James, E.; Jang, Y. J.; Jaranowski, P.; Jesse, E.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; Kalmus, P.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Kasprzack, M.; Kasturi, R.; Katsavounidis, E.; Katzman, W.; Kaufer, H.; Kaufman, K.; Kawabe, K.; Kawamura, S.; Kawazoe, F.; Keitel, D.; Kelley, D.; Kells, W.; Keppel, D. G.; Keresztes, Z.; Khalaidovski, A.; Khalili, F. Y.; Khazanov, E. A.; Kim, B. K.; Kim, C.; Kim, H.; Kim, K.; Kim, N.; Kim, Y. M.; King, P. J.; Kinzel, D. L.; Kissel, J. S.; Klimenko, S.; Kline, J.; Kokeyama, K.; Kondrashov, V.; Koranda, S.; Korth, W. Z.; Kowalska, I.; Kozak, D.; Kringel, V.; Krishnan, B.; Królak, A.; Kuehn, G.; Kumar, P.; Kumar, R.; Kurdyumov, R.; Kwee, P.; Lam, P. K.; Landry, M.; Langley, A.; Lantz, B.; Lastzka, N.; Lawrie, C.; Lazzarini, A.; Le Roux, A.; Leaci, P.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Leong, J. R.; Leonor, I.; Leroy, N.; Letendre, N.; Lhuillier, V.; Li, J.; Li, T. G. F.; Lindquist, P. E.; Litvine, V.; Liu, Y.; Liu, Z.; Lockerbie, N. A.; Lodhia, D.; Logue, J.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J.; Lubinski, M.; Lück, H.; Lundgren, A. P.; Macarthur, J.; Macdonald, E.; Machenschalk, B.; MacInnis, M.; Macleod, D. 
M.; Mageswaran, M.; Mailand, K.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandel, I.; Mandic, V.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A.; Maros, E.; Marque, J.; Martelli, F.; Martin, I. W.; Martin, R. M.; Marx, J. N.; Mason, K.; Masserot, A.; Matichard, F.; Matone, L.; Matzner, R. A.; Mavalvala, N.; Mazzolo, G.; McCarthy, R.; McClelland, D. E.; McGuire, S. C.; McIntyre, G.; McIver, J.; Meadors, G. D.; Mehmet, M.; Meier, T.; Melatos, A.; Melissinos, A. C.; Mendell, G.; Menéndez, D. F.; Mercer, R. A.; Meshkov, S.; Messenger, C.; Meyer, M. S.; Miao, H.; Michel, C.; Milano, L.; Miller, J.; Minenkov, Y.; Mingarelli, C. M. F.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moe, B.; Mohan, M.; Mohapatra, S. R. P.; Moraru, D.; Moreno, G.; Morgado, N.; Morgia, A.; Mori, T.; Morriss, S. R.; Mosca, S.; Mossavi, K.; Mours, B.; Mow–Lowry, C. M.; Mueller, C. L.; Mueller, G.; Mukherjee, S.; Mullavey, A.; Müller-Ebhardt, H.; Munch, J.; Murphy, D.; Murray, P. G.; Mytidis, A.; Nash, T.; Naticchioni, L.; Necula, V.; Nelson, J.; Neri, I.; Newton, G.; Nguyen, T.; Nishizawa, A.; Nitz, A.; Nocera, F.; Nolting, D.; Normandin, M. E.; Nuttall, L.; Ochsner, E.; O'Dell, J.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Oldenberg, R. G.; O'Reilly, B.; O'Shaughnessy, R.; Osthelder, C.; Ott, C. D.; Ottaway, D. J.; Ottens, R. S.; Overmier, H.; Owen, B. J.; Page, A.; Palladino, L.; Palomba, C.; Pan, Y.; Pankow, C.; Paoletti, F.; Paoletti, R.; Papa, M. A.; Parisi, M.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Pedraza, M.; Penn, S.; Perreca, A.; Persichetti, G.; Phelps, M.; Pichot, M.; Pickenpack, M.; Piergiovanni, F.; Pierro, V.; Pihlaja, M.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Pletsch, H. J.; Plissi, M. V.; Poggiani, R.; Pöld, J.; Postiglione, F.; Poux, C.; Prato, M.; Predoi, V.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prodi, G. A.; Prokhorov, L. 
G.; Puncken, O.; Punturo, M.; Puppo, P.; Quetschke, V.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Rácz, I.; Radkins, H.; Raffai, P.; Rakhmanov, M.; Ramet, C.; Rankins, B.; Rapagnani, P.; Raymond, V.; Re, V.; Reed, C. M.; Reed, T.; Regimbau, T.; Reid, S.; Reitze, D. H.; Ricci, F.; Riesen, R.; Riles, K.; Roberts, M.; Robertson, N. A.; Robinet, F.; Robinson, C.; Robinson, E. L.; Rocchi, A.; Roddy, S.; Rodriguez, C.; Rodruck, M.; Rolland, L.; Rollins, J. G.; Romano, R.; Romie, J. H.; Rosińska, D.; Röver, C.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Salemi, F.; Sammut, L.; Sandberg, V.; Sankar, S.; Sannibale, V.; Santamaría, L.; Santiago-Prieto, I.; Santostasi, G.; Saracco, E.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Savage, R. L.; Schilling, R.; Schnabel, R.; Schofield, R. M. S.; Schulz, B.; Schutz, B. F.; Schwinberg, P.; Scott, J.; Scott, S. M.; Seifert, F.; Sellers, D.; Sentenac, D.; Sergeev, A.; Shaddock, D. A.; Shaltev, M.; Shapiro, B.; Shawhan, P.; Shoemaker, D. H.; Sidery, T. L.; Siemens, X.; Sigg, D.; Simakov, D.; Singer, A.; Singer, L.; Sintes, A. M.; Skelton, G. R.; Slagmolen, B. J. J.; Slutsky, J.; Smith, J. R.; Smith, M. R.; Smith, R. J. E.; Smith-Lefebvre, N. D.; Somiya, K.; Sorazu, B.; Speirits, F. C.; Sperandio, L.; Stefszky, M.; Steinert, E.; Steinlechner, J.; Steinlechner, S.; Steplewski, S.; Stochino, A.; Stone, R.; Strain, K. A.; Strigin, S. E.; Stroeer, A. S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sung, M.; Susmithan, S.; Sutton, P. J.; Swinkels, B.; Szeifert, G.; Tacca, M.; Taffarello, L.; Talukder, D.; Tanner, D. B.; Tarabrin, S. P.; Taylor, R.; ter Braack, A. P. M.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Thüring, A.; Titsler, C.; Tokmakov, K. V.; Tomlinson, C.; Toncelli, A.; Tonelli, M.; Torre, O.; Torres, C. V.; Torrie, C. I.; Tournefier, E.; Travasso, F.; Traylor, G.; Tse, M.; Ugolini, D.; Vahlbruch, H.; Vajente, G.; van den Brand, J. F. J.; Van Den Broeck, C.; van der Putten, S.; van Veggel, A. 
A.; Vass, S.; Vasuth, M.; Vaulin, R.; Vavoulidis, M.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Verkindt, D.; Vetrano, F.; Viceré, A.; Villar, A. E.; Vinet, J.-Y.; Vitale, S.; Vocca, H.; Vorvick, C.; Vyatchanin, S. P.; Wade, A.; Wade, L.; Wade, M.; Waldman, S. J.; Wallace, L.; Wan, Y.; Wang, M.; Wang, X.; Wanner, A.; Ward, R. L.; Was, M.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Welborn, T.; Wen, L.; Wessels, P.; West, M.; Westphal, T.; Wette, K.; Whelan, J. T.; Whitcomb, S. E.; White, D. J.; Whiting, B. F.; Wiesner, K.; Wilkinson, C.; Willems, P. A.; Williams, L.; Williams, R.; Willke, B.; Wimmer, M.; Winkelmann, L.; Winkler, W.; Wipf, C. C.; Wiseman, A. G.; Wittel, H.; Woan, G.; Wooley, R.; Worden, J.; Yablon, J.; Yakushin, I.; Yamamoto, H.; Yamamoto, K.; Yancey, C. C.; Yang, H.; Yeaton-Massey, D.; Yoshida, S.; Yvert, M.; Zadrożny, A.; Zanolin, M.; Zendri, J.-P.; Zhang, F.; Zhang, L.; Zhao, C.; Zotov, N.; Zucker, M. E.; Zweizig, J.

    2013-09-01

Compact binary systems with neutron stars or black holes are among the most promising sources for ground-based gravitational-wave detectors. Gravitational radiation encodes rich information about source physics; thus parameter estimation and model selection are crucial analysis steps for any candidate detection. Detailed models of the anticipated waveforms enable inference on several parameters, such as component masses, spins, sky location and distance, that are essential for new astrophysical studies of these sources. However, accurate measurements of these parameters and discrimination of models describing the underlying physics are complicated by artifacts in the data, uncertainties in the waveform models, and uncertainties in the calibration of the detectors. Here we report such measurements on a selection of simulated signals added either in hardware or software to the data collected by the two LIGO instruments and the Virgo detector during their most recent joint science run, including a “blind injection” where the signal was not initially revealed to the collaboration. We exemplify the ability to extract information about the source physics from signals that cover the neutron-star and black-hole binary parameter space over the component mass range 1-25 M⊙ and the full range of spin parameters. The cases reported in this study provide a snapshot of the status of parameter estimation in preparation for the operation of advanced detectors.
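Of the component-mass parameters mentioned, the combination best measured from the inspiral phase is the chirp mass; a one-line sketch, with an illustrative 1.4 + 1.4 M⊙ double neutron star as the example:

```python
def chirp_mass(m1, m2):
    """Chirp mass M_c = (m1*m2)**(3/5) / (m1+m2)**(1/5), in the units of m1 and m2."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

mc_bns = chirp_mass(1.4, 1.4)  # canonical double neutron star, ~1.22 solar masses
```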

  11. Accurate reconstruction of the optical parameter distribution in participating medium based on the frequency-domain radiative transfer equation

    NASA Astrophysics Data System (ADS)

    Qiao, Yao-Bin; Qi, Hong; Zhao, Fang-Zhou; Ruan, Li-Ming

    2016-12-01

Reconstructing the distribution of optical parameters in a participating medium based on the frequency-domain radiative transfer equation (FD-RTE), in order to probe the internal structure of the medium, is investigated in the present work. The forward model of FD-RTE is solved via the finite volume method (FVM). A regularization term formulated with the generalized Gaussian Markov random field model is used in the objective function to overcome the ill-posed nature of the inverse problem. The multi-start conjugate gradient (MCG) method is employed to search for the minimum of the objective function and increase the efficiency of convergence. A modified adjoint differentiation technique using the collimated radiative intensity is developed to calculate the gradient of the objective function with respect to the optical parameters. All simulation results show that the proposed reconstruction algorithm based on FD-RTE can accurately recover the distributions of the absorption and scattering coefficients. The reconstructed images of the scattering coefficient have smaller errors than those of the absorption coefficient, which indicates the former is more suitable for probing the inner structure. Project supported by the National Natural Science Foundation of China (Grant No. 51476043), the Major National Scientific Instruments and Equipment Development Special Foundation of China (Grant No. 51327803), and the Foundation for Innovative Research Groups of the National Natural Science Foundation of China (Grant No. 51121004).
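The regularized-inversion machinery described here (data misfit plus a regularization term, minimized by conjugate gradients with an adjoint-computed gradient) reduces, in the linear case, to solving Tikhonov-regularized normal equations. A minimal linear-CG sketch under that simplification; the tiny Jacobian and damping value are illustrative, not from the FD-RTE problem:

```python
import numpy as np

def cg_solve(apply_A, b, n_iter=50, tol=1e-12):
    """Conjugate-gradient solve of A x = b for a symmetric positive-definite operator A."""
    x = np.zeros_like(b)
    r = b - apply_A(x)
    p = r.copy()
    rs = float(r @ r)
    for _ in range(n_iter):
        Ap = apply_A(p)
        alpha = rs / float(p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = float(r @ r)
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Tikhonov-regularized normal equations: (J^T J + lam I) m = J^T d
J = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
m_true = np.array([0.5, -0.25])
d = J @ m_true                      # noise-free synthetic data
lam = 1e-6                          # small damping, playing the regularizer's role
m_est = cg_solve(lambda v: J.T @ (J @ v) + lam * v, J.T @ d)
```

The matrix-free `apply_A` callback mirrors how adjoint codes supply Jᵀ(J·) products without ever forming the Jacobian explicitly.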

  12. Regional propagation characteristics and source parameters of earthquakes in northeastern North America

    USGS Publications Warehouse

    Boatwright, John

    1994-01-01

The vertical components of the S wave trains recorded on the Eastern Canadian Telemetered Network (ECTN) from 1980 through 1990 have been spectrally analyzed for source, site, and propagation characteristics. The data set comprises some 1033 recordings of 97 earthquakes whose magnitudes range from M ≈ 3 to 6. The epicentral distances range from 15 to 1000 km, with most of the data set recorded at distances from 200 to 800 km. The recorded S wave trains contain the phases S, SmS, Sn, and Lg and are sampled using windows that increase with distance; the acceleration spectra were analyzed from 1.0 to 10 Hz. To separate the source, site, and propagation characteristics, an inversion for the earthquake corner frequencies, low-frequency levels, and average attenuation parameters is alternated with a regression of residuals onto the set of stations and a grid of 14 distances ranging from 25 to 1000 km. The iteration between these two parts of the inversion converges in about 60 steps. The average attenuation parameters obtained from the inversion were Q = 1997 ± 10 and γ = 0.998 ± 0.003. The most pronounced variation from this average attenuation is a marked deamplification of more than a factor of 2 at 63 km and 2 Hz, which shallows with increasing frequency and increasing distance out to 200 km. The site-response spectra obtained for the ECTN stations are generally flat. The source spectral shape assumed in this inversion provides an adequate spectral model for the smaller events (M0 < 3 × 10²¹ dyne-cm) in the data set, whose Brune stress drops range from 5 to 150 bars. For the five events in the data set with M0 ≥ 10²³ dyne-cm, however, the source spectra obtained by regressing the residuals suggest that an ω² spectrum is an inadequate model for the spectral shape. In particular, the corner frequencies for most of these large events appear to be split, so that the spectra exhibit an intermediate behavior (where |ü(ω)| is roughly proportional to ω).
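The spectral model underlying this kind of inversion combines a Brune ω⁻² source with geometrical spreading r^(−γ) and anelastic attenuation. A sketch using the inverted averages Q ≈ 1997 and γ ≈ 0.998; the Ω0, corner frequency, shear velocity, and distance values are illustrative assumptions:

```python
import math

def s_wave_accel_spectrum(f_hz, omega0, fc_hz, r_km, q_factor=1997.0,
                          gamma=0.998, beta_kms=3.6):
    """|a(f)| = (2 pi f)^2 * Omega0 / (1 + (f/fc)^2) * r^(-gamma) * exp(-pi f r / (Q beta))."""
    source = (2.0 * math.pi * f_hz) ** 2 * omega0 / (1.0 + (f_hz / fc_hz) ** 2)
    path = r_km ** (-gamma) * math.exp(-math.pi * f_hz * r_km / (q_factor * beta_kms))
    return source * path

# Below the corner frequency the acceleration spectrum rises roughly as f^2
a_low = s_wave_accel_spectrum(1.0, 1.0, 4.0, 200.0)
a_mid = s_wave_accel_spectrum(2.0, 1.0, 4.0, 200.0)
```

With Q this high, the attenuation term barely bends the spectrum over 200 km in the 1-10 Hz band, consistent with the efficient propagation the record describes for northeastern North America.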

  13. Estimation of Nutation Time Constant Model Parameters for On-Axis Spinning Spacecraft

    NASA Technical Reports Server (NTRS)

    Schlee, Keith; Sudermann, James

    2008-01-01

Calculating an accurate nutation time constant for a spinning spacecraft is an important step for ensuring mission success. Spacecraft nutation is caused by energy dissipation about the spin axis. Propellant slosh in the spacecraft fuel tanks is the primary source of this dissipation and can be simulated using a forced-motion spin table. Mechanical analogs, such as pendulums and rotors, are typically used to simulate propellant slosh. A strong desire exists for an automated method to determine these analog parameters. The method presented accomplishes this task by using a MATLAB Simulink/SimMechanics-based simulation that utilizes the Parameter Estimation Tool.

  14. Estimation and applicability of attenuation characteristics for source parameters and scaling relations in the Garhwal Kumaun Himalaya region, India

    NASA Astrophysics Data System (ADS)

    Singh, Rakesh; Paul, Ajay; Kumar, Arjun; Kumar, Parveen; Sundriyal, Y. P.

    2018-06-01

    Source parameters of small to moderate earthquakes are significant for understanding the dynamic rupture process, the scaling relations of earthquakes, and for assessment of the seismic hazard potential of a region. In this study, source parameters were determined for 58 small to moderate size earthquakes (3.0 ≤ Mw ≤ 5.0) that occurred during 2007-2015 in the Garhwal-Kumaun region. The estimated shear wave quality factor (Qβ(f)) values for each station at different frequencies have been applied to eliminate any bias in the determination of source parameters. The Qβ(f) values have been estimated using the coda wave normalization method in the frequency range 1.5-16 Hz. A frequency-dependent S wave quality factor relation is obtained as Qβ(f) = (152.9 ± 7) f^(0.82±0.005) by fitting a power-law frequency dependence model to the estimated values over the whole study region. The spectral (low-frequency spectral level and corner frequency) and source (static stress drop, seismic moment, apparent stress and radiated energy) parameters are obtained assuming an ω^-2 source model. The displacement spectra are corrected for the estimated frequency-dependent attenuation and for site effects using the spectral decay parameter "kappa". The frequency resolution limit was addressed by quantifying the bias in corner frequency, stress drop and radiated energy estimates due to the finite-bandwidth effect. The data of the region show shallow-focus earthquakes with low stress drop. The estimation of the Zúñiga parameter (ε) suggests a partial stress drop mechanism in the region. The observed low stress drop and apparent stress can be explained by the partial stress drop and low effective stress models. The presence of subsurface fluid at seismogenic depth certainly influences the dynamics of the region. However, the limited event selection may strongly bias the scaling relation, even after taking every possible precaution in accounting for the effects of finite bandwidth, attenuation and site corrections.
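
    A power-law attenuation relation of the form Qβ(f) = Q0 f^n, like the one reported above, is typically obtained by linear regression in log-log space. A minimal sketch (using synthetic Q values that follow the reported relation, not the study's data):

```python
import numpy as np

def fit_q_powerlaw(freqs, q_values):
    """Fit Q(f) = Q0 * f**n by least squares in log-log space.
    Returns (Q0, n)."""
    n, log_q0 = np.polyfit(np.log(freqs), np.log(q_values), 1)
    return float(np.exp(log_q0)), float(n)

# Synthetic Q estimates following Q(f) = 152.9 * f**0.82 exactly
f = np.array([1.5, 3.0, 6.0, 9.0, 12.0, 16.0])
q = 152.9 * f ** 0.82
q0, n = fit_q_powerlaw(f, q)
```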

  15. Consistency of VDJ Rearrangement and Substitution Parameters Enables Accurate B Cell Receptor Sequence Annotation.

    PubMed

    Ralph, Duncan K; Matsen, Frederick A

    2016-01-01

    VDJ rearrangement and somatic hypermutation work together to produce antibody-coding B cell receptor (BCR) sequences for a remarkable diversity of antigens. It is now possible to sequence these BCRs in high throughput; analysis of these sequences is bringing new insight into how antibodies develop, in particular for broadly-neutralizing antibodies against HIV and influenza. A fundamental step in such sequence analysis is to annotate each base as coming from a specific one of the V, D, or J genes, or from an N-addition (a.k.a. non-templated insertion). Previous work has used simple parametric distributions to model transitions from state to state in a hidden Markov model (HMM) of VDJ recombination, and assumed that mutations occur via the same process across sites. However, codon frame and other effects have been observed to violate these parametric assumptions for such coding sequences, suggesting that a non-parametric approach to modeling the recombination process could be useful. In our paper, we find that indeed large modern data sets suggest a model using parameter-rich per-allele categorical distributions for HMM transition probabilities and per-allele-per-position mutation probabilities, and that using such a model for inference leads to significantly improved results. We present an accurate and efficient BCR sequence annotation software package using a novel HMM "factorization" strategy. This package, called partis (https://github.com/psathyrella/partis/), is built on a new general-purpose HMM compiler that can perform efficient inference given a simple text description of an HMM.
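
    The parameter-rich categorical distributions referred to above are, in essence, smoothed empirical frequencies estimated separately per allele, with no parametric shape imposed. A toy sketch of the idea (hypothetical deletion-length data; not the partis implementation):

```python
from collections import Counter

def categorical_probs(observations, smoothing=1.0):
    """Estimate a categorical distribution (e.g. over 3' deletion lengths
    for one allele) directly from counts, with additive smoothing."""
    counts = Counter(observations)
    support = list(range(min(counts), max(counts) + 1))
    total = len(observations) + smoothing * len(support)
    return {k: (counts.get(k, 0) + smoothing) / total for k in support}

# Hypothetical deletion lengths observed for a single V allele
deletion_lengths = [0, 1, 1, 2, 2, 2, 3, 5]
probs = categorical_probs(deletion_lengths)
```

    Unlike a geometric or other parametric fit, nothing constrains the shape of this distribution, which is the point the abstract makes about large modern data sets.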

  16. Noise disturbance in open-plan study environments: a field study on noise sources, student tasks and room acoustic parameters.

    PubMed

    Braat-Eggen, P Ella; van Heijst, Anne; Hornikx, Maarten; Kohlrausch, Armin

    2017-09-01

    The aim of this study is to gain more insight into the assessment of noise in open-plan study environments and to reveal correlations between the noise disturbance experienced by students and the noise sources they perceive, the tasks they perform and the acoustic parameters of the open-plan study environment they work in. Data were collected in five open-plan study environments at universities in the Netherlands. A questionnaire was used to investigate student tasks, perceived sound sources and their perceived disturbance, and sound measurements were performed to determine the room acoustic parameters. This study shows that 38% of the surveyed students are disturbed by background noise in an open-plan study environment. Students are mostly disturbed by speech when performing complex cognitive tasks like studying for an exam, reading and writing. Significant but weak correlations were found between the room acoustic parameters and the noise disturbance of students. Practitioner Summary: A field study was conducted to gain more insight into the assessment of noise in open-plan study environments at universities in the Netherlands. More than one third of the students were disturbed by noise. An interaction effect was found for task type, source type and room acoustic parameters.

  17. A robust and accurate center-frequency estimation (RACE) algorithm for improving motion estimation performance of SinMod on tagged cardiac MR images without known tagging parameters.

    PubMed

    Liu, Hong; Wang, Jie; Xu, Xiangyang; Song, Enmin; Wang, Qian; Jin, Renchao; Hung, Chih-Cheng; Fei, Baowei

    2014-11-01

    A robust and accurate center-frequency (CF) estimation (RACE) algorithm for improving the performance of the local sine-wave modeling (SinMod) method, which is a good motion estimation method for tagged cardiac magnetic resonance (MR) images, is proposed in this study. The RACE algorithm can automatically, effectively and efficiently produce a very appropriate CF estimate for the SinMod method, under the circumstance that the specified tagging parameters are unknown, on account of the following two key techniques: (1) the well-known mean-shift algorithm, which can provide accurate and rapid CF estimation; and (2) an original two-direction-combination strategy, which can further enhance the accuracy and robustness of CF estimation. Several other available CF estimation algorithms are included for comparison. Several validation approaches that can work on real data without ground truths are specially designed. Experimental results on in vivo human cardiac data demonstrate the significance of accurate CF estimation for SinMod, and validate the effectiveness of RACE in improving the motion estimation performance of SinMod. Copyright © 2014 Elsevier Inc. All rights reserved.
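
    The mean-shift step that RACE relies on can be illustrated on a one-dimensional toy spectrum: the estimate is repeatedly moved to the power-weighted centroid of the frequencies inside a fixed window until it converges on a mode. A sketch under assumed names and synthetic data (not the authors' code):

```python
import numpy as np

def mean_shift_peak(freqs, power, start, bandwidth, iters=50, tol=1e-6):
    """Locate a spectral peak by weighted mean-shift: repeatedly move the
    estimate to the power-weighted centroid of frequencies within bandwidth."""
    x = float(start)
    for _ in range(iters):
        mask = np.abs(freqs - x) <= bandwidth
        new_x = float(np.average(freqs[mask], weights=power[mask]))
        if abs(new_x - x) < tol:
            break
        x = new_x
    return x

# Toy spectrum: a Gaussian bump centred at 0.25 cycles/pixel
f = np.linspace(0.0, 0.5, 501)
p = np.exp(-((f - 0.25) ** 2) / (2 * 0.02 ** 2))
cf = mean_shift_peak(f, p, start=0.2, bandwidth=0.05)
```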

  18. The impact of individual materials parameters on color temperature reproducibility among phosphor converted LED sources

    NASA Astrophysics Data System (ADS)

    Schweitzer, Susanne; Nemitz, Wolfgang; Sommer, Christian; Hartmann, Paul; Fulmek, Paul; Nicolics, Johann; Pachler, Peter; Hoschopf, Hans; Schrank, Franz; Langer, Gregor; Wenzl, Franz P.

    2014-09-01

    For a systematic approach to improving the white light quality of phosphor converted light-emitting diodes (LEDs) for general lighting applications, it is imperative to get the individual sources of error for color temperature reproducibility under control. In particular, it is essential to understand how the compositional, optical and materials properties of the color conversion element (CCE), which typically consists of phosphor particles embedded in a transparent matrix material, affect the constancy of a desired color temperature of a white LED source. In this contribution we use an LED assembly consisting of an LED die mounted on a printed circuit board (PCB) by chip-on-board technology and a CCE with a glob-top configuration as a model system and discuss the impact of potential sources of color temperature deviation among individual devices. Parameters that are investigated include imprecision in the amount of material deposited, deviations from the target value for the phosphor concentration in the matrix material, deviations from the target value for the particle sizes of the phosphor material, deviations from the target values for the refractive indices of phosphor and matrix material, as well as deviations from the reflectivity of the substrate surface. From these studies, some general conclusions can be drawn as to which of these parameters have the largest impact on color deviation and must be controlled most precisely in a fabrication process with regard to color temperature reproducibility among individual white LED sources.

  19. Supernovae as probes of cosmic parameters: estimating the bias from under-dense lines of sight

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Busti, V.C.; Clarkson, C.; Holanda, R.F.L., E-mail: vinicius.busti@uct.ac.za, E-mail: holanda@uepb.edu.br, E-mail: chris.clarkson@uct.ac.za

    2013-11-01

    Correctly interpreting observations of sources such as type Ia supernovae (SNe Ia) requires knowledge of the power spectrum of matter on AU scales — which is very hard to model accurately. Because under-dense regions account for much of the volume of the universe, light from a typical source probes a mean density significantly below the cosmic mean. The relative sparsity of sources implies that there could be a significant bias when inferring distances of SNe Ia, and consequently a bias in cosmological parameter estimation. While the weak lensing approximation should in principle give the correct prediction for this, linear perturbation theory predicts an effectively infinite variance in the convergence for ultra-narrow beams. We attempt to quantify the effect typically under-dense lines of sight might have on parameter estimation by considering three alternative methods for estimating distances, in addition to the usual weak lensing approximation. We find that in each case this not only increases the errors in the inferred density parameters, but also introduces a bias in the posterior value.

  20. BEYOND ELLIPSE(S): ACCURATELY MODELING THE ISOPHOTAL STRUCTURE OF GALAXIES WITH ISOFIT AND CMODEL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ciambur, B. C., E-mail: bciambur@swin.edu.au

    2015-09-10

    This work introduces a new fitting formalism for isophotes that enables more accurate modeling of galaxies with non-elliptical shapes, such as disk galaxies viewed edge-on or galaxies with X-shaped/peanut bulges. Within this scheme, the angular parameter that defines quasi-elliptical isophotes is transformed from the commonly used, but inappropriate, polar coordinate to the “eccentric anomaly.” This provides a superior description of deviations from ellipticity, better capturing the true isophotal shape. Furthermore, this makes it possible to accurately recover both the surface brightness profile, using the correct azimuthally averaged isophote, and the two-dimensional model of any galaxy: the hitherto ubiquitous, but artificial, cross-like features in residual images are completely removed. The formalism has been implemented into the Image Reduction and Analysis Facility tasks Ellipse and Bmodel to create the new tasks “Isofit” and “Cmodel.” The new tools are demonstrated here with application to five galaxies, chosen to be representative case studies for several areas where this technique makes it possible to gain new scientific insight. Specifically: properly quantifying boxy/disky isophotes via the fourth harmonic order in edge-on galaxies, quantifying X-shaped/peanut bulges, higher-order Fourier moments for modeling bars in disks, and complex isophote shapes. Higher-order (n > 4) harmonics now become meaningful and may correlate with structural properties, as boxiness/diskiness is known to do. This work also illustrates how the accurate construction, and subtraction, of a model from a galaxy image facilitates the identification and recovery of overlapping sources such as globular clusters and the optical counterparts of X-ray sources.

  1. Preliminary Result of Earthquake Source Parameters the Mw 3.4 at 23:22:47 IWST, August 21, 2004, Centre Java, Indonesia Based on MERAMEX Project

    NASA Astrophysics Data System (ADS)

    Laksono, Y. A.; Brotopuspito, K. S.; Suryanto, W.; Widodo; Wardah, R. A.; Rudianto, I.

    2018-03-01

    In order to study the subsurface structure at the Merapi Lawu anomaly (MLA) using forward modelling or full waveform inversion, good earthquake source parameters are needed. The best source parameters come from seismograms with a high signal-to-noise ratio (SNR). In addition, the source must be near the MLA location, and the stations used to determine the parameters must be outside the MLA in order to avoid the anomaly. At first the seismograms were processed with the software SEISAN v10 using a few stations from the MERAMEX project. After we found a hypocentre that matched the criteria, we fine-tuned the source parameters using more stations. Based on seismograms from 21 stations, the source parameters obtained are as follows: the event occurred on August 21, 2004, at 23:22:47 Indonesia western standard time (IWST), epicentre coordinates 7.80°S, 101.34°E, hypocentre depth 47.3 km, dominant frequency f0 = 3.0 Hz, and earthquake magnitude Mw = 3.4.
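
    The moment magnitude quoted above relates to seismic moment through the standard Hanks-Kanamori formula; a small illustrative sketch (not part of the study's processing):

```python
import math

def moment_magnitude(M0):
    """Hanks & Kanamori (1979) moment magnitude from seismic moment (N*m)."""
    return (2.0 / 3.0) * (math.log10(M0) - 9.1)

def moment_from_magnitude(Mw):
    """Inverse relation: seismic moment in N*m for a given Mw."""
    return 10.0 ** (1.5 * Mw + 9.1)

# The Mw 3.4 event in the abstract corresponds to M0 of roughly 1.6e14 N*m
M0 = moment_from_magnitude(3.4)
```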

  2. Impact of fugitive sources and meteorological parameters on vertical distribution of particulate matter over the industrial agglomeration.

    PubMed

    Štrbová, Kristína; Raclavská, Helena; Bílek, Jiří

    2017-12-01

    The aim of the study was to characterize the vertical distribution of particulate matter in an area well known for the highest air pollution levels in Europe. A helium-filled balloon carrying measuring instrumentation was used for vertical observation of air pollution over the fugitive sources in the Moravian-Silesian metropolitan area during spring and summer. Selected meteorological parameters were recorded simultaneously with particulate matter to explore their relationship. Concentrations of particulate matter in the vertical profile were significantly higher in spring than in summer. A significant effect of fugitive sources was observed up to an altitude of ∼255 m (∼45 m above ground) in both seasons. The presence of an inversion layer was observed at an altitude of ∼350 m (120-135 m above ground) at locations with a major traffic load. Both particulate matter concentrations and the number of particles for the selected particle sizes decreased with increasing height. A strong correlation of particulate matter with meteorological parameters was not observed. The study represents the first attempt to assess the vertical profile over fugitive emission sources - old environmental burdens - in an industrial region. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Configuration of electro-optic fire source detection system

    NASA Astrophysics Data System (ADS)

    Fabian, Ram Z.; Steiner, Zeev; Hofman, Nir

    2007-04-01

    The recent fighting activities in various parts of the world have highlighted the need for accurate fire source detection on one hand and fast "sensor to shooter cycle" capabilities on the other. Both needs can be met by the SPOTLITE system, which dramatically enhances the capability to rapidly engage a hostile fire source with a minimum of casualties to friendly forces and innocent bystanders. The modular system design makes it possible to meet each customer's specific requirements and enables excellent future growth and upgrade potential. The design and build of a fire source detection system is governed by sets of requirements issued by the operators. These can be translated into the following design criteria: I) Long range, fast and accurate fire source detection capability. II) Different threat detection and classification capability. III) Threat investigation capability. IV) Fire source data distribution capability (location, direction, video image, voice). V) Man portability. In order to meet these design criteria, an optimized concept was presented and exercised for the SPOTLITE system. Three major modular components were defined: I) Electro-Optical Unit - including FLIR camera, CCD camera, laser range finder and marker. II) Electronic Unit - including system computer and electronics. III) Controller Station Unit - including the HMI of the system. This article discusses the system's component definition and optimization processes, and also shows how SPOTLITE designers successfully managed to introduce excellent solutions for other system parameters.

  4. The Remote Food Photography Method accurately estimates dry powdered foods—the source of calories for many infants

    PubMed Central

    Duhé, Abby F.; Gilmore, L. Anne; Burton, Jeffrey H.; Martin, Corby K.; Redman, Leanne M.

    2016-01-01

    Background: Infant formula is a major source of nutrition for infants, with over half of all infants in the United States consuming infant formula exclusively or in combination with breast milk. The energy in powdered infant formula is derived from the powder and not the water, making it necessary to develop methods that can accurately estimate the amount of powder used prior to reconstitution. Objective: To assess the use of the Remote Food Photography Method (RFPM) to accurately estimate the weight of powdered infant formula before reconstitution among the standard serving sizes. Methods: For each serving size (1-scoop, 2-scoop, 3-scoop, and 4-scoop), a set of seven test bottles and photographs were prepared, including the gram weight of powdered formula recommended by the manufacturer for the respective serving size, three bottles and photographs containing 15%, 10%, and 5% less powdered formula than recommended, and three bottles and photographs containing 5%, 10%, and 15% more powdered formula than recommended (n=28). Ratio estimates of the test photographs as compared to standard photographs were obtained using standard RFPM analysis procedures. The ratio estimates and the United States Department of Agriculture (USDA) data tables were used to generate food and nutrient information to provide the RFPM estimates. Statistical Analyses Performed: Equivalence testing using the two one-sided t-test (TOST) approach was used to determine equivalence between the actual gram weights and the RFPM-estimated weights for all samples, within each serving size, and within under-prepared and over-prepared bottles. Results: For all bottles, the gram weights estimated by the RFPM were within 5% equivalence bounds, with a slight under-estimation of 0.05 g (90% CI [−0.49, 0.40]; p<0.001) and mean percent error ranging between 0.32% and 1.58% among the four serving sizes. Conclusion: The maximum observed mean error was an overestimation of 1.58% of powdered formula by the RFPM under

  5. Monte Carlo Determination of Dosimetric Parameters of a New (125)I Brachytherapy Source According to AAPM TG-43 (U1) Protocol.

    PubMed

    Baghani, Hamid Reza; Lohrabian, Vahid; Aghamiri, Mahmoud Reza; Robatjazi, Mostafa

    2016-03-01

    (125)I is one of the important sources frequently used in brachytherapy. Up to now, several different commercial models of this source type have been introduced into clinical radiation oncology applications. Recently, a new source model, IrSeed-125, has been added to this list. The aim of the present study is to determine the dosimetric parameters of this new source model based on the recommendations of the TG-43 (U1) protocol using Monte Carlo simulation. The dosimetric characteristics of IrSeed-125, including the dose rate constant, radial dose function, 2D anisotropy function and 1D anisotropy function, were determined inside liquid water using the MCNPX code and compared to those of other commercially available iodine sources. The dose rate constant of this new source was found to be 0.983 ± 0.015 cGy h^-1 U^-1, which is in good agreement with the TLD-measured value (0.965 cGy h^-1 U^-1). The 1D anisotropy function values at 3, 5, and 7 cm radial distance were obtained as 0.954, 0.953 and 0.959, respectively. The results of this study showed that the dosimetric characteristics of this new brachytherapy source are comparable with those of other commercially available sources. Furthermore, the simulated parameters were in accordance with the previously measured ones. Therefore, the Monte Carlo calculated dosimetric parameters can be employed to obtain the dose distribution around this new brachytherapy source based on the TG-43 (U1) protocol.
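
    In the TG-43 (U1) formalism, the simulated quantities above combine into a dose rate. A sketch of the point-source approximation, Ddot(r) = Sk Λ (r0/r)^2 g(r) φan(r); the dose rate constant and the 3 cm anisotropy value are from the abstract, while Sk and g(3 cm) below are assumed placeholders:

```python
def dose_rate_point(sk, lam, r, g_r, phi_an, r0=1.0):
    """TG-43 point-source approximation:
    Ddot(r) = sk * lam * (r0 / r)**2 * g_r * phi_an,
    with r and r0 in cm and the result in cGy/h."""
    return sk * lam * (r0 / r) ** 2 * g_r * phi_an

# lam = 0.983 cGy/(h*U) and phi_an(3 cm) = 0.954 are quoted in the abstract;
# sk = 1 U and g_r = 0.70 at 3 cm are placeholder values for illustration.
d = dose_rate_point(sk=1.0, lam=0.983, r=3.0, g_r=0.70, phi_an=0.954)
```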

  6. Bayesian estimation of source parameters and associated Coulomb failure stress changes for the 2005 Fukuoka (Japan) Earthquake

    NASA Astrophysics Data System (ADS)

    Dutta, Rishabh; Jónsson, Sigurjón; Wang, Teng; Vasyura-Bathke, Hannes

    2018-04-01

    Several researchers have studied the source parameters of the 2005 Fukuoka (northwestern Kyushu Island, Japan) earthquake (Mw 6.6) using teleseismic, strong motion and geodetic data. However, in all previous studies, errors of the estimated fault solutions have been neglected, making it impossible to assess the reliability of the reported solutions. We use Bayesian inference to estimate the location, geometry and slip parameters of the fault and their uncertainties using Interferometric Synthetic Aperture Radar and Global Positioning System data. The offshore location of the earthquake makes the fault parameter estimation challenging, with geodetic data coverage mostly to the southeast of the earthquake. To constrain the fault parameters, we use a priori constraints on the magnitude of the earthquake and the location of the fault with respect to the aftershock distribution and find that the estimated fault slip ranges from 1.5 to 2.5 m with decreasing probability. The marginal distributions of the source parameters show that the location of the western end of the fault is poorly constrained by the data whereas that of the eastern end, located closer to the shore, is better resolved. We propagate the uncertainties of the fault model and calculate the variability of Coulomb failure stress changes for the nearby Kego fault, located directly below Fukuoka city, showing that the main shock increased stress on the fault and brought it closer to failure.

  7. A Numerical Study of the Non-Ideal Behavior, Parameters, and Novel Applications of an Electrothermal Plasma Source

    NASA Astrophysics Data System (ADS)

    Winfrey, A. Leigh

    Electrothermal plasma sources have numerous applications including hypervelocity launchers, fusion reactor pellet injection, and space propulsion systems. The time evolution of important plasma parameters at the source exit is important in determining the suitability of the source for different applications. In this study a capillary discharge code has been modified to incorporate non-ideal behavior by using an exact analytical model for the Coulomb logarithm in the plasma electrical conductivity formula. Actual discharge currents from electrothermal plasma experiments were used, and code results for both ideal and non-ideal plasma models were compared to experimental data, specifically the ablated mass from the capillary and the electrical conductivity as measured from the discharge current and the voltage. Electrothermal plasma sources operating in the ablation-controlled arc regime use discharge currents with pulse lengths between 100 μs and 1 ms. Faster, longer, or extended flat-top pulses can also be generated to satisfy various applications of ET sources. Extension of the peak current for up to an additional 1000 μs was tested. Calculations for non-ideal and ideal plasma models show that extended flat-top pulses produce more ablated mass, which scales linearly with increased pulse length while other parameters remain almost constant. A new configuration of the PIPE source has been proposed in order to investigate the formation of plasmas from mixed materials. The electrothermal segmented plasma source can be used for studies related to surface coatings, surface modification, ion implantation, materials synthesis, and the physics of complex mixed plasmas. This source is a capillary discharge where the ablation liner is made from segments of different materials instead of a single sleeve. This system should allow for the modeling and characterization of the growth plasma as it provides all materials needed for fabrication through the same method. An

  8. Localization of transient gravitational wave sources: beyond triangulation

    NASA Astrophysics Data System (ADS)

    Fairhurst, Stephen

    2018-05-01

    Rapid, accurate localization of gravitational wave transient events has proved critical to successful electromagnetic follow-up. In previous papers we have shown that localization estimates can be obtained through triangulation based on timing information at the detector sites. In practice, detailed parameter estimation routines use additional information and provide better localization than is possible based on timing information alone. In this paper, we extend the timing-based localization approximation to incorporate consistency of the observed signals with two gravitational wave polarizations, and an astrophysically motivated distribution of sources. Both of these provide significant improvements to source localization, allowing many sources to be restricted to a single sky region, with an area 40% smaller than predicted by timing information alone. Furthermore, we show that the vast majority of sources will be reconstructed as circularly polarized or, equivalently, indistinguishable from face-on.
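
    The timing triangulation referred to above constrains a source to a ring on the sky: an arrival-time difference between two sites fixes the angle between the source direction and the detector baseline, via cos θ = c Δt / d. A minimal sketch (the baseline length is an approximate assumed value, roughly the Hanford-Livingston separation):

```python
import math

C = 299792458.0  # speed of light, m/s

def delay_to_angle(dt, baseline):
    """Angle (degrees) between the source direction and a detector baseline
    of length `baseline` (m), implied by an arrival-time difference dt (s)."""
    cos_theta = max(-1.0, min(1.0, C * dt / baseline))
    return math.degrees(math.acos(cos_theta))

# A 5 ms delay over an assumed ~3000 km baseline
theta = delay_to_angle(0.005, 3.0e6)
```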

  9. Effect of source tuning parameters on the plasma potential of heavy ions in the 18 GHz high temperature superconducting electron cyclotron resonance ion source.

    PubMed

    Rodrigues, G; Baskaran, R; Kukrety, S; Mathur, Y; Kumar, Sarvesh; Mandal, A; Kanjilal, D; Roy, A

    2012-03-01

    Plasma potentials for various heavy ions have been measured using the retarding field technique in the 18 GHz high temperature superconducting ECR ion source, PKDELIS [C. Bieth, S. Kantas, P. Sortais, D. Kanjilal, G. Rodrigues, S. Milward, S. Harrison, and R. McMahon, Nucl. Instrum. Methods B 235, 498 (2005); D. Kanjilal, G. Rodrigues, P. Kumar, A. Mandal, A. Roy, C. Bieth, S. Kantas, and P. Sortais, Rev. Sci. Instrum. 77, 03A317 (2006)]. The ion beam extracted from the source is decelerated close to the location of a mesh which is polarized to the source potential, and beams having different plasma potentials are measured on a Faraday cup located downstream of the mesh. The influence of various source parameters, viz., RF power, gas pressure, magnetic field, negative dc bias, and gas mixing, on the plasma potential is studied. The study helped to find an upper limit of the energy spread of the heavy ions, which can influence the design of the longitudinal optics of the high current injector being developed at the Inter University Accelerator Centre. It is observed that the plasma potentials decrease with increasing charge state, and a mass effect is clearly observed for ions with similar operating gas pressures. In the case of gas mixing, it is observed that the plasma potential is minimized at an optimum value of the pressure of the mixing gas, and the mean charge state is maximized at this value. Details of the measurements carried out as a function of various source parameters and their impact on the longitudinal optics are presented.

  10. Contrasts between source parameters of M ≥ 5.5 earthquakes in northern Baja California and southern California

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doser, D.I.

    1993-04-01

    Source parameters determined from the body waveform modeling of large (M ≥ 5.5) historic earthquakes occurring between 1915 and 1956 along the San Jacinto and Imperial fault zones of southern California and the Cerro Prieto, Tres Hermanas and San Miguel fault zones of Baja California have been combined with information from post-1960's events to study regional variations in source parameters. The results suggest that large earthquakes along the relatively young San Miguel and Tres Hermanas fault zones have complex rupture histories, small source dimensions (< 25 km), high stress drops (60 bar average), and a high incidence of foreshock activity. This may be a reflection of the rough, highly segmented nature of the young faults. In contrast, Imperial-Cerro Prieto events of similar magnitude have low stress drops (16 bar average) and longer rupture lengths (42 km average), reflecting rupture along older, smoother fault planes. Events along the San Jacinto fault zone appear to lie in between these two groups. These results suggest a relationship between the structural and seismological properties of strike-slip faults that should be considered during seismic risk studies.

  11. PULSED ION SOURCE

    DOEpatents

    Martina, E.F.

    1958-10-14

    An improved pulsed ion source is presented, of the type in which the gas to be ionized is released within the source by momentary heating of an electrode occluded with the gas. The other details of the ion source construction include an electron-emitting filament and a positive reference grid, between which an electron discharge is set up, and electrode means for withdrawing the ions from the source. Due to the location of the gas source behind the electron discharge region, and the positioning of the vacuum exhaust system on the opposite side of the discharge, the released gas is drawn into the electron discharge and ionized in accurately controlled amounts. Consequently, the output pulses of the ion source may be accurately controlled.

  12. Predicting dense nonaqueous phase liquid dissolution using a simplified source depletion model parameterized with partitioning tracers

    NASA Astrophysics Data System (ADS)

    Basu, Nandita B.; Fure, Adrian D.; Jawitz, James W.

    2008-07-01

    Simulations of nonpartitioning and partitioning tracer tests were used to parameterize the equilibrium stream tube model (ESM) that predicts the dissolution dynamics of dense nonaqueous phase liquids (DNAPLs) as a function of the Lagrangian properties of DNAPL source zones. Lagrangian, or stream-tube-based, approaches characterize source zones with as few as two trajectory-integrated parameters, in contrast to the potentially thousands of parameters required to describe the point-by-point variability in permeability and DNAPL in traditional Eulerian modeling approaches. The spill and subsequent dissolution of DNAPLs were simulated in two-dimensional domains having different hydrologic characteristics (variance of the log conductivity field = 0.2, 1, and 3) using the multiphase flow and transport simulator UTCHEM. Nonpartitioning and partitioning tracers were used to characterize the Lagrangian properties (travel time and trajectory-integrated DNAPL content statistics) of DNAPL source zones, which were in turn shown to be sufficient for accurate prediction of source dissolution behavior using the ESM throughout the relatively broad range of hydraulic conductivity variances tested here. The results were found to be relatively insensitive to travel time variability, suggesting that dissolution could be accurately predicted even if the travel time variance was only coarsely estimated. Estimation of the ESM parameters was also demonstrated using an approximate technique based on Eulerian data in the absence of tracer data; however, determining the minimum amount of such data required remains for future work. Finally, the stream tube model was shown to be a more unique predictor of dissolution behavior than approaches based on the ganglia-to-pool model for source zone characterization.

  13. Polynomial Fitting of DT-MRI Fiber Tracts Allows Accurate Estimation of Muscle Architectural Parameters

    PubMed Central

    Damon, Bruce M.; Heemskerk, Anneriet M.; Ding, Zhaohua

    2012-01-01

    Fiber curvature is a functionally significant muscle structural property, but its estimation from diffusion-tensor MRI fiber tracking data may be confounded by noise. The purpose of this study was to investigate the use of polynomial fitting of fiber tracts for improving the accuracy and precision of fiber curvature (κ) measurements. Simulated image datasets were created in order to provide data with known values for κ and pennation angle (θ). Simulations were designed to test the effects of increasing inherent fiber curvature (3.8, 7.9, 11.8, and 15.3 m−1), signal-to-noise ratio (50, 75, 100, and 150), and voxel geometry (13.8 and 27.0 mm3 voxel volume with isotropic resolution; 13.5 mm3 volume with an aspect ratio of 4.0) on κ and θ measurements. In the originally reconstructed tracts, θ was estimated accurately under most curvature and all imaging conditions studied; however, the estimates of κ were imprecise and inaccurate. Fitting the tracts to 2nd order polynomial functions provided accurate and precise estimates of κ for all conditions except very high curvature (κ=15.3 m−1), while preserving the accuracy of the θ estimates. Similarly, polynomial fitting of in vivo fiber tracking data reduced the κ values of fitted tracts from those of unfitted tracts and did not change the θ values. Polynomial fitting of fiber tracts allows accurate estimation of physiologically reasonable values of κ, while preserving the accuracy of θ estimation. PMID:22503094
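The fit-then-differentiate step described above can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the function name, the chord-length parameterization, and the evaluation at the tract midpoint are assumptions; only the second-order polynomial fit and the standard curvature formula κ = |r′ × r″| / |r′|³ come from the abstract.

```python
import numpy as np

def tract_curvature(points):
    """Fit each coordinate of a fiber tract to a 2nd-order polynomial in a
    chord-length parameter, then evaluate curvature at the tract midpoint."""
    pts = np.asarray(points, dtype=float)
    # chord-length parameterization approximates arc length
    t = np.concatenate([[0.0],
                        np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))])
    coeffs = [np.polyfit(t, pts[:, k], 2) for k in range(3)]  # x, y, z
    tm = 0.5 * t[-1]
    d1 = np.array([np.polyval(np.polyder(c, 1), tm) for c in coeffs])
    d2 = np.array([np.polyval(np.polyder(c, 2), tm) for c in coeffs])
    # kappa = |r' x r''| / |r'|^3 for a parametric space curve
    return np.linalg.norm(np.cross(d1, d2)) / np.linalg.norm(d1) ** 3

```

On a clean circular arc of radius R this recovers κ ≈ 1/R, which is the sense in which the quadratic fit suppresses noise-driven curvature error without biasing the underlying geometry.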

  14. Accurate Ray-tracing of Realistic Neutron Star Atmospheres for Constraining Their Parameters

    NASA Astrophysics Data System (ADS)

    Vincent, Frederic H.; Bejger, Michał; Różańska, Agata; Straub, Odele; Paumard, Thibaut; Fortin, Morgane; Madej, Jerzy; Majczyna, Agnieszka; Gourgoulhon, Eric; Haensel, Paweł; Zdunik, Leszek; Beldycki, Bartosz

    2018-03-01

    Thermal-dominated X-ray spectra of neutron stars in quiescent, transient X-ray binaries and neutron stars that undergo thermonuclear bursts are sensitive to mass and radius. The mass–radius relation of neutron stars depends on the equation of state (EoS) that governs their interior. Constraining this relation accurately is therefore of fundamental importance to understand the nature of dense matter. In this context, we introduce a pipeline to calculate realistic model spectra of rotating neutron stars with hydrogen and helium atmospheres. An arbitrarily fast-rotating neutron star with a given EoS generates the spacetime in which the atmosphere emits radiation. We use the LORENE/NROTSTAR code to compute the spacetime numerically and the ATM24 code to solve the radiative transfer equations self-consistently. Emerging specific intensity spectra are then ray-traced through the neutron star’s spacetime from the atmosphere to a distant observer with the GYOTO code. Here, we present and test our fully relativistic numerical pipeline. To discuss and illustrate the importance of realistic atmosphere models, we compare our model spectra to simpler models such as the commonly used isotropic color-corrected blackbody emission. We highlight the importance of considering realistic model-atmosphere spectra together with relativistic ray-tracing to obtain accurate predictions. We also emphasize the crucial impact of the star’s rotation on the observables. Finally, we close a controversy that has been ongoing in the literature in recent years regarding the validity of the ATM24 code.

  15. Accurate estimation of sigma(exp 0) using AIRSAR data

    NASA Technical Reports Server (NTRS)

    Holecz, Francesco; Rignot, Eric

    1995-01-01

    During recent years signature analysis, classification, and modeling of Synthetic Aperture Radar (SAR) data, as well as estimation of geophysical parameters from SAR data, have received a great deal of interest. An important requirement for the quantitative use of SAR data is the accurate estimation of the backscattering coefficient sigma(exp 0). In terrain with relief variations, radar signals are distorted due to the projection of the scene topography into the slant range-Doppler plane. The effect of these variations is to change the physical size of the scattering area, leading to errors in the radar backscatter values and incidence angle. For this reason the local incidence angle, derived from sensor position and Digital Elevation Model (DEM) data, must always be considered. Especially in the airborne case, the antenna gain pattern can be an additional source of radiometric error, because the radar look angle is not known precisely as a result of aircraft motions and the local surface topography. Consequently, radiometric distortions due to the antenna gain pattern must also be corrected for each resolution cell, by taking into account aircraft displacements (position and attitude) and the position of the backscatter element, defined by the DEM data. In this paper, a method to derive an accurate estimate of the backscattering coefficient using NASA/JPL AIRSAR data is presented. The results are evaluated in terms of geometric accuracy, radiometric variations of sigma(exp 0), and precision of the estimated forest biomass.

  16. Extragalactic radio sources - Accurate positions from very-long-baseline interferometry observations

    NASA Technical Reports Server (NTRS)

    Rogers, A. E. E.; Counselman, C. C., III; Hinteregger, H. F.; Knight, C. A.; Robertson, D. S.; Shapiro, I. I.; Whitney, A. R.; Clark, T. A.

    1973-01-01

    Relative positions for 12 extragalactic radio sources have been determined via wide-band very-long-baseline interferometry (wavelength of about 3.8 cm). The standard error, based on consistency between results from widely separated periods of observation, appears to be no more than 0.1 sec for each coordinate of the seven sources that were well observed during two or more periods. The uncertainties in the coordinates determined for the other five sources are larger, but in no case exceed 0.5 sec.

  17. Effects of sound source location and direction on acoustic parameters in Japanese churches.

    PubMed

    Soeta, Yoshiharu; Ito, Ken; Shimokura, Ryota; Sato, Shin-ichi; Ohsawa, Tomohiro; Ando, Yoichi

    2012-02-01

    In 1965, the Catholic Church liturgy changed to allow priests to face the congregation. Whereas Church tradition, teaching, and participation have been much discussed with respect to priest orientation at Mass, the acoustical changes in this regard have not yet been examined scientifically. To discuss the acoustics desired within churches, it is necessary to know the acoustical characteristics appropriate for each phase of the liturgy. In this study, acoustic measurements were taken at the source locations and directions used in both the old and new liturgies as performed in Japanese churches. A directional loudspeaker was used as the source to provide vocal and organ acoustic fields, and impulse responses were measured. Various acoustical parameters such as reverberation time and early decay time were analyzed. The speech transmission index was higher for the new Catholic liturgy, suggesting that the change in liturgy has improved speech intelligibility. Moreover, the interaural cross-correlation coefficient and early lateral energy fraction were higher and lower, respectively, suggesting that the change in liturgy has made the apparent source width smaller. © 2012 Acoustical Society of America

  18. Preliminary result of rapid solenoid for controlling heavy-ion beam parameters of laser ion source

    DOE PAGES

    Okamura, M.; Sekine, M.; Ikeda, S.; ...

    2015-03-13

    To realize a heavy-ion inertial fusion driver, we have studied the possibility of using a laser ion source (LIS). A LIS can provide high-current, high-brightness heavy-ion beams; however, it has been difficult to manipulate the beam parameters. To overcome this issue, we employed a pulsed solenoid in the plasma drift section and investigated the effect of the solenoid field on singly charged iron beams. The rapidly ramping magnetic field could enhance a limited time slice of the current, and the beam emittance changed accordingly. This approach may also be useful for realizing an ion source for a HIF power plant.

  19. Accurate source location from waves scattered by surface topography: Applications to the Nevada and North Korean test sites

    NASA Astrophysics Data System (ADS)

    Shen, Y.; Wang, N.; Bao, X.; Flinders, A. F.

    2016-12-01

    Scattered waves generated near the source contain energy converted from the near-field waves to the far-field propagating waves, which can be used to achieve location accuracy beyond the diffraction limit. In this work, we apply a novel full-wave location method that combines a grid-search algorithm with a 3D Green's tensor database to locate the Non-Proliferation Experiment (NPE) at the Nevada test site and the North Korean nuclear tests. We use the first arrivals (Pn/Pg) and their immediate codas, which are likely dominated by waves scattered at the surface topography near the source, to determine the source location. We investigate seismograms in the 1.0-2.0 Hz frequency band to reduce noise in the data and highlight topography-scattered waves. High-resolution topographic models constructed from 10 and 90 m grids are used for Nevada and North Korea, respectively. The reference velocity model is based on CRUST 1.0. We use the collocated-grid finite difference method on curvilinear grids to calculate the strain Green's tensor and obtain synthetic waveforms using source-receiver reciprocity. The `best' solution is found based on the least-squares misfit between the observed and synthetic waveforms. To suppress random noise, an optimal weighting method for three-component seismograms is applied in the misfit calculation. Our results show that the scattered waves are crucial in improving resolution and allow us to obtain accurate solutions with a small number of stations. Since the scattered waves depend on topography, which is known at the wavelengths of regional seismic waves, our approach yields absolute, instead of relative, source locations. We compare our solutions with those of USGS and other studies. Moreover, we use differential waveforms to locate pairs of the North Korean tests from the years 2006, 2009, 2013 and 2016 to further reduce the effects of unmodeled heterogeneities and errors in the reference velocity model.
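
The grid-search step reduces to scoring each trial location's precomputed synthetics against the data. A minimal sketch follows; the function name and the dictionary-of-synthetics layout are assumptions, and the actual method uses 3D strain Green's tensor databases and an optimal three-component weighting rather than this toy misfit:

```python
import numpy as np

def locate_source(obs, synth_bank, weights):
    """Pick the grid point whose synthetic waveforms minimize the weighted
    least-squares misfit to the observed waveforms.
    obs: (n_stations, n_samples); synth_bank: {grid_point: same-shape array};
    weights: per-station weights (e.g., inverse noise variance)."""
    def misfit(synth):
        return float(np.sum(weights[:, None] * (obs - synth) ** 2))
    return min(synth_bank, key=lambda g: misfit(synth_bank[g]))
```

In practice the synthetics for every grid point come cheaply from source-receiver reciprocity, so only the misfit evaluation loops over the grid.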

  20. Empirical Scaling Relations of Source Parameters For The Earthquake Swarm 2000 At Novy Kostel (vogtland/nw-bohemia)

    NASA Astrophysics Data System (ADS)

    Heuer, B.; Plenefisch, T.; Seidl, D.; Klinge, K.

    Investigations on the interdependence of different source parameters are an important task to gain more insight into the mechanics and dynamics of earthquake rupture, to model source processes, and to make predictions for ground motion at the surface. These interdependencies, providing so-called scaling relations, have often been investigated for large earthquakes. However, they are not commonly determined for micro-earthquakes and swarm earthquakes, especially for those of the Vogtland/NW-Bohemia region. For the most recent swarm in the Vogtland/NW-Bohemia, which took place between August and December 2000 near Novy Kostel (Czech Republic), we systematically determine the most important source parameters such as energy E0, seismic moment M0, local magnitude ML, fault length L, corner frequency fc and rise time τr, and build their interdependencies. The swarm of 2000 is well suited for such investigations since it covers a large magnitude interval (1.5 ≤ ML ≤ 3.7) and there are also observations in the near field at several stations. In the present paper we mostly concentrate on two near-field stations with hypocentral distances between 11 and 13 km, namely WERN (Wernitzgrün) and SBG (Schönberg). Our data processing includes restitution to true ground displacement and rotation into the ray-based principal co-ordinate system, which we determine by the covariance matrix of the P- and S-displacement, respectively. Data preparation, determination of the distinct source parameters, as well as statistical interpretation of the results will be presented by example. The results will be discussed with respect to temporal variations in the swarm activity (the swarm consists of eight distinct sub-episodes) and already existing focal mechanisms.

  1. Reconstructing gravitational wave source parameters via direct comparisons to numerical relativity II: Applications

    NASA Astrophysics Data System (ADS)

    O'Shaughnessy, Richard; Lange, Jacob; Healy, James; Lousto, Carlos; Shoemaker, Deirdre; Lovelace, Geoffrey; Scheel, Mark

    2016-03-01

    In this talk, we apply a procedure to reconstruct the parameters of sufficiently massive coalescing compact binaries via direct comparison with numerical relativity simulations. We illustrate how to use only comparisons between synthetic data and these simulations to reconstruct properties of a synthetic candidate source. We demonstrate using selected examples that we can reconstruct posterior distributions obtained by other Bayesian methods with our sparse grid. We describe how followup simulations can corroborate and improve our understanding of a candidate signal.

  2. Source parameters and tectonic interpretation of recent earthquakes (1995 1997) in the Pannonian basin

    NASA Astrophysics Data System (ADS)

    Badawy, Ahmed; Horváth, Frank; Tóth, László

    2001-01-01

    From January 1995 to December 1997, about 74 earthquakes were located in the Pannonian basin and digitally recorded by a recently established network of seismological stations in Hungary. On reviewing the notable events, about 12 earthquakes were reported as felt, with maximum intensity varying between 4 and 6 MSK. The dynamic source parameters of these earthquakes have been derived from P-wave displacement spectra. The displacement source spectra obtained are characterised by relatively small values of corner frequency (f0), ranging between 2.5 and 10 Hz. The seismic moments range from 1.48×10^20 to 1.3×10^23 dyne·cm, stress drops from 0.25 to 76.75 bar, fault lengths from 0.42 to 1.7 km and relative displacements from 0.05 to 15.35 cm. The estimated source parameters are in good agreement with the scaling law for small earthquakes. The small values of stress drop in the studied earthquakes can be attributed to the low strength of crustal materials in the Pannonian basin. However, the values of stress drop do not differ between earthquakes with thrust and normal faulting focal mechanism solutions. It can be speculated that an increase of the seismic activity in the Pannonian basin can be expected in the long run, because extensional development has ceased and structural inversion is in progress. Seismic hazard assessment is a delicate job due to the inadequate knowledge of the seismo-active faults, particularly in the interior part of the Pannonian basin.
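
The spectral parameters quoted above are linked by the standard Brune (1970) model: the moment comes from the low-frequency plateau, the source radius from the corner frequency, and the stress drop from both. The sketch below uses those textbook relations in the abstract's CGS units; the shear-wave speed is an assumed input, not a value from the paper:

```python
import math

def brune_source_params(m0_dyne_cm, f0_hz, beta_km_s=3.5):
    """Brune-model source radius and static stress drop (CGS units).
    r = 2.34 * beta / (2*pi*f0);  delta_sigma = (7/16) * M0 / r**3."""
    beta_cm_s = beta_km_s * 1e5
    r_cm = 2.34 * beta_cm_s / (2.0 * math.pi * f0_hz)
    stress_bar = (7.0 / 16.0) * m0_dyne_cm / r_cm ** 3 / 1e6  # 1 bar = 1e6 dyne/cm^2
    return r_cm / 1e5, stress_bar  # (source radius in km, stress drop in bar)
```

For the moments and corner frequencies quoted in the abstract, this relation reproduces the reported sub-kilometre fault dimensions and bar-level stress drops.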

  3. Effect of source tuning parameters on the plasma potential of heavy ions in the 18 GHz high temperature superconducting electron cyclotron resonance ion source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rodrigues, G.; Mathur, Y.; Kumar, Sarvesh

    2012-03-15

    Plasma potentials for various heavy ions have been measured using the retarding field technique in the 18 GHz high temperature superconducting ECR ion source, PKDELIS [C. Bieth, S. Kantas, P. Sortais, D. Kanjilal, G. Rodrigues, S. Milward, S. Harrison, and R. McMahon, Nucl. Instrum. Methods B 235, 498 (2005); D. Kanjilal, G. Rodrigues, P. Kumar, A. Mandal, A. Roy, C. Bieth, S. Kantas, and P. Sortais, Rev. Sci. Instrum. 77, 03A317 (2006)]. The ion beam extracted from the source is decelerated close to the location of a mesh which is polarized to the source potential, and beams having different plasma potentials are measured on a Faraday cup located downstream of the mesh. The influence of various source parameters, viz., RF power, gas pressure, magnetic field, negative dc bias, and gas mixing on the plasma potential is studied. The study helped to find an upper limit of the energy spread of the heavy ions, which can influence the design of the longitudinal optics of the high current injector being developed at the Inter University Accelerator Centre. It is observed that the plasma potentials decrease for increasing charge states, and a mass effect is clearly observed for ions with similar operating gas pressures. In the case of gas mixing, it is observed that the plasma potential is minimized at an optimum value of the gas pressure of the mixing gas, and the mean charge state is maximized at this value. Details of the measurements carried out as a function of various source parameters and their impact on the longitudinal optics are presented.

  4. An improved method to estimate reflectance parameters for high dynamic range imaging

    NASA Astrophysics Data System (ADS)

    Li, Shiying; Deguchi, Koichiro; Li, Renfa; Manabe, Yoshitsugu; Chihara, Kunihiro

    2008-01-01

    Two methods are described to accurately estimate diffuse and specular reflectance parameters for colors, gloss intensity and surface roughness, over the dynamic range of the camera used to capture input images. Neither method needs to segment color areas on an image, or to reconstruct a high dynamic range (HDR) image. The second method improves on the first, bypassing the requirement for specific separation of diffuse and specular reflection components. For the latter method, diffuse and specular reflectance parameters are estimated separately, using the least squares method. Reflection values are initially assumed to be diffuse-only reflection components, and are subjected to the least squares method to estimate diffuse reflectance parameters. Specular reflection components, obtained by subtracting the computed diffuse reflection components from reflection values, are then subjected to a logarithmically transformed equation of the Torrance-Sparrow reflection model, and specular reflectance parameters for gloss intensity and surface roughness are finally estimated using the least squares method. Experiments were carried out using both methods, with simulation data at different saturation levels, generated according to the Lambert and Torrance-Sparrow reflection models, and the second method, with spectral images captured by an imaging spectrograph and a moving light source. Our results show that the second method can estimate the diffuse and specular reflectance parameters for colors, gloss intensity and surface roughness more accurately and faster than the first one, so that colors and gloss can be reproduced more efficiently for HDR imaging.
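
The second method's two-step least-squares procedure can be sketched as follows. This is an illustrative reconstruction under stated assumptions: the function name, the angle conventions, and the rule for choosing which samples enter each step are mine, not the paper's; only the overall scheme (fit diffuse first, then fit a log-transformed Torrance-Sparrow lobe to the residual) follows the abstract.

```python
import numpy as np

def fit_reflectance(I, cos_i, cos_r, alpha):
    """Two-step least-squares fit: Lambertian diffuse term first, then a
    log-transformed Torrance-Sparrow specular lobe on the residual.
    alpha: angle between the surface normal and the half-vector (radians)."""
    # Step 1: diffuse albedo from samples far from the specular peak,
    # where the specular lobe is negligible: I ~ kd * cos_i
    far = alpha > 0.7 * alpha.max()
    kd = float(cos_i[far] @ I[far] / (cos_i[far] @ cos_i[far]))
    # Step 2: residual specular component, Torrance-Sparrow lobe
    #   I_s = (ks / cos_r) * exp(-alpha**2 / (2 * sigma**2))
    # log-transform -> linear model: ln(I_s * cos_r) = ln(ks) - alpha**2/(2 sigma**2)
    I_s = I - kd * cos_i
    keep = I_s > 0.05 * I_s.max()          # drop near-zero residuals before log
    y = np.log(I_s[keep] * cos_r[keep])
    A = np.column_stack([np.ones(keep.sum()), -alpha[keep] ** 2])
    (ln_ks, c), *_ = np.linalg.lstsq(A, y, rcond=None)
    return kd, float(np.exp(ln_ks)), float(np.sqrt(0.5 / c))  # kd, ks, sigma
```

The log transform is the key move: it turns the nonlinear Gaussian-lobe fit for gloss intensity (ks) and surface roughness (sigma) into an ordinary linear least-squares problem in alpha squared.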

  5. Impact of the vaginal applicator and dummy pellets on the dosimetry parameters of Cs-137 brachytherapy source.

    PubMed

    Sina, Sedigheh; Faghihi, Reza; Meigooni, Ali S; Mehdizadeh, Simin; Mosleh Shirazi, M Amin; Zehtabian, Mehdi

    2011-05-19

    In this study, the dose rate distribution around a spherical 137Cs pellet source, from a low-dose-rate (LDR) Selectron remote afterloading system used in gynecological brachytherapy, has been determined using experimental and Monte Carlo simulation techniques. Monte Carlo simulations were performed using the MCNP4C code for a single pellet source in water and Plexiglas, and measurements were performed in a Plexiglas phantom using LiF TLD chips. The absolute dose rate distribution and the dosimetric parameters, such as dose rate constant, radial dose function, and anisotropy function, were obtained for a single pellet source. In order to investigate the effect of the applicator and surrounding pellets on the dosimetric parameters of the source, the simulations were repeated for six different arrangements with a single active source and five non-active pellets inside the central metallic tubing of a vaginal cylindrical applicator. In commercial treatment planning systems (TPS), the attenuation effects of the applicator and inactive spacers on total dose are neglected. The results indicate that this effect could lead to overestimation of the calculated F(r,θ) by up to 7% along the longitudinal axis of the applicator, especially beyond the applicator tip. According to the results obtained in this study, in a real treatment situation using a cylindrical vaginal applicator and several active pellets, there will be a large discrepancy between the results of superposition and of Monte Carlo simulations.

  6. Inflatable bladder provides accurate calibration of pressure switch

    NASA Technical Reports Server (NTRS)

    Smith, N. J.

    1965-01-01

    Calibration of a pressure switch is accurately checked by a thin-walled circular bladder. It is placed in the pressure switch and applies force to the switch diaphragm when expanded by an external pressure source. The disturbance to the normal operation of the switch is minimal.

  7. A new lumped-parameter model for flow in unsaturated dual-porosity media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zimmerman, Robert W.; Hadgu, Teklu; Bodvarsson, Gudmundur S.

    A new lumped-parameter approach to simulating unsaturated flow processes in dual-porosity media such as fractured rocks or aggregated soils is presented. Fluid flow between the fracture network and the matrix blocks is described by a non-linear equation that relates the imbibition rate to the local difference in liquid-phase pressure between the fractures and the matrix blocks. Unlike a Warren-Root-type equation, this equation is accurate in both the early and late time regimes. The fracture/matrix interflow equation has been incorporated into an existing unsaturated flow simulator, to serve as a source/sink term for fracture gridblocks. Flow processes are then simulated using only fracture gridblocks in the computational grid. This new lumped-parameter approach has been tested on two problems involving transient flow in fractured/porous media, and compared with simulations performed using explicit discretization of the matrix blocks. The new procedure seems to accurately simulate flow processes in unsaturated fractured rocks, and typically requires an order of magnitude less computational time than do simulations using fully-discretized matrix blocks.

  8. Accurate Modeling Method for Cu Interconnect

    NASA Astrophysics Data System (ADS)

    Yamada, Kenta; Kitahara, Hiroshi; Asai, Yoshihiko; Sakamoto, Hideo; Okada, Norio; Yasuda, Makoto; Oda, Noriaki; Sakurai, Michio; Hiroi, Masayuki; Takewaki, Toshiyuki; Ohnishi, Sadayuki; Iguchi, Manabu; Minda, Hiroyasu; Suzuki, Mieko

    This paper proposes an accurate modeling method for the copper interconnect cross-section in which the width and thickness dependence on layout patterns and density caused by processes (CMP, etching, sputtering, lithography, and so on) is fully incorporated and universally expressed. In addition, we have developed specific test patterns for model parameter extraction, and an efficient extraction flow. We have extracted the model parameters for 0.15μm CMOS using this method and confirmed that the 10% τpd error normally observed with conventional LPE (Layout Parameters Extraction) was completely eliminated. Moreover, it is verified that the model can be applied to more advanced technologies (90nm, 65nm and 55nm CMOS). Since the interconnect delay variations due to the processes constitute a significant part of what has conventionally been treated as random variation, use of the proposed model could enable one to greatly narrow the guardbands required to guarantee a desired yield, thereby facilitating design closure.

  9. A multidisciplinary effort to assign realistic source parameters to models of volcanic ash-cloud transport and dispersion during eruptions

    USGS Publications Warehouse

    Mastin, Larry G.; Guffanti, Marianne C.; Servranckx, R.; Webley, P.; Barsotti, S.; Dean, K.; Durant, A.; Ewert, John W.; Neri, A.; Rose, W.I.; Schneider, David J.; Siebert, L.; Stunder, B.; Swanson, G.; Tupper, A.; Volentik, A.; Waythomas, Christopher F.

    2009-01-01

    During volcanic eruptions, volcanic ash transport and dispersion models (VATDs) are used to forecast the location and movement of ash clouds over hours to days in order to define hazards to aircraft and to communities downwind. Those models use input parameters, called “eruption source parameters”, such as plume height H, mass eruption rate Ṁ, duration D, and the mass fraction m63 of erupted debris finer than about 4ϕ or 63 μm, which can remain in the cloud for many hours or days. Observational constraints on the value of such parameters are frequently unavailable in the first minutes or hours after an eruption is detected. Moreover, observed plume height may change during an eruption, requiring rapid assignment of new parameters. This paper reports on a group effort to improve the accuracy of source parameters used by VATDs in the early hours of an eruption. We do so by first compiling a list of eruptions for which these parameters are well constrained, and then using these data to review and update previously studied parameter relationships. We find that the existing scatter in plots of H versus Ṁ yields an uncertainty within the 50% confidence interval of plus or minus a factor of four in eruption rate for a given plume height. This scatter is not clearly attributable to biases in measurement techniques or to well-recognized processes such as elutriation from pyroclastic flows. Sparse data on total grain-size distribution suggest that the mass fraction of fine debris m63 could vary by nearly two orders of magnitude between small basaltic eruptions (∼ 0.01) and large silicic ones (> 0.5). We classify eleven eruption types: four types each for different sizes of silicic and mafic eruptions; submarine eruptions; “brief” or Vulcanian eruptions; and eruptions that generate co-ignimbrite or co-pyroclastic flow plumes. For each eruption type we assign source parameters. We then assign a characteristic eruption type to each of the world's ∼ 1500 volcanoes.
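
The H-versus-Ṁ relationship reviewed here is commonly written as a power law between plume height and dense-rock volumetric eruption rate. The sketch below uses the widely quoted H = 2.00 V^0.241 form (H in km above the vent, V in m³/s DRE) together with an assumed magma density, and carries the abstract's plus-or-minus factor-of-four uncertainty; the exact coefficients should be taken from the paper itself, not this illustration.

```python
def eruption_rate_from_height(h_km, magma_density=2500.0):
    """Invert a power-law plume-height relation, H = 2.00 * V**0.241
    (H in km, V in m**3/s dense-rock equivalent), to estimate a mass
    eruption rate in kg/s with a factor-of-4 uncertainty band."""
    v = (h_km / 2.00) ** (1.0 / 0.241)   # DRE volumetric rate, m^3/s
    mer = magma_density * v              # mass eruption rate, kg/s
    return mer, (mer / 4.0, mer * 4.0)   # central value and uncertainty band
```

A 10 km plume, for example, maps to a mass eruption rate of order 10^6 kg/s, with the factor-of-four band spanning roughly half an order of magnitude either way.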

  10. The Remote Food Photography Method Accurately Estimates Dry Powdered Foods-The Source of Calories for Many Infants.

    PubMed

    Duhé, Abby F; Gilmore, L Anne; Burton, Jeffrey H; Martin, Corby K; Redman, Leanne M

    2016-07-01

    Infant formula is a major source of nutrition for infants, with more than half of all infants in the United States consuming infant formula exclusively or in combination with breast milk. The energy in infant powdered formula is derived from the powder and not the water, making it necessary to develop methods that can accurately estimate the amount of powder used before reconstitution. Our aim was to assess the use of the Remote Food Photography Method to accurately estimate the weight of infant powdered formula before reconstitution among the standard serving sizes. For each serving size (1 scoop, 2 scoops, 3 scoops, and 4 scoops), a set of seven test bottles and photographs were prepared as follows: the gram weight of powdered formula recommended by the manufacturer for the respective serving size; three bottles and photographs containing 15%, 10%, and 5% less powdered formula than recommended; and three bottles and photographs containing 5%, 10%, and 15% more powdered formula than recommended (n=28). Ratio estimates of the test photographs as compared to standard photographs were obtained using standard Remote Food Photography Method analysis procedures. The ratio estimates and the US Department of Agriculture data tables were used to generate food and nutrient information to provide the Remote Food Photography Method estimates. Equivalence testing using the two one-sided t tests approach was used to determine equivalence between the actual gram weights and the Remote Food Photography Method estimated weights for all samples, within each serving size, and within underprepared and overprepared bottles. For all bottles, the gram weights estimated by the Remote Food Photography Method were within 5% equivalence bounds with a slight underestimation of 0.05 g (90% CI -0.49 to 0.40; P<0.001) and mean percent error ranging between 0.32% and 1.58% among the four serving sizes. The maximum observed mean error was an overestimation of 1.58% of powdered formula by the Remote Food Photography Method.
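
The "two one-sided t tests" (TOST) logic used above can be sketched as follows. This illustration is not the paper's analysis: it substitutes a normal approximation for the t distribution and uses made-up gram differences; only the decision rule (equivalent if the mean difference is significantly inside both bounds) reflects the method named in the abstract.

```python
import math

def tost_equivalence(diffs, low, high, alpha=0.05):
    """Two one-sided tests: declare equivalence if the mean difference is
    significantly greater than `low` AND significantly less than `high`.
    Uses a normal approximation to the t distribution, for brevity."""
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    se = math.sqrt(var / n)
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    p_low = 1.0 - phi((mean - low) / se)    # H0: true mean <= low
    p_high = 1.0 - phi((high - mean) / se)  # H0: true mean >= high
    p = max(p_low, p_high)                  # both must be rejected
    return p, p < alpha
```

Note the inversion relative to an ordinary t test: here a small p-value supports equivalence, because both "too low" and "too high" null hypotheses are rejected.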

  11. Revision of the dosimetric parameters of the CSM11 LDR Cs-137 source.

    PubMed

    Otal, Antonio; Martínez-Fernández, Juan Manuel; Granero, Domingo

    2011-03-01

    The clinical use of brachytherapy sources requires dosimetric data of sufficient quality for the proper application of treatments in clinical practice. It has been found that the published data for the low-dose-rate CSM11 Cs-137 source lack smoothness in some regions because the data are too noisy. The purpose of this study was to recalculate the dosimetric data for this source in order to improve the quality of the existing data of Ballester et al. [1]. To obtain the dose rate distributions, Monte Carlo simulations were performed using the GEANT4 code. A spherical phantom 40 cm in radius, with the Cs-137 source located at its centre, was used. The results from the Monte Carlo simulations were used to derive the AAPM Task Group 43 dosimetric parameters: anisotropy function, radial dose function, air kerma strength, and dose rate constant. The dose rate constant obtained was 1.094 ± 0.002 cGy·h⁻¹·U⁻¹. The newly calculated data agree within experimental uncertainties with the existing data of Ballester et al., but without the statistical noise of that study. The data fulfill all the requirements of the TG-43U1 update and can therefore be used in clinical practice.
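
For context, the TG-43 parameters listed above combine into a single dose-rate equation. The point-source sketch below is illustrative only: the g(r) table is hypothetical, and a real CSM11 calculation would use the line-source geometry function and the anisotropy function F(r,θ) rather than this simplified form.

```python
import numpy as np

def dose_rate_point(r_cm, s_k, dose_rate_const=1.094, g_table=None):
    """TG-43 point-source approximation:
    D(r) = S_K * Lambda * (r0 / r)**2 * g(r), with r0 = 1 cm.
    S_K: air kerma strength (U); Lambda: dose rate constant (cGy/h/U);
    g(r): radial dose function, linearly interpolated from a table."""
    radii, g_vals = g_table
    g = np.interp(r_cm, radii, g_vals)
    return s_k * dose_rate_const * (1.0 / r_cm) ** 2 * g
```

At the reference point (r = 1 cm, θ = 90°) the expression collapses to S_K × Λ, which is exactly how the dose rate constant quoted in the abstract is defined.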

  12. High accurate time system of the Low Latitude Meridian Circle.

    NASA Astrophysics Data System (ADS)

    Yang, Jing; Wang, Feng; Li, Zhiming

    In order to obtain a highly accurate time signal for the Low Latitude Meridian Circle (LLMC), a new GPS-based accurate time system was developed, comprising a GPS receiver, a 1 MHz frequency source, and a self-made clock system. The one-second signal from GPS is used to synchronize the clock system, and the information can be collected automatically by a computer. By using this system, the difficulty of doing without a dedicated time keeper can be overcome.

  13. a Semi-Empirical Topographic Correction Model for Multi-Source Satellite Images

    NASA Astrophysics Data System (ADS)

    Xiao, Sa; Tian, Xinpeng; Liu, Qiang; Wen, Jianguang; Ma, Yushuang; Song, Zhenwei

    2018-04-01

    Topographic correction of surface reflectance in rugged terrain is a prerequisite for the quantitative application of remote sensing in mountainous areas. A physics-based radiative transfer model can be applied to correct the topographic effect and accurately retrieve the reflectance of the slope surface from high-quality satellite imagery such as Landsat 8 OLI. However, as more and more image data become available from a variety of sensors, we sometimes cannot obtain the accurate sensor calibration parameters and atmospheric conditions that a physics-based topographic correction model requires. This paper proposes a semi-empirical atmospheric and topographic correction model for multi-source satellite images without accurate calibration parameters. Based on this model, topographically corrected surface reflectance can be obtained directly from DN data; we tested and verified the model with image data from the Chinese HJ and GF satellites. The results show that the correlation between reflectance and illumination was reduced by almost 85% for the near-infrared bands, and the overall classification accuracy increased by 14% after correction for HJ. The reflectance difference between slopes facing toward and away from the sun was also reduced after correction.
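
A classic semi-empirical scheme of this kind is the C-correction, which needs only the image itself: reflectance is regressed on the cosine of the local solar incidence angle, and the regression constants normalize each pixel to a horizontal surface. The sketch below assumes that form; the paper's own model may differ in detail.

```python
import numpy as np

def c_correction(refl, cos_i, cos_sz):
    """Semi-empirical C-correction: regress reflectance on cos(i)
    (i = local solar incidence angle), set c = intercept / slope, then
    normalize each pixel to a horizontal surface (sz = solar zenith)."""
    slope, intercept = np.polyfit(cos_i, refl, 1)  # refl ~ intercept + slope*cos_i
    c = intercept / slope
    return refl * (cos_sz + c) / (cos_i + c)
```

Because the empirical constant c is estimated from the image itself, no sensor calibration parameters or atmospheric inputs are needed, which is precisely the situation the abstract describes.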

  14. How to obtain accurate resist simulations in very low-k1 era?

    NASA Astrophysics Data System (ADS)

    Chiou, Tsann-Bim; Park, Chan-Ha; Choi, Jae-Seung; Min, Young-Hong; Hansen, Steve; Tseng, Shih-En; Chen, Alek C.; Yim, Donggyu

    2006-03-01

    A procedure for calibrating a resist model iteratively adjusts appropriate parameters until the simulations of the model match the experimental data. The tunable parameters may include the shape of the illuminator, the geometry and transmittance/phase of the mask, light source and scanner-related parameters that affect imaging quality, resist process control and most importantly the physical/chemical factors in the resist model. The resist model can be accurately calibrated by measuring critical dimensions (CD) of a focus-exposure matrix (FEM) and the technique has been demonstrated to be very successful in predicting lithographic performance. However, resist model calibration is more challenging in the low k1 (<0.3) regime because numerous uncertainties, such as mask and resist CD metrology errors, are becoming too large to be ignored. This study demonstrates a resist model calibration procedure for a 0.29 k1 process using a 6% halftone mask containing 2D brickwall patterns. The influence of different scanning electron microscopes (SEM) and their wafer metrology signal analysis algorithms on the accuracy of the resist model is evaluated. As an example of the metrology issue of the resist pattern, the treatment of a sidewall angle is demonstrated for the resist line ends where the contrast is relatively low. Additionally, the mask optical proximity correction (OPC) and corner rounding are considered in the calibration procedure that is based on captured SEM images. Accordingly, the average root-mean-square (RMS) error, which is the difference between simulated and experimental CDs, can be improved by considering the metrological issues. Moreover, a weighting method and a measured CD tolerance are proposed to handle the different CD variations of the various edge points of the wafer resist pattern. After the weighting method is implemented and the CD selection criteria applied, the RMS error can be further suppressed. 
Therefore, the resist CD and process window can

  15. Single source photoplethysmograph transducer for local pulse wave velocity measurement.

    PubMed

    Nabeel, P M; Joseph, Jayaraj; Awasthi, Vartika; Sivaprakasam, Mohanasankar

    2016-08-01

    Cuffless evaluation of arterial blood pressure (BP) using pulse wave velocity (PWV) has received attention over the years. Techniques based on local PWV have greater potential for accurate estimation of BP parameters. In this work, we present the design and experimental validation of a novel single-source photoplethysmograph (PPG) transducer for arterial blood pulse detection and cycle-to-cycle local PWV measurement. The ability of the transducer to continuously measure local PWV was verified using an arterial flow phantom as well as an in-vivo study on 17 volunteers. The single-source PPG transducer could reliably acquire dual blood pulse waveforms along small artery sections less than 28 mm long. The transducer was able to perform repeatable measurements of carotid local PWV on multiple subjects with a maximum beat-to-beat variation of less than 12%. The correlation between measured carotid local PWV and brachial BP parameters was also investigated during the in-vivo study. The study results demonstrate the potential of the proposed single-source PPG transducer for continuous cuffless BP measurement systems.
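    At its core, local PWV measurement is a transit-time estimate over a known sensor separation: PWV = Δx/Δt. A minimal sketch of that computation, assuming two synchronously sampled pulse waveforms and taking the cross-correlation lag as the transit time (the function name and synthetic signals are illustrative, not from the paper):

```python
import numpy as np

def local_pwv(ppg_proximal, ppg_distal, fs, separation_m):
    """Estimate local pulse wave velocity from two PPG waveforms.

    fs: sampling rate (Hz); separation_m: sensor spacing (m).
    The pulse transit time is the lag that maximizes the
    cross-correlation between the two waveforms.
    """
    a = ppg_proximal - np.mean(ppg_proximal)
    b = ppg_distal - np.mean(ppg_distal)
    xcorr = np.correlate(b, a, mode="full")
    lag = np.argmax(xcorr) - (len(a) - 1)   # in samples; > 0 if distal lags
    ptt = lag / fs                           # pulse transit time (s)
    return separation_m / ptt                # PWV (m/s)

# Synthetic check: distal pulse delayed by 4 ms over a 28 mm section -> 7 m/s
fs = 10_000
t = np.arange(0, 1, 1 / fs)
pulse = np.exp(-((t - 0.3) / 0.02) ** 2)
delayed = np.exp(-((t - 0.304) / 0.02) ** 2)
print(round(local_pwv(pulse, delayed, fs, 0.028), 2))  # → 7.0
```

    With beat-by-beat segmentation, the same computation yields the cycle-to-cycle local PWV values the abstract describes.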

  16. Accurate parameters of the oldest known rocky-exoplanet hosting system: Kepler-10 revisited

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fogtmann-Schulz, Alexandra; Hinrup, Brian; Van Eylen, Vincent

    2014-02-01

    Since the discovery of Kepler-10, the system has received considerable interest because it contains a small, rocky planet which orbits the star in less than a day. The system's parameters, announced by the Kepler team and subsequently used in further research, were based on only five months of data. We have reanalyzed this system using the full span of 29 months of Kepler photometric data, and obtained improved information about its star and the planets. A detailed asteroseismic analysis of the extended time series provides a significant improvement on the stellar parameters: not only can we state that Kepler-10 is the oldest known rocky-planet-harboring system at 10.41 ± 1.36 Gyr, but these parameters combined with improved planetary parameters from new transit fits give us the radius of Kepler-10b to within just 125 km. A new analysis of the full planetary phase curve leads to new estimates of the planetary temperature and albedo, which remain degenerate in the Kepler band. Our modeling suggests that the flux level during the occultation is slightly lower than at the transit wings, which would imply that the nightside of this planet has a non-negligible temperature.

  17. Determination of well flat band condition in thin film FDSOI transistors using C-V measurement for accurate parameter extraction

    NASA Astrophysics Data System (ADS)

    Mohamad, B.; Leroux, C.; Reimbold, G.; Ghibaudo, G.

    2018-01-01

    For advanced gate stacks, the effective work function (WFeff) and equivalent oxide thickness (EOT) are fundamental parameters for technology optimization. On FDSOI transistors, and contrary to bulk technologies, while EOT can still be extracted at strong inversion from the typical gate-to-channel capacitance (Cgc), this is no longer the case for WFeff due to the disappearance of an observable flat band condition in the capacitance characteristics. In this work, a new experimental method based on the Cbg(VBG) characteristic is proposed in order to extract the well flat band condition (VFB, W). This characteristic enables an accurate and direct evaluation of WFeff. Moreover, using the gate oxide thickness (tfox) and buried oxide thickness (tbox) previously extracted from the typical capacitance characteristics (Cgc and Cbc), it allows the extraction of the channel thickness (tch). Furthermore, the measurement of the well flat band condition on Cbg(VBG) characteristics for two different channels, Si and SiGe, also proves the existence of a dipole at the SiGe/SiO2 interface.

  18. Ecological prognosis near intensive acoustic sources

    NASA Astrophysics Data System (ADS)

    Kostarev, Stanislav A.; Makhortykh, Sergey A.; Rybak, Samuil A.

    2002-11-01

    The problem of wave-field excitation in the ground by a quasi-periodic source, placed on the ground surface or at some depth in the soil, is investigated. The ecological situation in this case is determined in many respects by the quality of the forecast of the induced vibrations and noise. In the present work the distributed source is modeled by a set of statistically linked compact sources on the surface or in the ground. Changes of the medium's parameters along an axis and horizontal heterogeneity of the environment are taken into account. Both analytical and numerical approaches are developed. The latter are included in the software package VibraCalc, which calculates the distribution of the elastic wave field in the ground from quasilinear sources. Accurate evaluation of vibration levels in buildings from high-intensity underground sources is accomplished by modeling wave propagation in dissipative inhomogeneous elastic media. The model takes into account both bulk (longitudinal and shear) and surface Rayleigh waves. To verify the approach, a series of measurements was carried out near the experimental section of the monorail road designed in Moscow. Both calculation and measurement results are presented in the paper.

  19. Ecological prognosis near intensive acoustic sources

    NASA Astrophysics Data System (ADS)

    Kostarev, Stanislav A.; Makhortykh, Sergey A.; Rybak, Samuil A.

    2003-04-01

    The problem of wave-field excitation in the ground by a quasi-periodic source, placed on the ground surface or at some depth in the soil, is investigated. The ecological situation in this case is determined in many respects by the quality of the forecast of the induced vibrations and noise. In the present work the distributed source is modeled by a set of statistically linked compact sources on the surface or in the ground. Changes of the medium's parameters along an axis and horizontal heterogeneity of the environment are taken into account. Both analytical and numerical approaches are developed. The latter are included in the software package VibraCalc, which calculates the distribution of the elastic wave field in the ground from quasilinear sources. Accurate evaluation of vibration levels in buildings from high-intensity underground sources is accomplished by modeling wave propagation in dissipative inhomogeneous elastic media. The model takes into account both bulk (longitudinal and shear) and surface Rayleigh waves. To verify the approach, a series of measurements was carried out near the experimental section of the monorail road designed in Moscow. Both calculation and measurement results are presented in the paper.

  20. Accurate modelling of unsteady flows in collapsible tubes.

    PubMed

    Marchandise, Emilie; Flaud, Patrice

    2010-01-01

    The context of this paper is the development of a general and efficient numerical haemodynamic tool to help clinicians and researchers understand physiological flow phenomena. We propose an accurate one-dimensional Runge-Kutta discontinuous Galerkin (RK-DG) method coupled with lumped parameter models for the boundary conditions. The suggested model has already been successfully applied to haemodynamics in arteries and is now extended to flow in collapsible tubes such as veins. The main difference from cardiovascular simulations is that the flow may become supercritical and elastic jumps may appear, with the numerical consequence that the scheme may not remain monotone if no limiting procedure is introduced. We show that our second-order RK-DG method, equipped with an approximate Roe Riemann solver and a slope-limiting procedure, allows us to capture elastic jumps accurately. Moreover, this paper demonstrates that the complex physics associated with such flows is modelled more accurately than with traditional methods such as finite difference or finite volume methods. We present various benchmark problems that show the flexibility and applicability of the numerical method. Our solutions are compared with analytical solutions when available, and with solutions obtained using other numerical methods. Finally, to illustrate the clinical interest, we study the emptying process in a calf vein squeezed by contracting skeletal muscle in a normal and a pathological subject. We compare our results with experiments and discuss the sensitivity of our model to its parameters.

  1. Accurate motion parameter estimation for colonoscopy tracking using a regression method

    NASA Astrophysics Data System (ADS)

    Liu, Jianfei; Subramanian, Kalpathi R.; Yoo, Terry S.

    2010-03-01

    Co-located optical and virtual colonoscopy images have the potential to provide important clinical information during routine colonoscopy procedures. In our earlier work, we presented an optical-flow-based algorithm to compute egomotion from live colonoscopy video, permitting navigation and visualization of the corresponding patient anatomy. In the original algorithm, motion parameters were estimated using the traditional least sum of squares (LS) procedure, which can be unstable in the presence of optical flow vectors with large errors. In the improved algorithm, we use the Least Median of Squares (LMS) method, a robust regression method, for motion parameter estimation. Using the LMS method, we iteratively analyze and converge toward the main distribution of the flow vectors while disregarding outliers. We show through three experiments the improvement in tracking results obtained using the LMS method in comparison to the LS estimator. The first experiment demonstrates better spatial accuracy in positioning the virtual camera in the sigmoid colon. The second and third experiments demonstrate the robustness of this estimator, resulting in longer tracked sequences: from 300 to 1310 in the ascending colon, and from 410 to 1316 in the transverse colon.
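    The robustness of LMS over LS comes from minimizing the median, rather than the sum, of squared residuals, so up to roughly half the samples can be gross outliers without pulling the estimate away. A minimal sketch of the idea on a 1-D line fit (the random-subset scheme and synthetic data are illustrative; the paper applies LMS to optical-flow motion parameters, not line fitting):

```python
import numpy as np

def lms_line_fit(x, y, n_trials=500, seed=0):
    """Least Median of Squares fit of y = a*x + b.

    Repeatedly fits a line to random two-point subsets and keeps the
    candidate that minimizes the MEDIAN of squared residuals over all
    points, which makes the estimate robust to gross outliers.
    """
    rng = np.random.default_rng(seed)
    best, best_med = None, np.inf
    for _ in range(n_trials):
        i, j = rng.choice(len(x), size=2, replace=False)
        if x[i] == x[j]:
            continue
        a = (y[j] - y[i]) / (x[j] - x[i])
        b = y[i] - a * x[i]
        med = np.median((y - (a * x + b)) ** 2)
        if med < best_med:
            best, best_med = (a, b), med
    return best

# 30 inliers on y = 2x + 1 plus 10 gross outliers
x = np.arange(40.0)
y = 2 * x + 1
y[30:] += 100.0          # corrupt the last 10 points
a, b = lms_line_fit(x, y)
```

    An ordinary least-squares fit on the same data would be dragged toward the outliers; the LMS estimate recovers the inlier line.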

  2. Eccentric Black Hole Gravitational-wave Capture Sources in Galactic Nuclei: Distribution of Binary Parameters

    NASA Astrophysics Data System (ADS)

    Gondán, László; Kocsis, Bence; Raffai, Péter; Frei, Zsolt

    2018-06-01

    Mergers of binary black holes on eccentric orbits are among the targets for second-generation ground-based gravitational-wave detectors. These sources may commonly form in galactic nuclei due to gravitational-wave emission during close flyby events of single objects. We determine the distributions of initial orbital parameters for a population of these gravitational-wave sources. Our results show that the initial dimensionless pericenter distance systematically decreases with the binary component masses and the mass of the central supermassive black hole, and its distribution depends sensitively on the highest possible black hole mass in the nuclear star cluster. For a multi-mass black hole population with masses between 5 M⊙ and 80 M⊙, we find that between ∼43–69% (68–94%) of 30 M⊙–30 M⊙ (10 M⊙–10 M⊙) sources have an eccentricity greater than 0.1 when the gravitational-wave signal reaches 10 Hz, but less than ∼10% of the sources with binary component masses less than 30 M⊙ remain eccentric at this level near the last stable orbit (LSO). The eccentricity at LSO is typically between 0.005–0.05 for the lower-mass BHs, and 0.1–0.2 for the highest-mass BHs. Thus, due to the limited low-frequency sensitivity, the six currently known quasicircular LIGO/Virgo sources could still be compatible with this originally highly eccentric source population. However, at the design sensitivity of these instruments, the measurement of the eccentricity and mass distribution of merger events may be a useful diagnostic to identify the fraction of GW sources formed in this channel.

  3. Parameter Estimation for GRACE-FO Geometric Ranging Errors

    NASA Astrophysics Data System (ADS)

    Wegener, H.; Mueller, V.; Darbeheshti, N.; Naeimi, M.; Heinzel, G.

    2017-12-01

    Onboard GRACE-FO, the novel Laser Ranging Instrument (LRI) serves as a technology demonstrator, but it is a fully functional instrument providing an additional high-precision measurement of the primary mission observable: the biased range between the two spacecraft. Its two largest error sources are expected to be laser frequency noise and tilt-to-length (TTL) coupling. While not much can be done about laser frequency noise, the mechanics of the TTL error are widely understood. They depend, however, on unknown parameters. In order to improve the quality of the ranging data, it is hence essential to accurately estimate these parameters and remove the resulting TTL error from the data. Means to do so will be discussed. In particular, we will address the possibility of using calibration maneuvers, the utility of the attitude information provided by the LRI via Differential Wavefront Sensing (DWS), and the benefit of combining LRI ranging data with data from the established microwave ranging system.

  4. Atmospheric dispersion prediction and source estimation of hazardous gas using artificial neural network, particle swarm optimization and expectation maximization

    NASA Astrophysics Data System (ADS)

    Qiu, Sihang; Chen, Bin; Wang, Rongxiao; Zhu, Zhengqiu; Wang, Yuan; Qiu, Xiaogang

    2018-04-01

    Hazardous gas leak accidents pose a potential threat to human beings. Predicting atmospheric dispersion and estimating its source have become increasingly important in emergency management. Current dispersion prediction and source estimation models cannot satisfy the requirements of emergency management because they do not offer both high efficiency and high accuracy at the same time. In this paper, we develop a fast and accurate dispersion prediction and source estimation method based on an artificial neural network (ANN), particle swarm optimization (PSO) and expectation maximization (EM). The novel method uses a large number of pre-determined scenarios to train the ANN for dispersion prediction, so that the ANN can predict the concentration distribution accurately and efficiently. PSO and EM are applied to estimate the source parameters, which effectively accelerates convergence. The method is verified against the Indianapolis field study with an SF6 release source. The results demonstrate the effectiveness of the method.

  5. Consider the source: Children link the accuracy of text-based sources to the accuracy of the author.

    PubMed

    Vanderbilt, Kimberly E; Ochoa, Karlena D; Heilbrun, Jayd

    2018-05-06

    The present research investigated whether young children link the accuracy of text-based information to the accuracy of its author. Across three experiments, three- and four-year-olds (N = 231) received information about object labels from accurate and inaccurate sources who provided information both in text and verbally. Of primary interest was whether young children would selectively rely on information provided by more accurate sources, regardless of the form in which the information was communicated. Experiment 1 tested children's trust in text-based information (e.g., books) written by an author with a history of either accurate or inaccurate verbal testimony and found that children showed greater trust in books written by accurate authors. Experiment 2 replicated the findings of Experiment 1 and extended them by showing that children's selective trust in more accurate text-based sources was not dependent on experience trusting or distrusting the author's verbal testimony. Experiment 3 investigated this understanding in reverse by testing children's trust in verbal testimony communicated by an individual who had authored either accurate or inaccurate text-based information. Experiment 3 revealed that children showed greater trust in individuals who had authored accurate rather than inaccurate books. Experiment 3 also demonstrated that children used the accuracy of text-based sources to make inferences about the mental states of the authors. Taken together, these results suggest that children do indeed link the reliability of text-based sources to the reliability of the author. Statement of Contribution: Existing knowledge: Children use sources' prior accuracy to predict future accuracy in face-to-face verbal interactions. Children who are just learning to read show increased trust in text-based (vs. verbal) information. It is unknown whether children consider authors' prior accuracy when judging the accuracy of text-based information. New knowledge added by this

  6. Physics of compact nonthermal sources. III - Energetic considerations. [electron synchrotron radiation

    NASA Technical Reports Server (NTRS)

    Burbidge, G. R.; Jones, T. W.; Odell, S. L.

    1974-01-01

    The energy content of the compact incoherent electron-synchrotron sources 3C 84, 3C 120, 3C 273, 3C 279, 3C 454.3, CTA 102, 3C 446, PKS 2134+004, VRO 42.22.01 and OJ 287 is calculated on the assumption that the low-frequency turnovers in the radio spectrum are due to self-absorption and that the electron distribution is isotropic. The dependence of the source parameters on various modifications of the standard assumptions is determined. These involve relativistic motions, alternate explanations for the low-frequency turnover, proton-synchrotron radiation, and distance to the source. The canonical interpretation is found to be accurate in many respects; some of the difficulties and ways of dealing with them are discussed in detail.

  7. Source Parameters and Rupture Directivities of Earthquakes Within the Mendocino Triple Junction

    NASA Astrophysics Data System (ADS)

    Allen, A. A.; Chen, X.

    2017-12-01

    The Mendocino Triple Junction (MTJ), a region in the Cascadia subduction zone, produces a sizable number of earthquakes each year. Direct observations of rupture properties are difficult to achieve due to the small magnitudes of most of these earthquakes and the lack of offshore observations. The Cascadia Initiative (CI) project provides opportunities to look at these earthquakes in detail. Here we look at the transform plate boundary fault located in the MTJ and measure source parameters of Mw ≥ 4 earthquakes from both time-domain deconvolution and spectral analysis using the empirical Green's function (EGF) method. The second-moment method is used to infer rupture length, width, and rupture velocity from apparent source durations measured at different stations. Brune's source model is used to infer the corner frequency and spectral complexity of the stacked spectral ratio. EGFs are selected based on their location relative to the mainshock, as well as their magnitude difference from the mainshock. For the transform fault, we first look at the largest earthquake recorded during the Year 4 CI array, a Mw 5.72 event that occurred in January 2015, and select two EGFs, a Mw 1.75 and a Mw 1.73 event located within 5 km of the mainshock. This earthquake is characterized by at least two sub-events, with a total duration of about 0.3 s and a rupture length of about 2.78 km. The earthquake ruptures westward along the transform fault, and both source durations and corner frequencies show strong azimuthal variations, with an anti-correlation between duration and corner frequency. The stacked spectral ratio from multiple stations with the Mw 1.73 EGF event deviates from a pure Brune source model following the definition of Uchide and Imanishi [2016], likely due to near-field recordings with rupture complexity. We will further analyze this earthquake using more EGF events to test the reliability and stability of the results, and will also analyze three other Mw ≥ 4 earthquakes
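    The Brune model referenced above describes a source spectrum that is flat below the corner frequency and falls off as f^-2 above it: Ω(f) = Ω0 / (1 + (f/fc)^2). A minimal sketch of estimating fc from an observed spectrum by grid search in log-spectral space, on synthetic data (the fitting scheme and numbers are illustrative assumptions, not the study's actual spectral-ratio inversion):

```python
import numpy as np

def brune(f, omega0, fc, gamma=2.0):
    """Brune-type source spectrum: flat below fc, f^-gamma falloff above."""
    return omega0 / (1.0 + (f / fc) ** gamma)

def fit_corner_frequency(f, spec, fc_grid):
    """Grid-search fc minimizing the log-spectral misfit to a gamma=2 model.

    For each trial fc, the flat level omega0 is solved in closed form:
    in log space it is just the mean log residual against the unit shape.
    """
    best_fc, best_cost = None, np.inf
    logs = np.log(spec)
    for fc in fc_grid:
        shape = np.log(brune(f, 1.0, fc))
        log_omega0 = np.mean(logs - shape)
        cost = np.sum((logs - (shape + log_omega0)) ** 2)
        if cost < best_cost:
            best_fc, best_cost = fc, cost
    return best_fc

f = np.logspace(-1, 2, 200)           # 0.1-100 Hz
spec = brune(f, 1e17, 5.0)            # synthetic spectrum with fc = 5 Hz
fc_grid = np.logspace(-1, 2, 400)
fc_hat = fit_corner_frequency(f, spec, fc_grid)
```

    In practice the fit is done on EGF spectral ratios rather than raw spectra, which cancels path and site effects, but the corner-frequency estimation step has the same shape.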

  8. Accurate Nanoscale Crystallography in Real-Space Using Scanning Transmission Electron Microscopy.

    PubMed

    Dycus, J Houston; Harris, Joshua S; Sang, Xiahan; Fancher, Chris M; Findlay, Scott D; Oni, Adedapo A; Chan, Tsung-Ta E; Koch, Carl C; Jones, Jacob L; Allen, Leslie J; Irving, Douglas L; LeBeau, James M

    2015-08-01

    Here, we report reproducible and accurate measurement of crystallographic parameters using scanning transmission electron microscopy. This is made possible by removing drift and residual scan distortion. We demonstrate real-space lattice parameter measurements with <0.1% error for complex-layered chalcogenides Bi2Te3, Bi2Se3, and a Bi2Te2.7Se0.3 nanostructured alloy. Pairing the technique with atomic resolution spectroscopy, we connect local structure with chemistry and bonding. Combining these results with density functional theory, we show that the incorporation of Se into Bi2Te3 causes charge redistribution that anomalously increases the van der Waals gap between building blocks of the layered structure. The results show that atomic resolution imaging with electrons can accurately and robustly quantify crystallography at the nanoscale.

  9. Method for accurate growth of vertical-cavity surface-emitting lasers

    DOEpatents

    Chalmers, Scott A.; Killeen, Kevin P.; Lear, Kevin L.

    1995-01-01

    We report a method for accurate growth of vertical-cavity surface-emitting lasers (VCSELs). The method uses a single reflectivity spectrum measurement to determine the structure of the partially completed VCSEL at a critical point of growth. This information, along with the extracted growth rates, allows imprecisions in growth parameters to be compensated for during growth of the remaining structure, which can then be completed with very accurate critical dimensions. Using this method, we can now routinely grow lasing VCSELs with Fabry-Perot cavity resonance wavelengths controlled to within 0.5%.

  10. Method for accurate growth of vertical-cavity surface-emitting lasers

    DOEpatents

    Chalmers, S.A.; Killeen, K.P.; Lear, K.L.

    1995-03-14

    The authors report a method for accurate growth of vertical-cavity surface-emitting lasers (VCSELs). The method uses a single reflectivity spectrum measurement to determine the structure of the partially completed VCSEL at a critical point of growth. This information, along with the extracted growth rates, allows imprecisions in growth parameters to be compensated for during growth of the remaining structure, which can then be completed with very accurate critical dimensions. Using this method, they can now routinely grow lasing VCSELs with Fabry-Perot cavity resonance wavelengths controlled to within 0.5%. 4 figs.

  11. Parameter extraction using global particle swarm optimization approach and the influence of polymer processing temperature on the solar cell parameters

    NASA Astrophysics Data System (ADS)

    Kumar, S.; Singh, A.; Dhar, A.

    2017-08-01

    The accurate estimation of photovoltaic parameters is fundamental to gaining insight into the physical processes occurring inside a photovoltaic device and thereby optimizing its design, fabrication processes, and quality. A simulative approach to accurately determining the device parameters is crucial for cell array and module simulation in practical on-field applications. In this work, we have developed a global particle swarm optimization (GPSO) approach to estimate the different solar cell parameters, viz., the ideality factor (η), short circuit current (Isc), open circuit voltage (Voc), shunt resistance (Rsh), and series resistance (Rs), with a wide search range of over ±100% for each model parameter. After validating the accuracy and global search power of the proposed approach with synthetic and noisy data, we applied the technique to extract the PV parameters of ZnO/PCDTBT based hybrid solar cells (HSCs) prepared under different annealing conditions. Further, we examine the variation of the extracted model parameters to unveil the physical processes occurring when different annealing temperatures are employed during device fabrication, and establish the role of improved charge transport in the polymer films from independent FET measurements. The evolution of the surface morphology, optical absorption, and chemical composition of the PCDTBT co-polymer films as a function of processing temperature has also been captured in the study and correlated with the findings from the PV parameters extracted using the GPSO approach.
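    The global search at the heart of such an approach can be sketched with a minimal particle swarm optimizer. This is a generic PSO, not the paper's GPSO variant, and it is demonstrated on a toy quadratic objective standing in for the I-V curve misfit; the hyperparameters and target values are illustrative assumptions:

```python
import numpy as np

def pso_minimize(objective, bounds, n_particles=40, n_iter=200,
                 w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal global particle swarm optimizer.

    bounds: (low, high) arrays defining the search box per parameter.
    Each particle tracks its personal best; the swarm shares a global best.
    """
    rng = np.random.default_rng(seed)
    low, high = (np.asarray(b, float) for b in bounds)
    dim = low.size
    x = rng.uniform(low, high, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, low, high)           # keep particles in the box
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

# Toy objective standing in for the I-V misfit: recover (3.0, -1.5)
target = np.array([3.0, -1.5])
best, err = pso_minimize(lambda p: np.sum((p - target) ** 2),
                         (np.array([-10.0, -10.0]), np.array([10.0, 10.0])))
```

    In the parameter-extraction setting, `objective` would be the misfit between measured and modeled I-V curves over the five device parameters, with `bounds` covering the wide search range described in the abstract.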

  12. Relationship between strong-motion array parameters and the accuracy of source inversion and physical waves

    USGS Publications Warehouse

    Iida, M.; Miyatake, T.; Shimazaki, K.

    1990-01-01

    We develop general rules for a strong-motion array layout on the basis of our method of applying a prediction analysis to a source inversion scheme. A systematic analysis is done to obtain a relationship between fault-array parameters and the accuracy of a source inversion. Our study of the effects of various physical waves indicates that surface waves at distant stations contribute significantly to the inversion accuracy for the inclined fault plane, whereas only far-field body waves at both small and large distances contribute to the inversion accuracy for the vertical fault, which produces more phase interference. These observations imply the adequacy of the half-space approximation used throughout our present study and suggest rules for actual array designs. -from Authors

  13. An accurate metric for the spacetime around rotating neutron stars

    NASA Astrophysics Data System (ADS)

    Pappas, George

    2017-04-01

    The problem of having an accurate description of the spacetime around rotating neutron stars is of great astrophysical interest. For astrophysical applications, one needs a metric that captures all the properties of the spacetime around a rotating neutron star. Furthermore, an accurate, appropriately parametrized metric, i.e. a metric given in terms of parameters that are directly related to the physical structure of the neutron star, could be used to solve the inverse problem, which is to infer the properties of the structure of a neutron star from astrophysical observations. In this work, we present such an approximate stationary and axisymmetric metric for the exterior of rotating neutron stars, which is constructed using the Ernst formalism and is parametrized by the relativistic multipole moments of the central object. This metric is given in terms of an expansion in Weyl-Papapetrou coordinates with the multipole moments as free parameters, and is shown to be extremely accurate in capturing the physical properties of a neutron star spacetime as they are calculated numerically in general relativity. Because the metric is given in terms of an expansion, the expressions are much simpler and easier to implement than in previous approaches. For the parametrization of the metric in general relativity, the recently discovered universal 3-hair relations are used to produce a three-parameter metric. Finally, a straightforward extension of this metric is given for scalar-tensor theories with a massless scalar field, which also admit a formulation in terms of an Ernst potential.

  14. Earthquake Source Parameter Estimates for the Charlevoix and Western Quebec Seismic Zones in Eastern Canada

    NASA Astrophysics Data System (ADS)

    Onwuemeka, J.; Liu, Y.; Harrington, R. M.; Peña-Castro, A. F.; Rodriguez Padilla, A. M.; Darbyshire, F. A.

    2017-12-01

    The Charlevoix Seismic Zone (CSZ), located in eastern Canada, experiences a high rate of intraplate earthquakes, having hosted more than six M > 6 events since the 17th century. The seismicity rate is similarly high in the Western Quebec Seismic Zone (WQSZ), where an MN 5.2 event was reported on May 17, 2013. A good understanding of the seismicity and its relation to the St. Lawrence paleorift system requires information about event source properties, such as static stress drop and fault orientation (via focal mechanism solutions). In this study, we conduct a systematic estimation of event source parameters using 1) hypoDD to relocate event hypocenters, 2) spectral analysis to derive corner frequencies, magnitudes, and hence static stress drops, and 3) first-arrival polarities to derive focal mechanism solutions of selected events. We use a combined dataset of 817 earthquakes cataloged between June 2012 and May 2017 from the Canadian National Seismograph Network (CNSN) and temporary deployments from the QM-III Earthscope FlexArray and McGill seismic networks. We first relocate 450 events using P- and S-wave differential travel times refined with waveform cross-correlation, and compute focal mechanism solutions for all events with impulsive P-wave arrivals at a minimum of 8 stations using the hybridMT moment tensor inversion algorithm. We then determine corner frequency and seismic moment values by fitting S-wave spectra on the transverse components at all stations for all events, choosing the final corner frequency and moment for each event as the median estimate over all stations. We use the corner frequency and moment estimates to calculate moment magnitudes, static stress-drop values and rupture radii, assuming a circular rupture model. We also investigate scaling relationships between parameters and directivity, and compute apparent source dimensions and source time functions of 15 M 2.4+ events from second-degree moment estimates. To first order, source dimension
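    The step from corner frequency and seismic moment to magnitude, rupture radius and static stress drop follows standard closed-form relations for a circular rupture: r = k·β/fc and Δσ = 7M0/(16r³), with Mw from the usual moment-magnitude definition. A minimal sketch, assuming the Brune value k = 0.37 and a nominal shear-wave speed; the numerical example is illustrative, not a value from the study:

```python
import math

def source_params(m0_nm, fc_hz, beta_ms=3500.0, k=0.37):
    """Moment magnitude, rupture radius and static stress drop for a
    circular rupture.

    m0_nm: seismic moment (N·m); fc_hz: S-wave corner frequency (Hz);
    beta_ms: shear-wave speed (m/s); k = 0.37 for the Brune model
    (k ≈ 0.21 for Madariaga's crack model).
    """
    mw = (2.0 / 3.0) * math.log10(m0_nm) - 6.07      # moment magnitude
    r = k * beta_ms / fc_hz                          # rupture radius (m)
    stress_drop = 7.0 * m0_nm / (16.0 * r ** 3)      # static stress drop (Pa)
    return mw, r, stress_drop

# Example: a Mw ~4 event (M0 ≈ 1.26e15 N·m) with a 2 Hz corner frequency
mw, r, dsig = source_params(1.26e15, 2.0)
```

    For these inputs the rupture radius is about 650 m and the stress drop about 2 MPa; note the strong sensitivity of Δσ to fc (a factor of 2 in corner frequency changes the stress drop by a factor of 8).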

  15. The role of remediation, natural alkalinity sources and physical stream parameters in stream recovery.

    PubMed

    Kruse, Natalie A; DeRose, Lisa; Korenowsky, Rebekah; Bowman, Jennifer R; Lopez, Dina; Johnson, Kelly; Rankin, Edward

    2013-10-15

    Acid mine drainage (AMD) negatively impacts not only stream chemistry, but also aquatic biology. The ultimate goal of AMD treatment is restoration of the biological community, but that goal is rarely explicit in treatment system design. Hewett Fork in Raccoon Creek Watershed, Ohio, has been impacted by historic coal mining and has been treated with a calcium oxide doser in the headwaters of the watershed since 2004. All of the acidic inputs are isolated to a 1.5 km stretch of stream in the headwaters of the Hewett Fork watershed. The macroinvertebrate and fish communities have begun to recover, and it is possible to distinguish three zones downstream of the doser: an impaired zone, a transition zone and a recovered zone. Alkalinity from both the doser and natural sources, together with physical stream parameters, plays a role in stream restoration. In Hewett Fork, natural alkaline additions downstream are higher than those from the doser. Both alkaline additions and stream velocity drive sediment and metal deposition. Metal deposition occurs in several patterns; aluminum tends to deposit in regions of low stream velocity, while iron tends to deposit once sufficient alkalinity is added to the system downstream of mining inputs. The majority of metal deposition occurs upstream of the recovered zone. Both the physical stream parameters and natural alkalinity sources influence biological recovery in treated AMD streams and should be considered in remediation plans.

  16. Probing jets from young embedded sources

    NASA Astrophysics Data System (ADS)

    Nisini, Brunella

    2017-08-01

    Jets are intimately related to the process of star formation and disc accretion. Our present knowledge of this key ingredient in protostars mostly relies on observations of optical jets from T Tauri stars, where the original circumstellar envelope has already been cleared out. However, to understand how jets are originally formed and how their properties evolve with time, detailed observations of young accreting protostars, i.e. the class 0/I sources, are mandatory. The study of class 0/I jets will be revolutionised by JWST, able to penetrate protostars' dusty envelopes with unprecedented sensitivity and resolution. However, complementary information on parameters inferred from lines in different excitation regimes, for at least a representative sample of a few bright sources, is essential for a correct interpretation of the JWST results. Here we propose to observe four prototype bright jets from class 0/I sources with the WFC3 in narrow-band filters in order to acquire high angular resolution images in the [OI] 6300 A, [FeII] 1.25 um and [FeII] 1.64 um lines. These images will be used to: 1) provide accurate extinction maps of the jets that will be an important archival reference for any future observation of these jets; 2) measure key parameters such as the mass flux, the iron abundance and the jet collimation in the hot gas component of the jets. This information will provide an invaluable reference frame for comparison with similar parameters measured by JWST in a different gas regime. In addition, these observations will allow us to confront the properties of class 0/I jets with those of the more evolved T Tauri stars.

  17. Fast and Accurate Circuit Design Automation through Hierarchical Model Switching.

    PubMed

    Huynh, Linh; Tagkopoulos, Ilias

    2015-08-21

    In computer-aided biological design, the trifecta of characterized part libraries, accurate models and optimal design parameters is crucial for producing reliable designs. As the number of parts and model complexity increase, however, it becomes exponentially more difficult for any optimization method to search the solution space, hence creating a trade-off that hampers efficient design. To address this issue, we present a hierarchical computer-aided design architecture that uses a two-step approach for biological design. First, a simple model of low computational complexity is used to predict circuit behavior and assess candidate circuit branches through branch-and-bound methods. Then, a complex, nonlinear circuit model is used for a fine-grained search of the reduced solution space, thus achieving more accurate results. Evaluation with a benchmark of 11 circuits and a library of 102 experimental designs with known characterization parameters demonstrates a speed-up of 3 orders of magnitude when compared to other design methods that provide optimality guarantees.
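    The two-step idea here, a cheap low-complexity model pruning the search space before the expensive nonlinear model refines the survivors, can be illustrated with a generic branch-and-bound-style sketch. The objective and its cheap lower bound below are hypothetical stand-ins, not the paper's circuit models:

```python
import math

def hierarchical_search(candidates, cheap_lower_bound, fine_cost):
    """Two-step search: a cheap model that lower-bounds the true cost is used
    to prune candidates; the expensive model is evaluated only on survivors."""
    best_x, best_cost = None, float("inf")
    # Visit candidates in order of their cheap bound, so strong ones come first.
    for x in sorted(candidates, key=cheap_lower_bound):
        if cheap_lower_bound(x) >= best_cost:
            break  # every remaining candidate is bounded out of contention
        cost = fine_cost(x)
        if cost < best_cost:
            best_x, best_cost = x, cost
    return best_x, best_cost

# Hypothetical stand-ins: a "complex" objective and a cheap lower bound on it.
fine = lambda x: (x - 30) ** 2 / 100.0 + 0.1 * math.sin(x)
cheap = lambda x: (x - 30) ** 2 / 100.0 - 0.1   # valid bound since sin(x) >= -1
```

Because the cheap model is a true lower bound, the pruning is lossless: the result matches a brute-force search with the expensive model, while evaluating it on far fewer candidates.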

  18. Convergence in parameters and predictions using computational experimental design.

    PubMed

    Hagen, David R; White, Jacob K; Tidor, Bruce

    2013-08-06

    Typically, biological models fitted to experimental data suffer from significant parameter uncertainty, which can lead to inaccurate or uncertain predictions. One school of thought holds that accurate estimation of the true parameters of a biological system is inherently problematic. Recent work, however, suggests that optimal experimental design techniques can select sets of experiments whose members probe complementary aspects of a biochemical network that together can account for its full behaviour. Here, we implemented an experimental design approach for selecting sets of experiments that constrain parameter uncertainty. We demonstrated with a model of the epidermal growth factor-nerve growth factor pathway that, after synthetically performing a handful of optimal experiments, the uncertainty in all 48 parameters converged below 10 per cent. Furthermore, the fitted parameters converged to their true values with a small error consistent with the residual uncertainty. When untested experimental conditions were simulated with the fitted models, the predicted species concentrations converged to their true values with errors that were consistent with the residual uncertainty. This paper suggests that accurate parameter estimation is achievable with complementary experiments specifically designed for the task, and that the resulting parametrized models are capable of accurate predictions.
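    The notion of selecting complementary experiments that jointly constrain all parameters can be sketched with a greedy D-optimal selection over candidate sensitivity (Jacobian) matrices. This is a generic illustration of the design principle, not the authors' pipeline:

```python
import numpy as np

def greedy_design(sensitivities, n_pick, ridge=1e-6):
    """Greedy D-optimal experiment selection: repeatedly add the candidate
    experiment whose Jacobian J most increases log det of the accumulated
    Fisher information (sum of J^T J), i.e. most shrinks joint parameter
    uncertainty. `sensitivities` holds one Jacobian per candidate experiment
    (rows: measurements, cols: parameters)."""
    p = sensitivities[0].shape[1]
    fim = ridge * np.eye(p)              # tiny ridge keeps the det finite
    chosen = []
    for _ in range(n_pick):
        def logdet_after(i):
            J = sensitivities[i]
            return np.linalg.slogdet(fim + J.T @ J)[1]
        remaining = [i for i in range(len(sensitivities)) if i not in chosen]
        best = max(remaining, key=logdet_after)
        chosen.append(best)
        fim = fim + sensitivities[best].T @ sensitivities[best]
    return chosen, fim
```

With candidates that each probe a single parameter direction, the greedy rule skips a redundant duplicate and picks the complementary set, which is exactly the behavior the paper exploits to drive all parameter uncertainties down.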

  19. Ensemble MD simulations restrained via crystallographic data: Accurate structure leads to accurate dynamics

    PubMed Central

    Xue, Yi; Skrynnikov, Nikolai R

    2014-01-01

    Currently, the best existing molecular dynamics (MD) force fields cannot accurately reproduce the global free-energy minimum which realizes the experimental protein structure. As a result, long MD trajectories tend to drift away from the starting coordinates (e.g., crystallographic structures). To address this problem, we have devised a new simulation strategy aimed at protein crystals. An MD simulation of protein crystal is essentially an ensemble simulation involving multiple protein molecules in a crystal unit cell (or a block of unit cells). To ensure that average protein coordinates remain correct during the simulation, we introduced crystallography-based restraints into the MD protocol. Because these restraints are aimed at the ensemble-average structure, they have only minimal impact on conformational dynamics of the individual protein molecules. So long as the average structure remains reasonable, the proteins move in a native-like fashion as dictated by the original force field. To validate this approach, we have used the data from solid-state NMR spectroscopy, which is the orthogonal experimental technique uniquely sensitive to protein local dynamics. The new method has been tested on the well-established model protein, ubiquitin. The ensemble-restrained MD simulations produced lower crystallographic R factors than conventional simulations; they also led to more accurate predictions for crystallographic temperature factors, solid-state chemical shifts, and backbone order parameters. The predictions for 15N R1 relaxation rates are at least as accurate as those obtained from conventional simulations. Taken together, these results suggest that the presented trajectories may be among the most realistic protein MD simulations ever reported. In this context, the ensemble restraints based on high-resolution crystallographic data can be viewed as protein-specific empirical corrections to the standard force fields. PMID:24452989

  20. Fast and accurate detection of spread source in large complex networks.

    PubMed

    Paluch, Robert; Lu, Xiaoyan; Suchecki, Krzysztof; Szymański, Bolesław K; Hołyst, Janusz A

    2018-02-06

    Spread over complex networks is a ubiquitous process with increasingly wide applications. Locating spread sources is often important, e.g. finding patient zero in an epidemic, or the source of a rumor spreading in a social network. Pinto, Thiran and Vetterli introduced an algorithm (PTVA) to solve the important case of this problem in which a limited set of nodes act as observers and report the times at which the spread reached them. PTVA uses all observers to find a solution. Here we propose a new approach in which observers with low-quality information (i.e. with large spread encounter times) are ignored and potential sources are selected based on the likelihood gradient from high-quality observers. The original complexity of PTVA is O(N^α), where α ∈ (3,4) depends on the network topology and number of observers (N denotes the number of nodes in the network). Our Gradient Maximum Likelihood Algorithm (GMLA) reduces this complexity to O(N^2 log N). Extensive numerical tests performed on synthetic networks and a real Gnutella network, under the limitation that the identities of spreaders are unknown to observers, demonstrate that for scale-free networks GMLA yields higher-quality localization results than PTVA does.
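    The core idea, scoring candidate sources by how consistently their graph distances explain arrival times at only the earliest (highest-quality) observers, can be shown with a toy version. This sketch assumes a unit propagation delay per hop and is far simpler than PTVA or GMLA:

```python
from collections import deque

def bfs_dist(adj, start):
    """Hop distance from `start` to every node in an unweighted graph."""
    dist = {start: 0}
    q = deque([start])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def locate_source(adj, observations, k_best):
    """Toy observer-based localization: keep only the k earliest observers,
    then score each candidate node by the variance of (arrival time minus
    hop distance); the true source explains all arrivals with zero spread."""
    obs = sorted(observations.items(), key=lambda kv: kv[1])[:k_best]
    best_node, best_score = None, float("inf")
    for cand in adj:
        d = bfs_dist(adj, cand)
        resid = [t - d[o] for o, t in obs]   # unit delay per hop assumed
        mean = sum(resid) / len(resid)
        score = sum((r - mean) ** 2 for r in resid)
        if score < best_score:
            best_node, best_score = cand, score
    return best_node
```

On a 10-node path graph with arrival times equal to hop distance from node 3, the routine recovers node 3; discarding late observers is what GMLA adds on top of this basic likelihood picture.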

  1. Numerical modeling of the SNS H{sup −} ion source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Veitzer, Seth A.; Beckwith, Kristian R. C.; Kundrapu, Madhusudhan

    We report here on comparisons of simulated plasma parameters and code performance using more accurate physical models, such as two-temperature extended MHD models, for both a related benchmark system describing an inductively coupled plasma reactor, and for the SNS ion source. We also present results from scaling studies for mesh generation and solvers in the USim simulation code.

  2. Green-ampt infiltration parameters in riparian buffers

    Treesearch

    L.M. Stahr; D.E. Eisenhauer; M.J. Helmers; Mike G. Dosskey; T.G. Franti

    2004-01-01

    Riparian buffers can improve surface water quality by filtering contaminants from runoff before they enter streams. Infiltration is an important process in riparian buffers. Computer models are often used to assess the performance of riparian buffers. Accurate prediction of infiltration by these models is dependent upon accurate estimates of infiltration parameters....
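    The infiltration capacity these models depend on is, in the Green-Ampt formulation, a one-line expression; a minimal sketch:

```python
def green_ampt_rate(K, psi, d_theta, F):
    """Green-Ampt infiltration capacity f at cumulative infiltration F:
    f = K * (1 + psi * d_theta / F), where
    K: saturated hydraulic conductivity (cm/h),
    psi: wetting-front suction head (cm),
    d_theta: moisture deficit (saturated minus initial water content),
    F: cumulative infiltration so far (cm)."""
    return K * (1.0 + psi * d_theta / F)
```

For example, with typical textbook silt-loam values (K ≈ 0.65 cm/h, ψ ≈ 16.7 cm, Δθ ≈ 0.34, not values from this study), the capacity at F = 2 cm is about 2.5 cm/h, which is why errors in the estimated K and ψ propagate directly into modeled buffer performance.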

  3. Anatomically constrained dipole adjustment (ANACONDA) for accurate MEG/EEG focal source localizations

    NASA Astrophysics Data System (ADS)

    Im, Chang-Hwan; Jung, Hyun-Kyo; Fujimaki, Norio

    2005-10-01

    This paper proposes an alternative approach to enhance the localization accuracy of MEG and EEG focal sources. The proposed approach assumes anatomically constrained spatio-temporal dipoles, the initial positions of which are estimated from local peak positions of distributed sources obtained from a pre-execution of distributed source reconstruction. The positions of the dipoles are then adjusted on the cortical surface using a novel updating scheme named cortical surface scanning. The proposed approach has many advantages over conventional ones: (1) as the cortical surface scanning algorithm uses spatio-temporal dipoles, it is robust with respect to noise; (2) it requires no a priori information on the number and initial locations of the activations; (3) as the locations of the dipoles are restricted to a tessellated cortical surface, it is physiologically more plausible than the conventional ECD model. To verify the proposed approach, it was applied to several realistic MEG/EEG simulations and practical experiments. From the several case studies, it is concluded that the anatomically constrained dipole adjustment (ANACONDA) approach will be a very promising technique to enhance the accuracy of focal source localization, which is essential in many clinical and neurological applications of MEG and EEG.

  4. An architecture for efficient gravitational wave parameter estimation with multimodal linear surrogate models

    NASA Astrophysics Data System (ADS)

    O'Shaughnessy, Richard; Blackman, Jonathan; Field, Scott E.

    2017-07-01

    The recent direct observation of gravitational waves has further emphasized the desire for fast, low-cost, and accurate methods to infer the parameters of gravitational wave sources. Due to expense in waveform generation and data handling, the cost of evaluating the likelihood function limits the computational performance of these calculations. Building on recently developed surrogate models and a novel parameter estimation pipeline, we show how to quickly generate the likelihood function as an analytic, closed-form expression. Using a straightforward variant of a production-scale parameter estimation code, we demonstrate our method using surrogate models of effective-one-body and numerical relativity waveforms. Our study is the first time these models have been used for parameter estimation and one of the first ever parameter estimation calculations with multi-modal numerical relativity waveforms, which include all ℓ ≤ 4 modes. Our grid-free method enables rapid parameter estimation for any waveform with a suitable reduced-order model. The methods described in this paper may also find use in other data analysis studies, such as vetting coincident events or the computation of the coalescing-compact-binary detection statistic.

  5. Source Parameters for Moderate Earthquakes in the Zagros Mountains with Implications for the Depth Extent of Seismicity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, A; Brazier, R; Nyblade, A

    2009-02-23

    Six earthquakes within the Zagros Mountains with magnitudes between 4.9 and 5.7 have been studied to determine their source parameters. These events were selected for study because they were reported in open catalogs to have lower crustal or upper mantle source depths and because they occurred within an area of the Zagros Mountains where crustal velocity structure has been constrained by previous studies. Moment tensor inversion of regional broadband waveforms has been combined with forward modeling of depth phases on short-period teleseismic waveforms to constrain source depths and moment tensors. Our results show that all six events nucleated within the upper crust (<11 km depth) and have thrust mechanisms. This finding supports other studies that call into question the existence of lower crustal or mantle events beneath the Zagros Mountains.

  6. Efficient Moment-Based Inference of Admixture Parameters and Sources of Gene Flow

    PubMed Central

    Levin, Alex; Reich, David; Patterson, Nick; Berger, Bonnie

    2013-01-01

    The recent explosion in available genetic data has led to significant advances in understanding the demographic histories of and relationships among human populations. It is still a challenge, however, to infer reliable parameter values for complicated models involving many populations. Here, we present MixMapper, an efficient, interactive method for constructing phylogenetic trees including admixture events using single nucleotide polymorphism (SNP) genotype data. MixMapper implements a novel two-phase approach to admixture inference using moment statistics, first building an unadmixed scaffold tree and then adding admixed populations by solving systems of equations that express allele frequency divergences in terms of mixture parameters. Importantly, all features of the model, including topology, sources of gene flow, branch lengths, and mixture proportions, are optimized automatically from the data and include estimates of statistical uncertainty. MixMapper also uses a new method to express branch lengths in easily interpretable drift units. We apply MixMapper to recently published data for Human Genome Diversity Cell Line Panel individuals genotyped on a SNP array designed especially for use in population genetics studies, obtaining confident results for 30 populations, 20 of them admixed. Notably, we confirm a signal of ancient admixture in European populations—including previously undetected admixture in Sardinians and Basques—involving a proportion of 20–40% ancient northern Eurasian ancestry. PMID:23709261

  7. Accurate determination of the charge transfer efficiency of photoanodes for solar water splitting.

    PubMed

    Klotz, Dino; Grave, Daniel A; Rothschild, Avner

    2017-08-09

    The oxygen evolution reaction (OER) at the surface of semiconductor photoanodes is critical for photoelectrochemical water splitting. This reaction involves photo-generated holes that oxidize water via charge transfer at the photoanode/electrolyte interface. However, a certain fraction of the holes that reach the surface recombine with electrons from the conduction band, giving rise to the surface recombination loss. The charge transfer efficiency, η_t, defined as the ratio between the flux of holes that contribute to the water oxidation reaction and the total flux of holes that reach the surface, is an important parameter that helps to distinguish between bulk and surface recombination losses. However, accurate determination of η_t by conventional voltammetry measurements is complicated because only the total current is measured and it is difficult to discern between different contributions to the current. Chopped light measurement (CLM) and hole scavenger measurement (HSM) techniques are widely employed to determine η_t, but they often lead to errors resulting from instrumental as well as fundamental limitations. Intensity modulated photocurrent spectroscopy (IMPS) is better suited for accurate determination of η_t because it provides direct information on both the total photocurrent and the surface recombination current. However, careful analysis of IMPS measurements at different light intensities is required to account for nonlinear effects. This work compares the η_t values obtained by these methods using heteroepitaxial thin-film hematite photoanodes as a case study. We show that a wide spread of η_t values is obtained by different analysis methods, and even within the same method different values may be obtained depending on instrumental and experimental conditions such as the light source and light intensity. Statistical analysis of the results obtained for our model hematite photoanode shows good correlation between different methods for

  8. SU-F-J-217: Accurate Dose Volume Parameters Calculation for Revealing Rectum Dose-Toxicity Effect Using Deformable Registration in Cervical Cancer Brachytherapy: A Pilot Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhen, X; Chen, H; Liao, Y

    Purpose: To study the feasibility of employing deformable registration methods for accurate rectum dose volume parameter calculation, and their potential in revealing rectum dose-toxicity differences between complication and non-complication cervical cancer patients treated with brachytherapy. Method and Materials: Data from 60 patients treated with BT, including planning images, treatment plans, and follow-up clinical exams, were retrospectively collected. Among them, 12 patients who complained of hematochezia were further examined with colonoscopy and scored as Grade 1-3 complication (CP). Meanwhile, another 12 non-complication (NCP) patients were selected as a reference group. To seek potential gains in rectum toxicity prediction when fractional anatomical deformations are accounted for, the rectum dose volume parameters D0.1/1/2cc of the selected patients were retrospectively computed by three different approaches: the simple "worst-case scenario" (WS) addition method, an intensity-based deformable image registration (DIR) algorithm (Demons), and a more accurate, recently developed local topology preserved non-rigid point matching algorithm (TOP). Statistical significance of the differences between rectum doses of the CP group and the NCP group was tested by a two-tailed t-test, and results were considered statistically significant if p < 0.05. Results: For the D0.1cc, no statistical differences were found between the CP and NCP groups for any of the three methods. For the D1cc, no dose difference was detected by the WS method; however, statistical differences between the two groups were observed with both Demons and TOP, and were more evident with TOP. For the D2cc, the difference between the CP and NCP cases was statistically significant for all three methods, but more pronounced with TOP. Conclusion: In this study, we calculated the rectum D0.1/1/2cc by simple WS addition and two DIR methods to seek gains in rectum toxicity prediction. The results favor the claim that

  9. Effect of Different Solar Radiation Data Sources on the Variation of Techno-Economic Feasibility of PV Power System

    NASA Astrophysics Data System (ADS)

    Alghoul, M. A.; Ali, Amer; Kannanaikal, F. V.; Amin, N.; Aljaafar, A. A.; Kadhim, Mohammed; Sopian, K.

    2017-11-01

    The aim of this study is to evaluate the variation in techno-economic feasibility of a PV power system under different data sources of solar radiation. The HOMER simulation tool is used to predict the techno-economic feasibility parameters of a PV power system in Baghdad city, Iraq, located at (33.3128° N, 44.3615° E), as a case study. Four data sources of solar radiation, different annual capacity shortage percentages (0, 2.5, 5, and 7.5), and a wide range of daily load profiles (10-100 kWh/day) are implemented. The analyzed parameters of the techno-economic feasibility are COE ($/kWh), PV array power capacity (kW), PV electrical production (kWh/year), number of batteries, and battery lifetime (year). The main results of the study revealed the following: (1) solar radiation from different data sources caused noticeable to significant variation in the values of the techno-economic feasibility parameters; therefore, careful attention must be paid to ensure the use of accurate solar input data; (2) average solar radiation from different data sources can be recommended as a reasonable input; (3) as the size of the PV power system increases, the effect of different data sources of solar radiation increases and causes significant variation in the values of the techno-economic feasibility parameters.
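    The COE figure a HOMER-style run reports is essentially an annualized-cost ratio, which makes the sensitivity to the solar input easy to see: the resource data set fixes the energy term in the denominator. A minimal sketch (the discount rate, lifetime, and cost inputs below are illustrative assumptions, not values from the study):

```python
def capital_recovery_factor(i, n):
    """Annualizes an upfront cost over n years at discount rate i."""
    return i * (1 + i) ** n / ((1 + i) ** n - 1)

def cost_of_energy(capex, annual_om, annual_kwh, i=0.06, n=25):
    """Levelized cost of energy ($/kWh): annualized total cost divided by
    the annual energy served, the COE a techno-economic run reports."""
    return (capex * capital_recovery_factor(i, n) + annual_om) / annual_kwh
```

Holding the system fixed and swapping in a different radiation data set changes `annual_kwh` (and the PV sizing needed to meet the load), which is why the abstract sees larger COE spreads as system size grows.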

  10. Accurate Atmospheric Parameters at Moderate Resolution Using Spectral Indices: Preliminary Application to the MARVELS Survey

    NASA Astrophysics Data System (ADS)

    Ghezzi, Luan; Dutra-Ferreira, Letícia; Lorenzo-Oliveira, Diego; Porto de Mello, Gustavo F.; Santiago, Basílio X.; De Lee, Nathan; Lee, Brian L.; da Costa, Luiz N.; Maia, Marcio A. G.; Ogando, Ricardo L. C.; Wisniewski, John P.; González Hernández, Jonay I.; Stassun, Keivan G.; Fleming, Scott W.; Schneider, Donald P.; Mahadevan, Suvrath; Cargile, Phillip; Ge, Jian; Pepper, Joshua; Wang, Ji; Paegert, Martin

    2014-12-01

    Studies of Galactic chemical and dynamical evolution in the solar neighborhood depend on the availability of precise atmospheric parameters (effective temperature T_eff, metallicity [Fe/H], and surface gravity log g) for solar-type stars. Many large-scale spectroscopic surveys operate at low to moderate spectral resolution for efficiency in observing large samples, which makes stellar characterization difficult due to the high degree of blending of spectral features. Therefore, most surveys employ spectral synthesis, which is a powerful technique, but relies heavily on the completeness and accuracy of atomic line databases and can yield possibly correlated atmospheric parameters. In this work, we use an alternative method based on spectral indices to determine the atmospheric parameters of a sample of nearby FGK dwarfs and subgiants observed by the MARVELS survey at moderate resolving power (R ~ 12,000). To avoid a time-consuming manual analysis, we have developed three codes to automatically normalize the observed spectra, measure the equivalent widths of the indices, and, through a comparison of those with values calculated with predetermined calibrations, estimate the atmospheric parameters of the stars. The calibrations were derived using a sample of 309 stars with precise stellar parameters obtained from the analysis of high-resolution FEROS spectra, permitting the low-resolution equivalent widths to be directly related to the stellar parameters. A validation test of the method was conducted with a sample of 30 MARVELS targets that also have reliable atmospheric parameters derived from high-resolution spectra and spectroscopic analysis based on the excitation and ionization equilibria method. Our approach was able to recover the parameters within 80 K for T_eff, 0.05 dex for [Fe/H], and 0.15 dex for log g, values that are lower than or equal to the typical external uncertainties found between different high-resolution analyses. 
An additional test was

  11. Effect of basic physical parameters to control plasma meniscus and beam halo formation in negative ion sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miyamoto, K.; Okuda, S.; Nishioka, S.

    2013-09-14

    Our previous study shows that the curvature of the plasma meniscus causes the beam halo in negative ion sources: the negative ions extracted from the periphery of the meniscus are over-focused in the extractor due to the electrostatic lens effect, and consequently become the beam halo. In this article, the detailed physics of plasma meniscus and beam halo formation is investigated with two-dimensional particle-in-cell simulation. It is shown that basic physical parameters such as the H− extraction voltage and the effective electron confinement time significantly affect the formation of the plasma meniscus and the resultant beam halo, since the penetration of the electric field for negative ion extraction depends on these physical parameters. In particular, the electron confinement time depends on the characteristic time of electron escape along the magnetic field as well as the characteristic time of electron diffusion across the magnetic field. The plasma meniscus penetrates deeply into the source plasma region when the effective electron confinement time is short. In this case, the curvature of the plasma meniscus becomes large, and consequently the fraction of the beam halo increases.

  12. Threat Identification Parameters for a Stolen Category 1 Radioactive Source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ussery, Larry Eugene; Winkler, Ryan; Myers, Steven Charles

    2016-02-18

    Radioactive sources are used very widely for research and practical applications across medicine, industry, government, universities, and agriculture. The risks associated with these sources vary widely depending on the specific radionuclide used to make the source, the source activity, and its chemical and physical form. Sources are categorized by a variety of classification schemes according to the specific risk they pose to the public. This report specifically addresses sources that are classified in the highest category for health risk (Category 1). Exposure to an unshielded or lightly shielded Category 1 source is extremely dangerous to life and health and can be fatal in relatively short exposure times, measured in seconds to minutes. A Category 1 source packaged according to the guidelines dictated by the NRC and U.S. Department of Transportation will typically be surrounded by a large amount of dense shielding material, but will still exhibit a significant dose rate in close proximity. Detection ranges for Category 1 gamma ray sources can extend beyond 5000 ft, but will depend mostly on the source isotope and activity, and the level of shielding around the source. Category 1 sources are easy to detect, but difficult to localize. Dose rates in proximity to an unshielded Category 1 source are extraordinarily high. At distances of a few hundred feet, the functionality of many commonly used handheld instruments will be extremely limited for both localization and identification of the source. Radiation emitted from a Category 1 source will scatter off of both solid material (ground and buildings) and the atmosphere, a phenomenon known as skyshine. This scattering affects the ability to easily localize and find the source.

  13. Accurate acoustic power measurement for low-intensity focused ultrasound using focal axial vibration velocity

    NASA Astrophysics Data System (ADS)

    Tao, Chenyang; Guo, Gepu; Ma, Qingyu; Tu, Juan; Zhang, Dong; Hu, Jimin

    2017-07-01

    Low-intensity focused ultrasound is a form of therapy that can have reversible acoustothermal effects on biological tissue, depending on the exposure parameters. The acoustic power (AP) should be chosen with caution for the sake of safety. To recover the energy of counteracted radial vibrations at the focal point, an accurate AP measurement method using the focal axial vibration velocity (FAVV) is proposed in explicit formulae and is demonstrated experimentally using a laser vibrometer. The experimental APs for two transducers agree well with theoretical calculations and numerical simulations, showing that AP is proportional to the square of the FAVV, with a fixed power gain determined by the physical parameters of the transducers. The favorable results suggest that the FAVV can be used as a valuable parameter for non-contact AP measurement, providing a new strategy for accurate power control for low-intensity focused ultrasound in biomedical engineering.

  14. Deriving stellar parameters with the SME software package

    NASA Astrophysics Data System (ADS)

    Piskunov, N.

    2017-09-01

    Photometry and spectroscopy are complementary tools for deriving accurate stellar parameters. Here I present one of the popular packages for stellar spectroscopy called SME with the emphasis on the latest developments and error assessment for the derived parameters.

  15. Transient analysis of intercalation electrodes for parameter estimation

    NASA Astrophysics Data System (ADS)

    Devan, Sheba

    An essential part of integrating batteries as power sources in any application, be it a large-scale automotive application or a small-scale portable one, is an efficient Battery Management System (BMS). The combination of a battery with a microprocessor-based BMS (called a "smart battery") helps prolong the life of the battery by operating it in the optimal regime, and provides accurate information about the battery to the end user. The main purposes of a BMS are cell protection, monitoring and control, and communication between different components. These purposes are fulfilled by tracking the change in the parameters of the intercalation electrodes in the batteries. Consequently, the functions of the BMS should be prompt, which requires the parameter-extraction methodology to be time-efficient. The traditional transient techniques applied so far may not be suitable, for reasons such as long experimental times and the inability to apply them while the battery is in operation. The primary aim of this research work is to design a fast, accurate and reliable technique that can be used to extract parameter values of the intercalation electrodes. A methodology based on time-domain analysis of the short-time response to a sinusoidal input perturbation is demonstrated using a porous electrode model for an intercalation electrode. It is shown that the parameters associated with the interfacial processes occurring in the electrode can be determined rapidly, within a few milliseconds, by measuring the response in the transient region. The short-time analysis in the time domain is then extended to a single particle model that involves bulk diffusion in the solid phase in addition to interfacial processes. A systematic procedure for sequential parameter estimation using sensitivity analysis is described. 
Further, the short-time response and the input perturbation are transformed into the frequency domain using the Fast Fourier Transform
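
    The idea of extracting an interfacial parameter from the short-time transient can be sketched on a simple Randles-type interface (series resistance Rs, charge-transfer resistance Rct, double-layer capacitance Cdl). This is an illustrative stand-in for the paper's porous electrode model; all component values are assumptions.

```python
import cmath
import math

# Illustrative Randles-type interface; values are assumptions, not from the paper.
Rs, Rct, Cdl = 0.05, 0.10, 0.01         # ohm, ohm, farad
V0, f = 0.005, 10.0                     # 5 mV sinusoidal perturbation at 10 Hz
w = 2.0 * math.pi * f
tau_true = Cdl * Rs * Rct / (Rs + Rct)  # interfacial time constant (~0.33 ms)

# Steady-state phasor of the double-layer voltage for v(t) = V0*sin(w*t)
H = (1.0 / Rs) / complex(1.0 / Rs + 1.0 / Rct, w * Cdl)

# Forward-Euler integration of  Cdl*dvc/dt = (v - vc)/Rs - vc/Rct
dt, n = 1e-7, 10000
vc = 0.0
ts, logs = [], []
for k in range(n):
    t = k * dt
    if 0.0 < t < 5e-4:                  # transient window: ~1.5 time constants
        vc_ss = (H * V0 * cmath.exp(1j * w * t)).imag
        ts.append(t)
        logs.append(math.log(abs(vc - vc_ss)))
    v = V0 * math.sin(w * t)
    vc += dt * ((v - vc) / Rs - vc / Rct) / Cdl

# The transient part decays as exp(-t/tau): recover tau from the log-linear slope
m = len(ts)
mt, ml = sum(ts) / m, sum(logs) / m
slope = sum((a - mt) * (b - ml) for a, b in zip(ts, logs)) / \
        sum((a - mt) ** 2 for a in ts)
tau_est = -1.0 / slope
print(tau_est, tau_true)                # agree to within a few per cent
```

    The point of the sketch is that the time constant, and hence the interfacial parameters, are recoverable from less than a millisecond of transient data, without waiting for the steady state.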

  16. Phase contrast imaging simulation and measurements using polychromatic sources with small source-object distances

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Golosio, Bruno; Carpinelli, Massimo; Masala, Giovanni Luca

    Phase contrast imaging is a technique widely used in synchrotron facilities for nondestructive analysis. Such a technique can also be implemented through microfocus x-ray tube systems. Recently, a relatively new type of compact, quasimonochromatic x-ray source based on Compton backscattering has been proposed for phase contrast imaging applications. In order to plan a phase contrast imaging system setup, to evaluate the system performance and to choose the experimental parameters that optimize the image quality, it is important to have reliable software for phase contrast imaging simulation. Several software tools have been developed and tested against experimental measurements at synchrotron facilities devoted to phase contrast imaging. However, many approximations that are valid in such conditions (e.g., large source-object distance, small transverse size of the object, plane wave approximation, monochromatic beam, and Gaussian-shaped source focal spot) are not generally suitable for x-ray tubes and other compact systems. In this work we describe a general method for the simulation of phase contrast imaging using polychromatic sources, based on a spherical wave description of the beam and on a double-Gaussian model of the source focal spot; we discuss the validity of some possible approximations, and we test the simulations against experimental measurements using a microfocus x-ray tube on three types of polymers (nylon, poly-ethylene-terephthalate, and poly-methyl-methacrylate) at varying source-object distance. It will be shown that, as long as all experimental conditions are described accurately in the simulations, the described method yields results that are in good agreement with experimental measurements.

  17. Hydrogen atoms can be located accurately and precisely by x-ray crystallography.

    PubMed

    Woińska, Magdalena; Grabowsky, Simon; Dominiak, Paulina M; Woźniak, Krzysztof; Jayatilaka, Dylan

    2016-05-01

    Precise and accurate structural information on hydrogen atoms is crucial to the study of energies of interactions important for crystal engineering, materials science, medicine, and pharmacy, and to the estimation of physical and chemical properties in solids. However, hydrogen atoms only scatter x-radiation weakly, so x-rays have not been used routinely to locate them accurately. Textbooks and teaching classes still emphasize that hydrogen atoms cannot be located with x-rays close to heavy elements; instead, neutron diffraction is needed. We show that, contrary to widespread expectation, hydrogen atoms can be located very accurately using x-ray diffraction, yielding bond lengths involving hydrogen atoms (A-H) that are in agreement with results from neutron diffraction mostly within a single standard deviation. The precision of the determination is also comparable between x-ray and neutron diffraction results. This has been achieved at resolutions as low as 0.8 Å using Hirshfeld atom refinement (HAR). We have applied HAR to 81 crystal structures of organic molecules and compared the A-H bond lengths with those from neutron measurements for A-H bonds sorted into bonds of the same class. We further show in a selection of inorganic compounds that hydrogen atoms can be located in bridging positions and close to heavy transition metals accurately and precisely. We anticipate that, in the future, conventional x-radiation sources at in-house diffractometers can be used routinely for locating hydrogen atoms in small molecules accurately instead of large-scale facilities such as spallation sources or nuclear reactors.

  18. Hydrogen atoms can be located accurately and precisely by x-ray crystallography

    PubMed Central

    Woińska, Magdalena; Grabowsky, Simon; Dominiak, Paulina M.; Woźniak, Krzysztof; Jayatilaka, Dylan

    2016-01-01

    Precise and accurate structural information on hydrogen atoms is crucial to the study of energies of interactions important for crystal engineering, materials science, medicine, and pharmacy, and to the estimation of physical and chemical properties in solids. However, hydrogen atoms only scatter x-radiation weakly, so x-rays have not been used routinely to locate them accurately. Textbooks and teaching classes still emphasize that hydrogen atoms cannot be located with x-rays close to heavy elements; instead, neutron diffraction is needed. We show that, contrary to widespread expectation, hydrogen atoms can be located very accurately using x-ray diffraction, yielding bond lengths involving hydrogen atoms (A–H) that are in agreement with results from neutron diffraction mostly within a single standard deviation. The precision of the determination is also comparable between x-ray and neutron diffraction results. This has been achieved at resolutions as low as 0.8 Å using Hirshfeld atom refinement (HAR). We have applied HAR to 81 crystal structures of organic molecules and compared the A–H bond lengths with those from neutron measurements for A–H bonds sorted into bonds of the same class. We further show in a selection of inorganic compounds that hydrogen atoms can be located in bridging positions and close to heavy transition metals accurately and precisely. We anticipate that, in the future, conventional x-radiation sources at in-house diffractometers can be used routinely for locating hydrogen atoms in small molecules accurately instead of large-scale facilities such as spallation sources or nuclear reactors. PMID:27386545

  19. Source mechanisms and source parameters of March 10 and September 13, 2007, United Arab Emirates Earthquakes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marzooqi, Y A; Abou Elenean, K M; Megahed, A S

    2008-02-29

    On March 10 and September 13, 2007, two felt earthquakes with moment magnitudes 3.66 and 3.94 occurred in the eastern part of the United Arab Emirates (UAE). The two events were accompanied by a few smaller events. Being well recorded by the UAE and Oman digital broadband stations, they provide us with an excellent opportunity to study the tectonic process and the present-day stress field acting on this area. In this study, we determined the focal mechanisms of the two main shocks by two methods (P-wave polarities and regional waveform inversion). Our results indicate a normal faulting mechanism with a slight strike-slip component for the two studied events, along a fault plane trending NNE-SSW, consistent with a suggested fault along the extension of the faults bounding the Bani Hamid area. The seismicity distribution between the two earthquake sequences reveals a noticeable gap that may be the site of a future event. The source parameters (seismic moment, moment magnitude, fault radius, stress drop and displacement across the fault) were also estimated from the far-field displacement spectra and interpreted in the context of the tectonic setting.

  20. An accurate halo model for fitting non-linear cosmological power spectra and baryonic feedback models

    NASA Astrophysics Data System (ADS)

    Mead, A. J.; Peacock, J. A.; Heymans, C.; Joudaki, S.; Heavens, A. F.

    2015-12-01

    We present an optimized variant of the halo model, designed to produce accurate matter power spectra well into the non-linear regime for a wide range of cosmological models. To do this, we introduce physically motivated free parameters into the halo-model formalism and fit these to data from high-resolution N-body simulations. For a variety of Λ cold dark matter (ΛCDM) and wCDM models, the halo-model power is accurate to ≃ 5 per cent for k ≤ 10h Mpc-1 and z ≤ 2. An advantage of our new halo model is that it can be adapted to account for the effects of baryonic feedback on the power spectrum. We demonstrate this by fitting the halo model to power spectra from the OWLS (OverWhelmingly Large Simulations) hydrodynamical simulation suite via parameters that govern halo internal structure. We are able to fit all feedback models investigated at the 5 per cent level using only two free parameters, and we place limits on the range of these halo parameters for feedback models investigated by the OWLS simulations. Accurate predictions to high k are vital for weak-lensing surveys, and these halo parameters could be considered nuisance parameters to marginalize over in future analyses to mitigate uncertainty regarding the details of feedback. Finally, we investigate how lensing observables predicted by our model compare to those from simulations and from HALOFIT for a range of k-cuts and feedback models and quantify the angular scales at which these effects become important. Code to calculate power spectra from the model presented in this paper can be found at https://github.com/alexander-mead/hmcode.

  1. Amplitude loss of sonic waveform due to source coupling to the medium

    NASA Astrophysics Data System (ADS)

    Lee, Myung W.; Waite, William F.

    2007-03-01

    In contrast to hydrate-free sediments, sonic waveforms acquired in gas hydrate-bearing sediments indicate strong amplitude attenuation associated with a sonic velocity increase. The amplitude attenuation increase has been used to quantify pore-space hydrate content by attributing observed attenuation to the hydrate-bearing sediment's intrinsic attenuation. A second attenuation mechanism must be considered, however. Theoretically, energy radiation from sources inside fluid-filled boreholes strongly depends on the elastic parameters of materials surrounding the borehole. It is therefore plausible to interpret amplitude loss in terms of source coupling to the surrounding medium as well as to intrinsic attenuation. Analyses of sonic waveforms from the Mallik 5L-38 well, Northwest Territories, Canada, indicate a significant component of sonic waveform amplitude loss is due to source coupling. Accordingly, all sonic waveform amplitude analyses should include the effect of source coupling to accurately characterize a formation's intrinsic attenuation.

  2. Amplitude loss of sonic waveform due to source coupling to the medium

    USGS Publications Warehouse

    Lee, Myung W.; Waite, William F.

    2007-01-01

    In contrast to hydrate-free sediments, sonic waveforms acquired in gas hydrate-bearing sediments indicate strong amplitude attenuation associated with a sonic velocity increase. The amplitude attenuation increase has been used to quantify pore-space hydrate content by attributing observed attenuation to the hydrate-bearing sediment's intrinsic attenuation. A second attenuation mechanism must be considered, however. Theoretically, energy radiation from sources inside fluid-filled boreholes strongly depends on the elastic parameters of materials surrounding the borehole. It is therefore plausible to interpret amplitude loss in terms of source coupling to the surrounding medium as well as to intrinsic attenuation. Analyses of sonic waveforms from the Mallik 5L-38 well, Northwest Territories, Canada, indicate a significant component of sonic waveform amplitude loss is due to source coupling. Accordingly, all sonic waveform amplitude analyses should include the effect of source coupling to accurately characterize a formation's intrinsic attenuation.

  3. Groundwater Pollution Source Identification using Linked ANN-Optimization Model

    NASA Astrophysics Data System (ADS)

    Ayaz, Md; Srivastava, Rajesh; Jain, Ashu

    2014-05-01

    values. The main advantage of the proposed model is that it requires only the upper half of the breakthrough curve and is capable of predicting source parameters when the lag time is not known. Linking the ANN model with the proposed optimization model reduces the dimensionality of the optimization model's decision variables by one, and hence reduces its complexity. The results show that the proposed linked ANN-optimization model predicts the source parameters accurately for error-free data. The proposed model was run several times to obtain the mean, standard deviation and interval estimate of the predicted parameters for observations with random measurement errors. The mean values predicted by the model were quite close to the exact values, while the standard deviation of the predicted values showed an increasing trend with increasing level of measurement error. The model appears to be robust and may be efficiently utilized to solve the inverse pollution source identification problem.
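
    The repeated-inversion procedure for mean/standard-deviation/interval estimates can be illustrated on a toy problem. The model below is a generic instantaneous 1-D source observed at a downstream well, not the authors' ANN; all numerical values are assumptions. Because the concentration is linear in the source mass M, the least-squares estimate of M has a closed form, and repeating the inversion with random measurement errors gives its sampling statistics.

```python
import math
import random
import statistics

random.seed(1)

# Toy stand-in: instantaneous 1-D source of mass M at time t0 (v: velocity,
# D: dispersion), observed at a well a distance x downstream.
def breakthrough(t, M, t0, x=100.0, v=1.0, D=10.0):
    dt = t - t0
    if dt <= 0.0:
        return 0.0
    return M / math.sqrt(4.0 * math.pi * D * dt) * \
        math.exp(-(x - v * dt) ** 2 / (4.0 * D * dt))

M_true, t0_true = 50.0, 20.0
times = [t0_true + 5.0 * k for k in range(1, 25)]
g = [breakthrough(t, 1.0, t0_true) for t in times]      # unit-mass template

def estimate_M(obs):
    # least-squares estimate for a model linear in M
    return sum(c * gi for c, gi in zip(obs, g)) / sum(gi * gi for gi in g)

results = []
for noise in (0.0, 0.01, 0.05):                         # relative measurement error
    ests = []
    for _ in range(200):
        obs = [breakthrough(t, M_true, t0_true) * (1.0 + random.gauss(0.0, noise))
               for t in times]
        ests.append(estimate_M(obs))
    results.append((noise, statistics.mean(ests), statistics.stdev(ests)))
    print("noise %.0f%%: mean %.2f, std %.3f"
          % (100 * noise, results[-1][1], results[-1][2]))
```

    As in the record above, the mean stays close to the exact value while the standard deviation of the estimates grows with the measurement-error level.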

  4. Approach to identifying pollutant source and matching flow field

    NASA Astrophysics Data System (ADS)

    Liping, Pang; Yu, Zhang; Hongquan, Qu; Tao, Hu; Wei, Wang

    2013-07-01

    Accidental pollution events often threaten people's health and lives, and it is necessary to identify a pollutant source rapidly so that prompt actions can be taken to prevent the spread of pollution. This identification process, however, is one of the difficult inverse problems. This paper develops an approach that uses single-sensor information with noise to identify a sudden continuous emission of a trace pollutant in a steady velocity field. The approach first compares the characteristic distance of the measured concentration sequence to multiple hypothetical concentration sequences at the sensor position, obtained from multiple hypotheses on the three source parameters. Source identification is then realized by globally searching for the optimal values with the objective function of maximum location probability. Considering the large computational load of this global search, a local fine-mesh source search based on a priori coarse-mesh location probabilities is further used to improve the efficiency of identification. The studies show that the flow field has a very important influence on source identification; we therefore also discuss the impact of non-matching flow fields with estimation deviation on identification and, based on this analysis, present a method for matching an accurate flow field to improve identification accuracy. To verify the practical application of the above methods, an experimental system simulating a sudden pollution process in a steady flow field was set up, and experiments were conducted with a known diffusion coefficient. The studies showed that the three parameters of the pollutant source (position, emission strength and initial emission time) could be estimated by using the flow-field matching and source identification methods.

  5. Molybdenum disulfide and water interaction parameters

    NASA Astrophysics Data System (ADS)

    Heiranian, Mohammad; Wu, Yanbin; Aluru, Narayana R.

    2017-09-01

    Understanding the interaction between water and molybdenum disulfide (MoS2) is of crucial importance for investigating the physics of various applications involving MoS2-water interfaces. An accurate force field is required to describe water-MoS2 interactions. In this work, water-MoS2 force field parameters are derived using the high-accuracy random phase approximation (RPA) method and validated by comparison with experiments. The parameters obtained from the RPA method yield water-MoS2 interface properties (solid-liquid work of adhesion) in good agreement with the experimental measurements. An accurate description of the MoS2-water interaction will facilitate the study of MoS2 in applications such as DNA sequencing, sea water desalination, and power generation.

  6. MARE2DEM: a 2-D inversion code for controlled-source electromagnetic and magnetotelluric data

    NASA Astrophysics Data System (ADS)

    Key, Kerry

    2016-10-01

    This work presents MARE2DEM, a freely available code for 2-D anisotropic inversion of magnetotelluric (MT) data and frequency-domain controlled-source electromagnetic (CSEM) data from onshore and offshore surveys. MARE2DEM parametrizes the inverse model using a grid of arbitrarily shaped polygons, where unstructured triangular or quadrilateral grids are typically used due to their ease of construction. Unstructured grids provide significantly more geometric flexibility and parameter efficiency than the structured rectangular grids commonly used by most other inversion codes. Transmitter and receiver components located on topographic slopes can be tilted parallel to the boundary so that the simulated electromagnetic fields accurately reproduce the real survey geometry. The forward solution is implemented with a goal-oriented adaptive finite-element method that automatically generates and refines unstructured triangular element grids that conform to the inversion parameter grid, ensuring accurate responses as the model conductivity changes. This dual-grid approach is significantly more efficient than the conventional use of a single grid for both the forward and inverse meshes since the more detailed finite-element meshes required for accurate responses do not increase the memory requirements of the inverse problem. Forward solutions are computed in parallel with a highly efficient scaling by partitioning the data into smaller independent modeling tasks consisting of subsets of the input frequencies, transmitters and receivers. Non-linear inversion is carried out with a new Occam inversion approach that requires fewer forward calls. Dense matrix operations are optimized for memory and parallel scalability using the ScaLAPACK parallel library. Free parameters can be bounded using a new non-linear transformation that leaves the transformed parameters nearly the same as the original parameters within the bounds, thereby reducing non-linear smoothing effects. Data
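    The bounded-parameter idea mentioned at the end of the record can be illustrated with the generic logistic bound transform below. This is only a schematic analogue: MARE2DEM's own transformation is specifically designed to stay much closer to the identity well inside the bounds, which the plain logistic transform does not.

```python
import math

# Generic logistic bound transform: maps a bounded parameter x in (lo, hi) to an
# unconstrained variable y, so the optimizer can ignore the bounds explicitly.
# (MARE2DEM's actual transform differs; this is an illustration of the concept.)
def to_unbounded(x, lo, hi):
    return math.log((x - lo) / (hi - x))

def to_bounded(y, lo, hi):
    return lo + (hi - lo) / (1.0 + math.exp(-y))

lo, hi = 1.0e-3, 1.0e3    # e.g. resistivity bounds in ohm-m (illustrative)
x = 5.0
y = to_unbounded(x, lo, hi)
print(to_bounded(y, lo, hi))   # round-trips to 5.0
```

    The inversion then updates y freely; mapping back through `to_bounded` guarantees the physical parameter never leaves (lo, hi).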

  7. Dynamic imaging model and parameter optimization for a star tracker.

    PubMed

    Yan, Jinyun; Jiang, Jie; Zhang, Guangjun

    2016-03-21

    Under dynamic conditions, star spots move across the image plane of a star tracker and form a smeared star image. This smearing effect increases errors in star position estimation and degrades attitude accuracy. First, an analytical energy distribution model of a smeared star spot is established based on a line segment spread function because the dynamic imaging process of a star tracker is equivalent to the static imaging process of linear light sources. The proposed model, which has a clear physical meaning, explicitly reflects the key parameters of the imaging process, including incident flux, exposure time, velocity of a star spot in an image plane, and Gaussian radius. Furthermore, an analytical expression of the centroiding error of the smeared star spot is derived using the proposed model. An accurate and comprehensive evaluation of centroiding accuracy is obtained based on the expression. Moreover, analytical solutions of the optimal parameters are derived to achieve the best performance in centroid estimation. Finally, we perform numerical simulations and a night sky experiment to validate the correctness of the dynamic imaging model, the centroiding error expression, and the optimal parameters.
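    The equivalence between a moving spot and a static linear source can be demonstrated in one dimension: a Gaussian PSF drifting at constant velocity v during exposure T produces a smear whose intensity-weighted centroid sits at the midpoint of the motion. All numerical values below are illustrative assumptions, not parameters from the paper.

```python
import math

# Toy 1-D smeared star spot: integrate a drifting Gaussian PSF over the exposure.
sigma, x0, v, T = 1.5, 20.0, 8.0, 0.5   # PSF radius [px], start [px], velocity [px/s], exposure [s]
npix, nsteps = 64, 500

image = [0.0] * npix
for k in range(nsteps):                  # accumulate the moving PSF over the exposure
    c = x0 + v * (k + 0.5) * T / nsteps
    for i in range(npix):
        image[i] += math.exp(-(i - c) ** 2 / (2.0 * sigma ** 2))

total = sum(image)
centroid = sum(i * val for i, val in enumerate(image)) / total
print(centroid, x0 + v * T / 2.0)        # both ~22.0: the midpoint of the smear
```

    With noise added, the estimation error of this centroid grows with the smear length v*T, which is the trade-off the paper's analytical model and parameter optimization address.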

  8. A convenient and accurate wide-range parameter relationship between Buckingham and Morse potential energy functions

    NASA Astrophysics Data System (ADS)

    Lim, Teik-Cheng; Dawson, James Alexander

    2018-05-01

    This study explores the close-range, short-range and long-range relationships between the parameters of the Morse and Buckingham potential energy functions. The results show that the close-range and short-range relationships are valid for bond compression and for very small changes in bond length, respectively, while the long-range relationship is valid for bond stretching. A wide-range relationship is proposed to combine the comparative advantages of the close-range, short-range and long-range parameter relationships. The wide-range relationship is useful for replacing the close-range, short-range and long-range parameter relationships, thereby preventing the undesired effects of potential energy jumps resulting from functional switching between the close-range, short-range and long-range interaction energies.

  9. Accurate atomistic first-principles calculations of electronic stopping

    DOE PAGES

    Schleife, André; Kanai, Yosuke; Correa, Alfredo A.

    2015-01-20

    In this paper, we show that atomistic first-principles calculations based on real-time propagation within time-dependent density functional theory are capable of accurately describing electronic stopping of light projectile atoms in metal hosts over a wide range of projectile velocities. In particular, we employ a plane-wave pseudopotential scheme to solve time-dependent Kohn-Sham equations for representative systems of H and He projectiles in crystalline aluminum. This approach to simulate nonadiabatic electron-ion interaction provides an accurate framework that allows for quantitative comparison with experiment without introducing ad hoc parameters such as effective charges, or assumptions about the dielectric function. Finally, our work clearly shows that this atomistic first-principles description of electronic stopping is able to disentangle contributions due to tightly bound semicore electrons and geometric aspects of the stopping geometry (channeling versus off-channeling) in a wide range of projectile velocities.

  10. A review of the nutritional content and technological parameters of indigenous sources of meat in South America.

    PubMed

    Saadoun, A; Cabrera, M C

    2008-11-01

    Meat yields, proximate compositions, fatty acids compositions and technological parameters are reviewed for species which might be further developed as indigenous sources of meat in South America. These include the alpaca (Lama pacos), capybara (Hydrochoerus hydrochaeris), guanaco (Lama guanicoe), llama (Lama glama), nutria (Myocastor coypus), collared peccary (Tayassu tajacu), greater rhea (Rhea americana), lesser rhea (Rhea pennata), yacare (Caiman crocodilus yacare), tegu lizard (Tupinambis merianae) and green iguana (Iguana iguana).

  11. Source parameters and moment tensor of the ML 4.6 earthquake of November 19, 2011, southwest Sharm El-Sheikh, Egypt

    NASA Astrophysics Data System (ADS)

    Mohamed, Gad-Elkareem Abdrabou; Omar, Khaled

    2014-06-01

    The southern part of the Gulf of Suez is one of the most seismically active areas in Egypt. On Saturday, November 19, 2011 at 07:12:15 (GMT), an earthquake of ML 4.6 occurred southwest of Sharm El-Sheikh, Egypt. The quake was felt in Sharm El-Sheikh city; no casualties were reported. The instrumental epicenter is located at 27.69°N and 34.06°E. The seismic moment is 1.47E+22 dyne cm, corresponding to a moment magnitude Mw 4.1. Following a Brune model, the source radius is 101.36 m, with an average dislocation of 0.015 cm and a 0.06 MPa stress drop. The source mechanism from a fault-plane solution, computed with the code ISOLA, shows a normal fault; the actual fault plane has strike 358°, dip 34° and rake -60°. Twenty-seven small and micro earthquakes (1.5 ⩽ ML ⩽ 4.2) from the same region were also recorded by the Egyptian National Seismological Network (ENSN). We estimated the source parameters for these earthquakes using displacement spectra. The obtained source parameters include seismic moments of 2.77E+16-1.47E+22 dyne cm, stress drops of 0.0005-0.0617 MPa and relative displacements of 0.0001-0.0152 cm.
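    The standard relations behind source-parameter estimates of this kind are the Hanks-Kanamori moment magnitude and the circular (Brune/Eshelby) crack formulas; a minimal sketch follows. The rigidity value is an assumed typical crustal value, not taken from the abstract, and only the magnitude conversion is checked against the quoted numbers.

```python
import math

def moment_magnitude(M0_dyne_cm):
    # Hanks-Kanamori: Mw = (2/3)*log10(M0) - 10.7, with M0 in dyne*cm
    return (2.0 / 3.0) * math.log10(M0_dyne_cm) - 10.7

def stress_drop(M0_Nm, radius_m):
    # Eshelby circular crack: delta_sigma = 7*M0 / (16*a^3), in Pa
    return 7.0 * M0_Nm / (16.0 * radius_m ** 3)

def average_slip(M0_Nm, radius_m, mu=3.0e10):
    # mean dislocation over a circular fault: u = M0 / (mu*pi*a^2);
    # mu is an assumed crustal rigidity [Pa]
    return M0_Nm / (mu * math.pi * radius_m ** 2)

print(moment_magnitude(1.47e22))   # ~4.1, matching the Mw quoted above
```

    Given a seismic moment and a corner-frequency-derived source radius, these two crack formulas yield the stress drop and average dislocation reported for each event.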

  12. Is 50 Hz high enough ECG sampling frequency for accurate HRV analysis?

    PubMed

    Mahdiani, Shadi; Jeyhani, Vala; Peltokangas, Mikko; Vehkaoja, Antti

    2015-01-01

    With the worldwide growth of mobile wireless technologies, healthcare services can be provided at anytime and anywhere. Usage of wearable wireless physiological monitoring systems has increased extensively during the last decade. These mobile devices can continuously measure, e.g., the heart activity and wirelessly transfer the data to the mobile phone of the patient. One of the significant restrictions for these devices is energy usage, which calls for a low sampling rate. This article investigates the lowest adequate sampling frequency of the ECG signal for achieving sufficiently accurate time-domain heart rate variability (HRV) parameters. For this purpose, ECG signals originally measured at a high 5 kHz sampling rate were down-sampled to simulate measurement at lower sampling rates. Down-sampling loses information and decreases temporal accuracy, which was then restored by interpolating the signals back to their original sampling rate. The HRV parameters obtained from the ECG signals with lower sampling rates were compared. The results show that even when the sampling rate of the ECG signal is as low as 50 Hz, the HRV parameters remain almost accurate, with a reasonable error.
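    The effect being measured can be sketched directly: quantizing R-peak times to a 50 Hz sample grid perturbs the RR intervals by at most one sample period, which inflates a time-domain HRV parameter such as SDNN only slightly. The synthetic RR intervals below are assumptions for illustration, not the article's data, and no interpolation step is applied.

```python
import random
import statistics

random.seed(0)

# Synthetic R-peak times: mean RR 0.8 s (~75 bpm) with ~50 ms beat-to-beat variability
rr_true = [0.8 + random.gauss(0.0, 0.05) for _ in range(300)]
peaks, t = [], 0.0
for rr in rr_true:
    t += rr
    peaks.append(t)

fs = 50.0                                       # simulated low ECG sampling rate [Hz]
peaks_q = [round(p * fs) / fs for p in peaks]   # R-peaks snapped to the sample grid

rr = [b - a for a, b in zip(peaks, peaks[1:])]
rr_q = [b - a for a, b in zip(peaks_q, peaks_q[1:])]

sdnn, sdnn_q = statistics.stdev(rr), statistics.stdev(rr_q)
err = abs(sdnn_q - sdnn) / sdnn
print("SDNN %.1f ms vs %.1f ms at 50 Hz (rel. err %.1f%%)"
      % (1000 * sdnn, 1000 * sdnn_q, 100 * err))
```

    Because the ±10 ms timestamp errors add in quadrature to a physiological SDNN of tens of milliseconds, the relative error stays small, consistent with the article's conclusion for 50 Hz.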

  13. ADVANTG An Automated Variance Reduction Parameter Generator, Rev. 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mosher, Scott W.; Johnson, Seth R.; Bevill, Aaron M.

    2015-08-01

    The primary objective of ADVANTG is to reduce both the user effort and the computational time required to obtain accurate and precise tally estimates across a broad range of challenging transport applications. ADVANTG has been applied to simulations of real-world radiation shielding, detection, and neutron activation problems. Examples of shielding applications include material damage and dose rate analyses of the Oak Ridge National Laboratory (ORNL) Spallation Neutron Source and High Flux Isotope Reactor (Risner and Blakeman 2013) and the ITER Tokamak (Ibrahim et al. 2011). ADVANTG has been applied to a suite of radiation detection, safeguards, and special nuclear material movement detection test problems (Shaver et al. 2011). ADVANTG has also been used in the prediction of activation rates within light water reactor facilities (Pantelias and Mosher 2013). In these projects, ADVANTG was demonstrated to significantly increase the tally figure of merit (FOM) relative to an analog MCNP simulation. The ADVANTG-generated parameters were also shown to be more effective than manually generated geometry splitting parameters.

  14. Source parameters and rupture velocities of microearthquakes in western Nagano, Japan, determined using stopping phases

    USGS Publications Warehouse

    Imanishi, K.; Takeo, M.; Ellsworth, W.L.; Ito, H.; Matsuzawa, T.; Kuwahara, Y.; Iio, Y.; Horiuchi, S.; Ohmi, S.

    2004-01-01

    We use an inversion method based on stopping phases (Imanishi and Takeo, 2002) to estimate the source dimension, ellipticity, and rupture velocity of microearthquakes and investigate the scaling relationships between source parameters. We studied 25 earthquakes, ranging in size from M 1.3 to M 2.7, that occurred between May and August 1999 at the western Nagano prefecture, Japan, which is characterized by a high rate of shallow earthquakes. The data consist of seismograms recorded in an 800-m borehole and at 46 surface and 2 shallow borehole seismic stations whose spacing is a few kilometers. These data were recorded with a sampling frequency of 10 kHz. In particular, the 800-m-borehole data provide a wide frequency bandwidth with greatly reduced ground noise and coda wave amplitudes compared with surface recordings. High-frequency stopping phases appear in the body waves in Hilbert transform pairs and are readily detected on seismograms recorded in the 800-m borehole. After correcting both borehole and surface data for attenuation, we also measure the rise time, which is defined as the interval from the arrival time of the direct wave to the timing of the maximum amplitude in the displacement pulse. The differential time of the stopping phases and the rise times were used to obtain source parameters. We found that several microearthquakes propagated unilaterally, suggesting that all microearthquakes cannot be modeled as a simple circular crack model. Static stress drops range from approximately 0.1 to 2 MPa and do not vary with seismic moment. It seems that the breakdown in stress drop scaling seen in previous studies using surface data is simply an artifact of attenuation in the crust. The average value of rupture velocity does not depend on earthquake size and is similar to those reported for moderate and large earthquakes. 
It is likely that earthquakes are self-similar over a wide range of earthquake size and that the dynamics of small and large earthquakes are

  15. Simple method for quick estimation of aquifer hydrogeological parameters

    NASA Astrophysics Data System (ADS)

    Ma, C.; Li, Y. Y.

    2017-08-01

    The development of simple and accurate methods to determine aquifer hydrogeological parameters is important for groundwater resources assessment and management. Addressing the problem of estimating aquifer parameters from unsteady pumping-test data, a fitting function for the Theis well function was proposed using a fitting-optimization method, and a unitary linear regression equation was then established. The aquifer parameters can be obtained by solving for the coefficients of the regression equation. The application of the proposed method is illustrated using two published data sets. Error statistics and analysis of the pumping drawdown showed that the method yields quick and accurate estimates of the aquifer parameters, and that it can reliably identify the aquifer parameters from both long-distance observed drawdowns and early drawdowns. It is hoped that the proposed method will be helpful for practicing hydrogeologists and hydrologists.
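    The abstract does not reproduce the authors' fitting function, but the same idea, reducing Theis-curve parameter estimation to a single linear regression, is classically realized by the Cooper-Jacob straight-line method, sketched below on synthetic drawdowns. All aquifer and test values are hypothetical.

```python
import math

def theis_W(u, terms=30):
    # Theis well function W(u) from its convergent series (accurate for small u)
    s = -0.5772156649015329 - math.log(u) + u
    term = u
    for n in range(2, terms):
        term *= -u * (n - 1) / (n * n)
        s += term
    return s

T_true, S_true = 0.01, 1.0e-4   # transmissivity [m^2/s], storativity [-] (hypothetical)
Q, r = 0.01, 10.0               # pumping rate [m^3/s], observation distance [m]
times = [100.0 * 1.5 ** k for k in range(12)]
s_obs = [Q / (4.0 * math.pi * T_true) * theis_W(r * r * S_true / (4.0 * T_true * t))
         for t in times]

# Cooper-Jacob: for small u, s ~ a + b*ln(t), with b = Q/(4*pi*T)
# and a = b*ln(2.25*T/(r^2*S)) -- a unitary linear regression in ln(t).
x = [math.log(t) for t in times]
n = len(x)
mx, ms = sum(x) / n, sum(s_obs) / n
b = sum((xi - mx) * (si - ms) for xi, si in zip(x, s_obs)) / \
    sum((xi - mx) ** 2 for xi in x)
a = ms - b * mx
T_est = Q / (4.0 * math.pi * b)
S_est = 2.25 * T_est / (r * r * math.exp(a / b))
print(T_est, S_est)             # recover T_true and S_true closely
```

    Both parameters fall out of the two regression coefficients, which is what makes straight-line variants of the Theis analysis quick enough for routine use.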

  16. Transmission Electron Microscope Measures Lattice Parameters

    NASA Technical Reports Server (NTRS)

    Pike, William T.

    1996-01-01

    Convergent-beam microdiffraction (CBM) in a thermionic-emission transmission electron microscope (TEM) is a technique for measuring the lattice parameters of nanometer-sized specimens of crystalline materials. Lattice parameters determined by use of CBM are accurate to within a few parts in a thousand. The technique was developed especially for quantifying lattice parameters, and thus strains, in epitaxial mismatched-crystal-lattice multilayer structures in multiple-quantum-well and other advanced semiconductor electronic devices. The ability to determine strains in individual layers contributes to understanding of the novel electronic behaviors of these devices.

  17. HST Imaging of the Eye of Horus, a Double Source Plane Gravitational Lens

    NASA Astrophysics Data System (ADS)

    Wong, Kenneth

    2017-08-01

    Double source plane (DSP) gravitational lenses are extremely rare alignments of a massive lens galaxy with two background sources at distinct redshifts. The presence of two source planes provides important constraints on cosmology and galaxy structure beyond that of typical lens systems by breaking degeneracies between parameters that vary with source redshift. While these systems are extremely valuable, only a handful are known. We have discovered the first DSP lens, the Eye of Horus, in the Hyper Suprime-Cam survey and have confirmed both source redshifts with follow-up spectroscopy, making this the only known DSP lens with both source redshifts measured. Furthermore, the brightest image of the most distant source (S2) is split into a pair of images by a mass component that is undetected in our ground-based data, suggesting the presence of a satellite or line-of-sight galaxy causing this splitting. In order to better understand this system and use it for cosmology and galaxy studies, we must construct an accurate lens model, accounting for the lensing effects of both the main lens galaxy and the intermediate source. Only with deep, high-resolution imaging from HST/ACS can we accurately model this system. Our proposed multiband imaging will clearly separate out the two sources by their distinct colors, allowing us to use their extended surface brightness distributions as constraints on our lens model. These data may also reveal the satellite galaxy responsible for the splitting of the brightest image of S2. With these observations, we will be able to take full advantage of the wealth of information provided by this system.

  18. SU-E-T-284: Revisiting Reference Dosimetry for the Model S700 Axxent 50 kVp Electronic Brachytherapy Source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hiatt, JR; Rivard, MJ

    2014-06-01

    Purpose: The model S700 Axxent electronic brachytherapy source by Xoft was characterized in 2006 by Rivard et al. The source design was modified in 2006 to include a plastic centering insert at the source tip to more accurately position the anode. The objectives of the current study were to establish an accurate Monte Carlo source model for simulation purposes, to dosimetrically characterize the new source and obtain its TG-43 brachytherapy dosimetry parameters, and to determine dose differences between the source with and without the centering insert. Methods: Design information from dissected sources and vendor-supplied CAD drawings were used to devise the source model for radiation transport simulations of dose distributions in a water phantom. Collision kerma was estimated as a function of radial distance, r, and polar angle, θ, for determination of reference TG-43 dosimetry parameters. Simulations were run for 10^10 histories, resulting in statistical uncertainties on the transverse plane of 0.03% at r=1 cm and 0.08% at r=10 cm. Results: The dose rate distribution on the transverse plane did not change beyond 2% between the 2006 model and the current study. While differences exceeding 15% were observed near the source distal tip, these diminished to within 2% for r>1.5 cm. Differences exceeding a factor of two were observed near θ=150° and in contact with the source, but diminished to within 20% at r=10 cm. Conclusions: Changes in source design influenced the overall dose rate and distribution by more than 2% over a third of the available solid angle external to the source. For clinical applications using balloons or applicators with tissue located within 5 cm from the source, dose differences exceeding 2% were observed only for θ>110°. This study carefully examined the current source geometry and presents a modern reference TG-43 dosimetry dataset for the model S700 source.

  19. Accurately Mapping M31's Microlensing Population

    NASA Astrophysics Data System (ADS)

    Crotts, Arlin

    2004-07-01

    We propose to augment an existing microlensing survey of M31 with source identifications provided by a modest amount of ACS {and WFPC2 parallel} observations to yield an accurate measurement of the masses responsible for microlensing in M31, and presumably much of its dark matter. The main benefit of these data is the determination of the physical {or "einstein"} timescale of each microlensing event, rather than an effective {"FWHM"} timescale, allowing masses to be determined more than twice as accurately as without HST data. The einstein timescale is the ratio of the lensing cross-sectional radius to the relative velocity. Velocities are known from kinematics, and the cross-section is directly proportional to the {unknown} lensing mass. We cannot easily measure these quantities without knowing the amplification, hence the baseline magnitude, which requires the resolution of HST to find the source star. This makes a crucial difference because M31 lens mass determinations can be more accurate than those towards the Magellanic Clouds through our Galaxy's halo {for the same number of microlensing events} due to the better constrained geometry in the M31 microlensing situation. Furthermore, our larger survey, just completed, should yield at least 100 M31 microlensing events, more than any Magellanic survey. A small amount of ACS+WFPC2 imaging will deliver the potential of this large database {about 350 nights}. For the whole survey {and a delta-function mass distribution} the mass error should approach only about 15%, or about 6% error in slope for a power-law distribution. These results will better allow us to pinpoint the lens halo fraction and the shape of the halo lens spatial distribution, and allow generalization/comparison of the nature of halo dark matter in spiral galaxies. 
In addition, we will be able to establish the baseline magnitude for about 50,000 variable stars, as well as measure an unprecedentedly detailed color-magnitude diagram and luminosity
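The mass-timescale relation the abstract relies on can be sketched numerically: the Einstein timescale is the Einstein-radius crossing time, and the radius scales with the square root of the lens mass. A minimal sketch; the lens mass, distances, and velocity below are illustrative values, not survey results.

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
M_sun = 1.989e30       # solar mass, kg
kpc = 3.086e19         # kiloparsec in metres

def einstein_timescale(m_lens_msun, d_l, d_s, v_rel):
    """Einstein-radius crossing time (seconds) for a point lens.

    m_lens_msun : lens mass in solar masses
    d_l, d_s    : observer-lens and observer-source distances (m)
    v_rel       : relative transverse velocity (m/s)
    """
    d_ls = d_s - d_l
    r_e = math.sqrt(4 * G * m_lens_msun * M_sun / c**2 * d_l * d_ls / d_s)
    return r_e / v_rel

# A 0.5 M_sun halo lens at 700 kpc lensing an M31 source at 770 kpc,
# with a 200 km/s relative velocity (all values illustrative):
t_e_days = einstein_timescale(0.5, 700 * kpc, 770 * kpc, 2.0e5) / 86400.0
```

Because t_E scales as the square root of the lens mass, measuring the physical timescale twice as accurately translates directly into the improved mass accuracy the abstract cites.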

  20. Minimal spanning tree algorithm for γ-ray source detection in sparse photon images: cluster parameters and selection strategies

    DOE PAGES

    Campana, R.; Bernieri, E.; Massaro, E.; ...

    2013-05-22

    We present that the minimal spanning tree (MST) algorithm is a graph-theoretical cluster-finding method. We previously applied it to γ-ray bidimensional images, showing that it is quite sensitive in finding faint sources. Possible sources are associated with the regions where the photon arrival directions clusterize. MST selects clusters starting from a particular “tree” connecting all the points of the image and performing a cut based on the angular distance between photons, keeping clusters with a number of events higher than a given threshold. In this paper, we show how a further filtering, based on some parameters linked to the cluster properties, can be applied to reduce spurious detections. We find that the most efficient parameter for this secondary selection is the magnitude M of a cluster, defined as the product of its number of events by its clustering degree. We test the sensitivity of the method by means of simulated and real Fermi-Large Area Telescope (LAT) fields. Our results show that √M is strongly correlated with other statistical significance parameters, derived from a wavelet-based algorithm and maximum likelihood (ML) analysis, and that it can be used as a good estimator of statistical significance of MST detections. Finally, we apply the method to a 2-year LAT image at energies higher than 3 GeV, and we show the presence of new clusters, likely associated with BL Lac objects.
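The pipeline described above — MST construction, an edge-length cut, an event-number threshold, then a magnitude M = n × g — can be sketched on synthetic data. The photon field, the cut at half the mean edge length, and the definition of the clustering degree g as the ratio of the mean MST edge length to a cluster's mean internal edge length are illustrative assumptions, not the paper's exact parameters.

```python
import math, random

random.seed(1)
# Toy photon field: uniform background plus one compact "source" cluster.
photons = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(200)]
photons += [(5 + random.gauss(0, 0.05), 5 + random.gauss(0, 0.05)) for _ in range(30)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Prim's algorithm: grow the minimal spanning tree over all photons.
n = len(photons)
best = {i: (dist(photons[0], photons[i]), 0) for i in range(1, n)}
edges = []
while best:
    j = min(best, key=lambda i: best[i][0])
    d, parent = best.pop(j)
    edges.append((d, j, parent))
    for i in best:
        d2 = dist(photons[j], photons[i])
        if d2 < best[i][0]:
            best[i] = (d2, j)

mean_edge = sum(e[0] for e in edges) / len(edges)

# Primary selection: remove edges longer than a fraction of the mean
# edge length; surviving connected groups are candidate clusters.
adj = {i: set() for i in range(n)}
for d, a, b in edges:
    if d <= 0.5 * mean_edge:
        adj[a].add(b); adj[b].add(a)

seen, clusters = set(), []
for s in range(n):
    if s in seen:
        continue
    stack, comp = [s], []
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v); comp.append(v)
        stack.extend(adj[v])
    if len(comp) >= 10:  # event-number threshold
        internal = [dist(photons[a], photons[b])
                    for a in comp for b in adj[a] if a < b]
        g = mean_edge / (sum(internal) / len(internal))   # clustering degree
        clusters.append((len(comp), g, len(comp) * g))    # magnitude M = n * g
```

The compact cluster survives the cut with a large M, while the diffuse background fragments into sub-threshold pieces — the behavior the secondary selection exploits.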

  1. Multi-level emulation of a volcanic ash transport and dispersion model to quantify sensitivity to uncertain parameters

    NASA Astrophysics Data System (ADS)

    Harvey, Natalie J.; Huntley, Nathan; Dacre, Helen F.; Goldstein, Michael; Thomson, David; Webster, Helen

    2018-01-01

    Following the disruption to European airspace caused by the eruption of Eyjafjallajökull in 2010, there has been a move towards producing quantitative predictions of volcanic ash concentration using volcanic ash transport and dispersion simulators. However, there is no formal framework for determining the uncertainties of these predictions, and performing many simulations using these complex models is computationally expensive. In this paper a Bayesian linear emulation approach is applied to the Numerical Atmospheric-dispersion Modelling Environment (NAME) to better understand the influence of source and internal model parameters on the simulator output. Emulation is a statistical method for predicting the output of a computer simulator at new parameter choices without actually running the simulator. A multi-level emulation approach is applied using two configurations of NAME with different numbers of model particles. Information from many evaluations of the computationally faster configuration is combined with results from relatively few evaluations of the slower, more accurate, configuration. This approach is effective when it is not possible to run the accurate simulator many times and when there is little prior knowledge about the influence of parameters. The approach is applied to the mean ash column loading in 75 geographical regions on 14 May 2010. Through this analysis it has been found that the parameters that contribute the most to the output uncertainty are the initial plume rise height, mass eruption rate, free tropospheric turbulence levels, and the precipitation threshold for wet deposition. This information can be used to inform future model development, observational campaigns, and routine monitoring. The analysis presented here suggests the need for further observational and theoretical research into the parameterisation of atmospheric turbulence. 
Furthermore, it can also be used to inform the most important parameter perturbations for a small operational
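The multi-level idea above can be sketched in a few lines: build a cheap emulator from many runs of the fast, biased simulator, then learn a correction from a handful of runs of the accurate one. The two stand-in "simulators", their linear forms, and all numbers are invented for illustration; they are not the NAME configurations.

```python
import random

random.seed(0)
# Stand-ins for the two simulator configurations (purely illustrative):
def coarse_sim(h):      # fast, biased (few model particles), noisy
    return 1.8 * h + 0.3 + random.gauss(0, 0.05)

def fine_sim(h):        # slow, accurate (many model particles)
    return 2.0 * h + 0.1

def linfit(x, y):
    # Ordinary least-squares line fit, returning (slope, intercept).
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return b, my - b * mx

# Many cheap runs build the first-level emulator...
xs = [i / 50 for i in range(51)]
b1, a1 = linfit(xs, [coarse_sim(x) for x in xs])

# ...and only three expensive runs train a linear correction.
few = [0.1, 0.5, 0.9]
b2, a2 = linfit(few, [fine_sim(h) - (a1 + b1 * h) for h in few])

def emulate(h):
    # Multi-level prediction: coarse emulator plus learned correction.
    return (a1 + b1 * h) + (a2 + b2 * h)

# Prediction at a new parameter value, without running the fine simulator:
pred = emulate(0.7)
```

Because both stand-ins are linear here, the correction is exact; in practice the emulator also carries an uncertainty estimate, which is what makes the approach Bayesian.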

  2. On concentrated solute sources in faulted aquifers

    NASA Astrophysics Data System (ADS)

    Robinson, N. I.; Werner, A. D.

    2017-06-01

    Finite aperture faults and fractures within aquifers (collectively called 'faults' hereafter) theoretically enable flowing water to move through them but with refractive displacement, both on entry and exit. When a 2D or 3D point source of solute concentration is located upstream of the fault, the plume emanating from the source relative to one in a fault-free aquifer is affected by the fault, both before it and after it. Previous attempts to analyze this situation using numerical methods faced challenges in overcoming computational constraints that accompany requisite fine mesh resolutions. To address these, an analytical solution of this problem is developed and interrogated using statistical evaluation of solute distributions. The method of solution is based on novel spatial integral representations of the source with axes rotated from the direction of uniform water flow and aligning with fault faces and normals. Numerical exemplification is given for the case of a 2D steady state source, using various parameter combinations. Statistical attributes of solute plumes show the relative impact of parameters, the most important being fault rotation, aperture, and conductivity ratio. New general observations of fault-affected solute plumes are offered, including: (a) the plume's mode (i.e. peak concentration) on the downstream face of the fault is less displaced than the refracted groundwater flowline, but at some distance downstream of the fault, these realign; (b) porosities have no influence in steady state calculations; (c) previous numerical modeling results of barrier faults show significant boundary effects. The current solution adds to available benchmark problems involving fractures, faults and layered aquifers, in which grid resolution effects are often barriers to accurate simulation.

  3. Fast and Accurate Detection of Spread Source in Large Complex Networks

    DTIC Science & Technology

    the patient one in epidemics, or the source of rumor spreading in a social network. Pinto, Thiran and Vetterli introduced an algorithm (PTVA) to solve the...important case of this problem in which a limited set of nodes act as observers and report the times at which the spread reached them. PTVA uses all

  4. Beyond seismic interferometry: imaging the earth's interior with virtual sources and receivers inside the earth

    NASA Astrophysics Data System (ADS)

    Wapenaar, C. P. A.; Van der Neut, J.; Thorbecke, J.; Broggini, F.; Slob, E. C.; Snieder, R.

    2015-12-01

    Imagine one could place seismic sources and receivers at any desired position inside the earth. Since the receivers would record the full wave field (direct waves, up- and downward reflections, multiples, etc.), this would give a wealth of information about the local structures, material properties and processes in the earth's interior. Although in reality one cannot place sources and receivers anywhere inside the earth, it appears to be possible to create virtual sources and receivers at any desired position, which accurately mimic the desired situation. The underlying method involves some major steps beyond standard seismic interferometry. With seismic interferometry, virtual sources can be created at the positions of physical receivers, assuming these receivers are illuminated isotropically. Our proposed method does not need physical receivers at the positions of the virtual sources; moreover, it does not require isotropic illumination. To create virtual sources and receivers anywhere inside the earth, it suffices to record the reflection response with physical sources and receivers at the earth's surface. We do not need detailed information about the medium parameters; it suffices to have an estimate of the direct waves between the virtual-source positions and the acquisition surface. With these prerequisites, our method can create virtual sources and receivers, anywhere inside the earth, which record the full wave field. The up- and downward reflections, multiples, etc. in the virtual responses are extracted directly from the reflection response at the surface. The retrieved virtual responses form an ideal starting point for accurate seismic imaging, characterization and monitoring.
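The standard-interferometry baseline that the method goes beyond can be sketched in a few lines: cross-correlating two receivers' recordings of a common source retrieves the inter-receiver travel time, as if one receiver had become a virtual source. The traces, delays, and three-sample wavelet below are made up for illustration.

```python
# Toy sketch of the seismic-interferometry step: two receivers record
# the same impulsive source with different travel times; their
# cross-correlation peaks at the differential (virtual-source) time.
n = 400

def trace(delay):
    # Impulse arriving at sample `delay`, with a small trailing wavelet.
    t = [0.0] * n
    for k, w in enumerate((1.0, 0.6, 0.2)):
        if delay + k < n:
            t[delay + k] = w
    return t

rec_a = trace(80)    # source-to-receiver-A travel time: 80 samples
rec_b = trace(130)   # source-to-receiver-B travel time: 130 samples

def xcorr_lag(x, y, max_lag=200):
    # Lag that maximizes the cross-correlation of y against x.
    best_lag, best = 0, -1.0
    for lag in range(-max_lag, max_lag + 1):
        s = sum(y[i] * x[i - lag] for i in range(n) if 0 <= i - lag < n)
        if s > best:
            best, best_lag = s, lag
    return best_lag

# The virtual-source response peaks at the differential travel time.
virtual_tt = xcorr_lag(rec_a, rec_b)
```

The correlation recovers 130 − 80 = 50 samples; the paper's contribution is to go beyond this, creating virtual sources where no physical receiver exists and without isotropic illumination.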

  5. Constitutive parameter measurements of lossy materials

    NASA Technical Reports Server (NTRS)

    Dominek, A.; Park, A.

    1989-01-01

    The electrical constitutive parameters of lossy materials are considered. A discussion of the NRL arch for lossy coatings is presented, involving analytical analysis of the reflected field using the geometrical theory of diffraction (GTD) and physical optics (PO). The actual values of these parameters can be obtained through a traditional transmission technique, which is examined from an error-analysis standpoint. Alternate sample geometries are suggested for this technique to reduce sample tolerance requirements for accurate parameter determination. The performance of one alternate geometry is given.

  6. Real-time realizations of the Bayesian Infrasonic Source Localization Method

    NASA Astrophysics Data System (ADS)

    Pinsky, V.; Arrowsmith, S.; Hofstetter, A.; Nippress, A.

    2015-12-01

    The Bayesian Infrasonic Source Localization method (BISL), introduced by Mordak et al. (2010) and upgraded by Marcillo et al. (2014), is designed for accurate estimation of atmospheric event origins at local, regional and global scales by seismic and infrasonic networks and arrays. BISL is based on probabilistic models of the source-station infrasonic signal propagation time, picking time and azimuth estimate, merged with prior knowledge about the celerity distribution. It requires, at each hypothetical source location, integration of the product of the corresponding source-station likelihood functions multiplied by a prior probability density function of celerity over the multivariate parameter space. The present BISL realization is a generally time-consuming procedure based on numerical integration. The computational scheme proposed here simplifies the target function so that the integrals are taken exactly and are represented via standard functions. This makes the procedure much faster and realizable in real time without practical loss of accuracy. The procedure, executed as PYTHON-FORTRAN code, demonstrates high performance on a set of model and real data.
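The grid-based Bayesian localization idea can be sketched with strong simplifying assumptions: known origin time, a single fixed celerity in place of a celerity prior, and Gaussian arrival-time errors. The station layout and all numbers below are invented; the real BISL integrates over celerity at every candidate location.

```python
import math

# Minimal grid-search sketch of Bayesian infrasonic localization.
stations = [(0.0, 50.0), (60.0, 0.0), (-40.0, -30.0)]     # station x, y in km
true_src, celerity, sigma = (10.0, 5.0), 0.30, 2.0         # km, km/s, s

def travel_time(src, sta):
    return math.dist(src, sta) / celerity

arrivals = [travel_time(true_src, s) for s in stations]    # noiseless picks

# Evaluate the (log) posterior on a 2 km candidate grid and keep the
# maximum a posteriori location (flat prior, so likelihood only).
best, best_logp = None, -math.inf
for ix in range(-50, 51):
    for iy in range(-50, 51):
        cand = (2.0 * ix, 2.0 * iy)
        logp = sum(-(a - travel_time(cand, s)) ** 2 / (2 * sigma ** 2)
                   for a, s in zip(arrivals, stations))
        if logp > best_logp:
            best, best_logp = cand, logp
```

The paper's speed-up replaces exactly this kind of per-node numerical work with closed-form expressions, which is why it scales to real-time use.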

  7. Decision & Management Tools for DNAPL Sites: Optimization of Chlorinated Solvent Source and Plume Remediation Considering Uncertainty

    DTIC Science & Technology

    2010-09-01

    differentiated between source codes and input/output files. The text makes references to a REMChlor-GoldSim model. The text also refers to the REMChlor...To the extent possible, the instructions should be accurate and precise. The documentation should differentiate between describing what is actually...Windows XP operating system. Model Input Parameters: The input parameters were identical to those utilized and reported by CDM (see Table I from

  8. OpenCFU, a new free and open-source software to count cell colonies and other circular objects.

    PubMed

    Geissmann, Quentin

    2013-01-01

    Counting circular objects such as cell colonies is an important source of information for biologists. Although this task is often time-consuming and subjective, it is still predominantly performed manually. The aim of the present work is to provide a new tool to enumerate circular objects from digital pictures and video streams. Here, I demonstrate that the created program, OpenCFU, is very robust, accurate and fast. In addition, it provides control over the processing parameters and is implemented in an intuitive and modern interface. OpenCFU is a cross-platform and open-source software freely available at http://opencfu.sourceforge.net.

  9. EEG source localization: Sensor density and head surface coverage.

    PubMed

    Song, Jasmine; Davey, Colin; Poulsen, Catherine; Luu, Phan; Turovets, Sergei; Anderson, Erik; Li, Kai; Tucker, Don

    2015-12-30

    The accuracy of EEG source localization depends on a sufficient sampling of the surface potential field, an accurate conducting volume estimation (head model), and a suitable and well-understood inverse technique. The goal of the present study is to examine the effect of sampling density and coverage on the ability to accurately localize sources, using common linear inverse weight techniques, at different depths. Several inverse methods are examined, using popular head conductivity values. Simulation studies were employed to examine the effect of spatial sampling of the potential field at the head surface, in terms of sensor density and coverage of the inferior and superior head regions. In addition, the effects of sensor density and coverage are investigated in the source localization of epileptiform EEG. Greater sensor density improves source localization accuracy. Moreover, across all sampling densities and inverse methods, adding samples on the inferior surface improves the accuracy of source estimates at all depths. More accurate source localization of EEG data can be achieved with high spatial sampling of the head surface electrodes. The most accurate source localization is obtained when the voltage surface is densely sampled over both the superior and inferior surfaces.
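One common linear inverse weight technique of the kind examined above is the regularized minimum-norm estimate, s = Gᵀ(GGᵀ + λI)⁻¹v. A minimal sketch; the two-sensor, three-source lead field and the regularization value are toy assumptions that keep the matrix inverse closed-form.

```python
# Minimum-norm sketch: estimate source amplitudes s from sensor
# voltages v given a lead-field matrix G (sensors x sources).
G = [[1.0, 0.5, 0.2],    # sensor 1 gains for the three sources
     [0.2, 0.5, 1.0]]    # sensor 2 gains
lam = 0.01               # regularization strength

def min_norm(v):
    # A = G G^T + lam*I is 2x2, so invert it in closed form.
    a = sum(g * g for g in G[0]) + lam
    b = sum(x * y for x, y in zip(G[0], G[1]))
    d = sum(g * g for g in G[1]) + lam
    det = a * d - b * b
    w = [(d * v[0] - b * v[1]) / det,
         (-b * v[0] + a * v[1]) / det]
    # s = G^T w
    return [G[0][j] * w[0] + G[1][j] * w[1] for j in range(3)]

# Voltages produced by unit activity at source 0 only:
v = [G[0][0], G[1][0]]
est = min_norm(v)
```

With only two sensors the estimate is smeared across sources, which is exactly why the study's question — how many sensors, and where — matters for localization accuracy.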

  10. Multimodal Spatial Calibration for Accurately Registering EEG Sensor Positions

    PubMed Central

    Chen, Shengyong; Xiao, Gang; Li, Xiaoli

    2014-01-01

    This paper proposes a fast and accurate calibration method to calibrate multiple multimodal sensors using a novel photogrammetry system for fast localization of EEG sensors. The EEG sensors are placed on the human head and multimodal sensors are installed around the head to simultaneously obtain all EEG sensor positions. A multiple-view calibration process is implemented to obtain the transformations between views. We first develop an efficient local repair algorithm to improve the depth map, and then a special calibration body is designed. Based on these, accurate and robust calibration results can be achieved. We evaluate the proposed method using the corners of a chessboard calibration plate. Experimental results demonstrate that the proposed method achieves good performance, which can be further applied to EEG source localization applications on the human brain. PMID:24803954

  11. Assessing the accuracy of subject-specific, muscle-model parameters determined by optimizing to match isometric strength.

    PubMed

    DeSmitt, Holly J; Domire, Zachary J

    2016-12-01

    Biomechanical models are sensitive to the choice of model parameters. Therefore, determination of accurate subject-specific model parameters is important. One approach to generate these parameters is to optimize the values such that the model output will match experimentally measured strength curves. This approach is attractive as it is inexpensive and should provide an excellent match to experimentally measured strength. However, given the problem of muscle redundancy, it is not clear that this approach generates accurate individual muscle forces. The purpose of this investigation is to evaluate this approach using simulated data to enable a direct comparison. It is hypothesized that the optimization approach will be able to recreate accurate muscle model parameters when information from measurable parameters is given. A model of isometric knee extension was developed to simulate a strength curve across a range of knee angles. In order to realistically recreate experimentally measured strength, random noise was added to the modeled strength. Parameters were solved for using a genetic search algorithm. When noise was added to the measurements, the strength curve was reasonably recreated. However, the individual muscle model parameters and force curves were far less accurate. Based upon this examination, it is clear that very different sets of model parameters can recreate similar strength curves. Therefore, experimental variation in strength measurements has a significant influence on the results. Given the difficulty in accurately recreating individual muscle parameters, it may be more appropriate to perform simulations with lumped actuators representing similar muscles.
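The approach under study — a genetic search for muscle parameters that reproduce a noisy isometric strength curve — can be sketched with a toy two-"muscle" torque model. The bell-curve torque shapes, noise level, and GA settings below are all illustrative assumptions, not the paper's knee-extension model.

```python
import math, random

random.seed(3)
# Toy strength model: total knee-extension torque at angle q is the sum
# of two scaled bell curves; (f1, f2) are the "muscle" strengths.
angles = [math.radians(a) for a in range(30, 121, 10)]

def torque(q, f1, f2):
    return (f1 * math.exp(-((q - 1.2) ** 2) / 0.5)
            + f2 * math.exp(-((q - 1.6) ** 2) / 0.5))

true = (900.0, 600.0)
measured = [torque(q, *true) + random.gauss(0, 5) for q in angles]

def fitness(p):
    # Negative RMS error against the measured strength curve.
    return -math.sqrt(sum((torque(q, *p) - m) ** 2
                          for q, m in zip(angles, measured)) / len(angles))

# Bare-bones genetic search over (f1, f2): elitist selection plus
# Gaussian mutation of randomly chosen parents.
pop = [(random.uniform(0, 2000), random.uniform(0, 2000)) for _ in range(60)]
for gen in range(120):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:20]
    pop = parents + [
        (max(0.0, p[0] + random.gauss(0, 30)), max(0.0, p[1] + random.gauss(0, 30)))
        for p in [random.choice(parents) for _ in range(40)]
    ]
best = max(pop, key=fitness)
```

Because the two torque curves overlap, quite different (f1, f2) pairs fit the noisy curve almost equally well — the redundancy problem the abstract identifies.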

  12. Influence of ion source configuration and its operation parameters on the target sputtering and implantation process.

    PubMed

    Shalnov, K V; Kukhta, V R; Uemura, K; Ito, Y

    2012-06-01

    In this work, an investigation of the features and operation regimes of a sputter-enhanced ion-plasma source is presented. The source is based on target sputtering with a dense plasma formed in crossed electric and magnetic fields. It allows operation with noble or reactive gases in low-pressure discharge regimes, and the resulting ion beam is a mixture of ions from the working gas and the sputtering target. Any conductive material, such as metals, alloys, or compounds, can be used as the sputtering target. The effectiveness of the target sputtering process with the plasma was investigated as a function of the gun geometry, plasma parameters, and the target bias voltage. With an applied accelerating voltage from 0 to 20 kV, the source can be operated in regimes of thin film deposition, ion-beam mixing, and ion implantation. Multi-component ion beam implantation was applied to α-Fe, which increased the surface hardness from 2 GPa in the initial condition up to 3.5 GPa in the case of combined N(2)-C implantation. The projected range of the implanted elements is up to 20 nm at an implantation energy of 20 keV, as obtained with XPS depth profiling.

  13. Spectral Indices of Faint Radio Sources

    NASA Astrophysics Data System (ADS)

    Gim, Hansung B.; Hales, Christopher A.; Momjian, Emmanuel; Yun, Min Su

    2015-01-01

    The significant improvement in bandwidth and the resultant sensitivity offered by the Karl G. Jansky Very Large Array (VLA) allows us to explore the faint radio source population. Through the study of the radio continuum we can explore the spectral indices of these radio sources. Robust radio spectral indices are needed for accurate k-corrections, for example in the study of the radio - far-infrared (FIR) correlation. We present an analysis of measuring spectral indices using two different approaches. In the first, we use the standard wideband imaging algorithm in the data reduction package CASA. In the second, we use a traditional approach of imaging narrower bandwidths to derive the spectral indices. For these, we simulated data to match the observing parameter space of the CHILES Con Pol survey (Hales et al. 2014). We investigate the accuracy and precision of spectral index measurements as a function of signal-to-noise, and explore the requirements to reliably probe possible evolution of the radio-FIR correlation in CHILES Con Pol.
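The quantities the survey measures follow directly from the power-law definition of the spectral index. A short sketch, assuming the S(ν) ∝ ν^α sign convention (the sign flips in the S ∝ ν^−α convention); the flux values are invented.

```python
import math

def spectral_index(s1, nu1, s2, nu2):
    # alpha from two narrow-band flux measurements: S(nu) = S0 * nu**alpha.
    return math.log(s2 / s1) / math.log(nu2 / nu1)

# A hypothetical source measured at 1.4 GHz and 3.0 GHz (fluxes in Jy):
alpha = spectral_index(1.00e-3, 1.4e9, 0.55e-3, 3.0e9)

def k_corr(alpha, z):
    # Flux k-correction factor for a power-law source at redshift z.
    return (1 + z) ** -(1 + alpha)
```

This is why robust spectral indices matter for the radio-FIR work: the k-correction depends directly on α, so index errors propagate into the corrected fluxes.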

  14. Accurate modeling of defects in graphene transport calculations

    NASA Astrophysics Data System (ADS)

    Linhart, Lukas; Burgdörfer, Joachim; Libisch, Florian

    2018-01-01

    We present an approach for embedding defect structures modeled by density functional theory into large-scale tight-binding simulations. We extract local tight-binding parameters for the vicinity of the defect site using Wannier functions. In the transition region between the bulk lattice and the defect the tight-binding parameters are continuously adjusted to approach the bulk limit far away from the defect. This embedding approach allows for an accurate high-level treatment of the defect orbitals using as many as ten nearest neighbors while keeping a small number of nearest neighbors in the bulk to render the overall computational cost reasonable. As an example of our approach, we consider an extended graphene lattice decorated with Stone-Wales defects, flower defects, double vacancies, or silicon substitutes. We predict distinct scattering patterns mirroring the defect symmetries and magnitude that should be experimentally accessible.

  15. THE HYPERFINE STRUCTURE OF THE ROTATIONAL SPECTRUM OF HDO AND ITS EXTENSION TO THE THz REGION: ACCURATE REST FREQUENCIES AND SPECTROSCOPIC PARAMETERS FOR ASTROPHYSICAL OBSERVATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cazzoli, Gabriele; Lattanzi, Valerio; Puzzarini, Cristina

    2015-06-10

    The rotational spectrum of the mono-deuterated isotopologue of water, HD^16O, has been investigated in the millimeter- and submillimeter-wave frequency regions, up to 1.6 THz. The Lamb-dip technique has been exploited to obtain sub-Doppler resolution and to resolve the hyperfine (hf) structure due to the deuterium and hydrogen nuclei, thus enabling the accurate determination of the corresponding hf parameters. Their experimental determination has been supported by high-level quantum-chemical calculations. The Lamb-dip measurements have been supplemented by Doppler-limited measurements (weak high-J and high-frequency transitions) in order to extend the predictive capability of the available spectroscopic constants. The possibility of resolving hf splittings in astronomical spectra has been discussed.

  16. GBIS (Geodetic Bayesian Inversion Software): Rapid Inversion of InSAR and GNSS Data to Estimate Surface Deformation Source Parameters and Uncertainties

    NASA Astrophysics Data System (ADS)

    Bagnardi, M.; Hooper, A. J.

    2017-12-01

    Inversions of geodetic observational data, such as Interferometric Synthetic Aperture Radar (InSAR) and Global Navigation Satellite System (GNSS) measurements, are often performed to obtain information about the source of surface displacements. Inverse problem theory has been applied to study magmatic processes, the earthquake cycle, and other phenomena that cause deformation of the Earth's interior and of its surface. Together with increasing improvements in data resolution, both spatial and temporal, new satellite missions (e.g., European Commission's Sentinel-1 satellites) are providing the unprecedented opportunity to access space-geodetic data within hours from their acquisition. To truly take advantage of these opportunities we must become able to interpret geodetic data in a rapid and robust manner. Here we present the open-source Geodetic Bayesian Inversion Software (GBIS; available for download at http://comet.nerc.ac.uk/gbis). GBIS is written in Matlab and offers a series of user-friendly and interactive pre- and post-processing tools. For example, an interactive function has been developed to estimate the characteristics of noise in InSAR data by calculating the experimental semi-variogram. The inversion software uses a Markov-chain Monte Carlo algorithm, incorporating the Metropolis-Hastings algorithm with adaptive step size, to efficiently sample the posterior probability distribution of the different source parameters. The probabilistic Bayesian approach allows the user to retrieve estimates of the optimal (best-fitting) deformation source parameters together with the associated uncertainties produced by errors in the data (and by scaling, errors in the model). 
The current version of GBIS (V1.0) includes fast analytical forward models for magmatic sources of different geometry (e.g., point source, finite spherical source, prolate spheroid source, penny-shaped sill-like source, and dipping-dike with uniform opening) and for dipping faults with uniform
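The sampler at the core of GBIS — Metropolis-Hastings with adaptive step size — can be sketched on a toy one-parameter problem: recovering the depth of a Mogi-type point source from noisy vertical displacements, whose radial decay goes as d/(d² + r²)^(3/2). The forward-model scaling, noise level, and adaptation rule below are illustrative, not GBIS's actual implementation.

```python
import math, random

random.seed(7)

def mogi_uz(d, r, scale=1.0e3):
    # Vertical surface displacement of a point pressure source at depth d,
    # observed at radial distance r; `scale` lumps the volume-change terms.
    return scale * d / (d * d + r * r) ** 1.5

radii = [float(r) for r in range(1, 21)]            # km from the source
true_d = 4.0
data = [mogi_uz(true_d, r) + random.gauss(0, 0.2) for r in radii]

def log_post(d):
    if not 0.5 < d < 20.0:                           # uniform prior bounds
        return -math.inf
    return sum(-(u - mogi_uz(d, r)) ** 2 / (2 * 0.2 ** 2)
               for u, r in zip(data, radii))

samples, step, d = [], 1.0, 10.0                     # start far from truth
lp, accepted = log_post(d), 0
for i in range(1, 5001):
    prop = d + random.gauss(0, step)                 # random-walk proposal
    lp_prop = log_post(prop)
    if math.log(random.random()) < lp_prop - lp:     # Metropolis acceptance
        d, lp = prop, lp_prop
        accepted += 1
    samples.append(d)
    if i % 100 == 0:                                 # adapt the step size
        step *= 1.1 if accepted / i > 0.3 else 0.9

post = samples[2500:]                                # discard burn-in
d_mean = sum(post) / len(post)
```

The retained samples approximate the posterior over depth, giving both a best-fitting value and its uncertainty — the same outputs GBIS produces for its multi-parameter deformation sources.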

  17. EVALUATING SOIL EROSION PARAMETER ESTIMATES FROM DIFFERENT DATA SOURCES

    EPA Science Inventory

    Topographic factors and soil loss estimates that were derived from three data sources (STATSGO, 30-m DEM, and 3-arc second DEM) were compared. Slope magnitudes derived from the three data sources were consistently different. Slopes from the DEMs tended to provide a flattened sur...

  18. Improved image quality in pinhole SPECT by accurate modeling of the point spread function in low magnification systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pino, Francisco; Roé, Nuria; Aguiar, Pablo, E-mail: pablo.aguiar.fernandez@sergas.es

    2015-02-15

    Purpose: Single photon emission computed tomography (SPECT) has become an important noninvasive imaging technique in small-animal research. Due to the high resolution required in small-animal SPECT systems, the spatially variant system response needs to be included in the reconstruction algorithm. Accurate modeling of the system response should result in a major improvement in the quality of reconstructed images. The aim of this study was to quantitatively assess the impact that an accurate modeling of spatially variant collimator/detector response has on image-quality parameters, using a low magnification SPECT system equipped with a pinhole collimator and a small gamma camera. Methods: Three methods were used to model the point spread function (PSF). For the first, only the geometrical pinhole aperture was included in the PSF. For the second, the septal penetration through the pinhole collimator was added. In the third method, the measured intrinsic detector response was incorporated. Tomographic spatial resolution was evaluated and contrast, recovery coefficients, contrast-to-noise ratio, and noise were quantified using a custom-built NEMA NU 4–2008 image-quality phantom. Results: A high correlation was found between the experimental data corresponding to intrinsic detector response and the fitted values obtained by means of an asymmetric Gaussian distribution. For all PSF models, resolution improved as the distance from the point source to the center of the field of view increased and when the acquisition radius diminished. An improvement of resolution was observed after a minimum of five iterations when the PSF modeling included more corrections. Contrast, recovery coefficients, and contrast-to-noise ratio were better for the same level of noise in the image when more accurate models were included. Ring-type artifacts were observed when the number of iterations exceeded 12. Conclusions: Accurate modeling of the PSF improves resolution, contrast, and

  19. Exploration of DGVM Parameter Solution Space Using Simulated Annealing: Implications for Forecast Uncertainties

    NASA Astrophysics Data System (ADS)

    Wells, J. R.; Kim, J. B.

    2011-12-01

    Parameters in dynamic global vegetation models (DGVMs) are thought to be weakly constrained and can be a significant source of errors and uncertainties. DGVMs use between 5 and 26 plant functional types (PFTs) to represent the average plant life form in each simulated plot, and each PFT typically has a dozen or more parameters that define the way it uses resources and responds to the simulated growing environment. Sensitivity analysis explores how varying parameters affects the output, but does not do a full exploration of the parameter solution space. The solution space for DGVM parameter values is thought to be complex and non-linear, and multiple sets of acceptable parameters may exist. In published studies, PFT parameters are estimated from published literature, and often a parameter value is estimated from a single published value. Further, the parameters are "tuned" using somewhat arbitrary, "trial-and-error" methods. BIOMAP is a new DGVM created by fusing the MAPSS biogeography model with Biome-BGC. It represents the vegetation of North America using 26 PFTs. We are using simulated annealing, a global search method, to systematically and objectively explore the solution space for the BIOMAP PFTs and the system parameters important for plant water use. We defined the boundaries of the solution space by obtaining maximum and minimum values from published literature, and where those were not available, using +/-20% of current values. We used stratified random sampling to select a set of grid cells representing the vegetation of the conterminous USA. The simulated annealing algorithm is applied to the parameters for spin-up and a transient run during the historical period 1961-1990. A set of parameter values is considered acceptable if the associated simulation run produces a modern potential vegetation distribution map that is as accurate as one produced by trial-and-error calibration. We expect to confirm that the solution space is non-linear and complex, and that

  20. Kalman filter data assimilation: targeting observations and parameter estimation.

    PubMed

    Bellsky, Thomas; Kostelich, Eric J; Mahalov, Alex

    2014-06-01

    This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation.
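    The benefit of targeting can be illustrated with a plain (non-ensemble) Kalman analysis step: observing the state component with the largest forecast variance reduces the total posterior variance the most. The covariance and noise values below are illustrative, not taken from the paper:

```python
import numpy as np

def kf_update(x, P, H, R, y):
    """Kalman filter analysis step for prior mean x and covariance P."""
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_a = x + K @ (y - H @ x)               # analysis mean
    P_a = (np.eye(len(x)) - K @ H) @ P      # analysis covariance
    return x_a, P_a

def posterior_trace(P, i, r=0.1):
    """Total analysis variance after observing only component i."""
    H = np.zeros((1, len(P)))
    H[0, i] = 1.0
    _, P_a = kf_update(np.zeros(len(P)), P, H, np.array([[r]]), np.zeros(1))
    return np.trace(P_a)

P = np.diag([4.0, 1.0, 0.25])               # forecast error covariance
traces = [posterior_trace(P, i) for i in range(3)]
# The smallest posterior trace comes from observing the largest-variance component.
```

    The LETKF targeting criterion in the paper generalizes this idea, using the largest ensemble variance as the proxy for forecast uncertainty.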

  1. Study on Material Parameters Identification of Brain Tissue Considering Uncertainty of Friction Coefficient

    NASA Astrophysics Data System (ADS)

    Guan, Fengjiao; Zhang, Guanjun; Liu, Jie; Wang, Shujing; Luo, Xu; Zhu, Feng

    2017-10-01

    Accurate material parameters are critical for constructing high-biofidelity finite element (FE) models. However, it is hard to obtain brain tissue parameters accurately because of the effects of irregular geometry and uncertain boundary conditions. Considering the complexity of material testing and the uncertainty of the friction coefficient, a computational inverse method for identifying the viscoelastic material parameters of brain tissue is presented based on the interval analysis method. First, intervals are used to quantify the friction coefficient in the boundary condition. Then the inverse problem of material parameter identification under an uncertain friction coefficient is transformed into two types of deterministic inverse problem. Finally, an intelligent optimization algorithm is used to solve the two deterministic inverse problems quickly and accurately, and the range of material parameters can be acquired without the need for a large number of samples. The efficiency and convergence of the method are demonstrated by identifying the material parameters of the thalamus. The proposed method provides a potentially effective tool for building high-biofidelity human finite element models in the study of traffic accident injury.
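    The endpoint idea can be sketched in one dimension: solving one deterministic inverse problem at each end of the friction-coefficient interval brackets the admissible parameter range. The forward model, observation, and interval below are entirely hypothetical stand-ins for the FE simulation:

```python
def invert(forward, d_obs, mu, k_lo=0.0, k_hi=10.0, tol=1e-9):
    """Bisection on a forward model assumed monotone in k: recover the
    parameter k that reproduces the observed response d_obs at fixed mu."""
    while k_hi - k_lo > tol:
        k_mid = 0.5 * (k_lo + k_hi)
        if forward(k_mid, mu) < d_obs:
            k_lo = k_mid
        else:
            k_hi = k_mid
    return 0.5 * (k_lo + k_hi)

# Hypothetical monotone response to a material parameter k and friction mu.
forward = lambda k, mu: k * (1.0 - 0.5 * mu)
d_obs = 1.8                                   # simulated measurement
# One deterministic inversion per interval endpoint of mu.
k_bounds = sorted(invert(forward, d_obs, mu) for mu in (0.1, 0.3))
# k_bounds now brackets the material parameter over the friction interval.
```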

  2. Source parameters of microearthquakes on an interplate asperity off Kamaishi, NE Japan over two earthquake cycles

    USGS Publications Warehouse

    Uchida, Naoki; Matsuzawa, Toru; Ellsworth, William L.; Imanishi, Kazutoshi; Shimamura, Kouhei; Hasegawa, Akira

    2012-01-01

    We have estimated the source parameters of interplate earthquakes in an earthquake cluster off Kamaishi, NE Japan over two cycles of M~ 4.9 repeating earthquakes. The M~ 4.9 earthquake sequence is composed of nine events, occurring since 1957, that have a strong periodicity (5.5 ± 0.7 yr) and constant size (M4.9 ± 0.2), probably due to stable sliding around the source area (asperity). Using P- and S-wave traveltime differentials estimated from waveform cross-spectra, three M~ 4.9 main shocks and 50 accompanying microearthquakes (M1.5–3.6) from 1995 to 2008 were precisely relocated. The source sizes, stress drops and slip amounts for earthquakes of M2.4 or larger were also estimated from corner frequencies and seismic moments using simultaneous inversion of stacked spectral ratios. Relocation using the double-difference method shows that the slip area of the 2008 M~ 4.9 main shock is co-located with those of the 1995 and 2001 M~ 4.9 main shocks. Four groups of microearthquake clusters are located in and around the main shock slip areas. Of these, two clusters are located at the deeper and shallower edges of the slip areas, and most of these microearthquakes occurred repeatedly in the interseismic period. Two other clusters, located near the centre of the main shock source areas, are not as active as the clusters near the edges, and their occurrence is limited to the latter half of the earthquake cycles of the M~ 4.9 main shocks. Similar spatial and temporal features of microearthquake occurrence were seen for two other cycles before the 1995 M5.0 and 1990 M5.0 main shocks, based on group identification by waveform similarities. Stress drops of microearthquakes are 3–11 MPa and are relatively constant within each group during the two earthquake cycles. The 2001 and 2008 M~ 4.9 earthquakes have larger stress drops of 41 and 27 MPa, respectively. These results show that the stress drop is probably determined by the fault properties and does not change
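    The step from corner frequency and seismic moment to source radius and stress drop is conventionally done with a circular-crack model; the sketch below uses the Brune (1970) relations as a standard example (the paper's exact spectral model is not specified here, and the shear-wave speed `beta` is an assumed value):

```python
import math

def brune_source_params(M0, fc, beta=3500.0, k=0.37):
    """Source radius [m] and stress drop [Pa] from seismic moment M0 [N m]
    and corner frequency fc [Hz], using the Brune circular-crack model:
        r = k * beta / fc, with k = 2.34 / (2*pi) ~ 0.37,
        stress_drop = 7 * M0 / (16 * r**3).
    """
    r = k * beta / fc
    stress_drop = 7.0 * M0 / (16.0 * r ** 3)
    return r, stress_drop
```

    For example, an Mw 4.9 event (M0 = 10**(1.5*4.9 + 9.1) N m) with a 1 Hz corner frequency yields a stress drop of a few MPa, the same order as the microearthquake values quoted above.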

  3. The fast and accurate 3D-face scanning technology based on laser triangle sensors

    NASA Astrophysics Data System (ADS)

    Wang, Jinjiang; Chang, Tianyu; Ge, Baozhen; Tian, Qingguo; Chen, Yang; Kong, Bin

    2013-08-01

    A laser triangle scanning method and the structure of a 3D-face measurement system are introduced. In the presented system, a line laser source was selected as the optical indicator so that one line is scanned at a time, and a CCD image sensor was used to capture the image of the laser line modulated by the human face. The system parameters were obtained by calibration: the lens parameters of the imaging part were calibrated with a machine vision method, and the triangle structure parameters were calibrated with finely arranged parallel wires. The CCD imaging part and the line laser indicator were mounted on a linear motor carriage that scans the laser line from the top of the head to the neck. Because the nose protrudes and the eyes are recessed, one CCD image sensor cannot capture the complete image of the laser line; in this system, two CCD image sensors were therefore placed symmetrically on either side of the laser indicator, so the structure in fact contains two laser triangulation units. Another novel design is that three laser indicators were arranged in order to reduce the scanning time, since it is difficult for a person to stay still for a long time. The 3D data were calculated after scanning, and further data processing included 3D coordinate refinement, mesh calculation, and surface display. Experiments show that the system has a simple structure, high scanning speed, and good accuracy. The scanning range covers the whole head of an adult, and the typical resolution is 0.5 mm.
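    The depth recovery in one triangulation unit reduces to intersecting the camera ray with the laser ray. A minimal 2-D sketch (the geometry parameters are illustrative, not the paper's calibrated values):

```python
import math

def triangulate_depth(baseline, theta, pixel_u, focal_px):
    """Depth z of the laser spot for one laser-triangle unit (2-D sketch).
    Camera at the origin looking along z; laser at x = baseline, tilted by
    theta toward the optical axis. Camera ray: x = z * pixel_u / focal_px.
    Laser ray: x = baseline - z * tan(theta). Intersecting the two gives z."""
    return baseline / (pixel_u / focal_px + math.tan(theta))
```

    A larger pixel offset `pixel_u` corresponds to a nearer surface point, which is how face relief modulates the imaged laser line.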

  4. A photon source model based on particle transport in a parameterized accelerator structure for Monte Carlo dose calculations.

    PubMed

    Ishizawa, Yoshiki; Dobashi, Suguru; Kadoya, Noriyuki; Ito, Kengo; Chiba, Takahito; Takayama, Yoshiki; Sato, Kiyokazu; Takeda, Ken

    2018-05-17

    An accurate source model of a medical linear accelerator is essential for Monte Carlo (MC) dose calculations. This study proposes an analytical photon source model based on particle transport in parameterized accelerator structures, aiming at a more realistic determination of linac photon spectra than existing approaches. We designed the primary and secondary photon sources based on the photons attenuated and scattered by a parameterized flattening filter. The primary photons were derived by attenuating bremsstrahlung photons according to their path length in the filter; the secondary photons, in turn, were derived from the decrement of the primary photons in the attenuation process. This design allows the two sources to share the free parameters describing the filter shape and relates them to each other through the photon interactions in the filter. We introduced two further parameters for the primary photon source to describe the particle fluence in penumbral regions. All the parameters are optimized against dose curves calculated in water with a pencil-beam-based algorithm. To verify the modeling accuracy, we compared the proposed model with the phase space data (PSD) of the Varian TrueBeam 6 and 15 MV accelerators in terms of beam characteristics and dose distributions. The EGS5 Monte Carlo code was used to calculate the dose distributions associated with the optimized model and the reference PSD in a homogeneous water phantom and a heterogeneous lung phantom. We calculated the percentage of points passing 1D and 2D gamma analysis with 1%/1 mm criteria for the dose curves and lateral dose distributions, respectively. The optimized model accurately reproduced the spectral curves of the reference PSD both on- and off-axis. The depth dose and lateral dose profiles of the optimized model also showed good agreement with those of the reference PSD. The passing rates of the 1D gamma analysis with 1%/1 mm criteria between the model and PSD were 100% for 4
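    The coupling between the two sources can be sketched with Beer-Lambert attenuation: the primary fluence is the bremsstrahlung fluence attenuated along the filter path, and the secondary source is fed by the removed fraction. The attenuation coefficient and path length below are placeholder values:

```python
import math

def split_fluence(phi0, mu, path_length):
    """Split an incident fluence phi0 into a primary (transmitted) part and
    the fraction removed in the filter, which feeds the secondary source.
    mu: linear attenuation coefficient [1/cm]; path_length: filter path [cm]."""
    primary = phi0 * math.exp(-mu * path_length)   # Beer-Lambert attenuation
    secondary_feed = phi0 - primary                # photons removed in the filter
    return primary, secondary_feed
```

    Because both quantities are functions of the same filter-shape parameters (via the path length), fitting the primary source constrains the secondary source as well.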

  5. Influence of conservative corrections on parameter estimation for extreme-mass-ratio inspirals

    NASA Astrophysics Data System (ADS)

    Huerta, E. A.; Gair, Jonathan R.

    2009-04-01

    We present an improved numerical kludge waveform model for circular, equatorial extreme-mass-ratio inspirals (EMRIs). The model is based on true Kerr geodesics, augmented by radiative self-force corrections derived from perturbative calculations, and in this paper for the first time we include conservative self-force corrections that we derive by comparison to post-Newtonian results. We present results of a Monte Carlo simulation of parameter estimation errors computed using the Fisher matrix and also assess the theoretical errors that would arise from omitting the conservative correction terms we include here. We present results for three different types of system, namely, the inspirals of black holes, neutron stars, or white dwarfs into a supermassive black hole (SMBH). The analysis shows that for a typical source (a 10M⊙ compact object captured by a 106M⊙ SMBH at a signal to noise ratio of 30) we expect to determine the two masses to within a fractional error of ˜10-4, measure the spin parameter q to ˜10-4.5, and determine the location of the source on the sky and the spin orientation to within 10-3 steradians. We show that, for this kludge model, omitting the conservative corrections leads to a small error over much of the parameter space, i.e., the ratio R of the theoretical model error to the Fisher matrix error is R<1 for all ten parameters in the model. For the few systems with larger errors typically R<3 and hence the conservative corrections can be marginally ignored. In addition, we use our model and first-order self-force results for Schwarzschild black holes to estimate the error that arises from omitting the second-order radiative piece of the self-force. This indicates that it may not be necessary to go beyond first order to recover accurate parameter estimates.
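    The Fisher-matrix error estimates quoted above come from the standard identification of the inverse Fisher matrix with the parameter covariance. A minimal sketch (the example matrix is illustrative, not an EMRI Fisher matrix):

```python
import numpy as np

def fisher_errors(F):
    """1-sigma parameter uncertainties from a Fisher information matrix F:
    the covariance is F^{-1}, and the errors are the square roots of its
    diagonal (marginalized over all other parameters)."""
    cov = np.linalg.inv(F)
    return np.sqrt(np.diag(cov))
```

    The theoretical-error ratio R in the abstract then compares a systematic model bias against these statistical widths, parameter by parameter.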

  6. Accurate sub-millimetre rest frequencies for HOCO+ and DOCO+ ions

    NASA Astrophysics Data System (ADS)

    Bizzocchi, L.; Lattanzi, V.; Laas, J.; Spezzano, S.; Giuliano, B. M.; Prudenzano, D.; Endres, C.; Sipilä, O.; Caselli, P.

    2017-06-01

    Context. HOCO+ is a polar molecule that represents a useful proxy for its parent molecule CO2, which is not directly observable in the cold interstellar medium. This cation has been detected towards several lines of sight, including massive star forming regions, protostars, and cold cores. Despite the obvious astrochemical relevance, protonated CO2 and its deuterated variant, DOCO+, still lack an accurate spectroscopic characterisation. Aims: The aim of this work is to extend the study of the ground-state pure rotational spectra of HOCO+ and DOCO+ well into the sub-millimetre region. Methods: Ground-state transitions have been recorded in the laboratory using a frequency-modulation absorption spectrometer equipped with a free-space glow-discharge cell. The ions were produced in a low-density, magnetically confined plasma generated in a suitable gas mixture. The ground-state spectra of HOCO+ and DOCO+ have been investigated in the 213-967 GHz frequency range; 94 new rotational transitions have been detected. Additionally, 46 line positions taken from the literature have been accurately remeasured. Results: The newly measured lines have significantly enlarged the available data sets for HOCO+ and DOCO+, thus enabling the determination of highly accurate rotational and centrifugal distortion parameters. Our analysis shows that all HOCO+ lines with Ka ≥ 3 are perturbed by a ro-vibrational interaction that couples the ground state with the v5 = 1 vibrationally excited state. This resonance has been explicitly treated in the analysis in order to obtain molecular constants with clear physical meaning. Conclusions: The improved sets of spectroscopic parameters provide enhanced lists of very accurate sub-millimetre rest frequencies of HOCO+ and DOCO+ for astrophysical applications. These new data challenge a recent tentative identification of DOCO+ towards a pre-stellar core. Supplementary tables are only available at the CDS via anonymous ftp to http
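    Rest frequencies are predicted from fitted rotational and centrifugal distortion constants. As a sketch only, the closed-form linear-rotor expression is shown below (HOCO+ is actually a slightly asymmetric top treated with a full effective Hamiltonian, and the constants here are hypothetical, not the fitted values):

```python
def rest_frequency(J, B, D):
    """Linear-rotor approximation for the J+1 <- J rotational transition:
        nu = 2 B (J + 1) - 4 D (J + 1)**3
    with B the rotational constant and D the quartic centrifugal distortion
    constant (same frequency units as B)."""
    return 2.0 * B * (J + 1) - 4.0 * D * (J + 1) ** 3
```

    Extending the measurements to high J in the sub-millimetre range is what pins down D (and higher-order terms) well enough for accurate extrapolated rest frequencies.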

  7. SU-G-201-10: Experimental Determination of Modified TG-43 Dosimetry Parameters for the Xoft Axxent® Electronic Brachytherapy Source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simiele, S; Palmer, B; DeWerd, L

    Purpose: The establishment of an air kerma rate standard at NIST for the Xoft Axxent® electronic brachytherapy source (Axxent® source) motivated the establishment of a modified TG-43 dosimetry formalism. This work measures the modified dosimetry parameters for the Axxent® source in the absence of a treatment applicator for implementation in Xoft’s treatment planning system. Methods: The dose-rate conversion coefficient (DRCC), radial dose function (RDF) values, and polar anisotropy (PA) were measured using TLD-100 microcubes with NIST-calibrated sources. The DRCC and RDF measurements were performed in liquid water using an annulus of Virtual Water™ designed to align the TLDs at the height of the anode at fixed radii from the source. The PA was measured at several distances from the source in a PMMA phantom. MCNP-determined absorbed dose energy dependence correction factors were used to convert from dose to TLD to dose to liquid water for the DRCC, RDF, and PA measurements. The intrinsic energy dependence correction factor from the work of Pike was used. The AKR was determined using a NIST-calibrated HDR1000 Plus well-type ionization chamber. Results: The DRCC was determined to be 8.6 (cGy/hr)/(µGy/min). The radial dose values were determined to be 1.00 (1 cm), 0.60 (2 cm), 0.42 (3 cm), and 0.32 (4 cm), with agreement ranging from 5.7% to 10.9% with the work of Hiatt et al. 2015 and from 2.8% to 6.8% with internal MCNP simulations. Conclusion: This work presents a complete dataset of modified TG-43 dosimetry parameters for the Axxent® source in the absence of an applicator. Prior to this study a DRCC had not been measured for the Axxent® source. This data will be used for calculating dose distributions for patients receiving treatment with the Axxent® source in Xoft’s breast balloon and vaginal applicators, and for intraoperative radiotherapy. Sources and partial funding for this work were provided
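    In a treatment planning system, tabulated radial dose values like those above are evaluated at arbitrary radii by interpolation. A minimal sketch using the measured values from the abstract (linear interpolation is an assumption here; TPS implementations may fit a polynomial instead):

```python
import numpy as np

# Measured radial dose function values from the abstract (r in cm).
r_cm = np.array([1.0, 2.0, 3.0, 4.0])
g_r  = np.array([1.00, 0.60, 0.42, 0.32])

def radial_dose(r):
    """Linearly interpolate the measured radial dose function at radius r [cm]."""
    return np.interp(r, r_cm, g_r)
```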

  8. Robust and accurate vectorization of line drawings.

    PubMed

    Hilaire, Xavier; Tombre, Karl

    2006-06-01

    This paper presents a method for vectorizing the graphical parts of paper-based line drawings. The method consists of separating the input binary image into layers of homogeneous thickness, skeletonizing each layer, segmenting the skeleton by a method based on random sampling, and simplifying the result. The segmentation method is robust with a best bound of 50 percent noise reached for indefinitely long primitives. Accurate estimation of the recognized vector's parameters is enabled by explicitly computing their feasibility domains. Theoretical performance analysis and expression of the complexity of the segmentation method are derived. Experimental results and comparisons with other vectorization systems are also provided.
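    The random-sampling segmentation step is in the spirit of RANSAC line fitting: repeatedly sample a minimal point set, count inliers, and keep the best hypothesis. A minimal sketch (not the paper's exact estimator, which also computes feasibility domains for the fitted parameters):

```python
import random

def ransac_line(points, n_iter=500, tol=1.0):
    """Find the largest set of points within `tol` of a line through two
    randomly sampled points. Returns the inlier list of the best hypothesis."""
    best_inliers = []
    for _ in range(n_iter):
        (x1, y1), (x2, y2) = random.sample(points, 2)
        # Implicit line a*x + b*y + c = 0 through the two sampled points.
        a, b, c = y2 - y1, x1 - x2, x2 * y1 - x1 * y2
        norm = (a * a + b * b) ** 0.5
        if norm == 0.0:
            continue  # degenerate sample (coincident points)
        inliers = [p for p in points
                   if abs(a * p[0] + b * p[1] + c) / norm <= tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers
```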

  9. Modeling of single event transients with dual double-exponential current sources: Implications for logic cell characterization

    DOE PAGES

    Black, Dolores Archuleta; Robinson, William H.; Wilcox, Ian Zachary; ...

    2015-08-07

    Single event effects (SEE) are a reliability concern for modern microelectronics. Bit corruptions can be caused by single event upsets (SEUs) in the storage cells or by sampling single event transients (SETs) from a logic path. Accurate prediction of soft error susceptibility from SETs likewise requires good models to convert collected charge into compact descriptions of the current injection process. This paper describes a simple, yet effective, method to model the current waveform resulting from a charge collection event for SET circuit simulations. The model uses two double-exponential current sources in parallel, and the results illustrate why a conventional model based on one double-exponential source can be incomplete. Furthermore, a small set of logic cells with varying input conditions, drive strength, and output loading are simulated to extract the parameters for the dual double-exponential current sources. As a result, the parameters are based upon both the node capacitance and the restoring current (i.e., drive strength) of the logic cell.
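    The dual-source waveform is simply the sum of two standard double-exponential pulses, typically a fast prompt-collection component plus a slower tail. The amplitudes and time constants below are illustrative, not extracted values from the paper:

```python
import math

def double_exp(t, I0, tau_r, tau_f):
    """Single double-exponential current pulse (zero for t < 0):
    I(t) = I0 * (exp(-t/tau_f) - exp(-t/tau_r)), with tau_r < tau_f."""
    if t < 0.0:
        return 0.0
    return I0 * (math.exp(-t / tau_f) - math.exp(-t / tau_r))

def set_current(t, fast=(1.0e-3, 5e-12, 50e-12), slow=(0.2e-3, 100e-12, 500e-12)):
    """Dual double-exponential SET model: two pulses summed in parallel.
    Each parameter tuple is (I0 [A], tau_rise [s], tau_fall [s]); the
    values here are placeholders for illustration only."""
    return double_exp(t, *fast) + double_exp(t, *slow)
```

    The point of the second source is visible in the tail: after the fast pulse has decayed, the slow component still injects current, which a single double-exponential cannot reproduce.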

  10. The Slip Behavior and Source Parameters for Spontaneous Slip Events on Rough Faults Subjected to Slow Tectonic Loading

    NASA Astrophysics Data System (ADS)

    Tal, Yuval; Hager, Bradford H.

    2018-02-01

    We study the response to slow tectonic loading of rough faults governed by velocity weakening rate and state friction, using a 2-D plane strain model. Our numerical approach accounts for all stages in the seismic cycle, and in each simulation we model a sequence of two earthquakes or more. We focus on the global behavior of the faults and find that as the roughness amplitude, br, increases and the minimum wavelength of roughness decreases, there is a transition from seismic slip to aseismic slip, in which the load on the fault is released by more slip events but with lower slip rate, lower seismic moment per unit length, M0,1d, and lower average static stress drop on the fault, Δτt. Even larger decreases with roughness are observed when these source parameters are estimated only for the dynamic stage of the rupture. For br ≤ 0.002, the source parameters M0,1d and Δτt decrease mutually and the relationship between Δτt and the average fault strain is similar to that of a smooth fault. For faults with larger values of br that are completely ruptured during the slip events, the average fault strain generally decreases more rapidly with roughness than Δτt.

  11. Study of the operating parameters of a helicon plasma discharge source using PIC-MCC simulation technique

    NASA Astrophysics Data System (ADS)

    Jaafarian, Rokhsare; Ganjovi, Alireza; Etaati, Gholamreza

    2018-01-01

    In this work, a Particle in Cell-Monte Carlo Collision simulation technique is used to study the operating parameters of a typical helicon plasma source. These parameters mainly include the gas pressure, the externally applied static magnetic field, the length and radius of the helicon antenna, and the frequency and voltage amplitude of the RF power applied to the helicon antenna. It is shown that, while the strong radial gradient of the formed plasma density in the proximity of the plasma surface is substantially proportional to the energy absorption from the existing Trivelpiece-Gould (TG) modes, the high electron temperature observed in the helicon source at lower static magnetic fields is significant evidence for energy absorption from the helicon modes. Furthermore, it is found that, at higher gas pressures, both the plasma electron density and temperature are reduced. It is also shown that, at higher static magnetic fields, owing to the enhancement of the energy absorption by the plasma charged species, the plasma electron density increases linearly. At larger spatial dimensions of the antenna, both the plasma electron density and temperature are reduced. Additionally, while the TG modes appear for applied frequencies of 13.56 MHz and 27.12 MHz on the helicon antenna, the existence of helicon modes is proved for an applied frequency of 18.12 MHz. Finally, by increasing the applied voltage amplitude on the antenna, the generation of mono-energetic electrons becomes more probable.

  12. Kalman filter data assimilation: Targeting observations and parameter estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bellsky, Thomas, E-mail: bellskyt@asu.edu; Kostelich, Eric J.; Mahalov, Alex

    2014-06-15

    This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation.

  13. Mass spectrometry-based protein identification with accurate statistical significance assignment.

    PubMed

    Alves, Gelio; Yu, Yi-Kuo

    2015-03-01

    Assigning statistical significance accurately has become increasingly important as metadata of many types, often assembled in hierarchies, are constructed and combined for further biological analyses. Statistical inaccuracy of metadata at any level may propagate to downstream analyses, undermining the validity of scientific conclusions thus drawn. From the perspective of mass spectrometry-based proteomics, even though accurate statistics for peptide identification can now be achieved, accurate protein level statistics remain challenging. We have constructed a protein ID method that combines peptide evidences of a candidate protein based on a rigorous formula derived earlier; in this formula the database P-value of every peptide is weighted, prior to the final combination, according to the number of proteins it maps to. We have also shown that this protein ID method provides accurate protein level E-value, eliminating the need of using empirical post-processing methods for type-I error control. Using a known protein mixture, we find that this protein ID method, when combined with the Sorić formula, yields accurate values for the proportion of false discoveries. In terms of retrieval efficacy, the results from our method are comparable with other methods tested. The source code, implemented in C++ on a linux system, is available for download at ftp://ftp.ncbi.nlm.nih.gov/pub/qmbp/qmbp_ms/RAId/RAId_Linux_64Bit. Published by Oxford University Press 2014. This work is written by US Government employees and is in the public domain in the US.

  14. Mineralogies and source regions of near-Earth asteroids

    NASA Astrophysics Data System (ADS)

    Dunn, Tasha L.; Burbine, Thomas H.; Bottke, William F.; Clark, John P.

    2013-01-01

    Near-Earth Asteroids (NEAs) offer insight into a size range of objects that are not easily observed in the main asteroid belt. Previous studies on the diversity of the NEA population have relied primarily on modeling and statistical analysis to determine asteroid compositions. Olivine and pyroxene, the dominant minerals in most asteroids, have characteristic absorption features in the visible and near-infrared (VISNIR) wavelengths that can be used to determine their compositions and abundances. However, formulas previously used for deriving compositions do not work very well for ordinary chondrite assemblages. Because two-thirds of NEAs have ordinary chondrite-like spectral parameters, it is essential to determine accurate mineralogies. Here we determine the band area ratios and Band I centers of 72 NEAs with visible and near-infrared spectra and use new calibrations to derive the mineralogies of 47 of these NEAs with ordinary chondrite-like spectral parameters. Our results indicate that the majority of NEAs have LL-chondrite mineralogies. This is consistent with results from previous studies but continues to be in conflict with the population of recovered ordinary chondrites, of which H chondrites are the most abundant. To look for potential correlations between asteroid size, composition, and source region, we use a dynamical model to determine the most probable source region of each NEA. Model results indicate that NEAs with LL chondrite mineralogies appear to be preferentially derived from the ν6 secular resonance. This supports the hypothesis that the Flora family, which lies near the ν6 resonance, is the source of the LL chondrites. With the exception of basaltic achondrites, NEAs with non-chondrite spectral parameters are slightly less likely to be derived from the ν6 resonance than NEAs with chondrite-like mineralogies. The population of NEAs with H, L, and LL chondrite mineralogies does not appear to be influenced by size, which would suggest that ordinary

  15. Accurate collision-induced line-coupling parameters for the fundamental band of CO in He - Close coupling and coupled states scattering calculations

    NASA Technical Reports Server (NTRS)

    Green, Sheldon; Boissoles, J.; Boulet, C.

    1988-01-01

    The first accurate theoretical values for off-diagonal (i.e., line-coupling) pressure-broadening cross sections are presented. Calculations were done for CO perturbed by He at thermal collision energies using an accurate ab initio potential energy surface. Converged close coupling, i.e., numerically exact values, were obtained for coupling to the R(0) and R(2) lines. These were used to test the coupled states (CS) and infinite order sudden (IOS) approximate scattering methods. CS was found to be of quantitative accuracy (a few percent) and has been used to obtain coupling values for lines to R(10). IOS values are less accurate, but, owing to their simplicity, may nonetheless prove useful as has been recently demonstrated.

  16. Agreement of Anterior Segment Parameters Obtained From Swept-Source Fourier-Domain and Time-Domain Anterior Segment Optical Coherence Tomography.

    PubMed

    Chansangpetch, Sunee; Nguyen, Anwell; Mora, Marta; Badr, Mai; He, Mingguang; Porco, Travis C; Lin, Shan C

    2018-03-01

    To assess the interdevice agreement between swept-source Fourier-domain and time-domain anterior segment optical coherence tomography (AS-OCT). Fifty-three eyes from 41 subjects underwent CASIA2 and Visante OCT imaging. One hundred eighty-degree axis images were measured with the built-in two-dimensional analysis software for the swept-source Fourier-domain AS-OCT (CASIA2) and a customized program for the time-domain AS-OCT (Visante OCT). In both devices, we examined the angle opening distance (AOD), trabecular iris space area (TISA), angle recess area (ARA), anterior chamber depth (ACD), anterior chamber width (ACW), and lens vault (LV). Bland-Altman plots and intraclass correlation (ICC) were performed. Orthogonal linear regression assessed any proportional bias. ICC showed strong correlation for LV (0.925) and ACD (0.992) and moderate agreement for ACW (0.801). ICC suggested good agreement for all angle parameters (0.771-0.878) except temporal AOD500 (0.743) and ARA750 (nasal 0.481; temporal 0.481). There was a proportional bias in nasal ARA750 (slope 2.44, 95% confidence interval [CI]: 1.95-3.18), temporal ARA750 (slope 2.57, 95% CI: 2.04-3.40), and nasal TISA500 (slope 1.30, 95% CI: 1.12-1.54). Bland-Altman plots demonstrated in all measured parameters a minimal mean difference between the two devices (-0.089 to 0.063); however, evidence of constant bias was found in nasal AOD250, nasal AOD500, nasal AOD750, nasal ARA750, temporal AOD500, temporal AOD750, temporal ARA750, and ACD. Among the parameters with constant biases, CASIA2 tends to give the larger numbers. Both devices had generally good agreement. However, there were proportional and constant biases in most angle parameters. Thus, it is not recommended that values be used interchangeably.
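    The agreement statistics used here (mean difference and limits of agreement) follow the standard Bland-Altman construction. A minimal sketch with hypothetical paired measurements:

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman agreement for paired measurements from two devices:
    returns the mean difference (bias) and the 95% limits of agreement
    (bias +/- 1.96 * SD of the differences)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

    A constant bias, as reported for several angle parameters above, shows up as a nonzero mean difference; a proportional bias shows up as a trend of the differences against the means, which is why the study also fits an orthogonal regression.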

  17. An accurate model for predicting high frequency noise of nanoscale NMOS SOI transistors

    NASA Astrophysics Data System (ADS)

    Shen, Yanfei; Cui, Jie; Mohammadi, Saeed

    2017-05-01

    A nonlinear and scalable model suitable for predicting high frequency noise of N-type Metal Oxide Semiconductor (NMOS) transistors is presented. The model is developed for a commercial 45 nm CMOS SOI technology and its accuracy is validated through comparison with measured performance of a microwave low noise amplifier. The model employs the virtual source nonlinear core and adds parasitic elements to accurately simulate the RF behavior of multi-finger NMOS transistors up to 40 GHz. For the first time, the traditional long-channel thermal noise model is supplemented with an injection noise model to accurately represent the noise behavior of these short-channel transistors up to 26 GHz. The developed model is simple and easy to extract, yet very accurate.

  18. Measurement of drill grinding parameters using laser sensor

    NASA Astrophysics Data System (ADS)

    Yanping, Peng; Kumehara, Hiroyuki; Wei, Zhang; Nomura, Takashi

    2005-12-01

    Accurate measurement of the grinding and geometry parameters of a drill point is essential to its design and reconditioning. In recent years, a number of non-contact coordinate measuring apparatuses using CCD cameras or laser sensors have been developed, but much work remains to be done on further improvement. This paper reports another kind of laser coordinate meter. As an example of its application, the method for geometry inspection of the drill flank surface is detailed. Data measured by laser scanning of the flank surface around selected points, taken as several 2-dimensional curves, are analyzed with a mathematical procedure. If one of these curves turns out to be a straight line, it must be a generatrix of the grinding cone; thus, the grinding parameters are determined by a set of three generatrices. The measurement method and data processing procedure are then proposed, and their validity is assessed by measuring a sample with given parameters. The measured point geometry agrees well with the known values. In comparison with other methods in the published literature, this method is simpler in computation and more accurate in its results.
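    The test "does this scanned curve turn out to be a straight line" can be sketched as a straightness check: fit a least-squares line to the 2-D points and measure the residual. This is a generic sketch, not the paper's specific procedure:

```python
import numpy as np

def straightness(points):
    """RMS deviation of 2-D points from their least-squares line.
    A scanned curve with near-zero deviation is a generatrix candidate."""
    pts = np.asarray(points, float)
    centered = pts - pts.mean(axis=0)
    # Principal direction via SVD; the smallest singular value measures
    # the spread perpendicular to the best-fit line.
    _, s, _ = np.linalg.svd(centered, full_matrices=False)
    return s[-1] / np.sqrt(len(pts))
```

    Curves scanned along the cone's generatrix directions would pass this test, and three such lines fix the grinding-cone parameters.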

  19. An accurate discontinuous Galerkin method for solving point-source Eikonal equation in 2-D heterogeneous anisotropic media

    NASA Astrophysics Data System (ADS)

    Le Bouteiller, P.; Benjemaa, M.; Métivier, L.; Virieux, J.

    2018-03-01

    Accurate numerical computation of wave traveltimes in heterogeneous media is of major interest for a large range of applications in seismics, such as phase identification, data windowing, traveltime tomography and seismic imaging. A high level of precision is needed for traveltimes and their derivatives in applications which require quantities such as amplitude or take-off angle. Even more challenging is the anisotropic case, where the general Eikonal equation is a quartic in the derivatives of traveltimes. Despite their efficiency on Cartesian meshes, finite-difference solvers are inappropriate when dealing with unstructured meshes and irregular topographies. Moreover, reaching high orders of accuracy generally requires wide stencils and high additional computational load. To go beyond these limitations, we propose a discontinuous-finite-element-based strategy which has the following advantages: (1) the Hamiltonian formalism is general enough for handling the full anisotropic Eikonal equations; (2) the scheme is suitable for any desired high-order formulation or mixing of orders (p-adaptivity); (3) the solver is explicit whatever Hamiltonian is used (no need to find the roots of the quartic); (4) the use of unstructured meshes provides the flexibility for handling complex boundary geometries such as topographies (h-adaptivity) and radiation boundary conditions for mimicking an infinite medium. The point-source factorization principles are extended to this discontinuous Galerkin formulation. Extensive tests in smooth analytical media demonstrate the high accuracy of the method. Simulations in strongly heterogeneous media illustrate the solver robustness to realistic Earth-sciences-oriented applications.

  20. OpenCFU, a New Free and Open-Source Software to Count Cell Colonies and Other Circular Objects

    PubMed Central

    Geissmann, Quentin

    2013-01-01

    Counting circular objects such as cell colonies is an important source of information for biologists. Although this task is often time-consuming and subjective, it is still predominantly performed manually. The aim of the present work is to provide a new tool to enumerate circular objects from digital pictures and video streams. Here, I demonstrate that the created program, OpenCFU, is very robust, accurate and fast. In addition, it provides control over the processing parameters and is implemented in an intuitive and modern interface. OpenCFU is a cross-platform and open-source software freely available at http://opencfu.sourceforge.net. PMID:23457446
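
    After thresholding, colony counting ultimately reduces to enumerating connected groups of foreground pixels. A minimal stdlib-only sketch of that final counting step (this is not OpenCFU's actual algorithm, which additionally scores circularity and splits touching colonies):

```python
from collections import deque

def count_blobs(mask):
    """Count 4-connected groups of truthy cells in a binary image (list of lists)."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                blobs += 1
                queue = deque([(r, c)])       # flood-fill one connected component
                seen[r][c] = True
                while queue:
                    i, j = queue.popleft()
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if (0 <= ni < rows and 0 <= nj < cols
                                and mask[ni][nj] and not seen[ni][nj]):
                            seen[ni][nj] = True
                            queue.append((ni, nj))
    return blobs

# A toy thresholded "plate" with three separate colonies.
plate = [
    [1, 1, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 1, 0],
]
n_colonies = count_blobs(plate)
```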

  1. SHIELD: FITGALAXY -- A Software Package for Automatic Aperture Photometry of Extended Sources

    NASA Astrophysics Data System (ADS)

    Marshall, Melissa

    2013-01-01

    Determining the parameters of extended sources, such as galaxies, is a common but time-consuming task. Finding a photometric aperture that encompasses the majority of the flux of a source and identifying and excluding contaminating objects is often done by hand, a lengthy and difficult-to-reproduce process. To make extracting information from large data sets both quick and repeatable, I have developed a program called FITGALAXY, written in IDL. This program uses minimal user input to automatically fit an aperture to, and perform aperture and surface photometry on, an extended source. FITGALAXY also automatically traces the outlines of surface brightness thresholds and creates surface brightness profiles, which can then be used to determine the radial properties of a source. Finally, the program performs automatic masking of contaminating sources. Masks and apertures can be applied to multiple images (regardless of the WCS solution or plate scale) in order to accurately measure the same source at different wavelengths. I present the fluxes, as measured by the program, of a selection of galaxies from the Local Volume Legacy Survey. I then compare these results with the fluxes given by Dale et al. (2009) in order to assess the accuracy of FITGALAXY.
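
    The aperture-photometry-with-masking step that FITGALAXY automates reduces to summing pixel values inside an aperture while excluding flagged contaminants. A minimal NumPy sketch under assumed names (this is not FITGALAXY's IDL code):

```python
import numpy as np

def aperture_flux(image, cx, cy, radius, mask=None):
    """Sum pixel values inside a circular aperture, skipping masked pixels."""
    yy, xx = np.indices(image.shape)
    inside = (xx - cx) ** 2 + (yy - cy) ** 2 <= radius ** 2
    if mask is not None:
        inside &= ~mask          # drop pixels flagged as contaminated
    return float(image[inside].sum())

# Synthetic "galaxy": a uniform disk of value 2.0, radius 5, on a zero background.
img = np.zeros((41, 41))
yy, xx = np.indices(img.shape)
disk = (xx - 20) ** 2 + (yy - 20) ** 2 <= 5 ** 2
img[disk] = 2.0

flux = aperture_flux(img, 20, 20, 8)            # aperture enclosing the disk
contaminant = np.zeros_like(img, dtype=bool)
contaminant[20, 22] = True                       # mask one "foreground star" pixel
flux_masked = aperture_flux(img, 20, 20, 8, mask=contaminant)
```

    Because the same boolean mask and aperture can be reapplied to any co-registered image array, the same source can be measured consistently at different wavelengths, as the abstract describes.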

  2. Estimating source parameters from deformation data, with an application to the March 1997 earthquake swarm off the Izu Peninsula, Japan

    NASA Astrophysics Data System (ADS)

    Cervelli, P.; Murray, M. H.; Segall, P.; Aoki, Y.; Kato, T.

    2001-06-01

    We have applied two Monte Carlo optimization techniques, simulated annealing and random cost, to the inversion of deformation data for fault and magma chamber geometry. These techniques involve an element of randomness that permits them to escape local minima and ultimately converge to the global minimum of misfit space. We have tested the Monte Carlo algorithms on two synthetic data sets. We have also compared them to one another in terms of their efficiency and reliability. We have applied the bootstrap method to estimate confidence intervals for the source parameters, including the correlations inherent in the data. Additionally, we present methods that use the information from the bootstrapping procedure to visualize the correlations between the different model parameters. We have applied these techniques to GPS, tilt, and leveling data from the March 1997 earthquake swarm off the Izu Peninsula, Japan. Using the two Monte Carlo algorithms, we have inferred two sources, a dike and a fault, that fit the deformation data and the patterns of seismicity and that are consistent with the regional stress field.
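
    The simulated annealing idea, random perturbations with occasional uphill moves accepted early on so the search can escape local minima, can be sketched on a hypothetical one-dimensional buried source model. The functional form, parameter values, and cooling schedule below are illustrative assumptions, not the deformation model of the paper:

```python
import math
import random

random.seed(7)

def surface_disp(x, strength, depth):
    """Vertical surface displacement of a hypothetical buried point source."""
    return strength * depth / (x * x + depth * depth) ** 1.5

xs = [0.5 * k for k in range(-10, 11)]
true = (5.0, 2.0)                      # (strength, depth) used to make the data
data = [surface_disp(x, *true) for x in xs]

def misfit(params):
    s, d = params
    return sum((surface_disp(x, s, d) - u) ** 2 for x, u in zip(xs, data))

# Simulated annealing: Gaussian proposals, Metropolis acceptance, geometric cooling.
current = [1.0, 1.0]
best = list(current)
temperature = 1.0
for step in range(20000):
    temperature *= 0.9995              # cool slowly toward greedy descent
    trial = [p + random.gauss(0.0, 0.1) for p in current]
    if trial[1] <= 0.1:                # keep the source below the surface
        continue
    delta = misfit(trial) - misfit(current)
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        current = trial
        if misfit(current) < misfit(best):
            best = list(current)
```

    Re-running such an inversion on resampled data sets is, in essence, the bootstrap procedure the authors use to obtain confidence intervals on the source parameters.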

  3. Misspecification in Latent Change Score Models: Consequences for Parameter Estimation, Model Evaluation, and Predicting Change.

    PubMed

    Clark, D Angus; Nuttall, Amy K; Bowles, Ryan P

    2018-01-01

    Latent change score models (LCS) are conceptually powerful tools for analyzing longitudinal data (McArdle & Hamagami, 2001). However, applications of these models typically include constraints on key parameters over time. Although practically useful, strict invariance over time in these parameters is unlikely in real data. This study investigates the robustness of LCS when invariance over time is incorrectly imposed on key change-related parameters. Monte Carlo simulation methods were used to explore the impact of misspecification on parameter estimation, predicted trajectories of change, and model fit in the dual change score model, the foundational LCS. When constraints were incorrectly applied, several parameters, most notably the slope (i.e., constant change) factor mean and autoproportion coefficient, were severely and consistently biased, as were regression paths to the slope factor when external predictors of change were included. Standard fit indices indicated that the misspecified models fit well, partly because mean level trajectories over time were accurately captured. Loosening constraints improved the accuracy of parameter estimates, but estimates were more unstable, and models frequently failed to converge. Results suggest that potentially common sources of misspecification in LCS can produce distorted impressions of developmental processes, and that identifying and rectifying the situation is a challenge.
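
    The mean trajectory implied by a dual change score model follows directly from its two change components: a constant change (slope) term and an autoproportion term acting on the previous score. A small sketch under assumed parameter values:

```python
def dual_change_trajectory(y0, slope, beta, n_waves):
    """Mean trajectory implied by a dual change score model:
    the change at wave t is slope + beta * y[t-1], and y[t] = y[t-1] + change."""
    ys = [y0]
    for _ in range(n_waves - 1):
        ys.append(ys[-1] + slope + beta * ys[-1])
    return ys

# Hypothetical values: initial level 10, constant change 2, autoproportion 0.1.
traj = dual_change_trajectory(y0=10.0, slope=2.0, beta=0.1, n_waves=5)
```

    Because quite different (slope, beta) pairs can produce similar-looking mean trajectories, fit indices driven by the mean trajectory can look acceptable even when these parameters are biased, which is the pattern the simulation study reports.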

  4. pyQms enables universal and accurate quantification of mass spectrometry data.

    PubMed

    Leufken, Johannes; Niehues, Anna; Sarin, L Peter; Wessel, Florian; Hippler, Michael; Leidel, Sebastian A; Fufezan, Christian

    2017-10-01

    Quantitative mass spectrometry (MS) is a key technique in many research areas (1), including proteomics, metabolomics, glycomics, and lipidomics. Because all of the corresponding molecules can be described by chemical formulas, universal quantification tools are highly desirable. Here, we present pyQms, an open-source software for accurate quantification of all types of molecules measurable by MS. pyQms uses isotope pattern matching that offers an accurate quality assessment of all quantifications and the ability to directly incorporate mass spectrometer accuracy. pyQms is, due to its universal design, applicable to every research field, labeling strategy, and acquisition technique. This opens ultimate flexibility for researchers to design experiments employing innovative and hitherto unexplored labeling strategies. Importantly, pyQms performs very well in accurately quantifying partially labeled proteomes at large scale and high throughput, the most challenging task for a quantification algorithm. © 2017 by The American Society for Biochemistry and Molecular Biology, Inc.

  5. Exploratory Study for Continuous-time Parameter Estimation of Ankle Dynamics

    NASA Technical Reports Server (NTRS)

    Kukreja, Sunil L.; Boyle, Richard D.

    2014-01-01

    Recently, a parallel pathway model to describe ankle dynamics was proposed. This model provides a relationship between ankle angle and net ankle torque as the sum of a linear and nonlinear contribution. A technique to identify parameters of this model in discrete-time has been developed. However, these parameters are a nonlinear combination of the continuous-time physiology, making insight into the underlying physiology impossible. The stable and accurate estimation of continuous-time parameters is critical for accurate disease modeling, clinical diagnosis, robotic control strategies, development of optimal exercise protocols for long-term space exploration, sports medicine, etc. This paper explores the development of a system identification technique to estimate the continuous-time parameters of ankle dynamics. The effectiveness of this approach is assessed via simulation of a continuous-time model of ankle dynamics with typical parameters found in clinical studies. The results show that although this technique improves estimates, it does not provide robust estimates of continuous-time parameters of ankle dynamics. We therefore conclude that alternative modeling strategies and more advanced estimation techniques should be considered in future work.
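
    The discrete-to-continuous issue the abstract describes can be illustrated with the simplest possible case: for a first-order system dy/dt = a_c * y sampled every T seconds, the discrete-time pole is the nonlinear function exp(a_c * T) of the continuous-time parameter, so physiological insight requires inverting that map. A sketch with hypothetical values (not the parallel pathway model itself):

```python
import math

def discretize_pole(a_c, T):
    """Exact discrete-time pole of dy/dt = a_c * y sampled every T seconds."""
    return math.exp(a_c * T)

def continuous_pole(a_d, T):
    """Recover the continuous-time parameter from the discrete-time pole."""
    return math.log(a_d) / T

a_c = -3.0      # hypothetical continuous-time parameter (e.g. tied to joint stiffness)
T = 0.01        # 100 Hz sampling interval, also hypothetical
a_d = discretize_pole(a_c, T)
recovered = continuous_pole(a_d, T)
```

    In this scalar case the map inverts cleanly; for the nonlinear parallel pathway model the discrete parameters mix several continuous-time quantities, which is why the paper pursues direct continuous-time identification instead.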

  6. Flat-Spectrum Radio Sources as Likely Counterparts of Unidentified INTEGRAL Sources (Research Note)

    NASA Technical Reports Server (NTRS)

    Molina, M.; Landi, R.; Bassani, L.; Malizia, A.; Stephen, J. B.; Bazzano, A.; Bird, A. J.; Gehrels, N.

    2012-01-01

    Many sources in the fourth INTEGRAL/IBIS catalogue are still unidentified since they lack an optical counterpart. An important tool that can help in identifying and classifying these sources is the cross-correlation with radio catalogues, which are very sensitive and positionally accurate. Moreover, the radio properties of a source, such as the spectrum or morphology, could provide further insight into its nature. In particular, flat-spectrum radio sources at high Galactic latitudes are likely to be AGN, possibly associated with a blazar or with the compact core of a radio galaxy. Here we present a small sample of 6 sources extracted from the fourth INTEGRAL/IBIS catalogue that are still unidentified or unclassified, but which are very likely associated with a bright, flat-spectrum radio object. To confirm the association and to study the source X-ray spectral parameters, we performed X-ray follow-up observations with Swift/XRT of all objects. We report in this note the overall results obtained from this search and discuss the nature of each individual INTEGRAL source. We find that 5 of the 6 radio associations are also detected in X-rays; furthermore, in 3 cases they are the only counterpart found. More specifically, IGR J06073-0024 is a flat-spectrum radio quasar at z = 1.08, IGR J14488-4008 is a newly discovered radio galaxy, while IGR J18129-0649 is an AGN of a still unknown type. The nature of two sources (IGR J07225-3810 and IGR J19386-4653) is less well defined, since in both cases we find another X-ray source in the INTEGRAL error circle; nevertheless, the flat-spectrum radio source, likely to be a radio-loud AGN, remains a viable and, in fact, a more convincing association in both cases. Only for the last object (IGR J11544-7618) could we not find any convincing counterpart, since the radio association is not an X-ray emitter, while the only X-ray source seen in the field is a G star and therefore unlikely to produce the persistent emission seen by INTEGRAL.

  7. Automated Method for Estimating Nutation Time Constant Model Parameters for Spacecraft Spinning on Axis

    NASA Technical Reports Server (NTRS)

    2008-01-01

    Calculating an accurate nutation time constant (NTC), or nutation rate of growth, for a spinning upper stage is important for ensuring mission success. Spacecraft nutation, or wobble, is caused by energy dissipation anywhere in the system. Propellant slosh in the spacecraft fuel tanks is the primary source for this dissipation and, if it is in a state of resonance, the NTC can become short enough to violate mission constraints. The Spinning Slosh Test Rig (SSTR) is a forced-motion spin table where fluid dynamic effects in full-scale fuel tanks can be tested in order to obtain key parameters used to calculate the NTC. We accomplish this by independently varying nutation frequency versus the spin rate and measuring force and torque responses on the tank. This method was used to predict parameters for the Genesis, Contour, and Stereo missions, whose tanks were mounted outboard from the spin axis. These parameters are incorporated into a mathematical model that uses mechanical analogs, such as pendulums and rotors, to simulate the force and torque resonances associated with fluid slosh.
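
    Since nutation amplitude grows roughly exponentially with time constant tau, an NTC can be estimated from amplitude samples by a log-linear least-squares fit. A sketch under assumed, noise-free data (this is not the SSTR data reduction itself, which works from measured forces and torques):

```python
import math

def fit_ntc(times, angles):
    """Estimate the nutation time constant tau from theta(t) = theta0 * exp(t / tau)
    by ordinary least squares on log(theta)."""
    n = len(times)
    logs = [math.log(a) for a in angles]
    t_mean = sum(times) / n
    l_mean = sum(logs) / n
    slope = (sum((t - t_mean) * (l - l_mean) for t, l in zip(times, logs))
             / sum((t - t_mean) ** 2 for t in times))
    return 1.0 / slope          # slope of log-amplitude is 1 / tau

tau_true = 40.0                 # seconds, hypothetical
times = [float(t) for t in range(0, 200, 10)]
angles = [0.5 * math.exp(t / tau_true) for t in times]
tau_est = fit_ntc(times, angles)
```

    A short tau (fast growth) is what signals a slosh resonance of the kind the SSTR testing is designed to detect before flight.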

  8. Verification of Plutonium Content in PuBe Sources Using MCNP® 6.2.0 Beta with TENDL 2012 Libraries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lockhart, Madeline Louise; McMath, Garrett Earl

    Although the production of PuBe neutron sources has been discontinued, hundreds of sources with unknown or inaccurately declared plutonium content are in existence around the world. Institutions have undertaken the task of assaying these sources, measuring and calculating the isotopic composition, plutonium content, and neutron yield. The nominal plutonium content, based on the neutron yield per gram of pure 239Pu, has proven to be highly inaccurate. New methods of measuring the plutonium content allow a more accurate estimate of the true Pu content, but these measurements need verification. Using the TENDL 2012 nuclear data libraries, MCNP6 has the capability to simulate the (α, n) interactions in a PuBe source. Theoretically, if the source is modeled according to the plutonium content, isotopic composition, and other source characteristics, the calculated neutron yield in MCNP can be compared to the experimental yield, offering an indication of the accuracy of the declared plutonium content. In this study, three sets of PuBe sources from various backgrounds were modeled in MCNP6 1.2 Beta, according to the source specifications dictated by the individuals who assayed the source. Verification of the source parameters with MCNP6 also serves as a means to test the alpha transport capabilities of MCNP6 1.2 Beta with TENDL 2012 alpha transport libraries. Finally, good agreement in the comparison would indicate the accuracy of the source parameters in addition to demonstrating MCNP's capabilities in simulating (α, n) interactions.

  9. Verification of Plutonium Content in PuBe Sources Using MCNP® 6.2.0 Beta with TENDL 2012 Libraries

    DOE PAGES

    Lockhart, Madeline Louise; McMath, Garrett Earl

    2017-10-26

    Although the production of PuBe neutron sources has been discontinued, hundreds of sources with unknown or inaccurately declared plutonium content are in existence around the world. Institutions have undertaken the task of assaying these sources, measuring and calculating the isotopic composition, plutonium content, and neutron yield. The nominal plutonium content, based on the neutron yield per gram of pure 239Pu, has proven to be highly inaccurate. New methods of measuring the plutonium content allow a more accurate estimate of the true Pu content, but these measurements need verification. Using the TENDL 2012 nuclear data libraries, MCNP6 has the capability to simulate the (α, n) interactions in a PuBe source. Theoretically, if the source is modeled according to the plutonium content, isotopic composition, and other source characteristics, the calculated neutron yield in MCNP can be compared to the experimental yield, offering an indication of the accuracy of the declared plutonium content. In this study, three sets of PuBe sources from various backgrounds were modeled in MCNP6 1.2 Beta, according to the source specifications dictated by the individuals who assayed the source. Verification of the source parameters with MCNP6 also serves as a means to test the alpha transport capabilities of MCNP6 1.2 Beta with TENDL 2012 alpha transport libraries. Finally, good agreement in the comparison would indicate the accuracy of the source parameters in addition to demonstrating MCNP's capabilities in simulating (α, n) interactions.

  10. Identify source location and release time for pollutants undergoing super-diffusion and decay: Parameter analysis and model evaluation

    NASA Astrophysics Data System (ADS)

    Zhang, Yong; Sun, HongGuang; Lu, Bingqing; Garrard, Rhiannon; Neupauer, Roseanna M.

    2017-09-01

    Backward models have been applied for four decades by hydrologists to identify the source of pollutants undergoing Fickian diffusion, while analytical tools are not available for source identification of super-diffusive pollutants undergoing decay. This technical note evaluates analytical solutions for the source location and release time of a decaying contaminant undergoing super-diffusion using backward probability density functions (PDFs), where the forward model is the space fractional advection-dispersion equation with decay. A revisit of the well-known MADE-2 tracer test using parameter analysis shows that the peak backward location PDF can predict the tritium source location, while the peak backward travel time PDF underestimates the tracer release time due to the early arrival of tracer particles at the detection well in the maximally skewed, super-diffusive transport. In addition, the first-order decay adds additional skewness toward earlier arrival times in backward travel time PDFs, resulting in a younger release time, although this impact is minimized at the MADE-2 site because tritium's half-life is relatively long compared to the monitoring period. The main conclusion is that, while non-trivial backward techniques are required to identify pollutant source location, the pollutant release time can and should be directly estimated given the speed of the peak resident concentration for super-diffusive pollutants with or without decay.

  11. The Chandra Source Catalog : Automated Source Correlation

    NASA Astrophysics Data System (ADS)

    Hain, Roger; Evans, I. N.; Evans, J. D.; Glotfelty, K. J.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Davis, J. E.; Doe, S. M.; Fabbiano, G.; Galle, E.; Gibbs, D. G.; Grier, J. D.; Hall, D. M.; Harbo, P. N.; He, X.; Houck, J. C.; Karovska, M.; Lauer, J.; McCollough, M. L.; McDowell, J. C.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Nowak, M. A.; Plummer, D. A.; Primini, F. A.; Refsdal, B. L.; Rots, A. H.; Siemiginowska, A. L.; Sundheim, B. A.; Tibbetts, M. S.; Van Stone, D. W.; Winkelman, S. L.; Zografou, P.

    2009-01-01

    Chandra Source Catalog (CSC) master source pipeline processing seeks to automatically detect sources and compute their properties. Since Chandra is a pointed mission and not a sky survey, different sky regions are observed different numbers of times at varying orientations, resolutions, and other heterogeneous conditions. While this provides an opportunity to collect data from a potentially large number of observing passes, it also creates challenges in determining the best way to combine different detection results for the most accurate characterization of the detected sources. The CSC master source pipeline correlates data from multiple observations by updating existing cataloged source information with new data from the same sky region as they become available. This process sometimes leads to relatively straightforward conclusions, such as when single sources from two observations are similar in size and position. Other observation results require more logic to combine, such as one observation finding a single, large source and another identifying multiple, smaller sources at the same position. We present examples of different overlapping source detections processed in the current version of the CSC master source pipeline. We explain how they are resolved into entries in the master source database, and examine the challenges of computing source properties for the same source detected multiple times. Future enhancements are also discussed. This work is supported by NASA contract NAS8-03060 (CXC).

  12. The fundamental parameter method applied to X-ray fluorescence analysis with synchrotron radiation

    NASA Astrophysics Data System (ADS)

    Pantenburg, F. J.; Beier, T.; Hennrich, F.; Mommsen, H.

    1992-05-01

    Quantitative X-ray fluorescence analysis applying the fundamental parameter method is usually restricted to monochromatic excitation sources. It is shown here that such analyses can be performed as well with a white synchrotron radiation spectrum. To determine absolute elemental concentration values it is necessary to know the spectral distribution of this spectrum. A newly designed and tested experimental setup, which uses the synchrotron radiation emitted from electrons in a bending magnet of ELSA (the electron stretcher accelerator of the University of Bonn), is presented. The determination of the exciting spectrum, described by the given electron beam parameters, is limited due to uncertainties in the vertical electron beam size and divergence. We describe a method which allows us to determine the relative and absolute spectral distributions needed for accurate analysis. First test measurements of different alloys and standards of known composition demonstrate that it is possible to determine exact concentration values in bulk and trace element analysis.

  13. Urban stream syndrome in a small, lightly developed watershed: a statistical analysis of water chemistry parameters, land use patterns, and natural sources.

    PubMed

    Halstead, Judith A; Kliman, Sabrina; Berheide, Catherine White; Chaucer, Alexander; Cock-Esteb, Alicea

    2014-06-01

    The relationships among land use patterns, geology, soil, and major solute concentrations in stream water for eight tributaries of the Kayaderosseras Creek watershed in Saratoga County, NY, were investigated using Pearson correlation coefficients and multivariate regression analysis. Sub-watersheds corresponding to each sampling site were delineated, and land use patterns were determined for each of the eight sub-watersheds using GIS. Four land use categories (urban development, agriculture, forests, and wetlands) constituted more than 99 % of the land in the sub-watersheds. Eleven water chemistry parameters were highly and positively correlated with each other and with urban development. Multivariate regression models indicated urban development was the most powerful predictor for the same eleven parameters (conductivity, TN, TP, NO3(-), Cl(-), HCO3(-), SO4(2-), Na(+), K(+), Ca(2+), and Mg(2+)). Adjusted R(2) values, ranging from 19 to 91 %, indicated that these models explained an average of 64 % of the variance in these 11 parameters across the samples and 70 % when Mg(2+) was omitted. The more common R(2), ranging from 29 to 92 %, averaged 68 % for these 11 parameters and 72 % when Mg(2+) was omitted. Water quality improved most with forest coverage in stream watersheds. The strong associations between water quality variables and urban development indicated that an urban source for these 11 water quality parameters at all eight sampling sites was likely, suggesting that urban stream syndrome can be detected even on a relatively small scale in a lightly developed area. Possible urban sources of Ca(2+) and HCO3(-) are suggested.
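
    The multivariate regression step described above, regressing a water chemistry parameter on land use fractions and reporting R(2), can be sketched with ordinary least squares on synthetic data. All numbers below are fabricated for illustration, not the Kayaderosseras data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical predictors per sub-watershed: fractions of urban, agricultural,
# and forest land cover (wetlands omitted as the reference category).
X_land = rng.uniform(0.0, 1.0, size=(40, 3))
# Synthetic "conductivity" driven mostly by the urban fraction, plus noise.
conductivity = (50.0 + 300.0 * X_land[:, 0] + 20.0 * X_land[:, 1]
                + rng.normal(0.0, 10.0, size=40))

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(len(X_land)), X_land])
coef, *_ = np.linalg.lstsq(A, conductivity, rcond=None)

fitted = A @ coef
ss_res = float(((conductivity - fitted) ** 2).sum())
ss_tot = float(((conductivity - conductivity.mean()) ** 2).sum())
r_squared = 1.0 - ss_res / ss_tot
```

    A large positive coefficient on the urban fraction, with high R(2), is the kind of evidence the study reads as an urban source for a given parameter.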

  14. Fixed-Wing Micro Aerial Vehicle for Accurate Corridor Mapping

    NASA Astrophysics Data System (ADS)

    Rehak, M.; Skaloud, J.

    2015-08-01

    In this study we present a Micro Aerial Vehicle (MAV) equipped with precise position and attitude sensors that, together with a pre-calibrated camera, enables accurate corridor mapping. The design of the platform is based on widely available model components into which we integrate an open-source autopilot, a customized mass-market camera, and navigation sensors. We adapt the concepts of system calibration from larger mapping platforms to the MAV and evaluate them practically for their achievable accuracy. We present case studies for accurate mapping without ground control points: first for a block configuration, later for a narrow corridor. We evaluate the mapping accuracy with respect to checkpoints and a digital terrain model. We show that while it is possible to achieve pixel-level (3-5 cm) mapping accuracy in both cases, precise aerial position control is sufficient for the block configuration, whereas precise position and attitude control is required for corridor mapping.

  15. Impact of Monoenergetic Photon Sources on Nonproliferation Applications Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Geddes, Cameron; Ludewigt, Bernhard; Valentine, John

    Near-monoenergetic photon sources (MPSs) have the potential to improve sensitivity at greatly reduced dose in existing applications and enable new capabilities in other applications, particularly where passive signatures do not penetrate or are insufficiently accurate. MPS advantages include the ability to select energy, energy spread, flux, and pulse structures to deliver only the photons needed for the application, while suppressing extraneous dose and background. Some MPSs also offer narrow angular divergence photon beams which can target dose and/or mitigate scattering contributions to image contrast degradation. Current bremsstrahlung photon sources (e.g., linacs and betatrons) produce photons over a broad range of energies, thus delivering unnecessary dose that in some cases also interferes with the signature to be detected and/or restricts operations. Current sources must be collimated (reducing flux) to generate narrow divergence beams. While MPSs can in principle resolve these issues, they remain at relatively low TRL status. Candidate MPS technologies for nonproliferation applications are now being developed, each of which has different properties (e.g. broad vs. narrow angular divergence). Within each technology, source parameters trade off against one another (e.g. flux vs. energy spread), representing a large operation space. This report describes a broad survey of potential applications, identification of high priority applications, and detailed simulations addressing those priority applications. Requirements were derived for each application, and analysis and simulations were conducted to define MPS parameters that deliver benefit. The results can inform targeting of MPS development to deliver strong impact relative to current systems.

  16. Parameter identification of piezoelectric hysteresis model based on improved artificial bee colony algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Geng; Zhou, Kexin; Zhang, Yeming

    2018-04-01

    The widely used Bouc-Wen hysteresis model can be utilized to accurately simulate the voltage-displacement curves of piezoelectric actuators. In order to identify the unknown parameters of the Bouc-Wen model, an improved artificial bee colony (IABC) algorithm is proposed in this paper. A guiding strategy for searching the current optimal position of the food source is proposed in the method, which can help balance the local search ability and global exploitation capability. In addition, the formula used by the scout bees to search for food sources is modified to increase the convergence speed. Some experiments were conducted to verify the effectiveness of the IABC algorithm. The results show that the identified hysteresis model agreed well with the actual actuator response. Moreover, the identification results were compared with those of the standard particle swarm optimization (PSO) method, showing that the IABC algorithm converges faster than the standard PSO method.
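
    A plain artificial bee colony optimizer, with its employed, onlooker, and scout phases, can be sketched as follows. A simple sphere function stands in for the Bouc-Wen fitting error, and this baseline deliberately omits the paper's guiding and scout modifications; all parameter values are illustrative assumptions:

```python
import random

random.seed(1)

def sphere(x):
    """Toy objective standing in for a hysteresis-model fitting error."""
    return sum(v * v for v in x)

def abc_minimize(objective, dim, lo, hi, n_food=10, limit=20, iters=300):
    """Minimal artificial bee colony: employed, onlooker, and scout phases."""
    foods = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_food)]
    costs = [objective(f) for f in foods]
    trials = [0] * n_food

    def local_search(i):
        # Perturb one dimension toward/away from a randomly chosen partner source.
        partner = random.choice([j for j in range(n_food) if j != i])
        d = random.randrange(dim)
        phi = random.uniform(-1.0, 1.0)
        candidate = list(foods[i])
        candidate[d] = foods[i][d] + phi * (foods[i][d] - foods[partner][d])
        candidate[d] = min(hi, max(lo, candidate[d]))
        cost = objective(candidate)
        if cost < costs[i]:                 # greedy selection
            foods[i], costs[i], trials[i] = candidate, cost, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):             # employed bee phase
            local_search(i)
        # Onlooker phase: lower-cost sources attract proportionally more searches.
        fitness = [1.0 / (1.0 + c) for c in costs]
        total = sum(fitness)
        for _ in range(n_food):
            r, acc, i = random.uniform(0.0, total), 0.0, 0
            for j, f in enumerate(fitness):
                acc += f
                if r <= acc:
                    i = j
                    break
            local_search(i)
        for i in range(n_food):             # scout phase: abandon stale sources
            if trials[i] > limit:
                foods[i] = [random.uniform(lo, hi) for _ in range(dim)]
                costs[i] = objective(foods[i])
                trials[i] = 0
    best = min(range(n_food), key=lambda i: costs[i])
    return foods[best], costs[best]

best_x, best_cost = abc_minimize(sphere, dim=3, lo=-5.0, hi=5.0)
```

    The paper's IABC replaces the unbiased partner step with a guided move toward the current best food source and changes the scout re-initialization formula, both aimed at faster convergence than this baseline.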

  17. Accurate quantification of magnetic particle properties by intra-pair magnetophoresis for nanobiotechnology

    NASA Astrophysics Data System (ADS)

    van Reenen, Alexander; Gao, Yang; Bos, Arjen H.; de Jong, Arthur M.; Hulsen, Martien A.; den Toonder, Jaap M. J.; Prins, Menno W. J.

    2013-07-01

    The application of magnetic particles in biomedical research and in-vitro diagnostics requires accurate characterization of their magnetic properties, with single-particle resolution and good statistics. Here, we report intra-pair magnetophoresis as a method to accurately quantify the field-dependent magnetic moments of magnetic particles and to rapidly generate histograms of the magnetic moments with good statistics. We demonstrate our method with particles of different sizes and from different sources, with a measurement precision of a few percent. We expect that intra-pair magnetophoresis will be a powerful tool for the characterization and improvement of particles for the upcoming field of particle-based nanobiotechnology.

  18. Simultaneous Estimation of Microphysical Parameters and Atmospheric State Variables With Radar Data and Ensemble Square-root Kalman Filter

    NASA Astrophysics Data System (ADS)

    Tong, M.; Xue, M.

    2006-12-01

    An important source of model error for convective-scale data assimilation and prediction is microphysical parameterization. This study investigates the possibility of estimating up to five fundamental microphysical parameters, which are closely involved in the definition of drop size distribution of microphysical species in a commonly used single-moment ice microphysics scheme, using radar observations and the ensemble Kalman filter method. The five parameters include the intercept parameters for rain, snow and hail/graupel, and the bulk densities of hail/graupel and snow. Parameter sensitivity and identifiability are first examined. The ensemble square-root Kalman filter (EnSRF) is employed for simultaneous state and parameter estimation. Observing system simulation experiments are performed for a model-simulated supercell storm, in which the five microphysical parameters are estimated individually or in different combinations starting from different initial guesses. When error exists in only one of the microphysical parameters, the parameter can be successfully estimated without exception. The estimation of multiple parameters is found to be less robust, with end results of estimation being sensitive to the realization of the initial parameter perturbation. This is believed to be because of the reduced parameter identifiability and the existence of non-unique solutions. The results of state estimation are, however, always improved when simultaneous parameter estimation is performed, even when the estimated parameter values are not accurate.
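
    The core of simultaneous state and parameter estimation is that observing the state updates the parameter through their sample cross-covariance in an augmented ensemble. A scalar-observation ensemble square-root update can be sketched as follows; the toy model and all numbers are assumptions for illustration, not the supercell experiment:

```python
import numpy as np

rng = np.random.default_rng(42)

def ensrf_update(ens, obs, obs_var, obs_index=0):
    """Scalar-observation ensemble square-root Kalman filter update.
    'ens' has shape (n_members, n_vars); augmenting the state with parameters
    lets one observation update both through their sample cross-covariance."""
    mean = ens.mean(axis=0)
    anomalies = ens - mean
    y_anom = anomalies[:, obs_index]
    p_yy = float(y_anom @ y_anom) / (len(ens) - 1)
    p_xy = (anomalies.T @ y_anom) / (len(ens) - 1)   # cross-covariances with obs
    gain = p_xy / (p_yy + obs_var)
    new_mean = mean + gain * (obs - mean[obs_index])
    # Deterministic (square-root) anomaly update, no perturbed observations.
    alpha = 1.0 / (1.0 + np.sqrt(obs_var / (p_yy + obs_var)))
    new_anoms = anomalies - alpha * np.outer(y_anom, gain)
    return new_mean + new_anoms

# Augmented ensemble: column 0 = observable state, column 1 = model parameter.
true_param = 2.5
params = rng.normal(1.5, 0.5, size=50)             # biased prior parameter guess
states = 3.0 * params + rng.normal(0.0, 0.1, 50)   # state correlated with parameter
ens = np.column_stack([states, params])

obs = 3.0 * true_param                              # near-perfect state observation
updated = ensrf_update(ens, obs, obs_var=0.01)
prior_param_mean = params.mean()
post_param_mean = updated[:, 1].mean()
```

    When the state-parameter correlation is weak or several parameters imprint similar signatures on the observations, this update becomes ambiguous, which mirrors the reduced identifiability the study reports for multiple-parameter estimation.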

  19. FECAL POLLUTION, PUBLIC HEALTH AND MICROBIAL SOURCE TRACKING

    EPA Science Inventory

    Microbial source tracking (MST) seeks to provide information about sources of fecal water contamination. Without knowledge of sources, it is difficult to accurately model risk assessments, choose effective remediation strategies, or bring chronically polluted waters into compliance...

  20. Relation between aerosol sources and meteorological parameters for inhalable atmospheric particles in Sao Paulo City, Brazil

    NASA Astrophysics Data System (ADS)

    Andrade, Fatima; Orsini, Celso; Maenhaut, Willy

    Stacked filter units were used to collect atmospheric particles in separate coarse and fine fractions at the Sao Paulo University Campus during the winter of 1989. The samples were analysed by particle-induced X-ray emission (PIXE) and the data were subjected to an absolute principal component analysis (APCA). Five sources were identified for the fine particles: industrial emissions, which accounted for 13% of the fine mass; emissions from residual oil and diesel, explaining 41%; resuspended soil dust, with 28%; and emissions of Cu and of Mg, which together accounted for 18%. For the coarse particles, four sources were identified: soil dust, accounting for 59% of the coarse mass; industrial emissions, with 19%; oil burning, with 8%; and sea salt aerosol, with 14% of the coarse mass. A data set with various meteorological parameters was also subjected to APCA, and a correlation analysis was performed between the meteorological "absolute principal component scores" (APCS) and the APCS from the fine and coarse particle data sets. The soil dust sources for the fine and coarse aerosol were highly correlated with each other and were anticorrelated with the sea breeze component. The industrial components in the fine and coarse size fractions were also highly positively correlated. Furthermore, the industrial component was related to the northeasterly wind direction and, to a lesser extent, to the sea breeze component.

  1. Estimating the Effective System Dead Time Parameter for Correlated Neutron Counting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Croft, Stephen; Cleveland, Steve; Favalli, Andrea

    Neutron time correlation analysis is one of the main technical nuclear safeguards techniques used to verify declarations of, or to independently assay, special nuclear materials. Quantitative information is generally extracted from the neutron-event pulse train, collected from moderated assemblies of 3He proportional counters, in the form of correlated count rates that are derived from event-triggered coincidence gates. These count rates, most commonly referred to as singles, doubles and triples rates etc., when extracted using shift-register autocorrelation logic, are related to the reduced factorial moments of the time correlated clusters of neutrons emerging from the measurement items. Correcting these various rates for dead time losses has received considerable attention recently. The dead time losses for the higher moments in particular, and especially for large mass (high rate and highly multiplying) items, can be significant. Consequently, even in thoughtfully designed systems, accurate dead time treatments are needed if biased mass determinations are to be avoided. In support of this effort, in this paper we discuss a new approach to experimentally estimate the effective system dead time of neutron coincidence counting systems. It involves counting a random neutron source (e.g. AmLi is a good approximation to a source without correlated emission) and relating the second and higher moments of the neutron number distribution recorded in random triggered interrogation coincidence gates to the effective value of the dead time parameter. We develop the theoretical basis of the method and apply it to the Oak Ridge Large Volume Active Well Coincidence Counter using sealed AmLi radionuclide neutron sources and standard multiplicity shift register electronics. The method is simple to apply compared to the predominant present approach, which involves using a set of 252Cf sources of wide emission rate, and it gives excellent precision in a conveniently short time.
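
    The premise of the method is that a random (AmLi-like) source has no correlated emission, so the second factorial moment of the counts in random-triggered gates equals the square of the first; dead time losses depress this ratio, which is what makes it sensitive to the dead time parameter. A sketch of the ideal, dead-time-free case (illustrative only, with an assumed gate count rate):

```python
import math
import random

random.seed(3)

# Counts recorded in many random-triggered coincidence gates from an
# uncorrelated (AmLi-like) source are Poisson distributed.
rate_per_gate = 3.0   # assumed mean counts per gate

def poisson_sample(lam):
    """Knuth's algorithm for a Poisson variate (stdlib only)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

counts = [poisson_sample(rate_per_gate) for _ in range(200_000)]
m1 = sum(counts) / len(counts)                        # first factorial moment
m2 = sum(n * (n - 1) for n in counts) / len(counts)   # second factorial moment
ratio = m2 / m1 ** 2    # equals 1 for an ideal dead-time-free random source
```

    In a real counter, dead time preferentially removes closely spaced pulses, pulling this ratio below 1 by an amount that the paper's analysis relates to the effective dead time parameter.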

  2. Estimating the effective system dead time parameter for correlated neutron counting

    NASA Astrophysics Data System (ADS)

    Croft, Stephen; Cleveland, Steve; Favalli, Andrea; McElroy, Robert D.; Simone, Angela T.

    2017-11-01

    Neutron time correlation analysis is one of the main technical nuclear safeguards techniques used to verify declarations of, or to independently assay, special nuclear materials. Quantitative information is generally extracted from the neutron-event pulse train, collected from moderated assemblies of 3He proportional counters, in the form of correlated count rates that are derived from event-triggered coincidence gates. These count rates, most commonly referred to as singles, doubles and triples rates etc., when extracted using shift-register autocorrelation logic, are related to the reduced factorial moments of the time-correlated clusters of neutrons emerging from the measurement items. Correcting these various rates for dead time losses has received considerable attention recently. The dead time losses for the higher moments in particular, and especially for large mass (high rate and highly multiplying) items, can be significant. Consequently, even in thoughtfully designed systems, accurate dead time treatments are needed if biased mass determinations are to be avoided. In support of this effort, in this paper we discuss a new approach to experimentally estimate the effective system dead time of neutron coincidence counting systems. It involves counting a random neutron source (e.g. AmLi is a good approximation to a source without correlated emission) and relating the second and higher moments of the neutron number distribution recorded in random-triggered interrogation coincidence gates to the effective value of the dead time parameter. We develop the theoretical basis of the method and apply it to the Oak Ridge Large Volume Active Well Coincidence Counter using sealed AmLi radionuclide neutron sources and standard multiplicity shift register electronics. The method is simple to apply compared to the predominant present approach, which involves using a set of 252Cf sources spanning a wide range of emission rates; it gives excellent precision in a conveniently short time, and it
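
    The core idea can be illustrated with a small simulation (the rate, gate width, and dead time values below are hypothetical, and a simple non-paralyzable dead time model stands in for the real system response): a truly random source gives Poisson counting statistics, for which the second factorial moment of the gate counts equals the squared mean; dead time pushes the measured ratio below one, and that deficit encodes the effective dead time parameter.

```python
import numpy as np

rng = np.random.default_rng(0)

rate = 5e4      # source event rate (1/s), hypothetical
tau = 2e-6      # effective system dead time (s), hypothetical
gate = 64e-6    # coincidence gate width (s), hypothetical
t_total = 20.0  # measurement time (s)

# Poisson pulse train: uniform event times over the measurement
n_events = rng.poisson(rate * t_total)
times = np.sort(rng.uniform(0.0, t_total, n_events))

# Non-paralyzable dead time: drop events within tau of the last accepted one
kept = []
last = -np.inf
for t in times:
    if t - last >= tau:
        kept.append(t)
        last = t
kept = np.array(kept)

# Count accepted events in consecutive gates
edges = np.arange(0.0, t_total, gate)
counts, _ = np.histogram(kept, bins=edges)

m1 = counts.mean()                      # first moment of gate counts
m2f = (counts * (counts - 1.0)).mean()  # second factorial moment

# For an ideal Poisson source m2f == m1**2; dead time drives the ratio
# below 1, and the size of the deficit fixes the effective dead time.
ratio = m2f / m1**2
print(m1, ratio)
```

The same comparison extended to the third factorial moment constrains the dead time's effect on triples, which is the regime where the biases are largest.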

  3. Estimating the Effective System Dead Time Parameter for Correlated Neutron Counting

    DOE PAGES

    Croft, Stephen; Cleveland, Steve; Favalli, Andrea; ...

    2017-04-29

    Neutron time correlation analysis is one of the main technical nuclear safeguards techniques used to verify declarations of, or to independently assay, special nuclear materials. Quantitative information is generally extracted from the neutron-event pulse train, collected from moderated assemblies of 3He proportional counters, in the form of correlated count rates that are derived from event-triggered coincidence gates. These count rates, most commonly referred to as singles, doubles and triples rates etc., when extracted using shift-register autocorrelation logic, are related to the reduced factorial moments of the time-correlated clusters of neutrons emerging from the measurement items. Correcting these various rates for dead time losses has received considerable attention recently. The dead time losses for the higher moments in particular, and especially for large mass (high rate and highly multiplying) items, can be significant. Consequently, even in thoughtfully designed systems, accurate dead time treatments are needed if biased mass determinations are to be avoided. In support of this effort, in this paper we discuss a new approach to experimentally estimate the effective system dead time of neutron coincidence counting systems. It involves counting a random neutron source (e.g. AmLi is a good approximation to a source without correlated emission) and relating the second and higher moments of the neutron number distribution recorded in random-triggered interrogation coincidence gates to the effective value of the dead time parameter. We develop the theoretical basis of the method and apply it to the Oak Ridge Large Volume Active Well Coincidence Counter using sealed AmLi radionuclide neutron sources and standard multiplicity shift register electronics. The method is simple to apply compared to the predominant present approach, which involves using a set of 252Cf sources spanning a wide range of emission rates; it gives excellent precision in a conveniently

  4. Determination of earthquake source parameters from waveform data for studies of global and regional seismicity

    NASA Astrophysics Data System (ADS)

    Dziewonski, A. M.; Chou, T.-A.; Woodhouse, J. H.

    1981-04-01

    It is possible to use the waveform data not only to derive the source mechanism of an earthquake but also to establish the hypocentral coordinates of the `best point source' (the centroid of the stress glut density) at a given frequency. Thus two classical problems of seismology are combined into a single procedure. Given an estimate of the origin time, epicentral coordinates and depth, an initial moment tensor is derived using one of the variations of the method described in detail by Gilbert and Dziewonski (1975). This set of parameters represents the starting values for an iterative procedure in which perturbations to the elements of the moment tensor are found simultaneously with changes in the hypocentral parameters. In general, the method is stable and convergence is rapid. Although the approach is a general one, we present it here in the context of the analysis of long-period body wave data recorded by the instruments of the SRO and ASRO digital network. It appears that the upper magnitude limit of earthquakes that can be processed using this particular approach is between 7.5 and 8.0; the lower limit is, at this time, approximately 5.5, but it could be extended by broadening the passband of the analysis to include energy with periods shorter than 45 s. As there are hundreds of earthquakes each year with magnitudes exceeding 5.5, the seismic source mechanism can now be studied in detail not only for major events but also, for example, for aftershock series. We have investigated the foreshock and several aftershocks of the Sumba earthquake of August 19, 1977; the results show temporal variation of the stress regime in the fault area of the main shock. An area some 150 km to the northwest of the epicenter of the main event became seismically active 49 days later. The sense of the strike-slip mechanism of these events is consistent with the relaxation of the compressive stress in the plate north of the Java trench.
Another geophysically interesting result of our

  5. Measuring Parameters of Massive Black Hole Binaries with Partially Aligned Spins

    NASA Technical Reports Server (NTRS)

    Lang, Ryan N.; Hughes, Scott A.; Cornish, Neil J.

    2011-01-01

    The future space-based gravitational wave detector LISA will be able to measure parameters of coalescing massive black hole binaries, often to extremely high accuracy. Previous work has demonstrated that the black hole spins can have a strong impact on the accuracy of parameter measurement. Relativistic spin-induced precession modulates the waveform in a manner which can break degeneracies between parameters, in principle significantly improving how well they are measured. Recent studies have indicated, however, that spin precession may be weak for an important subset of astrophysical binary black holes: those in which the spins are aligned due to interactions with gas. In this paper, we examine how well a binary's parameters can be measured when its spins are partially aligned and compare results using waveforms that include higher post-Newtonian harmonics to those that are truncated at leading quadrupole order. We find that the weakened precession can substantially degrade parameter estimation, particularly for the "extrinsic" parameters sky position and distance. Absent higher harmonics, LISA typically localizes the sky position of a nearly aligned binary about an order of magnitude less accurately than one for which the spin orientations are random. Our knowledge of a source's sky position will thus be worst for the gas-rich systems which are most likely to produce electromagnetic counterparts. Fortunately, higher harmonics of the waveform can make up for this degradation. By including harmonics beyond the quadrupole in our waveform model, we find that the accuracy with which most of the binary's parameters are measured can be substantially improved. In some cases, the improvement is such that they are measured almost as well as when the binary spins are randomly aligned.

  6. A New Source Biasing Approach in ADVANTG

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bevill, Aaron M; Mosher, Scott W

    2012-01-01

    The ADVANTG code has been developed at Oak Ridge National Laboratory to generate biased sources and weight window maps for MCNP using the CADIS and FW-CADIS methods. In preparation for an upcoming RSICC release, a new approach for generating a biased source has been developed. This improvement streamlines user input and improves reliability. Previous versions of ADVANTG generated the biased source from ADVANTG input, writing an entirely new general fixed-source definition (SDEF). Because volumetric sources were translated into SDEF-format as a finite set of points, the user had to perform a convergence study to determine whether the number of source points used accurately represented the source region. Further, the large number of points that must be written in SDEF-format made the MCNP input and output files excessively long and difficult to debug. ADVANTG now reads SDEF-format distributions and generates corresponding source biasing cards, eliminating the need for a convergence study. Many problems of interest use complicated source regions that are defined using cell rejection. In cell rejection, the source distribution in space is defined using an arbitrarily complex cell and a simple bounding region. Source positions are sampled within the bounding region but accepted only if they fall within the cell; otherwise, the position is resampled entirely. When biasing in space is applied to sources that use rejection sampling, current versions of MCNP do not account for the rejection in setting the source weight of histories, resulting in an 'unfair game'. This problem was circumvented in previous versions of ADVANTG by translating volumetric sources into a finite set of points, which does not alter the mean history weight (w̄). To use biasing parameters without otherwise modifying the original cell-rejection SDEF-format source, ADVANTG users now apply a correction factor for w̄ in post-processing.
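
    The mean-weight issue can be illustrated with a toy rejection-sampled source (the unit square bounding region and disc-shaped "cell" below are hypothetical stand-ins for an MCNP cell-rejection SDEF source): the fraction of sampled positions that survive rejection is exactly the factor by which history weights must be corrected if spatial biasing ignores the rejection step.

```python
import numpy as np

rng = np.random.default_rng(1)

# Bounding region: unit square; "cell": disc of radius 0.5 centred in it
n = 200_000
pts = rng.uniform(0.0, 1.0, size=(n, 2))
inside = (pts[:, 0] - 0.5) ** 2 + (pts[:, 1] - 0.5) ** 2 <= 0.25

# Acceptance fraction = cell area / bounding area
accept_frac = inside.mean()

# If spatial biasing ignores the rejection step, history weights are off by
# this factor; applying it as a post-processing correction (the wbar factor
# described above) restores a fair game.
print(accept_frac)  # ≈ pi/4 ≈ 0.785 for this geometry
```

In the real code the correction is per-source-region rather than a single scalar, but the accounting is the same: total emitted weight must equal total sampled weight.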
A stratified-random sampling approach in ADVANTG is

  7. Reducing process delays for real-time earthquake parameter estimation - An application of KD tree to large databases for Earthquake Early Warning

    NASA Astrophysics Data System (ADS)

    Yin, Lucy; Andrews, Jennifer; Heaton, Thomas

    2018-05-01

    Earthquake parameter estimation using nearest-neighbor searching in a large database of observations can yield reliable predictions. However, in the real-time application of Earthquake Early Warning (EEW) systems, the accuracy gained from a large database is penalized by a significant delay in processing time. We propose to use a multidimensional binary search tree (KD tree) data structure to organize large seismic databases and so reduce the processing time of nearest-neighbor searches. We evaluated the performance of the KD tree on the Gutenberg Algorithm, a database-searching algorithm for EEW. We constructed an offline test to predict peak ground motions using a database with feature sets of waveform filter-bank characteristics, and compared the results with the observed seismic parameters. We concluded that a large database provides more accurate predictions of ground motion information, such as peak ground acceleration, velocity, and displacement (PGA, PGV, PGD), than of source parameters, such as hypocenter distance. Applying the KD tree search to organize the database reduced the average search time by 85% relative to the exhaustive method, making the approach feasible for real-time implementation. The algorithm is straightforward, and the results will reduce the overall time of warning delivery for EEW.
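
    A minimal sketch of the database organization, assuming SciPy is available (the 9-dimensional "filter-bank" features and the associated ground-motion targets below are synthetic stand-ins for the real database): the tree is built once offline, and each real-time query then needs only a logarithmic-depth traversal instead of an exhaustive scan.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(2)

# Toy database: 100k feature vectors (e.g. filter-bank amplitudes), 9-D,
# each paired with an observed ground-motion value (e.g. log-PGA).
db = rng.normal(size=(100_000, 9))
targets = rng.normal(size=100_000)

tree = cKDTree(db)              # built once, offline

query = rng.normal(size=9)      # incoming real-time feature vector

# k nearest neighbours via the tree instead of an exhaustive scan
dist, idx = tree.query(query, k=5)
prediction = targets[idx].mean()  # simple kNN prediction of ground motion

# Cross-check: the tree returns the same neighbours as brute force
brute = np.argsort(np.linalg.norm(db - query, axis=1))[:5]
print(sorted(idx.tolist()) == sorted(brute.tolist()), prediction)
```

The speedup grows with database size, which is what makes large databases compatible with the latency budget of an early-warning system.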

  8. Method for Accurately Calibrating a Spectrometer Using Broadband Light

    NASA Technical Reports Server (NTRS)

    Simmons, Stephen; Youngquist, Robert

    2011-01-01

    A novel method has been developed for performing very fine calibration of a spectrometer. This process is particularly useful for modern miniature charge-coupled device (CCD) spectrometers where a typical factory wavelength calibration has been performed and a finer, more accurate calibration is desired. Typically, the factory calibration is done with a spectral line source that generates light at known wavelengths, allowing specific pixels in the CCD array to be assigned wavelength values. This method is good to about 1 nm across the spectrometer's wavelength range. This new method appears to be accurate to about 0.1 nm, a factor of ten improvement. White light is passed through an unbalanced Michelson interferometer, producing an optical signal with significant spectral variation. A simple theory can be developed to describe this spectral pattern, so by comparing the actual spectrometer output against this predicted pattern, errors in the wavelength assignment made by the spectrometer can be determined.
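
    The "simple theory" is two-beam interference: for path imbalance Δ the transmitted spectrum is proportional to 1 + cos(2πΔ/λ), so fringe maxima fall at wavelengths where Δ/λ is an integer. A sketch with an illustrative (hypothetical) imbalance shows how many known-wavelength markers this puts across the visible band:

```python
import numpy as np

# Unbalanced Michelson: path difference delta imprints fringes on white
# light, I(lambda) ∝ 1 + cos(2*pi*delta/lambda).  Values are illustrative.
delta = 30e-6                              # 30 micron path imbalance
lam = np.linspace(400e-9, 700e-9, 2000)    # nominal wavelength axis (m)

model = 0.5 * (1.0 + np.cos(2 * np.pi * delta / lam))  # predicted pattern

# Fringe maxima sit where delta/lambda is an integer, giving many known
# wavelength markers across the band:
orders = np.arange(int(delta / 700e-9) + 1, int(delta / 400e-9) + 1)
peak_wavelengths = delta / orders          # wavelengths of predicted maxima

print(len(peak_wavelengths))
```

Matching the measured peak pixel positions against `peak_wavelengths` yields tens of calibration points per band, versus the handful a line source provides, which is the origin of the ~0.1 nm accuracy.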

  9. FAST TRACK COMMUNICATION Accurate estimate of α variation and isotope shift parameters in Na and Mg+

    NASA Astrophysics Data System (ADS)

    Sahoo, B. K.

    2010-12-01

    We present accurate calculations of fine-structure constant variation coefficients and isotope shifts in Na and Mg+ using the relativistic coupled-cluster method. In our approach, we are able to discover the roles of various correlation effects explicitly to all orders in these calculations. Most of the results, especially for the excited states, are reported for the first time. It is possible to ascertain suitable anchor and probe lines for the studies of possible variation in the fine-structure constant by using the above results in the considered systems.

  10. Identifiability measures to select the parameters to be estimated in a solid-state fermentation distributed parameter model.

    PubMed

    da Silveira, Christian L; Mazutti, Marcio A; Salau, Nina P G

    2016-07-08

    Process modeling can lead to advantages such as helping in process control, reducing process costs, and improving product quality. This work proposes a solid-state fermentation distributed parameter model composed of seven differential equations with seventeen parameters to represent the process. Parameter estimation with a parameter identifiability analysis (PIA) is also performed to build an accurate model with optimum parameters. Statistical tests were made to verify the model accuracy with the estimated parameters considering different assumptions. The results have shown that the model assuming substrate inhibition better represents the process. It was also shown that eight of the seventeen original model parameters were nonidentifiable, and better results were obtained when these parameters were removed from the estimation procedure. Therefore, PIA can be useful to the estimation procedure, since it may reduce the number of parameters to be evaluated. Further, PIA improved the model results, showing it to be an important procedure. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:905-917, 2016. © 2016 American Institute of Chemical Engineers.
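
    One common way such an identifiability analysis works (the abstract does not specify the exact measure, so this is a generic sensitivity-based sketch with a toy model): build the sensitivity matrix of model outputs with respect to the parameters and inspect its singular values; a near-zero singular value flags a parameter combination the data cannot constrain.

```python
import numpy as np

# Toy model with a deliberately redundant pair: only the sum p[2] + p[3]
# affects the output, so one of the two is structurally non-identifiable.
def model(p, t):
    return p[0] * np.exp(-p[1] * t) + (p[2] + p[3])

t = np.linspace(0.0, 5.0, 50)
p0 = np.array([2.0, 0.8, 0.5, 0.3])

# Finite-difference sensitivity matrix S[i, j] = dy_i / dp_j
eps = 1e-6
S = np.empty((t.size, p0.size))
for j in range(p0.size):
    dp = np.zeros_like(p0)
    dp[j] = eps
    S[:, j] = (model(p0 + dp, t) - model(p0 - dp, t)) / (2 * eps)

# Singular values expose rank deficiency: a near-zero value means some
# parameter combination cannot be estimated from these data.
sv = np.linalg.svd(S, compute_uv=False)
print(sv / sv.max())  # last entry ≈ 0 -> fix or drop a parameter first
```

Removing (or fixing) the flagged parameters before estimation is exactly the step the paper reports as improving the fit of the remaining nine.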

  11. A statistical kinematic source inversion approach based on the QUESO library for uncertainty quantification and prediction

    NASA Astrophysics Data System (ADS)

    Zielke, Olaf; McDougall, Damon; Mai, Martin; Babuska, Ivo

    2014-05-01

    Seismic data, often augmented with geodetic data, are frequently used to invert for the spatio-temporal evolution of slip along a rupture plane. The resulting images of the slip evolution for a single event, inferred by different research teams, often vary distinctly, depending on the adopted inversion approach and rupture model parameterization. This observation raises the question of which of the provided kinematic source inversion solutions is most reliable and most robust, and, more generally, how accurate fault parameterizations and solution predictions are. These issues are not addressed in "standard" source inversion approaches. Here, we present a statistical inversion approach to constrain kinematic rupture parameters from teleseismic body waves. The approach is based (a) on a forward-modeling scheme that computes synthetic (body-)waves for a given kinematic rupture model, and (b) on the QUESO (Quantification of Uncertainty for Estimation, Simulation, and Optimization) library, which uses MCMC algorithms and Bayes' theorem for sample selection. We present Bayesian inversions for rupture parameters in synthetic earthquakes (i.e. for which the exact rupture history is known) in an attempt to identify the cross-over at which further model discretization (spatial and temporal resolution of the parameter space) no longer yields a decreasing misfit. Identification of this cross-over is of importance as it reveals the resolution power of the studied data set (i.e. teleseismic body waves), enabling one to constrain kinematic earthquake rupture histories of real earthquakes at a resolution that is supported by data. In addition, the Bayesian approach allows for mapping complete posterior probability density functions of the desired kinematic source parameters, thus enabling us to rigorously assess the uncertainties in earthquake source inversions.
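
    The Bayesian machinery can be reduced to a one-parameter sketch (the Gaussian-pulse "waveform" and the slip-amplitude parameter below are toy stand-ins, not the paper's forward model): a Metropolis sampler draws from the posterior of the source parameter given noisy synthetics, yielding a full uncertainty estimate rather than a single best-fit value.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy forward model: one kinematic parameter (slip amplitude) maps to a
# synthetic "waveform"; data = synthetics + Gaussian noise.
t = np.linspace(0.0, 10.0, 200)
def forward(slip):
    return slip * np.exp(-0.5 * (t - 5.0) ** 2)

slip_true, sigma = 1.5, 0.05
data = forward(slip_true) + rng.normal(0.0, sigma, t.size)

def log_post(slip):
    if not 0.0 < slip < 5.0:        # flat prior on a physical range
        return -np.inf
    r = data - forward(slip)
    return -0.5 * np.sum(r * r) / sigma**2

# Metropolis sampling maps the posterior, not just a best-fit point
s, lp = 1.0, log_post(1.0)
chain = []
for _ in range(5000):
    prop = s + rng.normal(0.0, 0.05)
    lpp = log_post(prop)
    if np.log(rng.uniform()) < lpp - lp:
        s, lp = prop, lpp
    chain.append(s)
post = np.array(chain[1000:])       # discard burn-in
print(post.mean(), post.std())
```

In the actual study the parameter vector is high-dimensional and the forward model is a teleseismic body-wave synthetic, but the sampling logic is the same.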

  12. Toward Accurate and Quantitative Comparative Metagenomics

    PubMed Central

    Nayfach, Stephen; Pollard, Katherine S.

    2016-01-01

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341

  13. Simultaneous EEG and MEG source reconstruction in sparse electromagnetic source imaging.

    PubMed

    Ding, Lei; Yuan, Han

    2013-04-01

    Electroencephalography (EEG) and magnetoencephalography (MEG) have different sensitivities to differently configured brain activations, making them complementary in providing independent information for better detection and inverse reconstruction of brain sources. In the present study, we developed an integrative approach, which integrates a novel sparse electromagnetic source imaging method, i.e., variation-based cortical current density (VB-SCCD), together with the combined use of EEG and MEG data in reconstructing complex brain activity. To perform simultaneous analysis of multimodal data, we proposed to normalize EEG and MEG signals according to their individual noise levels to create unit-free measures. Our Monte Carlo simulations demonstrated that this integrative approach is capable of reconstructing complex cortical brain activations (up to 10 simultaneously activated and randomly located sources). Results from experimental data showed that complex brain activations evoked in a face recognition task were successfully reconstructed using the integrative approach, which were consistent with other research findings and validated by independent data from functional magnetic resonance imaging using the same stimulus protocol. Reconstructed cortical brain activations from both simulations and experimental data provided precise source localizations as well as accurate spatial extents of localized sources. In comparison with studies using EEG or MEG alone, the performance of cortical source reconstructions using combined EEG and MEG was significantly improved. We demonstrated that this new sparse ESI methodology with integrated analysis of EEG and MEG data could accurately probe spatiotemporal processes of complex human brain activations. This is promising for noninvasively studying large-scale brain networks of high clinical and scientific significance. Copyright © 2011 Wiley Periodicals, Inc.
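
    The noise-level normalization step can be sketched as follows (the random leadfield matrices, channel counts, and noise levels are hypothetical; the real method uses anatomically derived leadfields and a sparse VB-SCCD penalty rather than plain least squares): dividing each modality's data and forward matrix by its own noise level makes microvolt-scale EEG rows and femtotesla-scale MEG rows directly comparable in one joint system.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical forward (leadfield) matrices: EEG in volts, MEG in teslas,
# i.e. wildly different numerical scales.
n_src = 40
L_eeg = rng.normal(size=(32, n_src)) * 1e-6
L_meg = rng.normal(size=(64, n_src)) * 1e-14
x = np.zeros(n_src)
x[[5, 17]] = 1.0                 # two active sources

noise_eeg, noise_meg = 2e-7, 3e-15
d_eeg = L_eeg @ x + rng.normal(0.0, noise_eeg, 32)
d_meg = L_meg @ x + rng.normal(0.0, noise_meg, 64)

# Normalizing by per-modality noise level gives unit-free, comparable rows
A = np.vstack([L_eeg / noise_eeg, L_meg / noise_meg])
b = np.concatenate([d_eeg / noise_eeg, d_meg / noise_meg])

# Joint inverse solution from the combined, whitened system
x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.argsort(x_hat)[-2:])    # indices of the strongest recovered sources
```

Without the normalization, whichever modality has numerically larger values dominates the fit regardless of its information content.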

  14. Blind Source Parameters for Performance Evaluation of Despeckling Filters.

    PubMed

    Biradar, Nagashettappa; Dewal, M L; Rohit, ManojKumar; Gowre, Sanjaykumar; Gundge, Yogesh

    2016-01-01

    The speckle noise is inherent to transthoracic echocardiographic images. A standard noise-free reference echocardiographic image does not exist. The evaluation of filters based on the traditional parameters such as peak signal-to-noise ratio, mean square error, and structural similarity index may not reflect the true filter performance on echocardiographic images. Therefore, the performance of despeckling can be evaluated using blind assessment metrics like the speckle suppression index, speckle suppression and mean preservation index (SMPI), and beta metric. The need for noise-free reference image is overcome using these three parameters. This paper presents a comprehensive analysis and evaluation of eleven types of despeckling filters for echocardiographic images in terms of blind and traditional performance parameters along with clinical validation. The noise is effectively suppressed using the logarithmic neighborhood shrinkage (NeighShrink) embedded with Stein's unbiased risk estimation (SURE). The SMPI is three times more effective compared to the wavelet based generalized likelihood estimation approach. The quantitative evaluation and clinical validation reveal that the filters such as the nonlocal mean, posterior sampling based Bayesian estimation, hybrid median, and probabilistic patch based filters are acceptable whereas median, anisotropic diffusion, fuzzy, and Ripplet nonlinear approximation filters have limited applications for echocardiographic images.
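
    The blind metrics need no reference image because they are ratios of statistics of the noisy and filtered images themselves. A sketch of the speckle suppression index (SSI) on a synthetic frame (the multiplicative-gamma speckle model and the 3x3 mean filter are illustrative stand-ins, not the paper's filters):

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic frame: smooth "tissue" signal with multiplicative speckle
clean = 100.0 + 50.0 * rng.random((128, 128))
noisy = clean * rng.gamma(shape=4.0, scale=0.25, size=clean.shape)

# A crude despeckler: 3x3 mean filter built from shifted copies
def mean3(img):
    p = np.pad(img, 1, mode='edge')
    out = np.zeros_like(img)
    for di in range(3):
        for dj in range(3):
            out += p[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out / 9.0

filtered = mean3(noisy)

# Speckle suppression index: ratio of coefficients of variation.
# SSI < 1 means speckle was reduced; no noise-free reference is required.
ssi = (filtered.std() / filtered.mean()) / (noisy.std() / noisy.mean())
print(ssi)
```

SMPI extends this by additionally penalizing filters that shift the mean brightness, which is why it can rank filters differently from SSI alone.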

  15. Blind Source Parameters for Performance Evaluation of Despeckling Filters

    PubMed Central

    Biradar, Nagashettappa; Dewal, M. L.; Rohit, ManojKumar; Gowre, Sanjaykumar; Gundge, Yogesh

    2016-01-01

    The speckle noise is inherent to transthoracic echocardiographic images. A standard noise-free reference echocardiographic image does not exist. The evaluation of filters based on the traditional parameters such as peak signal-to-noise ratio, mean square error, and structural similarity index may not reflect the true filter performance on echocardiographic images. Therefore, the performance of despeckling can be evaluated using blind assessment metrics like the speckle suppression index, speckle suppression and mean preservation index (SMPI), and beta metric. The need for noise-free reference image is overcome using these three parameters. This paper presents a comprehensive analysis and evaluation of eleven types of despeckling filters for echocardiographic images in terms of blind and traditional performance parameters along with clinical validation. The noise is effectively suppressed using the logarithmic neighborhood shrinkage (NeighShrink) embedded with Stein's unbiased risk estimation (SURE). The SMPI is three times more effective compared to the wavelet based generalized likelihood estimation approach. The quantitative evaluation and clinical validation reveal that the filters such as the nonlocal mean, posterior sampling based Bayesian estimation, hybrid median, and probabilistic patch based filters are acceptable whereas median, anisotropic diffusion, fuzzy, and Ripplet nonlinear approximation filters have limited applications for echocardiographic images. PMID:27298618

  16. Variational estimation of process parameters in a simplified atmospheric general circulation model

    NASA Astrophysics Data System (ADS)

    Lv, Guokun; Koehl, Armin; Stammer, Detlef

    2016-04-01

    Parameterizations are used to simulate the effects of unresolved sub-grid-scale processes in current state-of-the-art climate models. The values of the process parameters, which determine the model's climatology, are usually adjusted manually to reduce the difference between the model mean state and the observed climatology. This process requires detailed knowledge of the model and its parameterizations. In this work, a variational method was used to estimate process parameters in the Planet Simulator (PlaSim). The adjoint code was generated using automatic differentiation of the source code. Some hydrological processes were switched off to remove the influence of zero-order discontinuities. In addition, the nonlinearity of the model limits the feasible assimilation window to about 1 day, which is too short to tune the model's climatology. To extend the feasible assimilation window, nudging terms for all state variables were added to the model's equations, which essentially suppress all unstable directions. In identical twin experiments, we found that the feasible assimilation window could be extended to over 1 year and accurate parameters could be retrieved. Although the nudging terms transform into a damping of the adjoint variables and therefore tend to erase the information of the data over time, assimilating climatological information is shown to provide sufficient information on the parameters. Moreover, the mechanism of this regularization is discussed.
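
    Why nudging extends the feasible window can be seen on a small chaotic system (Lorenz-63 stands in for the unstable atmospheric model; all constants are illustrative): a free run with a perturbed parameter diverges from the observations within a short time, while a run relaxed toward the observations stays close, so a long-window cost function remains well behaved.

```python
import numpy as np

# Lorenz-63 as a stand-in for an unstable model
def step(x, sigma, rho, beta, dt=0.005):
    dx = np.array([sigma * (x[1] - x[0]),
                   x[0] * (rho - x[2]) - x[1],
                   x[0] * x[1] - beta * x[2]])
    return x + dt * dx

# "Truth" run generates the observations
xt = np.array([1.0, 1.0, 1.0])
obs = []
for _ in range(4000):
    xt = step(xt, 10.0, 28.0, 8.0 / 3.0)
    obs.append(xt.copy())
obs = np.array(obs)

# Free run vs nudged run, both with a perturbed parameter rho = 30
def run(nudge_k):
    x = np.array([5.0, -5.0, 20.0])
    err = 0.0
    for i in range(4000):
        x = step(x, 10.0, 30.0, 8.0 / 3.0)
        x = x - 0.005 * nudge_k * (x - obs[i])  # relax toward observations
        err += np.sum((x - obs[i]) ** 2)
    return err / 4000

free_err, nudged_err = run(0.0), run(50.0)
print(free_err, nudged_err)  # nudging suppresses the unstable growth
```

The remaining, bounded misfit of the nudged run still depends smoothly on the perturbed parameter, which is what the variational estimation exploits.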

  17. Electrostatics of proteins in dielectric solvent continua. I. An accurate and efficient reaction field description

    NASA Astrophysics Data System (ADS)

    Bauer, Sebastian; Mathias, Gerald; Tavan, Paul

    2014-03-01

    We present a reaction field (RF) method which accurately solves the Poisson equation for proteins embedded in dielectric solvent continua at a computational effort comparable to that of an electrostatics calculation with polarizable molecular mechanics (MM) force fields. The method combines an approach originally suggested by Egwolf and Tavan [J. Chem. Phys. 118, 2039 (2003)] with concepts generalizing the Born solution [Z. Phys. 1, 45 (1920)] for a solvated ion. First, we derive an exact representation according to which the sources of the RF potential and energy are inducible atomic anti-polarization densities and atomic shielding charge distributions. Modeling these atomic densities by Gaussians leads to an approximate representation. Here, the strengths of the Gaussian shielding charge distributions are directly given in terms of the static partial charges as defined, e.g., by standard MM force fields for the various atom types, whereas the strengths of the Gaussian anti-polarization densities are calculated by a self-consistency iteration. The atomic volumes are also described by Gaussians. To account for covalently overlapping atoms, their effective volumes are calculated by another self-consistency procedure, which guarantees that the dielectric function ɛ(r) is close to one everywhere inside the protein. The Gaussian widths σi of the atoms i are parameters of the RF approximation. The remarkable accuracy of the method is demonstrated by comparison with Kirkwood's analytical solution for a spherical protein [J. Chem. Phys. 2, 351 (1934)] and with computationally expensive grid-based numerical solutions for simple model systems in dielectric continua including a di-peptide (Ac-Ala-NHMe) as modeled by a standard MM force field. The latter example shows how weakly the RF conformational free energy landscape depends on the parameters σi. A summarizing discussion highlights the achievements of the new theory and of its approximate solution particularly by
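
    The Born solution that the method generalizes is compact enough to state numerically (the ionic radius and permittivity below are illustrative): the electrostatic self-energy change of a charge q in a sphere of radius a moved from vacuum into a continuum of relative permittivity ε is ΔG = -q²/(8πε₀a) · (1 - 1/ε).

```python
import numpy as np

e = 1.602176634e-19        # elementary charge (C)
eps0 = 8.8541878128e-12    # vacuum permittivity (F/m)
N_A = 6.02214076e23        # Avogadro constant (1/mol)

def born_energy(q, a, eps):
    """Born solvation energy (J) of charge q in a sphere of radius a."""
    return -q**2 / (8 * np.pi * eps0 * a) * (1.0 - 1.0 / eps)

# A monovalent cation with a = 1.6 Angstrom in water (eps = 80), illustrative
dG = born_energy(e, 1.6e-10, 80.0)
dG_kcal = dG * N_A / 4184.0   # convert J per ion to kcal/mol
print(dG_kcal)                # ≈ -102 kcal/mol for these numbers
```

The RF method replaces this single dielectric sphere with per-atom Gaussian volumes and shielding charges, but the continuum physics it must reproduce in the spherical limit is exactly this expression.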

  18. Electrostatics of proteins in dielectric solvent continua. I. An accurate and efficient reaction field description.

    PubMed

    Bauer, Sebastian; Mathias, Gerald; Tavan, Paul

    2014-03-14

    We present a reaction field (RF) method which accurately solves the Poisson equation for proteins embedded in dielectric solvent continua at a computational effort comparable to that of an electrostatics calculation with polarizable molecular mechanics (MM) force fields. The method combines an approach originally suggested by Egwolf and Tavan [J. Chem. Phys. 118, 2039 (2003)] with concepts generalizing the Born solution [Z. Phys. 1, 45 (1920)] for a solvated ion. First, we derive an exact representation according to which the sources of the RF potential and energy are inducible atomic anti-polarization densities and atomic shielding charge distributions. Modeling these atomic densities by Gaussians leads to an approximate representation. Here, the strengths of the Gaussian shielding charge distributions are directly given in terms of the static partial charges as defined, e.g., by standard MM force fields for the various atom types, whereas the strengths of the Gaussian anti-polarization densities are calculated by a self-consistency iteration. The atomic volumes are also described by Gaussians. To account for covalently overlapping atoms, their effective volumes are calculated by another self-consistency procedure, which guarantees that the dielectric function ε(r) is close to one everywhere inside the protein. The Gaussian widths σ(i) of the atoms i are parameters of the RF approximation. The remarkable accuracy of the method is demonstrated by comparison with Kirkwood's analytical solution for a spherical protein [J. Chem. Phys. 2, 351 (1934)] and with computationally expensive grid-based numerical solutions for simple model systems in dielectric continua including a di-peptide (Ac-Ala-NHMe) as modeled by a standard MM force field. The latter example shows how weakly the RF conformational free energy landscape depends on the parameters σ(i). A summarizing discussion highlights the achievements of the new theory and of its approximate solution particularly by

  19. Acoustic emission source localization based on distance domain signal representation

    NASA Astrophysics Data System (ADS)

    Gawronski, M.; Grabowski, K.; Russek, P.; Staszewski, W. J.; Uhl, T.; Packo, P.

    2016-04-01

    Acoustic emission is a vital non-destructive testing technique and is widely used in industry for damage detection, localisation and characterization. The latter two aspects are particularly challenging, as AE data are typically noisy. What is more, elastic waves generated by an AE event propagate through a structural path and are significantly distorted. This effect is particularly prominent for thin elastic plates, in which the dispersion phenomenon results in severe localisation and characterization issues. Traditional Time Difference of Arrival localisation techniques typically fail when signals are highly dispersive. Hence, algorithms capable of dispersion compensation are sought. This paper presents a method based on the Time-Distance Domain Transform for accurate AE event localisation. The source localisation is found through a minimization problem. The proposed technique focuses on transforming the time signal to the distance domain response that would be recorded at the source. Only basic elastic material properties and the plate thickness are used in the approach, avoiding arbitrary parameter tuning.
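
    The localisation-by-minimization idea can be sketched in one dimension (the linear phase-velocity law and all constants are hypothetical stand-ins for the plate's true dispersion curves): a dispersed signal is numerically back-propagated over trial distances, and only near the true propagation distance does it re-compress into a sharp pulse, so the peak amplitude of the compensated signal acts as the objective.

```python
import numpy as np

fs = 1e6
t = np.arange(2048) / fs
pulse = np.exp(-0.5 * ((t - 1e-4) / 5e-6) ** 2)   # source-time pulse

f = np.fft.rfftfreq(t.size, 1 / fs)
w = 2 * np.pi * f
c = 3000.0 + 0.002 * w          # hypothetical dispersive phase velocity
k = np.zeros_like(w)
k[1:] = w[1:] / c[1:]           # wavenumber k(w) = w / c(w)

d_true = 0.4                    # metres travelled
received = np.fft.irfft(np.fft.rfft(pulse) * np.exp(-1j * k * d_true),
                        t.size)

def peak_after_backprop(d):
    """Back-propagate over trial distance d and return the peak amplitude."""
    comp = np.fft.irfft(np.fft.rfft(received) * np.exp(1j * k * d), t.size)
    return np.abs(comp).max()

# Scan trial distances; the sharpest re-compressed pulse marks the source
trial = np.linspace(0.0, 0.8, 81)
d_est = trial[np.argmax([peak_after_backprop(d) for d in trial])]
print(d_est)
```

With several sensors, intersecting the per-sensor distance estimates yields the source position, without ever picking a dispersion-blurred arrival time.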

  20. SU-D-19A-05: The Dosimetric Impact of Using Xoft Axxent® Electronic Brachytherapy Source TG-43 Dosimetry Parameters for Treatment with the Xoft 30 Mm Diameter Vaginal Applicator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simiele, S; Micka, J; Culberson, W

    2014-06-01

    Purpose: A full TG-43 dosimetric characterization has not been performed for the Xoft Axxent® electronic brachytherapy source (Xoft, a subsidiary of iCAD, San Jose, CA) within the Xoft 30 mm diameter vaginal applicator. Currently, dose calculations are performed using the bare-source TG-43 parameters and do not account for the presence of the applicator. This work focuses on determining the difference between the bare-source and source-in-applicator TG-43 parameters. Both the radial dose function (RDF) and polar anisotropy function (PAF) were computationally determined for the source-in-applicator and bare-source models to determine the impact of using the bare-source dosimetry data. Methods: MCNP5 was used to model the source and the Xoft 30 mm diameter vaginal applicator. All simulations were performed using 0.84p and 0.03e cross-section libraries. All models were developed based on specifications provided by Xoft. The applicator is made of a proprietary polymer material, and simulations were performed using the most conservative chemical composition. An F6 collision-kerma tally was used to determine the RDF and PAF values in water at various dwell positions. The RDF values were normalized to 2.0 cm from the source to accommodate the applicator radius. Source-in-applicator results were compared with bare-source results from this work as well as published bare-source results. Results: For a 0 mm source pullback distance, the updated bare-source model and source-in-applicator RDF values differ by 2% at 3 cm and 4% at 5 cm. The largest PAF disagreements were observed at the distal end of the source and applicator, with up to 17% disagreement at 2 cm and 8% at 8 cm. The bare-source model had RDF values within 2.6% of the published TG-43 data and PAF results within 7.2% at 2 cm. Conclusion: Results indicate that notable differences exist between the bare-source and source-in-applicator TG-43 simulated parameters. Xoft Inc. provided partial funding for this work.

  1. Computation of probabilistic hazard maps and source parameter estimation for volcanic ash transport and dispersion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Madankan, R.; Pouget, S.; Singla, P., E-mail: psingla@buffalo.edu

    Volcanic ash advisory centers are charged with forecasting the movement of volcanic ash plumes for aviation, health and safety preparation. Deterministic mathematical equations model the advection and dispersion of these plumes. However, initial plume conditions – height, profile of particle location, volcanic vent parameters – are known only approximately at best, and other features of the governing system, such as the wind field, are stochastic. These uncertainties make forecasting plume motion difficult. As a result, ash advisories based on a deterministic approach tend to be conservative and many times over- or underestimate the extent of a plume. This paper presents an end-to-end framework for a probabilistic approach to ash plume forecasting. The framework uses an ensemble of solutions, guided by the Conjugate Unscented Transform (CUT) method for evaluating expectation integrals. This ensemble is used to construct a polynomial chaos expansion that can be sampled cheaply to provide a probabilistic model forecast. The CUT method is then combined with a minimum variance condition to provide a full posterior pdf of the uncertain source parameters, based on observed satellite imagery. The April 2010 eruption of the Eyjafjallajökull volcano in Iceland is employed as a test example. The puff advection/dispersion model is used to hindcast the motion of the ash plume through time, concentrating on the period 14-16 April 2010. Variability in the height and particle loading of that eruption is introduced through a volcano column model called bent. Output uncertainty due to the assumed uncertain input parameter probability distributions, and a probabilistic spatial-temporal estimate of ash presence, are computed.
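
    The final step, a probabilistic spatial-temporal estimate of ash presence, can be sketched as a weighted exceedance fraction over ensemble members. The member weights here merely stand in for quadrature or sampling weights, and all inputs are illustrative assumptions rather than the paper's actual model output:

```python
def ash_probability(members, weights, threshold):
    """Probability of ash presence per grid cell from an ensemble of
    dispersion-model runs: at each cell, the weighted fraction of members
    whose predicted concentration exceeds the threshold."""
    total = sum(weights)
    ncells = len(members[0])  # each member is a flat list of cell concentrations
    return [sum(w for m, w in zip(members, weights) if m[c] > threshold) / total
            for c in range(ncells)]
```

    A cell with probability near 1 would appear in the advisory regardless of which ensemble member is closest to reality, which is the practical payoff of the probabilistic framing over a single deterministic run.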

  2. Spectroscopic confirmation of the optical identification of X-ray sources used to determine accurate positions for the anomalous X-ray pulsars 1E 2259+58.6 and 4U 0142+61

    NASA Astrophysics Data System (ADS)

    van den Berg, M.; Verbunt, F.

    2001-03-01

    Optical spectra show that two proposed counterparts for X-ray sources detected near 1E 2259+58.6 are late G stars, and a proposed counterpart for a source near 4U 0142+61 is a dMe star. The X-ray luminosities are as expected for such stars. We thus confirm the optical identification of the three X-ray objects, and thereby the correctness of the accurate positions for 1E 2259+58.6 and 4U 0142+61 based on them. Based on observations made with the William Herschel Telescope operated on the island of La Palma by the Isaac Newton Group in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias.

  3. Earthquake source parameters along the Hellenic subduction zone and numerical simulations of historical tsunamis in the Eastern Mediterranean

    NASA Astrophysics Data System (ADS)

    Yolsal-Çevikbilen, Seda; Taymaz, Tuncay

    2012-04-01

    We studied the source mechanism parameters and slip distributions of earthquakes with Mw ≥ 5.0 that occurred during 2000-2008 along the Hellenic subduction zone, using teleseismic P- and SH-waveform inversion methods. In addition, the major and well-known earthquake-induced Eastern Mediterranean tsunamis (e.g., 365, 1222, 1303, 1481, 1494, 1822 and 1948) were numerically simulated, and several hypothetical tsunami scenarios were proposed to demonstrate the characteristics of tsunami waves and propagation and the effects of coastal topography. Analogies with current plate boundaries, earthquake source mechanisms, various earthquake moment tensor catalogues and several empirical self-similarity equations, valid at global or local scales, were used to assign plausible source parameters, which constitute the initial and boundary conditions in the simulations. Teleseismic inversion results showed that earthquakes along the Hellenic subduction zone fall into three major categories: [1] earthquakes exhibiting E-W extension within the overriding Aegean plate; [2] earthquakes related to the African-Aegean convergence; and [3] earthquakes lying within the subducting African plate. Normal faulting mechanisms with left-lateral strike-slip components were observed at the eastern part of the Hellenic subduction zone, and we suggest that they are probably associated with the overriding Aegean plate. Earthquakes involved in the convergence between the Aegean and Eastern Mediterranean lithospheres showed thrust faulting mechanisms with strike-slip components and shallow focal depths (h < 45 km). Deeper earthquakes mainly occurred in the subducting African plate and presented dominantly strike-slip faulting mechanisms. Slip distributions on the fault planes showed both complex and simple rupture propagation, depending on the variation of source mechanism and faulting geometry. We calculated low stress drop

  4. GGOS and the EOP - the key role of SLR for a stable estimation of highly accurate Earth orientation parameters

    NASA Astrophysics Data System (ADS)

    Bloßfeld, Mathis; Panzetta, Francesca; Müller, Horst; Gerstl, Michael

    2016-04-01

    The GGOS vision is to integrate geometric and gravimetric observation techniques to estimate consistent geodetic-geophysical parameters. In order to reach this goal, the common estimation of station coordinates, Stokes coefficients and Earth Orientation Parameters (EOP) is necessary. Satellite Laser Ranging (SLR) provides the ability to study correlations between the different parameter groups, since the observed satellite orbit dynamics are sensitive to the above-mentioned geodetic parameters. To decrease the correlations, SLR observations to multiple satellites have to be combined. In this paper, we compare the estimated EOP of (i) single-satellite SLR solutions and (ii) multi-satellite SLR solutions. To this end, we jointly estimate station coordinates, EOP, Stokes coefficients and orbit parameters using different satellite constellations. A special focus of this investigation is the de-correlation of the different geodetic parameter groups through the combination of SLR observations. Besides SLR observations to spherical satellites (commonly used), we discuss the impact of SLR observations to non-spherical satellites such as, e.g., the JASON-2 satellite. The goal of this study is to discuss the existing parameter interactions and to present a strategy for obtaining reliable estimates of station coordinates, EOP, orbit parameters and Stokes coefficients in one common adjustment. Thereby, the benefits of a multi-satellite SLR solution are evaluated.

  5. Method and apparatus for accurately manipulating an object during microelectrophoresis

    DOEpatents

    Parvin, Bahram A.; Maestre, Marcos F.; Fish, Richard H.; Johnston, William E.

    1997-01-01

    An apparatus using electrophoresis provides accurate manipulation of an object on a microscope stage for further manipulations and reactions. The present invention also provides an inexpensive and easily accessible means to move an object without damage to the object. A plurality of electrodes are coupled to the stage in an array whereby the electrode array allows for distinct manipulations of the electric field for accurate manipulations of the object. There is an electrode array control coupled to the plurality of electrodes for manipulating the electric field. In an alternative embodiment, a chamber is provided on the stage to hold the object. The plurality of electrodes are positioned in the chamber, and the chamber is filled with fluid. The system can be automated using visual servoing, which manipulates the control parameters, i.e., x, y stage, applying the field, etc., after extracting the significant features directly from image data. Visual servoing includes an imaging device and computer system to determine the location of the object. A second stage having a plurality of tubes positioned on top of the second stage, can be accurately positioned by visual servoing so that one end of one of the plurality of tubes surrounds at least part of the object on the first stage.

  6. Method and apparatus for accurately manipulating an object during microelectrophoresis

    DOEpatents

    Parvin, B.A.; Maestre, M.F.; Fish, R.H.; Johnston, W.E.

    1997-09-23

    An apparatus using electrophoresis provides accurate manipulation of an object on a microscope stage for further manipulations and reactions. The present invention also provides an inexpensive and easily accessible means to move an object without damage to the object. A plurality of electrodes are coupled to the stage in an array whereby the electrode array allows for distinct manipulations of the electric field for accurate manipulations of the object. There is an electrode array control coupled to the plurality of electrodes for manipulating the electric field. In an alternative embodiment, a chamber is provided on the stage to hold the object. The plurality of electrodes are positioned in the chamber, and the chamber is filled with fluid. The system can be automated using visual servoing, which manipulates the control parameters, i.e., x, y stage, applying the field, etc., after extracting the significant features directly from image data. Visual servoing includes an imaging device and computer system to determine the location of the object. A second stage having a plurality of tubes positioned on top of the second stage, can be accurately positioned by visual servoing so that one end of one of the plurality of tubes surrounds at least part of the object on the first stage. 11 figs.

  7. Numerical Procedure to Forecast the Tsunami Parameters from a Database of Pre-Simulated Seismic Unit Sources

    NASA Astrophysics Data System (ADS)

    Jiménez, César; Carbonel, Carlos; Rojas, Joel

    2018-04-01

    We have implemented a numerical procedure to forecast the parameters of a tsunami, such as the arrival time of the front of the first wave and the maximum wave height, at real and virtual tidal stations along the Peruvian coast. For this purpose, a database of pre-computed synthetic tsunami waveforms (or Green's functions) was obtained from numerical simulations of seismic unit sources (dimension: 50 × 50 km²) for subduction zones from southern Chile to northern Mexico. A bathymetry resolution of 30 arc-sec (approximately 927 m) was used. The resulting tsunami waveform is obtained from the superposition of the synthetic waveforms corresponding to the seismic unit sources contained within the tsunami source geometry. The numerical procedure was applied to the Chilean tsunami of April 1, 2014. The results show a very good correlation for stations with wave amplitudes greater than 1 m: for the Arica tide station an error (relative to the maximum height of the observed waveform) of 3.5% was obtained, and for the Callao station the error was 12%. The largest error, 53.5%, occurred at Chimbote; however, owing to the low amplitude of the wave there (<1 m), this overestimate is not important for evacuation purposes. The aim of the present research is tsunami early warning, where speed is required rather than accuracy, so the results should be taken as preliminary.
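
    The superposition step can be sketched as a linear combination of pre-computed unit-source waveforms, followed by extraction of the two forecast parameters. The toy waveforms, sampling interval and detection threshold below are illustrative assumptions, not values from the database:

```python
def superpose_waveform(unit_waveforms, slips):
    """Tsunami waveform at a tide station as a linear superposition of
    pre-computed unit-source waveforms (Green's functions), each scaled
    by the slip assigned to that unit source."""
    n = len(unit_waveforms[0])
    assert all(len(w) == n for w in unit_waveforms)
    return [sum(s * w[i] for s, w in zip(slips, unit_waveforms))
            for i in range(n)]

def forecast_parameters(waveform, dt, threshold=0.01):
    """Arrival time of the first wave (first sample above threshold)
    and maximum wave height."""
    arrival = next((i * dt for i, h in enumerate(waveform)
                    if abs(h) > threshold), None)
    return arrival, max(waveform)
```

    Because the superposition is linear, the expensive propagation simulations are done once per unit source, and a forecast for any candidate source geometry reduces to a cheap weighted sum, which is what makes the approach fast enough for early warning.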

  8. Numerical Procedure to Forecast the Tsunami Parameters from a Database of Pre-Simulated Seismic Unit Sources

    NASA Astrophysics Data System (ADS)

    Jiménez, César; Carbonel, Carlos; Rojas, Joel

    2017-09-01

    We have implemented a numerical procedure to forecast the parameters of a tsunami, such as the arrival time of the front of the first wave and the maximum wave height, at real and virtual tidal stations along the Peruvian coast. For this purpose, a database of pre-computed synthetic tsunami waveforms (or Green's functions) was obtained from numerical simulations of seismic unit sources (dimension: 50 × 50 km²) for subduction zones from southern Chile to northern Mexico. A bathymetry resolution of 30 arc-sec (approximately 927 m) was used. The resulting tsunami waveform is obtained from the superposition of the synthetic waveforms corresponding to the seismic unit sources contained within the tsunami source geometry. The numerical procedure was applied to the Chilean tsunami of April 1, 2014. The results show a very good correlation for stations with wave amplitudes greater than 1 m: for the Arica tide station an error (relative to the maximum height of the observed waveform) of 3.5% was obtained, and for the Callao station the error was 12%. The largest error, 53.5%, occurred at Chimbote; however, owing to the low amplitude of the wave there (<1 m), this overestimate is not important for evacuation purposes. The aim of the present research is tsunami early warning, where speed is required rather than accuracy, so the results should be taken as preliminary.

  9. Numerical optimization methods for controlled systems with parameters

    NASA Astrophysics Data System (ADS)

    Tyatyushkin, A. I.

    2017-10-01

    First- and second-order numerical methods for optimizing controlled dynamical systems with parameters are discussed. In problems without parameter constraints, the control parameters are optimized by applying the conjugate gradient method; a more accurate numerical solution is produced by Newton's method, based on a second-order functional increment formula. Next, a general optimal control problem with state constraints and with parameters appearing on the right-hand sides of the controlled system and in the initial conditions is considered. This complicated problem is reduced to a mathematical programming one, followed by a search for optimal parameter values and control functions using a multimethod algorithm. The performance of the proposed technique is demonstrated by solving application problems.
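
    As a minimal illustration of the first-order machinery, a conjugate gradient iteration for the quadratic model problem min f(x) = 0.5 x^T A x - b^T x (equivalently, solving Ax = b with A symmetric positive definite) might look as follows; the matrix, tolerance and iteration cap are illustrative assumptions, not the paper's algorithm:

```python
def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=100):
    """Minimize f(x) = 0.5 x^T A x - b^T x for symmetric positive
    definite A via the conjugate gradient method."""
    n = len(b)
    x = list(x0) if x0 is not None else [0.0] * n
    matvec = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    r = [bi - Axi for bi, Axi in zip(b, matvec(x))]   # residual b - Ax
    p = list(r)                                       # initial search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        if rs < tol:
            break
        Ap = matvec(p)
        alpha = rs / sum(pi * Api for pi, Api in zip(p, Ap))  # exact line search
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * Api for ri, Api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]  # conjugate update
        rs = rs_new
    return x
```

    In exact arithmetic the method terminates in at most n iterations; for the general nonlinear problems in the paper, the quadratic model is only a local approximation and the iteration is restarted as needed.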

  10. Sound source localization method in an environment with flow based on Amiet-IMACS

    NASA Astrophysics Data System (ADS)

    Wei, Long; Li, Min; Qin, Sheng; Fu, Qiang; Yang, Debin

    2017-05-01

    A sound source localization method is proposed to localize and analyze the sound source in an environment with airflow. It combines the improved mapping of acoustic correlated sources (IMACS) method and Amiet's method, and is called Amiet-IMACS. It can localize uncorrelated and correlated sound sources in airflow. To implement this approach, Amiet's method is used to correct the sound propagation path in 3D, which improves the accuracy of the array manifold matrix and decreases the position error of the localized source. Then, the mapping of acoustic correlated sources (MACS) method, a high-resolution sound source localization algorithm, is improved by self-adjusting the constraint parameter at each iteration to increase convergence speed. A sound source localization experiment using a pair of loudspeakers in an anechoic wind tunnel under different flow speeds is conducted. The experiment demonstrates the advantage of Amiet-IMACS in localizing the sound source position more accurately than IMACS alone in an environment with flow. Moreover, the aerodynamic noise produced by a NASA EPPLER 862 STRUT airfoil model in airflow with a velocity of 80 m/s is localized using the proposed method, which further proves its effectiveness in a flow environment. Finally, the relationship between the source position of this airfoil model and its frequency, along with its generation mechanism, is determined and interpreted.

  11. Cell-accurate optical mapping across the entire developing heart.

    PubMed

    Weber, Michael; Scherf, Nico; Meyer, Alexander M; Panáková, Daniela; Kohl, Peter; Huisken, Jan

    2017-12-29

    Organogenesis depends on orchestrated interactions between individual cells and morphogenetically relevant cues at the tissue level. This is true for the heart, whose function critically relies on well-ordered communication between neighboring cells, which is established and fine-tuned during embryonic development. For an integrated understanding of the development of structure and function, we need to move from isolated snap-shot observations of either microscopic or macroscopic parameters to simultaneous and, ideally continuous, cell-to-organ scale imaging. We introduce cell-accurate three-dimensional Ca2+-mapping of all cells in the entire electro-mechanically uncoupled heart during the looping stage of live embryonic zebrafish, using high-speed light sheet microscopy and tailored image processing and analysis. We show how myocardial region-specific heterogeneity in cell function emerges during early development and how structural patterning goes hand-in-hand with functional maturation of the entire heart. Our method opens the way to systematic, scale-bridging, in vivo studies of vertebrate organogenesis by cell-accurate structure-function mapping across entire organs.

  12. Cell-accurate optical mapping across the entire developing heart

    PubMed Central

    Meyer, Alexander M; Panáková, Daniela; Kohl, Peter

    2017-01-01

    Organogenesis depends on orchestrated interactions between individual cells and morphogenetically relevant cues at the tissue level. This is true for the heart, whose function critically relies on well-ordered communication between neighboring cells, which is established and fine-tuned during embryonic development. For an integrated understanding of the development of structure and function, we need to move from isolated snap-shot observations of either microscopic or macroscopic parameters to simultaneous and, ideally continuous, cell-to-organ scale imaging. We introduce cell-accurate three-dimensional Ca2+-mapping of all cells in the entire electro-mechanically uncoupled heart during the looping stage of live embryonic zebrafish, using high-speed light sheet microscopy and tailored image processing and analysis. We show how myocardial region-specific heterogeneity in cell function emerges during early development and how structural patterning goes hand-in-hand with functional maturation of the entire heart. Our method opens the way to systematic, scale-bridging, in vivo studies of vertebrate organogenesis by cell-accurate structure-function mapping across entire organs. PMID:29286002

  13. Dietary vitamin E dosage and source affects meat quality parameters in light weight lambs.

    PubMed

    Leal, Leonel N; Beltrán, José A; Alonso, Verónica; Bello, José M; den Hartog, Leo A; Hendriks, Wouter H; Martín-Tereso, Javier

    2018-03-01

    Supra-nutritional vitamin E supplementation is a commonly used approach to delay lipid oxidation and colour deterioration in lamb and beef meat marketed under modified atmosphere packaging. However, these applications lack a precise calibration of dose for the desired effect and, in addition, limited information is available regarding the use of natural vitamin E for this purpose. Three hundred and sixty Rasa Aragonesa lambs were fed diets supplemented with all-rac-α-tocopheryl acetate (250, 500, 1000 and 2000 mg kg⁻¹ compound feed) or RRR-α-tocopheryl acetate (125, 250, 500 and 1000 mg kg⁻¹ compound feed), or a basal diet without vitamin E supplementation, for 14 days before slaughter at 25.8 ± 1.67 kg body weight. Vitamin E supplementation had no effect (P > 0.05) on average daily weight gain, feed intake or feed efficiency. Display time had larger effects on lipid oxidation, colour stability, myoglobin forms and meat discolouration parameters than vitamin E supplementation. However, vitamin E source and dosage significantly extended meat shelf-life as indicated by lipid oxidation, redness, hue angle, metmyoglobin formation, deoxymyoglobin formation, A580-630 and ISO2. The quantification of these effects demonstrated that the biological activity value of 1.36 used to distinguish the two vitamin E sources is not appropriate for their meat quality-enhancing properties. © 2017 Society of Chemical Industry.

  14. Allele-sharing models: LOD scores and accurate linkage tests.

    PubMed

    Kong, A; Cox, N J

    1997-11-01

    Starting with a test statistic for linkage analysis based on allele sharing, we propose an associated one-parameter model. Under general missing-data patterns, this model allows exact calculation of likelihood ratios and LOD scores and has been implemented by a simple modification of existing software. Most important, accurate linkage tests can be performed. Using an example, we show that some previously suggested approaches to handling less than perfectly informative data can be unacceptably conservative. Situations in which this model may not perform well are discussed, and an alternative model that requires additional computations is suggested.
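
    The one-parameter likelihood-ratio idea can be sketched as follows: given normalized per-family allele-sharing scores z_i, the likelihood ratio for the model is the product of terms (1 + delta * z_i), maximized over the single parameter delta, and the LOD score is the base-10 log of that maximum. The scores, admissible range and grid maximization below are illustrative assumptions, not the exact Kong-Cox implementation:

```python
import math

def one_parameter_lod(z_scores, deltas):
    """Maximize sum_i log(1 + delta * z_i) over a grid of candidate deltas
    and return (delta_hat, LOD), where LOD = log10 L(delta_hat) / L(0).
    delta = 0 corresponds to the null hypothesis of no linkage."""
    best_d, best_ll = 0.0, 0.0   # L(0) = 1, so the null log-likelihood is 0
    for d in deltas:
        if any(1 + d * z <= 0 for z in z_scores):
            continue  # outside the admissible parameter range
        ll = sum(math.log(1 + d * z) for z in z_scores)
        if ll > best_ll:
            best_d, best_ll = d, ll
    return best_d, best_ll / math.log(10)
```

    Because the maximization is constrained to the admissible range, the LOD is never negative here, matching the one-sided nature of a linkage test.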

  15. Allele-sharing models: LOD scores and accurate linkage tests.

    PubMed Central

    Kong, A; Cox, N J

    1997-01-01

    Starting with a test statistic for linkage analysis based on allele sharing, we propose an associated one-parameter model. Under general missing-data patterns, this model allows exact calculation of likelihood ratios and LOD scores and has been implemented by a simple modification of existing software. Most important, accurate linkage tests can be performed. Using an example, we show that some previously suggested approaches to handling less than perfectly informative data can be unacceptably conservative. Situations in which this model may not perform well are discussed, and an alternative model that requires additional computations is suggested. PMID:9345087

  16. Open-source software for collision detection in external beam radiation therapy

    NASA Astrophysics Data System (ADS)

    Suriyakumar, Vinith M.; Xu, Renee; Pinter, Csaba; Fichtinger, Gabor

    2017-03-01

    PURPOSE: Collision detection for external beam radiation therapy (RT) is important for eliminating the need for dry runs that aim to ensure patient safety. Commercial treatment planning systems (TPS) offer this feature, but they are expensive and proprietary. Cobalt-60 RT machines are a viable solution for RT practice in low-budget scenarios. However, such clinics are hesitant to invest in these machines due to a lack of affordable treatment planning software. We propose the creation of an open-source room's eye view visualization module with automated collision detection as part of the development of an open-source TPS. METHODS: An openly accessible linac 3D geometry model is sliced into the different components of the treatment machine. The model's movements are based on the International Electrotechnical Commission standard. Automated collision detection is implemented between the treatment machine's components. RESULTS: The room's eye view module was built in C++ as part of SlicerRT, an RT research toolkit built on 3D Slicer. The module was tested using head-and-neck and prostate RT plans. These tests verified that the module accurately modeled the movements of the treatment machine and radiation beam. Automated collision detection was verified using tests in which geometric parameters of the machine's components were changed, demonstrating accurate collision detection. CONCLUSION: Room's eye view visualization and automated collision detection are essential in a Cobalt-60 treatment planning system. Development of these features will advance the creation of an open-source TPS that could increase the feasibility of adopting Cobalt-60 RT.
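
    A common building block for such automated collision detection is an axis-aligned bounding-box (AABB) overlap test between machine components, often used as a cheap first pass before testing the full geometry. The box representation below is an illustrative assumption, not SlicerRT's actual implementation:

```python
def aabb_collides(box_a, box_b):
    """Overlap test between two axis-aligned bounding boxes, each given as
    ((xmin, ymin, zmin), (xmax, ymax, zmax)). Two boxes overlap iff their
    intervals overlap on every axis."""
    (a_lo, a_hi), (b_lo, b_hi) = box_a, box_b
    return all(a_lo[i] <= b_hi[i] and b_lo[i] <= a_hi[i] for i in range(3))
```

    In a treatment-machine model, each component's box would be recomputed after applying the gantry, collimator and couch transforms, and only pairs whose boxes overlap would be passed to an exact mesh-level test.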

  17. A more accurate method using MOVES (Motor Vehicle Emission Simulator) to estimate emission burden for regional-level analysis.

    PubMed

    Liu, Xiaobo

    2015-07-01

    The U.S. Environmental Protection Agency's (EPA) Motor Vehicle Emission Simulator (MOVES) is required by the EPA to replace MOBILE6 as the official on-road emission model. Incorporating annual vehicle miles traveled (VMT) by Highway Performance Monitoring System (HPMS) vehicle class, MOVES allocates VMT from HPMS classes to MOVES source (vehicle) types and calculates emission burdens by MOVES source type. However, the calculated running emission burden by MOVES source type may deviate from the actual emission burden because of the MOVES source population, specifically the population fraction by MOVES source type within each HPMS vehicle class. The deviation is also a result of the use of the universal set of parameters, i.e., the relative mileage accumulation rate (relativeMAR), packaged in the MOVES default database. This paper presents a novel approach that adjusts the relativeMAR to eliminate the impact of MOVES source population on running exhaust emissions while keeping start and evaporative emissions unchanged, for both MOVES2010b and MOVES2014. Results from MOVES runs using this approach indicated significant improvements in VMT distribution and emission burden estimation for each MOVES source type. The deviation of VMT by MOVES source type is reduced by this approach from 12% to less than 0.05% for MOVES2010b and from 50% to less than 0.2% for MOVES2014, except for MOVES source type 53, which still shows about 30% deviation. The improved VMT distribution eliminates the emission burden deviation for each MOVES source type. For MOVES2010b, the deviation of emission burdens decreases from -12% for particulate matter less than 2.5 μm (PM2.5) and -9% for carbon monoxide (CO) to less than 0.002%. For MOVES2014, it drops from 80% for CO and 97% for PM2.5 to 0.006%. This approach is developed to more accurately estimate the total emission burdens using EPA's MOVES, both MOVES2010b and MOVES2014, by redistributing vehicle mile traveled (VMT) by
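
    The allocation step the paper adjusts can be sketched as weighting each source type's population by its relativeMAR and distributing the HPMS-class VMT in proportion. The source-type codes and numbers below are illustrative assumptions, not MOVES default data:

```python
def allocate_vmt(hpms_vmt, population, relative_mar):
    """Allocate one HPMS vehicle class's annual VMT across MOVES source
    types in proportion to (source-type population) x (relativeMAR).
    Changing either factor shifts the resulting VMT split, which is the
    sensitivity the paper's relativeMAR adjustment exploits."""
    weights = {st: population[st] * relative_mar[st] for st in population}
    total = sum(weights.values())
    return {st: hpms_vmt * w / total for st, w in weights.items()}
```

    With equal populations, a source type accumulating three times the mileage per vehicle receives three times the VMT share, so errors in either input propagate directly into the per-source-type emission burden.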

  18. Improvement and performance evaluation of the perturbation source method for an exact Monte Carlo perturbation calculation in fixed source problems

    NASA Astrophysics Data System (ADS)

    Sakamoto, Hiroki; Yamamoto, Toshihiro

    2017-09-01

    This paper presents an improvement and a performance evaluation of the "perturbation source method", which is one of the Monte Carlo perturbation techniques. The formerly proposed perturbation source method was first-order accurate, although it is known that the method can easily be extended to an exact perturbation method. A transport equation for calculating the exact flux difference caused by a perturbation is solved. A perturbation particle representing a flux difference is explicitly transported in the perturbed system, instead of in the unperturbed system. The source term of the transport equation is defined by the unperturbed flux and the cross-section (or optical parameter) changes. The unperturbed flux is provided by an "on-the-fly" technique during the course of the ordinary fixed-source calculation for the unperturbed system. A set of perturbation particles is started at the collision points in the perturbed region and tracked until death. For a perturbation in a smaller portion of the whole domain, the efficiency of the perturbation source method can be improved by using a virtual scattering coefficient or cross section in the perturbed region, forcing collisions. Performance is evaluated by comparing the proposed method to other Monte Carlo perturbation methods. Numerical tests performed for particle transport in a two-dimensional geometry reveal that the perturbation source method is less effective than the correlated sampling method for a perturbation in a larger portion of the whole domain. However, for a perturbation in a smaller portion, the perturbation source method outperforms the correlated sampling method. The efficiency depends strongly on the adjustment of the new virtual scattering coefficient or cross section.

  19. Dynamic Source Inversion of a M6.5 Intraslab Earthquake in Mexico: Application of a New Parallel Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Díaz-Mojica, J. J.; Cruz-Atienza, V. M.; Madariaga, R.; Singh, S. K.; Iglesias, A.

    2013-05-01

    We introduce a novel approach for imaging earthquake rupture dynamics from ground motion records, based on a parallel genetic algorithm (GA). The method follows the elliptical dynamic-rupture-patch approach introduced by Di Carli et al. (2010) and has been carefully verified through different numerical tests (Díaz-Mojica et al., 2012). Apart from the five model parameters defining the patch geometry, our dynamic source description has four more parameters: the stress drop inside the nucleation and elliptical patches, and two friction parameters, the slip-weakening distance and the change of the friction coefficient. These parameters are constant over the rupture surface. The forward dynamic source problem, embedded in the GA inverse method, uses a highly accurate computational solver, namely the staggered-grid split-node method. The synthetic inversion presented here shows that the source model parameterization is suitable for the GA, and that short-scale source dynamic features are well resolved in spite of low-pass filtering of the data for periods comparable to the source duration. Since there is always uncertainty in the propagation medium as well as in the source location and the focal mechanism, we have introduced a statistical approach to generate a set of solution models so that the envelope of the corresponding synthetic waveforms explains the observed data as much as possible. We applied the method to the 2012 Mw 6.5 intraslab Zumpango, Mexico earthquake and determined several fundamental source parameters that are in accordance with different and completely independent estimates for Mexican and worldwide earthquakes. Our weighted-average final model satisfactorily explains the eastward rupture directivity observed in the recorded data. Some parameters found for the Zumpango earthquake are: Δτ = 30.2 ± 6.2 MPa, Er = 0.68 ± 0.36 × 10^15 J, G = 1.74 ± 0.44 × 10^15 J, η = 0.27 ± 0.11, Vr/Vs = 0.52 ± 0.09 and Mw = 6.64 ± 0.07; for the stress drop
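
    A minimal real-coded genetic algorithm of the kind used for such inversions might look as follows. It minimizes a generic misfit over box-bounded parameters and is a sketch under assumed operators (tournament selection, blend crossover, Gaussian mutation), not the authors' parallel GA or their dynamic-rupture forward solver:

```python
import random

def genetic_minimize(misfit, bounds, pop_size=40, generations=60, seed=0):
    """Minimize misfit(x) over box constraints via a simple real-coded GA."""
    rng = random.Random(seed)
    clip = lambda v, lo, hi: max(lo, min(hi, v))
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=misfit)
        new_pop = scored[:2]                       # elitism: keep the two best
        while len(new_pop) < pop_size:
            # two parents, each the best of a random tournament of three
            p1, p2 = (min(rng.sample(scored, 3), key=misfit) for _ in range(2))
            child = []
            for d, (lo, hi) in enumerate(bounds):
                v = rng.uniform(min(p1[d], p2[d]), max(p1[d], p2[d]))  # blend
                if rng.random() < 0.1:             # occasional Gaussian mutation
                    v += rng.gauss(0.0, 0.1 * (hi - lo))
                child.append(clip(v, lo, hi))
            new_pop.append(child)
        pop = new_pop
    return min(pop, key=misfit)
```

    In the inversion setting, misfit would compare synthetic waveforms from a dynamic-rupture forward run against the observed records, which is why evaluating the population in parallel pays off.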

  20. Progress Toward Accurate Measurements of Power Consumptions of DBD Plasma Actuators

    NASA Technical Reports Server (NTRS)

    Ashpis, David E.; Laun, Matthew C.; Griebeler, Elmer L.

    2012-01-01

The accurate measurement of power consumption by Dielectric Barrier Discharge (DBD) plasma actuators is a challenge due to the characteristics of the actuator current signal. Micro-discharges generate high-amplitude, high-frequency current spike transients superimposed on a low-amplitude, low-frequency current. We have used a high-speed digital oscilloscope to measure the actuator power consumption using the Shunt Resistor method and the Monitor Capacitor method. The measurements were performed simultaneously and compared to each other in a time-accurate manner. It was found that the low signal-to-noise ratio of the oscilloscopes used, in combination with the high dynamic range of the current spikes, makes the Shunt Resistor method inaccurate. An innovative, nonlinear signal compression circuit was applied to the actuator current signal and yielded excellent agreement between the two methods. The paper describes the issues and challenges associated with performing accurate power measurements. It provides insights into the two methods, including new insight into the Lissajous curve of the Monitor Capacitor method. Extension to a broad range of parameters and further development of the compression hardware will be performed in future work.
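The Monitor Capacitor method mentioned in this record derives power from the charge-voltage Lissajous figure: the cycle-averaged power equals the excitation frequency times the enclosed loop area. A minimal sketch with a synthetic elliptical loop follows; all actuator values are hypothetical, not measurements from the report.

```python
import math

def lissajous_power(v_applied, q_monitor, freq_hz):
    """Cycle-averaged actuator power: excitation frequency times the area of
    the closed charge-voltage (Lissajous) loop, via the shoelace formula."""
    n = len(v_applied)
    area = 0.5 * abs(sum(v_applied[i] * q_monitor[(i + 1) % n]
                         - v_applied[(i + 1) % n] * q_monitor[i]
                         for i in range(n)))
    return freq_hz * area

# Synthetic loop: sinusoidal drive with a phase-shifted charge response traces
# an ellipse of area pi*V0*Q0*sin(phi). All values below are hypothetical.
f, V0, Q0, phi = 2000.0, 8000.0, 2e-7, 0.3
ts = [i / 1000.0 for i in range(1000)]      # one normalized cycle
v = [V0 * math.sin(2.0 * math.pi * t) for t in ts]
q = [Q0 * math.sin(2.0 * math.pi * t - phi) for t in ts]
power = lissajous_power(v, q, f)
```

For the ideal ellipse the loop area is analytically pi·V0·Q0·sin(phi), so the shoelace estimate can be checked against pi·f·V0·Q0·sin(phi).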

  1. Parameter space of experimental chaotic circuits with high-precision control parameters.

    PubMed

    de Sousa, Francisco F G; Rubinger, Rero M; Sartorelli, José C; Albuquerque, Holokx A; Baptista, Murilo S

    2016-08-01

We report high-resolution measurements that experimentally confirm a spiral cascade structure and a scaling relationship of shrimps in Chua's circuit. Circuits constructed using this component allow for a comprehensive characterization of circuit behaviors through high-resolution parameter spaces. To illustrate the power of our technological development for the creation and study of chaotic circuits, we constructed a Chua circuit and studied its high-resolution parameter space. The reliability and stability of the designed component allowed us to obtain data over long periods of time (∼21 weeks), a data set from which an accurate estimation of Lyapunov exponents for circuit characterization was possible. Moreover, these data, rigorously characterized by the Lyapunov exponents, allow us to confirm experimentally that the shrimps, stable islands embedded in a domain of chaos in the parameter spaces, can be observed in the laboratory. Finally, we confirm that their sizes decay exponentially with the period of the attractor, a result expected to be found in maps of the quadratic family.

  2. Location of acoustic emission sources generated by air flow

    PubMed

    Kosel; Grabec; Muzic

    2000-03-01

    The location of continuous acoustic emission sources is a difficult problem of non-destructive testing. This article describes one-dimensional location of continuous acoustic emission sources by using an intelligent locator. The intelligent locator solves a location problem based on learning from examples. To verify whether continuous acoustic emission caused by leakage air flow can be located accurately by the intelligent locator, an experiment on a thin aluminum band was performed. Results show that it is possible to determine an accurate location by using a combination of a cross-correlation function with an appropriate bandpass filter. By using this combination, discrete and continuous acoustic emission sources can be located by using discrete acoustic emission sources for locator learning.
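The cross-correlation step of such a locator is straightforward to sketch: the lag that maximizes the cross-correlation of the two sensor signals estimates the arrival-time difference, which then maps to a 1-D position between the sensors. The geometry, wave speed, and sampling rate below are hypothetical, and the bandpass-filtering stage is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
n, true_delay = 2000, 37           # samples; sensor 2 receives 37 samples late
source = rng.standard_normal(n)    # broadband surrogate for continuous emission
s1 = source
s2 = np.roll(source, true_delay)   # (band-pass filtering is omitted here)

# The lag maximizing the cross-correlation estimates the arrival-time delay.
xc = np.correlate(s2, s1, mode="full")
lag = int(np.argmax(xc)) - (n - 1)

# Map the delay to a 1-D position between the sensors (hypothetical values):
wave_speed = 5000.0                # m/s, assumed propagation speed
fs = 1.0e6                         # Hz, assumed sampling rate
L = 0.5                           # m, sensor separation
delay_s = lag / fs
x_source = (L - wave_speed * delay_s) / 2.0   # distance from sensor 1
```

The position formula follows from equating arrival times: t2 - t1 = (L - 2x)/v for a source at distance x from sensor 1.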

  3. Toward Accurate and Quantitative Comparative Metagenomics.

    PubMed

    Nayfach, Stephen; Pollard, Katherine S

    2016-08-25

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. Bayesian Parameter Estimation for Heavy-Duty Vehicles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Eric; Konan, Arnaud; Duran, Adam

    2017-03-28

Accurate vehicle parameters are valuable for design, modeling, and reporting. Estimating vehicle parameters can be a very time-consuming process requiring tightly controlled experimentation. This work describes a method to estimate vehicle parameters such as mass, coefficient of drag/frontal area, and rolling resistance using data logged during standard vehicle operation. The method uses Monte Carlo sampling to generate parameter sets, which are fed to a variant of the road load equation. Modeled road load is then compared to measured load to evaluate the probability of the parameter set. Acceptance of a proposed parameter set is determined using the probability ratio to the current state, so that the chain history will give a distribution of parameter sets. Compared to a single value, a distribution of possible values provides information on the quality of estimates and the range of possible parameter values. The method is demonstrated by estimating dynamometer parameters. Results confirm the method's ability to estimate reasonable parameter sets, and indicate an opportunity to increase the certainty of estimates through careful selection or generation of the test drive cycle.
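The acceptance rule described in this record (accept a proposal with probability given by the probability ratio to the current state) is the Metropolis algorithm. A minimal sketch on synthetic data follows; the road-load form and all numerical values (mass, drag area, noise level, proposal widths) are hypothetical, not taken from the report.

```python
import math, random

random.seed(2)

# Synthetic "measured" road load, F = m*a + m*g*crr + 0.5*rho*CdA*v^2.
# All vehicle values (mass, drag area, noise level) are hypothetical.
g, rho, crr = 9.81, 1.2, 0.007
m_true, cda_true, sigma = 15000.0, 6.0, 500.0
drive = [(random.uniform(0.0, 30.0), random.uniform(-1.5, 1.5))
         for _ in range(100)]                       # (speed, accel) samples
obs = [m_true * a + m_true * g * crr + 0.5 * rho * cda_true * v * v
       + random.gauss(0.0, sigma) for v, a in drive]

def log_like(m, cda):
    """Gaussian log-likelihood of the logged loads under a candidate (m, CdA)."""
    sse = sum((m * a + m * g * crr + 0.5 * rho * cda * v * v - f) ** 2
              for (v, a), f in zip(drive, obs))
    return -0.5 * sse / sigma ** 2

# Random-walk Metropolis: accept a proposal with probability equal to the
# likelihood ratio to the current state; the chain history is the posterior.
m, cda = 14000.0, 5.0
ll = log_like(m, cda)
chain = []
for it in range(10000):
    m_prop = m + random.gauss(0.0, 50.0)
    cda_prop = cda + random.gauss(0.0, 0.1)
    ll_prop = log_like(m_prop, cda_prop)
    if math.log(random.random()) < ll_prop - ll:
        m, cda, ll = m_prop, cda_prop, ll_prop
    if it >= 5000:                       # keep the post-burn-in history
        chain.append((m, cda))
m_est = sum(s[0] for s in chain) / len(chain)
cda_est = sum(s[1] for s in chain) / len(chain)
```

The spread of the retained chain, not just its mean, is what gives the quality-of-estimate information highlighted in the abstract.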

  5. Source-Monitoring Training Facilitates Preschoolers' Eyewitness Memory Performance.

    ERIC Educational Resources Information Center

    Thierry, Karen L.; Spence, Melanie J.

    2002-01-01

    Investigated whether source-monitoring training would decrease 3- to 4-year-olds' suggestibility. After observing live or video target-events, children received source-monitoring or recognition (control) training. Found that children given source-monitoring training were more accurate than control group children in response to misleading and…

  6. Distillation tray structural parameter study: Phase 1

    NASA Technical Reports Server (NTRS)

    Winter, J. Ronald

    1991-01-01

    The purpose here is to identify the structural parameters (plate thickness, liquid level, beam size, number of beams, tray diameter, etc.) that affect the structural integrity of distillation trays in distillation columns. Once the sensitivity of the trays' dynamic response to these parameters has been established, the designer will be able to use this information to prepare more accurate specifications for the construction of new trays. Information is given on both static and dynamic analysis, modal response, and tray failure details.

  7. Monte Carlo dose calculations of beta-emitting sources for intravascular brachytherapy: a comparison between EGS4, EGSnrc, and MCNP.

    PubMed

    Wang, R; Li, X A

    2001-02-01

The dose parameters for the beta-particle emitting 90Sr/90Y source for intravascular brachytherapy (IVBT) have been calculated by different investigators. At larger distances from the source, noticeable differences are seen in these parameters calculated using different Monte Carlo codes. The purpose of this work is to quantify as well as to understand these differences. We have compared a series of calculations using the EGS4, EGSnrc, and MCNP Monte Carlo codes. Data calculated and compared include the depth dose curve for a broad parallel beam of electrons, and radial dose distributions for point electron sources (monoenergetic or polyenergetic) and for a real 90Sr/90Y source. For the 90Sr/90Y source, the doses at the reference position (2 mm radial distance) calculated by the three codes agree within 2%. However, the differences between the doses calculated by the three codes can be over 20% in the radial distance range of interest in IVBT. The difference increases with radial distance from the source, reaching 30% at the tail of the dose curve. These differences may be partially attributed to the different multiple scattering theories and Monte Carlo models for electron transport adopted in the three codes. Doses calculated by the EGSnrc code are more accurate than those calculated by EGS4. The two calculations agree within 5% for radial distances <6 mm.

  8. Automatic detection of malaria parasite in blood images using two parameters.

    PubMed

    Kim, Jong-Dae; Nam, Kyeong-Min; Park, Chan-Young; Kim, Yu-Seop; Song, Hye-Jeong

    2015-01-01

Malaria must be diagnosed quickly and accurately at the initial infection stage and treated early to be cured properly. The malaria diagnosis method using a microscope requires considerable labor and time from a skilled expert, and the diagnosis results vary greatly between individual diagnosticians. Therefore, to measure malaria parasite infection quickly and accurately, studies have been conducted on automated classification techniques using various parameters. In this study, by measuring classification performance according to changes in two parameters, the parameter values were determined that best distinguish normal from plasmodium-infected red blood cells. To reduce the stain deviation of the acquired images, a principal component analysis (PCA) grayscale conversion method was used, and as parameters we used the malaria-infected area and a threshold value used in binarization. The parameter values with the best classification performance were determined by selecting the malaria threshold value (72) with the lowest error rate, given a cell threshold value of 128, for detecting plasmodium-infected red blood cells.
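The two-threshold rule in this record (cell threshold 128, parasite threshold 72) can be sketched directly on grayscale pixel values; the area fraction used to call a cell infected is a hypothetical choice here, and the PCA grayscale conversion step is omitted.

```python
def classify_cell(gray, cell_thresh=128, parasite_thresh=72, min_area_frac=0.02):
    """Classify one grayscale cell image: pixels below cell_thresh form the
    cell body; pixels below parasite_thresh are candidate stained parasite
    pixels. The cell is called infected when the stained fraction is large."""
    cell_pixels = [p for row in gray for p in row if p < cell_thresh]
    if not cell_pixels:
        return "no cell"
    stained = sum(1 for p in cell_pixels if p < parasite_thresh)
    return "infected" if stained / len(cell_pixels) > min_area_frac else "normal"

# Synthetic 20x20 "cells": uniform gray 100 (uninfected) versus the same cell
# with 20 dark (value 50) parasite-like pixels, i.e. a 5% stained area.
normal_img = [[100] * 20 for _ in range(20)]
infected_img = [row[:] for row in normal_img]
for i in range(20):
    infected_img[i][i] = 50
```
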

  9. Spiral computed tomography phase-space source model in the BEAMnrc/EGSnrc Monte Carlo system: implementation and validation.

    PubMed

    Kim, Sangroh; Yoshizumi, Terry T; Yin, Fang-Fang; Chetty, Indrin J

    2013-04-21

Currently, the BEAMnrc/EGSnrc Monte Carlo (MC) system does not provide a spiral CT source model for the simulation of spiral CT scanning. We developed and validated a spiral CT phase-space source model in the BEAMnrc/EGSnrc system. The spiral phase-space source model was implemented in the DOSXYZnrc user code of the BEAMnrc/EGSnrc system by analyzing the geometry of spiral CT scans (scan range, initial angle, rotational direction, pitch, slice thickness, etc.). Table movement was simulated by changing the coordinates of the isocenter as a function of beam angle. Some parameters such as pitch, slice thickness and translation per rotation were also incorporated into the model to make the new phase-space source model, designed specifically for spiral CT scan simulations. The source model was hard-coded by modifying the 'ISource = 8: Phase-Space Source Incident from Multiple Directions' in the srcxyznrc.mortran and dosxyznrc.mortran files in the DOSXYZnrc user code. In order to verify the implementation, spiral CT scans were simulated in a CT dose index phantom using the validated x-ray tube model of a commercial CT simulator for both the original multi-direction source (ISOURCE = 8) and the new phase-space source model in the DOSXYZnrc system. The acquired 2D and 3D dose distributions were then analyzed with respect to the input parameters for various pitch values. In addition, surface-dose profiles were measured for a patient CT scan protocol using radiochromic film and compared with the MC simulations. The new phase-space source model was found to accurately simulate spiral CT scanning in a single simulation run. It also produced a dose distribution equivalent to that of the ISOURCE = 8 model for the same CT scan parameters. The MC-simulated surface profiles matched the film measurements overall within 10%. The new spiral CT phase-space source model was implemented in the BEAMnrc/EGSnrc system.
This work will be beneficial in estimating the spiral

  10. Spiral computed tomography phase-space source model in the BEAMnrc/EGSnrc Monte Carlo system: implementation and validation

    NASA Astrophysics Data System (ADS)

    Kim, Sangroh; Yoshizumi, Terry T.; Yin, Fang-Fang; Chetty, Indrin J.

    2013-04-01

    Currently, the BEAMnrc/EGSnrc Monte Carlo (MC) system does not provide a spiral CT source model for the simulation of spiral CT scanning. We developed and validated a spiral CT phase-space source model in the BEAMnrc/EGSnrc system. The spiral phase-space source model was implemented in the DOSXYZnrc user code of the BEAMnrc/EGSnrc system by analyzing the geometry of spiral CT scan—scan range, initial angle, rotational direction, pitch, slice thickness, etc. Table movement was simulated by changing the coordinates of the isocenter as a function of beam angles. Some parameters such as pitch, slice thickness and translation per rotation were also incorporated into the model to make the new phase-space source model, designed specifically for spiral CT scan simulations. The source model was hard-coded by modifying the ‘ISource = 8: Phase-Space Source Incident from Multiple Directions’ in the srcxyznrc.mortran and dosxyznrc.mortran files in the DOSXYZnrc user code. In order to verify the implementation, spiral CT scans were simulated in a CT dose index phantom using the validated x-ray tube model of a commercial CT simulator for both the original multi-direction source (ISOURCE = 8) and the new phase-space source model in the DOSXYZnrc system. Then the acquired 2D and 3D dose distributions were analyzed with respect to the input parameters for various pitch values. In addition, surface-dose profiles were also measured for a patient CT scan protocol using radiochromic film and were compared with the MC simulations. The new phase-space source model was found to simulate the spiral CT scanning in a single simulation run accurately. It also produced the equivalent dose distribution of the ISOURCE = 8 model for the same CT scan parameters. The MC-simulated surface profiles were well matched to the film measurement overall within 10%. The new spiral CT phase-space source model was implemented in the BEAMnrc/EGSnrc system. This work will be beneficial in estimating the

  11. Ionospheric current source modeling and global geomagnetic induction using ground geomagnetic observatory data

    USGS Publications Warehouse

    Sun, Jin; Kelbert, Anna; Egbert, G.D.

    2015-01-01

    Long-period global-scale electromagnetic induction studies of deep Earth conductivity are based almost exclusively on magnetovariational methods and require accurate models of external source spatial structure. We describe approaches to inverting for both the external sources and three-dimensional (3-D) conductivity variations and apply these methods to long-period (T≥1.2 days) geomagnetic observatory data. Our scheme involves three steps: (1) Observatory data from 60 years (only partly overlapping and with many large gaps) are reduced and merged into dominant spatial modes using a scheme based on frequency domain principal components. (2) Resulting modes are inverted for corresponding external source spatial structure, using a simplified conductivity model with radial variations overlain by a two-dimensional thin sheet. The source inversion is regularized using a physically based source covariance, generated through superposition of correlated tilted zonal (quasi-dipole) current loops, representing ionospheric source complexity smoothed by Earth rotation. Free parameters in the source covariance model are tuned by a leave-one-out cross-validation scheme. (3) The estimated data modes are inverted for 3-D Earth conductivity, assuming the source excitation estimated in step 2. Together, these developments constitute key components in a practical scheme for simultaneous inversion of the catalogue of historical and modern observatory data for external source spatial structure and 3-D Earth conductivity.

  12. Searching for continuous gravitational wave sources in binary systems

    NASA Astrophysics Data System (ADS)

    Dhurandhar, Sanjeev V.; Vecchio, Alberto

    2001-06-01

for it; we provide rigorous statements, based on the geometrical formulation of data analysis, concerning the size of the parameter space so that a particular neutron star is a one-filter target. This result is formulated in a completely general form, independent of the particular kind of source, and can be applied to any class of signals whose waveform can be accurately predicted. We apply our theoretical analysis to Sco X-1 and the 44 neutron stars with binary companions which are listed in the most updated version of the radio pulsar catalog. For up to ~3 h of coherent integration time, Sco X-1 will need at most a few templates; for 1 week of integration time the number of templates rapidly rises to ≈5×10^6. This is due to the rather poor measurements available today of the projected semi-major axis and the orbital phase of the neutron star. If, however, the same search is to be carried out with only a few filters, then more refined measurements of the orbital parameters are called for: an improvement of about three orders of magnitude in accuracy is required. Further, we show that the five neutron stars (radio pulsars) for which the upper limits on the signal strength are highest require no more than a few templates each and can be targeted very cheaply in terms of CPU time. Blind searches of the parameter space of orbital elements are, in general, completely unaffordable for present or near-future dedicated computational resources when the coherent integration time is of the order of the orbital period or longer. For wide binary systems, when the observation covers only a fraction of one orbit, the computational burden reduces enormously and becomes affordable for a significant region of the parameter space.

  13. RICO: A NEW APPROACH FOR FAST AND ACCURATE REPRESENTATION OF THE COSMOLOGICAL RECOMBINATION HISTORY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fendt, W. A.; Wandelt, B. D.; Chluba, J.

    2009-04-15

We present RICO, a code designed to compute the ionization fraction of the universe during the epoch of hydrogen and helium recombination with an unprecedented combination of speed and accuracy. This is accomplished by training the machine learning code PICO on the calculations of a multilevel cosmological recombination code which self-consistently includes several physical processes that were neglected previously. After training, RICO is used to fit the free electron fraction as a function of the cosmological parameters. While, for example, at low redshifts (z ≲ 900), much of the net change in the ionization fraction can be captured by lowering the hydrogen fudge factor in RECFAST by about 3%, RICO provides a means of effectively using the accurate ionization history of the full recombination code in the standard cosmological parameter estimation framework without the need to add new or refined fudge factors or functions to a simple recombination model. Within the new approach presented here, it is easy to update RICO whenever a more accurate full recombination code becomes available. Once trained, RICO computes the cosmological ionization history with negligible fitting error in ~10 ms, a speedup of at least 10^6 over the full recombination code that was used here. Also, RICO is able to reproduce the ionization history of the full code to a level well below 0.1%, thereby ensuring that the theoretical power spectra of cosmic microwave background (CMB) fluctuations can be computed to sufficient accuracy and speed for analysis of upcoming CMB experiments like Planck. Furthermore, it will enable cross-checking different recombination codes across cosmological parameter space, a comparison that will be very important in order to assure the accurate interpretation of future CMB data.

  14. Single frequency stable VCSEL as a compact source for interferometry and vibrometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dudzik, Grzegorz; Rzepka, Janusz

    2010-05-28

Developing an innovative PS-DAVLL (Polarization Switching DAVLL) method of frequency stabilization, which used a ferroelectric liquid crystal cell as a quarter-wave plate, a rubidium cell, and a specially developed ultra-stable current source, allowed us to obtain a frequency stability of 10^-9 (frequency reproducibility of 1.2×10^-8) and a reduction in the external dimensions of the laser source. The total power consumption is only 1.5 W. Because the stabilization method used in the frequency standard is insensitive to vibration, a semiconductor laser interferometer was built with a measuring range of over one meter, which can also be used in industry for the accurate measurement of displacements with an accuracy of 1 μm/m. Measurements of the VCSEL laser parameters are important from the standpoint of its use in laser interferometry or vibrometry, such as the narrow emission line (Δν_FWHM = 70 MHz) characteristic of this laser type and the stability of linear polarization of the VCSEL laser. An undoubted advantage of the constructed laser source is the lack of mode-hopping during continuous operation of the VCSEL.

  15. Estimating stellar effective temperatures and detected angular parameters using stochastic particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Zhang, Chuan-Xin; Yuan, Yuan; Zhang, Hao-Wei; Shuai, Yong; Tan, He-Ping

    2016-09-01

Considering features of stellar spectral radiation and sky surveys, we established a computational model for stellar effective temperatures, detected angular parameters and gray rates. Using known stellar flux data in some bands, we estimated stellar effective temperatures and detected angular parameters using stochastic particle swarm optimization (SPSO). We first verified the reliability of SPSO and then determined reasonable parameters that produced highly accurate estimates under certain gray deviation levels. Finally, we calculated 177 860 stellar effective temperatures and detected angular parameters using data from the Midcourse Space Experiment (MSX) catalog. These derived stellar effective temperatures proved accurate when compared to known values from the literature. This research makes full use of catalog data and presents an original technique for studying stellar characteristics. It proposes a novel method for calculating stellar effective temperatures and detected angular parameters, and provides theoretical and practical data for finding information about radiation in any band.
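A particle swarm sketch of the same idea: fit an effective temperature and an angular (solid-angle) scale factor to band fluxes by minimizing a log-flux misfit. The band centers are approximate MSX wavelengths, the "star" is synthetic, and the stochastic-PSO details of the paper are not reproduced; this is plain PSO.

```python
import math, random

random.seed(4)

C1, C2 = 1.191e-16, 1.4388e-2        # Planck radiation constants (SI units)
bands = [4.29e-6, 8.28e-6, 12.13e-6, 21.34e-6]   # approx. MSX band centers, m

def planck(lam, T):
    """Blackbody spectral radiance B(lambda, T)."""
    return C1 / lam ** 5 / (math.exp(C2 / (lam * T)) - 1.0)

T_true, log_omega_true = 4000.0, -16.0   # synthetic star (hypothetical values)
obs = [10 ** log_omega_true * planck(l, T_true) for l in bands]

def misfit(T, log_omega):
    return sum((math.log10(10 ** log_omega * planck(l, T)) - math.log10(f)) ** 2
               for l, f in zip(bands, obs))

# Standard PSO over (T, log10 omega) with inertia and cognitive/social pulls.
lo, hi = [2500.0, -20.0], [9000.0, -12.0]
P = 40
pos = [[random.uniform(lo[d], hi[d]) for d in range(2)] for _ in range(P)]
vel = [[0.0, 0.0] for _ in range(P)]
pbest = [p[:] for p in pos]
pcost = [misfit(*p) for p in pos]
gidx = min(range(P), key=lambda i: pcost[i])
gbest, gcost = pbest[gidx][:], pcost[gidx]
for _ in range(150):
    for i in range(P):
        for d in range(2):
            vel[i][d] = (0.72 * vel[i][d]
                         + 1.49 * random.random() * (pbest[i][d] - pos[i][d])
                         + 1.49 * random.random() * (gbest[d] - pos[i][d]))
            pos[i][d] = min(hi[d], max(lo[d], pos[i][d] + vel[i][d]))
        c = misfit(*pos[i])
        if c < pcost[i]:
            pbest[i], pcost[i] = pos[i][:], c
            if c < gcost:
                gbest, gcost = pos[i][:], c
T_est, log_omega_est = gbest
```

Searching the angular parameter in log space keeps the two dimensions comparably scaled, which helps the swarm converge.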

  16. Tuning of Kalman filter parameters via genetic algorithm for state-of-charge estimation in battery management system.

    PubMed

    Ting, T O; Man, Ka Lok; Lim, Eng Gee; Leach, Mark

    2014-01-01

In this work, a state-space battery model is derived mathematically to estimate the state-of-charge (SoC) of a battery system. Subsequently, a Kalman filter (KF) is applied to predict the dynamical behavior of the battery model. Results show an accurate prediction, as the accumulated error, in terms of root-mean-square (RMS), is very small. From this work, it is found that different sets of Q and R values (the KF's parameters) can be applied for better performance and hence lower RMS error. This is the motivation for the application of a metaheuristic algorithm. Hence, the result is further improved by applying a genetic algorithm (GA) to tune the Q and R parameters of the KF. In an online application, a GA can be applied to obtain the optimal parameters of the KF before its application to a real plant (system). This simply means that the instantaneous response of the KF is not affected by the time-consuming GA, as this approach is applied only once to obtain the optimal parameters. The relevant workable MATLAB source codes are given in the appendix to ease future work and analysis in this area.
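A compact sketch of the idea on a scalar toy problem: a random-walk state is filtered with trial (Q, R) values, and a small GA searches (log Q, log R) for the lowest RMS state error. This is an illustration of the approach, not the paper's MATLAB code; the battery model is reduced to a drifting scalar, and all noise levels are hypothetical.

```python
import random

random.seed(3)

# Synthetic system: a slowly drifting scalar state (a stand-in for SoC)
# observed with noise; the true noise levels are "unknown" to the filter.
q_true, r_true, n = 1e-4, 0.04, 400
x, xs, zs = 0.8, [], []
for _ in range(n):
    x += random.gauss(0.0, q_true ** 0.5)
    xs.append(x)
    zs.append(x + random.gauss(0.0, r_true ** 0.5))

def kf_rms(q, r):
    """Run a scalar Kalman filter with tuning parameters (Q, R);
    return the RMS error of the state estimate against the truth."""
    est, p, sse = zs[0], 1.0, 0.0
    for k in range(n):
        p += q                        # predict (random-walk state model)
        gain = p / (p + r)            # update with measurement zs[k]
        est += gain * (zs[k] - est)
        p *= (1.0 - gain)
        sse += (est - xs[k]) ** 2
    return (sse / n) ** 0.5

def ga_tune(pop=20, gens=30):
    """Tiny GA over (log10 Q, log10 R). The default guess is seeded into
    the population and the elite is preserved, so tuning can only match
    or beat the default filter."""
    inds = [[-2.0, -1.0]] + [[random.uniform(-6, 0), random.uniform(-4, 1)]
                             for _ in range(pop - 1)]
    for _ in range(gens):
        inds.sort(key=lambda i: kf_rms(10 ** i[0], 10 ** i[1]))
        nxt = inds[: pop // 4]                               # elitism
        while len(nxt) < pop:
            a, b = random.sample(inds[: pop // 2], 2)
            nxt.append([(u + v) / 2.0 + random.gauss(0.0, 0.2)
                        for u, v in zip(a, b)])              # crossover+mutation
        inds = nxt
    return min(inds, key=lambda i: kf_rms(10 ** i[0], 10 ** i[1]))

best = ga_tune()
rms_default = kf_rms(10 ** -2.0, 10 ** -1.0)
rms_tuned = kf_rms(10 ** best[0], 10 ** best[1])
```

As in the paper, the GA runs offline: once (Q, R) are tuned, the filter itself remains as fast as before.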

  17. Tuning of Kalman Filter Parameters via Genetic Algorithm for State-of-Charge Estimation in Battery Management System

    PubMed Central

    Ting, T. O.; Lim, Eng Gee

    2014-01-01

In this work, a state-space battery model is derived mathematically to estimate the state-of-charge (SoC) of a battery system. Subsequently, a Kalman filter (KF) is applied to predict the dynamical behavior of the battery model. Results show an accurate prediction, as the accumulated error, in terms of root-mean-square (RMS), is very small. From this work, it is found that different sets of Q and R values (the KF's parameters) can be applied for better performance and hence lower RMS error. This is the motivation for the application of a metaheuristic algorithm. Hence, the result is further improved by applying a genetic algorithm (GA) to tune the Q and R parameters of the KF. In an online application, a GA can be applied to obtain the optimal parameters of the KF before its application to a real plant (system). This simply means that the instantaneous response of the KF is not affected by the time-consuming GA, as this approach is applied only once to obtain the optimal parameters. The relevant workable MATLAB source codes are given in the appendix to ease future work and analysis in this area. PMID:25162041

  18. Real-Time Parameter Estimation in the Frequency Domain

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    2000-01-01

    A method for real-time estimation of parameters in a linear dynamic state-space model was developed and studied. The application is aircraft dynamic model parameter estimation from measured data in flight. Equation error in the frequency domain was used with a recursive Fourier transform for the real-time data analysis. Linear and nonlinear simulation examples and flight test data from the F-18 High Alpha Research Vehicle were used to demonstrate that the technique produces accurate model parameter estimates with appropriate error bounds. Parameter estimates converged in less than one cycle of the dominant dynamic mode, using no a priori information, with control surface inputs measured in flight during ordinary piloted maneuvers. The real-time parameter estimation method has low computational requirements and could be implemented
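The core of the method, equation error in the frequency domain, can be sketched for a scalar system x' = a·x + b·u: at each analysis frequency w, the finite-record Fourier transforms satisfy Xdot(w) = a·X(w) + b·U(w), giving a linear least-squares problem for (a, b). The recursive-update form of the Fourier sum is noted in the comments; the system and signals below are synthetic illustrations, not the F-18 flight data.

```python
import cmath, math

# Synthetic first-order system xdot = a*x + b*u, integrated with Euler steps;
# the derivative sequence is defined by the model, so the frequency-domain
# equation error is exact up to rounding (values are illustrative only).
a_true, b_true, dt, n = -2.0, 1.0, 0.01, 2000
u = [math.sin(0.7 * k * dt) + 0.5 * math.sin(2.3 * k * dt) for k in range(n)]
x, xs, xds = 0.0, [], []
for k in range(n):
    xd = a_true * x + b_true * u[k]
    xs.append(x)
    xds.append(xd)
    x += xd * dt

def dft(sig, w):
    """Finite-record Fourier sum. In real time this is the recursive update
    X_k = X_{k-1} + x_k * exp(-1j*w*k*dt) * dt, one term per new sample."""
    X = 0j
    for k, s in enumerate(sig):
        X += s * cmath.exp(-1j * w * k * dt) * dt
    return X

# Equation error in the frequency domain: Xdot(w) = a*X(w) + b*U(w).
# Stack real and imaginary parts over the analysis frequencies and solve
# the 2x2 normal equations for the real parameters (a, b).
freqs = [0.5, 1.0, 2.0, 3.0]          # analysis frequencies, rad/s
rows, rhs = [], []
for w in freqs:
    X, U, Xd = dft(xs, w), dft(u, w), dft(xds, w)
    rows += [[X.real, U.real], [X.imag, U.imag]]
    rhs += [Xd.real, Xd.imag]
s11 = sum(r[0] * r[0] for r in rows)
s12 = sum(r[0] * r[1] for r in rows)
s22 = sum(r[1] * r[1] for r in rows)
t1 = sum(r[0] * y for r, y in zip(rows, rhs))
t2 = sum(r[1] * y for r, y in zip(rows, rhs))
det = s11 * s22 - s12 * s12
a_est = (s22 * t1 - s12 * t2) / det
b_est = (s11 * t2 - s12 * t1) / det
```

Because each new sample only adds one term per frequency to the Fourier sums, the normal equations can be refreshed cheaply at every time step, which is what makes the real-time use in the record practical.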

  19. Novel Method for Incorporating Model Uncertainties into Gravitational Wave Parameter Estimates

    NASA Astrophysics Data System (ADS)

    Moore, Christopher J.; Gair, Jonathan R.

    2014-12-01

Posterior distributions on parameters computed from experimental data using Bayesian techniques are only as accurate as the models used to construct them. In many applications, these models are incomplete, which both reduces the prospects of detection and leads to a systematic error in the parameter estimates. In the analysis of data from gravitational wave detectors, for example, accurate waveform templates can be computed using numerical methods, but the prohibitive cost of these simulations means this can only be done for a small handful of parameters. In this Letter, a novel method to fold model uncertainties into data analysis is proposed; the waveform uncertainty is analytically marginalized over, using a prior distribution constructed by applying Gaussian process regression to interpolate the waveform difference from a small training set of accurate templates. The method is well motivated, easy to implement, and no more computationally expensive than standard techniques. The new method is shown to perform extremely well when applied to a toy problem. While we use the application to gravitational wave data analysis to motivate and illustrate the technique, it can be applied in any context where model uncertainties exist.
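The interpolation step can be sketched with a scalar stand-in: Gaussian process regression with an RBF kernel is trained on a handful of "waveform difference" samples across parameter space and queried at a new parameter value, returning both a mean and a variance to marginalize over. Everything below (kernel, length scale, the sin-shaped toy function) is a hypothetical illustration, not the Letter's waveform model.

```python
import math

def rbf(x1, x2, length_scale=1.0):
    """Squared-exponential (RBF) covariance between two parameter values."""
    return math.exp(-0.5 * ((x1 - x2) / length_scale) ** 2)

def solve(K, y):
    """Solve K.v = y by Gauss-Jordan elimination with partial pivoting."""
    n = len(K)
    A = [K[i][:] + [y[i]] for i in range(n)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        for r in range(n):
            if r != c:
                f = A[r][c] / A[c][c]
                A[r] = [v - f * w for v, w in zip(A[r], A[c])]
    return [A[i][n] / A[i][i] for i in range(n)]

def gp_predict(xt, yt, xstar, jitter=1e-8):
    """GP regression posterior mean and variance at xstar (RBF kernel)."""
    n = len(xt)
    K = [[rbf(xt[i], xt[j]) + (jitter if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    ks = [rbf(xv, xstar) for xv in xt]
    alpha = solve(K, yt)                      # K^-1 y
    v = solve(K, ks)                          # K^-1 k*
    mean = sum(k * a for k, a in zip(ks, alpha))
    var = rbf(xstar, xstar) - sum(k * w for k, w in zip(ks, v))
    return mean, var

# Toy "waveform difference" over a 1-D parameter: train on 5 samples of sin,
# then query between training points.
xt = [0.0, 0.75, 1.5, 2.25, 3.0]
yt = [math.sin(xv) for xv in xt]
mean, var = gp_predict(xt, yt, 1.1)
```

The posterior variance is what feeds the marginalization: where the training templates are sparse, the method automatically inflates the waveform uncertainty.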

  20. The time-lapse AVO difference inversion for changes in reservoir parameters

    NASA Astrophysics Data System (ADS)

    Longxiao, Zhi; Hanming, Gu; Yan, Li

    2016-12-01

The result of conventional time-lapse seismic processing is the amplitude difference of the post-stack seismic data. Although stack processing can improve the signal-to-noise ratio (SNR) of seismic data, it also causes a considerable loss of important information about the amplitude changes and yields only a qualitative interpretation. To predict changes in reservoir fluids more precisely and accurately, we also need quantitative information about the reservoir. To achieve this aim, we develop a method of time-lapse AVO (amplitude versus offset) difference inversion. For the inversion of reservoir changes in elastic parameters, we apply the Gardner equation as a constraint and convert the three-parameter inversion of elastic parameter changes into a two-parameter inversion to make the inversion more stable. For the inversion of variations in the reservoir parameters, we derive the relation between the reflection-coefficient difference and variations in the reservoir parameters, and then invert for reservoir parameter changes directly. The results of theoretical modeling and practical application show that our method can estimate the relative variations in reservoir density and P-wave and S-wave velocity, calculate reservoir changes in water saturation and effective pressure accurately, and thus provide a reference for rational exploitation of the reservoir.

  1. Dose rate calculations around 192Ir brachytherapy sources using a Sievert integration model

    NASA Astrophysics Data System (ADS)

    Karaiskos, P.; Angelopoulos, A.; Baras, P.; Rozaki-Mavrouli, H.; Sandilos, P.; Vlachos, L.; Sakelliou, L.

    2000-02-01

The classical Sievert integral method is a valuable tool for dose rate calculations around brachytherapy sources, combining simplicity with reasonable computational times. However, its accuracy in predicting dose rate anisotropy around 192Ir brachytherapy sources has repeatedly been called into question. In this work, we used a primary and scatter separation technique to improve an existing modification of the Sievert integral (Williamson's isotropic scatter model) that determines dose rate anisotropy around commercially available 192Ir brachytherapy sources. The proposed Sievert formalism provides increased accuracy while maintaining the simplicity and computational time efficiency of the Sievert integral method. To describe transmission within the materials encountered, the formalism makes use of narrow-beam attenuation coefficients which can be directly and easily calculated from the initially emitted 192Ir spectrum. The other numerical parameters required for its implementation, once calculated with the aid of our home-made Monte Carlo simulation code, can be used for any 192Ir source design. Calculations of dose rate and anisotropy functions with the proposed Sievert expression, around commonly used 192Ir high dose rate sources and other 192Ir elongated source designs, are in good agreement with corresponding accurate Monte Carlo results reported by our group and other authors.
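The classical integral itself is simple to evaluate numerically. In the sketch below, the dose rate at perpendicular distance h from a filtered line source is proportional to the Sievert integral of exp(-mu_t/cos(theta)); the geometry and units are schematic (relative dose, inverse-square and filtration only), not a clinical implementation.

```python
import math

def sievert_integral(theta1, theta2, mu_t, nseg=2000):
    """Midpoint-rule evaluation of the Sievert integral
    int_{theta1}^{theta2} exp(-mu_t / cos(theta)) dtheta,
    where mu_t is the filter thickness in mean free paths along the normal."""
    h = (theta2 - theta1) / nseg
    return h * sum(math.exp(-mu_t / math.cos(theta1 + (i + 0.5) * h))
                   for i in range(nseg))

def line_source_dose(h, L, mu_t, strength=1.0):
    """Relative dose rate at perpendicular distance h from the centre of an
    active line source of length L behind a filter of thickness mu_t
    (primary radiation only; scatter corrections are what the record's
    improved formalism adds on top of this)."""
    theta = math.atan((L / 2.0) / h)     # half-angle subtended by the source
    return strength / (L * h) * sievert_integral(-theta, theta, mu_t)
```

Two sanity checks fall out of the formula: with zero filtration the integral reduces to the subtended angle, and as L shrinks the result approaches the attenuated point-source inverse-square law exp(-mu_t)/h^2.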

  2. Improving land surface parameter retrieval by integrating plant traits priors in the MULTIPLY data assimilation platform

    NASA Astrophysics Data System (ADS)

    Corbin, A. E.; Timmermans, J.; Hauser, L.; Bodegom, P. V.; Soudzilovskaia, N. A.

    2017-12-01

    There is a growing demand for accurate land surface parameterization from remote sensing (RS) observations. This demand has not been satisfied, because most estimation schemes 1) apply a single-sensor, single-scale approach, and 2) require specific key variables to be `guessed'. This is because the observations often lack the information required to accurately retrieve the parameters of interest. Consequently, many schemes assume specific variables to be constant or absent, leading to greater uncertainty. To address this, the MULTIscale SENTINEL land surface information retrieval Platform (MULTIPLY) was created. MULTIPLY couples a variety of RS sources with Radiative Transfer Models (RTM) over varying spectral ranges, using data assimilation to estimate geophysical parameters. In addition, MULTIPLY uses prior information about the land surface to constrain the retrieval problem. This research aims to improve the retrieval of plant biophysical parameters through the use of priors on biophysical parameters/plant traits. Of particular interest are traits (physical, morphological, or chemical) affecting the individual performance and fitness of species. Plant traits that can be retrieved via RS and RTMs include leaf pigments, leaf water, LAI, phenols, C/N, etc. In-situ data for plant traits retrievable via RS techniques were collected for a meta-analysis from databases such as TRY, Ecosis, and individual collaborators. Of particular interest are the following traits: chlorophyll, carotenoids, anthocyanins, phenols, leaf water, and LAI. ANOVA statistics were generated for each trait according to species, plant functional group (such as evergreens, grasses, etc.), and the trait itself. Afterwards, traits were also compared using covariance matrices. Using these as priors, MULTIPLY is used to retrieve several plant traits at two validation sites in the Netherlands (Speulderbos) and in Finland (Sodankylä). Initial

  3. Bayesian Modal Estimation of the Four-Parameter Item Response Model in Real, Realistic, and Idealized Data Sets.

    PubMed

    Waller, Niels G; Feuerstahler, Leah

    2017-01-01

    In this study, we explored item and person parameter recovery of the four-parameter model (4PM) in over 24,000 real, realistic, and idealized data sets. In the first analyses, we fit the 4PM and three alternative models to data from three Minnesota Multiphasic Personality Inventory-Adolescent form factor scales using Bayesian modal estimation (BME). Our results indicated that the 4PM fits these scales better than simpler Item Response Theory (IRT) models. Next, using the parameter estimates from these real data analyses, we estimated 4PM item parameters in 6,000 realistic data sets to establish minimum sample size requirements for accurate item and person parameter recovery. Using a factorial design that crossed discrete levels of item parameters, sample size, and test length, we also fit the 4PM to an additional 18,000 idealized data sets to extend our parameter recovery findings. Our combined results demonstrated that 4PM item parameters and parameter functions (e.g., item response functions) can be accurately estimated using BME in moderate to large samples (N ⩾ 5,000) and that person parameters can be accurately estimated in smaller samples (N ⩾ 1,000). In the supplemental files, we report annotated [Formula: see text] code that shows how to estimate 4PM item and person parameters in [Formula: see text] (Chalmers, 2012).
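
    The four-parameter model extends the logistic item response function with lower and upper asymptotes. A sketch of the 4PM response function, using hypothetical item parameters:

```python
import numpy as np

def irf_4pm(theta, a, b, c, d):
    """Item response function of the four-parameter model (4PM):
    discrimination a, difficulty b, lower asymptote c ("guessing"),
    upper asymptote d < 1 ("slipping")."""
    return c + (d - c) / (1 + np.exp(-a * (theta - b)))

# Probability of a keyed response at low, average, and high trait levels
p = irf_4pm(np.array([-3.0, 0.0, 3.0]), a=1.5, b=0.0, c=0.2, d=0.95)
```

    Unlike the 3PM, the probability never reaches 1 for high trait levels: even able respondents can "slip", which is the feature the study found useful for these clinical scales.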

  4. Hyperspectral imaging-based spatially-resolved technique for accurate measurement of the optical properties of horticultural products

    NASA Astrophysics Data System (ADS)

    Cen, Haiyan

    Hyperspectral imaging-based spatially-resolved technique is promising for determining the optical properties and quality attributes of horticultural and food products. However, considerable challenges still exist for accurate determination of spectral absorption and scattering properties from intact horticultural products. The objective of this research was, therefore, to develop and optimize a hyperspectral imaging-based spatially-resolved technique for accurate measurement of the optical properties of horticultural products. Monte Carlo simulations and experiments on model samples of known optical properties were performed to optimize the inverse algorithm of a single-layer diffusion model and the optical designs, for extracting the absorption (μa) and reduced scattering (μs′) coefficients from spatially-resolved reflectance profiles. The logarithm and integral data transformations and the relative weighting method were found to greatly improve the parameter estimation accuracy, with relative errors of 10.4%, 10.7%, and 11.4% for μa, and 6.6%, 7.0%, and 7.1% for μs′, respectively. More accurate measurements of optical properties were obtained when the light beam was Gaussian with a diameter of less than 1 mm, and the minimum and maximum source-detector distances were 1.5 mm and 10-20 transport mean free paths, respectively. An optical property measuring prototype was built, based on the optimization results, and evaluated for automatic measurement of absorption and reduced scattering coefficients for the wavelengths of 500-1,000 nm. The instrument was used to measure the optical properties, and assess quality/maturity, of 500 'Redstar' peaches and 1039 'Golden Delicious' (GD) and 1040 'Delicious' (RD) apples. A separate study was also conducted on confocal laser scanning and scanning electron microscopic image analysis and compression tests of fruit tissue specimens to measure the structural and mechanical properties of 'Golden
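
    The value of a logarithm transformation in this kind of inverse problem can be illustrated with a simplified far-field diffusion approximation, R(ρ) ∝ exp(-μeff·ρ)/ρ² with μeff = √(3·μa·(μa+μs′)). This is not the single-layer model used in the research, and the optical coefficients below are illustrative:

```python
import numpy as np

mu_a, mu_s = 0.04, 10.0            # absorption, reduced scattering (1/cm), assumed
mu_eff = np.sqrt(3 * mu_a * (mu_a + mu_s))

rho = np.linspace(0.15, 1.0, 40)   # source-detector distances (cm)
R = np.exp(-mu_eff * rho) / rho**2 # noise-free synthetic reflectance profile

# Log transform linearises the model: log R + 2 log rho = log C - mu_eff * rho,
# so a straight-line fit recovers mu_eff from the slope
y = np.log(R) + 2 * np.log(rho)
slope, intercept = np.polyfit(rho, y, 1)
mu_eff_est = -slope
```

    In the log domain the dynamic range of the profile is compressed, so near and far detector positions contribute comparably to the fit, which is the same motivation the study gives for its transformation and weighting schemes.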

  5. New insights on active fault geometries in the Mentawai region of Sumatra, Indonesia, from broadband waveform modeling of earthquake source parameters

    NASA Astrophysics Data System (ADS)

    WANG, X.; Wei, S.; Bradley, K. E.

    2017-12-01

    Global earthquake catalogs provide important first-order constraints on the geometries of active faults. However, the accuracies of both locations and focal mechanisms in these catalogs are typically insufficient to resolve detailed fault geometries. This issue is particularly critical in subduction zones, where most great earthquakes occur. The Slab 1.0 model (Hayes et al. 2012), which was derived from global earthquake catalogs, has smooth fault geometries and cannot adequately address local structural complexities that are critical for understanding earthquake rupture patterns, coseismic slip distributions, and geodetically monitored interseismic coupling. In this study, we conduct careful relocation and waveform modeling of earthquake source parameters to reveal fault geometries in greater detail. We take advantage of global data and conduct broadband waveform modeling for medium-sized earthquakes (M > 4.5) to refine their source parameters, which include locations and fault plane solutions. The refined source parameters can greatly improve the imaging of fault geometry (e.g., Wang et al., 2017). We apply these approaches to earthquakes recorded since 1990 in the Mentawai region offshore of central Sumatra. Our results indicate that the uncertainties in horizontal location, depth, and dip angle are as small as 5 km, 2 km, and 5 degrees, respectively. The refined catalog shows that the 2005 and 2009 "back-thrust" sequences in the Mentawai region actually occurred on a steeply landward-dipping fault, contradicting previous studies that inferred a seaward-dipping backthrust. We interpret these earthquakes as `unsticking' of the Sumatran accretionary wedge along a backstop fault that separates accreted material of the wedge from the strong Sunda lithosphere, or reactivation of an old normal fault buried beneath the forearc basin. We also find that the seismicity on the Sunda megathrust deviates in location from Slab 1.0 by up to 7 km, with along strike

  6. Low-frequency source parameters of twelve large earthquakes. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Harabaglia, Paolo

    1993-01-01

    We present a global survey of the low-frequency (1-21 mHz) source characteristics of large events, with particular interest in events unusually enriched in low-frequency energy and in events with a short-term precursor. We model the source time functions of 12 large earthquakes using teleseismic data at low frequency. For each event we retrieve the source amplitude spectrum in the frequency range between 1 and 21 mHz with the Silver and Jordan method and the phase-shift spectrum in the frequency range between 1 and 11 mHz with the Riedesel and Jordan method. We then model the source time function by fitting the two spectra. Two of these events, the 1980 Irpinia, Italy, and the 1983 Akita-Oki, Japan, earthquakes, are shallow-depth complex events that took place on multiple faults. In both cases the source time function has a length of about 100 seconds. By comparison, Westaway and Jackson find 45 seconds for the Irpinia event and Houston and Kanamori about 50 seconds for the Akita-Oki earthquake. The three deep events and four of the seven intermediate-depth events are fast-rupturing earthquakes; a single pulse is sufficient to model the source spectra in the frequency range of interest. Two other intermediate-depth events have slower rupturing processes, characterized by a continuous energy release lasting about 40 seconds. The last event is the intermediate-depth 1983 Peru-Ecuador earthquake, first recognized as a precursive event by Jordan. We model it with a smooth rupturing process starting about 2 minutes before the high-frequency origin time, superimposed on an impulsive source.

  7. Simultaneous head tissue conductivity and EEG source location estimation.

    PubMed

    Akalin Acar, Zeynep; Acar, Can E; Makeig, Scott

    2016-01-01

    Accurate electroencephalographic (EEG) source localization requires an electrical head model incorporating accurate geometries and conductivity values for the major head tissues. While consistent conductivity values have been reported for scalp, brain, and cerebrospinal fluid, measured brain-to-skull conductivity ratio (BSCR) estimates have varied between 8 and 80, likely reflecting both inter-subject and measurement method differences. In simulations, mis-estimation of skull conductivity can produce source localization errors as large as 3 cm. Here, we describe an iterative gradient-based approach to Simultaneous tissue Conductivity And source Location Estimation (SCALE). The scalp projection maps used by SCALE are obtained from near-dipolar effective EEG sources found by adequate independent component analysis (ICA) decomposition of sufficient high-density EEG data. We applied SCALE to simulated scalp projections of 15 cm²-scale cortical patch sources in an MR image-based electrical head model with simulated BSCR of 30. Initialized either with a BSCR of 80 or 20, SCALE estimated BSCR as 32.6. In Adaptive Mixture ICA (AMICA) decompositions of (45-min, 128-channel) EEG data from two young adults we identified sets of 13 independent components having near-dipolar scalp maps compatible with a single cortical source patch. Again initialized with either BSCR 80 or 25, SCALE gave BSCR estimates of 34 and 54 for the two subjects respectively. The ability to accurately estimate skull conductivity non-invasively from any well-recorded EEG data in combination with a stable and non-invasively acquired MR imaging-derived electrical head model could remove a critical barrier to using EEG as a sub-cm²-scale accurate 3-D functional cortical imaging modality. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. BluePyOpt: Leveraging Open Source Software and Cloud Infrastructure to Optimise Model Parameters in Neuroscience.

    PubMed

    Van Geit, Werner; Gevaert, Michael; Chindemi, Giuseppe; Rössert, Christian; Courcol, Jean-Denis; Muller, Eilif B; Schürmann, Felix; Segev, Idan; Markram, Henry

    2016-01-01

    At many scales in neuroscience, appropriate mathematical models take the form of complex dynamical systems. Parameterizing such models to conform to the multitude of available experimental constraints is a global non-linear optimisation problem with a complex fitness landscape, requiring numerical techniques to find suitable approximate solutions. Stochastic optimisation approaches, such as evolutionary algorithms, have been shown to be effective, but often the setting up of such optimisations and the choice of a specific search algorithm and its parameters are non-trivial, requiring domain-specific expertise. Here we describe BluePyOpt, a Python package targeted at the broad neuroscience community to simplify this task. BluePyOpt is an extensible framework for data-driven model parameter optimisation that wraps and standardizes several existing open-source tools. It simplifies the task of creating and sharing these optimisations, and the associated techniques and knowledge. This is achieved by abstracting the optimisation and evaluation tasks into various reusable and flexible discrete elements according to established best practices. Further, BluePyOpt provides methods for setting up both small- and large-scale optimisations on a variety of platforms, ranging from laptops to Linux clusters and cloud-based compute infrastructures. The versatility of the BluePyOpt framework is demonstrated by working through three representative neuroscience-specific use cases.

  10. VizieR Online Data Catalog: Multiwavelength photometry of CDFS X-ray sources (Brusa+, 2009)

    NASA Astrophysics Data System (ADS)

    Brusa, M.; Fiore, F.; Santini, P.; Grazian, A.; Comastri, A.; Zamorani, G.; Hasinger, G.; Merloni, A.; Civano, F.; Fontana, A.; Mainieri, V.

    2010-03-01

    The co-evolution of host galaxies and the active black holes which reside in their centres is one of the most important topics in modern observational cosmology. Here we present a study of the properties of obscured active galactic nuclei (AGN) detected in the CDFS 1 Ms observation and of their host galaxies. We limited the analysis to the MUSIC area, for which deep K-band observations obtained with ISAAC@VLT are available, ensuring accurate identifications of the counterparts of the X-ray sources as well as reliable determination of photometric redshifts and galaxy parameters, such as stellar masses and star formation rates. In particular, we: 1) refined the X-ray/infrared/optical association of 179 sources in the MUSIC area detected in the Chandra observation; 2) studied the observed and rest-frame colors and properties of the host galaxies. (2 data files).

  11. Evaluation of interpolation methods for TG-43 dosimetric parameters based on comparison with Monte Carlo data for high-energy brachytherapy sources.

    PubMed

    Pujades-Claumarchirant, Ma Carmen; Granero, Domingo; Perez-Calatayud, Jose; Ballester, Facundo; Melhus, Christopher; Rivard, Mark

    2010-03-01

    The aim of this work was to determine dose distributions for high-energy brachytherapy sources at spatial locations not included in the radial dose function g_L(r) and 2D anisotropy function F(r,θ) table entries for radial distance r and polar angle θ. The objectives of this study are as follows: 1) to evaluate interpolation methods in order to accurately derive g_L(r) and F(r,θ) from the reported data; 2) to determine the minimum number of entries in g_L(r) and F(r,θ) that allow reproduction of dose distributions with sufficient accuracy. Four high-energy photon-emitting brachytherapy sources were studied: the 60Co model Co0.A86, the 137Cs model CSM-3, the 192Ir model Ir2.A85-2, and a hypothetical 169Yb model. The mesh used for r was: 0.25, 0.5, 0.75, 1, 1.5, 2-8 (integer steps) and 10 cm. Four different angular steps were evaluated for F(r,θ): 1°, 2°, 5° and 10°. Linear-linear and logarithmic-linear interpolation were evaluated for g_L(r). Linear-linear interpolation was used to obtain F(r,θ) with a resolution of 0.05 cm and 1°. Results were compared with values obtained from Monte Carlo (MC) calculations for the four sources on the same grid. Linear interpolation of g_L(r) gave differences ≤ 0.5% compared to MC for all four sources. Bilinear interpolation of F(r,θ) using 1° and 2° angular steps resulted in agreement ≤ 0.5% with MC for 60Co, 192Ir, and 169Yb, while 137Cs agreement was ≤ 1.5% for θ < 15°. The radial mesh studied was adequate for interpolating g_L(r) for high-energy brachytherapy sources, and was similar to commonly found examples in the published literature. For F(r,θ) close to the source longitudinal axis, polar angle step sizes of 1°-2° were sufficient to provide 2% accuracy for all sources.
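
    The two interpolation schemes compared above can be sketched in a few lines; the tabulated g_L(r) values below are hypothetical (a smooth exponential fall-off), not taken from any of the four sources studied:

```python
import numpy as np

# Tabulated radial dose function on the radial mesh quoted in the abstract
r_tab = np.array([0.25, 0.5, 0.75, 1, 1.5, 2, 3, 4, 5, 6, 7, 8, 10])
gL_tab = np.exp(-0.12 * (r_tab - 1))   # hypothetical, roughly exponential values

r = 2.5  # query point between table entries

# Linear-linear interpolation
g_lin = np.interp(r, r_tab, gL_tab)

# Logarithmic-linear interpolation: linear in log(gL) versus r
g_log = np.exp(np.interp(r, r_tab, np.log(gL_tab)))
```

    For data that decay roughly exponentially with r, the log-linear scheme follows the curvature between mesh points, while linear-linear interpolation slightly overestimates in the convex regions; on a sufficiently fine mesh the two agree to well under 1%, consistent with the ≤ 0.5% differences reported above.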

  12. A novel integrated approach for the hazardous radioactive dust source terms estimation in future nuclear fusion power plants.

    PubMed

    Poggi, L A; Malizia, A; Ciparisse, J F; Gaudio, P

    2016-10-01

    An open issue still under investigation by several international entities working on safety and security for the foreseen nuclear fusion reactors is the estimation of source terms that are a hazard for operators and the public, and for the machine itself in terms of efficiency and integrity in severe accident scenarios. Source term estimation is a crucial safety issue to be addressed in future reactor safety assessments, and the estimates available at this time are not sufficiently satisfactory. The lack of neutronic data, along with the insufficiently accurate methodologies used until now, calls for an integrated methodology for source term estimation that can provide predictions with adequate accuracy. This work proposes a complete methodology to estimate dust source terms, starting from a broad information-gathering effort. The wide number of parameters that can influence dust source term production is reduced with statistical tools using a combination of screening, sensitivity analysis, and uncertainty analysis. Finally, a preliminary and simplified methodology for predicting dust source term production in future devices is presented.

  13. Source origin and parameters influencing levels of heavy metals in TSP, in an industrial background area of Southern Italy

    NASA Astrophysics Data System (ADS)

    Ragosta, Maria; Caggiano, Rosa; D'Emilio, Mariagrazia; Macchiato, Maria

    In this paper, we investigate the relationships between the atmospheric concentrations of trace elements and some meteorological parameters. In particular, the effects of different meteorological conditions on heavy metal levels are interpreted by means of a multivariate statistical approach. The analysed variables were measured during a monitoring survey started in 1997, carried out in order to evaluate the atmospheric concentrations of heavy metals in the industrial area of Tito Scalo (Basilicata Region, Southern Italy). Here we present and analyse the data set collected from 1997 to 1999. The data set includes daily concentrations of total suspended particulates (TSP), daily concentrations of eight metals (Cd, Cr, Cu, Fe, Mn, Ni, Pb and Zn) in TSP, and daily meteoclimatic data (temperature, rainfall, wind speed and wind direction). Both the concentration levels and the occurrence of peak concentration events are consistent with the characteristics of the study area: abundant small and medium industrial plants in a mountainous and otherwise unpolluted zone. Regarding the sources of heavy metals in TSP, the statistical procedure allows us to identify three profiles: SP 1 and SP 2, related to industrial sources, and SP 3, related to other sources (natural and/or anthropogenic). In particular, taking into account the effect of different meteorological conditions, we are able to distinguish the contributions of different fractions of the same metal in the detected source profiles.

  14. Photospheric properties and fundamental parameters of M dwarfs

    NASA Astrophysics Data System (ADS)

    Rajpurohit, A. S.; Allard, F.; Teixeira, G. D. C.; Homeier, D.; Rajpurohit, S.; Mousis, O.

    2018-02-01

    Context: M dwarfs are an important source of information when studying and probing the lower end of the Hertzsprung-Russell (HR) diagram, down to the hydrogen-burning limit. Being the most numerous and oldest stars in the galaxy, they carry fundamental information on its chemical history. The presence of molecules in their atmospheres, along with various condensed species, complicates our understanding of their physical properties and makes the determination of their fundamental stellar parameters more challenging. Aims: The aim of this study is to perform a detailed spectroscopic analysis of high-resolution H-band spectra of M dwarfs in order to determine their fundamental stellar parameters and to validate atmospheric models. The study will also help us to understand various processes, including dust formation and the depletion of metals onto dust grains in M dwarf atmospheres. The high spectral resolution also provides a unique opportunity to constrain other chemical and physical processes that occur in a cool atmosphere. Methods: The high-resolution APOGEE spectra of M dwarfs, covering the entire H-band, provide a unique opportunity to measure their fundamental parameters. We performed a detailed spectral synthesis by comparing these high-resolution H-band spectra to the most recent BT-Settl models and obtained fundamental parameters such as effective temperature, surface gravity, and metallicity (Teff, log g, and [Fe/H]). Results: We have determined Teff, log g, and [Fe/H] for 45 M dwarfs using high-resolution H-band spectra. The derived Teff for the sample ranges from 3100 to 3900 K, values of log g lie in the range 4.5 ≤ log g ≤ 5.5, and the resulting metallicities lie in the range -0.5 ≤ [Fe/H] ≤ +0.5. We have explored systematic differences between effective temperature and metallicity calibrations with other studies using the same sample of M dwarfs. We have also shown that the stellar

  15. Importance analysis for Hudson River PCB transport and fate model parameters using robust sensitivity studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, S.; Toll, J.; Cothern, K.

    1995-12-31

    The authors have performed robust sensitivity studies of the physico-chemical Hudson River PCB model PCHEPM to identify the parameters and process uncertainties contributing the most to uncertainty in predictions of water column and sediment PCB concentrations over the period 1977-1991 in one segment of the lower Hudson River. The term "robust sensitivity studies" refers to the use of several sensitivity analysis techniques to obtain a more accurate depiction of the relative importance of different sources of uncertainty. Local sensitivity analysis provided data on the sensitivity of PCB concentration estimates to small perturbations in nominal parameter values. Range sensitivity analysis provided information about the magnitude of prediction uncertainty associated with each input uncertainty. Rank correlation analysis indicated which parameters had the most dominant influence on model predictions. Factorial analysis identified important interactions among model parameters. Finally, term analysis looked at the aggregate influence of combinations of parameters representing physico-chemical processes. The authors scored the results of the local and range sensitivity and rank correlation analyses. They considered parameters that scored high on two of the three analyses to be important contributors to PCB concentration prediction uncertainty, and treated them probabilistically in simulations. They also treated probabilistically parameters identified in the factorial analysis as interacting with important parameters. The authors used the term analysis to better understand how uncertain parameters were influencing the PCB concentration predictions. The importance analysis allowed the number of parameters to be modeled probabilistically to be reduced from 16 to 5. This reduced the computational complexity of Monte Carlo simulations and, more importantly, provided a more lucid depiction of prediction uncertainty and its causes.
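
    Local sensitivity analysis, the first of the techniques combined above, can be illustrated with normalised finite-difference coefficients; the toy model and parameter values below are hypothetical stand-ins, not PCHEPM:

```python
import numpy as np

def local_sensitivity(model, p0, rel_step=0.01):
    """Normalised local sensitivity coefficients S_i = (dy/dp_i) * (p_i / y),
    estimated by central finite differences around the nominal parameters p0.
    Dimensionless S_i values let parameters with different units be ranked."""
    y0 = model(p0)
    S = []
    for i, p in enumerate(p0):
        h = rel_step * p
        up, dn = list(p0), list(p0)
        up[i] += h
        dn[i] -= h
        dy = (model(up) - model(dn)) / (2 * h)   # central difference
        S.append(dy * p / y0)
    return np.array(S)

# Toy stand-in for a fate-and-transport prediction: y = k * C**2 / v
model = lambda p: p[0] * p[1]**2 / p[2]
S = local_sensitivity(model, [0.5, 2.0, 1.5])
```

    For this power-law toy model the normalised coefficients are simply the exponents (1, 2, -1), so the ranking step of the scoring procedure described above would flag the second parameter as most influential.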

  16. Accurate and reproducible measurements of RhoA activation in small samples of primary cells.

    PubMed

    Nini, Lylia; Dagnino, Lina

    2010-03-01

    Rho GTPase activation is essential in a wide variety of cellular processes. Measurement of Rho GTPase activation is difficult with limited material, such as tissues or primary cells that exhibit stringent culture requirements for growth and survival. We defined parameters to accurately and reproducibly measure RhoA activation (i.e., RhoA-GTP) in cultured primary keratinocytes in response to serum and growth factor stimulation using enzyme-linked immunosorbent assay (ELISA)-based G-LISA assays. We also established conditions that minimize RhoA-GTP in unstimulated cells without affecting viability, allowing accurate measurements of RhoA activation on stimulation or induction of exogenous GTPase expression. Copyright 2009 Elsevier Inc. All rights reserved.

  17. Generating Accurate Urban Area Maps from Nighttime Satellite (DMSP/OLS) Data

    NASA Technical Reports Server (NTRS)

    Imhoff, Marc; Lawrence, William; Elvidge, Christopher

    2000-01-01

    There has been increasing interest in the international research community in using the nighttime "city-lights" data sets collected by the US Defense Meteorological Satellite Program's Operational Linescan System to study issues related to urbanization. Many researchers are interested in using these data to estimate human demographic parameters over large areas and then characterize the interactions between urban development, natural ecosystems, and other aspects of the human enterprise. Many of these attempts rely on an ability to accurately identify urbanized area. However, beyond the simple determination of the loci of human activity, using these data to generate accurate estimates of urbanized area can be problematic. Sensor blooming and registration error can cause large overestimates of urban land based on a simple measure of lit area from the raw data. We discuss these issues, show results of an attempt to model historical urban growth in Egypt, and then describe a few basic processing techniques that use geo-spatial analysis to threshold the DMSP data to accurately estimate urbanized areas. Algorithm results are shown for the United States, and an application of the data to estimate the impact of urban sprawl on sustainable agriculture in the US and China is described.

  18. AQUATOX Data Sources Documents

    EPA Pesticide Factsheets

    Contains the data sources for parameter values of the AQUATOX model including: a bibliography for the AQUATOX data libraries and the compendia of parameter values for US Army Corps of Engineers models.

  19. SEISMIC SOURCE SCALING AND DISCRIMINATION IN DIVERSE TECTONIC ENVIRONMENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abercrombie, R E; Mayeda, K; Walter, W R

    2008-07-08

    The objectives of this study are to improve low-magnitude (concentrating on M2.5-5) regional seismic discrimination by performing a thorough investigation of earthquake source scaling using diverse, high-quality datasets from varied tectonic regions. Local-to-regional high-frequency discrimination requires an estimate of how earthquakes scale with size. Walter and Taylor (2002) developed the MDAC (Magnitude and Distance Amplitude Corrections) method to empirically account for these effects through regional calibration. The accuracy of these corrections has a direct impact on our ability to identify clandestine explosions in the broad regional areas characterized by low seismicity. Unfortunately our knowledge at small magnitudes (i.e., m_b < ~4.0) is poorly resolved, and source scaling remains a subject of on-going debate in the earthquake seismology community. Recently there have been a number of empirical studies suggesting that the scaling of micro-earthquakes is non-self-similar, yet there are an equal number of compelling studies that would suggest otherwise. It is not clear whether different studies obtain different results because they analyze different earthquakes or because they use different methods. Even in regions that are well studied, such as test sites or areas of high seismicity, we still rely on empirical scaling relations derived from studies of inter-plate regions half-way around the world. We investigate earthquake sources and scaling in different tectonic settings, comparing direct and coda wave analysis methods that both make use of empirical Green's function (EGF) earthquakes to remove path effects. Analysis of locally recorded direct waves is intuitively the simplest way of obtaining accurate source parameters, as these waves have been least affected by travel through the earth. But finding well-recorded earthquakes with 'perfect' EGF events for direct wave analysis is difficult, limits the number of

  20. Parameter optimization for surface flux transport models

    NASA Astrophysics Data System (ADS)

    Whitbread, T.; Yeates, A. R.; Muñoz-Jaramillo, A.; Petrie, G. J. D.

    2017-11-01

    Accurate prediction of solar activity calls for precise calibration of solar cycle models. Consequently we aim to find optimal parameters for models which describe the physical processes on the solar surface, which in turn act as proxies for what occurs in the interior and provide source terms for coronal models. We use a genetic algorithm to optimize surface flux transport models using National Solar Observatory (NSO) magnetogram data for Solar Cycle 23. This is applied to both a 1D model that inserts new magnetic flux in the form of idealized bipolar magnetic regions, and also to a 2D model that assimilates specific shapes of real active regions. The genetic algorithm searches for parameter sets (meridional flow speed and profile, supergranular diffusivity, initial magnetic field, and radial decay time) that produce the best fit between observed and simulated butterfly diagrams, weighted by a latitude-dependent error structure which reflects uncertainty in observations. Due to the easily adaptable nature of the 2D model, the optimization process is repeated for Cycles 21, 22, and 24 in order to analyse cycle-to-cycle variation of the optimal solution. We find that the ranges and optimal solutions for the various regimes are in reasonable agreement with results from the literature, both theoretical and observational. The optimal meridional flow profiles for each regime are almost entirely within observational bounds determined by magnetic feature tracking, with the 2D model being able to accommodate the mean observed profile more successfully. Differences between models appear to be important in deciding values for the diffusive and decay terms. In like fashion, differences in the behaviours of different solar cycles lead to contrasts in parameters defining the meridional flow and initial field strength.
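
    A minimal real-coded genetic algorithm of the kind used to search the parameter space (meridional flow speed, supergranular diffusivity, etc.) might look as follows; the operators, population settings, and toy misfit are illustrative assumptions, not the authors' implementation:

```python
import random
random.seed(1)

def genetic_optimize(fitness, bounds, pop_size=40, generations=60,
                     mutation=0.1, elite=4):
    """Minimal real-coded genetic algorithm: truncation selection,
    uniform crossover, Gaussian mutation, with elitism.  A sketch of the
    kind of search used to calibrate flux transport model parameters."""
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                  # best (lowest misfit) first
        parents = pop[:pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - elite:
            a, b = random.sample(parents, 2)
            # uniform crossover: each gene drawn from either parent
            child = [ai if random.random() < 0.5 else bi for ai, bi in zip(a, b)]
            for i, (lo, hi) in enumerate(bounds):
                if random.random() < mutation:     # Gaussian mutation, clamped
                    child[i] += random.gauss(0, 0.1 * (hi - lo))
                    child[i] = min(max(child[i], lo), hi)
            children.append(child)
        pop = pop[:elite] + children
    return min(pop, key=fitness)

# Toy misfit: recover a "flow speed" of 11 and a "diffusivity" of 450
target = (11.0, 450.0)
misfit = lambda p: (p[0] - target[0])**2 + ((p[1] - target[1]) / 50)**2
best = genetic_optimize(misfit, [(0, 30), (100, 1000)])
```

    In the study itself the misfit is a weighted difference between observed and simulated butterfly diagrams rather than this toy quadratic, but the population mechanics are of this general form.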

  1. Accurate evaluation for the biofilm-activated sludge reactor using graphical techniques

    NASA Astrophysics Data System (ADS)

    Fouad, Moharram; Bhargava, Renu

    2018-05-01

    A complete graphical solution is obtained for the completely mixed biofilm-activated sludge reactor (hybrid reactor). The solution consists of a series of curves deduced from the principal equations of the hybrid system after converting them to dimensionless form. The curves estimate the basic parameters of the hybrid system, such as suspended biomass concentration, sludge residence time, wasted mass of sludge, and food-to-biomass ratio. All of these parameters can be expressed as functions of hydraulic retention time, influent substrate concentration, substrate concentration in the bulk, stagnant liquid layer thickness, and the minimum substrate concentration that can maintain biofilm growth, in addition to the basic kinetics of the activated sludge process, with all variables expressed in dimensionless form. Compared to other solutions of such a system, these curves are simple, easy to use, and provide an accurate tool for analyzing the system from fundamental principles. Further, the curves may be used as a quick tool to assess the effect of a change in one variable on the other parameters and on the whole system.

  2. An Accurate and Generic Testing Approach to Vehicle Stability Parameters Based on GPS and INS.

    PubMed

    Miao, Zhibin; Zhang, Hongtian; Zhang, Jinzhu

    2015-12-04

    With the development of the vehicle industry, stability control has become more and more important, and techniques for evaluating vehicle stability are in high demand. A common approach is to measure vehicle stability parameters by fusing data from GPS and INS sensors; a Kalman filter is usually used for this multi-sensor fusion, although it requires prior knowledge of the model parameters. In this paper, a robust, intelligent and precise method for measuring vehicle stability is proposed. First, a fuzzy interpolation method is proposed, along with a four-wheel vehicle dynamic model. Second, a two-stage Kalman filter, which fuses the data from GPS and INS, is established. Next, the approach is applied to a case-study vehicle to measure yaw rate and sideslip angle. Finally, simulations and a real experiment are carried out to verify the approach. The experimental results show the merits of this method for measuring vehicle stability, and the approach can meet the design requirements of a vehicle stability controller.
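The GPS/INS fusion idea can be sketched with a minimal linear Kalman filter: high-rate (biased) INS acceleration drives the prediction, and low-rate GPS position fixes correct the accumulated drift. This is a one-dimensional sketch with synthetic values, not the paper's two-stage filter or four-wheel dynamic model.

```python
# Minimal loosely-coupled GPS/INS fusion with a linear Kalman filter.
# State is [position, velocity]; the INS accelerometer (with an
# uncorrected bias) drives the prediction, GPS position corrects it.

DT = 0.01                      # INS step (100 Hz)
GPS_EVERY = 100                # GPS update at 1 Hz
ACCEL_TRUE = 1.0               # true constant acceleration (m/s^2)
INS_BIAS = 0.2                 # uncorrected accelerometer bias (m/s^2)

x = [0.0, 0.0]                 # state estimate [pos, vel]
P = [[1.0, 0.0], [0.0, 1.0]]   # covariance (2x2 as nested lists)
Q = 0.05                       # process noise, absorbs the INS bias
R = 0.25                       # GPS position noise variance

true_pos = true_vel = 0.0
for step in range(1, 501):     # 5 s of motion
    true_vel += ACCEL_TRUE * DT
    true_pos += true_vel * DT
    # --- predict with the (biased) INS acceleration ---
    a = ACCEL_TRUE + INS_BIAS
    x = [x[0] + x[1] * DT + 0.5 * a * DT * DT, x[1] + a * DT]
    # P <- F P F^T + Q*dt, with F = [[1, DT], [0, 1]]
    p00 = P[0][0] + DT * (P[1][0] + P[0][1]) + DT * DT * P[1][1] + Q * DT
    p01 = P[0][1] + DT * P[1][1]
    p10 = P[1][0] + DT * P[1][1]
    p11 = P[1][1] + Q * DT
    P = [[p00, p01], [p10, p11]]
    # --- correct with a GPS position fix, H = [1, 0] ---
    if step % GPS_EVERY == 0:
        z = true_pos                         # ideal fix, noise-free for clarity
        s = P[0][0] + R
        k0, k1 = P[0][0] / s, P[1][0] / s    # Kalman gain
        innov = z - x[0]
        x = [x[0] + k0 * innov, x[1] + k1 * innov]
        P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
             [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]

print(round(abs(x[0] - true_pos), 3))  # position error stays small despite the bias
```

The same predict/correct structure carries over to the paper's setting, where the state vector holds yaw rate and sideslip angle instead of position and velocity.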

  3. An Accurate and Generic Testing Approach to Vehicle Stability Parameters Based on GPS and INS

    PubMed Central

    Miao, Zhibin; Zhang, Hongtian; Zhang, Jinzhu

    2015-01-01

    With the development of the vehicle industry, stability control has become more and more important, and techniques for evaluating vehicle stability are in high demand. A common approach is to measure vehicle stability parameters by fusing data from GPS and INS sensors; a Kalman filter is usually used for this multi-sensor fusion, although it requires prior knowledge of the model parameters. In this paper, a robust, intelligent and precise method for measuring vehicle stability is proposed. First, a fuzzy interpolation method is proposed, along with a four-wheel vehicle dynamic model. Second, a two-stage Kalman filter, which fuses the data from GPS and INS, is established. Next, the approach is applied to a case-study vehicle to measure yaw rate and sideslip angle. Finally, simulations and a real experiment are carried out to verify the approach. The experimental results show the merits of this method for measuring vehicle stability, and the approach can meet the design requirements of a vehicle stability controller. PMID:26690154

  4. Improved mapping of radio sources from VLBI data by least-square fit

    NASA Technical Reports Server (NTRS)

    Rodemich, E. R.

    1985-01-01

    A method is described for producing improved maps of radio sources from Very Long Baseline Interferometry (VLBI) data. The method is more direct than existing Fourier methods, is often more accurate, and runs at least as fast. The visibility data are modeled here, as in existing methods, as a function of the unknown brightness distribution and the unknown antenna gains and phases. These unknowns are chosen so that the resulting function values are as near as possible to the observed values. If the mean-square deviation is used to measure the closeness of this fit to the observed values, we are led to the problem of minimizing a certain function of all the unknown parameters. This minimization problem cannot be solved directly, but it can be attacked by iterative methods which we show converge automatically to the minimum with no user intervention. The resulting brightness distribution furnishes the best fit to the data among all brightness distributions of the given resolution.

  5. Accurate detection of hierarchical communities in complex networks based on nonlinear dynamical evolution

    NASA Astrophysics Data System (ADS)

    Zhuo, Zhao; Cai, Shi-Min; Tang, Ming; Lai, Ying-Cheng

    2018-04-01

    One of the most challenging problems in network science is to accurately detect communities at distinct hierarchical scales. Most existing methods are based on structural analysis and manipulation, which are NP-hard. We articulate an alternative, dynamical-evolution-based approach to the problem. The basic principle is to computationally implement a nonlinear dynamical process on all nodes in the network with a general coupling scheme, creating a networked dynamical system. Under a proper system setting and with an adjustable control parameter, the community structure of the network would "come out" or emerge naturally from the dynamical evolution of the system. As the control parameter is systematically varied, the community hierarchies at different scales can be revealed. As a concrete example of this general principle, we exploit clustered synchronization as a dynamical mechanism through which the hierarchical community structure can be uncovered. In particular, for quite arbitrary choices of the nonlinear nodal dynamics and coupling scheme, decreasing the coupling parameter from the global synchronization regime, in which the dynamical states of all nodes are perfectly synchronized, can lead to a weaker type of synchronization organized as clusters. We demonstrate the existence of optimal choices of the coupling parameter for which the synchronization clusters encode accurate information about the hierarchical community structure of the network. We test and validate our method using a standard class of benchmark modular networks with two distinct hierarchies of communities and a number of empirical networks arising from the real world. Our method is computationally extremely efficient, eliminating completely the NP-hard difficulty associated with previous methods. The basic principle of exploiting dynamical evolution to uncover hidden community organizations at different scales represents a "game-changing" approach to the problem of community detection.
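The cluster-synchronization principle can be illustrated on a toy network (this is a generic sketch of the idea, not the authors' implementation): two planted communities of phase oscillators with strong intra-community and weak inter-community coupling. At this intermediate coupling each community phase-locks internally while drifting relative to the other, so time-averaged pairwise coherence recovers the planted partition. All parameters are illustrative.

```python
import math
import cmath

# Two planted communities of Kuramoto-style phase oscillators.
N = 10
GROUP = [0] * 5 + [1] * 5
OMEGA = [1.0 if g == 0 else 1.5 for g in GROUP]   # per-community frequency
K = 0.5                                           # overall coupling strength
DT, STEPS = 0.01, 3000                            # 30 s of Euler integration

def weight(i, j):
    # strong intra-community, weak inter-community coupling
    return 1.0 if GROUP[i] == GROUP[j] else 0.05

theta = [0.3 * i for i in range(N)]               # deterministic initial phases
coherence = [[0j] * N for _ in range(N)]
for step in range(STEPS):
    dtheta = [
        OMEGA[i] + K * sum(weight(i, j) * math.sin(theta[j] - theta[i])
                           for j in range(N) if j != i)
        for i in range(N)
    ]
    theta = [t + DT * d for t, d in zip(theta, dtheta)]
    if step >= STEPS // 2:    # accumulate pairwise coherence after the transient
        for i in range(N):
            for j in range(N):
                coherence[i][j] += cmath.exp(1j * (theta[i] - theta[j]))

# Pairs whose time-averaged phase coherence stays near 1 share a cluster.
norm = STEPS - STEPS // 2
linked = [[abs(coherence[i][j]) / norm > 0.8 for j in range(N)] for i in range(N)]
clusters, seen = [], set()
for i in range(N):
    if i not in seen:
        comp = [j for j in range(N) if linked[i][j]]
        seen.update(comp)
        clusters.append(comp)
print(clusters)   # synchronization clusters match the planted communities
```

Sweeping K downward from the fully synchronized regime and repeating the clustering step is what exposes the community hierarchy in the approach described above.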

  6. Fundamental Parameters of Main-Sequence Stars in an Instant with Machine Learning

    NASA Astrophysics Data System (ADS)

    Bellinger, Earl P.; Angelou, George C.; Hekker, Saskia; Basu, Sarbani; Ball, Warrick H.; Guggenberger, Elisabeth

    2016-10-01

    Owing to the remarkable photometric precision of space observatories like Kepler, stellar and planetary systems beyond our own are now being characterized en masse for the first time. These characterizations are pivotal for endeavors such as searching for Earth-like planets and solar twins, understanding the mechanisms that govern stellar evolution, and tracing the dynamics of our Galaxy. The volume of data that is becoming available, however, brings with it the need to process this information accurately and rapidly. While existing methods can constrain fundamental stellar parameters such as ages, masses, and radii from these observations, they require substantial computational effort to do so. We develop a method based on machine learning for rapidly estimating fundamental parameters of main-sequence solar-like stars from classical and asteroseismic observations. We first demonstrate this method on a hare-and-hound exercise and then apply it to the Sun, 16 Cyg A and B, and 34 planet-hosting candidates that have been observed by the Kepler spacecraft. We find that our estimates and their associated uncertainties are comparable to the results of other methods, but with the additional benefit of being able to explore many more stellar parameters while using much less computation time. We furthermore use this method to present evidence for an empirical diffusion-mass relation. Our method is open source and freely available for the community to use.

  7. Translating landfill methane generation parameters among first-order decay models.

    PubMed

    Krause, Max J; Chickering, Giles W; Townsend, Timothy G

    2016-11-01

    Landfill gas (LFG) generation is predicted by a first-order decay (FOD) equation that incorporates two parameters: a methane generation potential (L0) and a methane generation rate (k). Because non-hazardous waste landfills may accept many types of waste streams, multiphase models have been developed in an attempt to more accurately predict methane generation from heterogeneous waste streams. The ability of a single-phase FOD model to predict methane generation using weighted-average methane generation parameters and tonnages translated from multiphase models was assessed in two exercises. In the first exercise, waste composition from four Danish landfills represented by low-biodegradable waste streams was modeled in the Afvalzorg Multiphase Model, and methane generation was compared to the single-phase Intergovernmental Panel on Climate Change (IPCC) Waste Model and LandGEM. In the second exercise, waste composition represented by IPCC waste components was modeled in the multiphase IPCC model and compared to single-phase LandGEM and Australia's Solid Waste Calculator (SWC). In both cases, weighted averaging of methane generation parameters from waste composition data in single-phase models was effective, predicting cumulative methane generation to within -7% to +6% of the multiphase models. The results underscore the understanding that multiphase models will not necessarily improve LFG generation prediction, because the uncertainty of the method rests largely within the input parameters. A unique method of calculating the methane generation rate constant by mass of anaerobically degradable carbon (kc) was presented and compared to existing methods, providing a better fit in 3 of 8 scenarios. Generally, single-phase models with weighted-average inputs can accurately predict methane generation from multiple waste streams with varied characteristics; weighted averages should therefore be used instead of regional default values when comparing models. Translating multiphase
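The multiphase-to-single-phase translation can be sketched as follows: each waste stream decays with its own L0 and k, and a single-phase equivalent uses mass-weighted average parameters. The masses and parameter values below are illustrative, not defaults from any of the models named above.

```python
import math

# First-order decay (FOD) sketch of landfill methane generation.  For a mass
# M placed in year 0, the methane generated during year t is
#   L0 * M * (exp(-k*t) - exp(-k*(t+1)))
# so the cumulative total approaches L0 * M.  Values are illustrative.
STREAMS = [
    {"mass": 1000.0, "L0": 100.0, "k": 0.06},   # e.g. mixed MSW
    {"mass": 400.0,  "L0": 160.0, "k": 0.12},   # e.g. food-rich waste
]

def annual_methane(streams, years):
    """Yearly methane generation (m^3) from all streams placed in year 0."""
    out = []
    for t in range(years):
        q = sum(s["mass"] * s["L0"] * (math.exp(-s["k"] * t)
                                       - math.exp(-s["k"] * (t + 1)))
                for s in streams)
        out.append(q)
    return out

# Single-phase equivalent with mass-weighted average parameters, mirroring
# the translation exercise described in the abstract.
total_mass = sum(s["mass"] for s in STREAMS)
avg = [{"mass": total_mass,
        "L0": sum(s["mass"] * s["L0"] for s in STREAMS) / total_mass,
        "k":  sum(s["mass"] * s["k"] for s in STREAMS) / total_mass}]

multi = sum(annual_methane(STREAMS, 100))
single = sum(annual_methane(avg, 100))
print(round(multi), round(single))   # cumulative totals agree closely
```

Cumulative totals agree well because both approach the same total methane potential; year-by-year curves differ more, which is where the choice of k (or the kc variant discussed above) matters.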

  8. Determination of source parameters of the 2017 Mount Agung volcanic earthquake from moment-tensor inversion method using local broadband seismic waveforms

    NASA Astrophysics Data System (ADS)

    Madlazim; Prastowo, T.; Supardiyono; Hardy, T.

    2018-03-01

    Monitoring of volcanoes has been an important issue for many purposes, particularly hazard mitigation. With regard to this, the aims of the present work are to estimate and analyse source parameters of a volcanic earthquake driven by recent magmatic events of Mount Agung in Bali island that occurred on September 28, 2017. The broadband seismogram data consisting of 3 local component waveforms were recorded by the IA network of 5 seismic stations: SRBI, DNP, BYJI, JAGI, and TWSI (managed by BMKG). These land-based observatories covered a full 4-quadrant region surrounding the epicenter. The methods used in the present study were seismic moment-tensor inversions, where the data were all analyzed to extract the parameters, namely moment magnitude, type of a volcanic earthquake indicated by percentages of seismic components: compensated linear vector dipole (CLVD), isotropic (ISO), double-couple (DC), and source depth. The results are given in the forms of variance reduction of 65%, a magnitude of M W 3.6, a CLVD of 40%, an ISO of 33%, a DC of 27% and a centroid-depth of 9.7 km. These suggest that the unusual earthquake was dominated by a vertical CLVD component, implying the dominance of uplift motion of magmatic fluid flow inside the volcano.

  9. Seismogeodesy and Rapid Earthquake and Tsunami Source Assessment

    NASA Astrophysics Data System (ADS)

    Melgar Moctezuma, Diego

    This dissertation presents an optimal combination algorithm for strong motion seismograms and regional high-rate GPS recordings. This seismogeodetic solution produces estimates of ground motion that recover the whole seismic spectrum, from the permanent deformation to the Nyquist frequency of the accelerometer. The algorithm is demonstrated and evaluated through outdoor shake table tests and recordings of large earthquakes, notably the 2010 Mw 7.2 El Mayor-Cucapah earthquake and the 2011 Mw 9.0 Tohoku-oki events. This dissertation will also show that strong motion velocity and displacement data obtained from the seismogeodetic solution can be instrumental to quickly determine basic parameters of the earthquake source. We will show how GPS and seismogeodetic data can produce rapid estimates of centroid moment tensors, static slip inversions, and most importantly, kinematic slip inversions. Throughout the dissertation special emphasis will be placed on how to compute these source models with minimal interaction from a network operator. Finally we will show that the incorporation of off-shore data such as ocean-bottom pressure and RTK-GPS buoys can better constrain the shallow slip of large subduction events. We will demonstrate through numerical simulations of tsunami propagation that the earthquake sources derived from the seismogeodetic and ocean-based sensors are detailed enough to provide a timely and accurate assessment of expected tsunami intensity immediately following a large earthquake.

  10. Branch and bound algorithm for accurate estimation of analytical isotropic bidirectional reflectance distribution function models.

    PubMed

    Yu, Chanki; Lee, Sang Wook

    2016-05-20

    We present a reliable and accurate global optimization framework for estimating parameters of isotropic analytical bidirectional reflectance distribution function (BRDF) models. This approach is based on a branch and bound strategy with linear programming and interval analysis. Conventional local optimization is often very inefficient for BRDF estimation, since its fitting quality is highly dependent on initial guesses due to the nonlinearity of analytical BRDF models. The algorithm presented in this paper employs L1-norm error minimization to estimate BRDF parameters in a globally optimal way, and interval arithmetic to derive the feasibility problem and lower bounding function. Our method is developed for the Cook-Torrance model combined with several normal distribution functions, such as the Beckmann, Berry, and GGX functions. Experiments have been carried out to validate the presented method using 100 isotropic materials from the MERL BRDF database, and our experimental results demonstrate that the L1-norm minimization provides a more accurate and reliable solution than the L2-norm minimization.
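The branch-and-bound idea can be shown on a one-parameter toy problem: interval bounds on the residuals give a valid lower bound for the L1 error over each box, midpoint evaluations give upper bounds, and boxes that cannot beat the incumbent are pruned. The real method operates on multi-parameter BRDF models; this sketch fits y = p*x and is purely illustrative.

```python
import heapq

# Toy globally optimal L1 fit of y = p*x by branch and bound with
# interval-based lower bounds (1-D stand-in for the BRDF setting).
XS = [0.5, 1.0, 2.0, 3.0, 4.0]
P_TRUE = 3.7
YS = [P_TRUE * x for x in XS]

def l1_error(p):
    return sum(abs(y - p * x) for x, y in zip(XS, YS))

def lower_bound(plo, phi):
    """Valid lower bound of the L1 error over the interval [plo, phi]."""
    total = 0.0
    for x, y in zip(XS, YS):
        # residual y - p*x ranges over [y - phi*x, y - plo*x] since x >= 0
        rlo, rhi = y - phi * x, y - plo * x
        if rlo <= 0.0 <= rhi:
            continue                      # residual can vanish inside the box
        total += min(abs(rlo), abs(rhi))
    return total

def branch_and_bound(plo=0.0, phi=10.0, tol=1e-6):
    best_p = 0.5 * (plo + phi)
    best_err = l1_error(best_p)
    heap = [(lower_bound(plo, phi), plo, phi)]
    while heap:
        lb, a, b = heapq.heappop(heap)
        if lb > best_err - tol:
            continue                      # box cannot beat the incumbent
        mid = 0.5 * (a + b)
        err = l1_error(mid)               # midpoint gives an upper bound
        if err < best_err:
            best_err, best_p = err, mid
        for lo, hi in ((a, mid), (mid, b)):
            lb_child = lower_bound(lo, hi)
            if lb_child < best_err - tol:
                heapq.heappush(heap, (lb_child, lo, hi))
    return best_p, best_err

p, err = branch_and_bound()
print(p, err)   # converges toward p = 3.7 with near-zero L1 error
```

Because the lower bound is valid on every box, no part of the parameter space is discarded incorrectly, which is what gives the global-optimality guarantee that local optimizers lack.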

  11. Parameter Estimation for Viscoplastic Material Modeling

    NASA Technical Reports Server (NTRS)

    Saleeb, Atef F.; Gendy, Atef S.; Wilt, Thomas E.

    1997-01-01

    A key ingredient in the design of engineering components and structures under general thermomechanical loading is the use of mathematical constitutive models (e.g. in finite element analysis) capable of accurate representation of short and long term stress/deformation responses. Recent viscoplastic models of this type are increasingly complex, and they often require a large number of material constants to describe a host of (anticipated) physical phenomena and complicated deformation mechanisms. In turn, the experimental characterization of these material parameters constitutes the major factor in the successful and effective utilization of any given constitutive model; i.e., the problem of constitutive parameter estimation from experimental measurements.

  12. Key parameters for behaviour related to source separation of household organic waste: A case study in Hanoi, Vietnam.

    PubMed

    Kawai, Kosuke; Huong, Luong Thi Mai

    2017-03-01

    Proper management of food waste, a major component of municipal solid waste (MSW), is needed, especially in developing Asian countries where most MSW is disposed of in landfill sites without any pretreatment. Source separation can contribute to solving problems derived from the disposal of food waste. An organic waste source separation and collection programme has been operated in model areas in Hanoi, Vietnam, since 2007. This study proposed three key parameters (participation rate, proper separation rate and proper discharge rate) for behaviour related to source separation of household organic waste, and monitored the progress of the programme based on the physical composition of household waste sampled from 558 households in model programme areas of Hanoi. The results showed that 13.8% of 558 households separated organic waste, and 33.0% discharged mixed (unseparated) waste improperly. About 41.5% (by weight) of the waste collected as organic waste was contaminated by inorganic waste, and one-third of the waste disposed of as organic waste by separators was inorganic waste. We proposed six hypothetical future household behaviour scenarios to help local officials identify a final or midterm goal for the programme. We also suggested that the city government take further actions to increase the number of people participating in separating organic waste, improve the accuracy of separation and prevent non-separators from discharging mixed waste improperly.

  13. The accurate particle tracer code

    NASA Astrophysics Data System (ADS)

    Wang, Yulei; Liu, Jian; Qin, Hong; Yu, Zhi; Yao, Yicun

    2017-11-01

    The Accurate Particle Tracer (APT) code is designed for systematic large-scale applications of geometric algorithms for particle dynamical simulations. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and nonlinear problems. To provide a flexible and convenient I/O interface, the libraries of Lua and Hdf5 are used. Following a three-step procedure, users can efficiently extend the libraries of electromagnetic configurations, external non-electromagnetic forces, particle pushers, and initialization approaches by use of the extendible module. APT has been used in simulations of key physical problems, such as runaway electrons in tokamaks and energetic particles in the Van Allen belt. As an important realization, the APT-SW version has been successfully deployed on the world's fastest computer, the Sunway TaihuLight supercomputer, supporting the master-slave architecture of Sunway many-core processors. Based on large-scale simulations of a runaway beam under parameters of the ITER tokamak, it is revealed that the magnetic ripple field can disperse the pitch-angle distribution significantly and improve the confinement of the energetic runaway beam at the same time.

  14. The accurate particle tracer code

    DOE PAGES

    Wang, Yulei; Liu, Jian; Qin, Hong; ...

    2017-07-20

    The Accurate Particle Tracer (APT) code is designed for systematic large-scale applications of geometric algorithms for particle dynamical simulations. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and nonlinear problems. To provide a flexible and convenient I/O interface, the libraries of Lua and Hdf5 are used. Following a three-step procedure, users can efficiently extend the libraries of electromagnetic configurations, external non-electromagnetic forces, particle pushers, and initialization approaches by use of the extendible module. APT has been used in simulations of key physical problems, such as runaway electrons in tokamaks and energetic particles in the Van Allen belt. As an important realization, the APT-SW version has been successfully deployed on the world’s fastest computer, the Sunway TaihuLight supercomputer, supporting the master–slave architecture of Sunway many-core processors. Here, based on large-scale simulations of a runaway beam under parameters of the ITER tokamak, it is revealed that the magnetic ripple field can disperse the pitch-angle distribution significantly and improve the confinement of the energetic runaway beam at the same time.

  15. The accurate particle tracer code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yulei; Liu, Jian; Qin, Hong

    The Accurate Particle Tracer (APT) code is designed for systematic large-scale applications of geometric algorithms for particle dynamical simulations. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and nonlinear problems. To provide a flexible and convenient I/O interface, the libraries of Lua and Hdf5 are used. Following a three-step procedure, users can efficiently extend the libraries of electromagnetic configurations, external non-electromagnetic forces, particle pushers, and initialization approaches by use of the extendible module. APT has been used in simulations of key physical problems, such as runaway electrons in tokamaks and energetic particles in the Van Allen belt. As an important realization, the APT-SW version has been successfully deployed on the world’s fastest computer, the Sunway TaihuLight supercomputer, supporting the master–slave architecture of Sunway many-core processors. Here, based on large-scale simulations of a runaway beam under parameters of the ITER tokamak, it is revealed that the magnetic ripple field can disperse the pitch-angle distribution significantly and improve the confinement of the energetic runaway beam at the same time.

  16. Source levels and call parameters of harbor seal breeding vocalizations near a terrestrial haulout site in Glacier Bay National Park and Preserve.

    PubMed

    Matthews, Leanna P; Parks, Susan E; Fournet, Michelle E H; Gabriele, Christine M; Womble, Jamie N; Klinck, Holger

    2017-03-01

    Source levels of harbor seal breeding vocalizations were estimated using a three-element planar hydrophone array near the Beardslee Islands in Glacier Bay National Park and Preserve, Alaska. The average source level for these calls was 144 dB RMS re 1 μPa at 1 m in the 40-500 Hz frequency band. Source level estimates ranged from 129 to 149 dB RMS re 1 μPa. Four call parameters, including minimum frequency, peak frequency, total duration, and pulse duration, were also measured. These measurements indicated that breeding vocalizations of harbor seals near the Beardslee Islands of Glacier Bay National Park are similar in duration (average total duration: 4.8 s, average pulse duration: 3.0 s) to previously reported values from other populations, but are 170-220 Hz lower in average minimum frequency (78 Hz).
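A back-of-envelope version of the source-level calculation adds a transmission-loss term to the received level. A simple spherical-spreading loss, TL = 20*log10(r), is assumed here purely for illustration; the study's hydrophone-array localization permits a more careful propagation correction, and the numbers below are hypothetical.

```python
import math

# Source level back-calculation assuming spherical spreading:
#   SL = RL + 20*log10(r)
# with SL referenced to 1 m.  Range and received level are made up.
def source_level(received_db, range_m):
    """Source level in dB re 1 uPa at 1 m, assuming spherical spreading."""
    return received_db + 20.0 * math.log10(range_m)

# e.g. a call received at 120 dB re 1 uPa from a seal 16 m away
print(round(source_level(120.0, 16.0), 1))
```

In practice the range to the caller comes from localizing the call on the array, and the spreading model would be validated (or replaced) for the shallow-water environment.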

  17. BluePyOpt: Leveraging Open Source Software and Cloud Infrastructure to Optimise Model Parameters in Neuroscience

    PubMed Central

    Van Geit, Werner; Gevaert, Michael; Chindemi, Giuseppe; Rössert, Christian; Courcol, Jean-Denis; Muller, Eilif B.; Schürmann, Felix; Segev, Idan; Markram, Henry

    2016-01-01

    At many scales in neuroscience, appropriate mathematical models take the form of complex dynamical systems. Parameterizing such models to conform to the multitude of available experimental constraints is a global non-linear optimisation problem with a complex fitness landscape, requiring numerical techniques to find suitable approximate solutions. Stochastic optimisation approaches, such as evolutionary algorithms, have been shown to be effective, but often the setting up of such optimisations and the choice of a specific search algorithm and its parameters is non-trivial, requiring domain-specific expertise. Here we describe BluePyOpt, a Python package targeted at the broad neuroscience community to simplify this task. BluePyOpt is an extensible framework for data-driven model parameter optimisation that wraps and standardizes several existing open-source tools. It simplifies the task of creating and sharing these optimisations, and the associated techniques and knowledge. This is achieved by abstracting the optimisation and evaluation tasks into various reusable and flexible discrete elements according to established best-practices. Further, BluePyOpt provides methods for setting up both small- and large-scale optimisations on a variety of platforms, ranging from laptops to Linux clusters and cloud-based compute infrastructures. The versatility of the BluePyOpt framework is demonstrated by working through three representative neuroscience specific use cases. PMID:27375471

  18. Accurate Projection Methods for the Incompressible Navier–Stokes Equations

    DOE PAGES

    Brown, David L.; Cortez, Ricardo; Minion, Michael L.

    2001-04-10

    This paper considers the accuracy of projection method approximations to the initial–boundary-value problem for the incompressible Navier–Stokes equations. The issue of how to correctly specify numerical boundary conditions for these methods has been outstanding since the birth of the second-order methodology a decade and a half ago. It has been observed that while the velocity can be reliably computed to second-order accuracy in time and space, the pressure is typically only first-order accurate in the L∞-norm. Here, we identify the source of this problem in the interplay of the global pressure-update formula with the numerical boundary conditions, and present an improved projection algorithm which is fully second-order accurate, as demonstrated by a normal mode analysis and numerical experiments. In addition, a numerical method based on a gauge variable formulation of the incompressible Navier–Stokes equations, which provides another option for obtaining fully second-order convergence in both velocity and pressure, is discussed. The connection between the boundary conditions for projection methods and the gauge method is explained in detail.

  19. The circuit parameters measurement of the SABALAN-I plasma focus facility and comparison with Lee Model

    NASA Astrophysics Data System (ADS)

    Karimi, F. S.; Saviz, S.; Ghoranneviss, M.; Salem, M. K.; Aghamir, F. M.

    The circuit parameters are investigated in a Mather-type plasma focus device. The experiments are performed in the SABALAN-I plasma focus facility (2 kJ, 20 kV, 10 μF). A 12-turn Rogowski coil was built and used to measure the time derivative of the discharge current (dI/dt). A high-pressure test was performed in this work as an alternative to the short-circuit test for determining the machine circuit parameters and the calibration factor of the Rogowski coil. The operating parameters were calculated by two methods, and the results show that the relative errors of the parameters determined by method I are very low in comparison to method II; thus method I produces more accurate results. The high-pressure test assumes that there is no plasma motion, so that the circuit parameters may be estimated using R-L-C theory given that C0 is known. However, for a plasma focus there is significant motion even at the highest permissible pressure, so the circuit parameters estimated in this way are not accurate. Therefore, the Lee Model code was used in short-circuit mode to generate a computed current trace for fitting to the current waveform integrated from the current-derivative signal taken with the Rogowski coil. Hence, the dynamics of the plasma is accounted for in the estimation, and the static bank parameters are determined accurately.
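The short-circuit-style inversion of the bank parameters can be sketched from R-L-C ringing theory: the measured ringing period T and the ratio of successive current peaks give the damping rate, from which L and R follow once C0 is known. The component values below are typical plasma-focus magnitudes, not SABALAN-I measurements.

```python
import math

# Estimating static bank parameters from a damped ringing discharge.
C0 = 10e-6            # known bank capacitance (F)
L_TRUE = 100e-9       # inductance to recover (H)
R_TRUE = 10e-3        # resistance to recover (ohm)

# Forward model of the damped R-L-C discharge: I(t) ~ exp(-alpha*t)*sin(w_d*t).
alpha = R_TRUE / (2.0 * L_TRUE)                  # damping rate
omega0 = 1.0 / math.sqrt(L_TRUE * C0)            # undamped angular frequency
omega_d = math.sqrt(omega0**2 - alpha**2)        # ringing frequency
T = 2.0 * math.pi / omega_d                      # "measured" ringing period
peak_ratio = math.exp(alpha * T)                 # successive peak ratio, one T apart

# Inversion from the measured T and peak ratio, given C0.
alpha_est = math.log(peak_ratio) / T
omega_d_est = 2.0 * math.pi / T
omega0_sq = omega_d_est**2 + alpha_est**2
L_est = 1.0 / (omega0_sq * C0)
R_est = 2.0 * alpha_est * L_est

print(L_est, R_est)   # recovers L_TRUE and R_TRUE
```

This is exactly the step that breaks down when plasma motion contributes a time-varying inductance, which is why the abstract falls back on fitting the Lee Model current trace instead.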

  20. SEISMIC SOURCE SCALING AND DISCRIMINATION IN DIVERSE TECTONIC ENVIRONMENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abercrombie, R E; Mayeda, K; Walter, W R

    2007-07-10

    The objectives of this study are to improve low-magnitude regional seismic discrimination by performing a thorough investigation of earthquake source scaling using diverse, high-quality datasets from varied tectonic regions. Local-to-regional high-frequency discrimination requires an estimate of how earthquakes scale with size. Walter and Taylor (2002) developed the MDAC (Magnitude and Distance Amplitude Corrections) method to empirically account for these effects through regional calibration. The accuracy of these corrections has a direct impact on our ability to identify clandestine explosions in the broad regional areas characterized by low seismicity. Unfortunately, our knowledge of source scaling at small magnitudes (i.e., m_b < ~4.0) is poorly resolved. It is not clear whether different studies obtain contradictory results because they analyze different earthquakes, or because they use different methods. Even in regions that are well studied, such as test sites or areas of high seismicity, we still rely on empirical scaling relations derived from studies from halfway around the world in inter-plate regions. We investigate earthquake sources and scaling from different tectonic settings, comparing direct- and coda-wave analysis methods. We begin by developing and improving the two different methods, and in future years we will apply them both to each set of earthquakes. Analysis of locally recorded direct waves from events is intuitively the simplest way of obtaining accurate source parameters, as these waves have been least affected by travel through the earth. But there are only a limited number of earthquakes that are recorded locally by sufficient stations to give good azimuthal coverage and that have very closely located smaller earthquakes that can be used as an empirical Green's function (EGF) to remove path effects. In contrast, coda waves average radiation from all directions, so single-station records should be adequate, and

  1. Accurate assessment and identification of naturally occurring cellular cobalamins.

    PubMed

    Hannibal, Luciana; Axhemi, Armend; Glushchenko, Alla V; Moreira, Edward S; Brasch, Nicola E; Jacobsen, Donald W

    2008-01-01

    Accurate assessment of cobalamin profiles in human serum, cells, and tissues may have clinical diagnostic value. However, non-alkyl forms of cobalamin undergo beta-axial ligand exchange reactions during extraction, which leads to inaccurate profiles having little or no diagnostic value. Experiments were designed to: 1) assess beta-axial ligand exchange chemistry during the extraction and isolation of cobalamins from cultured bovine aortic endothelial cells, human foreskin fibroblasts, and human hepatoma HepG2 cells, and 2) establish extraction conditions that would provide a more accurate assessment of endogenous forms containing both exchangeable and non-exchangeable beta-axial ligands. The cobalamin profile of cells grown in the presence of [57Co]-cyanocobalamin as a source of vitamin B12 shows that the following derivatives are present: [57Co]-aquacobalamin, [57Co]-glutathionylcobalamin, [57Co]-sulfitocobalamin, [57Co]-cyanocobalamin, [57Co]-adenosylcobalamin, [57Co]-methylcobalamin, as well as other as yet unidentified corrinoids. When the extraction is performed in the presence of excess cold aquacobalamin acting as a scavenger cobalamin (i.e., "cold trapping"), the recovery of both [57Co]-glutathionylcobalamin and [57Co]-sulfitocobalamin decreases to low but consistent levels. In contrast, the [57Co]-nitrocobalamin observed in extracts prepared without excess aquacobalamin is undetectable in extracts prepared with cold trapping. This demonstrates that beta-ligand exchange occurs with non-covalently bound beta-ligands. The exception to this observation is cyanocobalamin, with a non-exchangeable CN- group. It is now possible to obtain accurate profiles of cellular cobalamins.

  2. Accurate assessment and identification of naturally occurring cellular cobalamins

    PubMed Central

    Hannibal, Luciana; Axhemi, Armend; Glushchenko, Alla V.; Moreira, Edward S.; Brasch, Nicola E.; Jacobsen, Donald W.

    2009-01-01

    Background Accurate assessment of cobalamin profiles in human serum, cells, and tissues may have clinical diagnostic value. However, non-alkyl forms of cobalamin undergo β-axial ligand exchange reactions during extraction, which leads to inaccurate profiles having little or no diagnostic value. Methods Experiments were designed to: 1) assess β-axial ligand exchange chemistry during the extraction and isolation of cobalamins from cultured bovine aortic endothelial cells, human foreskin fibroblasts, and human hepatoma HepG2 cells, and 2) establish extraction conditions that would provide a more accurate assessment of endogenous forms containing both exchangeable and non-exchangeable β-axial ligands. Results The cobalamin profile of cells grown in the presence of [57Co]-cyanocobalamin as a source of vitamin B12 shows that the following derivatives are present: [57Co]-aquacobalamin, [57Co]-glutathionylcobalamin, [57Co]-sulfitocobalamin, [57Co]-cyanocobalamin, [57Co]-adenosylcobalamin, [57Co]-methylcobalamin, as well as other yet unidentified corrinoids. When the extraction is performed in the presence of excess cold aquacobalamin acting as a scavenger cobalamin (i.e., “cold trapping”), the recovery of both [57Co]-glutathionylcobalamin and [57Co]-sulfitocobalamin decreases to low but consistent levels. In contrast, the [57Co]-nitrocobalamin observed in extracts prepared without excess aquacobalamin is undetectable in extracts prepared with cold trapping. Conclusions This demonstrates that β-ligand exchange occurs with non-covalently bound β-ligands. The exception to this observation is cyanocobalamin with a non-covalent but non-exchangeable CN− group. It is now possible to obtain accurate profiles of cellular cobalamins. PMID:18973458

  3. NNLOPS accurate associated HW production

    NASA Astrophysics Data System (ADS)

    Astill, William; Bizon, Wojciech; Re, Emanuele; Zanderighi, Giulia

    2016-06-01

    We present a next-to-next-to-leading order accurate description of associated HW production consistently matched to a parton shower. The method is based on reweighting events obtained with the HW plus one jet NLO accurate calculation implemented in POWHEG, extended with the MiNLO procedure, to reproduce NNLO accurate Born distributions. Since the Born kinematics is more complex than the cases treated before, we use a parametrization of the Collins-Soper angles to reduce the number of variables required for the reweighting. We present phenomenological results at 13 TeV, with cuts suggested by the Higgs Cross Section Working Group.

  4. Optimal Inversion Parameters for Full Waveform Inversion using OBS Data Set

    NASA Astrophysics Data System (ADS)

    Kim, S.; Chung, W.; Shin, S.; Kim, D.; Lee, D.

    2017-12-01

    In recent years, Full Waveform Inversion (FWI) has been among the most actively researched techniques in seismic data processing. It uses the residuals between observed and modeled data as an objective function; thereafter, the final subsurface velocity model is generated through a series of iterations meant to minimize the residuals. Research on FWI has expanded from acoustic media to elastic media. In acoustic media, the subsurface property is defined by P-velocity; however, in elastic media, properties are defined by multiple parameters, such as P-velocity, S-velocity, and density. Further, elastic media can also be defined by the Lamé constants and density, or by the P- and S-impedances (PI, SI); consequently, research is being carried out to ascertain the optimal parameters. With advanced exploration equipment and Ocean Bottom Seismic (OBS) surveys, it is now possible to obtain multi-component seismic data. However, to perform FWI on these data and generate an accurate subsurface model, it is important to determine the optimal inversion parameters among (Vp, Vs, ρ), (λ, μ, ρ), and (PI, SI) in elastic media. In this study, the staggered-grid finite difference method was applied to simulate an OBS survey. For the inversion, the l2-norm was set as the objective function. Further, the accurate computation of the gradient direction was performed using the back-propagation technique, and its scaling was done using the pseudo-Hessian matrix. In acoustic media, only Vp is used as the inversion parameter. In contrast, various sets of parameters, such as (Vp, Vs, ρ) and (λ, μ, ρ), can be used to define inversion in elastic media. Therefore, it is important to ascertain the parameter set that gives the most accurate result for inversion with an OBS data set. In this study, we generated Vp and Vs subsurface models by using (λ, μ, ρ) and (Vp, Vs, ρ) as inversion parameters in every iteration, and compared the final two FWI results. This research was supported by the Basic Research Project (17-3312) of the Korea Institute of
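    The parameter sets compared in the study are related by closed-form expressions, λ = ρ(Vp² − 2Vs²) and μ = ρVs²; a minimal converter between the two parameterizations (the crustal values used below are illustrative, not from the study):

```python
import numpy as np

def velocities_to_lame(vp, vs, rho):
    """Convert (Vp, Vs, rho) to the Lame parameterization (lam, mu, rho)."""
    mu = rho * vs**2                      # shear modulus
    lam = rho * (vp**2 - 2.0 * vs**2)     # first Lame parameter
    return lam, mu, rho

def lame_to_velocities(lam, mu, rho):
    """Convert (lam, mu, rho) back to seismic velocities."""
    vp = np.sqrt((lam + 2.0 * mu) / rho)
    vs = np.sqrt(mu / rho)
    return vp, vs, rho

# Illustrative crustal values: Vp = 5800 m/s, Vs = 3200 m/s, rho = 2700 kg/m^3
lam, mu, rho = velocities_to_lame(5800.0, 3200.0, 2700.0)
vp, vs, _ = lame_to_velocities(lam, mu, rho)
print(f"lam = {lam:.3e} Pa, mu = {mu:.3e} Pa, round trip Vp = {vp:.0f}, Vs = {vs:.0f}")
```

    Because the map is an exact reparameterization, gradients (and hence inversion behavior) differ between the two sets even though the forward physics is identical.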

  5. A Simple and Accurate Rate-Driven Infiltration Model

    NASA Astrophysics Data System (ADS)

    Cui, G.; Zhu, J.

    2017-12-01

    In this study, we develop a novel Rate-Driven Infiltration Model (RDIMOD) for simulating infiltration into soils. Unlike traditional methods, RDIMOD avoids numerically solving the highly non-linear Richards equation or simply modeling with empirical parameters. RDIMOD employs infiltration rate as model input to simulate one-dimensional infiltration process by solving an ordinary differential equation. The model can simulate the evolutions of wetting front, infiltration rate, and cumulative infiltration on any surface slope including vertical and horizontal directions. Comparing to the results from the Richards equation for both vertical infiltration and horizontal infiltration, RDIMOD simply and accurately predicts infiltration processes for any type of soils and soil hydraulic models without numerical difficulty. Taking into account the accuracy, capability, and computational effectiveness and stability, RDIMOD can be used in large-scale hydrologic and land-atmosphere modeling.
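    The rate-driven idea, with the infiltration rate as model input and an ordinary differential equation advancing the wetting front, can be sketched as follows. The sharp-front mass balance and the prescribed rate below are illustrative assumptions, not the actual RDIMOD equations:

```python
import numpy as np
from scipy.integrate import solve_ivp

theta_s, theta_i = 0.45, 0.15          # saturated / initial water content (assumed)

def rate(t):
    """Prescribed infiltration rate [cm/h] used as the model *input*."""
    return 2.0 * np.exp(-0.5 * t) + 0.2

def front(t, z):
    # Sharp wetting front: (theta_s - theta_i) * dz/dt = i(t)
    return [rate(t) / (theta_s - theta_i)]

sol = solve_ivp(front, (0.0, 6.0), [0.0], dense_output=True, rtol=1e-8)
t = np.linspace(0.0, 6.0, 61)
z = sol.sol(t)[0]                      # wetting-front depth [cm]
cumulative = (theta_s - theta_i) * z   # cumulative infiltration [cm]
print(f"front depth after 6 h: {z[-1]:.2f} cm")
```

    As in the abstract, no Richards-equation solve is needed: the ODE integrates the supplied rate directly, so the scheme has no numerical difficulty for any rate history.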

  6. An Analysis of Open Source Security Software Products Downloads

    ERIC Educational Resources Information Center

    Barta, Brian J.

    2014-01-01

    Despite the continued demand for open source security software, a gap in the identification of success factors related to the success of open source security software persists. There are no studies that accurately assess the extent of this persistent gap, particularly with respect to the strength of the relationships of open source software…

  7. The accurate assessment of small-angle X-ray scattering data

    DOE PAGES

    Grant, Thomas D.; Luft, Joseph R.; Carter, Lester G.; ...

    2015-01-23

    Small-angle X-ray scattering (SAXS) has grown in popularity in recent times with the advent of bright synchrotron X-ray sources, powerful computational resources and algorithms enabling the calculation of increasingly complex models. However, the lack of standardized data-quality metrics presents difficulties for the growing user community in accurately assessing the quality of experimental SAXS data. Here, a series of metrics to quantitatively describe SAXS data in an objective manner using statistical evaluations are defined. These metrics are applied to identify the effects of radiation damage, concentration dependence and interparticle interactions on SAXS data from a set of 27 previously described targets for which high-resolution structures have been determined via X-ray crystallography or nuclear magnetic resonance (NMR) spectroscopy. Studies show that these metrics are sufficient to characterize SAXS data quality on a small sample set with statistical rigor and sensitivity similar to or better than manual analysis. The development of data-quality analysis strategies such as these initial efforts is needed to enable the accurate and unbiased assessment of SAXS data quality.

  8. Oculomatic: High speed, reliable, and accurate open-source eye tracking for humans and non-human primates.

    PubMed

    Zimmermann, Jan; Vazquez, Yuriria; Glimcher, Paul W; Pesaran, Bijan; Louie, Kenway

    2016-09-01

    Video-based noninvasive eye trackers are an extremely useful tool for many areas of research. Many open-source eye trackers are available, but current open-source systems are not designed to track eye movements with the temporal resolution required to investigate the mechanisms of oculomotor behavior. Commercial systems are available but employ closed-source hardware and software and are relatively expensive, limiting widespread use. Here we present Oculomatic, an open-source software and modular hardware solution to eye tracking for use in humans and non-human primates. Oculomatic features high temporal resolution (up to 600 Hz), real-time eye tracking with high spatial accuracy (<0.5°), and low system latency (∼1.8 ms, 0.32 ms SD) at relatively low cost. Oculomatic compares favorably to our existing scleral search-coil system while being fully noninvasive. We propose that Oculomatic can support a wide range of research into the properties and neural mechanisms of oculomotor behavior.

  9. 40 CFR 60.3043 - What operating parameter monitoring equipment must I install, and what operating parameters must...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 6 2011-07-01 2011-07-01 false What operating parameter monitoring equipment must I install, and what operating parameters must I monitor? 60.3043 Section 60.3043 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emission...

  10. 40 CFR 60.2944 - What operating parameter monitoring equipment must I install, and what operating parameters must...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 6 2011-07-01 2011-07-01 false What operating parameter monitoring equipment must I install, and what operating parameters must I monitor? 60.2944 Section 60.2944 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Operator...

  11. Comparison of Orbital Parameters for GEO Debris Predicted by LEGEND and Observed by MODEST: Can Sources of Orbital Debris be Identified?

    NASA Technical Reports Server (NTRS)

    Barker, Edwin S.; Matney, M. J.; Liou, J.-C.; Abercromby, K. J.; Rodriquez, H. M.; Seitzer, P.

    2006-01-01

    Since 2002 the National Aeronautics and Space Administration (NASA) has carried out an optical survey of the debris environment in the geosynchronous Earth orbit (GEO) region with the Michigan Orbital Debris Survey Telescope (MODEST) in Chile. The survey coverage has been similar for 4 of the 5 years, allowing us to follow the orbital evolution of Correlated Targets (CTs), both controlled and uncontrolled objects, and Un-Correlated Targets (UCTs). Under gravitational perturbations the distributions of uncontrolled objects, both CTs and UCTs, in GEO orbits will evolve in predictable patterns, particularly evident in the inclination and right ascension of the ascending node (RAAN) distributions. There are several clusters (others have used a "cloud" nomenclature) in the observed distributions that show evolution from year to year in their inclination and ascending node elements. However, when MODEST is in survey mode (field-of-view approx. 1.3 deg) it provides only short 5-8 minute orbital arcs, which can only be fit under the assumption of a circular orbit approximation (ACO) to determine the orbital parameters. These ACO elements are useful only in a statistical sense, as dedicated observing runs would be required to obtain sufficient orbital coverage to determine a set of accurate orbital elements and then to follow their evolution. Identification of the source(s) for these "clusters of UCTs" would be advantageous to the overall definition of the GEO orbital debris environment. This paper will set out to determine whether the ACO elements can be used in a statistical sense to identify the source of the "clustering of UCTs" roughly centered on an inclination of 12 deg and a RAAN of 345 deg. The breakup of the Titan 3C-4 transtage on February 21, 1992 has been modeled using NASA's LEGEND (LEO-to-GEO Environment Debris) code to generate a GEO debris cloud. Breakup fragments are created based on the NASA Standard Breakup Model (including fragment size, area-to-mass (A/M), and

  12. Optimization of physiological parameter for macroscopic modeling of reacted singlet oxygen concentration in an in-vivo model

    NASA Astrophysics Data System (ADS)

    Wang, Ken Kang-Hsin; Busch, Theresa M.; Finlay, Jarod C.; Zhu, Timothy C.

    2009-02-01

    Singlet oxygen (1O2) is generally believed to be the major cytotoxic agent during photodynamic therapy (PDT), and the reaction between 1O2 and tumor cells defines the treatment efficacy. From a complete set of the macroscopic kinetic equations which describe the photochemical processes of PDT, we can express the reacted 1O2 concentration, [1O2]rx, in a form related to the time integration of the product of the 1O2 quantum yield and the PDT dose rate. The production of [1O2]rx involves physiological and photophysical parameters which need to be determined explicitly for the photosensitizer of interest. Once these parameters are determined, we expect the computed [1O2]rx to be an explicit dosimetric indicator for clinical PDT. Incorporating the diffusion equation governing light transport in turbid media, the spatially and temporally resolved [1O2]rx described by the macroscopic kinetic equations can be numerically calculated. A sudden drop of the calculated [1O2]rx with distance, following the decrease of the light fluence rate, is observed. This suggests that a correlation between [1O2]rx and the necrosis boundary may occur in tumors subject to PDT irradiation. In this study, we have theoretically examined the sensitivity of the physiological parameter for two clinically relevant conditions: (1) a collimated light source on a semi-infinite turbid medium and (2) a linear light source in a turbid medium. In order to accurately determine the parameter in a clinically relevant environment, the results of the computed [1O2]rx are expected to be used to fit the experimentally measured necrosis data obtained from an in vivo animal model.

  13. Look who's talking! Facial appearance can bias source monitoring.

    PubMed

    Nash, Robert A; Bryer, Olwen M; Schlaghecken, Friederike

    2010-05-01

    When we see a stranger's face we quickly form impressions of his or her personality, and expectations of how the stranger might behave. Might these intuitive character judgements bias source monitoring? Participants read headlines "reported" by a trustworthy- and an untrustworthy-looking reporter. Subsequently, participants recalled which reporter provided each headline. Source memory for likely-sounding headlines was most accurate when a trustworthy-looking reporter had provided the headlines. Conversely, source memory for unlikely-sounding headlines was most accurate when an untrustworthy-looking reporter had provided the headlines. This bias appeared to be driven by the use of decision criteria during retrieval rather than differences in memory encoding. Nevertheless, the bias was apparently unrelated to variations in subjective confidence. These results show for the first time that intuitive, stereotyped judgements of others' appearance can bias memory attributions analogously to the biases that occur when people receive explicit information to distinguish sources. We suggest possible real-life consequences of these stereotype-driven source-monitoring biases.

  14. Development and Validation of a Multidisciplinary Tool for Accurate and Efficient Rotorcraft Noise Prediction (MUTE)

    NASA Technical Reports Server (NTRS)

    Liu, Yi; Anusonti-Inthra, Phuriwat; Diskin, Boris

    2011-01-01

    A physics-based, systematically coupled, multidisciplinary prediction tool (MUTE) for rotorcraft noise was developed and validated with a wide range of flight configurations and conditions. MUTE is an aggregation of multidisciplinary computational tools that accurately and efficiently model the physics of the source of rotorcraft noise, and predict the noise at far-field observer locations. It uses systematic coupling approaches among multiple disciplines including Computational Fluid Dynamics (CFD), Computational Structural Dynamics (CSD), and high fidelity acoustics. Within MUTE, advanced high-order CFD tools are used around the rotor blade to predict the transonic flow (shock wave) effects, which generate the high-speed impulsive noise. Predictions of the blade-vortex interaction noise in low speed flight are also improved by using the Particle Vortex Transport Method (PVTM), which preserves the wake flow details required for blade/wake and fuselage/wake interactions. The accuracy of the source noise prediction is further improved by utilizing a coupling approach between CFD and CSD, so that the effects of key structural dynamics, elastic blade deformations, and trim solutions are correctly represented in the analysis. The blade loading information and/or the flow field parameters around the rotor blade predicted by the CFD/CSD coupling approach are used to predict the acoustic signatures at far-field observer locations with a high-fidelity noise propagation code (WOPWOP3). The predicted results from the MUTE tool for rotor blade aerodynamic loading and far-field acoustic signatures are compared and validated against a variety of experimental data sets, such as UH-60A data, DNW test data, and HART II test data.

  15. Source Parameter Estimation using the Second-order Closure Integrated Puff Model

    DTIC Science & Technology

    The sensor measurements are categorized as triggered and non-triggered based on the recorded concentration measurements and a threshold concentration value. Using each measured value, sources of adjoint material are created from the triggered and non-triggered sensors, and the adjoint transport equations are solved to predict the adjoint concentration fields. The adjoint source strength is inversely proportional to the concentration measurement.

  16. Determining the parameters of Weibull function to estimate the wind power potential in conditions of limited source meteorological data

    NASA Astrophysics Data System (ADS)

    Fetisova, Yu. A.; Ermolenko, B. V.; Ermolenko, G. V.; Kiseleva, S. V.

    2017-04-01

    We studied the information basis for the assessment of wind power potential on the territory of Russia. We described the methodology to determine the parameters of the Weibull function, which reflects the density of distribution of probabilities of wind flow speeds at a defined basic height above the surface of the earth using the available data on the average speed at this height and its repetition by gradations. The application of the least square method for determining these parameters, unlike the use of graphical methods, allows performing a statistical assessment of the results of approximation of empirical histograms by the Weibull formula. On the basis of the computer-aided analysis of the statistical data, it was shown that, at a fixed point where the wind speed changes at different heights, the range of parameter variation of the Weibull distribution curve is relatively small, the sensitivity of the function to parameter changes is quite low, and the influence of changes on the shape of speed distribution curves is negligible. Taking this into consideration, we proposed and mathematically verified the methodology of determining the speed parameters of the Weibull function at other heights using the parameter computations for this function at a basic height, which is known or defined by the average speed of wind flow, or the roughness coefficient of the geological substrate. We gave examples of practical application of the suggested methodology in the development of the Atlas of Renewable Energy Resources in Russia in conditions of deficiency of source meteorological data. 
The proposed methodology, to some extent, may solve the problem related to the lack of information on the vertical profile of repeatability of the wind flow speeds in the presence of a wide assortment of wind turbines with different ranges of wind-wheel axis heights and various performance characteristics in the global market; as a result, this methodology can become a powerful tool for
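    The least-squares determination of the Weibull parameters can be sketched by linearizing the cumulative distribution F(v) = 1 − exp(−(v/c)^k) to ln(−ln(1 − F)) = k·ln(v) − k·ln(c), then fitting a line. The wind-speed histogram below is illustrative, not data from the paper:

```python
import numpy as np

# Hypothetical wind-speed histogram: bin upper edges v (m/s) and
# cumulative frequencies F(v) = P(speed <= v); illustrative numbers only.
v = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0])
F = np.array([0.18, 0.45, 0.70, 0.86, 0.95, 0.985])

# Linearize the Weibull CDF: ln(-ln(1 - F)) = k*ln(v) - k*ln(c)
y = np.log(-np.log(1.0 - F))
x = np.log(v)

# Least-squares line fit: slope = shape k, intercept = -k*ln(c)
k, intercept = np.polyfit(x, y, 1)
c = np.exp(-intercept / k)
print(f"shape k = {k:.2f}, scale c = {c:.2f} m/s")
```

    Unlike graphical fitting, the residuals of this regression give a direct statistical check on how well the Weibull formula approximates the empirical histogram, which is the point made in the abstract.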

  17. Source Parameter Inversion for Recent Great Earthquakes from a Decade-long Observation of Global Gravity Fields

    NASA Technical Reports Server (NTRS)

    Han, Shin-Chan; Riva, Riccardo; Sauber, Jeanne; Okal, Emile

    2013-01-01

    We quantify gravity changes after great earthquakes present within the 10 year long time series of monthly Gravity Recovery and Climate Experiment (GRACE) gravity fields. Using spherical harmonic normal-mode formulation, the respective source parameters of moment tensor and double-couple were estimated. For the 2004 Sumatra-Andaman earthquake, the gravity data indicate a composite moment of 1.2x10(exp 23)Nm with a dip of 10deg, in agreement with the estimate obtained at ultralong seismic periods. For the 2010 Maule earthquake, the GRACE solutions range from 2.0 to 2.7x10(exp 22)Nm for dips of 12deg-24deg and centroid depths within the lower crust. For the 2011 Tohoku-Oki earthquake, the estimated scalar moments range from 4.1 to 6.1x10(exp 22)Nm, with dips of 9deg-19deg and centroid depths within the lower crust. For the 2012 Indian Ocean strike-slip earthquakes, the gravity data delineate a composite moment of 1.9x10(exp 22)Nm regardless of the centroid depth, comparing favorably with the total moment of the main ruptures and aftershocks. The smallest event we successfully analyzed with GRACE was the 2007 Bengkulu earthquake with M(sub 0) approx. 5.0x10(exp 21)Nm. We found that the gravity data constrain the focal mechanism with the centroid only within the upper and lower crustal layers for thrust events. Deeper sources (i.e., in the upper mantle) could not reproduce the gravity observation as the larger rigidity and bulk modulus at mantle depths inhibit the interior from changing its volume, thus reducing the negative gravity component. Focal mechanisms and seismic moments obtained in this study represent the behavior of the sources on temporal and spatial scales exceeding the seismic and geodetic spectrum.

  18. Real-Time Localization of Moving Dipole Sources for Tracking Multiple Free-Swimming Weakly Electric Fish

    PubMed Central

    Jun, James Jaeyoon; Longtin, André; Maler, Leonard

    2013-01-01

    In order to survive, animals must quickly and accurately locate prey, predators, and conspecifics using the signals they generate. The signal source location can be estimated using multiple detectors and the inverse relationship between the received signal intensity (RSI) and the distance, but difficulty of the source localization increases if there is an additional dependence on the orientation of a signal source. In such cases, the signal source could be approximated as an ideal dipole for simplification. Based on a theoretical model, the RSI can be directly predicted from a known dipole location; but estimating a dipole location from RSIs has no direct analytical solution. Here, we propose an efficient solution to the dipole localization problem by using a lookup table (LUT) to store RSIs predicted by our theoretically derived dipole model at many possible dipole positions and orientations. For a given set of RSIs measured at multiple detectors, our algorithm found a dipole location having the closest matching normalized RSIs from the LUT, and further refined the location at higher resolution. Studying the natural behavior of weakly electric fish (WEF) requires efficiently computing their location and the temporal pattern of their electric signals over extended periods. Our dipole localization method was successfully applied to track single or multiple freely swimming WEF in shallow water in real-time, as each fish could be closely approximated by an ideal current dipole in two dimensions. Our optimized search algorithm found the animal’s positions, orientations, and tail-bending angles quickly and accurately under various conditions, without the need for calibrating individual-specific parameters. Our dipole localization method is directly applicable to studying the role of active sensing during spatial navigation, or social interactions between multiple WEF. Furthermore, our method could be extended to other application areas involving dipole source
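    The LUT approach can be sketched for a two-dimensional ideal dipole: precompute normalized RSIs on a grid of candidate positions and orientations, then pick the best-matching entry. The refinement stage is omitted, and the detector geometry and grid are illustrative assumptions, not the authors' configuration:

```python
import numpy as np

detectors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 1.5]])

def dipole_rsi(pos, theta, detectors):
    """Received signal intensity of an ideal 2-D dipole at each detector."""
    p = np.array([np.cos(theta), np.sin(theta)])   # unit dipole moment
    d = detectors - pos                            # offsets to detectors
    r = np.linalg.norm(d, axis=1)
    return np.abs(d @ p) / r**3                    # |dipole potential|

# Build the lookup table over candidate positions and orientations
xs = np.linspace(0.1, 0.9, 17)
ys = np.linspace(0.1, 0.9, 17)
thetas = np.linspace(0.0, np.pi, 18, endpoint=False)
entries, keys = [], []
for x in xs:
    for ycand in ys:
        for th in thetas:
            rsi = dipole_rsi(np.array([x, ycand]), th, detectors)
            entries.append(rsi / np.linalg.norm(rsi))   # store normalized RSIs
            keys.append((x, ycand, th))
lut = np.array(entries)

# Localize: find the entry whose normalized RSIs best match a "measurement"
true_pos, true_th = np.array([0.4, 0.65]), 1.1
meas = dipole_rsi(true_pos, true_th, detectors)
meas /= np.linalg.norm(meas)
best = int(np.argmin(np.linalg.norm(lut - meas, axis=1)))
x_hat, y_hat, th_hat = keys[best]
print(f"estimate: x={x_hat:.2f}, y={y_hat:.2f}, theta={th_hat:.2f}")
```

    Normalizing both the table entries and the measurement removes the unknown source strength, which is why the method needs no individual-specific calibration.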

  19. A Robust Sound Source Localization Approach for Microphone Array with Model Errors

    NASA Astrophysics Data System (ADS)

    Xiao, Hua; Shao, Huai-Zong; Peng, Qi-Cong

    In this paper, a robust sound source localization approach is proposed. The approach retains good performance even when model errors exist. Compared with previous work in this field, the contributions of this paper are as follows. First, an improved broad-band and near-field array model is proposed. It takes array gain and phase perturbations into account and is based on the actual positions of the elements. It can be used in arbitrary planar geometry arrays. Second, a subspace model-errors estimation algorithm and a Weighted 2-Dimension Multiple Signal Classification (W2D-MUSIC) algorithm are proposed. The subspace model-errors estimation algorithm estimates the unknown parameters of the array model, i.e., gain, phase perturbations, and positions of the elements, with high accuracy. The performance of this algorithm improves as the SNR or the number of snapshots increases. The W2D-MUSIC algorithm, based on the improved array model, is implemented to locate sound sources. These two algorithms compose the robust sound source localization approach. More accurate steering vectors can then be provided for further processing such as adaptive beamforming. Numerical examples confirm the effectiveness of the proposed approach.
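    The subspace step that W2D-MUSIC builds on can be illustrated with a far simpler setup than the paper's broadband near-field model: standard narrowband MUSIC on a uniform linear array with one source. Array size, spacing, and SNR below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
M, d = 8, 0.5                       # sensors, spacing in wavelengths
true_doa = np.deg2rad(20.0)

def steering(theta):
    """Narrowband ULA steering vector for direction theta."""
    return np.exp(-2j * np.pi * d * np.arange(M) * np.sin(theta))

# Simulate N snapshots of one source plus sensor noise
N = 400
s = rng.standard_normal(N) + 1j * rng.standard_normal(N)
X = np.outer(steering(true_doa), s)
X += 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

R = X @ X.conj().T / N              # sample covariance matrix
w, V = np.linalg.eigh(R)            # eigenvalues ascending
En = V[:, :-1]                      # noise subspace (one source assumed)

# MUSIC pseudo-spectrum: peaks where steering vector is orthogonal to En
grid = np.deg2rad(np.linspace(-90.0, 90.0, 1801))
spectrum = [1.0 / np.linalg.norm(En.conj().T @ steering(th))**2 for th in grid]
doa_hat = np.rad2deg(grid[int(np.argmax(spectrum))])
print(f"estimated DOA = {doa_hat:.1f} deg")
```

    The paper's contribution is to keep this orthogonality test accurate when the gains, phases, and element positions entering `steering` are themselves uncertain and must be estimated first.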

  20. Exploratory Movement Generates Higher-Order Information That Is Sufficient for Accurate Perception of Scaled Egocentric Distance

    PubMed Central

    Mantel, Bruno; Stoffregen, Thomas A.; Campbell, Alain; Bardy, Benoît G.

    2015-01-01

    Body movement influences the structure of multiple forms of ambient energy, including optics and gravito-inertial force. Some researchers have argued that egocentric distance is derived from inferential integration of visual and non-visual stimulation. We suggest that accurate information about egocentric distance exists in perceptual stimulation as higher-order patterns that extend across optics and inertia. We formalize a pattern that specifies the egocentric distance of a stationary object across higher-order relations between optics and inertia. This higher-order parameter is created by self-generated movement of the perceiver in inertial space relative to the illuminated environment. For this reason, we placed minimal restrictions on the exploratory movements of our participants. We asked whether humans can detect and use the information available in this higher-order pattern. Participants judged whether a virtual object was within reach. We manipulated relations between body movement and the ambient structure of optics and inertia. Judgments were precise and accurate when the higher-order optical-inertial parameter was available. When only optic flow was available, judgments were poor. Our results reveal that participants perceived egocentric distance from the higher-order, optical-inertial consequences of their own exploratory activity. Analysis of participants’ movement trajectories revealed that self-selected movements were complex, and tended to optimize availability of the optical-inertial pattern that specifies egocentric distance. We argue that accurate information about egocentric distance exists in higher-order patterns of ambient energy, that self-generated movement can generate these higher-order patterns, and that these patterns can be detected and used to support perception of egocentric distance that is precise and accurate. PMID:25856410

  1. An accurate automated technique for quasi-optics measurement of the microwave diagnostics for fusion plasma

    NASA Astrophysics Data System (ADS)

    Hu, Jianqiang; Liu, Ahdi; Zhou, Chu; Zhang, Xiaohui; Wang, Mingyuan; Zhang, Jin; Feng, Xi; Li, Hong; Xie, Jinlin; Liu, Wandong; Yu, Changxuan

    2017-08-01

    A new integrated technique for fast and accurate measurement of the quasi-optics, especially for the microwave/millimeter wave diagnostic systems of fusion plasma, has been developed. Using the LabVIEW-based comprehensive scanning system, we can realize not only automatic but also fast and accurate measurement, which will help to eliminate the effects of temperature drift and standing wave/multi-reflection. With the Matlab-based asymmetric two-dimensional Gaussian fitting method, all the desired parameters of the microwave beam can be obtained. This technique can be used in the design and testing of microwave diagnostic systems such as reflectometers and the electron cyclotron emission imaging diagnostic systems of the Experimental Advanced Superconducting Tokamak.
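    The asymmetric two-dimensional Gaussian fit used to extract the beam parameters can be sketched with a standard least-squares fitter. The sketch below fits an axis-aligned elliptical Gaussian to synthetic beam data (illustrative values; a full implementation would also include rotation and offset terms):

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, amp, x0, y0, sx, sy):
    """Axis-aligned asymmetric 2-D Gaussian (different widths sx, sy)."""
    x, y = coords
    return amp * np.exp(-((x - x0)**2 / (2 * sx**2) + (y - y0)**2 / (2 * sy**2)))

# Synthetic beam intensity map on a 41 x 41 grid with measurement noise
x, y = np.meshgrid(np.linspace(-20, 20, 41), np.linspace(-20, 20, 41))
true = (1.0, 2.0, -1.5, 6.0, 3.5)           # amp, centre, asymmetric widths
z = gauss2d((x.ravel(), y.ravel()), *true)
z += 0.01 * np.random.default_rng(1).standard_normal(z.size)

popt, _ = curve_fit(gauss2d, (x.ravel(), y.ravel()), z, p0=(0.5, 0, 0, 5.0, 5.0))
amp, x0, y0, sx, sy = popt
print(f"centre = ({x0:.2f}, {y0:.2f}), widths = ({sx:.2f}, {sy:.2f})")
```

    The recovered centre and the two unequal widths correspond to the beam position and the asymmetric beam waists that characterize the quasi-optical system.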

  2. Identification of vehicle suspension parameters by design optimization

    NASA Astrophysics Data System (ADS)

    Tey, J. Y.; Ramli, R.; Kheng, C. W.; Chong, S. Y.; Abidin, M. A. Z.

    2014-05-01

    The design of a vehicle suspension system through simulation requires accurate representation of the design parameters. These parameters are usually difficult to measure or sometimes unavailable. This article proposes an efficient approach to identify the unknown parameters through optimization based on experimental results, where the covariance matrix adaptation evolution strategy (CMA-ES) is utilized to improve the agreement between simulation and experimental results against the kinematic and compliance tests. This speeds up the design and development cycle by recovering all the unknown data with respect to a set of kinematic measurements through a single optimization process. A case study employing a McPherson strut suspension system is modelled in a multi-body dynamic system. Three kinematic and compliance tests are examined, namely, vertical parallel wheel travel, opposite wheel travel and single wheel travel. The problem is formulated as a multi-objective optimization problem with 40 objectives and 49 design parameters. A hierarchical clustering method based on global sensitivity analysis is used to reduce the number of objectives to 30 by grouping correlated objectives together. Then, a dynamic summation of rank value is used as a pseudo-objective function to reformulate the multi-objective optimization as a single-objective optimization problem. The optimized results show a significant improvement in the correlation between the simulated model and the experimental model. Once an accurate representation of the vehicle suspension model is achieved, further analysis, such as ride and handling performance, can be implemented for further optimization.
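    The identification loop can be illustrated on a toy problem: recover unknown stiffness and damping by minimizing the mismatch between a simulated and a "measured" response. The quarter-car free-decay model and all numbers are illustrative, and scipy's differential evolution stands in here for CMA-ES:

```python
import numpy as np
from scipy.optimize import differential_evolution

m = 250.0                                   # sprung mass [kg], assumed known
t = np.linspace(0.0, 2.0, 200)

def response(k, c):
    """Free-decay displacement of an underdamped single-DOF suspension."""
    wn = np.sqrt(k / m)                     # natural frequency
    zeta = c / (2.0 * np.sqrt(k * m))       # damping ratio (< 1 within bounds)
    wd = wn * np.sqrt(1.0 - zeta**2)        # damped frequency
    return np.exp(-zeta * wn * t) * np.cos(wd * t)

k_true, c_true = 18000.0, 1200.0            # parameters to recover
measured = response(k_true, c_true)         # stands in for test-rig data

def misfit(params):
    k, c = params
    return np.sum((response(k, c) - measured)**2)

result = differential_evolution(misfit, bounds=[(5e3, 5e4), (100.0, 2000.0)], seed=0)
k_hat, c_hat = result.x
print(f"k = {k_hat:.0f} N/m, c = {c_hat:.0f} N s/m")
```

    In the paper the "measured" signal is replaced by kinematic and compliance test data and the scalar misfit by the clustered multi-objective formulation, but the structure of the loop is the same.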

  3. Parameter uncertainty in simulations of extreme precipitation and attribution studies.

    NASA Astrophysics Data System (ADS)

    Timmermans, B.; Collins, W. D.; O'Brien, T. A.; Risser, M. D.

    2017-12-01

    The attribution of extreme weather events, such as heavy rainfall, to anthropogenic influence involves the analysis of their probability in simulations of climate. The climate models used, however, such as the Community Atmosphere Model (CAM), employ approximate physics that gives rise to "parameter uncertainty"—uncertainty about the most accurate or optimal values of numerical parameters within the model. In particular, approximate parameterisations for convective processes are well known to be influential in the simulation of precipitation extremes. Towards examining the impact of this source of uncertainty on attribution studies, we investigate the importance of components—through their associated tuning parameters—of parameterisations relating to deep and shallow convection, and cloud and aerosol microphysics in CAM. We hypothesise that as numerical resolution is increased, the change in the proportion of variance induced by perturbed parameters associated with the respective components is consistent with the decreasing applicability of the underlying hydrostatic assumptions. For example, the relative influence of deep convection should diminish as resolution approaches that where convection can be resolved numerically (~10 km). We quantify the relationship between the relative proportion of variance induced and numerical resolution by conducting computer experiments that examine precipitation extremes over the contiguous U.S. In order to mitigate the enormous computational burden of running ensembles of long climate simulations, we use variable-resolution CAM and employ both extreme value theory and surrogate modelling techniques ("emulators"). We discuss the implications of the relationship between parameterised convective processes and resolution both in the context of attribution studies and progression towards models that fully resolve convection.

  4. Preliminary investigation of the effects of eruption source parameters on volcanic ash transport and dispersion modeling using HYSPLIT

    NASA Astrophysics Data System (ADS)

    Stunder, B.

    2009-12-01

Because volcanic ash is hazardous, atmospheric transport and dispersion (ATD) models are used in real time at Volcanic Ash Advisory Centers to predict the location of airborne volcanic ash at a future time. Transport and dispersion models usually do not include eruption column physics, but start with an idealized eruption column. Eruption source parameters (ESP) input to the models typically include column top, eruption start time and duration, volcano latitude and longitude, ash particle size distribution, and total mass emission. An example based on the Okmok, Alaska, eruption of July 12-14, 2008, was used to qualitatively estimate the effect of various model inputs on transport and dispersion simulations using the NOAA HYSPLIT model. Variations included changing the ash column top and bottom, eruption start time and duration, particle size specifications, simulations with and without gravitational settling, and the effect of different meteorological model data. Graphical ATD model output of ash concentration from the various runs was qualitatively compared. Some parameters, such as eruption duration and ash column depth, had a large effect, while simulations using only small particles or changing the particle shape factor had much less of an effect. Other variations, such as using only large particles, had a small effect for the first day or so after the eruption, then a larger effect on subsequent days. Example probabilistic output will be shown for an ensemble of dispersion model runs with various model inputs. Model output such as this may be useful as a means to account for some of the uncertainties in the model input. To improve volcanic ash ATD models, a reference database for volcanic eruptions is needed, covering many volcanoes. The database should include three major components: (1) eruption source, (2) ash observations, and (3) meteorological analyses. In addition, information on aggregation or other ash particle transformation processes

  5. Bayesian multiple-source localization in an uncertain ocean environment.

    PubMed

    Dosso, Stan E; Wilmut, Michael J

    2011-06-01

    This paper considers simultaneous localization of multiple acoustic sources when properties of the ocean environment (water column and seabed) are poorly known. A Bayesian formulation is developed in which the environmental parameters, noise statistics, and locations and complex strengths (amplitudes and phases) of multiple sources are considered to be unknown random variables constrained by acoustic data and prior information. Two approaches are considered for estimating source parameters. Focalization maximizes the posterior probability density (PPD) over all parameters using adaptive hybrid optimization. Marginalization integrates the PPD using efficient Markov-chain Monte Carlo methods to produce joint marginal probability distributions for source ranges and depths, from which source locations are obtained. This approach also provides quantitative uncertainty analysis for all parameters, which can aid in understanding of the inverse problem and may be of practical interest (e.g., source-strength probability distributions). In both approaches, closed-form maximum-likelihood expressions for source strengths and noise variance at each frequency allow these parameters to be sampled implicitly, substantially reducing the dimensionality and difficulty of the inversion. Examples are presented of both approaches applied to single- and multi-frequency localization of multiple sources in an uncertain shallow-water environment, and a Monte Carlo performance evaluation study is carried out. © 2011 Acoustical Society of America
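The focalization/marginalization contrast above rests on sampling a posterior over source coordinates. Below is a minimal sketch of the marginalization idea, using a toy distance-based forward model and a plain Metropolis sampler in place of the paper's acoustic model and adaptive hybrid methods; all geometry, noise, and prior numbers are hypothetical.

```python
import math, random

random.seed(0)

# Hypothetical geometry: a vertical receiver array; the "forward model" maps
# a candidate (range, depth) to source-receiver distances.
RECEIVERS = [(0.0, 10.0), (0.0, 40.0), (0.0, 70.0)]
SIGMA = 2.0  # assumed data noise standard deviation

def predict(r, z):
    return [math.hypot(r - xr, z - zr) for xr, zr in RECEIVERS]

data = [d + 0.5 for d in predict(500.0, 30.0)]  # perturbed synthetic data

def log_posterior(r, z):
    if not (0.0 < r < 1000.0 and 0.0 < z < 100.0):  # uniform prior bounds
        return -math.inf
    misfit = sum((d - p) ** 2 for d, p in zip(data, predict(r, z)))
    return -misfit / (2.0 * SIGMA ** 2)

# Metropolis sampling of the joint (range, depth) posterior
r, z = 400.0, 50.0
lp = log_posterior(r, z)
samples = []
for _ in range(20000):
    rn, zn = r + random.gauss(0.0, 10.0), z + random.gauss(0.0, 3.0)
    lpn = log_posterior(rn, zn)
    if random.random() < math.exp(min(0.0, lpn - lp)):
        r, z, lp = rn, zn, lpn
    samples.append((r, z))

post = samples[5000:]                       # discard burn-in
r_mean = sum(s[0] for s in post) / len(post)
```

Histograms of the retained samples play the role of the joint marginal distributions from which the paper extracts source locations and uncertainties.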

  6. Accurate Arabic Script Language/Dialect Classification

    DTIC Science & Technology

    2014-01-01

Army Research Laboratory technical report ARL-TR-6761: Accurate Arabic Script Language/Dialect Classification, by Stephen C. Tratz, Computational and Information Sciences, January 2014. Approved for public release.

  7. Parameter sensitivity analysis of a 1-D cold region lake model for land-surface schemes

    NASA Astrophysics Data System (ADS)

    Guerrero, José-Luis; Pernica, Patricia; Wheater, Howard; Mackay, Murray; Spence, Chris

    2017-12-01

    Lakes might be sentinels of climate change, but the uncertainty in their main feedback to the atmosphere - heat-exchange fluxes - is often not considered within climate models. Additionally, these fluxes are seldom measured, hindering critical evaluation of model output. Analysis of the Canadian Small Lake Model (CSLM), a one-dimensional integral lake model, was performed to assess its ability to reproduce diurnal and seasonal variations in heat fluxes and the sensitivity of simulated fluxes to changes in model parameters, i.e., turbulent transport parameters and the light extinction coefficient (Kd). A C++ open-source software package, Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), was used to perform sensitivity analysis (SA) and identify the parameters that dominate model behavior. The generalized likelihood uncertainty estimation (GLUE) was applied to quantify the fluxes' uncertainty, comparing daily-averaged eddy-covariance observations to the output of CSLM. Seven qualitative and two quantitative SA methods were tested, and the posterior likelihoods of the modeled parameters, obtained from the GLUE analysis, were used to determine the dominant parameters and the uncertainty in the modeled fluxes. Despite the ubiquity of the equifinality issue - different parameter-value combinations yielding equivalent results - the answer to the question was unequivocal: Kd, a measure of how much light penetrates the lake, dominates sensible and latent heat fluxes, and the uncertainty in their estimates is strongly related to the accuracy with which Kd is determined. This is important since accurate and continuous measurements of Kd could reduce modeling uncertainty.
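The GLUE step described above can be sketched in a few lines: sample parameters from a prior, score each run against observations with a likelihood measure, keep the "behavioral" sets, and form likelihood-weighted estimates. This toy version replaces the CSLM with a linear stand-in model and uses Nash-Sutcliffe efficiency as the likelihood; all values are hypothetical.

```python
import random
random.seed(1)

# Toy stand-in "model": flux scales linearly with the light extinction
# coefficient Kd (hypothetical; the CSLM itself is far richer).
forcing = [1.0, 2.0, 3.0, 4.0]
def model(kd):
    return [kd * f for f in forcing]

obs = model(0.6)  # synthetic observations with "true" Kd = 0.6

def nse(sim, obs):
    """Nash-Sutcliffe efficiency, a common GLUE likelihood measure."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

# Monte Carlo sample the prior; keep "behavioral" parameter sets
behavioral = []
for _ in range(5000):
    kd = random.uniform(0.1, 1.5)
    score = nse(model(kd), obs)
    if score > 0.9:                 # behavioral threshold
        behavioral.append((score, kd))

wsum = sum(w for w, _ in behavioral)
kd_est = sum(w * kd for w, kd in behavioral) / wsum
```

The spread of the behavioral `kd` values is the GLUE-style uncertainty band; here it collapses tightly around the true value because the toy model is noise-free.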

  8. Accurate Cold-Test Model of Helical TWT Slow-Wave Circuits

    NASA Technical Reports Server (NTRS)

    Kory, Carol L.; Dayton, James A., Jr.

    1997-01-01

    Recently, a method has been established to accurately calculate cold-test data for helical slow-wave structures using the three-dimensional electromagnetic computer code, MAFIA. Cold-test parameters have been calculated for several helical traveling-wave tube (TWT) slow-wave circuits possessing various support rod configurations, and results are presented here showing excellent agreement with experiment. The helical models include tape thickness, dielectric support shapes and material properties consistent with the actual circuits. The cold-test data from this helical model can be used as input into large-signal helical TWT interaction codes making it possible, for the first time, to design a complete TWT via computer simulation.

  9. Accurately controlled sequential self-folding structures by polystyrene film

    NASA Astrophysics Data System (ADS)

    Deng, Dongping; Yang, Yang; Chen, Yong; Lan, Xing; Tice, Jesse

    2017-08-01

Four-dimensional (4D) printing overcomes traditional fabrication limitations by designing heterogeneous materials that enable the printed structures to evolve over time (the fourth dimension) under external stimuli. Here, we present a simple 4D printing of self-folding structures that can be sequentially and accurately folded. When heated above their glass transition temperature, pre-strained polystyrene films shrink along the XY plane. In our process, silver ink traces printed on the film provide the heat stimulus by conducting current to trigger the self-folding behavior. The parameters affecting the folding process are studied and discussed. Sequential folding and accurately controlled folding angles are achieved by using printed ink traces and an angle-lock design. Theoretical analyses are performed to guide the design of the folding processes. Programmable structures such as a lock and a three-dimensional antenna are demonstrated to test the feasibility and potential applications of this method. These self-folding structures change their shapes after fabrication under controlled stimuli (electric current) and have potential applications in electronics, consumer devices, and robotics. Our design and fabrication method provides an easy way, using silver ink printed on polystyrene films, to 4D print self-folding structures with electrically induced sequential folding and angular control.

  10. A source-attractor approach to network detection of radiation sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Wu, Qishi; Barry, M. L.; Grieme, M.

Radiation source detection using a network of detectors is an active field of research for homeland security and defense applications. We propose the Source-attractor Radiation Detection (SRD) method to aggregate measurements from a network of detectors for radiation source detection. SRD models a potential radiation source as a magnet-like attractor that pulls in pre-computed virtual points from the detector locations. A detection decision is made if a sufficient level of attraction, quantified by the increase in the clustering of the shifted virtual points, is observed. Compared with traditional methods, SRD has the following advantages: i) it does not require an accurate estimate of the source location from limited and noise-corrupted sensor readings, unlike localization-based methods, and ii) its virtual point shifting and clustering calculation involve simple arithmetic operations based on the number of detectors, avoiding the high computational complexity of grid-based likelihood estimation methods. We evaluate its detection performance using canonical datasets from the Domestic Nuclear Detection Office's (DNDO) Intelligence Radiation Sensors Systems (IRSS) tests. SRD achieves both a lower false alarm rate and a lower false negative rate compared to three existing algorithms for network source detection.
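The underlying idea, counts from many detectors cohering around a candidate source location, can be loosely illustrated as follows. This sketch substitutes an inverse-square strength-consistency measure for the paper's virtual-point shifting geometry, which the abstract does not fully specify; the detector layout and counts are invented.

```python
import math

DETECTORS = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]

def strength_estimates(counts, candidate):
    """Per-detector source-strength estimates, assuming background-subtracted
    counts fall off as the inverse square of distance to the candidate."""
    ests = []
    for (x, y), c in zip(DETECTORS, counts):
        r2 = (x - candidate[0]) ** 2 + (y - candidate[1]) ** 2
        ests.append(c * r2)
    return ests

def coherence(counts, candidate):
    """Large when all detectors imply the same strength (tight clustering),
    analogous to the increased clustering SRD looks for near a true source."""
    ests = strength_estimates(counts, candidate)
    m = sum(ests) / len(ests)
    var = sum((e - m) ** 2 for e in ests) / len(ests)
    return m * m / (var + 1e-12)

# Noise-free counts from a source of strength 100 at (2, 2)
counts = [100.0 / ((x - 2.0) ** 2 + (y - 2.0) ** 2) for x, y in DETECTORS]
```

Evaluating `coherence` at the true location gives a far larger value than at a wrong candidate, which is the kind of contrast a detection threshold would exploit.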

  11. The Source Parameters of Echolocation Clicks from Captive and Free-Ranging Yangtze Finless Porpoises (Neophocaena asiaeorientalis asiaeorientalis).

    PubMed

    Fang, Liang; Wang, Ding; Li, Yongtao; Cheng, Zhaolong; Pine, Matthew K; Wang, Kexiong; Li, Songhai

    2015-01-01

The clicks of Yangtze finless porpoises (Neophocaena asiaeorientalis asiaeorientalis) from 7 individuals in the tank of the Baiji aquarium, 2 individuals in a netted pen at Shishou Tian-e-zhou Reserve and 4 free-ranging individuals at Tianxingzhou were recorded using a broadband digital recording system with four hydrophone elements. The peak-to-peak apparent source level (ASL_pp) of clicks from individuals at the Baiji aquarium was 167 dB re 1 μPa with mean center frequency of 133 kHz, -3 dB bandwidth of 18 kHz and -10 dB duration of 58 μs. The ASL_pp of clicks from individuals at the Shishou Tian-e-zhou Reserve was 180 dB re 1 μPa with mean center frequency of 128 kHz, -3 dB bandwidth of 20 kHz and -10 dB duration of 39 μs. The ASL_pp of clicks from individuals at Tianxingzhou was 176 dB re 1 μPa with mean center frequency of 129 kHz, -3 dB bandwidth of 15 kHz and -10 dB duration of 48 μs. Differences between the source parameters of clicks among the three groups of finless porpoises suggest that these animals adapt their echolocation signals to their surroundings.
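Apparent source levels like the ASL_pp values above are typically back-calculated from received levels with a transmission loss model. Here is a sketch assuming spherical spreading plus linear absorption; the received level, range, and absorption coefficient (a rough order of magnitude for ~130 kHz clicks, and strongly site dependent) are illustrative assumptions, not values from the study.

```python
import math

def apparent_source_level(rl_db, range_m, alpha_db_per_m=0.04):
    """Back-calculate an apparent source level (dB re 1 uPa at 1 m) from a
    received level, assuming spherical spreading (20 log10 r) plus linear
    absorption (alpha, dB/m); alpha here is an illustrative assumption."""
    return rl_db + 20.0 * math.log10(range_m) + alpha_db_per_m * range_m

# hypothetical click received at 140 dB re 1 uPa from 30 m
asl = apparent_source_level(140.0, 30.0)
```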

  12. Using a pseudo-dynamic source inversion approach to improve earthquake source imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Song, S. G.; Dalguer, L. A.; Clinton, J. F.

    2014-12-01

Imaging a high-resolution spatio-temporal slip distribution of an earthquake rupture is a core research goal in seismology. In general we expect to obtain a higher-quality source image by improving the observational input data (e.g., using more, higher-quality near-source stations). However, recent studies show that increasing the surface station density alone does not significantly improve source inversion results (Custodio et al. 2005; Zhang et al. 2014). We introduce correlation structures between the kinematic source parameters—slip, rupture velocity, and peak slip velocity (Song et al. 2009; Song and Dalguer 2013)—in the non-linear source inversion. The correlation structures are physical constraints derived from rupture dynamics that effectively regularize the model space and may improve source imaging. We name this approach pseudo-dynamic source inversion. We investigate the effectiveness of this pseudo-dynamic source inversion method by inverting low-frequency velocity waveforms from a synthetic dynamic rupture model of a buried vertical strike-slip event (Mw 6.5) in a homogeneous half space. In the inversion, we use a genetic algorithm in a Bayesian framework (Monelli et al. 2008), and a dynamically consistent regularized Yoffe function (Tinti et al. 2005) as the single-window slip velocity function. We search for local rupture velocity directly in the inversion, and calculate the rupture time using a ray-tracing technique. We implement both auto- and cross-correlation of slip, rupture velocity, and peak slip velocity in the prior distribution. Our results suggest that the kinematic source model estimates capture the major features of the target dynamic model. The estimated rupture velocity closely matches the target distribution from the dynamic rupture model, and the derived rupture time is smoother than the one we searched for directly.
By implementing both auto- and cross-correlation of kinematic source parameters, in comparison to traditional smoothing
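A cross-correlated prior over slip, rupture velocity, and peak slip velocity can be sampled with a standard Cholesky construction. The correlation and scale values below are placeholders for illustration, not the published correlation model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative correlation structure between three kinematic parameters at
# one subfault (slip, rupture velocity, peak slip velocity); the numbers
# are placeholders, not the authors' calibrated model.
corr = np.array([[1.0, 0.5, 0.7],
                 [0.5, 1.0, 0.4],
                 [0.7, 0.4, 1.0]])
stds = np.array([0.5, 0.3, 0.8])   # assumed standard deviations
means = np.array([1.0, 2.5, 1.5])  # assumed means

cov = np.outer(stds, stds) * corr
L = np.linalg.cholesky(cov)

# Prior samples whose components honour the target cross-correlations
samples = means + rng.standard_normal((100000, 3)) @ L.T
emp_corr = np.corrcoef(samples.T)
```

The empirical correlation matrix of the draws reproduces the prescribed structure, which is exactly the property such a prior relies on to regularize the model space.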

  13. Truncated RAP-MUSIC (TRAP-MUSIC) for MEG and EEG source localization.

    PubMed

    Mäkelä, Niko; Stenroos, Matti; Sarvas, Jukka; Ilmoniemi, Risto J

    2018-02-15

Electrically active brain regions can be located by applying MUltiple SIgnal Classification (MUSIC) to magneto- or electroencephalographic (MEG; EEG) data. We introduce a new MUSIC method, called truncated recursively-applied-and-projected MUSIC (TRAP-MUSIC). It corrects a hidden deficiency of the conventional RAP-MUSIC algorithm, which prevents accurate estimation of the true number of brain-signal sources. The correction is done by applying a sequential dimension reduction to the signal-subspace projection. We show that TRAP-MUSIC significantly improves the performance of MUSIC-type localization; in particular, it successfully and robustly locates active brain regions and estimates their number. We compare TRAP-MUSIC and RAP-MUSIC in simulations with varying key parameters, e.g., signal-to-noise ratio, correlation between source time-courses, and the initial estimate for the dimension of the signal space. In addition, we validate TRAP-MUSIC with measured MEG data. We suggest that with the proposed TRAP-MUSIC method, MUSIC-type localization could become more reliable and suitable for various online and offline MEG and EEG applications. Copyright © 2017 Elsevier Inc. All rights reserved.
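The core MUSIC machinery that TRAP-MUSIC builds on, signal-subspace estimation followed by a subspace-correlation scan over candidate source topographies, can be sketched for a toy sensor array. The Gaussian "lead field", source positions, and noise level are invented, and the RAP/TRAP recursion itself is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

n_sensors = 8
grid = np.linspace(0.0, 1.0, 50)

def topography(p):
    """Unit-norm sensor pattern of a source at position p (toy Gaussian
    lead field standing in for a real MEG/EEG forward model)."""
    g = np.exp(-((np.arange(n_sensors) / (n_sensors - 1) - p) ** 2) / 0.02)
    return g / np.linalg.norm(g)

# Two uncorrelated sources at hypothetical positions 0.2 and 0.7
A = np.column_stack([topography(p) for p in (0.2, 0.7)])
X = A @ rng.standard_normal((2, 500)) + 0.05 * rng.standard_normal((n_sensors, 500))

# Signal subspace from the sample covariance (rank = assumed source count)
U, s, _ = np.linalg.svd(X @ X.T / 500.0)
Us = U[:, :2]

# MUSIC scan: subspace correlation of every candidate topography
music = np.array([np.linalg.norm(Us.T @ topography(p)) for p in grid])
```

The scan peaks near the true positions and dips between them; RAP/TRAP variants then iterate this scan while projecting out (and, in TRAP-MUSIC, truncating) already-found sources.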

  14. A six-parameter Iwan model and its application

    NASA Astrophysics Data System (ADS)

    Li, Yikun; Hao, Zhiming

    2016-02-01

The Iwan model is a practical tool to describe the constitutive behavior of joints. In this paper, a six-parameter Iwan model based on a truncated power-law distribution with two Dirac delta functions is proposed, which gives a more comprehensive description of joints than previous Iwan models. Its analytical expressions, including the backbone curve, unloading curves and energy dissipation, are deduced. Parameter identification procedures and the discretization method are also provided. A model application based on Segalman et al.'s experimental work with bolted joints is carried out. The effects of different numbers of Jenkins elements on the simulation are discussed. The results indicate that the six-parameter Iwan model can accurately reproduce the experimental phenomena of joints.
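The discretization mentioned above represents an Iwan model as a finite set of parallel Jenkins (spring-slider) elements. Below is a minimal sketch in which a uniform spread of yield forces stands in for the paper's truncated power-law distribution; all parameter values are illustrative.

```python
def jenkins_force(x_history, k, fy):
    """Force history of one elastic-perfectly-plastic Jenkins element
    (spring of stiffness k in series with a slider yielding at +/- fy)."""
    f, out = 0.0, []
    for x_prev, x in zip([0.0] + x_history[:-1], x_history):
        f = max(-fy, min(fy, f + k * (x - x_prev)))   # slider clips the force
        out.append(f)
    return out

def iwan_force(x_history, n=50, k=1.0, fy_max=1.0):
    """Discretized Iwan model: n parallel Jenkins elements whose yield
    forces are spread uniformly (a stand-in for the truncated power-law
    distribution of the six-parameter model)."""
    total = [0.0] * len(x_history)
    for i in range(n):
        fy = (i + 0.5) * fy_max / n
        for j, f in enumerate(jenkins_force(x_history, k / n, fy)):
            total[j] += f
    return total

# load to x = 2, then unload back to 0
forces = iwan_force([0.5, 1.0, 1.5, 2.0, 1.5, 1.0, 0.5, 0.0])
```

The assembly shows the two signature joint behaviors: the backbone softens below the purely elastic response, and a residual (negative) force remains after unloading, i.e., hysteresis.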

  15. An evolutionary firefly algorithm for the estimation of nonlinear biological model parameters.

    PubMed

    Abdullah, Afnizanfaizal; Deris, Safaai; Anwar, Sohail; Arjunan, Satya N V

    2013-01-01

    The development of accurate computational models of biological processes is fundamental to computational systems biology. These models are usually represented by mathematical expressions that rely heavily on the system parameters. The measurement of these parameters is often difficult. Therefore, they are commonly estimated by fitting the predicted model to the experimental data using optimization methods. The complexity and nonlinearity of the biological processes pose a significant challenge, however, to the development of accurate and fast optimization methods. We introduce a new hybrid optimization method incorporating the Firefly Algorithm and the evolutionary operation of the Differential Evolution method. The proposed method improves solutions by neighbourhood search using evolutionary procedures. Testing our method on models for the arginine catabolism and the negative feedback loop of the p53 signalling pathway, we found that it estimated the parameters with high accuracy and within a reasonable computation time compared to well-known approaches, including Particle Swarm Optimization, Nelder-Mead, and Firefly Algorithm. We have also verified the reliability of the parameters estimated by the method using an a posteriori practical identifiability test.
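A hybrid of the kind described can be sketched by inserting a DE/rand/1 mutation step into a basic firefly loop. This is an illustrative hybrid run on a standard test function, not the authors' exact operators or parameter settings.

```python
import math, random

random.seed(42)

def sphere(x):
    """Simple convex test objective (global minimum 0 at the origin)."""
    return sum(v * v for v in x)

def hybrid_firefly_de(f, dim=4, n=20, iters=200, lo=-5.0, hi=5.0,
                      beta0=1.0, gamma=1.0, alpha=0.05, F=0.5):
    """Minimal firefly algorithm with a DE/rand/1 refinement step mixed in."""
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    cost = [f(p) for p in pop]
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if cost[j] < cost[i]:   # move firefly i toward brighter j
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    pop[i] = [a + beta * (b - a) + alpha * random.uniform(-1, 1)
                              for a, b in zip(pop[i], pop[j])]
            # DE-style mutation as the evolutionary neighbourhood search
            a, b, c = random.sample(range(n), 3)
            trial = [pop[a][d] + F * (pop[b][d] - pop[c][d]) for d in range(dim)]
            if f(trial) < f(pop[i]):    # greedy acceptance, as in DE
                pop[i] = trial
            cost[i] = f(pop[i])
    return min(cost)

best = hybrid_firefly_de(sphere)
```

For a real biological-model fitting problem, `sphere` would be replaced by the (expensive) sum-of-squares misfit between simulated and experimental trajectories.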

  16. An Evolutionary Firefly Algorithm for the Estimation of Nonlinear Biological Model Parameters

    PubMed Central

    Abdullah, Afnizanfaizal; Deris, Safaai; Anwar, Sohail; Arjunan, Satya N. V.

    2013-01-01

    The development of accurate computational models of biological processes is fundamental to computational systems biology. These models are usually represented by mathematical expressions that rely heavily on the system parameters. The measurement of these parameters is often difficult. Therefore, they are commonly estimated by fitting the predicted model to the experimental data using optimization methods. The complexity and nonlinearity of the biological processes pose a significant challenge, however, to the development of accurate and fast optimization methods. We introduce a new hybrid optimization method incorporating the Firefly Algorithm and the evolutionary operation of the Differential Evolution method. The proposed method improves solutions by neighbourhood search using evolutionary procedures. Testing our method on models for the arginine catabolism and the negative feedback loop of the p53 signalling pathway, we found that it estimated the parameters with high accuracy and within a reasonable computation time compared to well-known approaches, including Particle Swarm Optimization, Nelder-Mead, and Firefly Algorithm. We have also verified the reliability of the parameters estimated by the method using an a posteriori practical identifiability test. PMID:23469172

  17. A cochlear implant phantom for evaluating CT acquisition parameters

    NASA Astrophysics Data System (ADS)

    Chakravorti, Srijata; Bussey, Brian J.; Zhao, Yiyuan; Dawant, Benoit M.; Labadie, Robert F.; Noble, Jack H.

    2017-03-01

    Cochlear Implants (CIs) are surgically implantable neural prosthetic devices used to treat profound hearing loss. Recent literature indicates that there is a correlation between the positioning of the electrode array within the cochlea and the ultimate hearing outcome of the patient, indicating that further studies aimed at better understanding the relationship between electrode position and outcomes could have significant implications for future surgical techniques, array design, and processor programming methods. Post-implantation high resolution CT imaging is the best modality for localizing electrodes and provides the resolution necessary to visually identify electrode position, albeit with an unknown degree of accuracy depending on image acquisition parameters, like the HU range of reconstruction, radiation dose, and resolution of the image. In this paper, we report on the development of a phantom that will both permit studying which CT acquisition parameters are best for accurately identifying electrode position and serve as a ground truth for evaluating how different electrode localization methods perform when using different CT scanners and acquisition parameters. We conclude based on our tests that image resolution and HU range of reconstruction strongly affect how accurately the true position of the electrode array can be found by both experts and automatic analysis techniques. The results presented in this paper demonstrate that our phantom is a versatile tool for assessing how CT acquisition parameters affect the localization of CIs.

  18. GPS Water Vapor Tomography Based on Accurate Estimations of the GPS Tropospheric Parameters

    NASA Astrophysics Data System (ADS)

    Champollion, C.; Masson, F.; Bock, O.; Bouin, M.; Walpersdorf, A.; Doerflinger, E.; van Baelen, J.; Brenot, H.

    2003-12-01

The Global Positioning System (GPS) is now a common technique for the retrieval of zenithal integrated water vapor (IWV). Further applications in meteorology also need slant integrated water vapor (SIWV), which allows precise characterization of the high variability of tropospheric water vapor at different temporal and spatial scales. Only precise estimations of IWV and horizontal gradients allow the estimation of accurate SIWV. We present studies developed to improve the estimation of tropospheric water vapor from GPS data. Results are obtained from several field experiments (MAP, ESCOMPTE, OHM-CV, IHOP, ...). First, IWV are estimated using different GPS processing strategies and results are compared to radiosondes. The role of the reference frame and the a priori constraints on the coordinates of the fiducial and local stations is generally underestimated; it seems to be of first order in the estimation of the IWV. Second, we validate the estimated horizontal gradients by comparing zenith delay gradients and single-site gradients. IWV, gradients and post-fit residuals are used to construct slant integrated water delays. Validation of the SIWV is in progress, comparing GPS SIWV, lidar measurements and high-resolution meteorological models (Meso-NH). A careful analysis of the post-fit residuals is needed to separate the tropospheric signal from multipath. The slant tropospheric delays are used to study the 3D heterogeneity of the troposphere. We have developed tomographic software to model the three-dimensional distribution of tropospheric water vapor from GPS data. The software is applied to the ESCOMPTE field experiment, a dense network of 17 dual-frequency GPS receivers operated in southern France. Three inversions have been successfully compared to three successive radiosonde launches. Good resolution is obtained up to heights of 3000 m.
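At its core, the tomographic inversion solves a linear system relating slant delays to voxel (here, simply layer) refractivities. A toy damped least-squares version with an invented ray geometry:

```python
import numpy as np

# Toy layered "tomography": each slant delay is a weighted sum of the wet
# refractivity in 4 layers; the path-length matrix A (km) is invented.
true_n = np.array([60.0, 40.0, 20.0, 5.0])   # hypothetical profile

A = np.array([[1.0, 1.0, 1.0, 1.0],
              [1.4, 1.4, 1.4, 1.4],
              [1.0, 1.2, 1.5, 2.0],
              [2.0, 1.5, 1.2, 1.0],
              [1.0, 0.5, 0.2, 0.1]])
delays = A @ true_n          # noise-free synthetic slant delays

# Damped least-squares inversion, the usual regularized normal equations
lam = 1e-6
n_est = np.linalg.solve(A.T @ A + lam * np.eye(4), A.T @ delays)
```

Real tomography works on a 3-D voxel grid with thousands of rays, noise, and stronger smoothing constraints, but the algebra is the same.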

  19. Procedures for establishing geotechnical design parameters from two data sources.

    DOT National Transportation Integrated Search

    2013-07-01

    The Missouri Department of Transportation (MoDOT) recently adopted new provisions for geotechnical design that require that : the mean value and the coefficient of variation (COV) for the mean value of design parameters be established in order to : d...

  20. CHARMM Force-Fields with Modified Polyphosphate Parameters Allow Stable Simulation of the ATP-Bound Structure of Ca(2+)-ATPase.

    PubMed

    Komuro, Yasuaki; Re, Suyong; Kobayashi, Chigusa; Muneyuki, Eiro; Sugita, Yuji

    2014-09-09

    Adenosine triphosphate (ATP) is an indispensable energy source in cells. In a wide variety of biological phenomena like glycolysis, muscle contraction/relaxation, and active ion transport, chemical energy released from ATP hydrolysis is converted to mechanical forces to bring about large-scale conformational changes in proteins. Investigation of structure-function relationships in these proteins by molecular dynamics (MD) simulations requires modeling of ATP in solution and ATP bound to proteins with accurate force-field parameters. In this study, we derived new force-field parameters for the triphosphate moiety of ATP based on the high-precision quantum calculations of methyl triphosphate. We tested our new parameters on membrane-embedded sarcoplasmic reticulum Ca(2+)-ATPase and four soluble proteins. The ATP-bound structure of Ca(2+)-ATPase remains stable during MD simulations, contrary to the outcome in shorter simulations using original parameters. Similar results were obtained with the four ATP-bound soluble proteins. The new force-field parameters were also tested by investigating the range of conformations sampled during replica-exchange MD simulations of ATP in explicit water. Modified parameters allowed a much wider range of conformational sampling compared with the bias toward extended forms with original parameters. A diverse range of structures agrees with the broad distribution of ATP conformations in proteins deposited in the Protein Data Bank. These simulations suggest that the modified parameters will be useful in studies of ATP in solution and of the many ATP-utilizing proteins.

  1. Accurate visible speech synthesis based on concatenating variable length motion capture data.

    PubMed

    Ma, Jiyong; Cole, Ron; Pellom, Bryan; Ward, Wayne; Wise, Barbara

    2006-01-01

We present a novel approach to synthesizing accurate visible speech based on searching and concatenating optimal variable-length units in a large corpus of motion capture data. Based on a set of visual prototypes selected on a source face and a corresponding set designated for a target face, we propose a machine learning technique to automatically map the facial motions observed on the source face to the target face. In order to model the long-distance coarticulation effects in visible speech, a large-scale corpus that covers the most common syllables in English was collected, annotated and analyzed. For any input text, a search algorithm to locate the optimal sequences of concatenated units for synthesis is described. A new algorithm to adapt lip motions from a generic 3D face model to a specific 3D face model is also proposed. A complete, end-to-end visible speech animation system is implemented based on the approach. This system is currently used in more than 60 kindergarten through third grade classrooms to teach students to read using a lifelike conversational animated agent. To evaluate the quality of the visible speech produced by the animation system, both subjective and objective evaluations are conducted. The evaluation results show that the proposed approach is accurate and powerful for visible speech synthesis.

  2. The effect of wind and eruption source parameter variations on tephra fallout hazard assessment: an example from Vesuvio (Italy)

    NASA Astrophysics Data System (ADS)

    Macedonio, Giovanni; Costa, Antonio; Scollo, Simona; Neri, Augusto

    2015-04-01

Uncertainty in tephra fallout hazard assessment may depend on the different meteorological datasets and eruptive source parameters used in the modelling. We present a statistical study to analyze this uncertainty in the case of a sub-Plinian eruption of Vesuvius of VEI = 4, column height of 18 km and total erupted mass of 5 × 10^11 kg. The hazard assessment for tephra fallout is performed using the advection-diffusion model Hazmap. First, we statistically analyze different meteorological datasets: i) from the daily atmospheric soundings of the stations located in Brindisi (Italy) between 1962 and 1976 and between 1996 and 2012, and in Pratica di Mare (Rome, Italy) between 1996 and 2012; ii) from numerical weather prediction models of the National Oceanic and Atmospheric Administration and of the European Centre for Medium-Range Weather Forecasts. Furthermore, we vary the total mass, the total grain-size distribution, the eruption column height, and the diffusion coefficient. We then quantify the impact that the different datasets and model input parameters have on the probability maps. Results show that, with the total mass held constant, the parameter that most strongly affects the tephra fallout probability maps is the particle terminal settling velocity, which is a function of the total grain-size distribution, particle density and shape. In contrast, the hazard assessment depends only weakly on the choice of meteorological dataset, column height and diffusion coefficient.
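The abstract singles out the particle terminal settling velocity as the dominant parameter. For fine ash in the low-Reynolds (Stokes) regime, that velocity follows directly from particle size and density; a sketch with assumed air properties (real tephra models add shape corrections and regime transitions):

```python
def stokes_settling_velocity(d, rho_p, rho_a=1.2, mu=1.8e-5, g=9.81):
    """Terminal settling velocity (m/s) of a small sphere of diameter d (m)
    and density rho_p (kg/m^3) in air, Stokes (low Reynolds number) regime.
    Air density, viscosity, and g are assumed typical near-surface values."""
    return (rho_p - rho_a) * g * d ** 2 / (18.0 * mu)

# 30- and 60-micron ash particles of density 2500 kg/m^3
v30 = stokes_settling_velocity(30e-6, 2500.0)
v60 = stokes_settling_velocity(60e-6, 2500.0)
```

The quadratic dependence on diameter (doubling d quadruples the velocity) is why the grain-size distribution propagates so strongly into the probability maps.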

  3. Source Parameters of the 8 October, 2005 Mw7.6 Kashmir Earthquake

    NASA Astrophysics Data System (ADS)

    Mandal, Prantik; Chadha, R. K.; Kumar, N.; Raju, I. P.; Satyamurty, C.

    2007-12-01

During the last six years, the National Geophysical Research Institute, Hyderabad has established a semi-permanent seismological network of 5 broadband seismographs and 10 accelerographs in the Kachchh seismic zone, Gujarat, with the prime objective to monitor the continued aftershock activity of the 2001 Mw7.7 Bhuj mainshock. The reliable and accurate broadband data for the Mw 7.6 (8 Oct., 2005) Kashmir earthquake and its aftershocks from this network, as well as from the Hyderabad Geoscope station, enabled us to estimate the group velocity dispersion characteristics and the one-dimensional regional shear-velocity structure of peninsular India. Firstly, we measure Rayleigh- and Love-wave group velocity dispersion curves in the range of 8 to 35 sec and invert these curves to estimate the crustal and upper mantle structure below the western part of peninsular India. Our best model suggests a two-layered crust: The upper crust is 13.8 km thick with a shear velocity (Vs) of 3.2 km/s; the corresponding values for the lower crust are 24.9 km and 3.7 km/s. The shear velocity for the upper mantle is found to be 4.65 km/s. Based on this structure, we perform a moment tensor (MT) inversion of the bandpass (0.05–0.02 Hz) filtered seismograms of the Kashmir earthquake. The best fit is obtained for a source located at a depth of 30 km, with a seismic moment, Mo, of 1.6 × 10^27 dyne-cm, and a focal mechanism with strike 19.5°, dip 42°, and rake 167°. The long-period magnitude (MA ~ Mw) of this earthquake is estimated to be 7.31. An analysis of well-developed sPn and sSn regional crustal phases from the bandpassed (0.02–0.25 Hz) seismograms of this earthquake at four stations in Kachchh suggests a focal depth of 30.8 km.
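Given a seismic moment in dyne-cm, the standard Hanks and Kanamori (1979) moment magnitude is a one-line computation; note that the abstract's long-period magnitude MA ≈ 7.31 comes from a different estimation procedure, so the two values need not coincide exactly.

```python
import math

def moment_magnitude(m0_dyne_cm):
    """Hanks & Kanamori (1979) moment magnitude from seismic moment
    expressed in dyne-cm: Mw = (2/3) log10(M0) - 10.7."""
    return (2.0 / 3.0) * math.log10(m0_dyne_cm) - 10.7

mw = moment_magnitude(1.6e27)   # the moment reported in the abstract
```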

  4. Accurate and precise determination of isotopic ratios by MC-ICP-MS: a review.

    PubMed

    Yang, Lu

    2009-01-01

For many decades, the accurate and precise determination of isotope ratios has remained of strong interest to many researchers due to its important applications in earth, environmental, biological, archeological, and medical sciences. Traditionally, thermal ionization mass spectrometry (TIMS) has been the technique of choice for achieving the highest accuracy and precision. However, recent developments in multi-collector inductively coupled plasma mass spectrometry (MC-ICP-MS) have brought a new dimension to this field. In addition to its simple and robust sample introduction, high sample throughput, and high mass resolution, the flat-topped peaks generated by this technique provide for accurate and precise determination of isotope ratios with precision reaching 0.001%, comparable to that achieved with TIMS. These features, in combination with the ability of the ICP source to ionize nearly all elements in the periodic table, have resulted in an increased use of MC-ICP-MS for such measurements in various sample matrices. To determine accurate and precise isotope ratios with MC-ICP-MS, utmost care must be exercised during sample preparation, optimization of the instrument, and mass bias corrections. Unfortunately, there are inconsistencies and errors evident in many MC-ICP-MS publications, including errors in mass bias correction models. This review examines "state-of-the-art" methodologies presented in the literature for the achievement of precise and accurate determinations of isotope ratios by MC-ICP-MS. Some general rules for such accurate and precise measurements are suggested, and calculations of the combined uncertainty of the data using a few common mass bias correction models are outlined.
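One common family of mass bias correction models referred to above is the exponential law, in which a bias factor derived from a certified reference ratio is applied to a measured ratio of the analyte. The numerical values in this sketch are illustrative, not certified reference values.

```python
import math

def mass_bias_factor(r_meas, r_cert, m_num, m_den):
    """Exponential-law mass bias factor f from a certified isotope pair:
    r_cert = r_meas * (m_num / m_den) ** f."""
    return math.log(r_cert / r_meas) / math.log(m_num / m_den)

def correct_ratio(r_meas, m_num, m_den, f):
    """Apply the exponential-law correction to a measured ratio."""
    return r_meas * (m_num / m_den) ** f

# hypothetical numbers: calibrate on a Tl-205/Tl-203 spike, then apply the
# factor to a measured Pb-208/Pb-206 ratio (isotope masses in u)
f = mass_bias_factor(2.4000, 2.3871, 204.9744, 202.9723)
r_corr = correct_ratio(1.0900, 207.9767, 205.9745, f)
```

Propagating the uncertainties of `r_meas`, `r_cert`, and the masses through these two expressions is exactly the combined-uncertainty calculation the review outlines.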

  5. The impact of realistic source shape and flexibility on source mask optimization

    NASA Astrophysics Data System (ADS)

    Aoyama, Hajime; Mizuno, Yasushi; Hirayanagi, Noriyuki; Kita, Naonori; Matsui, Ryota; Izumi, Hirohiko; Tajima, Keiichi; Siebert, Joachim; Demmerle, Wolfgang; Matsuyama, Tomoyuki

    2013-04-01

    Source mask optimization (SMO) is widely used to make state-of-the-art semiconductor devices in high-volume manufacturing. To realize mature SMO solutions in production, the Intelligent Illuminator, an illumination system on Nikon scanners, is useful because it can generate freeform sources with high fidelity to the target. Proteus SMO, which employs a co-optimization method and inserts a validation step with mask 3D effects and resist properties for accurate prediction of wafer printing, can take the properties of the Intelligent Illuminator into account. We investigate the impact of source properties on SMO for a static random-access memory pattern. The quality of a source produced on the scanner is compared to the SMO target using in-situ measurement and aerial-image simulation based on the measurement data. Furthermore, we discuss an evaluation of the universality of the source for use on multiple scanners, validated against estimated scanner errors.

  6. Impact of reconstruction parameters on quantitative I-131 SPECT

    NASA Astrophysics Data System (ADS)

    van Gils, C. A. J.; Beijst, C.; van Rooij, R.; de Jong, H. W. A. M.

    2016-07-01

    Radioiodine therapy using I-131 is widely used for treatment of thyroid disease or neuroendocrine tumors. Monitoring treatment by accurate dosimetry requires quantitative imaging. The high-energy photons, however, render quantitative SPECT reconstruction challenging, potentially requiring accurate correction for scatter and collimator effects. The goal of this work is to assess the effectiveness of various correction methods on these effects using phantom studies. A SPECT/CT acquisition of the NEMA IEC body phantom was performed. Images were reconstructed using the following parameters: (1) without scatter correction, (2) with triple energy window (TEW) scatter correction, and (3) with Monte Carlo-based scatter correction. For modelling the collimator-detector response (CDR), both (a) geometric Gaussian CDRs and (b) Monte Carlo simulated CDRs were compared. Quantitative accuracy, contrast-to-noise ratios, and recovery coefficients were calculated, as well as the background variability and the residual count error in the lung insert. The Monte Carlo scatter corrected reconstruction method was shown to be intrinsically quantitative, requiring no experimentally acquired calibration factor. It resulted in a more accurate quantification of the background compartment activity density compared with TEW or no scatter correction. The quantification error relative to a dose-calibrator-derived measurement was found to be <1%, -26%, and 33%, respectively. The adverse effects of partial volume were significantly smaller with the Monte Carlo simulated CDR correction compared with geometric Gaussian or no CDR modelling. Scatter correction showed a small effect on quantification of small volumes. When using a weighting factor, TEW correction was comparable to Monte Carlo reconstruction in all measured parameters, although this approach is clinically impractical since this factor may be patient dependent. Monte Carlo based scatter correction including accurately simulated CDR
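    The TEW correction evaluated above estimates the scatter contribution to the photopeak window from two narrow flanking windows. A minimal sketch of the standard trapezoidal TEW estimate, with illustrative window widths and counts (not values from this study):

```python
def tew_scatter(c_low, c_high, w_low, w_high, w_peak):
    """Trapezoidal TEW scatter estimate for the photopeak window:
    S = (C_low/w_low + C_high/w_high) * w_peak / 2."""
    return (c_low / w_low + c_high / w_high) * w_peak / 2.0

def tew_primary(c_peak, c_low, c_high, w_low, w_high, w_peak):
    """Scatter-corrected (primary) counts, clipped at zero."""
    return max(c_peak - tew_scatter(c_low, c_high, w_low, w_high, w_peak), 0.0)

# Illustrative counts: photopeak window 20 keV wide, two 10 keV side windows
primary = tew_primary(c_peak=1000.0, c_low=100.0, c_high=60.0,
                      w_low=10.0, w_high=10.0, w_peak=20.0)  # 840.0
```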

  7. Effects of Supplemental Chromium Source and Concentration on Growth, Carcass Characteristics, and Serum Lipid Parameters of Broilers Reared Under Normal Conditions.

    PubMed

    Zheng, Cancai; Huang, Yanling; Xiao, Fang; Lin, Xi; Lloyd, Karen

    2016-02-01

    An experiment was conducted to investigate the effects of dietary chromium (Cr) source and concentration on growth performance, carcass traits, and some serum lipid parameters of broilers under normal rearing conditions for 42 days. A total of 252 1-day-old Cobb 500 commercial female broilers were randomly allotted by body weight (BW) to one of six replicate cages (six broilers per cage) for each of seven treatments in a completely randomized design, involving a 2 × 3 factorial arrangement of treatments with three Cr sources (Cr propionate (CrPro), Cr picolinate (CrPic), and Cr chloride (CrCl3)) and two concentrations of added Cr (0.4 and 2.0 mg of Cr/kg), plus a Cr-unsupplemented control diet. The results showed that dietary Cr supplementation tended to increase the breast muscle percentage compared with the Cr-unsupplemented control group (P = 0.0784), while Cr from CrPic tended to give a higher breast muscle percentage than Cr from CrCl3 (P = 0.0881). Chromium from CrPic also tended to increase the breast intramuscular fat (IMF) compared with Cr from CrCl3 (P = 0.0648). In addition, supplementation of 0.4 mg/kg Cr tended to decrease low-density lipoprotein cholesterol (LDL-C) (P = 0.0614). Compared with the control group, broilers fed Cr-supplemented diets had higher triglyceride (TG) levels (P = 0.0129) regardless of Cr source and concentration. Chromium from CrPro and CrPic gave lower total cholesterol (TC) than Cr from CrCl3 (P = 0.0220). These results indicate that dietary supplementation of Cr affects carcass characteristics and serum lipid parameters of broilers under normal rearing conditions, and that supplementation of organic Cr can improve carcass characteristics and reduce the cholesterol content in serum.

  8. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate... indispensable in examinations conducted within the Department of Veterans Affairs. Muscle atrophy must also be...

  9. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate... indispensable in examinations conducted within the Department of Veterans Affairs. Muscle atrophy must also be...

  10. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate... indispensable in examinations conducted within the Department of Veterans Affairs. Muscle atrophy must also be...

  11. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate... indispensable in examinations conducted within the Department of Veterans Affairs. Muscle atrophy must also be...

  12. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate... indispensable in examinations conducted within the Department of Veterans Affairs. Muscle atrophy must also be...

  13. Influence of the volume and density functions within geometric models for estimating trunk inertial parameters.

    PubMed

    Wicke, Jason; Dumas, Genevieve A

    2010-02-01

    The geometric method combines a volume and a density function to estimate body segment parameters and has the best opportunity for developing the most accurate models. In the trunk, there are many different tissues that greatly differ in density (e.g., bone versus lung). Thus, the density function for the trunk must be particularly sensitive to capture this diversity, such that accurate inertial estimates are possible. Three different models were used to test this hypothesis by estimating trunk inertial parameters of 25 female and 24 male college-aged participants. The outcome of this study indicates that the inertial estimates for the upper and lower trunk are most sensitive to the volume function and not very sensitive to the density function. Although it appears that the uniform density function has a greater influence on inertial estimates in the lower trunk region than in the upper trunk region, this is likely due to the (overestimated) density value used. When geometric models are used to estimate body segment parameters, care must be taken in choosing a model that can accurately estimate segment volumes. Researchers wanting to develop accurate geometric models should focus on the volume function, especially in unique populations (e.g., pregnant or obese individuals).
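    The volume-function sensitivity discussed above can be illustrated with a toy geometric model: a trunk section represented as stacked elliptical slabs, with mass obtained by combining each slab's volume with an assumed density. All dimensions and densities below are hypothetical, not values from the study:

```python
import math

def elliptical_slab_volume(a, b, h):
    """Volume of an elliptical-cylinder slab with semi-axes a, b and height h."""
    return math.pi * a * b * h

def segment_mass(slabs, densities):
    """Segment mass from per-slab (a, b, h) geometry and per-slab density."""
    return sum(elliptical_slab_volume(a, b, h) * rho
               for (a, b, h), rho in zip(slabs, densities))

# Hypothetical two-slab lower-trunk section, in metres and kg/m^3
slabs = [(0.15, 0.10, 0.05), (0.16, 0.11, 0.05)]
mass_uniform = segment_mass(slabs, [1000.0, 1000.0])  # uniform density function
mass_varied = segment_mass(slabs, [1050.0, 950.0])    # tissue-varying density
```

Because every slab's volume multiplies its density, errors in the measured semi-axes propagate through all slabs and on into the moment-of-inertia estimates, which is why the volume function dominates the accuracy of the model.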

  14. Source parameters of the 2013 Lushan, Sichuan, Ms7.0 earthquake and estimation of the near-fault strong ground motion

    NASA Astrophysics Data System (ADS)

    Meng, L.; Zhou, L.; Liu, J.

    2013-12-01

    The April 20, 2013 Ms 7.0 earthquake in Lushan, Sichuan province, China, occurred as the result of east-west oriented reverse-type motion on a north-south striking fault. The source location suggests the event occurred on the southern part of the Longmenshan fault at a depth of 13 km. The Lushan earthquake caused great loss of property and 196 deaths. The maximum intensity reached VIII to IX at Baoxing and Lushan, which are located in the meizoseismal area. In this study, we first analyzed the dynamic source process, calculated the source spectral parameters, and estimated the near-fault strong ground motion based on Brune's circular source model. A dynamical composite source model (DCSM) was then developed to simulate the near-fault strong ground motion, with associated fault rupture properties, at Baoxing and Lushan, respectively. The results indicate frictional undershoot behavior in the dynamic source process of the Lushan earthquake, in contrast to the overshoot behavior of the Wenchuan earthquake. Based on the simulated near-fault strong ground motion, we describe the intensity distribution of the Lushan earthquake. The simulated maximum intensity is IX, and the region with intensity VII and above covers almost 16,000 km2, consistent with the observed intensity published online by the China Earthquake Administration (CEA) on April 25. Moreover, the estimation methods based on the empirical relationship and the numerical modeling developed in this study have broad application in strong ground motion prediction and intensity estimation for earthquake rescue purposes and for understanding the earthquake source process. Keywords: Lushan Ms 7.0 earthquake; near-fault strong ground motion; DCSM; simulated intensity
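    Brune's circular source model, used above for the source spectral parameters, relates the corner frequency of the source spectrum to a source radius and static stress drop. A minimal sketch; the shear-wave velocity and the example event are assumptions for illustration, not values from this study:

```python
import math

def brune_radius(fc_hz, beta_m_s=3500.0):
    """Brune source radius r = 2.34 * beta / (2 * pi * fc), in metres."""
    return 2.34 * beta_m_s / (2.0 * math.pi * fc_hz)

def stress_drop_pa(m0_n_m, r_m):
    """Static stress drop of a circular crack: 7 * M0 / (16 * r^3), in Pa."""
    return 7.0 * m0_n_m / (16.0 * r_m ** 3)

# Hypothetical event: M0 = 1e18 N*m (Mw ~ 5.9) with a 1 Hz corner frequency
r = brune_radius(1.0)                 # ~1300 m for beta = 3.5 km/s
delta_sigma = stress_drop_pa(1e18, r)
```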

  15. Feedback about More Accurate versus Less Accurate Trials: Differential Effects on Self-Confidence and Activation

    ERIC Educational Resources Information Center

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-01-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On Day 1, participants performed a golf putting task under one of…

  16. Evaluation of Effective Sources in Uncertainty Measurements of Personal Dosimetry by a Harshaw TLD System

    PubMed Central

    Hosseini Pooya, SM; Orouji, T

    2014-01-01

    Background: Accurate individual dose results reported by personal dosimetry service providers are very important. There are national/international criteria for acceptable dosimetry system performance. Objective: In this research, the sources of uncertainty in a TLD-based personal dosimetry system are identified, measured, and calculated. Method: These sources include inhomogeneity of TLD sensitivity, variability of TLD readings due to limited sensitivity and background, energy dependence, directional dependence, non-linearity of the response, fading, dependence on ambient temperature/humidity, and calibration errors, all of which may affect the dose response. Parameters influencing the above sources of uncertainty were studied for Harshaw TLD-100 card dosimeters read with the hot-gas Harshaw 6600 TLD reader. Results: The individual uncertainty of each source was measured to be less than 6.7% at the 68% confidence level. The total uncertainty was calculated to be 17.5% at the 95% confidence level. Conclusion: The TLD-100 personal dosimeters together with the Harshaw 6600 reader show a total uncertainty less than the admissible value of 42% for personal dosimetry services. PMID:25505769
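    The total uncertainty quoted above is consistent with the standard GUM-style procedure: combine independent standard uncertainties in quadrature, then apply a coverage factor (k = 2 for roughly 95% confidence). A minimal sketch with illustrative component values, not the paper's:

```python
import math

def combined_standard_uncertainty(components_pct):
    """Root-sum-square combination of independent standard uncertainties (k=1)."""
    return math.sqrt(sum(u * u for u in components_pct))

def expanded_uncertainty(u_c_pct, k=2.0):
    """Expanded uncertainty; k=2 corresponds to ~95% confidence."""
    return k * u_c_pct

# Illustrative component uncertainties in percent, at the 68% confidence level
u_c = combined_standard_uncertainty([3.0, 4.0])  # 5.0
total_95 = expanded_uncertainty(u_c)             # 10.0
```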

  17. A West Virginia case study: does erosion differ between streambanks clustered by the bank assessment of nonpoint source consequences of sediment (BANCS) model parameters?

    Treesearch

    Abby L. McQueen; Nicolas P. Zegre; Danny L. Welsch

    2013-01-01

    The integration of factors and processes responsible for streambank erosion is complex. To explore the influence of physical variables on streambank erosion, parameters for the bank assessment of nonpoint source consequences of sediment (BANCS) model were collected on a 1-km reach of Horseshoe Run in Tucker County, West Virginia. Cluster analysis was used to establish...

  18. Accurate step-hold tracking of smoothly varying periodic and aperiodic probability.

    PubMed

    Ricci, Matthew; Gallistel, Randy

    2017-07-01

    Subjects observing many samples from a Bernoulli distribution are able to perceive an estimate of the generating parameter. A question of fundamental importance is how the current percept-what we think the probability now is-depends on the sequence of observed samples. Answers to this question are strongly constrained by the manner in which the current percept changes in response to changes in the hidden parameter. Subjects do not update their percept trial-by-trial when the hidden probability undergoes unpredictable and unsignaled step changes; instead, they update it only intermittently in a step-hold pattern. It could be that the step-hold pattern is not essential to the perception of probability and is only an artifact of step changes in the hidden parameter. However, we now report that the step-hold pattern obtains even when the parameter varies slowly and smoothly. It obtains even when the smooth variation is periodic (sinusoidal) and perceived as such. We elaborate on a previously published theory that accounts for: (i) the quantitative properties of the step-hold update pattern; (ii) subjects' quick and accurate reporting of changes; (iii) subjects' second thoughts about previously reported changes; (iv) subjects' detection of higher-order structure in patterns of change. We also call attention to the challenges these results pose for trial-by-trial updating theories.

  19. Tracking antibiotic resistance gene pollution from different sources using machine-learning classification.

    PubMed

    Li, Li-Guan; Yin, Xiaole; Zhang, Tong

    2018-05-24

    Antimicrobial resistance (AMR) has been a worldwide public health concern. Current widespread AMR pollution has posed a big challenge in accurately disentangling the source-sink relationship, which has been further confounded by point and non-point sources, as well as endogenous and exogenous cross-reactivity, under complicated environmental conditions. Because of their insufficient capability in identifying the source-sink relationship within a quantitative framework, traditional antibiotic resistance gene (ARG) signature-based source-tracking methods are hardly a practical solution. By combining broad-spectrum ARG profiling with the machine-learning classifier SourceTracker, here we present a novel way to address the question in the era of high-throughput sequencing. Its potential for extensive application was first validated with 656 global-scale samples covering diverse environmental types (e.g., human/animal gut, wastewater, soil, ocean) and broad geographical regions (e.g., China, USA, Europe, Peru). Its potential and limitations in source prediction, as well as the effect of parameter adjustment, were then rigorously evaluated by artificial configurations with representative source proportions. When applying SourceTracker in region-specific analysis, excellent performance was achieved by ARG profiles in two sample types with obviously different source compositions, i.e., the influent and effluent of a wastewater treatment plant. Two environmental metagenomic datasets spanning a gradient of anthropogenic interference further supported its potential in practical application. To complement general-profile-based source tracking in distinguishing continuous gradient pollution, a few generalist and specialist indicator ARGs across ecotypes were identified in this study. We demonstrated for the first time that the developed source-tracking platform, when coupled with proper experiment design and efficient metagenomic analysis tools, will have significant implications for assessing AMR pollution.

  20. Line Narrowing Parameter Measurement by Modulation Spectroscopy

    NASA Technical Reports Server (NTRS)

    Dharamsi, Amin N.

    1998-01-01

    Accurate characterization of Oxygen A-band line parameters by wavelength modulation spectroscopy with tunable diode lasers is ongoing research at Old Dominion University, under sponsorship from NASA Langley Research Center. The work proposed here will be undertaken under the guidance of Dr. William Chu and Dr. Lamont Poole of the Aerosol Research Branch at NASA Langley Research Center in Hampton, Virginia. The research was started about two years ago and utilizes wavelength modulation absorption spectroscopy with higher harmonic detection, a technique that we developed at Old Dominion University, to obtain the absorption line characteristics of the Oxygen A-band rovibronic lines. Accurate characterization of this absorption band is needed for processing of data that will be obtained in experiments such as the NASA Stratospheric Aerosol and Gas Experiment III (SAGE III), part of the US Mission to Planet Earth. The Summer Fellowship research undertook a measurement of the Dicke line-narrowing parameters of the Oxygen A-band lines using wavelength modulation spectroscopy. Our previous theoretical results had indicated that such a measurement could be done sensitively and conveniently with this type of spectroscopy. In particular, theory had indicated that the signal magnitude would depend on pressure in a manner that was very sensitive to the narrowing parameter. One of the major tasks undertaken during the summer of 1998 was to establish experimentally that these theoretical predictions were correct. This was done successfully, and the results of the work are being prepared for publication. Experimental results were obtained in which the magnitude of the signal was measured as a function of pressure for various harmonic detection orders (N = 1, 2, 3, 4, 5). A comparison with theoretical results was made, and it was shown that the agreement between theory and experiment was very good. More importantly, however, it was shown

  1. Accurate macromolecular structures using minimal measurements from X-ray free-electron lasers

    PubMed Central

    Hattne, Johan; Echols, Nathaniel; Tran, Rosalie; Kern, Jan; Gildea, Richard J.; Brewster, Aaron S.; Alonso-Mori, Roberto; Glöckner, Carina; Hellmich, Julia; Laksmono, Hartawan; Sierra, Raymond G.; Lassalle-Kaiser, Benedikt; Lampe, Alyssa; Han, Guangye; Gul, Sheraz; DiFiore, Dörte; Milathianaki, Despina; Fry, Alan R.; Miahnahri, Alan; White, William E.; Schafer, Donald W.; Seibert, M. Marvin; Koglin, Jason E.; Sokaras, Dimosthenis; Weng, Tsu-Chien; Sellberg, Jonas; Latimer, Matthew J.; Glatzel, Pieter; Zwart, Petrus H.; Grosse-Kunstleve, Ralf W.; Bogan, Michael J.; Messerschmidt, Marc; Williams, Garth J.; Boutet, Sébastien; Messinger, Johannes; Zouni, Athina; Yano, Junko; Bergmann, Uwe; Yachandra, Vittal K.; Adams, Paul D.; Sauter, Nicholas K.

    2014-01-01

    X-ray free-electron laser (XFEL) sources enable the use of crystallography to solve three-dimensional macromolecular structures under native conditions and free from radiation damage. Results to date, however, have been limited by the challenge of deriving accurate Bragg intensities from a heterogeneous population of microcrystals, while at the same time modeling the X-ray spectrum and detector geometry. Here we present a computational approach designed to extract statistically significant high-resolution signals from fewer diffraction measurements. PMID:24633409

  2. Accurate spectroscopic characterization of oxirane: A valuable route to its identification in Titan's atmosphere and the assignment of unidentified infrared bands

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Puzzarini, Cristina; Biczysko, Malgorzata; Bloino, Julien

    2014-04-20

    In an effort to provide an accurate spectroscopic characterization of oxirane, state-of-the-art computational methods and approaches have been employed to determine highly accurate fundamental vibrational frequencies and rotational parameters. Available experimental data were used to assess the reliability of our computations, and an accuracy on average of 10 cm⁻¹ for fundamental transitions as well as overtones and combination bands has been pointed out. Moving to rotational spectroscopy, relative discrepancies of 0.1%, 2%-3%, and 3%-4% were observed for rotational, quartic, and sextic centrifugal-distortion constants, respectively. We are therefore confident that the highly accurate spectroscopic data provided herein can be useful for identification of oxirane in Titan's atmosphere and the assignment of unidentified infrared bands. Since oxirane was already observed in the interstellar medium and some astronomical objects are characterized by very high D/H ratios, we also considered the accurate determination of the spectroscopic parameters for the mono-deuterated species, oxirane-d1. For the latter, an empirical scaling procedure allowed us to improve our computed data and to provide predictions for rotational transitions with a relative accuracy of about 0.02% (i.e., an uncertainty of about 40 MHz for a transition lying at 200 GHz).

  3. Accurate Vibrational-Rotational Parameters and Infrared Intensities of 1-Bromo-1-fluoroethene: A Joint Experimental Analysis and Ab Initio Study.

    PubMed

    Pietropolli Charmet, Andrea; Stoppa, Paolo; Giorgianni, Santi; Bloino, Julien; Tasinato, Nicola; Carnimeo, Ivan; Biczysko, Malgorzata; Puzzarini, Cristina

    2017-05-04

    The medium-resolution gas-phase infrared (IR) spectra of 1-bromo-1-fluoroethene (BrFC=CH2, 1,1-C2H2BrF) were investigated in the range 300-6500 cm⁻¹, and the vibrational analysis led to the assignment of all fundamentals as well as many overtone and combination bands up to three quanta, thus giving an accurate description of its vibrational structure. Integrated band intensity data were determined with high precision from the measurements of their corresponding absorption cross sections. The vibrational analysis was supported by high-level ab initio investigations. CCSD(T) computations accounting for extrapolation to the complete basis set and core correlation effects were employed to accurately determine the molecular structure and harmonic force field. The latter was then coupled to B2PLYP and MP2 computations in order to account for mechanical and electrical anharmonicities. Second-order perturbative vibrational theory was then applied to the thus obtained hybrid force fields to support the experimental assignment of the IR spectra.

  4. SU-E-T-212: Comparison of TG-43 Dosimetric Parameters of Low and High Energy Brachytherapy Sources Obtained by MCNP Code Versions of 4C, X and 5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zehtabian, M; Zaker, N; Sina, S

    2015-06-15

    Purpose: Different versions of the MCNP code are widely used for dosimetry purposes. The purpose of this study is to compare different versions of the MCNP codes in dosimetric evaluation of different brachytherapy sources. Methods: The TG-43 parameters such as dose rate constant, radial dose function, and anisotropy function of different brachytherapy sources, i.e. Pd-103, I-125, Ir-192, and Cs-137, were calculated in a water phantom. The results obtained by three versions of Monte Carlo codes (MCNP4C, MCNPX, MCNP5) were compared for low and high energy brachytherapy sources. Then the cross section library of the MCNP4C code was changed to ENDF/B-VI release 8, which is used in the MCNP5 and MCNPX codes. Finally, the TG-43 parameters obtained using the MCNP4C-revised code were compared with the other codes. Results: The results of these investigations indicate that for high energy sources, the differences in TG-43 parameters between the codes are less than 1% for Ir-192 and less than 0.5% for Cs-137. However for low energy sources like I-125 and Pd-103, large discrepancies are observed in the g(r) values obtained by MCNP4C and the two other codes. The differences between g(r) values calculated using MCNP4C and MCNP5 at a distance of 6 cm were found to be about 17% and 28% for I-125 and Pd-103 respectively. The results obtained with MCNP4C-revised and MCNPX were similar. However, the maximum difference between the results obtained with the MCNP5 and MCNP4C-revised codes was 2% at 6 cm. Conclusion: The results indicate that using the MCNP4C code for dosimetry of low energy brachytherapy sources can cause large errors in the results. Therefore it is recommended not to use this code for low energy sources, unless its cross section library is changed. Since the results obtained with MCNP4C-revised and MCNPX were similar, it is concluded that the difference between MCNP4C and MCNPX is their cross section libraries.
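    The TG-43 parameters compared in this record enter dose calculation through the TG-43 formalism; a minimal sketch of the simplified 1D (point-source) form, with all numerical inputs hypothetical rather than taken from this study:

```python
def tg43_point_dose_rate(s_k, lam, r_cm, g_r, phi_an=1.0, r0_cm=1.0):
    """TG-43 1D dose rate: D = S_K * Lambda * (r0/r)^2 * g(r) * phi_an(r),
    with air-kerma strength S_K, dose rate constant Lambda, radial dose
    function g(r), and anisotropy factor phi_an(r); r0 = 1 cm reference."""
    return s_k * lam * (r0_cm / r_cm) ** 2 * g_r * phi_an

# Hypothetical I-125-like inputs: S_K in U, Lambda in cGy/(h*U), g(2 cm) = 0.81
dose_rate = tg43_point_dose_rate(s_k=1.0, lam=0.965, r_cm=2.0, g_r=0.81)
```

The Monte Carlo comparisons in the record concern exactly the tabulated inputs to this formula (Lambda, g(r), and the anisotropy function), which is why cross-section library differences at low photon energies matter.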

  5. The HelCat dual-source plasma device.

    PubMed

    Lynn, Alan G; Gilmore, Mark; Watts, Christopher; Herrea, Janis; Kelly, Ralph; Will, Steve; Xie, Shuangwei; Yan, Lincan; Zhang, Yue

    2009-10-01

    The HelCat (Helicon-Cathode) device has been constructed to support a broad range of basic plasma science experiments relevant to the areas of solar physics, laboratory astrophysics, plasma nonlinear dynamics, and turbulence. These research topics require a relatively large plasma source capable of operating over a broad region of parameter space with a plasma duration up to at least several milliseconds. To achieve these parameters a novel dual-source system was developed utilizing both helicon and thermionic cathode sources. Plasma parameters of n_e ≈ 0.5-50 × 10^18 m^-3 and T_e ≈ 3-12 eV allow access to a wide range of collisionalities important to the research. The HelCat device and initial characterization of plasma behavior during dual-source operation are described.

  6. Estimation of real-time runway surface contamination using flight data recorder parameters

    NASA Astrophysics Data System (ADS)

    Curry, Donovan

    Within this research effort, the development of an analytic process for friction coefficient estimation is presented. Under static equilibrium, the sum of forces and moments acting on the aircraft, in the aircraft body coordinate system, while on the ground at any instant is equal to zero. Under this premise, the longitudinal, lateral, and normal forces due to landing are calculated, along with the individual deceleration components present as an aircraft comes to rest during ground roll. To validate this hypothesis, a six-degree-of-freedom aircraft model was created and landing tests were simulated on different surfaces. The simulated aircraft model includes a high-fidelity aerodynamic model, thrust model, landing gear model, friction model, and antiskid model. Three main surfaces were defined in the friction model: dry, wet, and snow/ice. Only the parameters recorded by an FDR are used directly from the aircraft model; all others are estimated or known a priori. The estimation of unknown parameters is also presented in this research effort. With all needed parameters, a comparison and validation with simulated and estimated data, under different runway conditions, is performed. Finally, this report presents results of a sensitivity analysis in order to provide a measure of reliability of the analytic estimation process. Linear and non-linear sensitivity analyses have been performed in order to quantify the level of uncertainty implicit in modeling estimated parameters and how they can affect the calculation of the instantaneous coefficient of friction. Using the approach of force and moment equilibrium about the CG at landing to reconstruct the instantaneous coefficient of friction appears to give a reasonably accurate estimate when compared to the simulated friction coefficient. This also holds when the FDR and estimated parameters are subjected to white noise and when crosswind is introduced to the simulation. After the linear analysis the
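    The longitudinal/normal force balance described above can be sketched as follows. The variable names and magnitudes are hypothetical, and in the actual process the aerodynamic and thrust terms come from the estimated models described in the text:

```python
def friction_coefficient(mass_kg, decel_m_s2, drag_n, rev_thrust_n, lift_n, g=9.81):
    """Instantaneous braking friction coefficient from a quasi-static force
    balance during ground roll: the total decelerating force m*a is the sum
    of tyre friction, aerodynamic drag, and reverse thrust, while the normal
    force is weight minus residual lift: mu = (m*a - D - T_rev) / (m*g - L)."""
    normal_n = mass_kg * g - lift_n
    friction_n = mass_kg * decel_m_s2 - drag_n - rev_thrust_n
    return friction_n / normal_n

# Hypothetical mid-size transport decelerating at 2.5 m/s^2 shortly after touchdown
mu = friction_coefficient(mass_kg=60000.0, decel_m_s2=2.5,
                          drag_n=20000.0, rev_thrust_n=0.0, lift_n=100000.0)
```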

  7. A new lumped-parameter approach to simulating flow processes in unsaturated dual-porosity media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zimmerman, R.W.; Hadgu, T.; Bodvarsson, G.S.

    We have developed a new lumped-parameter dual-porosity approach to simulating unsaturated flow processes in fractured rocks. Fluid flow between the fracture network and the matrix blocks is described by a nonlinear equation that relates the imbibition rate to the local difference in liquid-phase pressure between the fractures and the matrix blocks. This equation is a generalization of the Warren-Root equation, but unlike the Warren-Root equation, is accurate in both the early and late time regimes. The fracture/matrix interflow equation has been incorporated into a computational module, compatible with the TOUGH simulator, to serve as a source/sink term for fracture elements. The new approach achieves accuracy comparable to simulations in which the matrix blocks are discretized, but typically requires an order of magnitude less computational time.

  8. Variability and Reproducibility of 3rd-generation dual-source dynamic volume perfusion CT Parameters in Comparison to MR-perfusion Parameters in Rectal Cancer.

    PubMed

    Sudarski, Sonja; Henzler, Thomas; Floss, Teresa; Gaa, Tanja; Meyer, Mathias; Haubenreisser, Holger; Schoenberg, Stefan O; Attenberger, Ulrike I

    2018-05-02

    To compare, in patients with untreated rectal cancer, quantitative perfusion parameters calculated from 3rd-generation dual-source dynamic volume perfusion CT (dVPCT) with those from 3-Tesla MR-perfusion with regard to data variability and tumour differentiation. In MR-perfusion, plasma flow (PF), plasma volume (PV), and mean transit time (MTT) were assessed in two measurements (M1 and M2) by the same reader. In dVPCT, blood flow (BF), blood volume (BV), MTT, and permeability (PERM) were assessed respectively. CT dose values were calculated. 20 patients (60 ± 13 years) were analysed. Intra-individual and intra-reader variability of duplicate MR-perfusion measurements was higher compared to duplicate dVPCT measurements. dVPCT-derived BF, BV, and PERM could differentiate between tumour and normal rectal wall (significance level for M1 and M2, respectively, regarding BF: p < 0.0001*/0.0001*; BV: p < 0.0001*/0.0001*; MTT: p = 0.93/0.39; PERM: p < 0.0001*/0.0001*); with MR-perfusion this was true for PF and PV (p-values M1/M2 for PF: p = 0.04*/0.01*; PV: p = 0.002*/0.003*; MTT: p = 0.70/0.27*). Mean effective dose of CT staging including dVPCT was 29 ± 6 mSv (20 ± 5 mSv for dVPCT alone). In conclusion, dVPCT has a lower data variability than MR-perfusion, while both dVPCT and MR-perfusion could differentiate tumour tissue from normal rectal wall. With 3rd-generation dual-source CT, dVPCT could be included in standard CT staging without exceeding national dose reference values.

  9. Accurate Recovery of H i Velocity Dispersion from Radio Interferometers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ianjamasimanana, R.; Blok, W. J. G. de; Heald, George H., E-mail: roger@mpia.de, E-mail: blok@astron.nl, E-mail: George.Heald@csiro.au

    2017-05-01

    Gas velocity dispersion measures the amount of disordered motion of a rotating disk. Accurate estimates of this parameter are of the utmost importance because the parameter is directly linked to disk stability and star formation. A global measure of the gas velocity dispersion can be inferred from the width of the atomic hydrogen (H i) 21 cm line. We explore how several systematic effects involved in the production of H i cubes affect the estimate of H i velocity dispersion. We do so by comparing the H i velocity dispersion derived from different types of data cubes provided by The H i Nearby Galaxy Survey. We find that residual-scaled cubes best recover the H i velocity dispersion, independent of the weighting scheme used and for a large range of signal-to-noise ratios. For H i observations, where the dirty beam is substantially different from a Gaussian, the velocity dispersion values are overestimated unless the cubes are cleaned close to (e.g., ∼1.5 times) the noise level.
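One common way to estimate the velocity dispersion from a 21 cm line profile is the intensity-weighted second moment; this minimal sketch uses a synthetic Gaussian line (the channel grid and line width are invented for illustration).

```python
import numpy as np

# Intensity-weighted second moment of a line profile as a dispersion estimator.
def moment2_dispersion(velocities, spectrum):
    w = np.clip(spectrum, 0.0, None)          # suppress negative noise channels
    vbar = np.sum(w * velocities) / np.sum(w) # first moment (mean velocity)
    return np.sqrt(np.sum(w * (velocities - vbar) ** 2) / np.sum(w))

v = np.linspace(-100.0, 100.0, 401)           # velocity axis [km/s]
profile = np.exp(-0.5 * (v / 8.0) ** 2)       # Gaussian line, sigma = 8 km/s
sigma_est = moment2_dispersion(v, profile)    # recovers ~8 km/s
```

On real interferometric cubes the residual beam structure biases this estimator, which is exactly the systematic effect the record investigates.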

  10. Accurate Binding Free Energy Predictions in Fragment Optimization.

    PubMed

    Steinbrecher, Thomas B; Dahlgren, Markus; Cappel, Daniel; Lin, Teng; Wang, Lingle; Krilov, Goran; Abel, Robert; Friesner, Richard; Sherman, Woody

    2015-11-23

    Predicting protein-ligand binding free energies is a central aim of computational structure-based drug design (SBDD)--improved accuracy in binding free energy predictions could significantly reduce costs and accelerate project timelines in lead discovery and optimization. The recent development and validation of advanced free energy calculation methods represents a major step toward this goal. Accurately predicting the relative binding free energy changes of modifications to ligands is especially valuable in the field of fragment-based drug design, since fragment screens tend to deliver initial hits of low binding affinity that require multiple rounds of synthesis to gain the requisite potency for a project. In this study, we show that a free energy perturbation protocol, FEP+, which was previously validated on drug-like lead compounds, is suitable for the calculation of relative binding strengths of fragment-sized compounds as well. We study several pharmaceutically relevant targets with a total of more than 90 fragments and find that the FEP+ methodology, which uses explicit solvent molecular dynamics and physics-based scoring with no parameters adjusted, can accurately predict relative fragment binding affinities. The calculations afford R(2)-values on average greater than 0.5 compared to experimental data and RMS errors of ca. 1.1 kcal/mol overall, demonstrating significant improvements over the docking and MM-GBSA methods tested in this work and indicating that FEP+ has the requisite predictive power to impact fragment-based affinity optimization projects.
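The two summary statistics quoted in the abstract (R² against experiment and RMS error) are easy to reproduce; the binding free energy values below are made up for illustration, not taken from the study.

```python
import numpy as np

# Compute R^2 and RMSE between predicted and experimental binding free energies.
def r2_and_rmse(dg_pred, dg_exp):
    pred = np.asarray(dg_pred, float)
    exp = np.asarray(dg_exp, float)
    ss_res = np.sum((exp - pred) ** 2)              # residual sum of squares
    ss_tot = np.sum((exp - exp.mean()) ** 2)        # total sum of squares
    return 1.0 - ss_res / ss_tot, float(np.sqrt(np.mean((pred - exp) ** 2)))

# Invented fragment binding free energies in kcal/mol
r2, rmse = r2_and_rmse([-6.1, -7.3, -5.2, -8.0], [-6.0, -7.8, -5.5, -7.6])
```

An R² above 0.5 and an RMSE near 1 kcal/mol, as reported for FEP+, would indicate useful rank-ordering power for fragment optimization.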

  11. Cometary splitting - a source for the Jupiter family?

    NASA Astrophysics Data System (ADS)

    Pittich, E. M.; Rickman, H.

    1994-01-01

    The quest for the origin of the Jupiter family of comets includes investigating the possibility that a large fraction of this population originates from past splitting events. In particular, one suggested scenario, albeit less attractive on physical grounds, maintains that a giant comet breakup is a major source of short-period comets. By simulating such events and integrating the motions of the fictitious fragments in an accurate solar system model for the typical lifetime of Jupiter family comets, it is possible to check whether the outcome may or may not be compatible with the observed orbital distribution. In this paper we present such integrations for a few typical progenitor orbits and analyze the ensuing thermalization process with particular attention to the Tisserand parameters. It is found that the sets of fragments lose their memory of a common origin very rapidly so that, in general terms, it is difficult to use the random appearance of the observed orbital distribution as evidence against the giant comet splitting hypothesis.

  12. A low-cost three-dimensional laser surface scanning approach for defining body segment parameters.

    PubMed

    Pandis, Petros; Bull, Anthony Mj

    2017-11-01

    Body segment parameters are used in many different applications in ergonomics as well as in dynamic modelling of the musculoskeletal system. Body segment parameters can be defined using different methods, including techniques that involve time-consuming manual measurements of the human body, used in conjunction with models or equations. In this study, a scanning technique for measuring subject-specific body segment parameters in an easy, fast, accurate and low-cost way was developed and validated. The scanner can obtain the body segment parameters in a single scanning operation, which takes between 8 and 10 s. The results obtained with the system show a standard deviation of 2.5% in volumetric measurements of the upper limb of a mannequin and 3.1% difference between scanning volume and actual volume. Finally, the maximum mean error for the moment of inertia by scanning a standard-sized homogeneous object was 2.2%. This study shows that a low-cost system can provide quick and accurate subject-specific body segment parameter estimates.
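Given a filled voxel grid derived from such a surface scan, the body segment parameters named in the abstract (volume, mass, moment of inertia) follow from simple sums; the density and axis choice below are assumptions for illustration.

```python
import numpy as np

# Hedged sketch: segment volume, mass, and moment of inertia about a vertical
# axis through the centroid, from a boolean voxel occupancy grid. Uniform
# density is an assumption; real segments are inhomogeneous.
def segment_properties(mask, voxel_size, density=1000.0):
    dv = voxel_size ** 3                           # voxel volume [m^3]
    pts = np.argwhere(mask) * voxel_size           # voxel centre coordinates [m]
    volume = len(pts) * dv
    mass = density * volume
    c = pts.mean(axis=0)                           # centroid
    r2 = (pts[:, 0] - c[0]) ** 2 + (pts[:, 1] - c[1]) ** 2  # distance^2 from z axis
    inertia_z = density * dv * np.sum(r2)          # [kg m^2]
    return volume, mass, inertia_z
```

The validation in the record amounts to comparing such scan-derived quantities against known volumes and inertias of reference objects.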

  13. Effects of noise levels and call types on the source levels of killer whale calls.

    PubMed

    Holt, Marla M; Noren, Dawn P; Emmons, Candice K

    2011-11-01

    Accurate parameter estimates relevant to the vocal behavior of marine mammals are needed to assess potential effects of anthropogenic sound exposure, including how masking noise reduces the active space of sounds used for communication. Information about how these animals modify their vocal behavior in response to noise exposure is also needed for such assessment. Prior studies have reported variations in the source levels of killer whale sounds, and a more recent study reported that killer whales compensate for vessel masking noise by increasing their call amplitude. The objectives of the current study were to investigate the source levels of a variety of call types in southern resident killer whales while also considering background noise level as a likely factor related to call source level variability. The source levels of 763 discrete calls along with corresponding background noise were measured over three summer field seasons in the waters surrounding the San Juan Islands, WA. Both noise level and call type were significant factors affecting call source levels (1-40 kHz band, range of 135.0-175.7 dB(rms) re 1 μPa at 1 m). These factors should be considered in models that predict how anthropogenic masking noise reduces vocal communication space in marine mammals.

  14. Accurate Rapid Lifetime Determination on Time-Gated FLIM Microscopy with Optical Sectioning

    PubMed Central

    Silva, Susana F.; Domingues, José Paulo

    2018-01-01

    Time-gated fluorescence lifetime imaging microscopy (FLIM) is a powerful technique to assess the biochemistry of cells and tissues. When applied to living thick samples, it is hampered by the lack of optical sectioning and the need of acquiring many images for an accurate measurement of fluorescence lifetimes. Here, we report on the use of processing techniques to overcome these limitations, minimizing the acquisition time, while providing optical sectioning. We evaluated the application of the HiLo and the rapid lifetime determination (RLD) techniques for accurate measurement of fluorescence lifetimes with optical sectioning. HiLo provides optical sectioning by combining the high-frequency content from a standard image, obtained with uniform illumination, with the low-frequency content of a second image, acquired using structured illumination. Our results show that HiLo produces optical sectioning on thick samples without degrading the accuracy of the measured lifetimes. We also show that instrument response function (IRF) deconvolution can be applied with the RLD technique on HiLo images, greatly improving the accuracy of the measured lifetimes. These results open the possibility of using the RLD technique with pulsed diode laser sources to determine accurately fluorescence lifetimes in the subnanosecond range on thick multilayer samples, provided that offline processing is allowed. PMID:29599938

  15. Accurate Rapid Lifetime Determination on Time-Gated FLIM Microscopy with Optical Sectioning.

    PubMed

    Silva, Susana F; Domingues, José Paulo; Morgado, António Miguel

    2018-01-01

    Time-gated fluorescence lifetime imaging microscopy (FLIM) is a powerful technique to assess the biochemistry of cells and tissues. When applied to living thick samples, it is hampered by the lack of optical sectioning and the need of acquiring many images for an accurate measurement of fluorescence lifetimes. Here, we report on the use of processing techniques to overcome these limitations, minimizing the acquisition time, while providing optical sectioning. We evaluated the application of the HiLo and the rapid lifetime determination (RLD) techniques for accurate measurement of fluorescence lifetimes with optical sectioning. HiLo provides optical sectioning by combining the high-frequency content from a standard image, obtained with uniform illumination, with the low-frequency content of a second image, acquired using structured illumination. Our results show that HiLo produces optical sectioning on thick samples without degrading the accuracy of the measured lifetimes. We also show that instrument response function (IRF) deconvolution can be applied with the RLD technique on HiLo images, greatly improving the accuracy of the measured lifetimes. These results open the possibility of using the RLD technique with pulsed diode laser sources to determine accurately fluorescence lifetimes in the subnanosecond range on thick multilayer samples, provided that offline processing is allowed.
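The basic two-gate RLD estimator the record builds on can be sketched in a few lines: for a mono-exponential decay sampled by two equal-width gates separated by Δt, the lifetime is τ = Δt / ln(D0/D1). The counts below are synthetic.

```python
import numpy as np

# Two-gate rapid lifetime determination (RLD) for a mono-exponential decay.
def rld_lifetime(d0, d1, gate_separation):
    """tau = dt / ln(D0/D1); d0, d1 are integrals of two equal-width gates
    separated by gate_separation, with d0 the earlier gate."""
    return gate_separation / np.log(d0 / d1)

tau_true, dt = 2.5, 2.0                       # ns (synthetic)
d0 = 1000.0                                   # counts in the early gate
d1 = d0 * np.exp(-dt / tau_true)              # counts in the late gate
tau_est = rld_lifetime(d0, d1, dt)            # recovers 2.5 ns
```

The IRF deconvolution step reported in the record corrects d0 and d1 before this ratio is taken, which is what extends the method into the subnanosecond range.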

  16. Determining Hypocentral Parameters for Local Earthquakes in 1-D Using a Genetic Algorithm and Two-point ray tracing

    NASA Astrophysics Data System (ADS)

    Kim, W.; Hahm, I.; Ahn, S. J.; Lim, D. H.

    2005-12-01

    This paper introduces a powerful method for determining hypocentral parameters for local earthquakes in 1-D using a genetic algorithm (GA) and two-point ray tracing. Using existing algorithms to determine hypocentral parameters is difficult, because these parameters can vary based on initial velocity models. We developed a new method to solve this problem by applying a GA to an existing algorithm, HYPO-71 (Lee and Lahr, 1975). The original HYPO-71 algorithm was modified by applying two-point ray tracing and a weighting factor with respect to the takeoff angle at the source to reduce errors from the ray path and hypocenter depth. Artificial data, without error, were generated by computer using two-point ray tracing in a true model, in which velocity structure and hypocentral parameters were known. The accuracy of the calculated results was easily determined by comparing calculated and actual values. We examined the accuracy of this method for several cases by changing the true and modeled layer numbers and thicknesses. The computational results show that this method determines nearly exact hypocentral parameters without depending on initial velocity models. Furthermore, accurate and nearly unique hypocentral parameters were obtained, although the number of modeled layers and thicknesses differed from those in the true model. Therefore, this method can be a useful tool for determining hypocentral parameters in regions where reliable local velocity values are unknown. This method also provides the basic a priori information for 3-D studies. KEYWORDS: hypocentral parameters, genetic algorithm (GA), two-point ray tracing
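A stripped-down version of the GA search can be sketched as follows. Everything here is a stand-in: a homogeneous 6 km/s half-space with straight rays replaces the paper's 1-D layered model and two-point ray tracing, the station layout is invented, and origin time is assumed known.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stations and "true" hypocenter; coordinates in km, times in s.
stations = np.array([[0.0, 0.0, 0.0], [30.0, 5.0, 0.0],
                     [10.0, 40.0, 0.0], [-20.0, 25.0, 0.0]])
true_hypo = np.array([8.0, 12.0, 10.0])
v = 6.0                                              # assumed uniform velocity
t_obs = np.linalg.norm(stations - true_hypo, axis=1) / v

def rms_residual(h):
    t_calc = np.linalg.norm(stations - h, axis=1) / v
    return np.sqrt(np.mean((t_calc - t_obs) ** 2))

# Minimal GA: keep the 20 fittest, breed 40 children by blend crossover,
# mutate with a decaying step so the search narrows over generations.
pop = rng.uniform([-50, -50, 0], [50, 50, 30], size=(60, 3))
for gen in range(150):
    order = np.argsort([rms_residual(h) for h in pop])
    elite = pop[order[:20]]
    parents = elite[rng.integers(0, 20, size=(40, 2))]
    alpha = rng.random((40, 1))
    children = alpha * parents[:, 0] + (1 - alpha) * parents[:, 1]
    children += rng.normal(0.0, 5.0 * 0.97 ** gen, size=children.shape)
    children[:, 2] = np.abs(children[:, 2])          # keep depth positive
    pop = np.vstack([elite, children])

best = min(pop, key=rms_residual)
```

Because the GA only needs forward travel-time evaluations, swapping this toy forward model for proper two-point ray tracing in a layered medium leaves the search loop unchanged, which is the design point of the paper.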

  17. Statistical inference involving binomial and negative binomial parameters.

    PubMed

    García-Pérez, Miguel A; Núñez-Antón, Vicente

    2009-05-01

    Statistical inference about two binomial parameters implies that they are both estimated by binomial sampling. There are occasions in which one aims at testing the equality of two binomial parameters before and after the occurrence of the first success along a sequence of Bernoulli trials. In these cases, the binomial parameter before the first success is estimated by negative binomial sampling whereas that after the first success is estimated by binomial sampling, and both estimates are related. This paper derives statistical tools to test two hypotheses, namely, that both binomial parameters equal some specified value and that both parameters are equal though unknown. Simulation studies are used to show that in small samples both tests are accurate in keeping the nominal Type-I error rates, and also to determine sample size requirements to detect large, medium, and small effects with adequate power. Additional simulations also show that the tests are sufficiently robust to certain violations of their assumptions.
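The setup can be illustrated with an asymptotic likelihood-ratio (G) test for the hypothesis that the success probability is the same before and after the first success. This is a hedged stand-in: the paper derives exact small-sample tests, whereas this large-sample sketch only shows how the geometric (negative binomial) and binomial likelihoods combine. Inputs are invented.

```python
from math import log, sqrt, erfc

def _xlogy(x, y):
    """x * log(y) with the convention 0 * log(0) = 0."""
    return x * log(y) if x else 0.0

def lrt_equal_p(n_pre, x_post, n_post):
    """G-test that p1 (geometric: n_pre trials up to and including the first
    success, MLE p1 = 1/n_pre) equals p2 (binomial: x_post successes in
    n_post later trials, MLE p2 = x_post/n_post)."""
    def loglik(p_geo, p_bin):
        return (_xlogy(n_pre - 1, 1 - p_geo) + _xlogy(1, p_geo)
                + _xlogy(x_post, p_bin) + _xlogy(n_post - x_post, 1 - p_bin))
    p1, p2 = 1.0 / n_pre, x_post / n_post
    pooled = (1 + x_post) / (n_pre + n_post)   # total successes / total trials
    g = 2.0 * (loglik(p1, p2) - loglik(pooled, pooled))
    return g, erfc(sqrt(g / 2.0))              # chi^2(1) survival function

g_stat, p_value = lrt_equal_p(10, 2, 20)       # here p1 = p2 = 0.1, so g = 0
```

The paper's point is precisely that in small samples this chi-square approximation can be inaccurate, motivating tests that keep the nominal Type-I error rate.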

  18. SU-C-BRC-04: Efficient Dose Calculation Algorithm for FFF IMRT with a Simplified Bivariate Gaussian Source Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, F; Park, J; Barraclough, B

    2016-06-15

    Purpose: To develop an efficient and accurate independent dose calculation algorithm with a simplified analytical source model for the quality assurance and safe delivery of Flattening Filter Free (FFF)-IMRT on an Elekta Versa HD. Methods: The source model consisted of a point source and a 2D bivariate Gaussian source, respectively modeling the primary photons and the combined effect of head scatter, monitor chamber backscatter and collimator exchange effect. The in-air fluence was firstly calculated by back-projecting the edges of beam defining devices onto the source plane and integrating the visible source distribution. The effect of the rounded MLC leaf end, tongue-and-groove and interleaf transmission was taken into account in the back-projection. The in-air fluence was then modified with a fourth degree polynomial modeling the cone-shaped dose distribution of FFF beams. Planar dose distribution was obtained by convolving the in-air fluence with a dose deposition kernel (DDK) consisting of the sum of three 2D Gaussian functions. The parameters of the source model and the DDK were commissioned using measured in-air output factors (Sc) and cross beam profiles, respectively. A novel method was used to eliminate the volume averaging effect of ion chambers in determining the DDK. Planar dose distributions of five head-and-neck FFF-IMRT plans were calculated and compared against measurements performed with a 2D diode array (MapCHECK™) to validate the accuracy of the algorithm. Results: The proposed source model predicted Sc for both 6MV and 10MV with an accuracy better than 0.1%. With a stringent gamma criterion (2%/2mm/local difference), the passing rate of the FFF-IMRT dose calculation was 97.2±2.6%. Conclusion: The removal of the flattening filter represents a simplification of the head structure which allows the use of a simpler source model for very accurate dose calculation. The proposed algorithm offers an effective way to ensure the safe delivery
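The final convolution step (in-air fluence convolved with a sum-of-Gaussians DDK) can be sketched as below. The field size, Gaussian weights, and widths are invented for illustration; the paper commissions these from measured output factors and profiles.

```python
import numpy as np

# Toy planar dose = open-field fluence convolved with a DDK modeled as the
# sum of three isotropic 2-D Gaussians (weights/widths are illustrative).
def gaussian2d(x, y, sigma):
    return np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)

n, dx = 128, 0.2                                   # grid points, pixel size [cm]
ax = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(ax, ax)
fluence = ((np.abs(X) < 5) & (np.abs(Y) < 5)).astype(float)   # 10x10 cm field
ddk = sum(w * gaussian2d(X, Y, s)
          for (w, s) in [(0.7, 0.3), (0.25, 1.0), (0.05, 3.0)])
# FFT-based circular convolution; the kernel is recentred to the origin first.
dose = np.real(np.fft.ifft2(np.fft.fft2(fluence)
                            * np.fft.fft2(np.fft.ifftshift(ddk)))) * dx ** 2
```

With a unit-normalized kernel the dose at the field centre stays near 1 and falls off smoothly across the field edge, which is the penumbra behaviour the commissioned DDK is fitted to reproduce.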

  19. Accurate artificial boundary conditions for the semi-discretized linear Schrödinger and heat equations on rectangular domains

    NASA Astrophysics Data System (ADS)

    Ji, Songsong; Yang, Yibo; Pang, Gang; Antoine, Xavier

    2018-01-01

    The aim of this paper is to design some accurate artificial boundary conditions for the semi-discretized linear Schrödinger and heat equations in rectangular domains. The Laplace transform in time and discrete Fourier transform in space are applied to get Green's functions of the semi-discretized equations in unbounded domains with single-source. An algorithm is given to compute these Green's functions accurately through some recurrence relations. Furthermore, the finite-difference method is used to discretize the reduced problem with accurate boundary conditions. Numerical simulations are presented to illustrate the accuracy of our method in the case of the linear Schrödinger and heat equations. It is shown that the reflection at the corners is correctly eliminated.

  20. Theoretical Interpretation of the Measurement of Diffusion Parameters with Pulsed Neutron Source; INTERPRETAZIONE TEORICA DELLE MISURE DI PARAMETRI DI DIFFUSIONE COL METODO DELLE SORGENTI NEUTRONICHE PULSATE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boffi, V.C.; Molinari, V.G.; Parks, D.E.

    1962-05-01

    Features of the pulsed neutron source theory connected with the measurement of diffusion parameters are discussed. Various analytical procedures for determining the decay constant of the fully thermalized neutron flux are compared. The problem of the diffusion coefficient definition is also considered in some detail. (auth)

  1. Accurate estimations of electromagnetic transitions of Sn IV for stellar and interstellar media

    NASA Astrophysics Data System (ADS)

    Biswas, Swapan; Das, Arghya; Bhowmik, Anal; Majumder, Sonjoy

    2018-04-01

    Here we report on accurate ab initio calculations to study astrophysically important electromagnetic transition parameters among different low-lying states of Sn IV. Our ab initio calculations are based on the sophisticated relativistic coupled-cluster theory, which almost exhausts many important electron correlations. To establish the accuracy of the calculations, we compare our results with the available experiments and estimate the transition amplitudes in length and velocity gauged forms. Most of these allowed and forbidden transition wavelengths lie in the infrared region, and they can be observed in the different cool stellar and interstellar media. To reduce the uncertainty, we use experimental energies in estimating the above transition parameters. The presented data will be helpful for finding the abundances of the ion in different astrophysical and laboratory plasma.

  2. Accurate estimations of electromagnetic transitions of Sn IV for stellar and interstellar media

    NASA Astrophysics Data System (ADS)

    Biswas, Swapan; Das, Arghya; Bhowmik, Anal; Majumder, Sonjoy

    2018-07-01

    Here, we report on accurate ab initio calculations to study astrophysically important electromagnetic transition parameters among different low-lying states of Sn IV. Our ab initio calculations are based on the sophisticated relativistic coupled cluster theory, which almost exhausts many important electron correlations. To establish the accuracy of the calculations, we compare our results with the available experiments and estimate the transition amplitudes in length and velocity gauged forms. Most of these allowed and forbidden transition wavelengths lie in the infrared region, and they can be observed in the different cool stellar and interstellar media. To reduce the uncertainty, we use experimental energies in estimating the above transition parameters. The presented data will be helpful for finding the abundances of the ion in different astrophysical and laboratory plasma.

  3. Earthquake source parameter and focal mechanism estimates for the Western Quebec Seismic Zone in eastern Canada

    NASA Astrophysics Data System (ADS)

    Rodriguez Padilla, A. M.; Onwuemeka, J.; Liu, Y.; Harrington, R. M.

    2017-12-01

    The Western Quebec Seismic Zone (WQSZ) is a 160-km-wide band of intraplate seismicity extending 500 km from the Adirondack Highlands (United States) to the Laurentian uplands (Canada). Historically, the WQSZ has experienced over fifteen earthquakes above magnitude 5, with the noteworthy MN5.2 Ladysmith event on May 17, 2013. Previous studies have associated seismicity in the area to the reactivation of Early Paleozoic normal faults within a failed Iapetan rift arm, or strength contrasts between mafic intrusions and felsic rocks due to the Mesozoic track of the Great Meteor hotspot. A good understanding of seismicity and its relation to pre-existing structures requires information about event source properties, such as static stress drop and fault plane orientation, which can be constrained via spectral analysis and focal mechanism solutions. Using data recorded by the CNSN and USArray Transportable Array, we first characterize b-value for 709 events between 2012 and 2016 in WQSZ, obtaining a value of 0.75. We then determine corner frequency and seismic moment values by fitting S-wave spectra on transverse components at all stations for 35 events MN 2.7+. We select event pairs with highly similar waveforms, proximal hypocenters, and magnitudes differing by 1-2 units. Our preliminary results using single-station spectra show corner frequencies of 15 to 40 Hz and stress drop values between 7 and 130 MPa, typical of intraplate seismicity. Last, we derive focal mechanism solutions for 35 events with impulsive P-wave arrivals at a minimum of 8 stations using the hybridMT moment tensor inversion algorithm. Our preliminary results suggest predominantly thrust faulting mechanisms, and at times oblique thrust faulting. The P-axis trend of the focal mechanism solutions suggests a principal stress orientation of NE-SW, which is consistent with that derived from focal mechanisms of earthquakes prior to 2013. We plan to fit the event pair spectral ratios to correct for attenuation.
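A standard way to turn the corner frequencies and seismic moments quoted above into stress drops is the Brune (1970) circular-crack relation, Δσ = 7·M0 / (16·r³) with source radius r = k·β/fc. The shear-wave speed and k value below are generic assumptions, not the study's.

```python
# Brune-model stress drop from seismic moment and corner frequency.
def brune_stress_drop(m0, fc, beta=3500.0, k=0.37):
    """m0 in N*m, fc in Hz, beta (shear-wave speed) in m/s; returns Pa."""
    r = k * beta / fc                  # source radius [m]
    return 7.0 * m0 / (16.0 * r ** 3)

# e.g. an M~2.8 event (M0 ~ 1.4e13 N*m) with fc = 20 Hz gives ~23 MPa,
# inside the 7-130 MPa range reported above.
delta_sigma = brune_stress_drop(1.4e13, 20.0)
```

The spectral-ratio attenuation correction the authors plan would refine fc (and hence r) before this relation is applied.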

  4. Utility of a novel error-stepping method to improve gradient-based parameter identification by increasing the smoothness of the local objective surface: a case-study of pulmonary mechanics.

    PubMed

    Docherty, Paul D; Schranz, Christoph; Chase, J Geoffrey; Chiew, Yeong Shiong; Möller, Knut

    2014-05-01

    Accurate model parameter identification relies on accurate forward model simulations to guide convergence. However, some forward simulation methodologies lack the precision required to properly define the local objective surface and can cause failed parameter identification. The role of objective surface smoothness in identification of a pulmonary mechanics model was assessed using forward simulation from a novel error-stepping method and a proprietary Runge-Kutta method. The objective surfaces were compared via the identified parameter discrepancy generated in a Monte Carlo simulation and the local smoothness of the objective surfaces they generate. The error-stepping method generated significantly smoother error surfaces in each of the cases tested (p<0.0001) and more accurate model parameter estimates than the Runge-Kutta method in three of the four cases tested (p<0.0001) despite a 75% reduction in computational cost. Of note, parameter discrepancy in most cases was limited to a particular oblique plane, indicating a non-intuitive multi-parameter trade-off was occurring. The error-stepping method consistently improved or equalled the outcomes of the Runge-Kutta time-integration method for forward simulations of the pulmonary mechanics model. This study indicates that accurate parameter identification relies on accurate definition of the local objective function, and that parameter trade-off can occur on oblique planes, resulting in prematurely halted parameter convergence. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  5. Gaia Data Release 1. Pre-processing and source list creation

    NASA Astrophysics Data System (ADS)

    Fabricius, C.; Bastian, U.; Portell, J.; Castañeda, J.; Davidson, M.; Hambly, N. C.; Clotet, M.; Biermann, M.; Mora, A.; Busonero, D.; Riva, A.; Brown, A. G. A.; Smart, R.; Lammers, U.; Torra, J.; Drimmel, R.; Gracia, G.; Löffler, W.; Spagna, A.; Lindegren, L.; Klioner, S.; Andrei, A.; Bach, N.; Bramante, L.; Brüsemeister, T.; Busso, G.; Carrasco, J. M.; Gai, M.; Garralda, N.; González-Vidal, J. J.; Guerra, R.; Hauser, M.; Jordan, S.; Jordi, C.; Lenhardt, H.; Mignard, F.; Messineo, R.; Mulone, A.; Serraller, I.; Stampa, U.; Tanga, P.; van Elteren, A.; van Reeven, W.; Voss, H.; Abbas, U.; Allasia, W.; Altmann, M.; Anton, S.; Barache, C.; Becciani, U.; Berthier, J.; Bianchi, L.; Bombrun, A.; Bouquillon, S.; Bourda, G.; Bucciarelli, B.; Butkevich, A.; Buzzi, R.; Cancelliere, R.; Carlucci, T.; Charlot, P.; Collins, R.; Comoretto, G.; Cross, N.; Crosta, M.; de Felice, F.; Fienga, A.; Figueras, F.; Fraile, E.; Geyer, R.; Hernandez, J.; Hobbs, D.; Hofmann, W.; Liao, S.; Licata, E.; Martino, M.; McMillan, P. J.; Michalik, D.; Morbidelli, R.; Parsons, P.; Pecoraro, M.; Ramos-Lerate, M.; Sarasso, M.; Siddiqui, H.; Steele, I.; Steidelmüller, H.; Taris, F.; Vecchiato, A.; Abreu, A.; Anglada, E.; Boudreault, S.; Cropper, M.; Holl, B.; Cheek, N.; Crowley, C.; Fleitas, J. M.; Hutton, A.; Osinde, J.; Rowell, N.; Salguero, E.; Utrilla, E.; Blagorodnova, N.; Soffel, M.; Osorio, J.; Vicente, D.; Cambras, J.; Bernstein, H.-H.

    2016-11-01

    Context. The first data release from the Gaia mission contains accurate positions and magnitudes for more than a billion sources, and proper motions and parallaxes for the majority of the 2.5 million Hipparcos and Tycho-2 stars. Aims: We describe three essential elements of the initial data treatment leading to this catalogue: the image analysis, the construction of a source list, and the near real-time monitoring of the payload health. We also discuss some weak points that set limitations for the attainable precision at the present stage of the mission. Methods: Image parameters for point sources are derived from one-dimensional scans, using a maximum likelihood method, under the assumption of a line spread function constant in time, and a complete modelling of bias and background. These conditions are, however, not completely fulfilled. The Gaia source list is built starting from a large ground-based catalogue, but even so a significant number of new entries have been added, and a large number have been removed. The autonomous onboard star image detection will pick up many spurious images, especially around bright sources, and such unwanted detections must be identified. Another key step of the source list creation consists in arranging the more than 10^10 individual detections in spatially isolated groups that can be analysed individually. Results: Complete software systems have been built for the Gaia initial data treatment, that manage approximately 50 million focal plane transits daily, giving transit times and fluxes for 500 million individual CCD images to the astrometric and photometric processing chains. The software also carries out a successful and detailed daily monitoring of Gaia health.

  6. Parameter Uncertainty for Aircraft Aerodynamic Modeling using Recursive Least Squares

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Morelli, Eugene A.

    2016-01-01

    A real-time method was demonstrated for determining accurate uncertainty levels of stability and control derivatives estimated using recursive least squares and time-domain data. The method uses a recursive formulation of the residual autocorrelation to account for colored residuals, which are routinely encountered in aircraft parameter estimation and change the predicted uncertainties. Simulation data and flight test data for a subscale jet transport aircraft were used to demonstrate the approach. Results showed that the corrected uncertainties matched the observed scatter in the parameter estimates, and did so more accurately than conventional uncertainty estimates that assume white residuals. Only small differences were observed between batch estimates and recursive estimates at the end of the maneuver. It was also demonstrated that the autocorrelation could be reduced to a small number of lags to minimize computation and memory storage requirements without significantly degrading the accuracy of predicted uncertainty levels.
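The underlying recursive least squares machinery can be sketched as follows. This is a minimal plain-RLS sketch with invented regressors; the paper's actual contribution, correcting the predicted uncertainties for colored residuals via a recursive residual autocorrelation, is noted in comments but omitted for brevity.

```python
import numpy as np

# Plain recursive least squares: sequentially update parameter estimate and
# covariance as each measurement arrives.
def rls(X, z, lam=1.0):
    n = X.shape[1]
    theta = np.zeros(n)
    P = np.eye(n) * 1e6                     # large initial covariance
    for x, zi in zip(X, z):
        k = P @ x / (lam + x @ P @ x)       # gain vector
        theta = theta + k * (zi - x @ theta)
        # NOTE: the paper corrects uncertainties derived from P when the
        # residuals are colored; that correction is not implemented here.
        P = (P - np.outer(k, x @ P)) / lam  # covariance update
    return theta, P

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(200), rng.normal(size=200)])    # regressors
z = X @ np.array([0.5, -1.2]) + 0.01 * rng.normal(size=200)  # measurements
theta_hat, P_final = rls(X, z)
```

With white residuals, P scaled by the residual variance gives valid standard errors; the record's point is that colored residuals break this assumption and inflate the true scatter.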

  7. Alignment of leading-edge and peak-picking time of arrival methods to obtain accurate source locations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roussel-Dupre, R.; Symbalisty, E.; Fox, C.

    2009-08-01

    The location of a radiating source can be determined by time-tagging the arrival of the radiated signal at a network of spatially distributed sensors. The accuracy of this approach depends strongly on the particular time-tagging algorithm employed at each of the sensors. If different techniques are used across the network, then the time tags must be referenced to a common fiducial for maximum location accuracy. In this report we derive the time corrections needed to temporally align leading-edge, time-tagging techniques with peak-picking algorithms. We focus on broadband radio frequency (RF) sources, an ionospheric propagation channel, and narrowband receivers, but themore » final results can be generalized to apply to any source, propagation environment, and sensor. Our analytic results are checked against numerical simulations for a number of representative cases and agree with the specific leading-edge algorithm studied independently by Kim and Eng (1995) and Pongratz (2005 and 2007).« less

  8. Nanoscale MOS devices: device parameter fluctuations and low-frequency noise (Invited Paper)

    NASA Astrophysics Data System (ADS)

    Wong, Hei; Iwai, Hiroshi; Liou, J. J.

    2005-05-01

    It is well-known in conventional MOS transistors that the low-frequency noise or flicker noise is mainly contributed by the trapping-detrapping events in the gate oxide and the mobility fluctuation in the surface channel. In nanoscale MOS transistors, the number of trapping-detrapping events becomes less important because of the large direct tunneling current through the ultrathin gate dielectric, which reduces the probability of trapping-detrapping and the level of leakage current fluctuation. Other noise sources become more significant in nanoscale devices. The source and drain resistance noises have greater impact on the drain current noise. Significant contributions of the parasitic bipolar transistor noise in ultra-short channels and of channel mobility fluctuation to the channel noise are observed. The channel mobility fluctuation in nanoscale devices could be due to the local composition fluctuation of the gate dielectric material, which gives rise to the permittivity fluctuation along the channel and results in gigantic channel potential fluctuation. On the other hand, the statistical variations of the device parameters across the wafer would make the noise measurements less accurate, which will be a challenge for the applicability of analytical flicker noise models as a process or device evaluation tool for nanoscale devices. Some measures for circumventing these difficulties are proposed.

  9. Application of lab derived kinetic biodegradation parameters at the field scale

    NASA Astrophysics Data System (ADS)

    Schirmer, M.; Barker, J. F.; Butler, B. J.; Frind, E. O.

    2003-04-01

    Estimating the intrinsic remediation potential of an aquifer typically requires the accurate assessment of the biodegradation kinetics, the level of available electron acceptors and the flow field. Zero- and first-order degradation rates derived at the laboratory scale generally overpredict the rate of biodegradation when applied to the field scale, because limited electron acceptor availability and microbial growth are typically not considered. On the other hand, field estimated zero- and first-order rates are often not suitable to forecast plume development because they may be an oversimplification of the processes at the field scale and ignore several key processes, phenomena and characteristics of the aquifer. This study uses the numerical model BIO3D to link the laboratory and field scale by applying laboratory derived Monod kinetic degradation parameters to simulate a dissolved gasoline field experiment at Canadian Forces Base (CFB) Borden. All additional input parameters were derived from laboratory and field measurements or taken from the literature. The simulated results match the experimental results reasonably well without having to calibrate the model. An extensive sensitivity analysis was performed to estimate the influence of the most uncertain input parameters and to define the key controlling factors at the field scale. It is shown that the most uncertain input parameters have only a minor influence on the simulation results. Furthermore it is shown that the flow field, the amount of electron acceptor (oxygen) available and the Monod kinetic parameters have a significant influence on the simulated results. Under the field conditions modelled and the assumptions made for the simulations, it can be concluded that laboratory derived Monod kinetic parameters can adequately describe field scale degradation processes, if all controlling factors are incorporated in the field scale modelling that are not necessarily observed at the lab scale. In this way
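The dual-Monod kinetics discussed above (degradation limited by both substrate and the electron acceptor, with microbial growth) can be sketched with a simple explicit time-stepping loop. All rate constants and initial concentrations are illustrative, not the study's calibrated values.

```python
# Hedged sketch of dual-Monod biodegradation in a well-mixed batch:
# substrate S, oxygen O (electron acceptor), biomass M; forward Euler stepping.
def simulate_monod(S=10.0, O=8.0, M=0.1, kmax=2.0, Ks=1.0, Ko=0.2,
                   Y=0.3, gamma=3.0, dt=0.001, t_end=10.0):
    for _ in range(int(t_end / dt)):
        r = kmax * M * S / (Ks + S) * O / (Ko + O)  # dual-Monod rate
        S = max(S - r * dt, 0.0)                    # substrate degraded
        O = max(O - gamma * r * dt, 0.0)            # oxygen consumed
        M += Y * r * dt                             # biomass growth
    return S, O, M

S_end, O_end, M_end = simulate_monod()
```

With these made-up parameters oxygen becomes limiting before the substrate is exhausted, which mirrors the abstract's point that zero- and first-order lab rates overpredict field biodegradation when electron acceptor supply and growth are ignored.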

  10. Accurate, reliable prototype earth horizon sensor head

    NASA Technical Reports Server (NTRS)

    Schwarz, F.; Cohen, H.

    1973-01-01

    The design and performance of an accurate and reliable prototype earth sensor head (ARPESH) are described. The ARPESH employs a 'locator' detection-logic concept and horizon sensor mechanization that should lead to high-accuracy horizon sensing, minimally degraded by spatial or temporal variations in sensing attitude, from a satellite orbiting the earth at altitudes of about 500 km. An accuracy of horizon location to within 0.7 km has been predicted, independent of meteorological conditions; this corresponds to an error of 0.015 deg at 500 km altitude. Laboratory evaluation of the sensor indicates that this accuracy is achieved. First, the basic operating principles of ARPESH are described; next, detailed design and construction data are presented; finally, performance is reported under laboratory conditions in which the sensor, installed in a simulator, scans over a blackbody source against a background representing the earth-space interface for various equivalent planet temperatures.

  11. Geodetic analysis of disputed accurate qibla direction

    NASA Astrophysics Data System (ADS)

    Saksono, Tono; Fulazzaky, Mohamad Ali; Sari, Zamah

    2018-04-01

    Performing the prayers facing the correct qibla direction is one of the practical issues linking theoretical studies with religious practice. The concept of facing the Kaaba in Mecca during the prayers has long been a source of controversy among Muslim communities, not only in poor and developing countries but also in developed ones. The aim of this study was to analyse the geodetic azimuths of the qibla calculated using three different models of the Earth. The use of an ellipsoidal model of the Earth may be the best method for determining the accurate direction of the Kaaba from anywhere on the Earth's surface. A Muslim cannot orient himself towards the qibla correctly if he cannot see the Kaaba, and the setting-out process and certain motions during the prayer can significantly shift the direction faced away from the actual position of the Kaaba. The requirement that a Muslim pray facing the Kaaba is thus more a spiritual prerequisite than a matter of physical evidence.
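    For orientation, the simplest of the Earth models compared (a sphere) gives the qibla as the initial great-circle bearing toward the Kaaba; the ellipsoidal computation the study favours would use geodesic formulae (e.g. Vincenty's) instead. A sketch with approximate Kaaba coordinates:

```python
import math

# Spherical-Earth great-circle azimuth toward the Kaaba.  This is only the
# simplest of the Earth models discussed; an ellipsoidal model requires
# solving the inverse geodesic problem instead.
KAABA_LAT, KAABA_LON = 21.4225, 39.8262  # approximate, degrees

def qibla_azimuth(lat_deg, lon_deg):
    """Initial great-circle bearing to the Kaaba, degrees clockwise from north."""
    phi1 = math.radians(lat_deg)
    phi2 = math.radians(KAABA_LAT)
    dlon = math.radians(KAABA_LON - lon_deg)
    x = math.sin(dlon)
    y = math.cos(phi1) * math.tan(phi2) - math.sin(phi1) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360.0
```

    For example, from Jakarta (about 6.2°S, 106.8°E) this yields a bearing near 295°, i.e. west-northwest.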

  12. Plasma diagnosis as a tool for the determination of the parameters of electron beam evaporation and sources of ionization

    NASA Astrophysics Data System (ADS)

    Mukherjee, Jaya; Dileep Kumar, V.; Yadav, S. P.; Barnwal, Tripti A.; Dikshit, Biswaranjan

    2016-07-01

    The atomic vapor generated by electron beam heating is partially ionized due to atom-atom collisions (Saha ionization) and electron impact ionization, which depend upon the source temperature and the area of evaporation as compared to the area of electron beam bombardment on the target. When electron beam evaporation is carried out by inserting the target inside an insulating liner to reduce conductive heat loss, it is expected that the area of evaporation becomes significantly larger than the area of electron beam bombardment on the target, resulting in reduced electron impact ionization. To assess this effect and to quantify the parameters of evaporation, such as temperature and area of evaporation, we have carried out experiments using zirconium, tin and aluminum as targets. By measuring the ion content using a Langmuir probe, in addition to measuring the atomic vapor flux at a specific height, and by combining the experimental data with theoretical expressions, we have established a method for simultaneously inferring the source temperature, evaporation area and ion fraction. This assumes significance because the temperature cannot be reliably measured by an optical pyrometer due to the wavelength-dependent source emissivity and the reflectivity of thin film mirrors. In addition, it also cannot be inferred from the atomic flux data at a certain height alone, as the area of evaporation is unknown (it can be much larger than the area of electron bombardment, especially when the target is placed in a liner). Finally, the reason for the lower observed electron temperatures of the plasma in all three cases is found to be the energy loss due to electron impact excitation of the atomic vapor during its expansion from the source.
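    The Saha relation invoked above ties the thermal ionization fraction to vapor temperature and density. A schematic sketch, where the ionization energy, number density and statistical-weight ratio are illustrative placeholders rather than the paper's measured values:

```python
import math

# Schematic Saha-equation estimate of the thermal ionization fraction of a
# metal vapor.  Input values below are illustrative assumptions only.
K_B = 1.380649e-23    # Boltzmann constant, J/K
M_E = 9.1093837e-31   # electron mass, kg
H   = 6.62607015e-34  # Planck constant, J s
EV  = 1.602176634e-19 # J per eV

def saha_ion_fraction(T, n_total, E_ion_eV, g_ratio=1.0):
    """Solve x^2 / (1 - x) = S / n_total for the ionization fraction x,
    where S is the Saha factor (assumes n_e = n_i, single ionization)."""
    S = 2.0 * g_ratio * (2.0 * math.pi * M_E * K_B * T / H**2) ** 1.5 \
        * math.exp(-E_ion_eV * EV / (K_B * T))
    a = S / n_total
    # x^2 + a*x - a = 0, take the positive root
    return (-a + math.sqrt(a * a + 4.0 * a)) / 2.0
```

    The steep exponential dependence on temperature is what makes the inferred source temperature a sensitive diagnostic of the ion content.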

  13. Marine Controlled-Source Electromagnetic 2D Inversion for synthetic models.

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Li, Y.

    2016-12-01

    We present a 2D inversion algorithm for frequency-domain marine controlled-source electromagnetic (CSEM) data, based on the regularized Gauss-Newton approach. As a forward solver, our parallel adaptive finite element forward modeling program is employed. It is a self-adaptive, goal-oriented grid refinement algorithm in which a finite element analysis is performed on a sequence of refined meshes. The mesh refinement process is guided by a dual error estimate weighting that biases refinement towards elements that affect the solution at the EM receiver locations. With the use of a direct solver (MUMPS), we can efficiently compute the electromagnetic fields for multiple sources as well as the parametric sensitivities. We also implement the parallel data-domain decomposition approach of Key and Ovall (2011), with the goal of computing accurate responses in parallel for complicated models and the full suite of data parameters typical of offshore CSEM surveys. All minimizations are carried out using the Gauss-Newton algorithm, and the model perturbations at each iteration step are obtained using an inexact conjugate gradient method. Synthetic test inversions are presented.
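    The regularized Gauss-Newton update at the heart of such schemes can be illustrated on a toy two-parameter problem; the real inversion works on large mesh-based systems solved with inexact conjugate gradients, and the model, data and damping below are illustrative only:

```python
import math

# Toy damped Gauss-Newton iteration for the 2-parameter nonlinear
# least-squares fit y = a * exp(-b * x).  Each step solves the normal
# equations (J^T J + lam*I) dm = J^T r, here directly as a 2x2 system.
def gauss_newton(xs, ys, a=1.0, b=0.1, lam=1e-6, iters=20):
    for _ in range(iters):
        J, r = [], []
        for x, y in zip(xs, ys):
            e = math.exp(-b * x)
            r.append(y - a * e)
            J.append((e, -a * x * e))      # Jacobian row [dy/da, dy/db]
        g11 = sum(j[0] * j[0] for j in J) + lam
        g12 = sum(j[0] * j[1] for j in J)
        g22 = sum(j[1] * j[1] for j in J) + lam
        b1 = sum(j[0] * ri for j, ri in zip(J, r))
        b2 = sum(j[1] * ri for j, ri in zip(J, r))
        det = g11 * g22 - g12 * g12
        da = (g22 * b1 - g12 * b2) / det
        db = (g11 * b2 - g12 * b1) / det
        a, b = a + da, b + db
    return a, b
```

    In the full-scale inversion the same normal-equation structure appears, but the Jacobian is never formed explicitly and the damped system is solved iteratively.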

  14. A new approach to compute accurate velocity of meteors

    NASA Astrophysics Data System (ADS)

    Egal, Auriane; Gural, Peter; Vaubaillon, Jeremie; Colas, Francois; Thuillot, William

    2016-10-01

    The CABERNET project was designed to push the limits of meteoroid orbit measurements by improving the determination of meteor velocities. Indeed, despite the development of camera networks dedicated to the observation of meteors, there is still an important discrepancy between the measured orbits of meteoroids and theoretical results. The gap between the observed and theoretical semi-major axes of the orbits is especially significant; an accurate determination of meteoroid orbits therefore largely depends on the computation of the pre-atmospheric velocities. It is thus imperative to determine how to increase the precision of the velocity measurements. In this work, we analyse different methods currently used to compute the velocities and trajectories of meteors. They are based on the intersecting-planes method developed by Ceplecha (1987), the least-squares method of Borovicka (1990), and the multi-parameter fitting (MPF) method published by Gural (2012). In order to compare the performance of these techniques objectively, we have simulated realistic meteors ('fakeors') reproducing the measurement errors of many camera networks. Some fakeors are built following the propagation models studied by Gural (2012), and others are created by numerical integration using the Borovicka et al. (2007) model. Different optimization techniques have also been investigated in order to pick the one most suitable for solving the MPF, and the influence of the geometry of the trajectory on the result is also presented. We will present here the results of an improved implementation of the multi-parameter fitting that allows accurate orbit computation of meteors with CABERNET. The comparison of the different velocity computations suggests that while the MPF is by far the best method for solving the trajectory and velocity of a meteor, the ill-conditioning of the cost functions used can lead to large estimation errors for noisy
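    As a baseline for what all of these methods refine, the simplest velocity estimator is a straight-line least-squares fit of distance along the track versus time; the MPF instead fits all cameras' measurements jointly with a deceleration model. A sketch on synthetic data:

```python
# Baseline meteor speed estimate: least-squares slope of distance along the
# track vs. time.  Synthetic single-camera data; the MPF method discussed in
# the abstract fits all stations jointly with a propagation model instead.
def fit_speed(times, dists):
    """Least-squares slope (speed) of dists vs. times."""
    n = len(times)
    tm = sum(times) / n
    dm = sum(dists) / n
    num = sum((t - tm) * (d - dm) for t, d in zip(times, dists))
    den = sum((t - tm) ** 2 for t in times)
    return num / den  # km/s if dists are in km and times in s
```

    A constant-slope fit like this systematically underestimates the pre-atmospheric velocity when the meteor decelerates, which is precisely why model-based fits such as the MPF are needed.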

  15. Comparative Study of Light Sources for Household

    NASA Astrophysics Data System (ADS)

    Pawlak, Andrzej; Zalesińska, Małgorzata

    2017-03-01

    The article describes test results used to determine and evaluate the basic photometric, colorimetric and electric parameters of selected, widely available light sources that are marketed as equivalents of a traditional 60-watt incandescent light bulb. In total, one halogen light bulb, three compact fluorescent lamps and eleven LED light sources were tested. It was concluded that in most cases (branded products in particular) the measured and calculated parameters differ only slightly from the values declared by manufacturers. LED sources prove to be the most beneficial substitute for traditional light bulbs, considering both their operational parameters and their price, which is comparable to that of compact fluorescent lamps or, in some instances, even lower.

  16. The Source Parameters of Echolocation Clicks from Captive and Free-Ranging Yangtze Finless Porpoises (Neophocaena asiaeorientalis asiaeorientalis)

    PubMed Central

    Fang, Liang; Wang, Ding; Li, Yongtao; Cheng, Zhaolong; Pine, Matthew K.; Wang, Kexiong; Li, Songhai

    2015-01-01

    The clicks of Yangtze finless porpoises (Neophocaena asiaeorientalis asiaeorientalis) from 7 individuals in the tank of the Baiji aquarium, 2 individuals in a netted pen at Shishou Tian-e-zhou Reserve and 4 free-ranging individuals at Tianxingzhou were recorded using a broadband digital recording system with four-element hydrophones. The peak-to-peak apparent source level (ASL_pp) of clicks from individuals at the Baiji aquarium was 167 dB re 1 μPa, with a mean center frequency of 133 kHz, -3 dB bandwidth of 18 kHz and -10 dB duration of 58 μs. The ASL_pp of clicks from individuals at the Shishou Tian-e-zhou Reserve was 180 dB re 1 μPa, with a mean center frequency of 128 kHz, -3 dB bandwidth of 20 kHz and -10 dB duration of 39 μs. The ASL_pp of clicks from individuals at Tianxingzhou was 176 dB re 1 μPa, with a mean center frequency of 129 kHz, -3 dB bandwidth of 15 kHz and -10 dB duration of 48 μs. Differences between the source parameters of clicks among the three groups of finless porpoises suggest that these animals adapt their echolocation signals to their surroundings. PMID:26053758

  17. Accurate finite difference methods for time-harmonic wave propagation

    NASA Technical Reports Server (NTRS)

    Harari, Isaac; Turkel, Eli

    1994-01-01

    Finite difference methods for solving problems of time-harmonic acoustics are developed and analyzed. Multidimensional inhomogeneous problems with variable, possibly discontinuous, coefficients are considered, accounting for the effects of employing nonuniform grids. A weighted-average representation is less sensitive to transition in wave resolution (due to variable wave numbers or nonuniform grids) than the standard pointwise representation. Further enhancement in method performance is obtained by basing the stencils on generalizations of Padé approximation, or generalized definitions of the derivative, reducing spurious dispersion, anisotropy and reflection, and by improving the representation of source terms. The resulting schemes have fourth-order accurate local truncation error on uniform grids and third order in the nonuniform case. Guidelines for discretization pertaining to grid orientation and resolution are presented.

  18. Estimation of Temporal Gait Parameters Using a Human Body Electrostatic Sensing-Based Method.

    PubMed

    Li, Mengxuan; Li, Pengfei; Tian, Shanshan; Tang, Kai; Chen, Xi

    2018-05-28

    Accurate estimation of gait parameters is essential for obtaining quantitative information on motor deficits in Parkinson's disease and other neurodegenerative diseases, which helps determine disease progression and therapeutic interventions. Due to the demand for high accuracy, measurement methods such as optical motion capture systems, foot pressure plates, and other systems have been commonly used in clinical environments. However, the high cost of existing lab-based methods greatly hinders their wider usage, especially in developing countries. In this study, we present a low-cost, noncontact, and accurate method for estimating temporal gait parameters by sensing and analyzing the electrostatic field generated by human foot stepping. The proposed method achieved an average 97% accuracy in gait phase detection and was further validated by comparison with a foot pressure system in 10 healthy subjects. The two sets of results were compared using the Pearson coefficient r and showed excellent consistency (r = 0.99, p < 0.05). The between-day repeatability of the proposed method was assessed with intraclass correlation coefficients (ICC) and showed good test-retest reliability (ICC = 0.87, p < 0.01). The proposed method could be an affordable and accurate tool to measure temporal gait parameters in hospital laboratories and in patients' home environments.
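    The Pearson coefficient used for the between-system comparison is straightforward to compute; a minimal sketch (the numbers in the test are synthetic, not the study's gait data):

```python
import math

# Pearson correlation coefficient between two paired measurement series,
# as used to compare the electrostatic sensor against the foot pressure
# system.  Data passed in are assumed to be equal-length numeric lists.
def pearson_r(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

    Note that r measures linear association, not agreement in absolute terms, which is why the study also reports ICC for test-retest reliability.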

  19. Accuracy of self-reported survey data on assisted reproductive technology treatment parameters and reproductive history.

    PubMed

    Stern, Judy E; McLain, Alexander C; Buck Louis, Germaine M; Luke, Barbara; Yeung, Edwina H

    2016-08-01

    It is unknown whether data obtained from maternal self-report on assisted reproductive technology treatment parameters and reproductive history are accurate enough for use in research studies. We evaluated the accuracy of self-reported assisted reproductive technology treatment and reproductive history data from the Upstate KIDS study in comparison with clinical data reported to the Society for Assisted Reproductive Technology Clinic Outcome Reporting System. Upstate KIDS maternal questionnaire data from deliveries between 2008 and 2010 were linked to data reported to the Society for Assisted Reproductive Technology Clinic Outcome Reporting System. The 617 index deliveries were compared as to treatment type (frozen embryo transfer and donor egg or sperm) and use of intracytoplasmic sperm injection and assisted hatching. Use of injectable medications, self-reported assisted reproductive technology, or frozen embryo transfer prior to the index deliveries were also compared. We report agreement, in which both sources answered yes or both no, and sensitivity of maternal report, using the Society for Assisted Reproductive Technology Clinic Outcome Reporting System as the gold standard. Significance was determined using χ² at P < 0.05. Universal agreement was not reached on any parameter but was best for treatment type of frozen embryo transfer (agreement, 96%; sensitivity, 93%) and use of donor eggs (agreement, 97%; sensitivity, 82%) or sperm (agreement, 98%; sensitivity, 82%). Use of intracytoplasmic sperm injection (agreement, 78%; sensitivity, 78%) and assisted hatching (agreement, 57%; sensitivity, 38%) agreed less well with self-reported use (P < 0.0001). In vitro fertilization (agreement, 82%) and frozen embryo transfer (agreement, 90%) prior to the index delivery were more consistently reported than was use of injectable medication (agreement, 76%) (P < 0.0001). Women accurately report in vitro fertilization treatment but are less accurate about procedures handled in the
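    The agreement and sensitivity figures quoted above follow directly from a 2×2 cross-tabulation of self-report against the registry; a sketch with hypothetical counts (not the study's tallies):

```python
# Agreement and sensitivity from a 2x2 table of self-report vs. registry
# (gold standard).  Counts: tp = both yes, tn = both no,
# fp = report yes / registry no, fn = report no / registry yes.
def agreement(tp, fp, fn, tn):
    """Proportion of pairs where both sources agree (yes-yes or no-no)."""
    return (tp + tn) / (tp + fp + fn + tn)

def sensitivity(tp, fn):
    """Proportion of registry-confirmed cases that self-report captured."""
    return tp / (tp + fn)
```

    The contrast between the two metrics explains results like assisted hatching's 57% agreement with only 38% sensitivity: agreement is inflated by the many true negatives, while sensitivity isolates how often women reported a procedure that actually occurred.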

  20. Data format standard for sharing light source measurements

    NASA Astrophysics Data System (ADS)

    Gregory, G. Groot; Ashdown, Ian; Brandenburg, Willi; Chabaud, Dominique; Dross, Oliver; Gangadhara, Sanjay; Garcia, Kevin; Gauvin, Michael; Hansen, Dirk; Haraguchi, Kei; Hasna, Günther; Jiao, Jianzhong; Kelley, Ryan; Koshel, John; Muschaweck, Julius

    2013-09-01

    Optical design requires accurate characterization of light sources for computer aided design (CAD) software. Various methods have been used to model sources, from accurate physical models to measurement of light output. It has become common practice for designers to include measured source data for design simulations. Typically, a measured source will contain rays which sample the output distribution of the source. The ray data must then be exported to various formats suitable for import into optical analysis or design software. Source manufacturers are also making measurements of their products and supplying CAD models along with ray data sets for designers. The increasing availability of data has been beneficial to the design community but has caused a large expansion in storage needs for the source manufacturers since each software program uses a unique format to describe the source distribution. In 2012, the Illuminating Engineering Society (IES) formed a working group to understand the data requirements for ray data and recommend a standard file format. The working group included representatives from software companies supplying the analysis and design tools, source measurement companies providing metrology, source manufacturers creating the data and users from the design community. Within one year the working group proposed a file format which was recently approved by the IES for publication as TM-25. This paper will discuss the process used to define the proposed format, highlight some of the significant decisions leading to the format and list the data to be included in the first version of the standard.