Sample records for parameter calculation capability

  1. Analysis of the passive stabilization of the long duration exposure facility

    NASA Technical Reports Server (NTRS)

    Siegel, S. H.; Vishwanath, N. S.

    1977-01-01

    The nominal Long Duration Exposure Facility (LDEF) configurations and the anticipated orbit parameters are presented. A linear steady state analysis was performed using these parameters. The effects of orbit eccentricity, solar pressure, aerodynamic pressure, magnetic dipole, and the magnetically anchored rate damper were evaluated to determine the configuration sensitivity to variations in these parameters. The worst case conditions for steady state errors were identified, and the performance capability calculated. Garber instability bounds were evaluated for the range of configuration and damping coefficients under consideration. The transient damping capabilities of the damper were examined, and the time constant as a function of damping coefficient and spacecraft moment of inertia determined. The capture capabilities of the damper were calculated, and the results combined with steady state, transient, and Garber instability analyses to select damper design parameters.
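
    The abstract reports the transient time constant as a function of damping coefficient and spacecraft moment of inertia. As a minimal illustration only, assuming a first-order linear rate-damper model (I dω/dt = −c ω, so τ = I/c; the numerical values below are hypothetical, not LDEF design data):

      # First-order rate-damper time constant: tau = I / c (assumed model)
      def damping_time_constant(inertia, damping_coeff):
          """inertia in kg*m^2, damping coefficient c in N*m*s/rad; returns tau in s."""
          return inertia / damping_coeff

      I = 4.0e4                      # kg*m^2, hypothetical moment of inertia
      for c in (5.0, 10.0, 20.0):    # N*m*s/rad, hypothetical damping coefficients
          print(f"c = {c:5.1f} -> tau = {damping_time_constant(I, c) / 3600.0:.2f} h")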

  2. Comprehensive Three-Dimensional Analysis of the Neuroretinal Rim in Glaucoma Using High-Density Spectral-Domain Optical Coherence Tomography Volume Scans

    PubMed Central

    Tsikata, Edem; Lee, Ramon; Shieh, Eric; Simavli, Huseyin; Que, Christian J.; Guo, Rong; Khoueir, Ziad; de Boer, Johannes; Chen, Teresa C.

    2016-01-01

    Purpose: To describe spectral-domain optical coherence tomography (OCT) methods for quantifying neuroretinal rim tissue in glaucoma and to compare these methods to the traditional retinal nerve fiber layer thickness diagnostic parameter. Methods: Neuroretinal rim parameters derived from three-dimensional (3D) volume scans were compared with the two-dimensional (2D) Spectralis retinal nerve fiber layer (RNFL) thickness scans for diagnostic capability. This study analyzed one eye per patient of 104 glaucoma patients and 58 healthy subjects. The shortest distances between the cup surface and the OCT-based disc margin were automatically calculated to determine the thickness and area of the minimum distance band (MDB) neuroretinal rim parameter. Traditional 150-μm reference surface–based rim parameters (volume, area, and thickness) were also calculated. The diagnostic capabilities of these five parameters were compared with RNFL thickness using the area under the receiver operating characteristic (AUROC) curves. Results: The MDB thickness had significantly higher diagnostic capability than the RNFL thickness in the nasal (0.913 vs. 0.818, P = 0.004) and temporal (0.922 vs. 0.858, P = 0.026) quadrants and the inferonasal (0.950 vs. 0.897, P = 0.011) and superonasal (0.933 vs. 0.868, P = 0.012) sectors. The MDB area and the three neuroretinal rim parameters based on the 150-μm reference surface had diagnostic capabilities similar to RNFL thickness. Conclusions: The 3D MDB thickness had a high diagnostic capability for glaucoma and may be of significant clinical utility. It had higher diagnostic capability than the RNFL thickness in the nasal and temporal quadrants and the inferonasal and superonasal sectors. PMID:27768203

  3. Importance biasing scheme implemented in the PRIZMA code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kandiev, I.Z.; Malyshkin, G.N.

    1997-12-31

    The PRIZMA code is intended for Monte Carlo calculations of linear radiation transport problems. The code has wide capabilities to describe geometry, sources, and material composition, and to obtain parameters specified by the user. It can calculate paths of particle cascades (including neutrons, photons, electrons, positrons, and heavy charged particles), taking into account possible transmutations. An importance biasing scheme was implemented to solve problems that require calculation of functionals related to small probabilities (for example, problems of protection against radiation, problems of detection, etc.). The scheme makes it possible to adapt the trajectory-building algorithm to the peculiarities of the problem.
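
    Importance biasing schemes of the kind named here are generically built on particle splitting and Russian roulette at importance boundaries. A minimal sketch of that weight-conserving step (the interface below is illustrative, not PRIZMA's actual API):

      import random

      def importance_adjust(weight, imp_old, imp_new, rng=random):
          """Split or roulette a particle crossing from importance imp_old to imp_new.
          Returns the statistical weights of the copies that continue; the expected
          total weight is preserved in both branches."""
          ratio = imp_new / imp_old
          if ratio >= 1.0:
              n = int(ratio)                      # deterministic copies
              if rng.random() < ratio - n:        # play out the fractional part
                  n += 1
              return [weight / ratio] * n         # splitting
          if rng.random() < ratio:
              return [weight / ratio]             # survives Russian roulette
          return []                               # terminated

      # Particle of unit weight moving into a cell three times more important
      print(importance_adjust(1.0, imp_old=1.0, imp_new=3.0))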

  4. Practical calibration of design data to technical capabilities of horizontal directional drilling rig

    NASA Astrophysics Data System (ADS)

    Toropov, S. Yu; Toropov, V. S.

    2018-05-01

    In order to design trenchless pipeline passages more accurately, a technique has been developed for calculating the passage profile based on the specific parameters of the horizontal directional drilling (HDD) rig, including the range of possible drilling angles and the list of compatible drill pipe sets. The algorithm for calculating the parameters of the trenchless passage profile is presented in the paper. The algorithm is based on the features of HDD technology, namely its three distinct production stages. The authors take into account that the passage profile is formed at the first stage of construction, that is, when drilling the pilot well. The algorithm calculates the profile from the parameters of the drill pipes used and the angles of their deviation relative to each other during pilot drilling. This approach makes it possible to unambiguously calibrate the designed profile to the capabilities of the HDD rig and of the auxiliary and navigation equipment used in the construction process.

  5. Linking Combat Systems Capabilities and Ship Design Through Modeling and Computer Simulation

    DTIC Science & Technology

    2013-09-01

    [Table-of-contents snippet from the report: Overview of five-parameter method; Parameter 1: lift/drag (L/D) ratio (calculated value); Parameter 2: overall propulsion ...; Metric conversions (Jane's data); Decomposition: lift-to-drag ratio.]

  6. Quiet High Speed Fan (QHSF) Flutter Calculations Using the TURBO Code

    NASA Technical Reports Server (NTRS)

    Bakhle, Milind A.; Srivastava, Rakesh; Keith, Theo G., Jr.; Min, James B.; Mehmed, Oral

    2006-01-01

    A scale model of the NASA/Honeywell Engines Quiet High Speed Fan (QHSF) encountered flutter during wind tunnel testing. This report documents aeroelastic calculations done for the QHSF scale model using the blade vibration capability of the TURBO code. Calculations at design speed were used to quantify the effect of numerical parameters on the aerodynamic damping predictions. This numerical study allowed the selection of appropriate values of these parameters, and also allowed an assessment of the variability in the calculated aerodynamic damping. Calculations were also done at 90 percent of design speed. The predicted trends in aerodynamic damping corresponded to those observed during testing.

  7. Comparison Of A Neutron Kinetics Parameter For A Polyethylene Moderated Highly Enriched Uranium System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKenzie, IV, George Espy; Goda, Joetta Marie; Grove, Travis Justin

    This paper examines the capability of the MCNP® code to calculate kinetics parameters for a thermal system containing highly enriched uranium (HEU). The Rossi-α parameter was chosen for this examination because it is relatively easy to measure as well as easy to calculate using MCNP®'s kopts card. The Rossi-α also incorporates many other parameters of interest in nuclear kinetics, most of which are more difficult to measure precisely. The comparison to the experimental data considers two different nuclear data libraries, ENDF/B-VI (.66c) and ENDF/B-VII (.80c).
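
    In standard point kinetics the Rossi-α is α = (ρ − β_eff)/Λ, which reduces to −β_eff/Λ at delayed critical; this is why it bundles several kinetics parameters of interest into one measurable quantity. A small illustration (the numbers are assumed, not the paper's measured values):

      def rossi_alpha(beta_eff, gen_time, rho=0.0):
          """alpha = (rho - beta_eff) / Lambda; rho in dk/k, gen_time Lambda in s."""
          return (rho - beta_eff) / gen_time

      # Illustrative thermal-HEU-like values at delayed critical
      print(f"alpha = {rossi_alpha(beta_eff=0.0072, gen_time=4.0e-5):.0f} 1/s")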

  8. PYFLOW_2.0: a computer program for calculating flow properties and impact parameters of past dilute pyroclastic density currents based on field data

    NASA Astrophysics Data System (ADS)

    Dioguardi, Fabio; Mele, Daniela

    2018-03-01

    This paper presents PYFLOW_2.0, a hazard tool for the calculation of the impact parameters of dilute pyroclastic density currents (DPDCs). DPDCs represent the dilute turbulent type of gravity flows that occur during explosive volcanic eruptions; their hazard is the result of their mobility and the capability to laterally impact buildings and infrastructures and to transport variable amounts of volcanic ash along the path. Starting from data coming from the analysis of deposits formed by DPDCs, PYFLOW_2.0 calculates the flow properties (e.g., velocity, bulk density, thickness) and impact parameters (dynamic pressure, deposition time) at the location of the sampled outcrop. Given the inherent uncertainties related to sampling, laboratory analyses, and modeling assumptions, the program provides ranges of variations and probability density functions of the impact parameters rather than single specific values; from these functions, the user can interrogate the program to obtain the value of the computed impact parameter at any specified exceedance probability. In this paper, the sedimentological models implemented in PYFLOW_2.0 are presented, program functionalities are briefly introduced, and two application examples are discussed so as to show the capabilities of the software in quantifying the impact of the analyzed DPDCs in terms of dynamic pressure, volcanic ash concentration, and residence time in the atmosphere. The software and user's manual are made available as a downloadable electronic supplement.
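
    The interrogation described above amounts to inverting the cumulative distribution of an impact parameter at a requested exceedance probability. A minimal numerical sketch of that step (the Gaussian stand-in below is not one of PYFLOW_2.0's actual output distributions):

      import numpy as np

      def value_at_exceedance(x, pdf, p_exc):
          """Parameter value whose exceedance probability P(X > x) equals p_exc."""
          cdf = np.cumsum(pdf)
          cdf /= cdf[-1]                     # normalize to a proper CDF
          return np.interp(1.0 - p_exc, cdf, x)

      # Stand-in PDF: dynamic pressure ~ N(2.0, 0.5^2) kPa
      x = np.linspace(0.0, 5.0, 1001)
      pdf = np.exp(-0.5 * ((x - 2.0) / 0.5) ** 2)
      print(f"q at 5% exceedance: {value_at_exceedance(x, pdf, 0.05):.2f} kPa")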

  9. Assessment of narrow angles by gonioscopy, Van Herick method and anterior segment optical coherence tomography.

    PubMed

    Park, Seong Bae; Sung, Kyung Rim; Kang, Sung Yung; Jo, Jung Woo; Lee, Kyoung Sub; Kook, Michael S

    2011-07-01

    To evaluate anterior chamber (AC) angles using gonioscopy, the Van Herick technique, and anterior segment optical coherence tomography (AS-OCT). One hundred forty-eight consecutive subjects were enrolled. The agreement between any two of the three diagnostic methods, gonioscopy, AS-OCT, and Van Herick, was calculated in narrow-angle patients. The area under the receiver-operating characteristic curve (AUC) for discriminating between narrow and open angles determined by gonioscopy was calculated in all participants for the AS-OCT parameters angle opening distance (AOD), angle recess area, trabecular-iris surface area, and anterior chamber depth (ACD). As a subgroup analysis, the capability of AS-OCT parameters for detecting angle closure defined by AS-OCT was assessed in narrow-angle patients. The agreement between the Van Herick method and gonioscopy in detecting angle closure was excellent in narrow angles (κ = 0.80, temporal; κ = 0.82, nasal). However, agreement between gonioscopy and AS-OCT and between the Van Herick method and AS-OCT was poor (κ = 0.11-0.16). Discrimination capability of AS-OCT parameters between open and narrow angles determined by gonioscopy was excellent for all AS-OCT parameters (AUC, temporal: AOD500 = 0.96, nasal: AOD500 = 0.99). The AUCs for detecting angle closure defined by AS-OCT images in narrow-angle subjects were good for all AS-OCT parameters (AUC, 0.80-0.94) except for ACD (temporal: ACD = 0.70, nasal: ACD = 0.63). Assessment of narrow angles by gonioscopy and the Van Herick technique showed good agreement, but both measurements revealed poor agreement with AS-OCT. The angle closure detection capability of AS-OCT parameters was excellent; however, it was slightly lower for ACD.

  10. QMCPACK: an open source ab initio quantum Monte Carlo package for the electronic structure of atoms, molecules and solids

    NASA Astrophysics Data System (ADS)

    Kim, Jeongnim; Baczewski, Andrew D.; Beaudet, Todd D.; Benali, Anouar; Chandler Bennett, M.; Berrill, Mark A.; Blunt, Nick S.; Josué Landinez Borda, Edgar; Casula, Michele; Ceperley, David M.; Chiesa, Simone; Clark, Bryan K.; Clay, Raymond C., III; Delaney, Kris T.; Dewing, Mark; Esler, Kenneth P.; Hao, Hongxia; Heinonen, Olle; Kent, Paul R. C.; Krogel, Jaron T.; Kylänpää, Ilkka; Li, Ying Wai; Lopez, M. Graham; Luo, Ye; Malone, Fionn D.; Martin, Richard M.; Mathuriya, Amrita; McMinis, Jeremy; Melton, Cody A.; Mitas, Lubos; Morales, Miguel A.; Neuscamman, Eric; Parker, William D.; Pineda Flores, Sergio D.; Romero, Nichols A.; Rubenstein, Brenda M.; Shea, Jacqueline A. R.; Shin, Hyeondeok; Shulenburger, Luke; Tillack, Andreas F.; Townsend, Joshua P.; Tubman, Norm M.; Van Der Goetz, Brett; Vincent, Jordan E.; ChangMo Yang, D.; Yang, Yubo; Zhang, Shuai; Zhao, Luning

    2018-05-01

    QMCPACK is an open source quantum Monte Carlo package for ab initio electronic structure calculations. It supports calculations of metallic and insulating solids, molecules, atoms, and some model Hamiltonians. Implemented real space quantum Monte Carlo algorithms include variational, diffusion, and reptation Monte Carlo. QMCPACK uses Slater–Jastrow type trial wavefunctions in conjunction with a sophisticated optimizer capable of optimizing tens of thousands of parameters. The orbital space auxiliary-field quantum Monte Carlo method is also implemented, enabling cross validation between different highly accurate methods. The code is specifically optimized for calculations with large numbers of electrons on the latest high performance computing architectures, including multicore central processing unit and graphical processing unit systems. We detail the program’s capabilities, outline its structure, and give examples of its use in current research calculations. The package is available at http://qmcpack.org.

  11. QMCPACK: an open source ab initio quantum Monte Carlo package for the electronic structure of atoms, molecules and solids.

    PubMed

    Kim, Jeongnim; Baczewski, Andrew T; Beaudet, Todd D; Benali, Anouar; Bennett, M Chandler; Berrill, Mark A; Blunt, Nick S; Borda, Edgar Josué Landinez; Casula, Michele; Ceperley, David M; Chiesa, Simone; Clark, Bryan K; Clay, Raymond C; Delaney, Kris T; Dewing, Mark; Esler, Kenneth P; Hao, Hongxia; Heinonen, Olle; Kent, Paul R C; Krogel, Jaron T; Kylänpää, Ilkka; Li, Ying Wai; Lopez, M Graham; Luo, Ye; Malone, Fionn D; Martin, Richard M; Mathuriya, Amrita; McMinis, Jeremy; Melton, Cody A; Mitas, Lubos; Morales, Miguel A; Neuscamman, Eric; Parker, William D; Pineda Flores, Sergio D; Romero, Nichols A; Rubenstein, Brenda M; Shea, Jacqueline A R; Shin, Hyeondeok; Shulenburger, Luke; Tillack, Andreas F; Townsend, Joshua P; Tubman, Norm M; Van Der Goetz, Brett; Vincent, Jordan E; Yang, D ChangMo; Yang, Yubo; Zhang, Shuai; Zhao, Luning

    2018-05-16

    QMCPACK is an open source quantum Monte Carlo package for ab initio electronic structure calculations. It supports calculations of metallic and insulating solids, molecules, atoms, and some model Hamiltonians. Implemented real space quantum Monte Carlo algorithms include variational, diffusion, and reptation Monte Carlo. QMCPACK uses Slater-Jastrow type trial wavefunctions in conjunction with a sophisticated optimizer capable of optimizing tens of thousands of parameters. The orbital space auxiliary-field quantum Monte Carlo method is also implemented, enabling cross validation between different highly accurate methods. The code is specifically optimized for calculations with large numbers of electrons on the latest high performance computing architectures, including multicore central processing unit and graphical processing unit systems. We detail the program's capabilities, outline its structure, and give examples of its use in current research calculations. The package is available at http://qmcpack.org.

  12. A computer program incorporating Pitzer's equations for calculation of geochemical reactions in brines

    USGS Publications Warehouse

    Plummer, Niel; Parkhurst, D.L.; Fleming, G.W.; Dunkle, S.A.

    1988-01-01

    The program named PHRQPITZ is a computer code capable of making geochemical calculations in brines and other electrolyte solutions to high concentrations using the Pitzer virial-coefficient approach for activity-coefficient corrections. Reaction-modeling capabilities include calculation of (1) aqueous speciation and mineral-saturation index, (2) mineral solubility, (3) mixing and titration of aqueous solutions, (4) irreversible reactions and mineral-water mass transfer, and (5) reaction path. The computed results for each aqueous solution include the osmotic coefficient, water activity, mineral saturation indices, mean activity coefficients, total activity coefficients, and scale-dependent values of pH, individual-ion activities, and individual-ion activity coefficients. A data base of Pitzer interaction parameters is provided at 25 °C for the system Na-K-Mg-Ca-H-Cl-SO4-OH-HCO3-CO3-CO2-H2O, and extended to include largely untested literature data for Fe(II), Mn(II), Sr, Ba, Li, and Br, with provision for calculations at temperatures other than 25 °C. An extensive literature review of published Pitzer interaction parameters for many inorganic salts is given. Also described is an interactive input code for PHRQPITZ called PITZINPT. (USGS)
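
    Of the capabilities listed, the saturation-index calculation is the simplest to state: SI = log10(IAP/K), with the activities supplied by the Pitzer model. A minimal sketch (the gypsum activities and equilibrium constant below are assumed illustrative values, not program output):

      import math

      def saturation_index(ion_activity_product, k_eq):
          """SI = log10(IAP/K): 0 at equilibrium, >0 oversaturated, <0 undersaturated."""
          return math.log10(ion_activity_product / k_eq)

      # Gypsum, CaSO4.2H2O, with assumed activities a(Ca) = a(SO4) = 1.2e-2
      iap = 1.2e-2 * 1.2e-2          # water activity taken as 1 for brevity
      print(f"SI(gypsum) = {saturation_index(iap, k_eq=10**-4.58):.2f}")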

  13. Optical Imaging and Radiometric Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Ha, Kong Q.; Fitzmaurice, Michael W.; Moiser, Gary E.; Howard, Joseph M.; Le, Chi M.

    2010-01-01

    OPTOOL software is a general-purpose optical systems analysis tool that was developed to offer a solution to problems associated with computational programs written for the James Webb Space Telescope optical system. It integrates existing routines into coherent processes, and provides a structure with reusable capabilities that allow additional processes to be quickly developed and integrated. It has an extensive graphical user interface, which makes the tool more intuitive and friendly. OPTOOL is implemented using MATLAB with a Fourier optics-based approach for point spread function (PSF) calculations. It features parametric and Monte Carlo simulation capabilities, and uses a direct integration calculation to permit high spatial sampling of the PSF. Exit pupil optical path difference (OPD) maps can be generated using combinations of Zernike polynomials or shaped power spectral densities. The graphical user interface allows rapid creation of arbitrary pupil geometries, and entry of all other modeling parameters to support basic imaging and radiometric analyses. OPTOOL provides the capability to generate wavefront-error (WFE) maps for arbitrary grid sizes. These maps are 2D arrays containing digitally sampled versions of functions ranging from Zernike polynomials to combinations of sinusoidal wave functions in 2D, to functions generated from a spatial frequency power spectral distribution (PSD). It also can generate optical transfer functions (OTFs), which are incorporated into the PSF calculation. The user can specify radiometrics for the target and sky background, and key performance parameters for the instrument's focal plane array (FPA). This radiometric and detector model setup is fairly extensive, and includes parameters such as zodiacal background, thermal emission noise, read noise, and dark current. The setup also includes target spectral energy distribution as a function of wavelength for polychromatic sources, detector pixel size, and the FPA's charge diffusion modulation transfer function (MTF).
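
    The core of a Fourier-optics PSF calculation of the kind described is short: form the complex pupil function from the aperture and the OPD map, transform, and take the squared modulus. A minimal FFT-based sketch (OPTOOL itself uses direct integration for high spatial sampling; the aperture, defocus amplitude, and wavelength below are assumptions):

      import numpy as np

      n = 256
      y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
      r = np.hypot(x, y)
      pupil = (r <= 1.0).astype(float)                  # unobscured circular aperture

      # OPD map from a single Zernike term: defocus Z4 = sqrt(3)(2r^2 - 1), 50 nm RMS
      opd = 50e-9 * np.sqrt(3.0) * (2.0 * r**2 - 1.0) * pupil

      wavelength = 1.0e-6                               # meters
      field = pupil * np.exp(2j * np.pi * opd / wavelength)
      psf = np.abs(np.fft.fftshift(np.fft.fft2(field, s=(4*n, 4*n))))**2
      psf /= psf.sum()
      print(f"peak of normalized PSF: {psf.max():.3e}")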

  14. A Novel TRM Calculation Method by Probabilistic Concept

    NASA Astrophysics Data System (ADS)

    Audomvongseree, Kulyos; Yokoyama, Akihiko; Verma, Suresh Chand; Nakachi, Yoshiki

    In the new competitive environment, it has become possible for third parties to access transmission facilities. Within this structure, to efficiently manage the utilization of the transmission network, a new definition of Available Transfer Capability (ATC) has been proposed. According to the North American Electric Reliability Council (NERC) definition, ATC depends on several parameters, i.e., Total Transfer Capability (TTC), Transmission Reliability Margin (TRM), and Capacity Benefit Margin (CBM). This paper is focused on the calculation of TRM, which is one of the security margins reserved for uncertainty in system conditions. A TRM calculation by a probabilistic method is proposed in this paper. Based on the modeling of load forecast error and error in transmission line limitation, various cases of transmission transfer capability and its related probabilistic nature can be calculated. By applying the proposed concept of risk analysis, the appropriate required amount of TRM can be obtained. The objective of this research is to provide realistic information on the actual ability of the network, which may be an alternative choice for system operators to make an appropriate decision in the competitive market. The advantages of the proposed method are illustrated by application to the IEEJ-WEST10 model system.
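
    Under the NERC definition referenced above, ATC = TTC − TRM − CBM − existing transmission commitments, and the paper's contribution is obtaining TRM from a probabilistic model of load-forecast and line-limit errors. A minimal Monte Carlo sketch of that idea (all numbers and error models below are assumptions, not the paper's data):

      import numpy as np

      rng = np.random.default_rng(0)
      TTC, CBM, existing = 1000.0, 50.0, 600.0        # MW, illustrative

      # Assumed uncertainty models for load-forecast and line-limit errors
      n = 100_000
      load_err = rng.normal(0.0, 30.0, n)             # MW
      limit_err = rng.normal(0.0, 20.0, n)            # MW
      capability = TTC + limit_err - load_err

      # TRM sized so transfers remain feasible with 95% probability
      TRM = TTC - np.percentile(capability, 5.0)
      ATC = TTC - TRM - CBM - existing
      print(f"TRM = {TRM:.1f} MW, ATC = {ATC:.1f} MW")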

  15. A comparison of the calculated and experimental off-design performance of a radial flow turbine

    NASA Technical Reports Server (NTRS)

    Tirres, Lizet

    1992-01-01

    Off-design aerodynamic performance of the solid version of a cooled radial inflow turbine is analyzed. Rotor surface static pressure data and other performance parameters were obtained experimentally. Overall stage performance and turbine blade surface static-to-inlet total pressure ratios were calculated by using a quasi-three-dimensional inviscid code. The off-design prediction capability of this code for radial inflow turbines shows accurate static pressure prediction. Solutions show a difference of 3 to 5 points between the experimentally obtained efficiencies and the calculated values.

  16. A comparison of the calculated and experimental off-design performance of a radial flow turbine

    NASA Technical Reports Server (NTRS)

    Tirres, Lizet

    1991-01-01

    Off-design aerodynamic performance of the solid version of a cooled radial inflow turbine is analyzed. Rotor surface static pressure data and other performance parameters were obtained experimentally. Overall stage performance and turbine blade surface static-to-inlet total pressure ratios were calculated by using a quasi-three-dimensional inviscid code. The off-design prediction capability of this code for radial inflow turbines shows accurate static pressure prediction. Solutions show a difference of 3 to 5 points between the experimentally obtained efficiencies and the calculated values.

  17. Experimental investigation of effective parameters on signal enhancement in spark assisted laser induced breakdown spectroscopy

    NASA Astrophysics Data System (ADS)

    Hassanimatin, M. M.; Tavassoli, S. H.

    2018-05-01

    A combination of electrical spark and laser induced breakdown spectroscopy (LIBS), which is called spark assisted LIBS (SA-LIBS), has shown its capability in plasma spectral emission enhancement. The aim of this paper is a detailed study of plasma emission to determine the effect of plasma and experimental parameters on increasing the spectral signal. An enhancement ratio of SA-LIBS spectral lines compared with LIBS is theoretically introduced. The parameters affecting the spectral enhancement ratio including ablated mass, plasma temperature, the lifetime of neutral and ionic spectral lines, plasma volume, and electron density are experimentally investigated and discussed. By substitution of the effective parameters, the theoretical spectral enhancement ratio is calculated and compared with the experimental one. Two samples of granite as a dielectric and aluminum as a metal at different laser pulse energies are studied. There is a good agreement between the calculated and the experimental enhancement ratio.

  18. QMCPACK: an open source ab initio quantum Monte Carlo package for the electronic structure of atoms, molecules and solids

    DOE PAGES

    Kim, Jeongnim; Baczewski, Andrew T.; Beaudet, Todd D.; ...

    2018-04-19

    QMCPACK is an open source quantum Monte Carlo package for ab-initio electronic structure calculations. It supports calculations of metallic and insulating solids, molecules, atoms, and some model Hamiltonians. Implemented real space quantum Monte Carlo algorithms include variational, diffusion, and reptation Monte Carlo. QMCPACK uses Slater-Jastrow type trial wave functions in conjunction with a sophisticated optimizer capable of optimizing tens of thousands of parameters. The orbital space auxiliary field quantum Monte Carlo method is also implemented, enabling cross validation between different highly accurate methods. The code is specifically optimized for calculations with large numbers of electrons on the latest high performance computing architectures, including multicore central processing unit (CPU) and graphical processing unit (GPU) systems. We detail the program’s capabilities, outline its structure, and give examples of its use in current research calculations. The package is available at http://www.qmcpack.org.

  19. QMCPACK: an open source ab initio quantum Monte Carlo package for the electronic structure of atoms, molecules and solids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Jeongnim; Baczewski, Andrew T.; Beaudet, Todd D.

    QMCPACK is an open source quantum Monte Carlo package for ab-initio electronic structure calculations. It supports calculations of metallic and insulating solids, molecules, atoms, and some model Hamiltonians. Implemented real space quantum Monte Carlo algorithms include variational, diffusion, and reptation Monte Carlo. QMCPACK uses Slater-Jastrow type trial wave functions in conjunction with a sophisticated optimizer capable of optimizing tens of thousands of parameters. The orbital space auxiliary field quantum Monte Carlo method is also implemented, enabling cross validation between different highly accurate methods. The code is specifically optimized for calculations with large numbers of electrons on the latest high performance computing architectures, including multicore central processing unit (CPU) and graphical processing unit (GPU) systems. We detail the program’s capabilities, outline its structure, and give examples of its use in current research calculations. The package is available at http://www.qmcpack.org.

  20. Transport properties of mixtures by the soft-SAFT + free-volume theory: application to mixtures of n-alkanes and hydrofluorocarbons.

    PubMed

    Llovell, F; Marcos, R M; Vega, L F

    2013-05-02

    In a previous paper (Llovell et al. J. Phys. Chem. B, submitted for publication), the free-volume theory (FVT) was coupled with the soft-SAFT equation of state for the first time to extend the capabilities of the equation to the calculation of transport properties. The equation was tested with molecular simulations and applied to the family of n-alkanes. The capability of the soft-SAFT + FVT treatment is extended here to other chemical families and mixtures. The compositional rules of Wilke (Wilke, C. R. J. Chem. Phys. 1950, 18, 517-519) are used for the dilute-gas term of the viscosity, while the dense term is evaluated using very simple mixing rules to calculate the viscosity parameters. The theory is then used to predict the vapor-liquid equilibrium and the viscosity of mixtures of nonassociating and associating compounds. The approach is applied to determine the viscosity of a selected group of hydrofluorocarbons, in a similar manner as previously done for n-alkanes. The soft-SAFT molecular parameters are taken from a previous work, fitted to vapor-liquid equilibria experimental data. The application of FVT requires three additional parameters related to the viscosity of the pure fluid. Using a transferable approach, the α parameter is taken from the equivalent n-alkane, while the remaining two parameters B and Lv are fitted to viscosity data of the pure fluid at several isobars. The effect of these parameters is then investigated and compared to those obtained for n-alkanes, in order to better understand their effect on the calculations. Once the pure fluids are well characterized, the vapor-liquid equilibrium and the viscosity of nonassociating and associating mixtures, including n-alkane + n-alkane, hydrofluorocarbon + hydrofluorocarbon, and n-alkane + hydrofluorocarbon mixtures, are calculated. One or two binary parameters are used to account for deviations in the vapor-liquid equilibrium diagram for nonideal mixtures; these parameters are used in a transferable manner to predict the viscosity of the mixtures. Very good agreement with available experimental data is found in all cases, with an average absolute deviation ranging between 1.0% and 5.5%, even when the system presents azeotropy, reinforcing the robustness of the approach.
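
    Wilke's compositional rule cited above combines pure-component dilute-gas viscosities with mole fractions and molar masses. A minimal sketch of that rule alone (the methane/n-butane inputs are illustrative; soft-SAFT + FVT supplies the dense-term contribution separately):

      import numpy as np

      def wilke_viscosity(x, eta, M):
          """Dilute-gas mixture viscosity (Wilke, 1950).
          x: mole fractions; eta: pure viscosities (Pa*s); M: molar masses (g/mol)."""
          x, eta, M = map(np.asarray, (x, eta, M))
          n = len(x)
          phi = np.empty((n, n))
          for i in range(n):
              for j in range(n):
                  num = (1.0 + np.sqrt(eta[i] / eta[j]) * (M[j] / M[i]) ** 0.25) ** 2
                  phi[i, j] = num / np.sqrt(8.0 * (1.0 + M[i] / M[j]))
          return sum(x[i] * eta[i] / np.dot(phi[i], x) for i in range(n))

      # Equimolar methane / n-butane with assumed pure-gas viscosities
      print(wilke_viscosity([0.5, 0.5], [11.1e-6, 7.4e-6], [16.04, 58.12]))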

  1. World Wide Web-based system for the calculation of substituent parameters and substituent similarity searches.

    PubMed

    Ertl, P

    1998-02-01

    Easy to use, interactive, and platform-independent WWW-based tools are ideal for development of chemical applications. By using the newly emerging Web technologies such as Java applets and sophisticated scripting, it is possible to deliver powerful molecular processing capabilities directly to the desk of synthetic organic chemists. In Novartis Crop Protection in Basel, a Web-based molecular modelling system has been in use since 1995. In this article two new modules of this system are presented: a program for interactive calculation of important hydrophobic, electronic, and steric properties of organic substituents, and a module for substituent similarity searches enabling the identification of bioisosteric functional groups. Various possible applications of calculated substituent parameters are also discussed, including automatic design of molecules with the desired properties and creation of targeted virtual combinatorial libraries.

  2. Physics-based statistical model and simulation method of RF propagation in urban environments

    DOEpatents

    Pao, Hsueh-Yuan; Dvorak, Steven L.

    2010-09-14

    A physics-based statistical model and simulation/modeling method and system of electromagnetic wave propagation (wireless communication) in urban environments. In particular, the model is a computationally efficient closed-form parametric model of RF propagation in an urban environment which is extracted from a physics-based statistical wireless channel simulation method and system. The simulation divides the complex urban environment into a network of interconnected urban canyon waveguides which can be analyzed individually; calculates spectral coefficients of modal fields in the waveguides excited by the propagation using a database of statistical impedance boundary conditions which incorporates the complexity of building walls in the propagation model; determines statistical parameters of the calculated modal fields; and determines a parametric propagation model based on the statistical parameters of the calculated modal fields from which predictions of communications capability may be made.

  3. Toward Improved Force-Field Accuracy through Sensitivity Analysis of Host-Guest Binding Thermodynamics

    PubMed Central

    Yin, Jian; Fenley, Andrew T.; Henriksen, Niel M.; Gilson, Michael K.

    2015-01-01

    Improving the capability of atomistic computer models to predict the thermodynamics of noncovalent binding is critical for successful structure-based drug design, and the accuracy of such calculations remains limited by non-optimal force field parameters. Ideally, one would incorporate protein-ligand affinity data into force field parametrization, but this would be inefficient and costly. We now demonstrate that sensitivity analysis can be used to efficiently tune Lennard-Jones parameters of aqueous host-guest systems for increasingly accurate calculations of binding enthalpy. These results highlight the promise of a comprehensive use of calorimetric host-guest binding data, along with existing validation data sets, to improve force field parameters for the simulation of noncovalent binding, with the ultimate goal of making protein-ligand modeling more accurate and hence speeding drug discovery. PMID:26181208

  4. Spectrum Orbit Utilization Program documentation: SOUP5 version 3.8 user's manual, volume 1, chapters 1 through 5

    NASA Technical Reports Server (NTRS)

    Davidson, J.; Ottey, H. R.; Sawitz, P.; Zusman, F. S.

    1985-01-01

    The underlying engineering and mathematical models as well as the computational methods used by the Spectrum Orbit Utilization Program 5 (SOUP5) analysis programs are described. Included are the algorithms used to calculate the technical parameters, and references to the technical literature. The organization, capabilities, processing sequences, and processing and data options of the SOUP5 system are described. The details of the geometric calculations are given. Also discussed are the various antenna gain algorithms; rain attenuation and depolarization calculations; calculations of transmitter power and received power flux density; channelization options, interference categories, and protection ratio calculation; generation of aggregate interference and margins; equivalent gain calculations; and how to enter a protection ratio template.

  5. Minimum velocity necessary for nonconventional projectiles to penetrate the eye: an experimental study using pig eyes.

    PubMed

    Marshall, John W; Dahlstrom, Dean B; Powley, Kramer D

    2011-06-01

    To satisfy the Criminal Code of Canada's definition of a firearm, a barreled weapon must be capable of causing serious bodily injury or death to a person. Canadian courts have accepted the forensically established criterion of "penetration or rupture of an eye" as serious bodily injury. The minimum velocity required for nonconventional ammunition, including airsoft projectiles, to penetrate the eye has yet to be established. To establish minimum threshold requirements for eye penetration, empirical tests were conducted using a variety of airsoft projectiles. Using the data obtained from these tests, and previous research using "air gun" projectiles, an "energy density" parameter was calculated for the minimum penetration threshold of an eye. Airsoft guns capable of achieving velocities in excess of 99 m/s (325 ft/s) using conventional 6-mm airsoft ammunition will satisfy the forensically established criterion of "serious bodily injury." The energy density parameter for typical 6-mm plastic airsoft projectiles is 4.3 to 4.8 J/cm². This calculation also encompasses 4.5-mm steel BBs.
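
    The energy-density parameter quoted above is kinetic energy divided by the projectile's projected area. A minimal check of the arithmetic (the 0.25 g BB mass is an assumption; the abstract reports only the velocity and the resulting J/cm² range):

      import math

      def energy_density(mass_kg, velocity_ms, diameter_m):
          """Kinetic energy per projected area, in J/cm^2."""
          ke = 0.5 * mass_kg * velocity_ms**2                  # joules
          area_cm2 = math.pi * (diameter_m / 2.0)**2 * 1.0e4   # cm^2
          return ke / area_cm2

      # 6 mm airsoft BB at the reported 99 m/s threshold
      print(f"{energy_density(0.25e-3, 99.0, 6.0e-3):.2f} J/cm^2")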

  6. Fluidized bed regenerators for Brayton cycles

    NASA Technical Reports Server (NTRS)

    Nichols, L. D.

    1975-01-01

    A recuperator consisting of two fluidized bed regenerators with circulating solid particles is considered for use in a Brayton cycle. These fluidized beds offer the possibility of high temperature operation if ceramic particles are used. Calculations of the efficiency and size of fluidized bed regenerators for typical values of operating parameters were made and compared to a shell and tube recuperator. The calculations indicate that the fluidized beds will be more compact than the shell and tube as well as offering a high temperature operating capability.

  7. Network capability estimation. Vela network evaluation and automatic processing research. Technical report. [NETWORTH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snell, N.S.

    1976-09-24

    NETWORTH is a computer program which calculates the detection and location capability of seismic networks. A modified version of NETWORTH has been developed. This program has been used to evaluate the effect of station 'downtime', the signal amplitude variance, and the station detection threshold upon network detection capability. In this version all parameters may be changed separately for individual stations. The capability of using signal amplitude corrections has been added. The function of amplitude corrections is to remove possible bias in the magnitude estimate due to inhomogeneous signal attenuation. These corrections may be applied to individual stations, individual epicenters, or individual station/epicenter combinations. An option has been added to calculate the effect of station 'downtime' upon network capability. This study indicates that, if capability loss due to detection errors can be minimized, then station detection threshold and station reliability will be the fundamental limits to network performance. A baseline evaluation of a thirteen-station network has been performed. These stations are as follows: Alaskan Long Period Array, (ALPA); Ankara, (ANK); Chiang Mai, (CHG); Korean Seismic Research Station, (KSRS); Large Aperture Seismic Array, (LASA); Mashhad, (MSH); Mundaring, (MUN); Norwegian Seismic Array, (NORSAR); New Delhi, (NWDEL); Red Knife, Ontario, (RK-ON); Shillong, (SHL); Taipei, (TAP); and White Horse, Yukon, (WH-YK).
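
    A network-capability calculation of this kind combines per-station detection probabilities (degraded by downtime) into the probability that enough stations detect an event. A minimal sketch of that combination step (the station count, availabilities, and probabilities are assumptions, not NETWORTH's data or interface):

      def network_detection_prob(p_station, m_required):
          """P(at least m_required stations detect), stations independent.
          p_station: per-station detection probabilities, each already
          multiplied by the station's up-time availability."""
          dp = [1.0] + [0.0] * len(p_station)   # dp[k] = P(exactly k detections)
          for p in p_station:
              for k in range(len(dp) - 1, 0, -1):
                  dp[k] = dp[k] * (1.0 - p) + dp[k - 1] * p
              dp[0] *= 1.0 - p
          return sum(dp[m_required:])

      # Five stations at 90% availability with assumed detection probabilities
      probs = [0.9 * p for p in (0.95, 0.90, 0.80, 0.70, 0.60)]
      print(f"P(>=4 detect) = {network_detection_prob(probs, 4):.3f}")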

  8. REopt: A Platform for Energy System Integration and Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Katherine H.; Cutler, Dylan S.; Olis, Daniel R.

    REopt is a techno-economic decision support model used to optimize energy systems for buildings, campuses, communities, and microgrids. The primary application of the model is for optimizing the integration and operation of behind-the-meter energy assets. This report provides an overview of the model, including its capabilities and typical applications; inputs and outputs; economic calculations; technology descriptions; and model parameters, variables, and equations. The model is highly flexible, and is continually evolving to meet the needs of each analysis. Therefore, this report is not an exhaustive description of all capabilities, but rather a summary of the core components of the model.

  9. Forecasts of non-Gaussian parameter spaces using Box-Cox transformations

    NASA Astrophysics Data System (ADS)

    Joachimi, B.; Taylor, A. N.

    2011-09-01

    Forecasts of statistical constraints on model parameters using the Fisher matrix abound in many fields of astrophysics. The Fisher matrix formalism involves the assumption of Gaussianity in parameter space and hence fails to predict complex features of posterior probability distributions. Combining the standard Fisher matrix with Box-Cox transformations, we propose a novel method that accurately predicts arbitrary posterior shapes. The Box-Cox transformations are applied to parameter space to render it approximately multivariate Gaussian, performing the Fisher matrix calculation on the transformed parameters. We demonstrate that, after the Box-Cox parameters have been determined from an initial likelihood evaluation, the method correctly predicts changes in the posterior when varying various parameters of the experimental setup and the data analysis, with marginally higher computational cost than a standard Fisher matrix calculation. We apply the Box-Cox-Fisher formalism to forecast cosmological parameter constraints by future weak gravitational lensing surveys. The characteristic non-linear degeneracy between the matter density parameter and the normalization of matter density fluctuations is reproduced for several cases, and the capability of weak-lensing three-point statistics to break this degeneracy is investigated. Possible applications of Box-Cox transformations of posterior distributions are discussed, including the prospects for performing statistical data analysis steps in the transformed Gaussianized parameter space.
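
    The transformation at the heart of the method is the one-parameter Box-Cox map y = (x^λ − 1)/λ (log x for λ = 0), with λ chosen to Gaussianize the distribution before the Fisher calculation. A minimal one-dimensional sketch (the lognormal stand-in is not a real posterior):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      samples = rng.lognormal(mean=0.0, sigma=0.6, size=50_000)  # skewed stand-in

      # Fit lambda by maximum likelihood and transform
      transformed, lam = stats.boxcox(samples)
      print(f"lambda = {lam:.3f}")
      print(f"skewness before: {stats.skew(samples):.2f}, after: {stats.skew(transformed):.2f}")

      # The Gaussian Fisher forecast is then done in y; in 1-D the Fisher
      # information is simply the inverse variance of the Gaussianized variable.
      print(f"Fisher information in y: {1.0 / transformed.var():.3f}")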

  10. rFRET: A comprehensive, Matlab-based program for analyzing intensity-based ratiometric microscopic FRET experiments.

    PubMed

    Nagy, Peter; Szabó, Ágnes; Váradi, Tímea; Kovács, Tamás; Batta, Gyula; Szöllősi, János

    2016-04-01

    Fluorescence or Förster resonance energy transfer (FRET) remains one of the most widely used methods for assessing protein clustering and conformation. Although it is a method with solid physical foundations, many applications of FRET fall short of providing quantitative results due to inappropriate calibration and controls. This shortcoming is especially valid for microscopy where currently available tools have limited or no capability at all to display parameter distributions or to perform gating. Since users of multiparameter flow cytometry usually apply these tools, the absence of these features in applications developed for microscopic FRET analysis is a significant limitation. Therefore, we developed a graphical user interface-controlled Matlab application for the evaluation of ratiometric, intensity-based microscopic FRET measurements. The program can calculate all the necessary overspill and spectroscopic correction factors and the FRET efficiency and it displays the results on histograms and dot plots. Gating on plots and mask images can be used to limit the calculation to certain parts of the image. It is an important feature of the program that the calculated parameters can be determined by regression methods, maximum likelihood estimation (MLE) and from summed intensities in addition to pixel-by-pixel evaluation. The confidence interval of calculated parameters can be estimated using parameter simulations if the approximate average number of detected photons is known. The program is not only user-friendly, but it provides rich output, it gives the user freedom to choose from different calculation modes and it gives insight into the reliability and distribution of the calculated parameters.

  11. Calculated and measured brachytherapy dosimetry parameters in water for the Xoft Axxent X-Ray Source: an electronic brachytherapy source.

    PubMed

    Rivard, Mark J; Davis, Stephen D; DeWerd, Larry A; Rusch, Thomas W; Axelrod, Steve

    2006-11-01

    A new x-ray source, the model S700 Axxent X-Ray Source (Source), has been developed by Xoft Inc. for electronic brachytherapy. Unlike brachytherapy sources containing radionuclides, this Source may be turned on and off at will and may be operated at variable currents and voltages to change the dose rate and penetration properties. The in-water dosimetry parameters for this electronic brachytherapy source have been determined from measurements and calculations at 40, 45, and 50 kV settings. Monte Carlo simulations of radiation transport utilized the MCNP5 code and the EPDL97-based mcplib04 cross-section library. Inter-tube consistency was assessed for 20 different Sources, measured with a PTW 34013 ionization chamber. As the Source is intended to be used for a maximum of ten treatment fractions, tube stability was also assessed. Photon spectra were measured using a high-purity germanium (HPGe) detector, and calculated using MCNP. Parameters used in the two-dimensional (2D) brachytherapy dosimetry formalism were determined. While the Source was characterized as a point due to the small anode size, < 1 mm, use of the one-dimensional (1D) brachytherapy dosimetry formalism is not recommended due to polar anisotropy. Consequently, 1D brachytherapy dosimetry parameters were not sought. Calculated point-source model radial dose functions at gP(5) were 0.20, 0.24, and 0.29 for the 40, 45, and 50 kV voltage settings, respectively. For 1

  12. Theoretical Tools and Software for Modeling, Simulation and Control Design of Rocket Test Facilities

    NASA Technical Reports Server (NTRS)

    Richter, Hanz

    2004-01-01

    A rocket test stand and associated subsystems are complex devices whose operation requires that certain preparatory calculations be carried out before a test. In addition, real-time control calculations must be performed during the test, and further calculations are carried out after a test is completed. The latter may be required in order to evaluate if a particular test conformed to specifications. These calculations are used to set valve positions, pressure setpoints, control gains and other operating parameters so that a desired system behavior is obtained and the test can be successfully carried out. Currently, calculations are made in an ad-hoc fashion and involve trial-and-error procedures that may involve activating the system with the sole purpose of finding the correct parameter settings. The goals of this project are to develop mathematical models, control methodologies and associated simulation environments to provide a systematic and comprehensive prediction and real-time control capability. The models and controller designs are expected to be useful in two respects: 1) As a design tool, a model is the only way to determine the effects of design choices without building a prototype, which is, in the context of rocket test stands, impracticable; 2) As a prediction and tuning tool, a good model allows to set system parameters off-line, so that the expected system response conforms to specifications. This includes the setting of physical parameters, such as valve positions, and the configuration and tuning of any feedback controllers in the loop.

  13. CHEETAH: A fast thermochemical code for detonation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fried, L.E.

    1993-11-01

    For more than 20 years, TIGER has been the benchmark thermochemical code in the energetic materials community. TIGER has been widely used because it gives good detonation parameters in a very short period of time. Despite its success, TIGER is beginning to show its age. The program's chemical equilibrium solver frequently crashes, especially when dealing with many chemical species. It often fails to find the C-J point. Finally, there are many inconveniences for the user stemming from the program's roots in pre-modern FORTRAN. These inconveniences often lead to mistakes in preparing input files and thus erroneous results. We are producing a modern version of TIGER, which combines the best features of the old program with new capabilities, better computational algorithms, and improved packaging. The new code, which will evolve out of TIGER in the next few years, will be called "CHEETAH." Many of the capabilities that will be put into CHEETAH are inspired by the thermochemical code CHEQ. The new capabilities of CHEETAH are: calculation of trace levels of chemical compounds for environmental analysis; a kinetics capability, whereby CHEETAH will predict chemical compositions as a function of time given individual chemical reaction rates (initial application: carbon condensation); incorporation of partial reactions; and computer-optimized JCZ3 and BKW parameters. These parameters will be fit to over 20 years of data collected at LLNL. We will run CHEETAH thousands of times to determine the best possible parameter sets. CHEETAH will fit C-J data to JWLs, and also predict full-wall and half-wall cylinder velocities.

  14. User's guide to PHREEQC (Version 2) : a computer program for speciation, batch-reaction, one-dimensional transport, and inverse geochemical calculations

    USGS Publications Warehouse

    Parkhurst, David L.; Appelo, C.A.J.

    1999-01-01

    PHREEQC version 2 is a computer program written in the C programming language that is designed to perform a wide variety of low-temperature aqueous geochemical calculations. PHREEQC is based on an ion-association aqueous model and has capabilities for (1) speciation and saturation-index calculations; (2) batch-reaction and one-dimensional (1D) transport calculations involving reversible reactions, which include aqueous, mineral, gas, solid-solution, surface-complexation, and ion-exchange equilibria, and irreversible reactions, which include specified mole transfers of reactants, kinetically controlled reactions, mixing of solutions, and temperature changes; and (3) inverse modeling, which finds sets of mineral and gas mole transfers that account for differences in composition between waters, within specified compositional uncertainty limits.New features in PHREEQC version 2 relative to version 1 include capabilities to simulate dispersion (or diffusion) and stagnant zones in 1D-transport calculations, to model kinetic reactions with user-defined rate expressions, to model the formation or dissolution of ideal, multicomponent or nonideal, binary solid solutions, to model fixed-volume gas phases in addition to fixed-pressure gas phases, to allow the number of surface or exchange sites to vary with the dissolution or precipitation of minerals or kinetic reactants, to include isotope mole balances in inverse modeling calculations, to automatically use multiple sets of convergence parameters, to print user-defined quantities to the primary output file and (or) to a file suitable for importation into a spreadsheet, and to define solution compositions in a format more compatible with spreadsheet programs. This report presents the equations that are the basis for chemical equilibrium, kinetic, transport, and inverse-modeling calculations in PHREEQC; describes the input for the program; and presents examples that demonstrate most of the program's capabilities.

  15. Solid Aerosol Program

    DTIC Science & Technology

    1993-04-01

    ...parameter sensitivity studies, and test procedure design. An experimental system providing real data on the parameters relevant to the calculations has been...experimental program was designed to exploit as much of the existing capabilities of the Ventilation Kinetics group as possible while keeping in mind

  16. A separate universe view of the asymmetric sky

    NASA Astrophysics Data System (ADS)

    Kobayashi, Takeshi; Cortês, Marina; Liddle, Andrew R.

    2015-05-01

    We provide a unified description of the hemispherical asymmetry in the cosmic microwave background generated by the mechanism proposed by Erickcek, Kamionkowski, and Carroll, using a δN formalism that consistently accounts for the asymmetry-generating mode throughout. We derive a general form for the power spectrum which explicitly exhibits the broken translational invariance. This can be directly compared to cosmic microwave background observables, including the observed quadrupole and fNL values, automatically incorporating the Grishchuk-Zel'dovich effect. Our calculation unifies and extends previous calculations in the literature, in particular giving the full dependence of observables on the phase of our location in the super-horizon mode that generates the asymmetry. We demonstrate how the apparently different results obtained by previous authors arise as different limiting cases. We confirm the existence of non-linear contributions to the microwave background quadrupole from the super-horizon mode identified by Erickcek et al. and further explored by Kanno et al., and show that those contributions are always significant in parameter regimes capable of explaining the observed asymmetry. We indicate example parameter values capable of explaining the observed power asymmetry without violating other observational bounds.

  17. Kinetic Monte Carlo Study of Li Intercalation in LiFePO4.

    PubMed

    Xiao, Penghao; Henkelman, Graeme

    2018-01-23

    Even as a commercial cathode material, LiFePO4 remains of tremendous research interest for understanding Li intercalation dynamics. The partially lithiated material spontaneously separates into Li-poor and Li-rich phases at equilibrium. Phase segregation is a surprising property of LiFePO4 given its high measured rate capability. Previous theoretical studies, aiming to describe Li intercalation in LiFePO4, include both atomic-scale density functional theory (DFT) calculations of static Li distributions and entire-particle-scale phase field models, based upon empirical parameters, studying the dynamics of the phase separation. Little effort has been made to bridge the gap between these two scales. In this work, DFT calculations are used to fit a cluster expansion for the basis of kinetic Monte Carlo calculations, which enables long time scale simulations with accurate atomic interactions. This atomistic model shows how the phases evolve in LixFePO4 without parameters from experiments. Our simulations reveal that an ordered Li0.5FePO4 phase with alternating Li-rich and Li-poor planes along the ac direction forms between the LiFePO4 and FePO4 phases, which is consistent with recent X-ray diffraction experiments showing peaks associated with an intermediate-Li phase. The calculations also help to explain a recent puzzling experiment showing that LiFePO4 particles with high aspect ratios that are narrower along the [100] direction, perpendicular to the [010] Li diffusion channels, actually have better rate capabilities. Our calculations show that lateral surfaces parallel to the Li diffusion channels, as well as other preexisting sites that bind Li weakly, are important for phase nucleation and rapid cycling performance.
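
    The kinetic Monte Carlo engine underneath such a study is compact: each possible Li hop has a rate, one event is chosen with probability proportional to its rate, and the clock advances by an exponentially distributed increment. A minimal rejection-free (BKL-style) step is sketched below (the barriers and attempt frequency are assumed placeholders, not the paper's cluster-expansion values):

      import numpy as np

      rng = np.random.default_rng(2)

      def kmc_step(rates):
          """One rejection-free KMC step; returns (event index, time increment)."""
          total = rates.sum()
          event = np.searchsorted(np.cumsum(rates), rng.random() * total)
          dt = -np.log(rng.random()) / total
          return event, dt

      # Rates r_i = nu0 * exp(-Ea_i / kT) for three candidate hops
      kT, nu0 = 0.025, 1.0e13                      # eV and 1/s (assumed)
      rates = nu0 * np.exp(-np.array([0.27, 0.35, 0.50]) / kT)
      print(kmc_step(rates))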

  18. Content dependent selection of image enhancement parameters for mobile displays

    NASA Astrophysics Data System (ADS)

    Lee, Yoon-Gyoo; Kang, Yoo-Jin; Kim, Han-Eol; Kim, Ka-Hee; Kim, Choon-Woo

    2011-01-01

    Mobile devices such as cellular phones and portable multimedia players capable of playing terrestrial digital multimedia broadcasting (T-DMB) content have been introduced into the consumer market. In this paper, a content-dependent image quality enhancement method for sharpness, colorfulness, and noise reduction is presented to improve perceived image quality on mobile displays. Human visual experiments are performed to analyze viewers' preference. The relationship between the objective measures and the optimal values of the image control parameters is modeled by simple lookup tables based on the results of the human visual experiments. Content-dependent values of the image control parameters are determined based on the calculated measures and the predetermined lookup tables. Experimental results indicate that dynamic selection of image control parameters yields better image quality.

  19. On the Calculation of Uncertainty Statistics with Error Bounds for CFD Calculations Containing Random Parameters and Fields

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2016-01-01

    This chapter discusses the ongoing development of combined uncertainty and error bound estimates for computational fluid dynamics (CFD) calculations subject to imposed random parameters and random fields. An objective of this work is the construction of computable error bound formulas for output uncertainty statistics that guide CFD practitioners in systematically determining how accurately CFD realizations should be approximated and how accurately uncertainty statistics should be approximated for output quantities of interest. Formal error bounds formulas for moment statistics that properly account for the presence of numerical errors in CFD calculations and numerical quadrature errors in the calculation of moment statistics have been previously presented in [8]. In this past work, hierarchical node-nested dense and sparse tensor product quadratures are used to calculate moment statistics integrals. In the present work, a framework has been developed that exploits the hierarchical structure of these quadratures in order to simplify the calculation of an estimate of the quadrature error needed in error bound formulas. When signed estimates of realization error are available, this signed error may also be used to estimate output quantity of interest probability densities as a means to assess the impact of realization error on these density estimates. Numerical results are presented for CFD problems with uncertainty to demonstrate the capabilities of this framework.
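
    The hierarchical structure being exploited can be illustrated in one dimension: evaluate the moment statistic on nested quadrature grids and take the difference between the two finest levels as a computable estimate of the quadrature error. A simplified nested-trapezoid stand-in for the dense and sparse tensor-product quadratures of the chapter:

      import numpy as np

      def mean_with_error_estimate(f, n_levels=4):
          """Mean of f(x), x ~ U(0,1), on nested grids; returns (mean, error est.)."""
          estimates = []
          for level in range(n_levels):
              x = np.linspace(0.0, 1.0, 2 ** (level + 3) + 1)   # nested grids
              estimates.append(np.trapz(f(x), x))
          return estimates[-1], abs(estimates[-1] - estimates[-2])

      # Toy output quantity of interest as a function of one random parameter
      mean, err = mean_with_error_estimate(lambda x: np.sin(2.5 * x) + 0.1 * x**2)
      print(f"mean = {mean:.6f} +/- {err:.2e} (quadrature error estimate)")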

  20. Searching the Force Field Electrostatic Multipole Parameter Space.

    PubMed

    Jakobsen, Sofie; Jensen, Frank

    2016-04-12

    We show by tensor decomposition analyses that the molecular electrostatic potential for amino acid peptide models has an effective rank less than twice the number of atoms. This rank indicates the number of parameters that can be derived from the electrostatic potential in a statistically significant way. Using this as a guideline, we investigate different strategies for deriving a reduced set of atomic charges, dipoles, and quadrupoles capable of reproducing the reference electrostatic potential with a low error. A full combinatorial search of selected parameter subspaces for N-methylacetamide and a cysteine peptide model indicates that there are many different parameter sets capable of providing errors close to that of the global minimum. Among the different reduced multipole parameter sets that have low errors, there is consensus that atoms involved in π-bonding require higher order multipole moments. The possible correlation between multipole parameters is investigated by exhaustive searches of combinations of up to four parameters distributed in all possible ways on all possible atomic sites. These analyses show that there is no advantage in considering combinations of multipoles compared to a simple approach where the importance of each multipole moment is evaluated sequentially. When combined with possible weighting factors related to the computational efficiency of each type of multipole moment, this may provide a systematic strategy for determining a computational efficient representation of the electrostatic component in force field calculations.

  1. Förster resonance energy transfer, absorption and emission spectra in multichromophoric systems. III. Exact stochastic path integral evaluation.

    PubMed

    Moix, Jeremy M; Ma, Jian; Cao, Jianshu

    2015-03-07

    A numerically exact path integral treatment of the absorption and emission spectra of open quantum systems is presented that requires only the straightforward solution of a stochastic differential equation. The approach converges rapidly, enabling the calculation of spectra of large excitonic systems across the complete range of system parameters and for arbitrary bath spectral densities. With the numerically exact absorption and emission operators, one can also immediately compute energy transfer rates using the multichromophoric Förster resonance energy transfer formalism. Benchmark calculations on the emission spectra of two-level systems are presented, demonstrating the efficacy of the stochastic approach. This is followed by calculations of the energy transfer rates between two weakly coupled dimer systems as a function of temperature and system-bath coupling strength. It is shown that the recently developed hybrid cumulant expansion (see Paper II) is the only perturbative method capable of generating uniformly reliable energy transfer rates and emission spectra across a broad range of system parameters.

  2. A Backscatter-Lidar Forward-Operator

    NASA Astrophysics Data System (ADS)

    Geisinger, Armin; Behrendt, Andreas; Wulfmeyer, Volker; Vogel, Bernhard; Mattis, Ina; Flentje, Harald; Förstner, Jochen; Potthast, Roland

    2015-04-01

    We have developed a forward operator which is capable of calculating virtual lidar profiles from atmospheric state simulations. The operator allows us to compare lidar measurements and model simulations in terms of the same measured quantity: the lidar backscatter profile. This simplifies qualitative comparisons and also makes quantitative comparisons possible, including statistical error quantification. Implemented into an aerosol-capable model system, the operator will act as a component to assimilate backscatter-lidar measurements. As many weather services already maintain networks of backscatter lidars, such data are already acquired in an operational manner. To estimate and quantify errors due to missing or uncertain aerosol information, we have started sensitivity studies on several scattering parameters, such as the aerosol size and both the real and imaginary parts of the complex index of refraction. Furthermore, quantitative and statistical comparisons between measurements and virtual measurements are shown in this study, i.e., obtained by applying the backscatter-lidar forward operator to model output.
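
    The heart of such an operator is the mapping from model profiles of backscatter and extinction to the attenuated backscatter a lidar actually measures. The following is a minimal sketch of that mapping only, assuming the model already supplies backscatter and extinction profiles and using an assumed constant lidar ratio; the operator's real Mie-scattering treatment of aerosol size and refractive index is not reproduced here.

    ```python
    import numpy as np

    def attenuated_backscatter(z, beta, alpha):
        """Virtual lidar profile from model profiles.

        z     : height grid [m]
        beta  : total (aerosol + molecular) backscatter coefficient [1/(m sr)]
        alpha : total extinction coefficient [1/m]
        Returns beta_att(z) = beta(z) * exp(-2 * integral_0^z alpha dz').
        """
        dz = np.diff(z, prepend=z[0])
        tau = np.cumsum(alpha * dz)        # one-way optical depth
        return beta * np.exp(-2.0 * tau)

    # Illustrative model state: exponentially decaying aerosol layer
    z = np.linspace(0.0, 10_000.0, 200)
    beta = 1e-6 * np.exp(-z / 2000.0)      # assumed profile [1/(m sr)]
    alpha = 50.0 * beta                    # assumed lidar ratio of 50 sr
    print(attenuated_backscatter(z, beta, alpha)[:3])
    ```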

  3. SU-E-T-113: Dose Distribution Using Respiratory Signals and Machine Parameters During Treatment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Imae, T; Haga, A; Saotome, N

    Purpose: Volumetric modulated arc therapy (VMAT) is a rotational intensity-modulated radiotherapy (IMRT) technique capable of acquiring projection images during treatment. Treatment plans for lung tumors using stereotactic body radiotherapy (SBRT) are calculated with planning computed tomography (CT) images of the exhale phase only. The purpose of this study is to evaluate dose distributions reconstructed from only the data acquired during treatment, namely respiratory signals and machine parameters. Methods: A phantom and three patients with lung tumors underwent CT scans for treatment planning. They were treated by VMAT while projection images were acquired to derive their respiratory signals and machine parameters, including positions of multi-leaf collimators, dose rates, and integrated monitor units. The respiratory signals were divided into 4 and 10 phases, and the machine parameters were correlated with the divided respiratory signals based on the gantry angle. Dose distributions of each respiratory phase were calculated from plans reconstructed from the respiratory signals and the machine parameters acquired during treatment. The doses at the isocenter, the maximum point, and the centroid of the target were evaluated. Results and Discussion: Dose distributions during treatment were calculated using the machine parameters and the respiratory signals detected from projection images. The maximum dose difference between the planned and in-treatment distributions was −1.8±0.4% at the centroid of the target, and dose differences at the evaluated points between 4 and 10 phases were not significant. Conclusion: The present method successfully evaluated dose distribution using respiratory signals and machine parameters acquired during treatment. This method is feasible for verifying the actual dose to a moving target.

  4. Analytical calculation on the determination of steep side wall angles from far field measurements

    NASA Astrophysics Data System (ADS)

    Cisotto, Luca; Pereira, Silvania F.; Urbach, H. Paul

    2018-06-01

    In the semiconductor industry, the performance and capabilities of the lithographic process are evaluated by measuring specific structures. These structures are often gratings of which the shape is described by a few parameters such as period, middle critical dimension, height, and side wall angle (SWA). Upon direct measurement or retrieval of these parameters, the determination of the SWA suffers from considerable inaccuracies. Although the scattering effects that steep SWAs have on the illumination can be obtained with rigorous numerical simulations, analytical models constitute a very useful tool to get insights into the problem we are treating. In this paper, we develop an approach based on analytical calculations to describe the scattering of a cliff and a ridge with steep SWAs. We also propose a detection system to determine the SWAs of the structures.

  5. Density functional theory of freezing of a system of highly elongated ellipsoidal oligomer solutions

    NASA Astrophysics Data System (ADS)

    Dwivedi, Shikha; Mishra, Pankaj

    2017-05-01

    We have used the density functional theory of freezing to study the liquid crystalline phase behavior of a system of highly elongated ellipsoidal conjugated oligomers dispersed in three different solvents, namely chloroform, toluene, and their equimolar mixture. The molecules are assumed to interact via a solvent-implicit coarse-grained Gay-Berne potential. The pair correlation functions needed as input to the density functional theory have been calculated using the Percus-Yevick (PY) integral equation theory. Considering the isotropic and nematic phases, we have calculated the isotropic-nematic phase transition parameters and presented the temperature-density and pressure-temperature phase diagrams. Different solvent conditions are found not only to affect the transition parameters but also to determine the capability of the oligomers to form a nematic phase under various thermodynamic conditions. In principle, our results are verifiable through computer simulations.

  6. SU-E-J-199: A Software Tool for Quality Assurance of Online Replanning with MR-Linac

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, G; Ahunbay, E; Li, X

    2015-06-15

    Purpose: To develop a quality assurance software tool, ArtQA, capable of automatically checking radiation treatment plan parameters, verifying plan data transfer from the treatment planning system (TPS) to the record-and-verify (R&V) system, performing a secondary MU calculation considering the effect of the magnetic field from the MR-Linac, and verifying delivery and plan consistency, for online replanning. Methods: ArtQA was developed by creating interfaces to the TPS (e.g., Monaco, Elekta), the R&V system (Mosaiq, Elekta), and a secondary MU calculation system. The tool obtains plan parameters from the TPS via direct file reading, and retrieves plan data both transferred from the TPS and recorded during the actual delivery in the R&V system database via open database connectivity and structured query language. By comparing beam/plan datasets in the different systems, ArtQA detects and outputs discrepancies between the TPS, the R&V system, the secondary MU calculation system, and the delivery. To consider the effect of the 1.5 T transverse magnetic field from the MR-Linac in the secondary MU calculation, a method based on a modified Clarkson integration algorithm was developed and tested for a series of clinical situations. Results: ArtQA is capable of automatically checking plan integrity and logic consistency, detecting plan data transfer errors, performing secondary MU calculations with or without a transverse magnetic field, and verifying treatment delivery. The tool is efficient and effective for pre- and post-treatment QA checks of all available treatment parameters that may be impractical with the commonly used visual inspection. Conclusion: The software tool ArtQA can be used for quick and automatic pre- and post-treatment QA checks, eliminating human error associated with visual inspection. While this tool was developed for online replanning on the MR-Linac, where QA needs to be performed rapidly while the patient is lying on the table waiting for treatment, ArtQA can be used as a general QA tool in radiation oncology practice. This work is partially supported by Elekta Inc.

  7. A graphical user interface (GUI) toolkit for the calculation of three-dimensional (3D) multi-phase biological effective dose (BED) distributions including statistical analyses.

    PubMed

    Kauweloa, Kevin I; Gutierrez, Alonso N; Stathakis, Sotirios; Papanikolaou, Niko; Mavroidis, Panayiotis

    2016-07-01

    A toolkit has been developed for calculating the 3-dimensional biological effective dose (BED) distributions in multi-phase, external beam radiotherapy treatments such as those applied in liver stereotactic body radiation therapy (SBRT) and in multi-prescription treatments. The toolkit also provides a wide range of statistical results related to dose and BED distributions. MATLAB 2010a (version 7.10) was used to create this GUI toolkit. The input data consist of the dose distribution matrices, organ contour coordinates, and treatment planning parameters from the treatment planning system (TPS). The toolkit has the capability of calculating the multi-phase BED distributions using different formulas (denoted as true and approximate). Following the calculation of the BED distributions, the dose and BED distributions can be viewed in different projections (e.g., coronal, sagittal, and transverse). The different elements of this toolkit are presented and the important steps for the execution of its calculations are illustrated. The toolkit is applied to brain, head & neck, and prostate cancer patients who received primary and boost phases, in order to demonstrate its capability in calculating BED distributions, as well as in measuring the inaccuracy and imprecision of the approximate BED distributions. Finally, the clinical situations in which the use of the present toolkit would have a significant clinical impact are indicated.
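
    For orientation, multi-phase BED calculation reduces, in its standard linear-quadratic form, to computing each phase's voxel-wise BED and summing. The sketch below shows exactly that; the toolkit's "true" and "approximate" formulas are not reproduced here, and all doses, fraction numbers, and the alpha/beta value are illustrative.

    ```python
    import numpy as np

    def phase_bed(dose_total: np.ndarray, n_fractions: int, alpha_beta: float) -> np.ndarray:
        """Voxel-wise BED of one phase delivered in n equal fractions:
        BED = D * (1 + (D/n) / (alpha/beta)), D = phase total dose per voxel."""
        d = dose_total / n_fractions       # dose per fraction per voxel
        return dose_total * (1.0 + d / alpha_beta)

    # Two-phase example (primary + boost) on a toy 3-voxel distribution
    primary = np.array([45.0, 44.0, 30.0])   # Gy over 25 fractions
    boost   = np.array([20.0, 18.0,  5.0])   # Gy over 10 fractions
    alpha_beta = 10.0                         # Gy, assumed tumor value

    total_bed = phase_bed(primary, 25, alpha_beta) + phase_bed(boost, 10, alpha_beta)
    print(total_bed)
    ```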

  8. Blood viscosity monitoring during cardiopulmonary bypass based on pressure-flow characteristics of a Newtonian fluid.

    PubMed

    Okahara, Shigeyuki; Zu Soh; Takahashi, Shinya; Sueda, Taijiro; Tsuji, Toshio

    2016-08-01

    We proposed a blood viscosity estimation method based on the pressure-flow characteristics of oxygenators used during cardiopulmonary bypass (CPB) in a previous study, which showed the estimated viscosity to correlate well with the measured viscosity. However, determining the parameters included in the method required the use of blood, leading to a high cost of calibration. Therefore, in this study we propose a new method to monitor blood viscosity, which approximates the pressure-flow characteristics of blood, a non-Newtonian fluid, with those of a Newtonian fluid, using parameters derived from glycerin solution, which are easy to acquire. Because the parameters used in the estimation method depend on the fluid type, bovine blood parameters were used to calculate the estimated viscosity (ηe), and glycerin parameters were used to calculate the deemed viscosity (ηdeem). Three samples of whole bovine blood with different hematocrit levels (21.8%, 31.0%, and 39.8%) were prepared and perfused through the oxygenator. As the temperature changed from 37 °C to 27 °C, the oxygenator mean inlet and outlet pressures were recorded for flows of 2 L/min and 4 L/min, and the viscosity was estimated. The deemed viscosity calculated with the glycerin parameters was lower than the estimated viscosity calculated with the bovine blood parameters by 20-33% at 21.8% hematocrit, 12-27% at 31.0% hematocrit, and 10-15% at 39.8% hematocrit. Furthermore, the deemed viscosity was lower than the estimated viscosity by 10-30% at 2 L/min and 30-40% at 4 L/min. Nevertheless, the estimated and deemed viscosities varied with a similar slope. This shows that the deemed viscosity obtained using glycerin parameters may be capable of successfully monitoring relative viscosity changes of blood in a perfusing oxygenator.
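
    The shape of such an estimation can be sketched as follows. The paper's actual characteristic curves and calibration constants are not given here; the sketch assumes a simple viscous-plus-inertial pressure-flow form with placeholder constants, to show how a deemed viscosity can be read off from a measured oxygenator pressure drop and flow.

    ```python
    def deemed_viscosity(dp_mmHg: float, q_lpm: float,
                         k1: float = 2.0, k2: float = 0.5) -> float:
        """Invert an assumed oxygenator characteristic
            dP = k1 * eta * Q + k2 * Q**2
        (viscous + inertial term) for the viscosity eta.  In the paper's
        method k1 and k2 would be fitted to glycerin-solution data; both
        the values and the functional form here are placeholders.
        """
        return (dp_mmHg - k2 * q_lpm**2) / (k1 * q_lpm)

    # Example: 28 mmHg pressure drop across the oxygenator at 4 L/min
    print(deemed_viscosity(28.0, 4.0))   # -> 2.5 (arbitrary calibrated units)
    ```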

  9. Verification of Plutonium Content in PuBe Sources Using MCNP® 6.2.0 Beta with TENDL 2012 Libraries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lockhart, Madeline Louise; McMath, Garrett Earl

    Although the production of PuBe neutron sources has been discontinued, hundreds of sources with unknown or inaccurately declared plutonium content are in existence around the world. Institutions have undertaken the task of assaying these sources, measuring and calculating the isotopic composition, plutonium content, and neutron yield. The nominal plutonium content, based on the neutron yield per gram of pure 239Pu, has been shown to be highly inaccurate. New methods of measuring the plutonium content allow a more accurate estimate of the true Pu content, but these measurements need verification. Using the TENDL 2012 nuclear data libraries, MCNP6 has the capability to simulate the (α, n) interactions in a PuBe source. Theoretically, if the source is modeled according to the plutonium content, isotopic composition, and other source characteristics, the calculated neutron yield in MCNP can be compared to the experimental yield, offering an indication of the accuracy of the declared plutonium content. In this study, three sets of PuBe sources from various backgrounds were modeled in MCNP6 1.2 Beta, according to the source specifications provided by the individuals who assayed the sources. Verification of the source parameters with MCNP6 also serves as a means to test the alpha transport capabilities of MCNP6 1.2 Beta with TENDL 2012 alpha transport libraries. Finally, good agreement in the comparison would indicate the accuracy of the source parameters in addition to demonstrating MCNP's capabilities in simulating (α, n) interactions.

  10. Verification of Plutonium Content in PuBe Sources Using MCNP® 6.2.0 Beta with TENDL 2012 Libraries

    DOE PAGES

    Lockhart, Madeline Louise; McMath, Garrett Earl

    2017-10-26

    Although the production of PuBe neutron sources has been discontinued, hundreds of sources with unknown or inaccurately declared plutonium content are in existence around the world. Institutions have undertaken the task of assaying these sources, measuring and calculating the isotopic composition, plutonium content, and neutron yield. The nominal plutonium content, based on the neutron yield per gram of pure 239Pu, has been shown to be highly inaccurate. New methods of measuring the plutonium content allow a more accurate estimate of the true Pu content, but these measurements need verification. Using the TENDL 2012 nuclear data libraries, MCNP6 has the capability to simulate the (α, n) interactions in a PuBe source. Theoretically, if the source is modeled according to the plutonium content, isotopic composition, and other source characteristics, the calculated neutron yield in MCNP can be compared to the experimental yield, offering an indication of the accuracy of the declared plutonium content. In this study, three sets of PuBe sources from various backgrounds were modeled in MCNP6 1.2 Beta, according to the source specifications provided by the individuals who assayed the sources. Verification of the source parameters with MCNP6 also serves as a means to test the alpha transport capabilities of MCNP6 1.2 Beta with TENDL 2012 alpha transport libraries. Finally, good agreement in the comparison would indicate the accuracy of the source parameters in addition to demonstrating MCNP's capabilities in simulating (α, n) interactions.

  11. Finite Element Vibration Modeling and Experimental Validation for an Aircraft Engine Casing

    NASA Astrophysics Data System (ADS)

    Rabbitt, Christopher

    This thesis presents a procedure for the development and validation of a theoretical vibration model, applies this procedure to a pair of aircraft engine casings, and compares select parameters from experimental testing of those casings to those from a theoretical model using the Modal Assurance Criterion (MAC) and linear regression coefficients. A novel method of determining the optimal MAC between axisymmetric results is developed and employed. It is concluded that the dynamic finite element models developed as part of this research are fully capable of modelling the modal parameters within the frequency range of interest. Confidence intervals calculated in this research for correlation coefficients provide important information regarding the reliability of predictions, and it is recommended that these intervals be calculated for all comparable coefficients. The procedure outlined for aligning mode shapes around an axis of symmetry proved useful, and the results are promising for the development of further optimization techniques.

  12. The investigation of a compact auto-connected wire-wrapped pulsed transformer

    NASA Astrophysics Data System (ADS)

    Wang, Yuwei; Zhang, Jiande; Chen, Dongqun; Cao, Shengguang; Li, Da; Zhang, Tianyang

    2012-05-01

    For the power conditioning circuit used to deliver power efficiently from a flux compression generator (FCG) to a load with high impedance, an air-cored and wire-wrapped transformer convenient for coaxial connection to the other parts is investigated. To reduce the size and enhance the performance, an auto-connection is adopted. A fast and simple model is used to calculate the electrical parameters of the transformer. To evaluate the high-voltage capability, the voltages across turns and the electric field distribution in the transformer are investigated. The calculated and the measured electrical parameters of the transformer show good agreement, and the safe operating voltage is predicted to exceed 500 kV. In preliminary experiments, the transformer was tested in a power conditioning circuit with a capacitive power supply. It is demonstrated that the output voltage of the transformer reaches -342 kV under an input voltage of -81 kV.

  13. The investigation of a compact auto-connected wire-wrapped pulsed transformer.

    PubMed

    Wang, Yuwei; Zhang, Jiande; Chen, Dongqun; Cao, Shengguang; Li, Da; Zhang, Tianyang

    2012-05-01

    For the power conditioning circuit used to deliver power efficiently from a flux compression generator (FCG) to a load with high impedance, an air-cored and wire-wrapped transformer convenient for coaxial connection to the other parts is investigated. To reduce the size and enhance the performance, an auto-connection is adopted. A fast and simple model is used to calculate the electrical parameters of the transformer. To evaluate the high-voltage capability, the voltages across turns and the electric field distribution in the transformer are investigated. The calculated and the measured electrical parameters of the transformer show good agreement, and the safe operating voltage is predicted to exceed 500 kV. In preliminary experiments, the transformer was tested in a power conditioning circuit with a capacitive power supply. It is demonstrated that the output voltage of the transformer reaches -342 kV under an input voltage of -81 kV.

  14. FASTER 3: A generalized-geometry Monte Carlo computer program for the transport of neutrons and gamma rays. Volume 2: Users manual

    NASA Technical Reports Server (NTRS)

    Jordan, T. M.

    1970-01-01

    A description of the FASTER-III program for Monte Carlo calculation of photon and neutron transport in complex geometries is presented. Major revisions include the capability of calculating minimum-weight shield configurations for primary and secondary radiation and optimal importance sampling parameters. The program description includes a users manual describing the preparation of input data cards, the printout from a sample problem including the data card images, definitions of Fortran variables, the program logic, and the control cards required to run on the IBM 7094, IBM 360, UNIVAC 1108, and CDC 6600 computers.

  15. Numerical calculation of ion polarization in the NICA collider

    NASA Astrophysics Data System (ADS)

    Kovalenko, A. D.; Butenko, A. V.; Kekelidze, V. D.; Mikhaylov, V. A.; Kondratenko, M. A.; Kondratenko, A. M.; Filatov, Yu N.

    2016-02-01

    The NICA collider with two solenoid Siberian snakes is "transparent" to the spin. A collider transparent to the spin provides a unique capability to control any polarization direction of protons and deuterons using additional weak solenoids without affecting the orbital parameters of the beam. The spin tune induced by the control solenoids must significantly exceed the strength of the zero-integer spin resonance, which contains a coherent part associated with errors in the collider's magnetic structure and an incoherent part associated with the beam emittances. We present calculations of the coherent part of the resonance strength in the NICA collider for proton and deuteron beams.

  16. The System of Simulation and Multi-objective Optimization for the Roller Kiln

    NASA Astrophysics Data System (ADS)

    Huang, He; Chen, Xishen; Li, Wugang; Li, Zhuoqiu

    Obtaining the building parameters of a ceramic roller kiln simulation model is a difficult research problem. A system integrating evolutionary algorithms (PSO, DE, and DEPSO) and computational fluid dynamics (CFD) is proposed to solve it, and the temperature field uniformity and environmental disruption are studied in this paper. With the help of efficient parallel calculation, the ceramic roller kiln temperature field uniformity and the NOx emissions field are investigated in the system at the same time. A multi-objective optimization example of an industrial roller kiln demonstrates the system's excellent parameter-exploration capability.

  17. A simple model of hysteresis behavior using spreadsheet analysis

    NASA Astrophysics Data System (ADS)

    Ehrmann, A.; Blachowicz, T.

    2015-01-01

    Hysteresis loops occur in many scientific and technical problems, especially as the field-dependent magnetization of ferromagnetic materials, but also as stress-strain curves of materials measured by tensile tests including thermal effects, in liquid-solid phase transitions, in cell biology, or in economics. While several mathematical models exist which aim to calculate hysteresis energies and other parameters, here we offer a simple model for a general hysteretic system, showing different hysteresis loops depending on the defined parameters. The calculation, which is based on basic spreadsheet analysis plus an easy macro code, can be used by students to understand how these systems work and how the parameters influence the reaction of the system to an external field. Importantly, in the step-by-step mode, each change of the system state, compared to the last step, becomes visible. The simple program can be developed further by several changes and additions, enabling the building of a tool which is capable of answering real physical questions in the broad field of magnetism as well as in other scientific areas in which similar hysteresis loops occur.
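
    A minimal programmatic counterpart to such a spreadsheet model is shown below, assuming shifted-tanh ascending and descending branches offset by a coercive field, which is one common classroom parameterization, not necessarily the authors' exact update rule.

    ```python
    import numpy as np

    def hysteresis_loop(h_max=5.0, h_c=1.0, width=0.8, m_s=1.0, steps=200):
        """Generic hysteretic system: ascending and descending branches are
        tanh curves shifted by the 'coercive field' h_c -- the same behavior
        a spreadsheet macro can generate cell by cell in step-by-step mode."""
        h_up = np.linspace(-h_max, h_max, steps)
        h_dn = h_up[::-1]
        m_up = m_s * np.tanh((h_up - h_c) / width)   # ascending branch
        m_dn = m_s * np.tanh((h_dn + h_c) / width)   # descending branch
        return np.concatenate([h_up, h_dn]), np.concatenate([m_up, m_dn])

    h, m = hysteresis_loop()
    # Enclosed loop area (trapezoidal rule) ~ hysteresis energy loss per cycle
    loop_area = abs(np.sum((m[1:] + m[:-1]) * np.diff(h)) / 2.0)
    print(f"loop area = {loop_area:.3f}")
    ```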

  18. Development of Standard Fuel Models in Boreal Forests of Northeast China through Calibration and Validation

    PubMed Central

    Cai, Longyan; He, Hong S.; Wu, Zhiwei; Lewis, Benard L.; Liang, Yu

    2014-01-01

    Understanding the fire prediction capabilities of fuel models is vital to forest fire management. Various fuel models have been developed in the Great Xing'an Mountains in Northeast China. However, the performance of these fuel models has not been tested against historical occurrences of wildfires. Consequently, the applicability of these models requires further investigation. Thus, this paper aims to develop standard fuel models. Seven vegetation types were combined into three fuel models according to potential fire behaviors, which were clustered using Euclidean distance algorithms. Fuel model parameter sensitivity was analyzed by the Morris screening method. Results showed that the fuel model parameters 1-hour time-lag loading, dead heat content, live heat content, 1-hour time-lag SAV (surface-area-to-volume ratio), live shrub SAV, and fuel bed depth have high sensitivity. The two most sensitive fuel parameters, 1-hour time-lag loading and fuel bed depth, were chosen as adjustment parameters because of their high spatio-temporal variability. The FARSITE model was then used to test the fire prediction capabilities of the combined fuel models (uncalibrated fuel models). FARSITE was shown to yield an unrealistic prediction of the historical fire. However, the calibrated fuel models significantly improved the capability of the fuel models to predict the actual fire, with an accuracy of 89%. Validation results also showed that the model can estimate the actual fires with an accuracy exceeding 56% when using the calibrated fuel models. Therefore, these fuel models can be efficiently used to calculate fire behaviors, which can be helpful in forest fire management. PMID:24714164

  19. Research on capability of detecting ballistic missile by near space infrared system

    NASA Astrophysics Data System (ADS)

    Lu, Li; Sheng, Wen; Jiang, Wei; Jiang, Feng

    2018-01-01

    The infrared detection technology for ballistic missiles based on a near-space platform can effectively make up for the shortcomings of traditional early-warning satellites (high cost) and ground-based early-warning radar (limited by Earth curvature). In terms of target detection capability, the conventional contrast-performance detection-range formula ignores the background emissivity in the calculation and is valid only for monochromatic light; an improved contrast-performance detection-range formula is therefore proposed. The parameters of a near-space infrared imaging system are introduced, and the expression for the contrast-based detection range for target detection from a near-space platform is derived. The detection ranges of the near-space infrared system for the skin, tail nozzle, and tail flame of a boost-phase ballistic missile are calculated. The simulation results show that the near-space infrared system is most effective in detecting tail-flame radiation.

  20. ETARA PC version 3.3 user's guide: Reliability, availability, maintainability simulation model

    NASA Technical Reports Server (NTRS)

    Hoffman, David J.; Viterna, Larry A.

    1991-01-01

    A user's manual describing an interactive, menu-driven, personal-computer-based Monte Carlo reliability, availability, and maintainability simulation program called Event Time Availability Reliability (ETARA) is discussed. Given a reliability block diagram representation of a system, ETARA simulates the behavior of the system over a specified period of time, using Monte Carlo methods to generate block failure and repair intervals as a function of exponential and/or Weibull distributions. Availability parameters such as equivalent availability, state availability (percentage of time at a particular output state capability), continuous state duration, and number of state occurrences can be calculated. Initial spares allotment and spares replenishment on a resupply cycle can be simulated. The number of block failures is tabulated both individually and by block type, together with total downtime, repair time, and time waiting for spares. Also, maintenance man-hours per year and system reliability, with or without repair, at or above a particular output capability can be calculated over a cumulative period of time or at specific points in time.
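
    The simulation idea can be condensed to a few lines. The sketch below strips ETARA's block-diagram and spares machinery down to a single block with exponential failure and repair intervals (Weibull sampling and multi-block logic are omitted); all numbers are illustrative.

    ```python
    import random

    def simulate_availability(mtbf, mttr, mission_time, n_runs=2000, seed=1):
        """Monte Carlo equivalent availability of one repairable block."""
        rng = random.Random(seed)
        up_total = 0.0
        for _ in range(n_runs):
            t, up = 0.0, 0.0
            while t < mission_time:
                ttf = rng.expovariate(1.0 / mtbf)     # time to failure
                up += min(ttf, mission_time - t)
                t += ttf
                if t >= mission_time:
                    break
                t += rng.expovariate(1.0 / mttr)      # repair interval
            up_total += up / mission_time
        return up_total / n_runs

    # Expect roughly MTBF / (MTBF + MTTR) = 1000/1020 ~ 0.98
    print(simulate_availability(1000.0, 20.0, 8760.0))
    ```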

  1. CHAM: weak signals detection through a new multivariate algorithm for process control

    NASA Astrophysics Data System (ADS)

    Bergeret, François; Soual, Carole; Le Gratiet, B.

    2016-10-01

    Derivative technologies based on core CMOS processes are significantly aggressive in terms of design rules and process control requirements. The process control plan is derived from Process Assumption (PA) calculations, which result in design rules based on known process variability capabilities, taking into account enough margin to be safe not only for yield but especially for reliability. Even though process assumptions are calculated with a 4-sigma margin on known process capability, efficient and competitive designs are challenging the process, especially for derivative technologies at the 40 and 28 nm nodes. For wafer fab process control, PAs are broken down into monovariate control charts (layer1 CD, layer2 CD, layer2-to-layer1 overlay, layer3 CD, etc.) with appropriate specifications and control limits, which all together secure the silicon. This has worked well so far, but such a system is not really sensitive to weak signals coming from interactions of multiple key parameters (a high layer2 CD combined with a high layer3 CD, for example). CHAM is a software package using an advanced statistical algorithm specifically designed to detect small signals, especially when there are many parameters to control and when the parameters can interact to create yield issues. In this presentation we first present the CHAM algorithm, then a case study on critical dimensions with its results, and we conclude with future work. This partnership between Ippon and STM is part of E450LMDAP, a European project dedicated to metrology and lithography development for future technology nodes, especially the 10 nm node.

  2. OPR-PPR, a Computer Program for Assessing Data Importance to Model Predictions Using Linear Statistics

    USGS Publications Warehouse

    Tonkin, Matthew J.; Tiedeman, Claire; Ely, D. Matthew; Hill, Mary C.

    2007-01-01

    The OPR-PPR program calculates the Observation-Prediction (OPR) and Parameter-Prediction (PPR) statistics that can be used to evaluate the relative importance of various kinds of data to simulated predictions. The data considered fall into three categories: (1) existing observations, (2) potential observations, and (3) potential information about parameters. The first two are addressed by the OPR statistic; the third is addressed by the PPR statistic. The statistics are based on linear theory and measure the leverage of the data, which depends on the location, the type, and possibly the time of the data being considered. For example, in a ground-water system the type of data might be a head measurement at a particular location and time. As a measure of leverage, the statistics do not take into account the value of the measurement. As linear measures, the OPR and PPR statistics require minimal computational effort once sensitivities have been calculated. Sensitivities need to be calculated for only one set of parameter values; commonly these are the values estimated through model calibration. OPR-PPR can calculate the OPR and PPR statistics for any mathematical model that produces the necessary OPR-PPR input files. In this report, OPR-PPR capabilities are presented in the context of using the ground-water model MODFLOW-2000 and the universal inverse program UCODE_2005. The method used to calculate the OPR and PPR statistics is based on the linear equation for prediction standard deviation. Using sensitivities and other information, OPR-PPR calculates (a) the percent increase in the prediction standard deviation that results when one or more existing observations are omitted from the calibration data set; (b) the percent decrease in the prediction standard deviation that results when one or more potential observations are added to the calibration data set; or (c) the percent decrease in the prediction standard deviation that results when potential information on one or more parameters is added.
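
    The linear-theory computation behind the OPR statistic can be sketched compactly: form the parameter covariance from the weighted Jacobian, propagate it onto the prediction, and compare the prediction standard deviation with and without the omitted observations. The Jacobian, weights, prediction sensitivities, and error variance below are placeholder inputs; the real program additionally handles full weight matrices and the PPR case.

    ```python
    import numpy as np

    def opr_percent_increase(X, omega, z, s2, omit):
        """Percent increase in prediction standard deviation when the
        observations listed in `omit` leave the calibration set.

        X     : (n_obs, n_par) sensitivity (Jacobian) matrix
        omega : (n_obs,) observation weights
        z     : (n_par,) prediction sensitivity to the parameters
        s2    : calculated error variance
        """
        def pred_std(rows):
            # Linear theory: parameter covariance V, prediction variance z'Vz
            V = s2 * np.linalg.inv(X[rows].T @ (omega[rows, None] * X[rows]))
            return np.sqrt(z @ V @ z)
        all_rows = np.arange(X.shape[0])
        keep = np.setdiff1d(all_rows, omit)
        s_full = pred_std(all_rows)
        return 100.0 * (pred_std(keep) - s_full) / s_full

    rng = np.random.default_rng(0)
    X = rng.normal(size=(6, 2))            # toy problem: 6 observations, 2 parameters
    print(opr_percent_increase(X, np.ones(6), np.array([1.0, 0.5]), 1.0, omit=[2]))
    ```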

  3. Electrostatic potential calculation for biomolecules--creating a database of pre-calculated values reported on a per residue basis for all PDB protein structures.

    PubMed

    Rocchia, W; Neshich, G

    2007-10-05

    STING and Java Protein Dossier provide a collection of physical-chemical parameters describing protein structure, stability, function, and interaction, considered one of the most comprehensive among the available protein databases of a similar type. Particular attention in STING is paid to the electrostatic potential. It makes use of DelPhi, a well-known tool that calculates this physical-chemical quantity for biomolecules by solving the Poisson-Boltzmann equation. In this paper, we describe a modification to the DelPhi program aimed at integrating it within the STING environment. We also outline how the "amino acid electrostatic potential" and the "surface amino acid electrostatic potential" are calculated (over the entire Protein Data Bank (PDB) content) and how the corresponding values are made searchable in STING_DB. In addition, we show that STING and Java Protein Dossier are also capable of providing these particular parameter values for the analysis of protein structures modeled in computers or solved experimentally but not yet deposited in the PDB. Furthermore, we compare the electrostatic potential values obtained by using the earlier version of DelPhi with those obtained by STING, for the biologically relevant case of lysozyme-antibody interaction. Finally, we describe STING's capacity to make queries (at both residue and atomic levels) across the whole PDB, by looking at a specific case where the electrostatic potential parameter plays a crucial role in a particular protein function, such as ligand binding. BlueStar STING is available at http://www.cbi.cnptia.embrapa.br.
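
    For reference, the equation DelPhi solves can be written in a standard dimensionless textbook form (a sketch only; the reduced potential phi is in units of k_B*T/e, and DelPhi's internal constants and unit conventions may differ):

    ```latex
    \nabla \cdot \left[ \varepsilon(\mathbf{r}) \, \nabla \phi(\mathbf{r}) \right]
      - \bar{\kappa}^{2}(\mathbf{r}) \, \sinh \phi(\mathbf{r})
      = -4\pi \rho^{f}(\mathbf{r})
    ```

    where epsilon(r) is the position-dependent dielectric constant, the kappa-bar-squared term is the modified Debye-Hückel screening (nonzero only in the solvent region), and rho^f(r) is the fixed partial-charge density of the solute.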

  4. Underground Mining Method Selection Using WPM and PROMETHEE

    NASA Astrophysics Data System (ADS)

    Balusa, Bhanu Chander; Singam, Jayanthu

    2018-04-01

    The aim of this paper is to present a solution to the problem of selecting a suitable underground mining method for the mining industry. This is achieved by using two multi-attribute decision-making techniques: the weighted product method (WPM) and the preference ranking organization method for enrichment evaluation (PROMETHEE). In this paper, the analytic hierarchy process is used to calculate the weights of the attributes (i.e., the parameters used in this paper). Mining method selection depends on physical, mechanical, economic, and technical parameters. The WPM and PROMETHEE techniques have the ability to consider the relationships between the parameters and the mining methods. The proposed techniques give higher accuracy and faster computation when compared with other decision-making techniques. The proposed techniques are applied to determine the effective mining method for a bauxite mine. The results of these techniques are compared with methods used in earlier research works. The results show that the conventional cut-and-fill method is the most suitable mining method.
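
    The WPM half of the comparison is compact enough to sketch directly: each alternative's score is the product of its (normalized) criterion ratings raised to the AHP weights. All ratings and weights below are illustrative, not the paper's data.

    ```python
    import numpy as np

    def weighted_product_scores(matrix, weights, cost_criteria=()):
        """Weighted Product Method: score_i = prod_j x_ij ** w_j.
        Cost-type criteria (lower is better) are inverted first."""
        x = np.asarray(matrix, dtype=float).copy()
        for j in cost_criteria:
            x[:, j] = 1.0 / x[:, j]
        x = x / x.max(axis=0)              # normalize each criterion column
        return np.prod(x ** np.asarray(weights), axis=1)

    # Three hypothetical mining methods rated on four criteria
    ratings = [[7, 5, 6, 8],               # e.g., cut and fill
               [6, 8, 4, 5],
               [5, 6, 7, 6]]
    weights = [0.4, 0.3, 0.2, 0.1]         # AHP-derived, illustrative
    print(weighted_product_scores(ratings, weights))
    ```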

  5. MODFLOW-2000, the U.S. Geological Survey modular ground-water model; user guide to the observation, sensitivity, and parameter-estimation processes and three post-processing programs

    USGS Publications Warehouse

    Hill, Mary C.; Banta, E.R.; Harbaugh, A.W.; Anderman, E.R.

    2000-01-01

    This report documents the Observation, Sensitivity, and Parameter-Estimation Processes of the ground-water modeling computer program MODFLOW-2000. The Observation Process generates model-calculated values for comparison with measured, or observed, quantities. A variety of statistics is calculated to quantify this comparison, including a weighted least-squares objective function. In addition, a number of files are produced that can be used to compare the values graphically. The Sensitivity Process calculates the sensitivity of hydraulic heads throughout the model with respect to specified parameters using the accurate sensitivity-equation method. These are called grid sensitivities. If the Observation Process is active, it uses the grid sensitivities to calculate sensitivities for the simulated values associated with the observations. These are called observation sensitivities. Observation sensitivities are used to calculate a number of statistics that can be used (1) to diagnose inadequate data, (2) to identify parameters that probably cannot be estimated by regression using the available observations, and (3) to evaluate the utility of proposed new data. The Parameter-Estimation Process uses a modified Gauss-Newton method to adjust values of user-selected input parameters in an iterative procedure to minimize the value of the weighted least-squares objective function. Statistics produced by the Parameter-Estimation Process can be used to evaluate estimated parameter values; statistics produced by the Observation Process and post-processing program RESAN-2000 can be used to evaluate how accurately the model represents the actual processes; statistics produced by post-processing program YCINT-2000 can be used to quantify the uncertainty of model simulated values. Parameters are defined in the Ground-Water Flow Process input files and can be used to calculate most model inputs, such as: for explicitly defined model layers, horizontal hydraulic conductivity, horizontal anisotropy, vertical hydraulic conductivity or vertical anisotropy, specific storage, and specific yield; and, for implicitly represented layers, vertical hydraulic conductivity. In addition, parameters can be defined to calculate the hydraulic conductance of the River, General-Head Boundary, and Drain Packages; areal recharge rates of the Recharge Package; maximum evapotranspiration of the Evapotranspiration Package; pumpage or the rate of flow at defined-flux boundaries of the Well Package; and the hydraulic head at constant-head boundaries. The spatial variation of model inputs produced using defined parameters is very flexible, including interpolated distributions that require the summation of contributions from different parameters. Observations can include measured hydraulic heads or temporal changes in hydraulic heads, measured gains and losses along head-dependent boundaries (such as streams), flows through constant-head boundaries, and advective transport through the system, which generally would be inferred from measured concentrations. MODFLOW-2000 is intended for use on any computer operating system. The program consists of algorithms programmed in Fortran 90, which efficiently performs numerical calculations and is fully compatible with the newer Fortran 95. The code is easily modified to be compatible with FORTRAN 77. Coordination for multiple processors is accommodated using Message Passing Interface (MPI) commands. 
The program is designed in a modular fashion that is intended to support inclusion of new capabilities.
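
    For reference, the weighted least-squares objective function minimized by the Parameter-Estimation Process has the standard form (sketched here with diagonal weighting; the program also accommodates more general weight matrices):

    ```latex
    S(\mathbf{b}) = \sum_{i=1}^{n} \omega_{i} \left[ y_{i} - y_{i}'(\mathbf{b}) \right]^{2}
    ```

    where b is the vector of estimated parameters, y_i the observations, y_i'(b) the corresponding simulated values, and omega_i the observation weights; the modified Gauss-Newton iteration adjusts b to minimize S(b).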

  6. Earthquake Early Warning: New Strategies for Seismic Hardware

    NASA Astrophysics Data System (ADS)

    Allardice, S.; Hill, P.

    2017-12-01

    Implementing Earthquake Early Warning System (EEWS) triggering algorithms in seismic networks has been a hot topic of discussion for some years now. With digitizer technology now available, such as the Güralp Minimus, with on average 40-60 ms delay time (latency) from data acquisition to issuing an alert, the next step is to provide network operators with a simple interface for on-board parameter calculations at a seismic station. A voting mechanism is implemented on board which mitigates the risk of false positives being communicated. Each Minimus can be configured with a 'score' from various sources, i.e., the Z channel of a seismometer, the N/S and E/W channels of an accelerometer, and the MEMS sensor inside the Minimus. If the score exceeds the set threshold, an alert is sent to the 'master Minimus'. The master Minimus within the network is also configured as to when the alert should be issued, e.g., at least 3 stations must have triggered. Industry-standard algorithms focus on the calculation of Peak Ground Acceleration (PGA), Peak Ground Velocity (PGV), Peak Ground Displacement (PGD), and C. Calculating these single-station parameters on board, in order to stream only the results, can help network operators with possible issues such as restricted bandwidth. Developments on the Minimus allow these parameters to be calculated and distributed through the Common Alerting Protocol (CAP). CAP is the XML-based data format used for exchanging and describing public warnings and emergencies. Whenever the trigger conditions are met, the Minimus can send a signed UDP packet to the configured CAP receiver, which can then send the alert via SMS, e-mail, or CAP forwarding. Increasing network redundancy is also a consideration when developing these features; therefore the forwarded CAP message can be sent to multiple destinations. This allows a hierarchical approach by which the single-station (or network) parameters can be streamed to another Minimus, or a data centre, or both, so that there is no single point of failure. The developments on the Güralp Minimus to calculate these parameters on board and stream single-station parameters, accompanied by ultra-low latency, represent the next generation of EEWS and Güralp's contribution to the community.
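
    The two-stage voting logic described above can be sketched as follows; the source names, scores, and thresholds are illustrative placeholders, not Güralp's actual configuration interface.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Source:
        name: str
        score: int          # configured weight of this trigger source
        triggered: bool

    def station_alert(sources, threshold):
        """Per-station vote: sum scores of triggered sources, compare to threshold."""
        return sum(s.score for s in sources if s.triggered) > threshold

    def network_alert(station_flags, min_stations=3):
        """Master-unit vote: require at least `min_stations` triggered stations."""
        return sum(station_flags) >= min_stations

    sources = [Source("seismometer Z", 2, True),
               Source("accelerometer N/S, E/W", 2, True),
               Source("internal MEMS", 1, False)]
    flags = [station_alert(sources, threshold=3), True, True]  # this + 2 other stations
    print(network_alert(flags))   # -> True: three stations triggered
    ```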

  7. Accuracy of a teleported squeezed coherent-state superposition trapped into a high-Q cavity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sales, J. S.; Silva, L. F. da; Almeida, N. G. de

    2011-03-15

    We propose a scheme to teleport a superposition of squeezed coherent states from one mode of a lossy cavity to one mode of a second lossy cavity. Based on current experimental capabilities, we present a calculation of the fidelity demonstrating that accurate quantum teleportation can be achieved for some parameters of the squeezed coherent-state superposition. The signature of successful quantum teleportation is present in the negative values of the Wigner function.

  8. Accuracy of a teleported squeezed coherent-state superposition trapped into a high-Q cavity

    NASA Astrophysics Data System (ADS)

    Sales, J. S.; da Silva, L. F.; de Almeida, N. G.

    2011-03-01

    We propose a scheme to teleport a superposition of squeezed coherent states from one mode of a lossy cavity to one mode of a second lossy cavity. Based on current experimental capabilities, we present a calculation of the fidelity demonstrating that accurate quantum teleportation can be achieved for some parameters of the squeezed coherent-state superposition. The signature of successful quantum teleportation is present in the negative values of the Wigner function.

  9. Multipinhole SPECT helical scan parameters and imaging volume

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yao, Rutao, E-mail: rutaoyao@buffalo.edu; Deng, Xiao; Wei, Qingyang

    Purpose: The authors developed SPECT imaging capability on an animal PET scanner using a multiple-pinhole collimator and step-and-shoot helical data acquisition protocols. The objective of this work was to determine the preferred helical scan parameters, i.e., the angular and axial step sizes, and the imaging volume, that provide optimal imaging performance. Methods: The authors studied nine helical scan protocols formed by permuting three rotational and three axial step sizes. These step sizes were chosen around the reference values analytically calculated from the estimated spatial resolution of the SPECT system and the Nyquist sampling theorem. The nine helical protocols were evaluated by two figures-of-merit: the sampling completeness percentage (SCP) and the root-mean-square (RMS) resolution. SCP was an analytically calculated numerical index based on projection sampling. RMS resolution was derived from the reconstructed images of a sphere-grid phantom. Results: The RMS resolution results show that (1) the start and end pinhole planes of the helical scheme determine the axial extent of the effective field of view (EFOV), and (2) the diameter of the transverse EFOV is adequately calculated from the geometry of the pinhole opening, since the peripheral region beyond EFOV would introduce projection multiplexing and consequent effects. The RMS resolution results of the nine helical scan schemes show optimal resolution is achieved when the axial step size is the half, and the angular step size is about twice the corresponding values derived from the Nyquist theorem. The SCP results agree in general with that of RMS resolution but are less critical in assessing the effects of helical parameters and EFOV. Conclusions: The authors quantitatively validated the effective FOV of multiple pinhole helical scan protocols and proposed a simple method to calculate optimal helical scan parameters.
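
    The step-size arithmetic is simple enough to sketch. Below, the Nyquist criterion (sample at half the expected resolution) gives the reference values, and the paper's empirical finding, halve the axial step and roughly double the angular step, is applied on top; the resolution and field-of-view numbers are placeholders.

    ```python
    import math

    def helical_steps(resolution_mm, fov_radius_mm):
        """Step-and-shoot helical scan steps from a Nyquist starting point."""
        nyquist = resolution_mm / 2.0                    # Nyquist sampling distance
        axial_step = 0.5 * nyquist                       # mm per axial step
        # Nyquist angular step: arc length at the FOV radius equal to the
        # sampling distance, then doubled per the paper's finding.
        angular_step = 2.0 * math.degrees(nyquist / fov_radius_mm)
        return axial_step, angular_step

    ax, ang = helical_steps(resolution_mm=2.0, fov_radius_mm=30.0)
    print(f"axial step = {ax:.2f} mm, angular step = {ang:.1f} deg")
    ```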

  10. NSEG: A segmented mission analysis program for low and high speed aircraft. Volume 3: Demonstration problems

    NASA Technical Reports Server (NTRS)

    Hague, D. S.; Rozendaal, H. L.

    1977-01-01

    Program NSEG is a rapid mission analysis code based on the use of approximate flight path equations of motion. Equation form varies with the segment type, for example, accelerations, climbs, cruises, descents, and decelerations. Realistic and detailed vehicle characteristics are specified in tabular form. In addition to its mission performance calculation capabilities, the code also contains extensive flight envelope performance mapping capabilities. For example, rate-of-climb, turn rates, and energy maneuverability parameter values may be mapped in the Mach-altitude plane. Approximate take off and landing analyses are also performed. At high speeds, centrifugal lift effects are accounted for. Extensive turbojet and ramjet engine scaling procedures are incorporated in the code.

  11. [Research on identification of species of fruit trees by spectral analysis].

    PubMed

    Xing, Dong-Xing; Chang, Qing-Rui

    2009-07-01

    Using the spectral reflectance data R(lambda) of canopies, the present paper identifies seven species of fruit trees bearing fruit in the fruit-mature period. Firstly, it compares the fruit tree species identification capability of six kinds of satellite sensors and four kinds of vegetation indices by re-sampling the spectral data with six kinds of pre-defined filter functions and calculating the related vegetation indexes. Then, it constructs a BP neural network model for identifying the seven species of fruit trees on the basis of choosing the best transformation of R(lambda) and optimizing the model parameters. The main conclusions are: (1) the order of the identification capability of the six kinds of satellite sensors, from strong to weak, is MODIS, ASTER, ETM+, HRG, QUICKBIRD, and IKONOS; (2) among the four kinds of vegetation indexes, the identification capability of RVI is the most powerful, the next is NDVI, while the identification capability of SAVI or DVI is relatively weak; (3) the identification capability of RVI and NDVI calculated with the reflectance of the near-infrared and red channels of the ETM+ or MODIS sensor is relatively powerful; (4) among R(lambda) and its 22 kinds of transformation data, d1[log(1/R(lambda))] (derivative gap set to 9 nm) is the best transformation for structuring a BP neural network model; (5) the paper constructs a 3-layer BP neural network model for identifying the seven species of fruit trees using the best transformation of R(lambda), which is d1[log(1/R(lambda))] (derivative gap set to 9 nm).
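
    For reference, the two best-performing indices are simple band ratios of near-infrared and red reflectance; the sketch below uses illustrative reflectance values for one canopy spectrum.

    ```python
    def rvi(nir: float, red: float) -> float:
        """Ratio Vegetation Index -- the most discriminative index here."""
        return nir / red

    def ndvi(nir: float, red: float) -> float:
        """Normalized Difference Vegetation Index."""
        return (nir - red) / (nir + red)

    nir_refl, red_refl = 0.45, 0.08   # canopy band reflectances, illustrative
    print(f"RVI = {rvi(nir_refl, red_refl):.2f}, NDVI = {ndvi(nir_refl, red_refl):.3f}")
    ```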

  12. Investigating the Energetic Ordering of Stable and Metastable TiO2 Polymorphs Using DFT+U and Hybrid Functionals

    DOE PAGES

    Curnan, Matthew T.; Kitchin, John R.

    2015-08-12

    Prediction of transition metal oxide BO2 (B = Ti, V, etc.) polymorph energetic properties is critical to tunable material design and identifying thermodynamically accessible structures. Determining procedures capable of synthesizing particular polymorphs minimally requires prior knowledge of their relative energetic favorability. Information concerning TiO2 polymorph relative energetic favorability has been ascertained from experimental research. In this study, the consistency of first-principles predictions and experimental results involving the relative energetic ordering of stable (rutile), metastable (anatase and brookite), and unstable (columbite) TiO2 polymorphs is assessed via density functional theory (DFT). Considering the issues involving electron-electron interaction and charge delocalization in TiO2 calculations, relative energetic ordering predictions are evaluated over trends varying the Ti 3d Hubbard U or the exact-exchange fraction parameter values. Energetic trends formed by varying U predict experimentally consistent energetic ordering over U intervals when using GGA-based functionals, regardless of pseudopotential selection. Given pertinent linear-response calculated Hubbard U values, these results enable TiO2 polymorph energetic ordering prediction. The hybrid functional calculations involving rutile-anatase relative energetics, though demonstrating experimentally consistent energetic ordering over exact-exchange fraction ranges, are not accompanied by predicted fractions, since a first-principles methodology capable of calculating exact-exchange fractions that precisely predict TiO2 polymorph energetic ordering is not available.

  13. Diagnosis of dynamic process over rainband of landfall typhoon

    NASA Astrophysics Data System (ADS)

    Ran, Ling-Kun; Yang, Wen-Xia; Chu, Yan-Li

    2010-07-01

    This paper introduces a new physical parameter, the thermodynamic shear advection parameter, combining the perturbation vertical component of the convective vorticity vector with the coupling of the horizontal divergence perturbation and the vertical gradient of the general potential temperature perturbation. For a heavy-rainfall event resulting from the landfalling typhoon 'Wipha', the parameter is calculated using National Centers for Environmental Prediction/National Center for Atmospheric Research global final analysis data. The results show that the parameter corresponds to the observed 6-h accumulative rainband, since it is capable of capturing the dynamic and thermodynamic disturbance in the lower troposphere over the observed rainband. Before the typhoon landed, the advection of the parameter by the basic-state flow and the coupling of the general potential temperature perturbation with the curl of the Coriolis force perturbation were the primary dynamic processes responsible for the local change of the parameter. After the typhoon landed, the disturbance was mainly driven by the combination of five primary dynamic processes.

  14. MF/HF Multistatic Mid-Ocean Radar Experiments in Support of SWOTHR (surface-Wave Over-the-Horizon Radar)

    DTIC Science & Technology

    1989-09-16

    SWOTHR was conceived to be an organic asset capable of providing early detection and tracking of fast, surface-skimming threats, such as cruise missiles... distributed real-time processing and threat tracking system. Specific project goals were to verify detection performance predictions for small, fast targets... means that enlarging the ground plane would have been a fruitless exercise in any event. Table B-1 summarizes the calculated parameters of

  15. EGADS: A microcomputer program for estimating the aerodynamic performance of general aviation aircraft

    NASA Technical Reports Server (NTRS)

    Melton, John E.

    1994-01-01

    EGADS is a comprehensive preliminary design tool for estimating the performance of light, single-engine general aviation aircraft. The software runs on the Apple Macintosh series of personal computers and assists amateur designers and aeronautical engineering students in performing the many repetitive calculations required in the aircraft design process. The program makes full use of the mouse and standard Macintosh interface techniques to simplify the input of various design parameters. Extensive graphics, plotting, and text output capabilities are also included.

  16. User’s Manual for SEEK TALK Full Scale Engineering Development Life Cycle Cost (LCC) Model. Volume II. Model Equations and Model Operations.

    DTIC Science & Technology

    1981-04-01

    Keywords: life cycle cost (LCC); LCC sensitivity analysis; LCC model; repair level analysis (RLA). Abstract (fragment): ...level analysis capability. Next it provides values for Air Force input parameters and instructions for contractor inputs, general operating... Contents (fragment): Maintenance Manhour Requirements; Calculation of Repair Level Fractions; Cost Element Equations; Production Cost Element

  17. Investigation of hydrogen sulfide gas using Pd/Pt material based fiber Bragg grating sensor

    NASA Astrophysics Data System (ADS)

    Bedi, Amna; Rao, Dusari Nageswara; Kumar, Santosh

    2018-02-01

    In this work, a Pd/Pt material based fiber Bragg grating (FBG) sensor is proposed for the detection of hydrogen sulfide gas. The characteristics of the FBG parameters were numerically calculated and simulated, and the variation of reflectivity with refractive index is shown: the reflectivity of the FBG changes when the refractive index is changed. The proposed sensor works at very low concentrations, i.e., 0% to 1%, giving it the capability to detect the gas at an early stage.
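
    The quantities behind such a simulation can be sketched with standard coupled-mode-theory expressions (not the paper's specific numerical model); the effective index, grating period, index modulation, and overlap factor below are assumptions. Gas absorption in the Pd/Pt coating changes the effective index, shifting the Bragg wavelength and hence the reflected signal.

    ```python
    import math

    def bragg_wavelength(n_eff: float, period_nm: float) -> float:
        """Bragg condition: lambda_B = 2 * n_eff * Lambda."""
        return 2.0 * n_eff * period_nm

    def peak_reflectivity(delta_n: float, length_mm: float,
                          lambda_b_nm: float, overlap: float = 0.8) -> float:
        """Uniform-grating peak reflectivity: R = tanh^2(kappa * L),
        with coupling coefficient kappa = pi * overlap * delta_n / lambda_B."""
        kappa = math.pi * overlap * delta_n / (lambda_b_nm * 1e-9)   # [1/m]
        return math.tanh(kappa * length_mm * 1e-3) ** 2

    lam = bragg_wavelength(n_eff=1.447, period_nm=535.0)   # ~1548 nm
    print(f"lambda_B = {lam:.0f} nm, R = {peak_reflectivity(1e-4, 10.0, lam):.3f}")
    ```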

  18. Mathematical Model for a Simplified Calculation of the Input Momentum Coefficient for AFC Purposes

    NASA Astrophysics Data System (ADS)

    Hirsch, Damian; Gharib, Morteza

    2016-11-01

    Active Flow Control (AFC) is an emerging technology which aims at enhancing the aerodynamic performance of flight vehicles (e.g., to save fuel). A viable AFC system must consider the limited resources available on a plane for attaining performance goals. A higher performance goal (e.g., airplane incremental lift) demands a higher fluidic input requirement (e.g., mass flow rate). Therefore, the key requirement for a successful and practical design is to minimize power input while maximizing performance to achieve design targets. One of the most widely used design parameters is the input momentum coefficient Cμ. The difficulty associated with Cμ lies in obtaining the parameters for its calculation. In the literature two main approaches can be found, both of which have their own disadvantages (assumptions, difficult measurements). A new, much simpler calculation approach will be presented that is based on a mathematical model that can be applied to most jet designs (i.e., steady or sweeping jets). The model's assumptions will be justified theoretically as well as experimentally. Furthermore, the model's capabilities are exploited to give new insight into the AFC technology and its physical limitations. Supported by Boeing.
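
    For context, the conventional definition being computed is shown below; the paper's simplified model for obtaining the jet mass flow and exit velocity is not reproduced, and all numbers are illustrative wind-tunnel values.

    ```python
    def momentum_coefficient(m_dot, u_jet, rho_inf, u_inf, s_ref):
        """C_mu = (m_dot * U_jet) / (q_inf * S_ref), q_inf = 0.5*rho*U_inf^2."""
        q_inf = 0.5 * rho_inf * u_inf**2
        return (m_dot * u_jet) / (q_inf * s_ref)

    print(momentum_coefficient(m_dot=0.02,    # kg/s injected
                               u_jet=150.0,   # m/s jet exit velocity
                               rho_inf=1.2,   # kg/m^3 freestream density
                               u_inf=30.0,    # m/s freestream velocity
                               s_ref=0.5))    # m^2 reference area -> ~0.011
    ```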

  19. Fully 3D modeling of tokamak vertical displacement events with realistic parameters

    NASA Astrophysics Data System (ADS)

    Pfefferle, David; Ferraro, Nathaniel; Jardin, Stephen; Bhattacharjee, Amitava

    2016-10-01

    In this work, we model the complex multi-domain and highly non-linear physics of Vertical Displacement Events (VDEs), one of the most damaging off-normal events in tokamaks, with the implicit 3D extended MHD code M3D-C1. The code has recently acquired the capability to include finite-thickness conducting structures within the computational domain. By exploiting the possibility of running a linear 3D calculation on top of a non-linear 2D simulation, we monitor the non-axisymmetric stability and assess the eigenstructure of kink modes as the simulation proceeds. Once a stability boundary is crossed, a fully 3D non-linear calculation is launched for the remainder of the simulation, starting from an earlier time of the 2D run. This procedure, along with adaptive zoning, greatly increases the efficiency of the calculation and allows VDE simulations to be performed with realistic parameters and high resolution. Simulations are being validated with NSTX data, where both axisymmetric (toroidally averaged) and non-axisymmetric induced and conductive (halo) currents have been measured. This work is supported by US DOE Grant DE-AC02-09CH11466.

  20. An approach for delineating drinking water wellhead protection areas at the Nile Delta, Egypt.

    PubMed

    Fadlelmawla, Amr A; Dawoud, Mohamed A

    2006-04-01

    In Egypt, groundwater production has a high priority. Protecting the quality of the groundwater, specifically when it is used for drinking water, and delineating protection areas around drinking water wellheads for strict land-use restrictions are therefore essential. The delineation methods are numerous; nonetheless, the uniqueness of the hydrogeological, institutional, and social conditions in the Nile Delta region dictates a customized approach. The analysis of the hydrological conditions and land ownership at the Nile Delta indicates the need for an accurate methodology. On the other hand, attempting to calculate the wellhead protection areas around each of the drinking water wells (more than 1500) requires data, human resources, and time that exceed the capabilities of the groundwater management agency. Accordingly, a combination of two methods (simplified variable shapes and numerical modeling) was adopted. Sensitivity analyses carried out using hypothetical modeling conditions identified the pumping rate, clay thickness, hydraulic gradient, vertical conductivity of the clay, and the hydraulic conductivity as the most significant parameters in determining the dimensions of the wellhead protection areas (WHPAs). Tables of WHPA dimensions were calculated using synthetic modeling conditions representing the most common ranges of the significant parameters. Specific WHPA dimensions can then be calculated by interpolation, utilizing the produced tables along with the operational and hydrogeological conditions for the well under consideration. In order to simplify the interpolation of the appropriate WHPA dimensions from the calculated tables, an interactive computer program was written; it accepts real-time data of the significant parameters as input and gives the appropriate WHPA dimensions as output.
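
    The interpolation step might look like the following sketch, assuming a pre-computed table over two of the five significant parameters (the study's real tables cover all of them); the grids and values here are synthetic:

    ```python
    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    # Hypothetical lookup table: WHPA radius (m) tabulated against pumping
    # rate and clay thickness, two of the significant parameters identified.
    pumping_rate = np.array([500.0, 1000.0, 2000.0])    # m^3/day
    clay_thickness = np.array([5.0, 10.0, 20.0])        # m
    whpa_radius = np.array([[120.0, 100.0,  80.0],
                            [180.0, 150.0, 120.0],
                            [260.0, 220.0, 180.0]])     # m, synthetic values

    lookup = RegularGridInterpolator((pumping_rate, clay_thickness), whpa_radius)

    # Interpolate the protection radius for a specific well's conditions.
    print(lookup([[1400.0, 12.0]]))
    ```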

  1. Rotordynamics on the PC: Further Capabilities of ARDS

    NASA Technical Reports Server (NTRS)

    Fleming, David P.

    1997-01-01

    Rotordynamics codes for personal computers are now becoming available. One of the most capable codes is Analysis of RotorDynamic Systems (ARDS), which uses the component mode synthesis method to analyze a system of up to 5 rotating shafts. ARDS was originally written for a mainframe computer but has been successfully ported to a PC; its basic capabilities for steady-state and transient analysis were reported in an earlier paper. Additional functions have now been added to the PC version of ARDS. These functions include: 1) Estimation of the peak response following blade loss without resorting to a full transient analysis; 2) Calculation of response sensitivity to input parameters; 3) Formulation of optimum rotor and damper designs to place critical speeds in desirable ranges or minimize bearing loads; 4) Production of Poincaré plots so the presence of chaotic motion can be ascertained. ARDS produces printed and plotted output. The executable code uses the full array sizes of the mainframe version and fits on a high density floppy disc. Examples of all program capabilities are presented and discussed.
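
    Item 4 can be illustrated with a generic Poincaré-section construction, sampling the rotor response once per forcing period; this is a textbook sketch under our own assumptions, not ARDS code:

    ```python
    import numpy as np

    def poincare_section(t, x, xdot, period):
        """Sample displacement and velocity at integer multiples of the forcing
        period. A single cluster of points suggests synchronous motion; a
        scattered cloud may indicate chaotic motion."""
        t_samples = np.arange(t[0], t[-1], period)
        return np.interp(t_samples, t, x), np.interp(t_samples, t, xdot)

    # Illustrative quasi-periodic response (two incommensurate frequencies)
    t = np.linspace(0.0, 200.0, 200001)
    x = np.sin(2*np.pi*t) + 0.3*np.sin(2*np.pi*np.sqrt(2.0)*t)
    px, pv = poincare_section(t, x, np.gradient(x, t), period=1.0)
    ```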

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gehin, Jess C; Godfrey, Andrew T; Evans, Thomas M

    The Consortium for Advanced Simulation of Light Water Reactors (CASL) is developing a collection of methods and software products known as VERA, the Virtual Environment for Reactor Applications, including a core simulation capability called VERA-CS. A key milestone for this endeavor is to validate VERA against measurements from operating nuclear power reactors. The first step in validation against plant data is to determine the ability of VERA to accurately simulate the initial startup physics tests for Watts Bar Nuclear Power Station, Unit 1 (WBN1) cycle 1. VERA-CS calculations were performed with the Insilico code developed at ORNL using cross section processing from the SCALE system and the transport capabilities within the Denovo transport code using the SPN method. The calculations were performed with ENDF/B-VII.0 cross sections in 252 groups (collapsed to 23 groups for the 3D transport solution). The key results of the comparison of calculations with measurements include initial criticality, control rod worth critical configurations, control rod worth, differential boron worth, and isothermal temperature reactivity coefficient (ITC). The VERA results for these parameters show good agreement with measurements, with the exception of the ITC, which requires additional investigation. Results are also compared to those obtained with Monte Carlo methods and a current industry core simulator.

  3. Free energy and phase equilibria for the restricted primitive model of ionic fluids from Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Orkoulas, Gerassimos; Panagiotopoulos, Athanassios Z.

    1994-07-01

    In this work, we investigate the liquid-vapor phase transition of the restricted primitive model of ionic fluids. We show that at the low temperatures where the phase transition occurs, the system cannot be studied by conventional molecular simulation methods because convergence to equilibrium is slow. To accelerate convergence, we propose cluster Monte Carlo moves capable of moving more than one particle at a time. We then address the issue of charged particle transfers in grand canonical and Gibbs ensemble Monte Carlo simulations, for which we propose a biased particle insertion/destruction scheme capable of sampling short interparticle distances. We compute the chemical potential for the restricted primitive model as a function of temperature and density from grand canonical Monte Carlo simulations and the phase envelope from Gibbs Monte Carlo simulations. Our calculated phase coexistence curve is in agreement with recent results of Caillol obtained on the four-dimensional hypersphere and our own earlier Gibbs ensemble simulations with single-ion transfers, with the exception of the critical temperature, which is lower in the current calculations. Our best estimates for the critical parameters are Tc* = 0.053, ρc* = 0.025. We conclude with possible future applications of the biased techniques developed here for phase equilibrium calculations for ionic fluids.

  4. Development of Benchmark Examples for Delamination Onset and Fatigue Growth Prediction

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2011-01-01

    An approach for assessing the delamination propagation and growth prediction capabilities of commercial finite element codes was developed and demonstrated for the Virtual Crack Closure Technique (VCCT) implementation in ABAQUS. The Double Cantilever Beam (DCB) specimen was chosen as an example. First, benchmark results to assess delamination propagation capabilities under static loading were created using models simulating specimens with different delamination lengths. For each delamination length modeled, the load and displacement at the load point were monitored. The mixed-mode strain energy release rate components were calculated along the delamination front across the width of the specimen. A failure index was calculated by correlating the results with the mixed-mode failure criterion of the graphite/epoxy material. The calculated critical loads and critical displacements for delamination onset at each delamination length modeled were used as a benchmark; the load/displacement relationship computed during automatic propagation should closely match the benchmark case. Second, starting from an initially straight front, the delamination was allowed to propagate based on the algorithms implemented in the commercial finite element software. The load-displacement relationship obtained from the propagation analysis was compared with the benchmark results. Good agreement could be achieved by selecting appropriate input parameters, which were determined in an iterative procedure.
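
    The failure-index step can be illustrated with the Benzeggagh-Kenane (B-K) mixed-mode criterion, a common choice for graphite/epoxy; the abstract does not state which criterion was used, and the input values below are illustrative:

    ```python
    def bk_failure_index(g_i, g_ii, g_ic, g_iic, eta):
        """Failure index from the Benzeggagh-Kenane criterion:
        Gc = GIc + (GIIc - GIc) * (GII/GT)^eta, index = GT/Gc.
        Delamination growth is predicted when the index reaches 1."""
        g_t = g_i + g_ii                                   # total energy release rate
        g_c = g_ic + (g_iic - g_ic) * (g_ii / g_t) ** eta  # mixed-mode toughness
        return g_t / g_c

    # Illustrative graphite/epoxy-like values in kJ/m^2 (not the paper's inputs)
    print(bk_failure_index(g_i=0.15, g_ii=0.05, g_ic=0.21, g_iic=0.77, eta=2.1))
    ```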

  5. FIBER AND INTEGRATED OPTICS. OPTOELECTRONICS: Method for calculation of the parameters of guided waves in anisotropic dielectric waveguides

    NASA Astrophysics Data System (ADS)

    Goncharenko, I. A.

    1989-07-01

    The method of shift formulas is applied to anisotropic dielectric waveguides capable of conserving a given state of polarization of the transmitted signal. Equations are derived for calculation of the propagation constants and of the dispersion of the fundamental modes in waveguides with an anisotropic permittivity and a noncircular shape of the transverse cross section. Distributions of electric and magnetic fields of these modes are obtained in a transverse cross section of the waveguide. It is shown that under the influence of the anisotropy of the dielectric, an energy spot describing the distribution of the mode field becomes an ellipse with its axes oriented along the coordinates coinciding with the principal axes of the permittivity tensor.

  6. Impact of higher order diagrams on phase equilibrium calculations for small molecules using lattice cluster theory

    NASA Astrophysics Data System (ADS)

    Zimmermann, Patrick; Walowski, Christoph; Enders, Sabine

    2018-03-01

    The Lattice Cluster Theory (LCT) provides a powerful tool to predict thermodynamic properties of large molecules (e.g., polymers) of different molecular architectures. When the pure-component parameters of a certain compound have been derived by adjustment to experimental data and the number of atoms is held constant within the molecule so that only the architecture is changed, the LCT is capable of predicting the properties of isomers without further parameter adjustment just based on the incorporation of molecular architecture. Trying to predict the thermodynamic properties of smaller molecules, one might face some challenges, which are addressed in this contribution. After factoring out the mean field term of the partition function, the LCT poses an expression that involves corrections to the mean field depending on molecular architecture, resulting in the free energy formally being expressed as a double series expansion in the lattice coordination number z and the interaction energy ε̃. In the process of deriving all contributing sub-structures within a molecule, some parts have been neglected to this point due to the double series expansion being truncated after the order ε̃²z⁻². We consider the neglected parts that are of the order z⁻³ and reformulate the expression for the free energy within the LCT to achieve a higher predictive capability of the theory when it comes to small isomers and compressible systems. The modified version was successfully applied for phase equilibrium calculations of binary mixtures composed of linear and branched alkanes.

  7. Numerical optimization of Ignition and Growth reactive flow modeling for PAX2A

    NASA Astrophysics Data System (ADS)

    Baker, E. L.; Schimel, B.; Grantham, W. J.

    1996-05-01

    Variable metric nonlinear optimization has been successfully applied to the parameterization of unreacted and reacted-products thermodynamic equations of state and to reactive flow modeling of the HMX-based high explosive PAX2A. The NLQPEB nonlinear optimization program has recently been coupled to the LLNL-developed two-dimensional high rate continuum modeling programs DYNA2D and CALE. The resulting program has the ability to optimize initial modeling parameters. This new optimization capability was used to optimally parameterize the Ignition and Growth reactive flow model against experimental manganin gauge records. The optimization varied the Ignition and Growth reaction rate model parameters in order to minimize the difference between the calculated pressure histories and the experimental pressure histories.

  8. Unsteady Full Annulus Simulations of a Transonic Axial Compressor Stage

    NASA Technical Reports Server (NTRS)

    Herrick, Gregory P.; Hathaway, Michael D.; Chen, Jen-Ping

    2009-01-01

    Two recent research endeavors in turbomachinery at NASA Glenn Research Center have focused on compression system stall inception and compression system aerothermodynamic performance. Physical experiments and computational research are ongoing in support of these research objectives. TURBO, an unsteady, three-dimensional, Navier-Stokes computational fluid dynamics code commissioned and developed by NASA, has been utilized, enhanced, and validated in support of these endeavors. In the research which follows, TURBO is shown to accurately capture compression system flow range, from choke to stall inception, and also to accurately calculate fundamental aerothermodynamic performance parameters. Rigorous full-annulus calculations are performed to validate TURBO's ability to simulate the unstable, unsteady, chaotic stall inception process; as part of these efforts, full-annulus calculations are also performed at a condition approaching choke to further document TURBO's capabilities to compute aerothermodynamic performance data and support a NASA code assessment effort.

  9. Design of a new nozzle for direct current plasma guns with improved spraying parameters

    NASA Astrophysics Data System (ADS)

    Jankovic, M.; Mostaghimi, J.; Pershin, V.

    2000-03-01

    A new design is proposed for direct current plasma spray gas-shroud attachments. It has curvilinearly shaped internal walls aimed at eliminating the cold air entrainment recorded for commercially available conical designs of the shrouded nozzle. The curvilinear nozzle design was tested; it proved to be capable of withstanding high plasma temperatures and enabled satisfactory particle injection. Parallel measurements with an enthalpy probe were performed on the jet emerging from the two different nozzles. Corresponding calculations were also made to predict the plasma flow parameters and the particle parameters. Spray tests were performed by spraying iron-aluminum and MCrAlY coatings onto stainless steel substrates. Coating analyses were performed, and coating qualities, such as microstructure, open porosity, and adhesion strength, were determined. The results indicate that the coatings sprayed with the curvilinear nozzle exhibited lower porosity, higher adhesion strength, and an enhanced microstructure.

  10. HITRAN Application Programming Interface (hapi): Extending HITRAN Capabilities

    NASA Astrophysics Data System (ADS)

    Kochanov, Roman V.; Gordon, Iouli E.; Rothman, Laurence S.; Wcislo, Piotr; Hill, Christian; Wilzewski, Jonas

    2016-06-01

    In this talk we present an update on the HITRAN Application Programming Interface (HAPI). HAPI is a free Python library providing a flexible set of tools to work with the most up-to-date spectroscopic data provided by HITRANonline (www.hitran.org). HAPI gives access to the spectroscopic parameters which are continuously being added to HITRANonline; for instance, these include non-Voigt profile parameters, foreign broadenings and shifts, and line mixing. HAPI enables more accurate spectral calculations for the spectroscopic and astrophysical applications requiring detailed modeling of the broadener. HAPI implements an expert algorithm for line profile selection for a single-layer radiative transfer calculation, and can be extended with custom line profiles and algorithms for their calculation, partition sums, instrumental functions, and temperature and pressure dependences. Possible HAPI applications include spectroscopic data validation and analysis as well as radiative-transfer calculations, experiment verification, and spectroscopic code benchmarking. References: Kochanov RV, Gordon IE, et al., submitted to JQSRT HighRus Special Issue 2016; Kochanov RV, Hill C, et al., ISMS 2015, http://hdl.handle.net/2142/79241; Hill C, Gordon IE, et al., accepted to JQSRT HighRus Special Issue 2016; Wcislo P, Gordon IE, et al., accepted to JQSRT HighRus Special Issue 2016; Wilzewski JS, Gordon IE, et al., JQSRT 2016;168:193-206; Kochanov RV, Gordon IE, et al., Clim Past 2015;11:1097-105.
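
    A minimal usage sketch following the patterns in the HAPI documentation (db_begin, fetch, and absorptionCoefficient_Lorentz are real HAPI calls; the molecule, wavenumber range, and environment values are illustrative, and fetch requires network access to HITRANonline):

    ```python
    from hapi import db_begin, fetch, absorptionCoefficient_Lorentz

    db_begin('hitran_data')           # local directory for downloaded line lists
    fetch('H2O', 1, 1, 3400, 4100)    # molecule ID, isotopologue ID, nu range (cm-1)

    # Absorption coefficient on a wavenumber grid for a given T, p environment
    nu, coef = absorptionCoefficient_Lorentz(SourceTables='H2O',
                                             Environment={'T': 296.0, 'p': 1.0})
    ```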

  11. Decay heat uncertainty for BWR used fuel due to modeling and nuclear data uncertainties

    DOE PAGES

    Ilas, Germina; Liljenfeldt, Henrik

    2017-05-19

    Characterization of the energy released from radionuclide decay in nuclear fuel discharged from reactors is essential for the design, safety, and licensing analyses of used nuclear fuel storage, transportation, and repository systems. There are a limited number of decay heat measurements available for commercial used fuel applications. Because decay heat measurements can be expensive or impractical for covering the multitude of existing fuel designs, operating conditions, and specific application purposes, decay heat estimation relies heavily on computer code prediction. Uncertainty evaluation for calculated decay heat is an important aspect when assessing code prediction and a key factor supporting decision making for used fuel applications. While previous studies have largely focused on uncertainties in code predictions due to nuclear data uncertainties, this study discusses uncertainties in calculated decay heat due to uncertainties in assembly modeling parameters as well as in nuclear data. Capabilities in the SCALE nuclear analysis code system were used to quantify the effect on calculated decay heat of uncertainties in nuclear data and selected manufacturing and operation parameters for a typical boiling water reactor (BWR) fuel assembly. Furthermore, the BWR fuel assembly used as the reference case for this study was selected from a set of assemblies for which high-quality decay heat measurements are available, to assess the significance of the results through comparison with calculated and measured decay heat data.

  13. Evaluation of INL Supplied MOOSE/OSPREY Model: Modeling Water Adsorption on Type 3A Molecular Sieve

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pompilio, L. M.; DePaoli, D. W.; Spencer, B. B.

    The purpose of this study was to evaluate Idaho National Lab's Multiphysics Object-Oriented Simulation Environment (MOOSE) software in modeling the adsorption of water onto type 3A molecular sieve (3AMS). MOOSE can be thought of as a computing framework within which applications modeling specific coupled phenomena can be developed and run. The application titled Off-gas SeParation and REcoverY (OSPREY) has been developed to model gas sorption in packed columns. The sorbate breakthrough curve calculated by MOOSE/OSPREY was compared to results previously obtained in the deep bed hydration tests conducted at Oak Ridge National Laboratory. The coding framework permits selection of various options, when they exist, for modeling a process. For example, the OSPREY module includes options to model the adsorption equilibrium with a Langmuir model or a generalized statistical thermodynamic adsorption (GSTA) model. The vapor-solid equilibria and the operating conditions of the process (e.g., gas phase concentration) are required to calculate the concentration gradient driving the mass transfer between phases. Both the Langmuir and GSTA models were tested in this evaluation. Input variables were either known from experimental conditions, available from the literature (e.g., density), or estimated from the literature (e.g., thermal conductivity of sorbent). Variables were considered independent of time; i.e., rather than having a mass transfer coefficient that varied with time or position in the bed, the parameter was set to remain constant. The calculated results did not coincide with data from laboratory tests. The model accurately estimated the number of bed volumes processed for the given operating parameters, but breakthrough times were not accurately predicted, varying 50% or more from the data. The shape of the breakthrough curves also differed from the experimental data, indicating a much wider sorption band. Model modifications are needed to improve its utility and predictive capability. Recommended improvements include: greater flexibility for input of mass transfer parameters, time-variable gas inlet concentration, direct output of loading and temperature profiles along the bed, and capability to conduct simulations of beds in series.
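
    Of the two isotherm options named above, the Langmuir model is simple to sketch; the parameter values below are illustrative, not INL's fitted constants:

    ```python
    def langmuir_loading(p, q_max, b):
        """Langmuir equilibrium loading: q = q_max * b * p / (1 + b * p).
        The gap between this equilibrium loading and the current bed loading
        drives the interphase mass transfer in a packed-column model."""
        return q_max * b * p / (1.0 + b * p)

    # Illustrative values for water vapor on a 3A sieve (kg water / kg sorbent)
    for p in (0.2, 0.5, 1.0, 2.0):    # water partial pressure, kPa
        print(p, round(langmuir_loading(p, q_max=0.21, b=8.0), 4))
    ```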

  14. Glaucoma Diagnostic Capabilities of Foveal Avascular Zone Parameters Using Optical Coherence Tomography Angiography According to Visual Field Defect Location.

    PubMed

    Kwon, Junki; Choi, Jaewan; Shin, Joong Won; Lee, Jiyun; Kook, Michael S

    2017-12-01

    To assess the diagnostic ability of foveal avascular zone (FAZ) parameters to discriminate glaucomatous eyes with visual field defects (VFDs) in different locations (central vs. peripheral) from normal eyes. In total, 125 participants were separated into 3 groups: normal (n=45), glaucoma with peripheral VFD (PVFD, n=45), and glaucoma with central VFD (CVFD, n=35). The FAZ area, perimeter, and circularity, and the parafoveal vessel density were calculated from optical coherence tomography angiography images. The diagnostic ability of the FAZ parameters and other structural parameters was determined according to glaucomatous VFD location. Associations between the FAZ parameters and central visual function were evaluated. A larger FAZ area and longer FAZ perimeter were observed in the CVFD group than in the PVFD and normal groups. The FAZ area, perimeter, and circularity were better at differentiating glaucomatous eyes with CVFDs from normal eyes [areas under the receiver operating characteristic curve (AUC), 0.78 to 0.88] than at differentiating eyes with PVFDs from normal eyes (AUC, 0.51 to 0.64). The FAZ perimeter had an AUC value similar to the circumpapillary retinal nerve fiber layer and macular ganglion cell-inner plexiform layer thicknesses for differentiating eyes with CVFDs from normal eyes (all P>0.05, DeLong test). The FAZ area was significantly correlated with central visual function (β=-112.7, P=0.035, multivariate linear regression). The FAZ perimeter had good diagnostic capability in differentiating glaucomatous eyes with CVFDs from normal eyes, and may be a potential diagnostic biomarker for detecting glaucomatous patients with CVFDs.

  15. Ab initio calculations of the lattice parameter and elastic stiffness coefficients of bcc Fe with solutes

    DOE PAGES

    Fellinger, Michael R.; Hector, Louis G.; Trinkle, Dallas R.

    2016-10-28

    Here, we present an efficient methodology for computing solute-induced changes in lattice parameters and elastic stiffness coefficients Cij of single crystals using density functional theory. We also introduce a solute strain misfit tensor that quantifies how solutes change lattice parameters due to the stress they induce in the host crystal. Solutes modify the elastic stiffness coefficients through volumetric changes and by altering chemical bonds. We compute each of these contributions to the elastic stiffness coefficients separately, and verify that their sum agrees with changes in the elastic stiffness coefficients computed directly using fully optimized supercells containing solutes. Computing the two elastic stiffness contributions separately is more computationally efficient and provides more information on solute effects than the direct calculations. We compute the solute dependence of polycrystalline averaged shear and Young's moduli from the solute dependence of the single-crystal Cij. We then apply this methodology to substitutional Al, B, Cu, Mn, Si solutes and octahedral interstitial C and N solutes in bcc Fe. Comparison with experimental data indicates that our approach accurately predicts solute-induced changes in the lattice parameter and elastic coefficients. The computed data can be used to quantify solute-induced changes in mechanical properties such as strength and ductility, and can be incorporated into mesoscale models to improve their predictive capabilities.

  16. Spectrum orbit utilization program technical manual SOUP5 Version 3.8

    NASA Technical Reports Server (NTRS)

    Davidson, J.; Ottey, H. R.; Sawitz, P.; Zusman, F. S.

    1984-01-01

    The underlying engineering and mathematical models as well as the computational methods used by the SOUP5 analysis programs, which are part of the R2BCSAT-83 Broadcast Satellite Computational System, are described. Included are the algorithms used to calculate the technical parameters and references to the relevant technical literature. The system provides the following capabilities: requirements file maintenance, data base maintenance, elliptical satellite beam fitting to service areas, plan synthesis from specified requirements, plan analysis, and report generation/query. Each of these functions is briefly described.

  17. Validation of MCNP6 Version 1.0 with the ENDF/B-VII.1 Cross Section Library for Uranium Metal, Oxide, and Solution Systems on the High Performance Computing Platform Moonlight

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chapman, Bryan Scott; MacQuigg, Michael Robert; Wysong, Andrew Russell

    In this document, the code MCNP is validated with ENDF/B-VII.1 cross section data, under the purview of ANSI/ANS-8.24-2007, for use with uranium systems. MCNP is a computer code based on Monte Carlo transport methods. While MCNP has wide-ranging capability in nuclear transport simulation, this validation is limited to the functionality related to neutron transport and the calculation of criticality parameters such as k_eff.

  18. Thermophysical properties of liquid rare earth metals

    NASA Astrophysics Data System (ADS)

    Thakor, P. B.; Sonvane, Y. A.; Patel, H. P.; Jani, A. R.

    2013-06-01

    The thermodynamic properties, such as the long wavelength limit S(0), isothermal compressibility (χT), thermal expansion coefficient (αV), thermal pressure coefficient (γV), specific heat at constant volume (CV), and specific heat at constant pressure (CP), are calculated for liquid rare earth metals. Our newly constructed parameter-free model potential is used to describe the electron-ion interaction, together with the Sarkar et al. (S) local field correction function. Lastly, we conclude that our newly constructed model potential is capable of explaining the thermophysical properties of liquid rare earth metals.

  19. Assessment of environments for Mars Science Laboratory entry, descent, and surface operations

    USGS Publications Warehouse

    Vasavada, Ashwin R.; Chen, Allen; Barnes, Jeffrey R.; Burkhart, P. Daniel; Cantor, Bruce A.; Dwyer-Cianciolo, Alicia M.; Fergason, Robini L.; Hinson, David P.; Justh, Hilary L.; Kass, David M.; Lewis, Stephen R.; Mischna, Michael A.; Murphy, James R.; Rafkin, Scot C.R.; Tyler, Daniel; Withers, Paul G.

    2012-01-01

    The Mars Science Laboratory mission aims to land a car-sized rover on Mars' surface and operate it for at least one Mars year in order to assess whether its field area was ever capable of supporting microbial life. Here we describe the approach used to identify, characterize, and assess environmental risks to the landing and rover surface operations. Novel entry, descent, and landing approaches will be used to accurately deliver the 900-kg rover, including the ability to sense and "fly out" deviations from a best-estimate atmospheric state. A joint engineering and science team developed methods to estimate the range of potential atmospheric states at the time of arrival and to quantitatively assess the spacecraft's performance and risk given its particular sensitivities to atmospheric conditions. Numerical models are used to calculate the atmospheric parameters, with observations used to define model cases, tune model parameters, and validate results. This joint program has resulted in a spacecraft capable of accessing, with minimal risk, the four finalist sites chosen for their scientific merit. The capability to operate the landed rover over the latitude range of candidate landing sites, and for all seasons, was verified against an analysis of surface environmental conditions described here. These results, from orbital and model data sets, also drive engineering simulations of the rover's thermal state that are used to plan surface operations.

  20. Improvements to Integrated Tradespace Analysis of Communications Architectures (ITACA) Network Loading Analysis Tool

    NASA Technical Reports Server (NTRS)

    Lee, Nathaniel; Welch, Bryan W.

    2018-01-01

    NASA's SCENIC project aims to simplify and reduce the cost of space mission planning by replicating the analysis capabilities of commercially licensed software, integrated with relevant analysis parameters specific to SCaN assets and SCaN-supported user missions. SCENIC differs from current tools that perform similar analyses in that it 1) does not require any licensing fees, and 2) provides an all-in-one package for various analysis capabilities that normally require add-ons or multiple tools to complete. As part of SCENIC's capabilities, the ITACA network loading analysis tool will be responsible for assessing the loading on a given network architecture and generating a network service schedule. ITACA will allow users to evaluate the quality of service of a given network architecture and determine whether or not the architecture will satisfy the mission's requirements. ITACA is currently under development, and the following improvements were made during the fall of 2017: optimization of runtime, augmentation of network asset pre-service configuration time, augmentation of Brent's method of root finding, augmentation of network asset FOV restrictions, augmentation of mission lifetimes, and the integration of a SCaN link budget calculation tool. The improvements resulted in (a) a 25% reduction in runtime, (b) more accurate contact window predictions when compared to STK (Registered Trademark) contact window predictions, and (c) increased fidelity through the use of specific SCaN asset parameters.
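
    As a rough illustration of how Brent's method applies to contact-window prediction, the sketch below locates where a synthetic elevation profile crosses a station mask angle; the elevation model, mask value, and grid are our assumptions, not SCENIC code:

    ```python
    import numpy as np
    from scipy.optimize import brentq

    def elevation_deg(t):
        """Synthetic elevation of a spacecraft pass, t in minutes."""
        return 25.0 * np.sin(np.pi * t / 10.0) - 5.0

    mask = 5.0                               # station mask angle, degrees
    f = lambda t: elevation_deg(t) - mask    # roots are contact window edges

    # Bracket sign changes on a coarse grid, then refine with Brent's method.
    grid = np.linspace(0.0, 10.0, 101)
    edges = [brentq(f, a, b) for a, b in zip(grid[:-1], grid[1:])
             if f(a) * f(b) < 0.0]
    print(edges)    # rise and set times of the contact window
    ```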

  1. Fast computation of high energy elastic collision scattering angle for electric propulsion plume simulation

    NASA Astrophysics Data System (ADS)

    Araki, Samuel J.

    2016-11-01

    In the plumes of Hall thrusters and ion thrusters, high energy ions experience elastic collisions with slow neutral atoms. These collisions involve a process of momentum exchange, altering the initial velocity vectors of the collision pair. In addition to the momentum exchange process, ions and atoms can exchange electrons, resulting in slow charge-exchange ions and fast atoms. In these simulations, it is particularly important to perform accurate computations of ion-atom elastic collisions when determining the plume current profile and assessing the integration of spacecraft components. The existing models are currently capable of accurate calculation but are not fast, so the collision calculation can become a bottleneck of plume simulations. This study investigates methods to accelerate an ion-atom elastic collision calculation that includes both momentum- and charge-exchange processes. The scattering angles are pre-computed through a classical approach with an ab initio spin-orbit free potential and are stored in a two-dimensional array as functions of impact parameter and energy. When performing a collision calculation for an ion-atom pair, the scattering angle is computed by a table lookup and multiple linear interpolations, given the relative energy and a randomly determined impact parameter. In order to further accelerate the calculations, the number of collision calculations is reduced by properly defining two cut-off cross-sections for the elastic scattering. In the MCC method, the target atom needs to be sampled; however, it is confirmed that the initial target atom velocity does not play a significant role in typical electric propulsion plume simulations, so the sampling process is unnecessary. With these implementations, the computational run-time to perform a collision calculation is reduced significantly compared to previous methods, while retaining the accuracy of the high-fidelity models.
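
    The table lookup with linear interpolation described above might look like the sketch below; the grids and table values are placeholders rather than the paper's ab initio data, and queries are assumed to lie inside the table bounds:

    ```python
    import numpy as np

    # Precomputed scattering angles chi(E, b); placeholder values here.
    energies = np.logspace(0, 4, 64)              # collision energy, eV
    impact_params = np.linspace(0.1, 20.0, 64)    # impact parameter, Angstrom
    chi_table = np.random.rand(64, 64)            # stands in for real angles

    def scattering_angle(energy, b):
        """Bilinear interpolation in the (energy, impact parameter) table."""
        i = np.clip(np.searchsorted(energies, energy) - 1, 0, len(energies) - 2)
        j = np.clip(np.searchsorted(impact_params, b) - 1, 0, len(impact_params) - 2)
        fe = (energy - energies[i]) / (energies[i+1] - energies[i])
        fb = (b - impact_params[j]) / (impact_params[j+1] - impact_params[j])
        return ((1-fe)*(1-fb)*chi_table[i, j]   + fe*(1-fb)*chi_table[i+1, j]
              + (1-fe)*fb   *chi_table[i, j+1] + fe*fb   *chi_table[i+1, j+1])

    print(scattering_angle(150.0, 3.7))
    ```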

  2. The point-spread function measure of resolution for the 3-D electrical resistivity experiment

    NASA Astrophysics Data System (ADS)

    Oldenborger, Greg A.; Routh, Partha S.

    2009-02-01

    The solution appraisal component of the inverse problem involves investigation of the relationship between our estimated model and the actual model. However, full appraisal is difficult for large 3-D problems such as electrical resistivity tomography (ERT). We tackle the appraisal problem for 3-D ERT via the point-spread functions (PSFs) of the linearized resolution matrix. The PSFs represent the impulse response of the inverse solution and quantify our parameter-specific resolving capability. We implement an iterative least-squares solution of the PSF for the ERT experiment, using on-the-fly calculation of the sensitivity via an adjoint integral equation with stored Green's functions and subgrid reduction. For a synthetic example, analysis of individual PSFs demonstrates the truly 3-D character of the resolution. The PSFs for the ERT experiment are Gaussian-like in shape, with directional asymmetry and significant off-diagonal features. Computation of attributes representative of the blurring and localization of the PSF reveal significant spatial dependence of the resolution with some correlation to the electrode infrastructure. Application to a time-lapse ground-water monitoring experiment demonstrates the utility of the PSF for assessing feature discrimination, predicting artefacts and identifying model dependence of resolution. For a judicious selection of model parameters, we analyse the PSFs and their attributes to quantify the case-specific localized resolving capability and its variability over regions of interest. We observe approximate interborehole resolving capability of less than 1-1.5 m in the vertical direction and less than 1-2.5 m in the horizontal direction. Resolving capability deteriorates significantly outside the electrode infrastructure.

  3. ZERODUR: deterministic approach for strength design

    NASA Astrophysics Data System (ADS)

    Hartmann, Peter

    2012-12-01

    There is an increasing demand for zero-expansion glass ceramic ZERODUR substrates capable of enduring higher operational static loads or accelerations. The integrity of structures such as optical or mechanical elements for satellites surviving rocket launches, filigree lightweight mirrors, wobbling mirrors, and reticle and wafer stages in microlithography must be guaranteed with low failure probability. Their design requires statistically relevant strength data. The traditional approach using the statistical two-parameter Weibull distribution suffered from two problems: the data sets were too small to obtain distribution parameters with sufficient accuracy, and also too small to decide on the validity of the model. This holds especially for the low failure probability levels that are required for reliable applications. Extrapolation to 0.1% failure probability and below led to design strengths so low that higher load applications seemed not to be feasible. New data have been collected with numbers per set large enough to enable tests of the applicability of the three-parameter Weibull distribution. This distribution proved to provide a much better fit to the data. Moreover, it delivers a lower threshold value, i.e., a minimum value for breakage stress, allowing statistical uncertainty to be removed by introducing a deterministic method to calculate design strength. Considerations from the theory of fracture mechanics, which have proven reliable in proof-test qualifications of delicate structures made from brittle materials, enable fatigue due to stress corrosion to be included in a straightforward way. With the formulae derived, either lifetime can be calculated from a given stress, or allowable stress from a minimum required lifetime. The data, distributions, and design strength calculations for several practically relevant surface conditions of ZERODUR are given. The values obtained are significantly higher than those resulting from the two-parameter Weibull distribution approach and no longer subject to statistical uncertainty.
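
    The deterministic design-strength step can be sketched with the standard three-parameter Weibull form F(σ) = 1 - exp(-((σ - σ0)/η)^m); the parameter values below are illustrative, not measured ZERODUR data:

    ```python
    import numpy as np

    def weibull3_failure_prob(stress, threshold, scale, shape):
        """Three-parameter Weibull: F = 1 - exp(-((s - s0)/eta)^m) for s > s0;
        below the threshold s0 the predicted failure probability is zero."""
        z = np.clip((np.asarray(stress, float) - threshold) / scale, 0.0, None)
        return 1.0 - np.exp(-z ** shape)

    def design_stress(target_prob, threshold, scale, shape):
        """Invert F for the allowable stress at a given failure probability;
        the result always stays above the threshold s0."""
        return threshold + scale * (-np.log(1.0 - target_prob)) ** (1.0 / shape)

    # Illustrative parameters only (stresses in MPa)
    print(design_stress(1e-3, threshold=30.0, scale=25.0, shape=3.0))
    ```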

  4. Using polarizable POSSIM force field and fuzzy-border continuum solvent model to calculate pK(a) shifts of protein residues.

    PubMed

    Sharma, Ity; Kaminski, George A

    2017-01-15

    Our Fuzzy-Border (FB) continuum solvent model has been extended and modified to produce hydration parameters for small molecules using the POlarizable Simulations Second-order Interaction Model (POSSIM) framework, with an average error of 0.136 kcal/mol. It was then used to compute pKa shifts for carboxylic and basic residues of the turkey ovomucoid third domain (OMTKY3) protein. The average unsigned errors in the acid and base pKa values were 0.37 and 0.4 pH units, respectively, versus 0.58 and 0.7 pH units as calculated with a previous version of the polarizable protein force field and Poisson-Boltzmann continuum solvent. This POSSIM/FB result is produced with explicit refitting of the hydration parameters to the pKa values of the carboxylic and basic residues of the OMTKY3 protein; thus, the values of the acidity constants can be viewed as additional fitting target data. In addition to calculating pKa shifts for the OMTKY3 residues, we have studied aspartic acid residues of RNase Sa. This was done without any further refitting of the parameters, and agreement with the experimental pKa values is within an average unsigned error of 0.65 pH units. This result included the Asp79 residue, which is buried and thus has a high experimental pKa value of 7.37 units. Thus, the presented model is capable of reproducing pKa results for residues in an environment that is significantly different from the solvated protein surface used in the fitting. Therefore, the POSSIM force field and the FB continuum solvent parameters have been demonstrated to be sufficiently robust and transferable. © 2016 Wiley Periodicals, Inc.

  5. Optimization of a centrifugal compressor impeller using CFD: the choice of simulation model parameters

    NASA Astrophysics Data System (ADS)

    Neverov, V. V.; Kozhukhov, Y. V.; Yablokov, A. M.; Lebedev, A. A.

    2017-08-01

    Nowadays, optimization using computational fluid dynamics (CFD) plays an important role in the design process of turbomachines. However, for successful and productive optimization it is necessary to define the simulation model correctly and rationally. The article deals with the choice of grid and computational domain parameters for the optimization of centrifugal compressor impellers using computational fluid dynamics. Searching for and applying optimal parameters of the grid model, the computational domain, and the solver settings allows engineers to carry out high-accuracy modelling and to use computational capability effectively. The presented research was conducted using the Numeca Fine/Turbo package with the Spalart-Allmaras and Shear Stress Transport turbulence models. Two radial impellers were investigated: a high-pressure impeller at ψT=0.71 and a low-pressure impeller at ψT=0.43. The following parameters of the computational model were considered: the location of inlet and outlet boundaries, type of mesh topology, size of mesh, and the mesh parameter y+. Results of the investigation demonstrate that the choice of optimal parameters leads to a significant reduction of the computational time. Optimal parameters, in comparison with non-optimal but visually similar parameters, can reduce the calculation time up to 4 times. Besides, it is established that some parameters have a major impact on the result of modelling.

  6. A Monte Carlo study of Weibull reliability analysis for space shuttle main engine components

    NASA Technical Reports Server (NTRS)

    Abernethy, K.

    1986-01-01

    The incorporation of a number of additional capabilities into an existing Weibull analysis computer program, and the results of a Monte Carlo simulation study to evaluate the usefulness of the Weibull methods for samples with a very small number of failures and extensive censoring, are discussed. Since the censoring mechanism inherent in the Space Shuttle Main Engine (SSME) data is hard to analyze, it was decided to use a random censoring model, generating censoring times from a uniform probability distribution. Some of the statistical techniques and computer programs used in the SSME Weibull analysis are described. The documented methods were supplemented by adding computer calculations of approximate confidence intervals (using iterative methods) for several parameters of interest. These calculations are based on a likelihood ratio statistic which is asymptotically a chi-squared statistic with one degree of freedom. The assumptions built into the computer simulations, the simulation program, and the techniques used in it are also described. Simulation results are tabulated for various combinations of Weibull shape parameters and numbers of failures in the samples.
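
    A minimal sketch of the random-censoring model described above (Weibull failure times censored by independent uniform censoring times); all parameter values are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def censored_weibull_sample(n, shape, scale, censor_max):
        """Draw n Weibull(shape, scale) failure times and censor each with an
        independent Uniform(0, censor_max) censoring time; failed[i] is False
        when observation i is right-censored."""
        failures = scale * rng.weibull(shape, size=n)
        censor_times = rng.uniform(0.0, censor_max, size=n)
        return np.minimum(failures, censor_times), failures <= censor_times

    times, failed = censored_weibull_sample(20, shape=1.5, scale=100.0,
                                            censor_max=150.0)
    print(int(failed.sum()), "observed failures out of", len(times))
    ```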

  7. Model development and validation of geometrically complex eddy current coils using finite element methods

    NASA Astrophysics Data System (ADS)

    Brown, Alexander; Eviston, Connor

    2017-02-01

    Multiple FEM models of complex eddy current coil geometries were created and validated to calculate the change of impedance due to the presence of a notch. Realistic simulation of eddy current inspections is required for model-assisted probability of detection (MAPOD) studies, inversion algorithms, experimental verification, and tailored probe design for NDE applications. An FEM solver was chosen to model complex real-world situations, including varying probe dimensions and orientations along with complex probe geometries. This will also enable creation of a probe model library database with variable parameters. Verification and validation were performed using other commercially available eddy current modeling software as well as experimentally collected benchmark data. Data analysis and comparison showed that the created models were able to correctly model the probe and conductor interactions and accurately calculate the change in impedance of several experimental scenarios with acceptable error. The promising results of the models enabled the start of an eddy current probe model library to give experimenters easy access to powerful parameter-based eddy current models for alternate project applications.

  8. SSME thrust chamber simulation using Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Przekwas, A. J.; Singhal, A. K.; Tam, L. T.

    1984-01-01

    The capability of the PHOENICS fluid dynamics code in predicting two-dimensional, compressible, and reacting flow in the combustion chamber and nozzle of the space shuttle main engine (SSME) was evaluated. A non-orthogonal body fitted coordinate system was used to represent the nozzle geometry. The Navier-Stokes equations were solved for the entire nozzle with a turbulence model. The wall boundary conditions were calculated based on the wall functions which account for pressure gradients. Results of the demonstration test case reveal all expected features of the transonic nozzle flows. Of particular interest are the locations of normal and barrel shocks, and regions of highest temperature gradients. Calculated performance (global) parameters such as thrust chamber flow rate, thrust, and specific impulse are also in good agreement with available data.

  9. Linear and nonlinear spectroscopy from quantum master equations.

    PubMed

    Fetherolf, Jonathan H; Berkelbach, Timothy C

    2017-12-28

    We investigate the accuracy of the second-order time-convolutionless (TCL2) quantum master equation for the calculation of linear and nonlinear spectroscopies of multichromophore systems. We show that even for systems with non-adiabatic coupling, the TCL2 master equation predicts linear absorption spectra that are accurate over an extremely broad range of parameters and well beyond what would be expected based on the perturbative nature of the approach; non-equilibrium population dynamics calculated with TCL2 for identical parameters are significantly less accurate. For third-order (two-dimensional) spectroscopy, the importance of population dynamics and the violation of the so-called quantum regression theorem degrade the accuracy of TCL2 dynamics. To correct these failures, we combine the TCL2 approach with a classical ensemble sampling of slow microscopic bath degrees of freedom, leading to an efficient hybrid quantum-classical scheme that displays excellent accuracy over a wide range of parameters. In the spectroscopic setting, the success of such a hybrid scheme can be understood through its separate treatment of homogeneous and inhomogeneous broadening. Importantly, the presented approach has the computational scaling of TCL2, with the modest addition of an embarrassingly parallel prefactor associated with ensemble sampling. The presented approach can be understood as a generalized inhomogeneous cumulant expansion technique, capable of treating multilevel systems with non-adiabatic dynamics.

  11. Experimental and theoretical study of substituent effect on 13C NMR chemical shifts of 5-arylidene-2,4-thiazolidinediones

    NASA Astrophysics Data System (ADS)

    Rančić, Milica P.; Trišović, Nemanja P.; Milčić, Miloš K.; Ajaj, Ismail A.; Marinković, Aleksandar D.

    2013-10-01

    The electronic structure of 5-arylidene-2,4-thiazolidinediones has been studied by using experimental and theoretical methodologies. The theoretical calculations of the investigated 5-arylidene-2,4-thiazolidinediones have been performed with quantum chemical methods. The calculated 13C NMR chemical shifts and NBO atomic charges provide an insight into the influence of such a structure on the transmission of electronic substituent effects. Linear free energy relationships (LFERs) have been further applied to their 13C NMR chemical shifts. The correlation analyses for the substituent-induced chemical shifts (SCS) have been performed with σ using the SSP (single substituent parameter) model, with field (σF) and resonance (σR) parameters using the DSP (dual substituent parameter) model, as well as with the Yukawa-Tsuno model. The presented correlations account satisfactorily for the polar and resonance substituent effects operative at the Cβ and C7 carbons, while a reverse substituent effect was found for Cα. The comparison of correlation results for the investigated molecules with those obtained for seven structurally related styrene series has indicated that a specific cross-interaction of the phenyl substituent and the groups attached at the Cβ carbon causes an increased sensitivity of SCS Cβ to the resonance effect as the electron-accepting capability of the group present at Cβ increases.

  12. Hydraulic and separation characteristics of an industrial gas centrifuge calculated with neural networks

    NASA Astrophysics Data System (ADS)

    Butov, Vladimir; Timchenko, Sergey; Ushakov, Ivan; Golovkov, Nikita; Poberezhnikov, Andrey

    2018-03-01

    A single gas centrifuge (GC) is generally used for the separation of binary mixtures of isotopes. The processes taking place within the centrifuge are complex and non-linear, and their characteristics can change over time during long-term operation due to wear of the main structural elements of the GC. The paper is devoted to the determination of the basic operation parameters of the centrifuge with the help of neural networks. We have developed a method for determining the parameters of industrial GC operation by processing statistical data. In this work, we have constructed a neural network that is capable of determining the main hydraulic and separation characteristics of the gas centrifuge, depending on the geometric dimensions of the gas centrifuge, the load value, and the rotor speed.
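
    As a rough illustration of the approach, the sketch below fits a small feed-forward network to synthetic stand-in data with inputs of the kind the paper names (geometry, load, rotor speed); the data-generating function, units, and network size are our assumptions:

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    # Synthetic stand-in for plant statistics: inputs are rotor length (m),
    # rotor diameter (m), feed/load (arb. units), rotor speed (m/s); the
    # target stands in for a separation characteristic.
    X = rng.uniform([0.5, 0.10, 10.0, 600.0],
                    [1.0, 0.20, 100.0, 900.0], size=(500, 4))
    y = (0.3*X[:, 0] + 2.0*X[:, 1] + 0.002*X[:, 2] + 0.001*X[:, 3]
         + 0.01*rng.standard_normal(500))

    model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                         random_state=0).fit(X, y)
    print(model.predict([[0.8, 0.15, 50.0, 750.0]]))
    ```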

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ullmann, John Leonard; Couture, Aaron Joseph; Koehler, Paul E.

    An accurate knowledge of the neutron capture cross section is important for many applications. Experimental measurements are important since theoretical calculations of capture have been notoriously difficult, with the ratio of measured to calculated cross sections often a factor of 2 or more in the 10 keV to 1 MeV region. However, a direct measurement of capture cannot be made on many interesting radioactive nuclides because of their short half-life or backgrounds caused by their nuclear decay. On the other hand, neutron transmission measurements of the total cross section are feasible for a wide range of radioactive nuclides since the detectors are far from the sample, and often are less sensitive to decay radiation. The parameters extracted from a total cross section measurement, which include the average resonance spacing, the neutron strength function, and the average total radiation width ⟨Γγ⟩, provide tight constraints on the calculation of the capture cross section, and when applied produce much more accurate results. These measurements can be made using the intense epithermal neutron flux at the Lujan Center on relatively small quantities of target material. It was the purpose of this project to investigate and develop the capability to make these measurements. A great deal of progress was made towards establishing this capability during 2016, including setting up the flight path and obtaining preliminary results, but more work remains to be done.

  14. Comparison between the evaluation of bacterial regrowth capability in a turbidimeter and biodegradable dissolved organic carbon bioreactor measurements in water.

    PubMed

    Kott, Y; Ribas, F; Frías, J; Lucena, F

    1997-09-01

    In recent years, two different approaches to the study of biodegradable organic matter in distribution systems have been followed. The assimilable organic carbon (AOC) indicates the portion of the dissolved organic matter used by bacteria and converted to biomass, which is measured directly as total bacteria, active bacteria, or colony-forming units, and indirectly as ATP or increase in turbidity. In contrast, the biodegradable dissolved organic carbon (BDOC) is the portion of the dissolved organic carbon that can be mineralized by heterotrophic microorganisms, and it is measured as the difference between the inflow and the outflow of a bioreactor. In this study, at different steps in a water treatment plant, the bacterial regrowth capability was determined by the AOC method, which measures the maximum growth rate using a computerized Monitek turbidimeter. The BDOC was determined using a plug flow bioreactor. Measurements of colony-forming units and total organic carbon (TOC) evolution in the turbidimeter and of colony-forming units at the inflow/outflow of the bioreactor were also performed, calculating at all sampling points the yield coefficient (Y = cfu/ΔTOC) in both systems. The correlations between the results from the bioreactor and the turbidimeter were calculated; a high correlation was observed between BDOC values and all the other parameters, except for Y calculated from the bacterial suspension measured in the turbidimeter.

  15. Dosimetric comparison of helical tomotherapy treatment plans for total marrow irradiation created using GPU and CPU dose calculation engines.

    PubMed

    Nalichowski, Adrian; Burmeister, Jay

    2013-07-01

    To compare optimization characteristics, plan quality, and treatment delivery efficiency between total marrow irradiation (TMI) plans using the new TomoTherapy graphics processing unit (GPU) based dose engine and the existing CPU/cluster based dose engine. Five TMI plans created on an anthropomorphic phantom were optimized and calculated with both dose engines. The planning treatment volume (PTV) included all the bones from head to mid-femur except for the upper extremities. Evaluated organs at risk (OAR) consisted of lung, liver, heart, kidneys, and brain. The following treatment parameters were used to generate the TMI plans: field widths of 2.5 and 5 cm, modulation factors of 2 and 2.5, and pitch of either 0.287 or 0.43. The optimization parameters were chosen based on the PTV and OAR priorities, and the plans were optimized with a fixed number of iterations. The PTV constraint was selected to ensure that at least 95% of the PTV received the prescription dose. The plans were evaluated based on D80 and D50 (dose to 80% and 50% of the OAR volume, respectively) and hotspot volumes within the PTVs. Gamma indices (Γ) were also used to compare planar dose distributions between the two modalities. The optimization and dose calculation times were compared between the two systems, and the treatment delivery times were also evaluated. The results showed very good dosimetric agreement between the GPU and CPU calculated plans for all of the evaluated planning parameters, indicating that both systems converge on nearly identical plans. All D80 and D50 parameters varied by less than 3% of the prescription dose, with an average difference of 0.8%. A gamma analysis of the GPU plans against the baseline CPU plans resulted in over 90% of calculated voxels satisfying the Γ(3%, 3 mm) < 1 criterion; the average fraction of voxels meeting the Γ < 1 criterion across all plans was 97%. In terms of dose optimization and calculation efficiency, there was a 20-fold reduction in planning time with the new GPU system: the average optimization and dose calculation time was 579 min with the traditional CPU/cluster based system versus 26.8 min with the GPU based system. There was no difference in the calculated treatment delivery time per fraction; beam-on time varied with field width and pitch and ranged between 15 and 28 min. The TomoTherapy GPU based dose engine is capable of calculating TMI treatment plans with plan quality nearly identical to plans calculated using the traditional CPU/cluster based system, while significantly reducing the time required for optimization and dose calculation.
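
    The Γ(3%, 3 mm) comparison can be illustrated with a brute-force global gamma computation between two dose grids of identical shape; this is a generic textbook implementation, not the analysis software used in the study:

    ```python
    import numpy as np

    def gamma_index(dose_ref, dose_eval, spacing, dd=0.03, dta=3.0):
        """Global gamma between two 2-D dose grids with pixel size `spacing`
        (mm). dd is the dose tolerance as a fraction of the reference maximum;
        dta is the distance-to-agreement (mm). Brute force, O(N^2) in voxels."""
        ny, nx = dose_ref.shape
        yy, xx = np.meshgrid(np.arange(ny) * spacing, np.arange(nx) * spacing,
                             indexing='ij')
        d_tol = dd * dose_ref.max()
        gamma = np.empty_like(dose_ref, dtype=float)
        for i in range(ny):
            for j in range(nx):
                r2 = ((yy - yy[i, j])**2 + (xx - xx[i, j])**2) / dta**2
                d2 = (dose_eval - dose_ref[i, j])**2 / d_tol**2
                gamma[i, j] = np.sqrt((r2 + d2).min())
        return gamma

    # Pass rate = fraction of voxels with gamma < 1, e.g.:
    # g = gamma_index(ref, ev, spacing=1.0); print((g < 1.0).mean())
    ```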

  16. Diagnostic Capability of Spectral Domain Optical Coherence Tomography for Glaucoma

    PubMed Central

    Wu, Huijuan; de Boer, Johannes F.; Chen, Teresa C.

    2012-01-01

    Purpose To determine the diagnostic capability of spectral domain optical coherence tomography (OCT) in glaucoma patients with visual field (VF) defects. Design Prospective, cross-sectional study. Methods Setting Participants were recruited from a university hospital clinic. Study Population One eye each of 85 normal subjects and 61 glaucoma patients [with an average VF mean deviation (MD) of -9.61 ± 8.76 dB] was randomly selected for the study. A subgroup of the glaucoma patients with early VF defects was analyzed separately. Observation Procedures Spectralis OCT circular scans were performed to obtain peripapillary retinal nerve fiber layer (RNFL) thicknesses. The RNFL diagnostic parameters based on the normative database were used alone or in combination for identifying glaucomatous RNFL thinning. Main Outcome Measures To evaluate diagnostic performance, calculations included areas under the receiver operating characteristic curve (AROC), sensitivity, specificity, positive predictive value, negative predictive value, positive likelihood ratio, and negative likelihood ratio. Results Overall RNFL thickness had the highest AROC value (0.952 for all patients, 0.895 for the early glaucoma subgroup). For all patients, the highest sensitivity (98.4%, CI 96.3-100%) was achieved by using two criteria: ≥1 RNFL sector abnormal at the < 5% level, and an overall classification of borderline or outside normal limits, with specificities of 88.9% (CI 84.0-94.0%) and 87.1% (CI 81.6-92.5%), respectively, for these two criteria. Conclusions Statistical parameters for evaluating the diagnostic performance of the Spectralis spectral domain OCT were good for early perimetric glaucoma and excellent for moderately-advanced perimetric glaucoma. PMID:22265147
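
    The AROC values reported above follow from the standard equivalence between the area under the ROC curve and the Mann-Whitney statistic; a small sketch with hypothetical RNFL thickness distributions:

    ```python
    import numpy as np

    def auroc(diseased, healthy):
        """Area under the ROC curve via the Mann-Whitney statistic.

        Lower RNFL thickness indicates glaucoma here, so AROC is the
        probability that a glaucoma eye scores below a normal eye
        (ties count one half).
        """
        g = np.asarray(diseased)[:, None]
        n = np.asarray(healthy)[None, :]
        return (g < n).mean() + 0.5 * (g == n).mean()

    # Hypothetical overall RNFL thickness values in micrometers
    rng = np.random.default_rng(1)
    glaucoma = rng.normal(70, 12, 61)  # thinner on average
    normal = rng.normal(97, 9, 85)
    print(f"AROC = {auroc(glaucoma, normal):.3f}")
    ```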

  17. Clinical applications of advanced rotational radiation therapy

    NASA Astrophysics Data System (ADS)

    Nalichowski, Adrian

    Purpose: With the fast adoption of emerging technologies, it is critical to fully test and understand their limits and capabilities. In this work we investigate a new graphics processing unit (GPU) based treatment planning algorithm and its applications in helical tomotherapy dose delivery. We explore the limits of the system by applying it to the challenging clinical cases of total marrow irradiation (TMI) and stereotactic radiosurgery (SRS). We also analyze the feasibility of alternative fractionation schemes for total body irradiation (TBI) and TMI based on reported historical data on lung dose and interstitial pneumonitis (IP) incidence rates. Methods and Materials: An anthropomorphic phantom was used to create TMI plans using the new GPU based treatment planning system and the existing CPU/cluster based system. Optimization parameters were selected based on clinically used values for field width, modulation factor and pitch. Treatment plans were also created on the Eclipse treatment planning system (Varian Medical Systems Inc, Palo Alto, CA) using volumetric modulated arc therapy (VMAT) for dose delivery on an iX treatment unit. A retrospective review was performed of 42 publications that reported IP rates along with lung dose, fractionation regimen, dose rate and chemotherapy. The analysis comprised nearly 3200 patients and 34 unique radiation regimens. Multivariate logistic regression was performed to determine parameters associated with IP and to establish a dose response function. Results: The results showed very good dosimetric agreement between the GPU and CPU calculated plans. The results from the SBRT study show that the GPU planning system can maintain 90% target coverage while meeting all the constraints of the RTOG 0631 protocol. Beam-on time for TomoTherapy and flattening filter free RapidArc was much shorter than for Vero or CyberKnife. The retrospective data analysis showed that lung dose and cyclophosphamide (Cy) are both predictors of IP in TBI/TMI treatments. Dose rate was not found to be an independent risk factor for IP. The model failed to establish an accurate dose response function, but the discrete data indicated a radiation dose threshold of 7.6 Gy (EQD2_repair) and 120 mg/kg of Cy below which no IP cases were reported. Conclusion: The TomoTherapy GPU based dose engine is capable of calculating TMI treatment plans with plan quality nearly identical to plans calculated using the traditional CPU/cluster based system, while significantly reducing the time required for optimization and dose calculation. The new system was able to achieve a more uniform dose distribution throughout the target volume and a steeper dose fall-off, resulting in superior OAR sparing when compared with the Eclipse treatment planning system for VMAT delivery. The machine optimization parameters tested for the TMI cases provide a comprehensive overview of the capabilities of the treatment planning station and the associated helical delivery system. The new system also proved to be dosimetrically compatible with other leading modalities for treatments of small and complicated target volumes and was even superior when treatment delivery times were compared. These findings demonstrate that the advanced treatment planning and delivery system from TomoTherapy is well suited for treatments of complicated cases such as TMI and SRS and is often dosimetrically and/or logistically superior to other modalities. The new planning system can easily meet the threshold lung dose constraint established in this study. The results presented here on the capabilities of TomoTherapy and on the identified lung dose threshold provide an opportunity to explore alternative fractionation schemes without sacrificing target coverage or lung toxicity. (Abstract shortened by ProQuest.)
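
    The lung dose threshold above is quoted in EQD2; the linear-quadratic conversion behind such a figure is standard, sketched below with an illustrative alpha/beta value (not one taken from the dissertation):

    ```python
    def eqd2(total_dose_gy, dose_per_fraction_gy, alpha_beta_gy=3.0):
        """Equieffective dose in 2-Gy fractions from the linear-quadratic model.

        EQD2 = D * (d + alpha/beta) / (2 + alpha/beta). The alpha/beta of
        3 Gy for late lung toxicity is an illustrative assumption, not a
        value taken from the dissertation.
        """
        return total_dose_gy * (dose_per_fraction_gy + alpha_beta_gy) / (2.0 + alpha_beta_gy)

    # A 12 Gy TBI regimen delivered in 2 Gy fractions maps to itself
    print(eqd2(12.0, 2.0))  # 12.0
    ```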

  18. Capabilities of Fully Parallelized MHD Stability Code MARS

    NASA Astrophysics Data System (ADS)

    Svidzinski, Vladimir; Galkin, Sergei; Kim, Jin-Soo; Liu, Yueqiang

    2016-10-01

    Results of the full parallelization of the plasma stability code MARS will be reported. MARS calculates eigenmodes in 2D axisymmetric toroidal equilibria in MHD-kinetic plasma models. A parallel version of MARS, named PMARS, has recently been developed at FAR-TECH. Parallelized MARS is an efficient tool for simulation of MHD instabilities with low, intermediate and high toroidal mode numbers within both the fluid and kinetic plasma models implemented in MARS. Parallelization of the code included parallelizing the construction of the matrix for the eigenvalue problem and the inverse vector iteration algorithm implemented in MARS for the solution of the formulated eigenvalue problem. Construction of the matrix is parallelized by distributing the load among processors assigned to different magnetic surfaces. The solution of the eigenvalue problem is parallelized by reproducing the steps of the MARS algorithm with parallel libraries and procedures. Parallelized MARS is capable of calculating eigenmodes with significantly increased spatial resolution: up to 5,000 adapted radial grid points with up to 500 poloidal harmonics. Such resolution is sufficient for simulation of kink, tearing and peeling-ballooning instabilities with physically relevant parameters. Work is supported by the U.S. DOE SBIR program.
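
    Inverse vector iteration, the eigenvalue algorithm named above, reduces to a short loop for a small dense matrix; the sketch below shows the basic idea only, not the block-structured, parallelized MARS implementation:

    ```python
    import numpy as np

    def inverse_iteration(A, sigma, tol=1e-10, max_iter=200):
        """Eigenpair of A with eigenvalue closest to the shift sigma.

        Each step solves (A - sigma*I) y = x and normalizes; a production
        code would factor the shifted matrix once and reuse it.
        """
        M = A - sigma * np.eye(A.shape[0])
        x = np.random.default_rng(0).standard_normal(A.shape[0])
        x /= np.linalg.norm(x)
        lam = sigma
        for _ in range(max_iter):
            y = np.linalg.solve(M, x)
            x = y / np.linalg.norm(y)
            lam_new = x @ A @ x  # Rayleigh quotient estimate
            if abs(lam_new - lam) < tol:
                break
            lam = lam_new
        return lam_new, x

    A = np.diag([1.0, 2.0, 5.0]) + 0.01  # small coupling between modes
    lam, vec = inverse_iteration(A, sigma=4.8)
    print(round(lam, 6))  # converges to the eigenvalue near 5
    ```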

  19. A medical image-based graphical platform -- features, applications and relevance for brachytherapy.

    PubMed

    Fonseca, Gabriel P; Reniers, Brigitte; Landry, Guillaume; White, Shane; Bellezzo, Murillo; Antunes, Paula C G; de Sales, Camila P; Welteman, Eduardo; Yoriyaz, Hélio; Verhaegen, Frank

    2014-01-01

    Brachytherapy dose calculation is commonly performed using the Task Group-No 43 Report-Updated protocol (TG-43U1) formalism. Recently, a more accurate approach has been proposed that can handle tissue composition, tissue density, body shape, applicator geometry, and dose reporting either in media or water. Some model-based dose calculation algorithms are based on Monte Carlo (MC) simulations. This work presents a software platform capable of processing medical images and treatment plans, and preparing the required input data for MC simulations. The A Medical Image-based Graphical platfOrm-Brachytherapy module (AMIGOBrachy) is a user interface, coupled to the MCNP6 MC code, for absorbed dose calculations. The AMIGOBrachy was first validated in water for a high-dose-rate (192)Ir source. Next, dose distributions were validated in uniform phantoms consisting of different materials. Finally, dose distributions were obtained in patient geometries. Results were compared against a treatment planning system including a linear Boltzmann transport equation (LBTE) solver capable of handling nonwater heterogeneities. The TG-43U1 source parameters are in good agreement with literature with more than 90% of anisotropy values within 1%. No significant dependence on the tissue composition was observed comparing MC results against an LBTE solver. Clinical cases showed differences up to 25%, when comparing MC results against TG-43U1. About 92% of the voxels exhibited dose differences lower than 2% when comparing MC results against an LBTE solver. The AMIGOBrachy can improve the accuracy of the TG-43U1 dose calculation by using a more accurate MC dose calculation algorithm. The AMIGOBrachy can be incorporated in clinical practice via a user-friendly graphical interface. Copyright © 2014 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.

  20. Oblique Aerial Photography Tool for Building Inspection and Damage Assessment

    NASA Astrophysics Data System (ADS)

    Murtiyoso, A.; Remondino, F.; Rupnik, E.; Nex, F.; Grussenmeyer, P.

    2014-11-01

    Aerial photography has a long history of being employed for mapping purposes due to some of its main advantages, including large-area imaging from above and minimization of field work. In recent years, multi-camera aerial systems have become a practical sensor technology in a growing geospatial market, complementary to the traditional vertical views. Multi-camera aerial systems capture not only the conventional nadir views, but also tilted images at the same time. In this paper, a particular use of such imagery in the fields of building inspection and disaster assessment is addressed. The main idea is to inspect a building from the four cardinal directions by using monoplotting functionalities. The developed application allows the user to measure building heights and distances and to digitize man-made structures, creating 3D surfaces and building models. The realized GUI is capable of identifying a building from several oblique points of view, calculating the approximate height of buildings and ground distances, and performing basic vectorization. The geometric accuracy of the results remains a function of several parameters, namely image resolution, the quality of the available parameters (DEM, calibration and orientation values), user expertise and measuring capability.

  1. Software for determining the true displacement of faults

    NASA Astrophysics Data System (ADS)

    Nieto-Fuentes, R.; Nieto-Samaniego, Á. F.; Xu, S.-S.; Alaniz-Álvarez, S. A.

    2014-03-01

    One of the most important parameters of faults is the true (or net) displacement, which is measured by restoring two originally adjacent points, called "piercing points", to their original positions. This measurement is rarely practicable because piercing points are seldom observed in natural outcrops. Much more common is the measurement of the apparent displacement of a marker. Methods to calculate the true displacement of faults using descriptive geometry, trigonometry or vector algebra are common in the literature, and most of them solve a specific situation from a large number of possible combinations of the fault parameters. True displacements are not routinely calculated because it is a tedious task, despite their importance and the relatively simple methodology. We believe that the solution is to develop software capable of performing this work. In a previous publication, our research group proposed a method to calculate the true displacement of faults by solving most combinations of fault parameters using simple trigonometric equations. The purpose of this contribution is to present a computer program for calculating the true displacement of faults. The input data are the dip of the fault; the pitch angles of the markers, slickenlines and observation lines; and the marker separation. To avoid the common difficulties involved in switching between operating systems, the software is developed in the Java programming language. The computer program can be used as a tool in education and will also be useful for calculating the true fault displacement in geological and engineering work. The application resolves the cases with a known direction of net slip, which is commonly assumed to be parallel to the slickenlines. This assumption is not always valid and must be used with caution, because the slickenlines are formed during a step of the incremental displacement on the fault surface, whereas the net slip is related to the finite slip.

  2. Gear optimization

    NASA Technical Reports Server (NTRS)

    Vanderplaats, G. N.; Chen, Xiang; Zhang, Ning-Tian

    1988-01-01

    The use of formal numerical optimization methods for the design of gears is investigated. To achieve this, computer codes were developed for the analysis of spur gears and spiral bevel gears. These codes calculate the life, dynamic load, bending strength, surface durability, gear weight and size, and various geometric parameters. It is necessary to calculate all such important responses because they all represent competing requirements in the design process. The codes developed here were written in subroutine form and coupled to the COPES/ADS general purpose optimization program. This code allows the user to define the optimization problem at the time of program execution. Typical design variables include face width, number of teeth and diametral pitch. The user is free to choose any calculated response as the design objective to minimize or maximize and may impose lower and upper bounds on any calculated responses. Typical examples include life maximization with limits on dynamic load, stress, weight, etc., or minimization of weight subject to limits on life, dynamic load, etc. The research codes were written in modular form for easy expansion and so that they could be combined to create a multiple reduction optimization capability in the future.

  3. Improvements to Nuclear Data and Its Uncertainties by Theoretical Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Danon, Yaron; Nazarewicz, Witold; Talou, Patrick

    2013-02-18

    This project addresses three important gaps in existing evaluated nuclear data libraries that represent a significant hindrance to highly advanced modeling and simulation capabilities for the Advanced Fuel Cycle Initiative (AFCI). This project will: Develop advanced theoretical tools to compute prompt fission neutron and gamma-ray characteristics well beyond average spectra and multiplicity, and produce new evaluated files of U and Pu isotopes, along with some minor actinides; Perform state-of-the-art fission cross-section modeling and calculations using global and microscopic model input parameters, leading to truly predictive fission cross-section capabilities. Consistent calculations for a suite of Pu isotopes will be performed; Implement innovative data assimilation tools, which will reflect the nuclear data evaluation process much more accurately, and lead to a new generation of uncertainty quantification files. New covariance matrices will be obtained for Pu isotopes and compared to existing ones. The deployment of a fleet of safe and efficient advanced reactors that minimize radiotoxic waste and are proliferation-resistant is a clear and ambitious goal of AFCI. While in the past the design, construction and operation of a reactor were supported through empirical trials, this new phase in nuclear energy production is expected to rely heavily on advanced modeling and simulation capabilities. To be truly successful, a program for advanced simulations of innovative reactors will have to develop advanced multi-physics capabilities, to be run on massively parallel supercomputers, and to incorporate adequate and precise underlying physics. And all these areas have to be developed simultaneously to achieve those ambitious goals. Of particular interest are reliable fission cross-section uncertainty estimates (including important correlations) and evaluations of prompt fission neutron and gamma-ray spectra and uncertainties.

  4. Modules based on the geochemical model PHREEQC for use in scripting and programming languages

    USGS Publications Warehouse

    Charlton, Scott R.; Parkhurst, David L.

    2011-01-01

    The geochemical model PHREEQC is capable of simulating a wide range of equilibrium reactions between water and minerals, ion exchangers, surface complexes, solid solutions, and gases. It also has a general kinetic formulation that allows modeling of nonequilibrium mineral dissolution and precipitation, microbial reactions, decomposition of organic compounds, and other kinetic reactions. To facilitate use of these reaction capabilities in scripting languages and other models, PHREEQC has been implemented in modules that easily interface with other software. A Microsoft COM (component object model) has been implemented, which allows PHREEQC to be used by any software that can interface with a COM server, for example, Excel®, Visual Basic®, Python, or MATLAB®. PHREEQC has been converted to a C++ class, which can be included in programs written in C++. The class also has been compiled in libraries for Linux and Windows that allow PHREEQC to be called from C++, C, and Fortran. A limited set of methods implements the full reaction capabilities of PHREEQC for each module. Input methods use strings or files to define reaction calculations in exactly the same formats used by PHREEQC. Output methods provide a table of user-selected model results, such as concentrations, activities, saturation indices, and densities. The PHREEQC module can add geochemical reaction capabilities to surface-water, groundwater, and watershed transport models. It is possible to store and manipulate solution compositions and reaction information for many cells within the module. In addition, the object-oriented nature of the PHREEQC modules simplifies implementation of parallel processing for reactive-transport models. The PHREEQC COM module may be used in scripting languages to fit parameters; to plot PHREEQC results for field, laboratory, or theoretical investigations; or to develop new models that include simple or complex geochemical calculations.
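
    A minimal sketch of driving the COM module from Python with pywin32, in the spirit of the scripting use described above; the ProgID and database path follow the IPhreeqc distribution's conventions but should be treated as assumptions to adjust for a local installation:

    ```python
    # Sketch: driving the PHREEQC COM module from Python via pywin32 (Windows).
    # The ProgID "IPhreeqcCOM.Object" and the database path are assumptions
    # based on the IPhreeqc distribution; adjust both for a local install.
    import win32com.client

    phc = win32com.client.Dispatch("IPhreeqcCOM.Object")
    phc.LoadDatabase(r"C:\Program Files\USGS\IPhreeqcCOM\database\phreeqc.dat")

    phc.RunString("""
    SOLUTION 1
        temp       25
        pH         7.0
        Ca         1.0
        Alkalinity 2.0
    SELECTED_OUTPUT
        -pH                 true
        -saturation_indices Calcite
    END
    """)

    # The selected output comes back as a table; the first row holds headings.
    for row in phc.GetSelectedOutputArray():
        print(row)
    ```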

  5. Modules based on the geochemical model PHREEQC for use in scripting and programming languages

    USGS Publications Warehouse

    Charlton, S.R.; Parkhurst, D.L.

    2011-01-01

    The geochemical model PHREEQC is capable of simulating a wide range of equilibrium reactions between water and minerals, ion exchangers, surface complexes, solid solutions, and gases. It also has a general kinetic formulation that allows modeling of nonequilibrium mineral dissolution and precipitation, microbial reactions, decomposition of organic compounds, and other kinetic reactions. To facilitate use of these reaction capabilities in scripting languages and other models, PHREEQC has been implemented in modules that easily interface with other software. A Microsoft COM (component object model) has been implemented, which allows PHREEQC to be used by any software that can interface with a COM server, for example, Excel®, Visual Basic®, Python, or MATLAB®. PHREEQC has been converted to a C++ class, which can be included in programs written in C++. The class also has been compiled in libraries for Linux and Windows that allow PHREEQC to be called from C++, C, and Fortran. A limited set of methods implements the full reaction capabilities of PHREEQC for each module. Input methods use strings or files to define reaction calculations in exactly the same formats used by PHREEQC. Output methods provide a table of user-selected model results, such as concentrations, activities, saturation indices, and densities. The PHREEQC module can add geochemical reaction capabilities to surface-water, groundwater, and watershed transport models. It is possible to store and manipulate solution compositions and reaction information for many cells within the module. In addition, the object-oriented nature of the PHREEQC modules simplifies implementation of parallel processing for reactive-transport models. The PHREEQC COM module may be used in scripting languages to fit parameters; to plot PHREEQC results for field, laboratory, or theoretical investigations; or to develop new models that include simple or complex geochemical calculations. © 2011.

  6. Enhancing Access to Drought Information Using the CUAHSI Hydrologic Information System

    NASA Astrophysics Data System (ADS)

    Schreuders, K. A.; Tarboton, D. G.; Horsburgh, J. S.; Sen Gupta, A.; Reeder, S.

    2011-12-01

    The National Drought Information System (NIDIS) Upper Colorado River Basin pilot study is investigating and establishing capabilities for better dissemination of drought information for early warning and management. As part of this study we are using and extending functionality from the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI) Hydrologic Information System (HIS) to provide better access to drought-related data in the Upper Colorado River Basin. The CUAHSI HIS is a federated system for sharing hydrologic data. It comprises multiple data servers, referred to as HydroServers, that publish data in a standard XML format called Water Markup Language (WaterML), using web services referred to as WaterOneFlow web services. HydroServers can also publish geospatial data using Open Geospatial Consortium (OGC) web map, feature and coverage services and are capable of hosting web and map applications that combine geospatial datasets with observational data served via web services. HIS also includes a centralized metadata catalog that indexes data from registered HydroServers and a data access client referred to as HydroDesktop. For NIDIS, we have established a HydroServer to publish drought index values as well as the input data used in drought index calculations. Primary input data required for drought index calculation include streamflow, precipitation, reservoir storages, snow water equivalent, and soil moisture. We have developed procedures to redistribute the input data to the time and space scales chosen for drought index calculation, namely half-monthly time intervals for HUC 10 subwatersheds. The spatial redistribution approach used for each input parameter depends on the spatial linkages for that parameter; i.e., the redistribution procedure for streamflow depends on the upstream/downstream connectivity of the stream network, and the precipitation redistribution procedure depends on elevation to account for orographic effects. A set of drought indices is then calculated from the redistributed data. We have created automated data and metadata harvesters that periodically scan and harvest new data from each of the input databases and calculate extensions to the resulting derived data sets, ensuring that the data available on the drought server are kept up to date. This paper will describe this system, showing how it facilitates the integration of data from multiple sources to inform the planning and management of water resources during drought. The system may be accessed at http://drought.usu.edu.

  7. SF-FDTD analysis of a predictive physical model for parallel aligned liquid crystal devices

    NASA Astrophysics Data System (ADS)

    Márquez, Andrés.; Francés, Jorge; Martínez, Francisco J.; Gallego, Sergi; Alvarez, Mariela L.; Calzado, Eva M.; Pascual, Inmaculada; Beléndez, Augusto

    2017-08-01

    Recently we demonstrated a novel, simplified model that enables calculation of the voltage-dependent retardance provided by parallel aligned liquid crystal on silicon (PA-LCoS) devices for a very wide range of incidence angles and any wavelength in the visible. To our knowledge it represents the most simplified approach still showing predictive capability. Deeper insight into the physics behind the simplified model is necessary to understand whether the parameters in the model are physically meaningful. Since the PA-LCoS is a black box where we have no information about the physical parameters of the device, we cannot perform this kind of analysis using the experimental retardance measurements. In this work we develop realistic simulations of the nonlinear tilt of the liquid crystal director across the thickness of the liquid crystal layer in PA devices. We consider these profiles to have a sine-like shape, which is a good approximation for typical ranges of applied voltage in commercial PA-LCoS microdisplays. For these simulations we develop a rigorous method based on the split-field finite-difference time-domain (SF-FDTD) technique, which provides realistic retardance values. These values are used as the experimental measurements to which the simplified model is fitted. From this analysis we learn that the simplified model is very robust, providing unambiguous solutions when fitting its parameters. We also learn that two of the parameters in the model are physically meaningful, providing a useful reverse-engineering approach, with predictive capability, to probe the internal characteristics of the PA-LCoS device.
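
    A sine-like tilt profile translates into retardance through the standard uniaxial index formula; the sketch below integrates it numerically with hypothetical material constants (n_o, n_e, thickness), not the parameters of any actual LCoS device:

    ```python
    import numpy as np

    def retardance_nm(theta_max_deg, d_um=3.0, n_o=1.52, n_e=1.72, samples=2000):
        """Optical retardance of a PA layer with a sine-like director tilt.

        theta(z) = theta_max * sin(pi*z/d) is the tilt from the substrate
        plane, as assumed in the abstract; n_o, n_e and d are hypothetical
        material values, not those of a specific LCoS device.
        """
        z = np.linspace(0.0, d_um, samples)
        theta = np.deg2rad(theta_max_deg) * np.sin(np.pi * z / d_um)
        n_eff = n_o * n_e / np.sqrt(n_e**2 * np.sin(theta)**2
                                    + n_o**2 * np.cos(theta)**2)
        # rectangle-rule integral of (n_eff - n_o) over the layer, in nm
        return (n_eff - n_o).mean() * d_um * 1e3

    for tilt in (0, 30, 60, 90):  # larger voltage -> larger maximum tilt
        print(tilt, round(retardance_nm(tilt), 1))  # retardance decreases
    ```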

  8. Overview of computational control research at UT Austin

    NASA Technical Reports Server (NTRS)

    Bong, Wie

    1989-01-01

    An overview of current research activities at UT Austin is presented to discuss certain technical issues in the following areas: (1) Computer-Aided Nonlinear Control Design: In this project, the describing function method is employed for the nonlinear control analysis and design of a flexible spacecraft equipped with pulse-modulated reaction jets. The INCA program has been enhanced to allow the numerical calculation of describing functions as well as nonlinear limit cycle analysis in the frequency domain; (2) Robust Linear Quadratic Gaussian (LQG) Compensator Synthesis: Robust control design techniques and software tools are developed for flexible space structures with parameter uncertainty. In particular, an interactive, robust multivariable control design capability is being developed for the INCA program; and (3) LQR-Based Autonomous Control System for the Space Station: In this project, real-time implementation of an LQR-based autonomous control system is investigated for the space station with time-varying inertias and with significant multibody dynamic interactions.

  9. Simple electrical model and initial experiments for intra-body communications.

    PubMed

    Gao, Y M; Pun, S H; Du, M; Mak, P U; Vai, M I

    2009-01-01

    Intra-Body Communication (IBC) is a short-range "wireless" communication technique that has appeared in recent years. This technique relies on the conductive properties of human tissue to transmit electric signals through the human body. It is beneficial for networking devices and sensors on the human body, and is especially suitable for wearable sensors, telemedicine systems and home health care systems, as the data rates of physiologic parameters are generally low. In this article, galvanic coupling type IBC applied to the human limb was investigated through both a mathematical model and related experiments. The experimental results showed that the proposed mathematical model is capable of describing galvanic coupling type IBC at low frequency. Additionally, the calculated and experimental results also indicated that the electric signal induced by the transmitters of IBC can penetrate deep into human muscle, providing evidence that IBC is capable of acting as a networking technique for implantable devices.

  10. Analysis of the dynamics of movement of the landing vehicle with an inflatable braking device on the final trajectory under the influence of wind load

    NASA Astrophysics Data System (ADS)

    Koryanov, V.; Kazakovtsev, V.; Harri, A.-M.; Heilimo, J.; Haukka, H.; Aleksashkin, S.

    2015-10-01

    This research work is devoted to the analysis of the angular motion of a landing vehicle (LV) with an inflatable braking device (IBD), taking into account the influence of wind load at the final stage of the descent. Methods for calculating the parameters of the angular motion of the landing vehicle with an inflatable braking device, allowing for the small asymmetries that can give rise to complex dynamic phenomena, are used to analyze the motion of the landing vehicle at the final stage of its trajectory in the atmosphere.

  11. Diagnostic Capability of Peripapillary Three-dimensional Retinal Nerve Fiber Layer Volume for Glaucoma Using Optical Coherence Tomography Volume Scans.

    PubMed

    Khoueir, Ziad; Jassim, Firas; Poon, Linda Yi-Chieh; Tsikata, Edem; Ben-David, Geulah S; Liu, Yingna; Shieh, Eric; Lee, Ramon; Guo, Rong; Papadogeorgou, Georgia; Braaf, Boy; Simavli, Huseyin; Que, Christian; Vakoc, Benjamin J; Bouma, Brett E; de Boer, Johannes F; Chen, Teresa C

    2017-10-01

    To determine the diagnostic capability of peripapillary 3-dimensional (3D) retinal nerve fiber layer (RNFL) volume measurements from spectral-domain optical coherence tomography (OCT) volume scans for open-angle glaucoma (OAG). Assessment of diagnostic accuracy. Setting: Academic clinical setting. Total of 180 patients (113 OAG and 67 normal subjects). One eye per subject was included. Peripapillary 3D RNFL volumes were calculated for global, quadrant, and sector regions, using 4 different-size annuli. Peripapillary 2D RNFL thickness circle scans were also obtained. Area under the receiver operating characteristic curve (AUROC) values, sensitivity, specificity, positive and negative predictive values, positive and negative likelihood ratios. Among all 2D and 3D RNFL parameters, best diagnostic capability was associated with inferior quadrant 3D RNFL volume of the smallest annulus (AUROC value 0.977). Otherwise, global 3D RNFL volume AUROC values were comparable to global 2D RNFL thickness AUROC values for all 4 annulus sizes (P values: .0593 to .6866). When comparing the 4 annulus sizes for global RNFL volume, the smallest annulus had the best AUROC values (P values: .0317 to .0380). The smallest-size annulus may have the best diagnostic potential, partly owing to having no areas excluded for being larger than the 6 × 6 mm² scanned region. Peripapillary 3D RNFL volume showed excellent diagnostic performance for detecting glaucoma. Peripapillary 3D RNFL volume parameters have the same or better diagnostic capability compared to peripapillary 2D RNFL thickness measurements, although differences were not statistically significant. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Detection capability of the IMS seismic network based on ambient seismic noise measurements

    NASA Astrophysics Data System (ADS)

    Gaebler, Peter J.; Ceranna, Lars

    2016-04-01

    All nuclear explosions - on the Earth's surface, underground, underwater or in the atmosphere - are banned by the Comprehensive Nuclear-Test-Ban Treaty (CTBT). As part of this treaty, a verification regime was put into place to detect, locate and characterize nuclear explosion tests at any time, by anyone and anywhere on the Earth. The International Monitoring System (IMS) plays a key role in the verification regime of the CTBT. Of the different monitoring techniques used in the IMS, the seismic waveform approach is the most effective technology for monitoring underground nuclear testing and for identifying and characterizing potential nuclear events. This study introduces a method of seismic threshold monitoring to assess an upper magnitude limit of a potential seismic event in a given geographical region. The method is based on ambient seismic background noise measurements at the individual IMS seismic stations as well as on global distance correction terms for body wave magnitudes, which are calculated using the seismic reflectivity method. From our investigations we conclude that a global detection threshold of around mb 4.0 can be achieved using only stations from the primary seismic network; a clear latitudinal dependence of the detection threshold can be observed between the northern and southern hemispheres. Including the stations of the auxiliary seismic IMS network results in a slight improvement of the global detection capability. However, including wave arrivals from distances greater than 120 degrees, mainly PKP-wave arrivals, leads to a significant improvement in average global detection capability; in particular, it improves the detection threshold in the southern hemisphere. We further investigate the dependence of the detection capability on spatial (latitude and longitude) and temporal (time) parameters, as well as on parameters such as source type and the percentage of operational IMS stations.

  13. An accelerator-based Boron Neutron Capture Therapy (BNCT) facility based on the 7Li(p,n)7Be

    NASA Astrophysics Data System (ADS)

    Musacchio González, Elizabeth; Martín Hernández, Guido

    2017-09-01

    BNCT (Boron Neutron Capture Therapy) is a therapeutic modality in which tumor cells previously loaded with the stable isotope 10B are irradiated with thermal or epithermal neutrons. This technique is capable of delivering a high dose to the tumor cells while the healthy surrounding tissue receives a much lower dose, depending on the 10B biodistribution. In this study, therapeutic gain and tumor dose per unit target power were calculated as parameters to evaluate treatment quality. The common neutron-producing reaction for accelerator-based BNCT, 7Li(p,n)7Be, with a reaction threshold of 1880.4 keV, was considered as the primary source of neutrons. Proton energies near the reaction threshold were employed for deep-seated brain tumors. These calculations were performed with the Monte Carlo N-Particle (MCNP) code. A simple but effective beam shaping assembly (BSA) was designed that produces a high therapeutic gain compared with previously proposed facilities based on the same nuclear reaction.

  14. A mesoscopic simulation of static and dynamic wetting using many-body dissipative particle dynamics

    NASA Astrophysics Data System (ADS)

    Ghorbani, Najmeh; Pishevar, Ahmadreza

    2018-01-01

    A many-body dissipative particle dynamics simulation is applied here to pave the way for investigating the behavior of mesoscale droplets after impact on horizontal solid substrates. First, hydrophobic and hydrophilic substrates are simulated by tuning the solid-liquid interfacial interaction parameters of an innovative conservative force model. The static contact angles are calculated on homogeneous and on several patterned surfaces and compared with the values predicted by Cassie's law in order to verify the model. The results properly quantify the increase in surface superhydrophobicity obtained by surface patterning. The drop impact phenomenon is then studied by calculating the spreading factor and the dimensionless height versus dimensionless time, with comparisons made between the results and experimental values for three different static contact angles. The results show the capability of the procedure in calculating the maximum spreading factor, which is a significant quantity in ink-jet printing and coating processes.

  15. Neutronics calculation of RTP core

    NASA Astrophysics Data System (ADS)

    Rabir, Mohamad Hairie B.; Zin, Muhammad Rawi B. Mohamed; Karim, Julia Bt. Abdul; Bayar, Abi Muttaqin B. Jalal; Usang, Mark Dennis Anak; Mustafa, Muhammad Khairul Ariff B.; Hamzah, Na'im Syauqi B.; Said, Norfarizan Bt. Mohd; Jalil, Muhammad Husamuddin B.

    2017-01-01

    Reactor calculation and simulation are significantly important to ensure the safety and better utilization of a research reactor. Malaysia's PUSPATI TRIGA Reactor (RTP) achieved initial criticality on June 28, 1982. The reactor is designed to effectively support various fields of basic nuclear research, manpower training, and radioisotope production. Since the early 1990s, neutronics modelling has been used as part of its routine in-core fuel management activities. Several computer codes have been used at RTP since then, based on 1D neutron diffusion, 2D neutron diffusion and 3D Monte Carlo neutron transport methods. This paper describes current progress in, and gives an overview of, neutronics modelling development at RTP. Several important parameters were analysed, such as keff, reactivity, neutron flux, power distribution and fission product build-up, for the latest core configuration. The developed core neutronics model was validated by means of comparison with experimental and measurement data. Along with the RTP core model, a calculation procedure was also developed to establish better prediction capability of RTP's behaviour.

  16. A partition function-based weighting scheme in force field parameter development using ab initio calculation results in global configurational space.

    PubMed

    Wu, Yao; Dai, Xiaodong; Huang, Niu; Zhao, Lifeng

    2013-06-05

    In force field parameter development using ab initio potential energy surfaces (PES) as target data, an important but often neglected matter is the lack of a weighting scheme with optimal discrimination power for fitting the target data. Here, we developed a novel partition function-based weighting scheme, which not only fits the target potential energies exponentially, like the general Boltzmann weighting method, but also reduces the effect of fitting errors that lead to overfitting. The van der Waals (vdW) parameters of benzene and propane were reparameterized by using the new weighting scheme to fit high-level ab initio PESs probed by a water molecule in global configurational space. The molecular simulation results indicate that the newly derived parameters are capable of reproducing experimental properties over a broader range of temperatures, which supports the partition function-based weighting scheme. Our simulation results also suggest that structural properties are more sensitive to vdW parameters than to partial atomic charge parameters in these systems, although the electrostatic interactions are still important for energetic properties. As no prerequisite conditions are required, the partition function-based weighting method may be applied in developing any type of force field parameter. Copyright © 2013 Wiley Periodicals, Inc.
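
    The generic Boltzmann-type weighting that the authors build on can be sketched in a few lines; the energies and temperature-like parameter below are illustrative, and this is not the authors' exact partition-function scheme:

    ```python
    import numpy as np

    def boltzmann_weights(energies, kT=0.6):
        """Normalized exponential weights for PES fitting points.

        Low-energy configurations receive large weights; kT controls how
        quickly high-energy points are de-emphasized. This is the generic
        Boltzmann-type form, not the authors' exact partition-function scheme.
        """
        e = np.asarray(energies)
        w = np.exp(-(e - e.min()) / kT)  # shift by the minimum for stability
        return w / w.sum()               # normalization = "partition function"

    # Hypothetical relative ab initio energies (kcal/mol) of PES points
    print(boltzmann_weights([0.0, 0.3, 1.0, 2.5, 5.0]).round(4))
    ```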

  17. Active subspace uncertainty quantification for a polydomain ferroelectric phase-field model

    NASA Astrophysics Data System (ADS)

    Leon, Lider S.; Smith, Ralph C.; Miles, Paul; Oates, William S.

    2018-03-01

    Quantum-informed ferroelectric phase-field models capable of predicting material behavior are necessary for facilitating the development and production of many adaptive structures and intelligent systems. Uncertainty is present in these models, given the quantum scale at which calculations take place. One necessary analysis is to determine how the uncertainty in the response can be attributed to the uncertainty in the model inputs or parameters. A second analysis is to identify active subspaces within the original parameter space, which quantify the directions in which the model response varies most dominantly, thus reducing sampling effort and computational cost. In this investigation, we identify an active subspace for a polydomain ferroelectric phase-field model. Using the active variables as our independent variables, we then construct a surrogate model and perform Bayesian inference. Once we quantify the uncertainties in the active variables, we obtain uncertainties for the original parameters via an inverse mapping. The analysis provides insight into how active subspace methodologies can be used to reduce the computational power needed to perform Bayesian inference on model parameters informed by experimental or simulated data.
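
    Active subspaces are commonly identified from the eigendecomposition of the sampled gradient covariance matrix; a minimal sketch with a hypothetical four-parameter model:

    ```python
    import numpy as np

    def active_subspace(grads, k=1):
        """Dominant directions of model variability from sampled gradients.

        Approximates C = E[grad f grad f^T] by a sample average and returns
        the eigenvalues and the top-k eigenvectors (the active subspace).
        """
        G = np.asarray(grads)            # shape (n_samples, n_params)
        C = G.T @ G / G.shape[0]
        vals, vecs = np.linalg.eigh(C)   # ascending order
        return vals[::-1], vecs[:, ::-1][:, :k]

    # Hypothetical model f(p) = sin(w . p): all variability lies along w
    rng = np.random.default_rng(2)
    w = np.array([1.0, 0.5, 0.05, 0.01])
    P = rng.uniform(-1.0, 1.0, size=(500, 4))
    grads = np.cos(P @ w)[:, None] * w   # analytic gradient of sin(w . p)
    eigvals, W1 = active_subspace(grads, k=1)
    print(eigvals.round(4), W1.ravel().round(3))  # W1 is parallel to w
    ```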

  18. Theoretical prediction of crystallization kinetics of a supercooled Lennard-Jones fluid

    NASA Astrophysics Data System (ADS)

    Gunawardana, K. G. S. H.; Song, Xueyu

    2018-05-01

    The first-order curvature correction to the crystal-liquid interfacial free energy is calculated using a theoretical model based on the interfacial excess thermodynamic properties. The correction parameter (δ), which is analogous to the Tolman length at a liquid-vapor interface, is found to be 0.48 ± 0.05 for a Lennard-Jones (LJ) fluid. We show that this curvature correction is crucial in predicting the nucleation barrier when the size of the crystal nucleus is small. The thermodynamic driving force (Δμ) corresponding to available simulated nucleation conditions is also calculated by combining the simulated data with a classical density functional theory. In this paper, we show that classical nucleation theory is capable of predicting the nucleation barrier in excellent agreement with the simulated results when the curvature correction to the interfacial free energy is accounted for.
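
    A worked sketch of how a first-order curvature correction enters the classical nucleation barrier; the form gamma(r) = gamma_inf*(1 - 2*delta/r) and all numbers except delta ≈ 0.48 are illustrative assumptions in reduced LJ units, not the paper's fitted values:

    ```python
    import numpy as np

    def nucleation_barrier(gamma_inf, delta, dmu, rho_s, corrected=True):
        """CNT barrier for a spherical crystal nucleus, in LJ reduced units.

        dG(r) = -(4/3) pi r^3 rho_s dmu + 4 pi r^2 gamma(r), with the
        Tolman-like first-order form gamma(r) = gamma_inf * (1 - 2*delta/r).
        """
        r = np.linspace(0.5, 40.0, 20000)
        gamma = gamma_inf * (1 - 2 * delta / r) if corrected else gamma_inf
        dG = -(4.0 / 3.0) * np.pi * r**3 * rho_s * dmu + 4 * np.pi * r**2 * gamma
        return dG.max()  # barrier = maximum of dG over the nucleus radius

    # delta ~ 0.48 from the abstract; the other numbers are illustrative
    print(nucleation_barrier(0.36, 0.48, 0.10, 0.95))         # curvature-corrected
    print(nucleation_barrier(0.36, 0.48, 0.10, 0.95, False))  # flat-interface CNT
    ```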

  19. Automatic Differentiation in Quantum Chemistry with Applications to Fully Variational Hartree-Fock.

    PubMed

    Tamayo-Mendoza, Teresa; Kreisbeck, Christoph; Lindh, Roland; Aspuru-Guzik, Alán

    2018-05-23

    Automatic differentiation (AD) is a powerful tool that allows calculating derivatives of implemented algorithms with respect to all of their parameters up to machine precision, without the need to explicitly add any additional functions. Thus, AD has great potential in quantum chemistry, where gradients are omnipresent but also difficult to obtain, and researchers typically spend a considerable amount of time finding suitable analytical forms when implementing derivatives. Here, we demonstrate that AD can be used to compute gradients with respect to any parameter throughout a complete quantum chemistry method. We present DiffiQult, a Hartree-Fock implementation, entirely differentiated with the use of AD tools. DiffiQult is a software package written in plain Python with minimal deviation from standard code which illustrates the capability of AD to save human effort and time in implementations of exact gradients in quantum chemistry. We leverage the obtained gradients to optimize the parameters of one-particle basis sets in the context of the floating Gaussian framework.
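
    The core idea, differentiating through a quantum-chemistry expression without hand-derived gradients, can be illustrated with a one-integral example in JAX (not the DiffiQult code itself; the exponents and separation are hypothetical):

    ```python
    import jax
    import jax.numpy as jnp

    def overlap(alpha, beta, R):
        """Overlap <g_alpha | g_beta> of two normalized s-type Gaussians
        separated by a distance R (closed form)."""
        p = alpha + beta
        return (2.0 * jnp.sqrt(alpha * beta) / p) ** 1.5 * jnp.exp(-alpha * beta * R**2 / p)

    # d(overlap)/d(alpha) to machine precision, with no hand-derived formula
    dS_dalpha = jax.grad(overlap, argnums=0)
    print(overlap(0.5, 0.8, 1.4))    # the integral value
    print(dS_dalpha(0.5, 0.8, 1.4))  # its gradient w.r.t. the basis exponent
    ```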

  20. AVR Microcontroller-based automated technique for analysis of DC motors

    NASA Astrophysics Data System (ADS)

    Kaur, P.; Chatterji, S.

    2014-01-01

    This paper provides essential information on the development of a 'dc motor test and analysis control card' using the AVR series ATMega32 microcontroller. This card can be interfaced to a PC, calculates parameters such as motor losses and efficiency, and plots characteristics for dc motors. Presently, different tests and methods are available to evaluate motor parameters, but this paper discusses a single, universal, user-friendly automated set-up. It has been accomplished by designing data acquisition and SCR bridge firing hardware based on the AVR ATMega32 microcontroller. This hardware has the capability to drive phase-controlled rectifiers and acquire real-time values of motor current, voltage, temperature and speed. The various analyses feasible with the designed hardware are of immense importance for dc motor manufacturers and quality-sensitive users. Through this paper, the authors aim to provide details of this AVR-based hardware, which can be used for dc motor parameter analysis and also for motor control applications.

  1. System IDentification Programs for AirCraft (SIDPAC)

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    2002-01-01

    A collection of computer programs for aircraft system identification is described and demonstrated. The programs, collectively called System IDentification Programs for AirCraft, or SIDPAC, were developed in MATLAB as m-file functions. SIDPAC has been used successfully at NASA Langley Research Center with data from many different flight test programs and wind tunnel experiments. SIDPAC includes routines for experiment design, data conditioning, data compatibility analysis, model structure determination, equation-error and output-error parameter estimation in both the time and frequency domains, real-time and recursive parameter estimation, low order equivalent system identification, estimated parameter error calculation, linear and nonlinear simulation, plotting, and 3-D visualization. An overview of SIDPAC capabilities is provided, along with a demonstration of the use of SIDPAC with real flight test data from the NASA Glenn Twin Otter aircraft. The SIDPAC software is available without charge to U.S. citizens by request to the author, contingent on the requestor completing a NASA software usage agreement.

  2. Automatic Differentiation in Quantum Chemistry with Applications to Fully Variational Hartree–Fock

    PubMed Central

    2018-01-01

    Automatic differentiation (AD) is a powerful tool that allows calculating derivatives of implemented algorithms with respect to all of their parameters up to machine precision, without the need to explicitly add any additional functions. Thus, AD has great potential in quantum chemistry, where gradients are omnipresent but also difficult to obtain, and researchers typically spend a considerable amount of time finding suitable analytical forms when implementing derivatives. Here, we demonstrate that AD can be used to compute gradients with respect to any parameter throughout a complete quantum chemistry method. We present DiffiQult, a Hartree–Fock implementation, entirely differentiated with the use of AD tools. DiffiQult is a software package written in plain Python with minimal deviation from standard code which illustrates the capability of AD to save human effort and time in implementations of exact gradients in quantum chemistry. We leverage the obtained gradients to optimize the parameters of one-particle basis sets in the context of the floating Gaussian framework.

  3. Introductory study of the chemical behavior of jet emissions in photochemical smog. [computerized simulation

    NASA Technical Reports Server (NTRS)

    Whitten, G. Z.; Hogo, H.

    1976-01-01

    Jet aircraft emissions data from the literature were used as initial conditions for a series of computer simulations of photochemical smog formation in static air. The chemical kinetics mechanism used in these simulations was an updated version which contains certain parameters designed to account for hydrocarbon reactivity. These parameters were varied to simulate the reaction rate constants and average carbon numbers associated with the jet emissions. The roles of surface effects, variable light sources, the NO/NO2 ratio, continuous emissions, and untested mechanistic parameters were also assessed. The results of these calculations indicate that present jet emissions are capable of producing oxidant by themselves. The hydrocarbon/nitrogen oxides ratio of present jet aircraft emissions is much higher than that of automobiles. These two ratios appear to bracket the hydrocarbon/nitrogen oxides ratio that maximizes ozone production. Hence an enhanced effect is seen in the simulation when jet exhaust emissions are mixed with automobile emissions.

  4. Combined Molecular Dynamics Simulation-Molecular-Thermodynamic Theory Framework for Predicting Surface Tensions.

    PubMed

    Sresht, Vishnu; Lewandowski, Eric P; Blankschtein, Daniel; Jusufi, Arben

    2017-08-22

    A molecular modeling approach is presented with a focus on quantitative predictions of the surface tension of aqueous surfactant solutions. The approach combines classical Molecular Dynamics (MD) simulations with a molecular-thermodynamic theory (MTT) [Y. J. Nikas, S. Puvvada, D. Blankschtein, Langmuir 1992, 8, 2680]. The MD component is used to calculate thermodynamic and molecular parameters that are needed in the MTT model to determine the surface tension isotherm. The MD/MTT approach provides the important link between the surfactant bulk concentration, the experimental control parameter, and the surfactant surface concentration, the MD control parameter. We demonstrate the capability of the MD/MTT modeling approach on nonionic alkyl polyethylene glycol surfactants at the air-water interface and observe reasonable agreement between the predicted surface tensions and the experimental surface tension data over a wide range of surfactant concentrations below the critical micelle concentration. Our modeling approach can be extended to ionic surfactants and their mixtures with both ionic and nonionic surfactants at liquid-liquid interfaces.

  5. THE HYPERFINE STRUCTURE OF THE ROTATIONAL SPECTRUM OF HDO AND ITS EXTENSION TO THE THz REGION: ACCURATE REST FREQUENCIES AND SPECTROSCOPIC PARAMETERS FOR ASTROPHYSICAL OBSERVATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cazzoli, Gabriele; Lattanzi, Valerio; Puzzarini, Cristina

    2015-06-10

    The rotational spectrum of the mono-deuterated isotopologue of water, HD(16)O, has been investigated in the millimeter- and submillimeter-wave frequency regions, up to 1.6 THz. The Lamb-dip technique has been exploited to obtain sub-Doppler resolution and to resolve the hyperfine (hf) structure due to the deuterium and hydrogen nuclei, thus enabling the accurate determination of the corresponding hf parameters. Their experimental determination has been supported by high-level quantum-chemical calculations. The Lamb-dip measurements have been supplemented by Doppler-limited measurements (weak high-J and high-frequency transitions) in order to extend the predictive capability of the available spectroscopic constants. The possibility of resolving hf splittings in astronomical spectra has been discussed.

  6. Simulation on turning aspheric surface method via oscillating feed

    NASA Astrophysics Data System (ADS)

    Kong, Fanxing; Li, Zengqiang; Sun, Tao

    2014-08-01

    It is quite difficult to manufacture optical components that combine a high-gradient ellipsoid and hyperboloid with demanding surface quality requirements. To solve this problem, in this paper we present a turning and forming method based on the oscillating feed of an R-θ layout lathe, and we analyze the machining of the ellipsoid segment and the hyperboloid segment separately through oscillating feed. Parameters along each trajectory during processing are also calculated, yielding displacement, velocity, acceleration and other quantities. The simulation results show that this rotary turning method is capable of ensuring that the cutter stays on the equidistant line of the meridian cross-section curve of the workpiece while machining a high-gradient aspheric surface, which helps achieve high surface quality. The method also provides a new approach and a theoretical basis for manufacturing high-quality aspheric surfaces and for extending the functionality of available twin-spindle lathes.

  7. Plasma MRI Experiments at UW-Madison

    NASA Astrophysics Data System (ADS)

    Flanagan, K.; Clark, M.; Desangles, V.; Siller, R.; Wallace, J.; Weisberg, D.; Forest, C. B.

    2015-11-01

    Experiments for driving Keplerian-like flow profiles on both the Plasma Couette Experiment Upgrade (PCX-U) and the Wisconsin Plasma Astrophysics Laboratory (WiPAL) user facility are described. Instead of driving flow at the boundaries, as is typical in many liquid metal Couette experiments, a global drive is implemented. A large radial current is drawn across a small axial field generating torque across the whole profile. This global electrically driven flow is capable of producing profiles similar to Keplerian flow. PCX-U has been purposely constructed for MRI experiments, while similar experiments on the WiPAL device show the versatility of the user facility and provide a larger plasma volume. Numerical calculations show the predicted parameter spaces for exciting the MRI in these plasmas and the equilibrium flow profiles expected. In both devices, relevant MRI parameters appear to be within reach of typical operating characteristics.

  8. Real gas flow parameters for NASA Langley 22-inch Mach 20 helium tunnel

    NASA Technical Reports Server (NTRS)

    Hollis, Brian R.

    1992-01-01

    A computational procedure was developed which can be used to determine the flow properties in hypersonic helium wind tunnels in which real gas behavior is significant. In this procedure, a three-coefficient virial equation of state and the assumption of isentropic nozzle flow are employed to determine the tunnel reservoir, nozzle, throat, freestream, and post-normal shock conditions. This method was applied to a range of conditions which encompasses the operational capabilities of the LaRC 22-Inch Mach 20 Helium Tunnel. Results are presented graphically in the form of real gas correction factors which can be applied to perfect gas calculations. Important thermodynamic properties of helium are also plotted versus pressure and temperature. The computational scheme used to determine the real-helium flow parameters was incorporated into a FORTRAN code which is discussed.
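
    The density form of a three-coefficient virial equation of state can be solved for the real-gas correction factor by simple fixed-point iteration; a sketch follows, where the virial coefficients are illustrative placeholders, not the report's helium values:

    ```python
    R_HE = 2077.1  # specific gas constant of helium, J/(kg K)

    def density_from_virial(p_pa, T_k, B, C, tol=1e-12, max_iter=100):
        """Solve p = rho*R*T*(1 + B*rho + C*rho**2) for rho by fixed-point iteration.

        Density-form three-coefficient virial equation of state; B (m^3/kg)
        and C (m^6/kg^2) below are illustrative, not the report's values.
        """
        rho = p_pa / (R_HE * T_k)  # perfect-gas starting guess
        for _ in range(max_iter):
            rho_new = p_pa / (R_HE * T_k * (1.0 + B * rho + C * rho**2))
            if abs(rho_new - rho) < tol * rho:
                break
            rho = rho_new
        return rho_new

    p, T = 20e6, 300.0     # hypothetical reservoir condition: 20 MPa, 300 K
    B, C = 5.6e-4, 1.0e-7  # hypothetical virial coefficients
    rho = density_from_virial(p, T, B, C)
    Z = p / (rho * R_HE * T)  # real-gas correction factor vs. perfect gas
    print(f"rho = {rho:.2f} kg/m^3, Z = {Z:.4f}")
    ```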

  9. Lattice parameters and electronic structure of BeMgZnO quaternary solid solutions: Experiment and theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toporkov, M.; Avrutin, V.; Morkoç, H.

    2016-03-07

    Be(x)Mg(y)Zn(1-x-y)O semiconductor solid solutions are attractive for UV optoelectronics and electronic devices owing to their wide bandgap and capability of lattice-matching to ZnO. In this work, a combined experimental and theoretical study of lattice parameters, bandgaps, and underlying electronic properties, such as changes in band edge wavefunctions in Be(x)Mg(y)Zn(1-x-y)O thin films, is carried out. Theoretical ab initio calculations predicting structural and electronic properties for the whole compositional range of materials are compared with experimental measurements from samples grown by plasma-assisted molecular beam epitaxy on (0001) sapphire substrates. The measured a and c lattice parameters for the quaternary alloys Be(x)Mg(y)Zn(1-x-y)O with x = 0-0.19 and y = 0-0.52 are within 1%-2% of those calculated using the generalized gradient approximation to density functional theory. Additionally, composition-independent ternary BeZnO and MgZnO bowing parameters were determined for the a and c lattice parameters and the bandgap. The electronic properties were calculated using the exchange-tuned Heyd-Scuseria-Ernzerhof hybrid functional. The measured optical bandgaps of the quaternary alloys are in good agreement with those predicted by the theory. Strong localization of band edge wavefunctions near oxygen atoms in the BeMgZnO alloy, in comparison to bulk ZnO, is consistent with the large Be-related bandgap bowing of BeZnO and BeMgZnO (6.94 eV). The results in aggregate show that precise control over lattice parameters by tuning the quaternary composition would allow strain control in Be(x)Mg(y)Zn(1-x-y)O/ZnO heterostructures with the possibility to achieve both compressive and tensile strain, where the latter supports formation of a two-dimensional electron gas at the interface.

  10. Investigation of Maximum Blade Loading Capability of Lift-Offset Rotors

    NASA Technical Reports Server (NTRS)

    Yeo, Hyeonsoo; Johnson, Wayne

    2013-01-01

    Maximum blade loading capability of a coaxial, lift-offset rotor is investigated using a rotorcraft configuration designed in the context of short-haul, medium-size civil and military missions. The aircraft was sized for a 6600-lb payload and a range of 300 nm. The rotor planform and twist were optimized for hover and cruise performance. For the present rotor performance calculations, the collective pitch angle is progressively increased up to and through stall with the shaft angle set to zero. The effects of lift offset on rotor lift, power, controls, and blade airloads and structural loads are examined. The maximum lift capability of the coaxial rotor increases as lift offset increases and extends well beyond the McHugh lift boundary as the lift potential of the advancing blades is fully realized. A parametric study is conducted to examine the differences between the present coaxial rotor and the McHugh rotor in terms of maximum lift capabilities and to identify important design parameters that define the maximum lift capability of the rotor. The effects of lift offset on rotor blade airloads and structural loads are also investigated. Flap bending moment increases substantially as lift offset increases to carry the hub roll moment, even at low collective values. The magnitude of the flap bending moment is dictated by the lift-offset value (hub roll moment) but is less sensitive to collective and speed.

  11. Modelling of dynamic contact length in rail grinding process

    NASA Astrophysics Data System (ADS)

    Zhi, Shaodan; Li, Jianyong; Zarembski, A. M.

    2014-09-01

    Rails endure frequent dynamic loads from passing trains, which they must support while guiding the wheels. Accumulated stress concentrations cause plastic deformation of the rail and give rise to corrugations, rolling contact fatigue cracks, and other defects, leading to increasingly hazardous conditions and even derailment risk. Rail grinding technology was therefore developed, in which rotating grinding stones are pressed onto the rail to remove these defects. In practice, however, rail grinding is directed by experience rather than scientific guidance and lacks flexible, well-founded operating methods. Because a grinding control unit holds the grinding stones, the rail grinding process combines the characteristics of surface grinding with those of a running railway vehicle. The contact length between the grinding stone and the rail is analyzed first, because the contact length is a critical parameter for measuring the grinding capability of the stones. A model of the railway vehicle unit coupled with the grinding stone is then built to represent the rail grinding car. A theoretical model of the contact length is developed from geometrical analysis, and the calculation is refined to account for the grinding car's dynamic behavior during the grinding process. Results are obtained by feeding both the operation parameters and the structure parameters into the models, revealing the rail grinding process by combining the grinding mechanism with the railway vehicle system.
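
    As a point of reference (a standard grinding-theory result, not taken from this paper), geometric models of surface grinding usually start from the static contact length

    \[ l_g = \sqrt{a_e \, d_s} \]

    where \(a_e\) is the depth of cut and \(d_s\) is the diameter of the grinding stone; dynamic contact-length models of the kind developed here then correct this geometric value for deflection and vehicle motion during grinding.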

  12. Extension of the SAFT-VR Mie EoS To Model Homonuclear Rings and Its Parametrization Based on the Principle of Corresponding States.

    PubMed

    Müller, Erich A; Mejía, Andrés

    2017-10-24

    The statistical associating fluid theory of variable range employing a Mie potential (SAFT-VR Mie) proposed by Lafitte et al. (J. Chem. Phys. 2013, 139, 154504) is one of the latest versions of the SAFT family. This particular version has been shown to have a remarkable capability to connect experimental determinations, theoretical calculations, and molecular simulation results. However, the theoretical development restricts the model to chains of beads connected in a linear fashion. In this work, the capabilities of the SAFT-VR Mie equation of state for modeling phase equilibria are extended to the case of planar ring compounds. The proposed modification replaces the Helmholtz energy of chain formation with an empirical contribution based on a parallelism to the second-order thermodynamic perturbation theory for hard sphere trimers. The proposed expression is given in terms of an extra parameter, χ, that depends on the number of beads, m_s, and the geometry of the ring. The model is used to describe the phase equilibrium for planar ring compounds formed of Mie isotropic segments for m_s = 3, 4, 5 (two configurations), and 7 (two configurations). The resulting molecular model is further parametrized by invoking a corresponding states principle, resulting in sets of parameters that can be used indistinctly in theoretical calculations or in molecular simulations without any further refinement. The extent and performance of the methodology have been exemplified by predicting the phase equilibria and vapor pressure curves for aromatic hydrocarbons (benzene, hexafluorobenzene, toluene), heterocyclic molecules (2,5-dimethylfuran, sulfolane, tetrahydro-2H-pyran, tetrahydrofuran), and polycyclic aromatic hydrocarbons (naphthalene, pyrene, anthracene, pentacene, and coronene). An important aspect of the theory is that the parameters of the model can be used directly in molecular dynamics (MD) simulations to calculate equilibrium phase properties and interfacial tensions with an accuracy that rivals other coarse-grained and united-atom models; for example, liquid densities are predicted with a maximum absolute average deviation of 3% from both the theory and the MD simulations, while the interfacial tension is predicted with a maximum absolute average deviation of 8%. The extension to mixtures is exemplified by considering a binary system of hexane (chain fluid) and tetrahydro-2H-pyran (ring fluid).
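
    For context, the Mie pair potential that underlies the SAFT-VR Mie EoS (a standard expression in this literature, not restated in the abstract) is

    \[ u^{\mathrm{Mie}}(r) = \mathcal{C}\,\epsilon \left[ \left( \frac{\sigma}{r} \right)^{\lambda_r} - \left( \frac{\sigma}{r} \right)^{\lambda_a} \right], \qquad \mathcal{C} = \frac{\lambda_r}{\lambda_r - \lambda_a} \left( \frac{\lambda_r}{\lambda_a} \right)^{\lambda_a / (\lambda_r - \lambda_a)} \]

    where \(\epsilon\) and \(\sigma\) are the energy and size parameters and \(\lambda_r\), \(\lambda_a\) the repulsive and attractive exponents; the ring correction discussed above enters through the extra parameter \(\chi\) in the chain/ring Helmholtz energy term.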

  13. An evaluation of TRAC-PF1/MOD1 computer code performance during posttest simulations of Semiscale MOD-2C feedwater line break transients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hall, D.G.; Watkins, J.C.

    This report documents an evaluation of the TRAC-PF1/MOD1 reactor safety analysis computer code during computer simulations of feedwater line break transients. The experimental database for the evaluation included the results of three bottom feedwater line break tests performed in the Semiscale Mod-2C test facility. The tests modeled 14.3% (S-FS-7), 50% (S-FS-11), and 100% (S-FS-6B) breaks. The test facility and the TRAC-PF1/MOD1 model used in the calculations are described. Evaluations of the accuracy of the calculations are presented in the form of comparisons of measured and calculated histories of selected parameters associated with the primary and secondary systems. In addition to evaluating the accuracy of the code calculations, the computational performance of the code during the simulations was assessed. A conclusion was reached that the code is capable of making feedwater line break transient calculations efficiently, but there is room for significant improvements in the simulations that were performed. Recommendations are made for follow-on investigations to determine how to improve future feedwater line break calculations and for code improvements to make the code easier to use.

  14. Diagnostic capability of spectral-domain optical coherence tomography for glaucoma.

    PubMed

    Wu, Huijuan; de Boer, Johannes F; Chen, Teresa C

    2012-05-01

    To determine the diagnostic capability of spectral-domain optical coherence tomography in glaucoma patients with visual field defects. Prospective, cross-sectional study. Participants were recruited from a university hospital clinic. One eye of 85 normal subjects and 61 glaucoma patients with an average visual field mean deviation of -9.61 ± 8.76 dB was selected randomly for the study. A subgroup of the glaucoma patients with early visual field defects was analyzed separately. Spectralis optical coherence tomography (Heidelberg Engineering, Inc) circular scans were performed to obtain peripapillary retinal nerve fiber layer (RNFL) thicknesses. The RNFL diagnostic parameters based on the normative database were used alone or in combination for identifying glaucomatous RNFL thinning. To evaluate diagnostic performance, calculations included areas under the receiver operating characteristic curve, sensitivity, specificity, positive predictive value, negative predictive value, positive likelihood ratio, and negative likelihood ratio. Overall RNFL thickness had the highest area under the receiver operating characteristic curve values: 0.952 for all patients and 0.895 for the early glaucoma subgroup. For all patients, the highest sensitivity (98.4%; 95% confidence interval, 96.3% to 100%) was achieved by using 2 criteria: ≥ 1 RNFL sector being abnormal at the < 5% level and an overall classification of borderline or outside normal limits, with specificities of 88.9% (95% confidence interval, 84.0% to 94.0%) and 87.1% (95% confidence interval, 81.6% to 92.5%), respectively, for these 2 criteria. Statistical parameters for evaluating the diagnostic performance of the Spectralis spectral-domain optical coherence tomography were good for early perimetric glaucoma and excellent for moderately advanced perimetric glaucoma. Copyright © 2012 Elsevier Inc. All rights reserved.
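
    The diagnostic statistics reported in studies like this one follow directly from the 2x2 confusion counts. The following minimal Python sketch (our illustration, not the study's code; variable names are hypothetical) shows the calculations, with the AUROC obtained from the Mann-Whitney statistic:

    ```python
    import numpy as np

    def diagnostic_stats(y_true, y_pred):
        """y_true: 1 = glaucoma, 0 = normal; y_pred: binary test result."""
        tp = np.sum((y_true == 1) & (y_pred == 1))
        tn = np.sum((y_true == 0) & (y_pred == 0))
        fp = np.sum((y_true == 0) & (y_pred == 1))
        fn = np.sum((y_true == 1) & (y_pred == 0))
        sens = tp / (tp + fn)            # sensitivity
        spec = tn / (tn + fp)            # specificity
        ppv = tp / (tp + fp)             # positive predictive value
        npv = tn / (tn + fn)             # negative predictive value
        return sens, spec, ppv, npv, sens / (1 - spec), (1 - sens) / spec

    def auroc(scores_glaucoma, scores_normal):
        """Area under the ROC curve via the Mann-Whitney U statistic."""
        wins = sum((g > n) + 0.5 * (g == n)
                   for g in scores_glaucoma for n in scores_normal)
        return wins / (len(scores_glaucoma) * len(scores_normal))
    ```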

  15. An Efficient Supervised Training Algorithm for Multilayer Spiking Neural Networks

    PubMed Central

    Xie, Xiurui; Qu, Hong; Liu, Guisong; Zhang, Malu; Kurths, Jürgen

    2016-01-01

    The spiking neural networks (SNNs) are the third generation of neural networks and perform remarkably well in cognitive tasks such as pattern recognition. The spike emitting and information processing mechanisms found in biological cognitive systems motivate the application of the hierarchical structure and temporal encoding mechanism in spiking neural networks, which have exhibited strong computational capability. However, the hierarchical structure and temporal encoding approach require neurons to process information serially in space and time, respectively, which reduces training efficiency significantly. For training hierarchical SNNs, most existing methods are based on the traditional back-propagation algorithm, inheriting its drawbacks of gradient diffusion and sensitivity to parameters. To keep the powerful computational capability of the hierarchical structure and temporal encoding mechanism, but to overcome the low efficiency of the existing algorithms, a new training algorithm, the Normalized Spiking Error Back Propagation (NSEBP), is proposed in this paper. In the feedforward calculation, the output spike times are calculated by solving the quadratic function in the spike response model instead of detecting postsynaptic voltage states at all time points as in traditional algorithms. In the feedback weight modification, the computational error is propagated to previous layers by the presynaptic spike jitter instead of the gradient descent rule, which realizes layer-wise training. Furthermore, our algorithm investigates the mathematical relation between the weight variation and the voltage error change, which makes normalization in the weight modification applicable. Adopting these strategies, our algorithm outperforms traditional multi-layer SNN algorithms in terms of learning efficiency and parameter sensitivity, as demonstrated by the comprehensive experimental results in this paper. PMID:27044001
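
    A minimal sketch of the feedforward idea, under our own simplifying assumptions (not the paper's exact formulation): if the membrane potential near threshold is locally a quadratic in time, the output spike time is the earliest root of a quadratic equation rather than the result of scanning the voltage at every time step.

    ```python
    import math

    def spike_time(a, b, c, theta):
        """Earliest t >= 0 with v(t) = a*t**2 + b*t + c equal to threshold theta.

        Assumes a != 0 (a genuinely quadratic local approximation)."""
        disc = b * b - 4.0 * a * (c - theta)
        if disc < 0:
            return None                              # threshold never reached
        sqrt_disc = math.sqrt(disc)
        roots = ((-b - sqrt_disc) / (2.0 * a), (-b + sqrt_disc) / (2.0 * a))
        valid = [t for t in roots if t >= 0.0]
        return min(valid) if valid else None
    ```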

  16. Comparison of quantitatively analyzed dynamic area-detector CT using various mathematic methods with FDG PET/CT in management of solitary pulmonary nodules.

    PubMed

    Ohno, Yoshiharu; Nishio, Mizuho; Koyama, Hisanobu; Fujisawa, Yasuko; Yoshikawa, Takeshi; Matsumoto, Sumiaki; Sugimura, Kazuro

    2013-06-01

    The objective of our study was to prospectively compare the capability of dynamic area-detector CT analyzed with different mathematic methods and PET/CT in the management of pulmonary nodules. Fifty-two consecutive patients with 96 pulmonary nodules underwent dynamic area-detector CT, PET/CT, and microbiological or pathologic examinations. All nodules were classified into the following groups: malignant nodules (n = 57), benign nodules with low biologic activity (n = 15), and benign nodules with high biologic activity (n = 24). On dynamic area-detector CT, the total, pulmonary arterial, and systemic arterial perfusions were calculated using the dual-input maximum slope method; perfusion was calculated using the single-input maximum slope method; and extraction fraction and blood volume (BV) were calculated using the Patlak plot method. All indexes were statistically compared among the three nodule groups. Then, receiver operating characteristic analyses were used to compare the diagnostic capabilities of the maximum standardized uptake value (SUVmax) and each perfusion parameter having a significant difference between malignant and benign nodules. Finally, the diagnostic performances of the indexes were compared by means of the McNemar test. No adverse effects were observed in this study. All indexes except extraction fraction and BV, both of which were calculated using the Patlak plot method, showed significant differences among the three groups (p < 0.05). Areas under the curve of total perfusion calculated using the dual-input method, pulmonary arterial perfusion calculated using the dual-input method, and perfusion calculated using the single-input method were significantly larger than that of SUVmax (p < 0.05). The accuracy of total perfusion (83.3%) was significantly greater than the accuracy of the other indexes: pulmonary arterial perfusion (72.9%, p < 0.05), systemic arterial perfusion calculated using the dual-input method (69.8%, p < 0.05), perfusion (66.7%, p < 0.05), and SUVmax (60.4%, p < 0.05). Dynamic area-detector CT analyzed using the dual-input maximum slope method has better potential for the diagnosis of pulmonary nodules than dynamic area-detector CT analyzed using other methods and than PET/CT.
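
    As an illustration of one of the methods named above, the Patlak plot reduces to a linear regression. The sketch below (our own, with fabricated variable names, not the authors' code) estimates a transfer constant and blood volume from tissue and arterial enhancement curves:

    ```python
    import numpy as np

    def patlak(t, c_tissue, c_art):
        """Linear fit of c_tissue/c_art against the normalized integral of c_art.

        Assumes a strictly positive arterial curve (c_art > 0 at all samples)."""
        integral = np.array([np.trapz(c_art[: i + 1], t[: i + 1])
                             for i in range(len(t))])
        x = integral / c_art                 # "Patlak time"
        y = c_tissue / c_art
        k_trans, bv = np.polyfit(x, y, 1)    # slope = transfer constant, intercept = BV
        return k_trans, bv
    ```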

  17. Reactor Pressure Vessel Fracture Analysis Capabilities in Grizzly

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spencer, Benjamin; Backman, Marie; Chakraborty, Pritam

    2015-03-01

    Efforts have been underway to develop fracture mechanics capabilities in the Grizzly code to enable it to be used to perform deterministic fracture assessments of degraded reactor pressure vessels (RPVs). Development in prior years resulted in a capability to calculate J-integrals. For this application, these are used to calculate stress intensity factors for cracks to be used in deterministic linear elastic fracture mechanics (LEFM) assessments of fracture in degraded RPVs. The J-integral can only be used to evaluate stress intensity factors for axis-aligned flaws because it can only be used to obtain the stress intensity factor for pure Mode I loading. Off-axis flaws will be subjected to mixed-mode loading. For this reason, work has continued to expand the set of fracture mechanics capabilities to permit evaluation of off-axis flaws. This report documents the following work to enhance Grizzly's engineering fracture mechanics capabilities for RPVs: • Interaction integral and T-stress: To obtain mixed-mode stress intensity factors, a capability to evaluate interaction integrals for 2D or 3D flaws has been developed. A T-stress evaluation capability has been developed to evaluate the constraint at crack tips in 2D or 3D. Initial verification testing of these capabilities is documented here. • Benchmarking for axis-aligned flaws: Grizzly's capabilities to evaluate stress intensity factors for axis-aligned flaws have been benchmarked against calculations for the same conditions in FAVOR. • Off-axis flaw demonstration: The newly developed interaction integral capabilities are demonstrated in an application to calculate the mixed-mode stress intensity factors for off-axis flaws. • Other code enhancements: Other enhancements to the thermomechanics capabilities that relate to the solution of the engineering RPV fracture problem are documented here.
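
    For reference, the J-integral named above is the standard path-independent crack-tip integral of linear elastic fracture mechanics (textbook definitions, not Grizzly-specific):

    \[ J = \int_{\Gamma} \left( W \, dy - T_i \frac{\partial u_i}{\partial x} \, ds \right), \qquad J = \frac{K_I^2}{E'} \ \text{(pure Mode I)} \]

    where \(\Gamma\) is a contour enclosing the crack tip, \(W\) is the strain energy density, \(T_i\) the traction vector, \(u_i\) the displacements, and \(E' = E\) (plane stress) or \(E/(1-\nu^2)\) (plane strain); the interaction integral generalizes this machinery to extract the mixed-mode factors \(K_I\) and \(K_{II}\) separately.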

  18. An optimized method to calculate error correction capability of tool influence function in frequency domain

    NASA Astrophysics Data System (ADS)

    Wang, Jia; Hou, Xi; Wan, Yongjian; Shi, Chunyan

    2017-10-01

    An optimized method to calculate the error correction capability of the tool influence function (TIF) under given polishing conditions is proposed based on a smoothing spectral function. The basic mathematical model for this method is established in theory. A set of polishing experimental data obtained with a rigid conformal tool is used to validate the optimized method. The calculated results can quantitatively indicate the error correction capability of the TIF for different spatial frequency errors under given polishing conditions. A comparative analysis with the previous method shows that the optimized method is simpler in form and achieves results of the same accuracy in less computing time.

  19. TESTING AUTOMATED SOLAR FLARE FORECASTING WITH 13 YEARS OF MICHELSON DOPPLER IMAGER MAGNETOGRAMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mason, J. P.; Hoeksema, J. T., E-mail: JMason86@sun.stanford.ed, E-mail: JTHoeksema@sun.stanford.ed

    Flare occurrence is statistically associated with changes in several characteristics of the line-of-sight magnetic field in solar active regions (ARs). We calculated magnetic measures throughout the disk passage of 1075 ARs spanning solar cycle 23 to find a statistical relationship between the solar magnetic field and flares. This expansive study of over 71,000 magnetograms and 6000 flares uses superposed epoch (SPE) analysis to investigate changes in several magnetic measures surrounding flares and in ARs completely lacking associated flares. The results were used to seek flare-associated signatures, exploiting the capability of SPE analysis to recover weak systematic signals. SPE analysis is a method of combining large sets of data series in a manner that yields concise information. This is achieved by aligning the temporal location of a specified flare in each time series, then calculating the statistical moments of the 'overlapping' data. The best-performing parameter, the gradient-weighted inversion-line length (GWILL), combines the primary polarity inversion line (PIL) length and the gradient across it. GWILL is therefore sensitive to complex field structures via the length of the PIL and to shearing via the gradient. GWILL shows an average 35% increase during the 40 hr prior to X-class flares, a 16% increase before M-class flares, and a 17% increase prior to B-C-class flares. ARs not associated with flares tend to decrease in GWILL during their disk passage. Gilbert and Heidke skill scores are also calculated and show that even GWILL is not a reliable parameter for predicting solar flares in real time.
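
    The core of SPE analysis is simple to state in code. The sketch below (our illustration, not the authors' pipeline; names are hypothetical) aligns time series on their event epochs and computes the statistical moments of the overlapped data:

    ```python
    import numpy as np

    def superposed_epoch(series_list, event_indices, before, after):
        """Stack windows [idx - before, idx + after) around each event, then average."""
        windows = [s[i - before: i + after]
                   for s, i in zip(series_list, event_indices)
                   if i - before >= 0 and i + after <= len(s)]
        stack = np.vstack(windows)
        return stack.mean(axis=0), stack.std(axis=0)   # epoch mean and spread
    ```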

  20. Enhanced Diagnostic Capability for Glaucoma of 3-Dimensional versus 2-Dimensional Neuroretinal Rim Parameters Using Spectral Domain Optical Coherence Tomography

    PubMed Central

    Fan, Kenneth Chen; Tsikata, Edem; Khoueir, Ziad; Simavli, Huseyin; Guo, Rong; DeLuna, Regina; Pandit, Sumir; Que, Christian John; de Boer, Johannes F.; Chen, Teresa C.

    2017-01-01

    Purpose: To compare the diagnostic capability of 3-dimensional (3D) neuroretinal rim parameters with existing 2-dimensional (2D) neuroretinal and retinal nerve fiber layer (RNFL) thickness rim parameters using spectral domain optical coherence tomography (SD-OCT) volume scans. Materials and Methods: Design: institutional prospective pilot study. Study population: 65 subjects (35 open angle glaucoma patients, 30 normal patients). Observation procedures: one eye of each subject was included. SD-OCT was used to obtain 2D retinal nerve fiber layer (RNFL) thickness values and five neuroretinal rim parameters [i.e., 3D minimum distance band (MDB) thickness, 3D Bruch’s membrane opening-minimum rim width (BMO-MRW), 3D rim volume, 2D rim area, and 2D rim thickness]. Main outcome measures: area under the receiver operating characteristic (AUROC) curve values, sensitivity, specificity. Results: Comparing all 3D with all 2D parameters, 3D rim parameters (MDB, BMO-MRW, rim volume) generally had higher AUROC curve values (range 0.770–0.946) compared to 2D parameters (RNFL thickness, rim area, rim thickness; range 0.678–0.911). For global region analyses, all 3D rim parameters (BMO-MRW, rim volume, MDB) were equal to or better than 2D parameters (RNFL thickness, rim area, rim thickness; p-values from 0.023–1.0). Among the three 3D rim parameters (MDB, BMO-MRW, and rim volume), there were no significant differences in diagnostic capability (false discovery rate > 0.05 at 95% specificity). Conclusion: 3D neuroretinal rim parameters (MDB, BMO-MRW, and rim volume) demonstrated better diagnostic capability for primary and secondary open angle glaucomas compared to 2D neuroretinal parameters (rim area, rim thickness). Compared to 2D RNFL thickness, 3D neuroretinal rim parameters have the same or better diagnostic capability. PMID:28234677

  1. On-line applications of numerical models in the Black Sea GIS

    NASA Astrophysics Data System (ADS)

    Zhuk, E.; Khaliulin, A.; Zodiatis, G.; Nikolaidis, A.; Nikolaidis, M.; Stylianou, Stavros

    2017-09-01

    The Black Sea Geographical Information System (GIS) is developed based on cutting-edge information technologies and provides automated data processing and visualization on-line. MapServer is used as the mapping service; the data are stored in a MySQL DBMS; PHP and Python modules are utilized for data access, processing, and exchange. New numerical models can be incorporated in the GIS environment as individual software modules, compiled for a server-based operating system, providing interaction with the GIS. A common interface allows setting the input parameters; the model then calculates the output data in specifically predefined files and formats. The calculation results are then passed to the GIS for visualization. Initially, a test scenario of integrating a numerical model into the GIS was performed, using software developed to describe two-dimensional tsunami propagation in a basin of variable depth, based on a linear long surface wave model that is valid for depths greater than 5 m. Furthermore, the well-established 3-D oil spill and trajectory model MEDSLIK (http://www.oceanography.ucy.ac.cy/medslik/) was integrated into the GIS with more advanced GIS functionality and capabilities. MEDSLIK is able to forecast and hindcast the trajectories of oil pollution and floating objects, using meteo-ocean data and the state of the oil spill. The MEDSLIK module interface allows a user to enter all the necessary oil spill parameters, i.e. date and time, rate of spill or spill volume, forecasting time, coordinates, oil spill type, currents, wind, and waves, as well as the specification of the output parameters. The entered data are passed on to MEDSLIK; the oil pollution characteristics are then calculated for pre-defined time steps. The results of the forecast or hindcast are then visualized upon a map.

  2. Predicting Critical Power in Elite Cyclists: Questioning the Validity of the 3-Minute All-Out Test.

    PubMed

    Bartram, Jason C; Thewlis, Dominic; Martin, David T; Norton, Kevin I

    2017-07-01

    New applications of the critical-power concept, such as the modeling of intermittent-work capabilities, are exciting prospects for elite cycling. However, accurate calculation of the required parameters is traditionally time-intensive and somewhat impractical. An alternative single-test protocol (3-min all-out) has recently been proposed, but validation in an elite population is lacking. The traditional approach for parameter establishment, but with fewer tests, could also prove an acceptable compromise. Six senior Australian endurance track-cycling representatives completed 6 efforts to exhaustion on 2 separate days over a 3-wk period. These included 1-, 4-, 6-, 8-, and 10-min self-paced efforts, plus the 3-min all-out protocol. Traditional work-vs-time calculations of CP and anaerobic energy contribution (W') using the 5 self-paced efforts were compared with calculations from the 3-min all-out protocol. The impact of using just 2 or 3 self-paced efforts for traditional CP and W' estimation was also explored using thresholds of agreement (8 W and 2.0 kJ, respectively). CP estimated from the 3-min all-out approach was significantly higher than that from the traditional approach (402 ± 33 vs 351 ± 27 W, P < .001), while W' was lower (15.5 ± 3.0 vs 24.3 ± 4.0 kJ, P = .02). Five different combinations of 2 or 3 self-paced efforts led to CP estimates within the threshold of agreement, with only 1 combination deemed accurate for W'. In elite cyclists the 3-min all-out approach is not suitable for estimating CP when compared with the traditional method. However, reducing the number of tests used in the traditional method lessens the testing burden while maintaining appropriate parameter accuracy.
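
    For concreteness, the traditional work-versus-time method referred to above is a linear fit of total work against duration, W = CP·t + W′. A minimal sketch with fabricated numbers (not the study's data):

    ```python
    import numpy as np

    t = np.array([60.0, 240.0, 360.0, 480.0, 600.0])    # effort durations (s)
    p = np.array([650.0, 455.0, 425.0, 410.0, 395.0])   # mean powers (W), fabricated
    w = p * t                                           # total work per effort (J)

    cp, w_prime = np.polyfit(t, w, 1)    # slope = CP (W), intercept = W' (J)
    print(f"CP = {cp:.0f} W, W' = {w_prime / 1000:.1f} kJ")
    ```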

  3. Technical Note: Development and performance of a software tool for quality assurance of online replanning with a conventional Linac or MR-Linac.

    PubMed

    Chen, Guang-Pei; Ahunbay, Ergun; Li, X Allen

    2016-04-01

    To develop an integrated quality assurance (QA) software tool for online replanning capable of efficiently and automatically checking radiation treatment (RT) planning parameters and gross plan quality, verifying treatment plan data transfer from the treatment planning system (TPS) to the record and verify (R&V) system, performing a secondary monitor unit (MU) calculation with or without the presence of a magnetic field from an MR-Linac, and validating the delivery record consistency with the plan. The software tool, named ArtQA, was developed to obtain and compare plan and treatment parameters from both the TPS and the R&V system database. The TPS data are accessed via direct file reading and the R&V data are retrieved via open database connectivity and structured query language. Plan quality is evaluated with both the logical consistency of planning parameters and the achieved dose-volume histograms. Beams in between the TPS and R&V system are matched based on geometry configurations. To consider the effect of a 1.5 T transverse magnetic field from the MR-Linac in the secondary MU calculation, a method based on a modified Clarkson integration algorithm was developed and tested for a series of clinical situations. ArtQA has been used clinically and can quickly detect inconsistencies and deviations in the entire RT planning process. With the use of the ArtQA tool, the efficiency of the plan check, including plan quality, data transfer, and delivery checks, can be improved by at least 60%. The newly developed independent MU calculation tool for the MR-Linac reduces the difference between the plan and calculated MUs by 10%. The software tool ArtQA can be used to perform a comprehensive QA check from planning to delivery with a conventional Linac or MR-Linac and is an essential tool for online replanning where the QA check needs to be performed rapidly.

  4. Technical Note: Development and performance of a software tool for quality assurance of online replanning with a conventional Linac or MR-Linac

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Guang-Pei, E-mail: gpchen@mcw.edu; Ahunbay, Ergun; Li, X. Allen

    Purpose: To develop an integrated quality assurance (QA) software tool for online replanning capable of efficiently and automatically checking radiation treatment (RT) planning parameters and gross plan quality, verifying treatment plan data transfer from the treatment planning system (TPS) to the record and verify (R&V) system, performing a secondary monitor unit (MU) calculation with or without the presence of a magnetic field from an MR-Linac, and validating the delivery record consistency with the plan. Methods: The software tool, named ArtQA, was developed to obtain and compare plan and treatment parameters from both the TPS and the R&V system database. The TPS data are accessed via direct file reading and the R&V data are retrieved via open database connectivity and structured query language. Plan quality is evaluated with both the logical consistency of planning parameters and the achieved dose–volume histograms. Beams in between the TPS and R&V system are matched based on geometry configurations. To consider the effect of a 1.5 T transverse magnetic field from the MR-Linac in the secondary MU calculation, a method based on a modified Clarkson integration algorithm was developed and tested for a series of clinical situations. Results: ArtQA has been used clinically and can quickly detect inconsistencies and deviations in the entire RT planning process. With the use of the ArtQA tool, the efficiency of the plan check, including plan quality, data transfer, and delivery checks, can be improved by at least 60%. The newly developed independent MU calculation tool for the MR-Linac reduces the difference between the plan and calculated MUs by 10%. Conclusions: The software tool ArtQA can be used to perform a comprehensive QA check from planning to delivery with a conventional Linac or MR-Linac and is an essential tool for online replanning where the QA check needs to be performed rapidly.
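
    As background, a generic independent point-dose MU check (a textbook form, not ArtQA's specific algorithm) has the structure

    \[ \mathrm{MU} = \frac{D}{\dot{D}_{\mathrm{ref}} \cdot S_c \cdot S_p \cdot \mathrm{TMR}(d, A) \cdot \mathrm{OAF} \cdot \mathrm{WF}} \]

    where \(D\) is the planned dose for the beam, \(\dot{D}_{\mathrm{ref}}\) the reference dose rate, \(S_c\) and \(S_p\) the collimator and phantom scatter factors, \(\mathrm{TMR}\) the tissue-maximum ratio at depth \(d\) and field size \(A\), and OAF and WF the off-axis and wedge factors; a Clarkson-style calculation obtains the scatter contribution for irregular fields by integrating over angular sectors, which is the component modified here for the 1.5 T magnetic field.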

  5. Accurate atomistic first-principles calculations of electronic stopping

    DOE PAGES

    Schleife, André; Kanai, Yosuke; Correa, Alfredo A.

    2015-01-20

    In this paper, we show that atomistic first-principles calculations based on real-time propagation within time-dependent density functional theory are capable of accurately describing electronic stopping of light projectile atoms in metal hosts over a wide range of projectile velocities. In particular, we employ a plane-wave pseudopotential scheme to solve time-dependent Kohn-Sham equations for representative systems of H and He projectiles in crystalline aluminum. This approach to simulate nonadiabatic electron-ion interaction provides an accurate framework that allows for quantitative comparison with experiment without introducing ad hoc parameters such as effective charges, or assumptions about the dielectric function. Finally, our work clearly shows that this atomistic first-principles description of electronic stopping is able to disentangle contributions due to tightly bound semicore electrons and geometric aspects of the stopping geometry (channeling versus off-channeling) in a wide range of projectile velocities.

  6. A computer program for the calculation of the flow field including boundary layer effects for mixed-compression inlets at angle of attack

    NASA Technical Reports Server (NTRS)

    Vadyak, J.; Hoffman, J. D.

    1982-01-01

    A computer program was developed which is capable of calculating the flow field in the supersonic portion of a mixed compression aircraft inlet operating at angle of attack. The supersonic core flow is computed using a second-order three dimensional method-of-characteristics algorithm. The bow shock and the internal shock train are treated discretely using a three dimensional shock fitting procedure. The boundary layer flows are computed using a second-order implicit finite difference method. The shock wave-boundary layer interaction is computed using an integral formulation. The general structure of the computer program is discussed, and a brief description of each subroutine is given. All program input parameters are defined, and a brief discussion on interpretation of the output is provided. A number of sample cases, complete with data listings, are provided.

  7. Beta Testing of CFD Code for the Analysis of Combustion Systems

    NASA Technical Reports Server (NTRS)

    Yee, Emma; Wey, Thomas

    2015-01-01

    A preliminary version of OpenNCC was tested to assess its accuracy in generating steady-state temperature fields for combustion systems at atmospheric conditions using three-dimensional tetrahedral meshes. Meshes were generated from a CAD model of a single-element lean-direct-injection combustor, and the latest version of OpenNCC was used to calculate combustor temperature fields. OpenNCC was shown to be capable of generating sustainable reacting flames using a tetrahedral mesh, and the results were compared with experimental results. While the nonreacting flow results closely matched the experiments, a significant discrepancy was present between the code's reacting flow results and the experimental results. When wide air recirculation regions with high velocities were present in the model, they appeared to create inaccurately high temperature fields; conversely, low recirculation velocities caused low temperature profiles. These observations will aid future modification of OpenNCC reacting flow input parameters to improve the accuracy of calculated temperature fields.

  8. Inverse magnetic catalysis from improved holographic QCD in the Veneziano limit

    NASA Astrophysics Data System (ADS)

    Gürsoy, Umut; Iatrakis, Ioannis; Järvinen, Matti; Nijs, Govert

    2017-03-01

    We study the dependence of the chiral condensate on external magnetic field in the context of holographic QCD at large number of flavors. We consider a holographic QCD model where the flavor degrees of freedom fully backreact on the color dynamics. Perturbative QCD calculations have shown that B acts constructively on the chiral condensate, a phenomenon called "magnetic catalysis". In contrast, recent lattice calculations show that, depending on the number of flavors and temperature, the magnetic field may also act destructively, which is called "inverse magnetic catalysis". Here we show that the holographic theory is capable of both behaviors depending on the choice of parameters. For reasonable choice of the potentials entering the model we find qualitative agreement with the lattice expectations. Our results provide insight for the physical reasons behind the inverse magnetic catalysis. In particular, we argue that the backreaction of the flavors to the background geometry decatalyzes the condensate.

  9. Flow adjustment inside homogeneous canopies after a leading edge – An analytical approach backed by LES

    DOE PAGES

    Kroniger, Konstantin; Banerjee, Tirtha; De Roo, Frederik; ...

    2017-10-06

    A two-dimensional analytical model for describing the mean flow behavior inside a vegetation canopy after a leading edge in neutral conditions was developed and tested by means of large eddy simulations (LES) employing the LES code PALM. The analytical model is developed for the region directly after the canopy edge, the adjustment region, where one-dimensional canopy models fail due to the sharp change in roughness. The derivation of this adjustment region model is based on an analytic solution of the two-dimensional Reynolds-averaged Navier–Stokes equation in neutral conditions for a canopy with constant plant area density (PAD). The main assumptions for solving the governing equations are separability of the velocity components with respect to the spatial variables and neglect of the Reynolds stress gradients. These two assumptions are verified by means of LES. To determine the emerging model parameters, a simultaneous fitting scheme was applied to the velocity and pressure data of a reference LES simulation. Furthermore, a sensitivity analysis of the adjustment region model, equipped with the previously calculated parameters, was performed by varying the three relevant lengths, namely the canopy height (h), the canopy length, and the adjustment length (Lc), in additional LES. Even though the model parameters are, in general, functions of h/Lc, it was found that the model is capable of predicting the flow quantities in various cases when using constant parameters. Subsequently, the adjustment region model is combined with the one-dimensional model of Massman, which is applicable to the interior of the canopy, to attain an analytical model capable of describing the mean flow for the full canopy domain. As a result, the model is tested against an analytical model based on a linearization approach.

  10. Flow adjustment inside homogeneous canopies after a leading edge – An analytical approach backed by LES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kroniger, Konstantin; Banerjee, Tirtha; De Roo, Frederik

    A two-dimensional analytical model for describing the mean flow behavior inside a vegetation canopy after a leading edge in neutral conditions was developed and tested by means of large eddy simulations (LES) employing the LES code PALM. The analytical model is developed for the region directly after the canopy edge, the adjustment region, where one-dimensional canopy models fail due to the sharp change in roughness. The derivation of this adjustment region model is based on an analytic solution of the two-dimensional Reynolds-averaged Navier–Stokes equation in neutral conditions for a canopy with constant plant area density (PAD). The main assumptions for solving the governing equations are separability of the velocity components with respect to the spatial variables and neglect of the Reynolds stress gradients. These two assumptions are verified by means of LES. To determine the emerging model parameters, a simultaneous fitting scheme was applied to the velocity and pressure data of a reference LES simulation. Furthermore, a sensitivity analysis of the adjustment region model, equipped with the previously calculated parameters, was performed by varying the three relevant lengths, namely the canopy height (h), the canopy length, and the adjustment length (Lc), in additional LES. Even though the model parameters are, in general, functions of h/Lc, it was found that the model is capable of predicting the flow quantities in various cases when using constant parameters. Subsequently, the adjustment region model is combined with the one-dimensional model of Massman, which is applicable to the interior of the canopy, to attain an analytical model capable of describing the mean flow for the full canopy domain. As a result, the model is tested against an analytical model based on a linearization approach.

  11. [Parameter sensitivity of simulating net primary productivity of Larix olgensis forest based on BIOME-BGC model].

    PubMed

    He, Li-hong; Wang, Hai-yan; Lei, Xiang-dong

    2016-02-01

    A model based on vegetation ecophysiological processes contains many parameters, and reasonable parameter values will greatly improve its simulation ability. Sensitivity analysis, as an important method to screen out the sensitive parameters, can comprehensively analyze how model parameters affect the simulation results. In this paper, we conducted a parameter sensitivity analysis of the BIOME-BGC model with a case study of simulating the net primary productivity (NPP) of Larix olgensis forest in Wangqing, Jilin Province. First, by comparing field measurement data with the simulation results, we tested the BIOME-BGC model's capability of simulating the NPP of L. olgensis forest. Then, the Morris and EFAST sensitivity methods were used to screen the sensitive parameters that had a strong influence on NPP. On this basis, we also quantitatively estimated the sensitivity of the screened parameters, and calculated the global, first-order, and second-order sensitivity indices. The results showed that the BIOME-BGC model could simulate the NPP of L. olgensis forest in the sample plot well. The Morris sensitivity method provided a reliable parameter sensitivity analysis result under the condition of a relatively small sample size. The EFAST sensitivity method could quantitatively measure the impact of a single parameter on the simulation result as well as the interaction between parameters in the BIOME-BGC model. The influential sensitive parameters for L. olgensis forest NPP were the new stem carbon to new leaf carbon allocation ratio and the leaf carbon to nitrogen ratio; the effect of their interaction was significantly greater than the interaction effects of the other parameters.
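
    A minimal sketch of Morris screening using the SALib package (our illustration: the parameter names, bounds, and the toy stand-in for BIOME-BGC are made up):

    ```python
    import numpy as np
    from SALib.sample.morris import sample as morris_sample
    from SALib.analyze.morris import analyze as morris_analyze

    problem = {
        "num_vars": 3,
        "names": ["stem_leaf_alloc", "leaf_CN", "fine_root_CN"],   # hypothetical
        "bounds": [[0.5, 2.0], [20.0, 60.0], [30.0, 80.0]],
    }

    X = morris_sample(problem, N=50, num_levels=4)                 # Morris trajectories
    Y = np.array([100.0 * x[0] / x[1] + 0.1 * x[2] for x in X])    # stand-in for NPP

    Si = morris_analyze(problem, X, Y, num_levels=4)
    print(Si["mu_star"], Si["sigma"])   # mean |elementary effect| and its spread
    ```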

  12. A Design Study of the Inflated Sphere Landing Vehicle, Including the Landing Performance and the Effects of Deviations from Design Conditions

    NASA Technical Reports Server (NTRS)

    Martin, E. Dale

    1961-01-01

    The impact motion of the inflated sphere landing vehicle with a payload centrally supported from the spherical skin by numerous cords has been determined on the assumption of uniform isentropic gas compression during impact. The landing capabilities are determined for a system containing suspension cords of constant cross section. The effects of deviations in impact velocity and initial gas temperature from the design conditions are studied. Also discussed are the effects of errors in the time at which the skin is ruptured. These studies indicate how the design parameters should be chosen to insure reliability of the landing system. Calculations have been made and results are presented for a sphere inflated with hydrogen, landing on the moon in the absence of an atmosphere. The results are presented for one value of the skin-strength parameter.

  13. Illusion optics: Optically transforming the nature and the location of electromagnetic emissions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yi, Jianjia; Tichit, Paul-Henri; Burokur, Shah Nawaz, E-mail: shah-nawaz.burokur@u-psud.fr

    Complex electromagnetic structures can be designed by using the powerful concept of transformation electromagnetics. In this study, we define a spatial coordinate transformation that shows the possibility of designing a device capable of producing an illusion on an antenna radiation pattern. Indeed, by compressing the space containing a radiating element, we show that it is possible to change the radiation pattern and to make the radiation location appear outside the compressed space. Both continuous and discretized models with calculated electromagnetic parameter values are presented. A reduction of the electromagnetic material parameters is also proposed for a possible physical fabrication of the device with achievable values of permittivity and permeability that can be obtained from existing well-known metamaterials. Following that, the design of the proposed antenna using a layered metamaterial is presented. Full-wave numerical simulations using the Finite Element Method are performed to demonstrate the performance of such a device.
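
    For context, the transformation-electromagnetics prescription behind such designs is the standard textbook rule (the paper's specific compression mapping is not reproduced here): a coordinate transformation with Jacobian \(\Lambda\) maps the material tensors as

    \[ \epsilon' = \frac{\Lambda\, \epsilon\, \Lambda^{\mathsf{T}}}{\det \Lambda}, \qquad \mu' = \frac{\Lambda\, \mu\, \Lambda^{\mathsf{T}}}{\det \Lambda} \]

    so compressing the space around the radiator dictates the anisotropic permittivity and permeability profiles that the layered metamaterial must approximate.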

  14. ITOUGH2(UNIX). Inverse Modeling for TOUGH2 Family of Multiphase Flow Simulators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finsterle, S.

    1999-03-01

    ITOUGH2 provides inverse modeling capabilities for the TOUGH2 family of numerical simulators for non-isothermal multiphase flows in fractured-porous media. ITOUGH2 can be used for estimating parameters by automatic model calibration, for sensitivity analyses, and for uncertainty propagation analyses (linear and Monte Carlo simulations). Any input parameter of the TOUGH2 simulator can be estimated based on any type of observation for which a corresponding TOUGH2 output is calculated. ITOUGH2 solves a non-linear least-squares problem using direct or gradient-based minimization algorithms. A detailed residual and error analysis is performed, which includes the evaluation of model identification criteria. ITOUGH2 can also be run in forward mode, solving subsurface flow problems related to nuclear waste isolation, oil, gas, and geothermal reservoir engineering, and vadose zone hydrology.
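
    For reference, the non-linear least-squares problem mentioned above is the generic weighted objective of such inverse models (a standard form, not a restatement of ITOUGH2's full objective function):

    \[ S(\mathbf{p}) = \sum_{i=1}^{n} \frac{\left( z_i^{\mathrm{obs}} - z_i^{\mathrm{calc}}(\mathbf{p}) \right)^2}{\sigma_i^{2}} \]

    where \(\mathbf{p}\) is the parameter vector, \(z_i^{\mathrm{obs}}\) and \(z_i^{\mathrm{calc}}\) are the observed and simulated quantities, and \(\sigma_i\) the observation standard deviations; the minimization is carried out with the direct or gradient-based algorithms noted above.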

  15. Note: Design and capability verification of fillet triangle flexible support

    NASA Astrophysics Data System (ADS)

    Wang, Tao; San, Xiao-Gang; Gao, Shi-Jie; Wang, Jing; Ni, Ying-Xue; Sang, Zhi-Xin

    2017-12-01

    By increasing the section thickness of a triangular flexible hinge, this study focuses on the optimal selection of parameters for fillet triangle flexible hinges and the flexible support. Based on Castigliano's second theorem, the flexibility expression of the fillet triangle flexible hinge was derived. A case design was then performed, and this type of flexible hinge was compared with three other types of flexible hinges. Finite element models of the fillet triangle flexible hinges and the flexible support were built, and the simulation results for the performance parameters were calculated. Finally, an experiment platform was established to validate the analysis results. The maximum error is less than 8%, which verifies the accuracy of the simulation process and the derived equations; the fundamental frequency also meets the requirements of the system. The fillet triangle flexible hinge is shown to have the advantages of high precision and low flexibility.
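
    For reference, Castigliano's second theorem, on which the flexibility expressions are based, states that the deflection at a load point equals the partial derivative of the stored strain energy with respect to the applied load:

    \[ \delta_i = \frac{\partial U}{\partial F_i}, \qquad U = \int \frac{M^2(s)}{2\,E I(s)}\, ds \ \text{(bending-dominated member)} \]

    where \(U\) is the strain energy, \(F_i\) the applied force, \(M\) the bending moment, \(E\) the elastic modulus, and \(I(s)\) the section moment of inertia, which varies along the fillet triangle profile.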

  16. Note: Design and capability verification of fillet triangle flexible support.

    PubMed

    Wang, Tao; San, Xiao-Gang; Gao, Shi-Jie; Wang, Jing; Ni, Ying-Xue; Sang, Zhi-Xin

    2017-12-01

    By increasing the section thickness of a triangular flexible hinge, this study focuses on the optimal selection of parameters for fillet triangle flexible hinges and the flexible support. Based on Castigliano's second theorem, the flexibility expression of the fillet triangle flexible hinge was derived. A case design was then performed, and this type of flexible hinge was compared with three other types of flexible hinges. Finite element models of the fillet triangle flexible hinges and the flexible support were built, and the simulation results for the performance parameters were calculated. Finally, an experiment platform was established to validate the analysis results. The maximum error is less than 8%, which verifies the accuracy of the simulation process and the derived equations; the fundamental frequency also meets the requirements of the system. The fillet triangle flexible hinge is shown to have the advantages of high precision and low flexibility.

  17. Electrodynamic Bare Tether Systems as a Thruster for the Momentum-Exchange/Electrodynamic Reboost(MXER)Project

    NASA Technical Reports Server (NTRS)

    Khazanov, G. V.; Krivorutsky, E. N.; Gallagher, D. L.

    2006-01-01

    The concept of electrodynamic tether propulsion has a number of attractive features and has been widely discussed for different applications. Different system designs have been proposed and compared during the last 10 years. In spite of this, the choice of proper design for any particular mission is a unique problem. Such characteristics of tether performance as system acceleration, efficiency, etc., should be calculated and compared on the basis of the known capability of a tether to collect electrical current. We discuss the choice of parameters for circular and tape tethers with regard to the Momentum-Exchange/Electrodynamic Reboost (MXER) tether project.

  18. Health Education Telecommunications Experiment

    NASA Technical Reports Server (NTRS)

    Whalen, A. A.

    1975-01-01

    The Health/Education Telecommunications Experiment carried out with Applications Technology Satellite-6 is described. The experiment tested the effectiveness of color television broadcasts to over 120 low-cost receivers in rural areas. Five types of earth stations were involved: receive-only terminals (ROT); an intensive terminal consisting of the ROT plus a VHF transmitter and receiver; comprehensive S- and C-band terminals having the capability of transmitting the video signal plus four audio channels; and the main originating stations. Additional supporting elements comprise 120 video receive terminals, 51 telephony transceivers, and 8 video originating terminals of 3 different types. Technical parameters were measured to within 1 dB of the calculated values.

  19. Vortex gas lens

    NASA Technical Reports Server (NTRS)

    Bogdanoff, David W.; Berschauer, Andrew; Parker, Timothy W.; Vickers, Jesse E.

    1989-01-01

    A vortex gas lens concept is presented. Such a lens has a potential power density capability of 10 to the 9th - 10 to the 10th W/sq cm. An experimental prototype was constructed, and the divergence half angle of the exiting beam was measured as a function of the lens operating parameters. Reasonably good agreement is found between the experimental results and theoretical calculations. The expanded beam was observed to be steady, and no strong, potentially beam-degrading jets were found to issue from the ends of the lens. Estimates of random beam deflection angles to be expected due to boundary layer noise are presented; these angles are very small.

  20. Assessment of Process Capability: the case of Soft Drinks Processing Unit

    NASA Astrophysics Data System (ADS)

    Sri Yogi, Kottala

    2018-03-01

    Process capability studies have a significant role in investigating process variation, which is important in achieving product quality characteristics. Process capability indices measure the inherent variability of a process and thus help to improve its performance radically. The main objective of this paper is to assess whether the output of a soft drinks processing unit, one of the premier brands marketed in India, stays within specification. A few selected critical parameters in soft drinks processing were considered for this study: the concentration of gas volume, the concentration of brix, and the torque of the crock. Relevant statistical parameters were assessed from a process capability indices perspective: short-term capability and long-term capability. Real-time data were used from a soft drinks bottling company located in the state of Chhattisgarh, India. The analysis identified reasons for variation in the process, which were validated using ANOVA; the Taguchi cost function was also applied to predict waste in monetary terms, which the organization can use to improve process parameters. This research work has substantially benefited the organization in understanding the variation of the selected critical parameters for achieving zero rejection.
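
    A minimal sketch of the short-term capability indices used in such studies (our illustration: the specification limits and data below are fabricated, not the plant's values):

    ```python
    import numpy as np

    def capability(x, lsl, usl):
        """Cp and Cpk from a sample and its specification limits."""
        mu, sigma = x.mean(), x.std(ddof=1)
        cp = (usl - lsl) / (6.0 * sigma)                 # potential capability
        cpk = min(usl - mu, mu - lsl) / (3.0 * sigma)    # capability with centering
        return cp, cpk

    rng = np.random.default_rng(0)
    gas_volume = rng.normal(3.6, 0.05, 200)   # fabricated gas-volume readings
    print(capability(gas_volume, lsl=3.4, usl=3.8))
    ```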

  1. Solar Prominence Modelling and Plasma Diagnostics at ALMA Wavelengths

    NASA Astrophysics Data System (ADS)

    Rodger, Andrew; Labrosse, Nicolas

    2017-09-01

    Our aim is to test potential solar prominence plasma diagnostics as obtained with the new solar capability of the Atacama Large Millimeter/submillimeter Array (ALMA). We investigate the thermal and plasma diagnostic potential of ALMA for solar prominences through the computation of brightness temperatures at ALMA wavelengths. The brightness temperature, for a chosen line of sight, is calculated using the densities of electrons, hydrogen, and helium obtained from a radiative transfer code under non-local thermodynamic equilibrium (non-LTE) conditions, as well as the input internal parameters of the prominence model in consideration. Two distinct sets of prominence models were used: isothermal-isobaric fine-structure threads, and large-scale structures with radially increasing temperature distributions representing the prominence-to-corona transition region. We compute brightness temperatures over the range of wavelengths in which ALMA is capable of observing (0.32 - 9.6 mm); however, we particularly focus on the bands available to solar observers in ALMA cycles 4 and 5, namely 2.6 - 3.6 mm (Band 3) and 1.1 - 1.4 mm (Band 6). We show how the computed brightness temperatures and optical thicknesses in our models vary with the plasma parameters (temperature and pressure) and the wavelength of observation. We then study how ALMA observables such as the ratio of brightness temperatures at two frequencies can be used to estimate the optical thickness and the emission measure for isothermal and non-isothermal prominences. From this study we conclude that for both sets of models, ALMA presents a strong thermal diagnostic capability, provided that the interpretation of observations is supported by the use of non-LTE simulation results.
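
    For intuition, such diagnostics rest on the textbook isothermal-slab relation (background emission neglected), valid in the Rayleigh-Jeans regime appropriate to ALMA wavelengths:

    \[ T_b = T_e \left( 1 - e^{-\tau_\nu} \right) \]

    so an optically thick prominence thread yields \(T_b \approx T_e\), while in the optically thin limit \(T_b \approx T_e \tau_\nu\); the ratio of brightness temperatures at two frequencies therefore constrains \(\tau_\nu\) and hence the emission measure.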

  2. Active and Passive 3D Vector Radiative Transfer with Preferentially-Aligned Ice Particles

    NASA Astrophysics Data System (ADS)

    Adams, I. S.; Munchak, S. J.; Pelissier, C.; Kuo, K. S.; Heymsfield, G. M.

    2017-12-01

    To support the observation of clouds and precipitation using combinations of radars and radiometers, a forward model capable of representing diverse sensing geometries for active and passive instruments is necessary for correctly interpreting and consistently combining multi-sensor measurements from ground-based, airborne, and spaceborne platforms. As such, the Atmospheric Radiative Transfer Simulator (ARTS) uses Monte Carlo integration to produce radar reflectivities and radiometric brightness temperatures for three-dimensional cloud and precipitation input fields. This radiative transfer framework is capable of efficiently sampling Gaussian antenna beams and fully accounting for multiple scattering. By relying on common ray-tracing tools, gaseous absorption models, and scattering properties, the model reproduces accurate and consistent radar and radiometer observables. While such a framework is an important component for simulating remote sensing observables, the key driver for self-consistent radiative transfer calculations of clouds and precipitation is scattering data. Research over the past decade has demonstrated that spheroidal models of frozen hydrometeors cannot accurately reproduce all necessary scattering properties at all desired frequencies. The discrete dipole approximation offers flexibility in calculating scattering for arbitrary particle geometries, but at great computational expense. When considering scattering for certain pristine ice particles, the Extended Boundary Condition Method, or T-Matrix, is much more computationally efficient; however, convergence for T-Matrix calculations fails at large size parameters and high aspect ratios. To address these deficiencies, we implemented the Invariant Imbedding T-Matrix Method (IITM). A brief overview of ARTS and IITM will be given, including details for handling preferentially-aligned hydrometeors. Examples highlighting the performance of the model for simulating space-based and airborne measurements will be offered, and some case studies showing the response to particle type and orientation will be presented. Simulations of polarized radar (Z, LDR, ZDR) and radiometer (Stokes I and Q) quantities will be used to demonstrate the capabilities of the model.

  3. Ab initio random structure searching of organic molecular solids: assessment and validation against experimental data

    PubMed Central

    Zilka, Miri; Dudenko, Dmytro V.; Hughes, Colan E.; Williams, P. Andrew; Sturniolo, Simone; Franks, W. Trent; Pickard, Chris J.

    2017-01-01

    This paper explores the capability of using the DFT-D ab initio random structure searching (AIRSS) method to generate crystal structures of organic molecular materials, focusing on a system (m-aminobenzoic acid; m-ABA) that is known from experimental studies to exhibit abundant polymorphism. Within the structural constraints selected for the AIRSS calculations (specifically, centrosymmetric structures with Z = 4 for zwitterionic m-ABA molecules), the method is shown to successfully generate the two known polymorphs of m-ABA (form III and form IV) that have these structural features. We highlight various issues that are encountered in comparing crystal structures generated by AIRSS to experimental powder X-ray diffraction (XRD) data and solid-state magic-angle spinning (MAS) NMR data, demonstrating successful fitting for some of the lowest energy structures from the AIRSS calculations against experimental low-temperature powder XRD data for known polymorphs of m-ABA, and showing that comparison of computed and experimental solid-state NMR parameters allows different hydrogen-bonding motifs to be discriminated. PMID:28944393

  4. Accelerating potential of mean force calculations for lipid membrane permeation: System size, reaction coordinate, solute-solute distance, and cutoffs

    NASA Astrophysics Data System (ADS)

    Nitschke, Naomi; Atkovska, Kalina; Hub, Jochen S.

    2016-09-01

    Molecular dynamics simulations are capable of predicting the permeability of lipid membranes for drug-like solutes, but the calculations have remained prohibitively expensive for high-throughput studies. Here, we analyze simple measures for accelerating potential of mean force (PMF) calculations of membrane permeation, namely, (i) using smaller simulation systems, (ii) simulating multiple solutes per system, and (iii) using shorter cutoffs for the Lennard-Jones interactions. We find that PMFs for membrane permeation are remarkably robust against alterations of such parameters, suggesting that accurate PMF calculations are possible at strongly reduced computational cost. In addition, we evaluated the influence of the definition of the membrane center of mass (COM), used to define the transmembrane reaction coordinate. Membrane-COM definitions based on all lipid atoms lead to artifacts due to undulations and, consequently, to PMFs dependent on membrane size. In contrast, COM definitions based on a cylinder around the solute lead to size-independent PMFs, down to systems of only 16 lipids per monolayer. In summary, compared to popular setups that simulate a single solute in a membrane of 128 lipids with a Lennard-Jones cutoff of 1.2 nm, the measures applied here yield a speedup in sampling by a factor of ~40, without reducing the accuracy of the calculated PMF.
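
    For context, PMF-based permeability predictions of the kind accelerated here typically use the inhomogeneous solubility-diffusion relation (a standard result, not restated in the abstract):

    \[ \frac{1}{P} = \int_{z_1}^{z_2} \frac{e^{\beta G(z)}}{D(z)}\, dz, \qquad \beta = \frac{1}{k_B T} \]

    where \(G(z)\) is the PMF along the transmembrane coordinate \(z\) and \(D(z)\) the local diffusivity; errors in the PMF enter the permeability exponentially, which is why the robustness of \(G(z)\) to the cost-saving measures above matters.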

  5. Effect of Fault Parameter Uncertainties on PSHA explored by Monte Carlo Simulations: A case study for southern Apennines, Italy

    NASA Astrophysics Data System (ADS)

    Akinci, A.; Pace, B.

    2017-12-01

    In this study, we discuss the seismic hazard variability of peak ground acceleration (PGA) at a 475-year return period in the Southern Apennines of Italy. The uncertainty and parametric sensitivity are presented to quantify the impact of the several fault parameters on ground motion predictions for 10% exceedance in 50-year hazard. A time-independent PSHA model is constructed based on the long-term recurrence behavior of seismogenic faults, adopting the characteristic earthquake model for those sources capable of rupturing the entire fault segment with a single maximum magnitude. The fault-based source model uses the dimensions and slip rates of mapped faults to develop magnitude-frequency estimates for characteristic earthquakes. The variability of each selected fault parameter is represented by a truncated normal distribution, specified by a standard deviation about a mean value. A Monte Carlo approach, based on random balanced sampling of the logic tree, is used to capture the uncertainty in the seismic hazard calculations. For generating both uncertainty and sensitivity maps, we perform 200 simulations for each of the fault parameters. The results are synthesized both in the frequency-magnitude distributions of the modeled faults and in two kinds of maps: the overall uncertainty maps provide a confidence interval for the PGA values, and the parameter uncertainty maps determine the sensitivity of the hazard assessment to the variability of every logic tree branch. The logic tree branches analyzed through the Monte Carlo approach are maximum magnitude, fault length, fault width, fault dip and slip rate. The overall variability of these parameters is determined by varying them simultaneously in the hazard calculations, while the sensitivity of each parameter to the overall variability is determined by varying that parameter while fixing the others. In this study we do not, however, investigate the sensitivity of the mean hazard results to the choice of GMPE. The distribution of possible seismic hazard results is illustrated by a 95% confidence factor map, which indicates the dispersion about the mean value, and a coefficient of variation map, which shows the percent variability. The results of our study clearly illustrate the influence of active fault parameters on probabilistic seismic hazard maps.
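
    A minimal sketch of the sampling scheme described above (truncated normal draws, all parameters varied together for overall uncertainty versus one-at-a-time variation for sensitivity); the parameter names, means and standard deviations are placeholders, not values from the study:

        import numpy as np
        from scipy.stats import truncnorm

        def sample_truncated(mean, sigma, n=200, k=2.0, seed=0):
            """n draws from a normal truncated at mean +/- k*sigma."""
            return truncnorm.rvs(-k, k, loc=mean, scale=sigma, size=n,
                                 random_state=np.random.default_rng(seed))

        # illustrative fault parameters: name -> (mean, standard deviation)
        params = {"length_km": (30.0, 5.0), "width_km": (12.0, 2.0),
                  "dip_deg": (60.0, 5.0), "slip_rate_mm_yr": (0.8, 0.2)}

        # overall uncertainty: vary all parameters simultaneously
        overall = {p: sample_truncated(m, s, seed=i)
                   for i, (p, (m, s)) in enumerate(params.items())}

        # sensitivity: vary one parameter, fix the others at their means
        def one_at_a_time(varied):
            return {p: (sample_truncated(m, s) if p == varied
                        else np.full(200, m)) for p, (m, s) in params.items()}

        # each of the 200 parameter sets would then drive one hazard run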

  6. Developing a probability-based model of aquifer vulnerability in an agricultural region

    NASA Astrophysics Data System (ADS)

    Chen, Shih-Kai; Jang, Cheng-Shin; Peng, Yi-Huei

    2013-04-01

    Hydrogeological settings of aquifers strongly influence regional groundwater movement and pollution processes. Establishing a map of aquifer vulnerability is critical for planning a scheme of groundwater quality protection. This study developed a novel probability-based DRASTIC model of aquifer vulnerability in the Choushui River alluvial fan, Taiwan, using indicator kriging, and determined various risk categories of contamination potentials based on estimated vulnerability indexes. Categories and ratings of the six parameters in the probability-based DRASTIC model were probabilistically characterized according to two parameter classification methods: selecting the maximum estimation probability and calculating an expected value. Moreover, the probability-based estimation and assessment gave excellent insight into the propagation of parameter uncertainty arising from limited observation data. To examine the prediction capacity of the developed probability-based DRASTIC model for pollutants, the medium, high, and very high risk categories of contamination potentials were compared with observed nitrate-N concentrations exceeding 0.5 mg/L, which indicate anthropogenic groundwater pollution. The results reveal that the developed probability-based DRASTIC model is capable of predicting high nitrate-N groundwater pollution and of characterizing the parameter uncertainty via the probability estimation processes.

  7. A computer program for uncertainty analysis integrating regression and Bayesian methods

    USGS Publications Warehouse

    Lu, Dan; Ye, Ming; Hill, Mary C.; Poeter, Eileen P.; Curtis, Gary

    2014-01-01

    This work develops a new functionality in UCODE_2014 to evaluate Bayesian credible intervals using the Markov Chain Monte Carlo (MCMC) method. The MCMC capability in UCODE_2014 is based on the FORTRAN version of the differential evolution adaptive Metropolis (DREAM) algorithm of Vrugt et al. (2009), which estimates the posterior probability density function of model parameters in high-dimensional and multimodal sampling problems. The UCODE MCMC capability provides eleven prior probability distributions and three ways to initialize the sampling process. It evaluates parametric and predictive uncertainties and it has parallel computing capability based on multiple chains to accelerate the sampling process. This paper tests and demonstrates the MCMC capability using a 10-dimensional multimodal mathematical function, a 100-dimensional Gaussian function, and a groundwater reactive transport model. The use of the MCMC capability is made straightforward and flexible by adopting the JUPITER API protocol. With the new MCMC capability, UCODE_2014 can be used to calculate three types of uncertainty intervals, which all can account for prior information: (1) linear confidence intervals which require linearity and Gaussian error assumptions and typically 10s–100s of highly parallelizable model runs after optimization, (2) nonlinear confidence intervals which require a smooth objective function surface and Gaussian observation error assumptions and typically 100s–1,000s of partially parallelizable model runs after optimization, and (3) MCMC Bayesian credible intervals which require few assumptions and commonly 10,000s–100,000s or more partially parallelizable model runs. Ready access allows users to select methods best suited to their work, and to compare methods in many circumstances.
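
    As a minimal illustration of the third interval type, an equal-tailed Bayesian credible interval can be read directly off a converged MCMC chain with a percentile calculation; the chain below is synthetic and only stands in for DREAM output:

        import numpy as np

        rng = np.random.default_rng(1)
        # stand-in for a burned-in, converged posterior sample of one parameter
        chain = rng.normal(loc=2.5, scale=0.4, size=50_000)

        lo, hi = np.percentile(chain, [2.5, 97.5])  # central 95% of samples
        print(f"95% credible interval: [{lo:.3f}, {hi:.3f}]")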

  8. MODTRAN2: Evolution and applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, G.P.; Chetwynd, J.H.; Kneizys, F.X.

    1994-12-31

    MODTRAN2 is the most recent version of the Moderate Resolution Atmospheric Radiance and Transmittance Model. It encompasses all the capabilities of LOWTRAN 7, the historic 20 cm⁻¹ resolution (full width at half maximum, FWHM) radiance code, but incorporates a much more sensitive molecular band model with 2 cm⁻¹ resolution. The band model is based directly upon the HITRAN spectral parameters, including both temperature and pressure (line shape) dependencies. Because the band model parameters and their applications to transmittance calculations have been independently developed using equivalent width binning procedures, validation against full Voigt line-by-line calculations is important. Extensive spectral comparisons have shown excellent agreement. In addition, simple timing runs of MODTRAN vs. FASCOD3P show an improvement of more than a factor of 100 for a typical 500 cm⁻¹ spectral interval and comparable vertical layering. It has been previously established that not only is MODTRAN an excellent band model for full path calculations, but it replicates layer-specific quantities to a very high degree of accuracy. Such layer quantities, derived from ratios and differences of longer path MODTRAN calculations from point A to adjacent layer boundaries, can be used to provide inversion algorithm weighting functions or similarly formulated quantities. One of the most exciting new applications is the rapid calculation of reliable IR cooling rates, including species, altitude, and spectral distinctions, as well as the standard integrated quantities. Comparisons with prior line-by-line cooling rate calculations are excellent, and the techniques can be extended to incorporate global climatologies. Enhancements expected to appear in MODTRAN3 relate directly to climate change studies. The addition of SO₂ and NO₂ in the UV, along with upgraded ozone Chappuis bands in the visible, will also be part of MODTRAN3.

  9. Implementation of equilibrium aqueous speciation and solubility (EQ3 type) calculations into Cantera for electrolyte solutions.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moffat, Harry K.; Jove-Colon, Carlos F.

    2009-06-01

    In this report, we summarize our work on developing a production level capability for modeling brine thermodynamic properties using the open-source code Cantera. This implementation into Cantera allows for the application of chemical thermodynamics to describe the interactions between a solid and an electrolyte solution at chemical equilibrium. The formulations to evaluate the thermodynamic properties of electrolytes are based on Pitzer's model to calculate molality-based activity coefficients using a real equation-of-state (EoS) for water. In addition, the thermodynamic properties of solutes at elevated temperatures and pressures are computed using the revised Helgeson-Kirkham-Flowers (HKF) EoS for ionic and neutral aqueous species. The thermodynamic data parameters for the Pitzer formulation and HKF EoS are from the thermodynamic database compilation developed for the Yucca Mountain Project (YMP) used with the computer code EQ3/6. We describe the adopted equations and their implementation within Cantera and also provide several validated examples relevant to the calculations of extensive properties of electrolyte solutions.
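
    For orientation, a minimal sketch of a Pitzer-type calculation for a single 1:1 electrolyte at 25 °C (Pitzer and Mayorga's equations with commonly tabulated NaCl parameters; the full Cantera/EQ3 implementation handles mixed electrolytes and temperature- and pressure-dependent parameters, so this is only the simplest special case):

        import numpy as np

        def ln_gamma_pm(m, beta0, beta1, Cphi, Aphi=0.392, b=1.2, alpha=2.0):
            """Pitzer mean activity coefficient for a single 1:1 electrolyte
            at 25 C; the ionic strength I equals the molality m here."""
            I = m
            sI = np.sqrt(I)
            f = -Aphi * (sI / (1 + b * sI) + (2 / b) * np.log(1 + b * sI))
            B = 2 * beta0 + (2 * beta1 / (alpha**2 * I)) * (
                1 - np.exp(-alpha * sI) * (1 + alpha * sI - alpha**2 * I / 2))
            return f + m * B + m**2 * 1.5 * Cphi

        # NaCl at 25 C, commonly tabulated parameters; gives ~0.66 at 1 mol/kg
        print(np.exp(ln_gamma_pm(1.0, beta0=0.0765, beta1=0.2664, Cphi=0.00127)))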

  10. Modelling of electronic excitation and radiation in the Direct Simulation Monte Carlo Macroscopic Chemistry Method

    NASA Astrophysics Data System (ADS)

    Goldsworthy, M. J.

    2012-10-01

    One of the most useful tools for modelling rarefied hypersonic flows is the Direct Simulation Monte Carlo (DSMC) method. Simulator particle movement and collision calculations are combined with statistical procedures to model thermal non-equilibrium flow-fields described by the Boltzmann equation. The Macroscopic Chemistry Method for DSMC simulations was developed to simplify the inclusion of complex thermal non-equilibrium chemistry. The macroscopic approach uses statistical information which is calculated during the DSMC solution process in the modelling procedures. Here it is shown how inclusion of macroscopic information in models of chemical kinetics, electronic excitation, ionization, and radiation can enhance the capabilities of DSMC to model flow-fields where a range of physical processes occur. The approach is applied to the modelling of a 6.4 km/s nitrogen shock wave and results are compared with those from existing shock-tube experiments and continuum calculations. Reasonable agreement between the methods is obtained. The quality of the comparison is highly dependent on the set of vibrational relaxation and chemical kinetic parameters employed.

  11. Application of the CIEMAT-NIST method to plastic scintillation microspheres.

    PubMed

    Tarancón, A; Barrera, J; Santiago, L M; Bagán, H; García, J F

    2015-04-01

    An adaptation of the MICELLE2 code was used to apply the CIEMAT-NIST tracing method to the activity calculation for radioactive solutions of pure beta emitters of different energies using plastic scintillation microspheres (PSm) and ³H as a tracing radionuclide. Particle quenching, very important in measurements with PSm, was computed with PENELOPE using geometries formed by a heterogeneous mixture of polystyrene microspheres and water. The results obtained with PENELOPE were adapted to be included in MICELLE2, which is capable of including the energy losses due to particle quenching in the computation of the detection efficiency. The activity calculation of ⁶³Ni, ¹⁴C, ³⁶Cl and ⁹⁰Sr/⁹⁰Y solutions was performed with deviations of 8.8%, 1.9%, 1.4% and 2.1%, respectively. Of the different parameters evaluated, those with the greatest impact on the activity calculation are, in order of importance, the energy of the radionuclide, the degree of quenching of the sample and the packing fraction of the geometry used in the computation.

  12. Fast local-MP2 method with density-fitting for crystals. II. Test calculations and application to the carbon dioxide crystal

    NASA Astrophysics Data System (ADS)

    Usvyat, Denis; Maschio, Lorenzo; Manby, Frederick R.; Casassa, Silvia; Schütz, Martin; Pisani, Cesare

    2007-08-01

    A density fitting scheme for calculating electron repulsion integrals used in local second order Møller-Plesset perturbation theory for periodic systems (DFP) is presented. Reciprocal space techniques are systematically adopted, for which the use of Poisson fitting functions turned out to be instrumental. The role of the various parameters (truncation thresholds, density of the k net, Coulomb versus overlap metric, etc.) on computational times and accuracy is explored, using as test cases primitive-cell- and conventional-cell-diamond, proton-ordered ice, crystalline carbon dioxide, and a three-layer slab of magnesium oxide. Timings and results obtained when the electron repulsion integrals are calculated without invoking the DFP approximation are taken as the reference. It is shown that our DFP scheme is both accurate and very efficient once properly calibrated. The lattice constant and cohesion energy of the CO2 crystal are computed to illustrate the scheme's capability to provide a physically correct description also for weakly bound crystals, in strong contrast to present density functional approaches.

  13. Calibration Laboratory Capabilities Listing as of April 2009

    NASA Technical Reports Server (NTRS)

    Kennedy, Gary W.

    2009-01-01

    This document reviews the Calibration Laboratory capabilities for various NASA centers (i.e., Glenn Research Center and Plum Brook Test Facility, Kennedy Space Center, Marshall Space Flight Center, Stennis Space Center, and White Sands Test Facility). Some of the parameters reported are: alternating current, direct current, dimensional, mass, force, torque, pressure and vacuum, safety, and thermodynamics parameters. Some centers reported other parameters.

  14. Anaerobic Degradation of Phthalate Isomers by Methanogenic Consortia

    PubMed Central

    Kleerebezem, Robbert; Pol, Look W. Hulshoff; Lettinga, Gatze

    1999-01-01

    Three methanogenic enrichment cultures, grown on ortho-phthalate, iso-phthalate, or terephthalate, were obtained from digested sewage sludge or methanogenic granular sludge. Cultures grown on one of the phthalate isomers were not capable of degrading the other phthalate isomers. All three cultures had the ability to degrade benzoate. Maximum specific growth rates (μS,max) and biomass yields (YXtot,S) of the mixed cultures were determined using both the phthalate isomers and benzoate as substrates. Comparable values for these parameters were found for all three cultures. Values for μS,max and YXtot,S were higher for growth on benzoate than on the phthalate isomers. Based on measured and estimated values for the microbial yield of the methanogens in the mixed culture, specific yields for the phthalate- and benzoate-fermenting organisms were calculated. A kinetic model, involving three microbial species, was developed to predict intermediate acetate and hydrogen accumulation and the final production of methane. Values for the ratio of the concentrations of methanogenic organisms versus the phthalate isomer and benzoate fermenting organisms, and apparent half-saturation constants (KS) for the methanogens, were calculated. By using this combination of measured and estimated parameter values, a reasonable description of intermediate accumulation and methane formation was obtained, with the initial concentration of phthalate-fermenting organisms being the only variable. The energetic efficiency for growth of the fermenting organisms on the phthalate isomers was calculated to be significantly smaller than for growth on benzoate. PMID:10049876
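
    A minimal sketch of a kinetic model with this general structure (two Monod-type populations in series: fermenters converting a phthalate isomer to acetate, and acetoclastic methanogens converting acetate to methane); every rate constant and stoichiometric factor below is a placeholder, not a fitted value from the study, and the hydrogen branch is omitted:

        import numpy as np
        from scipy.integrate import solve_ivp

        def rates(t, y, mu1=0.1, K1=0.5, Y1=0.05, mu2=0.3, K2=0.2, Y2=0.03):
            """y = [phthalate S, acetate A, fermenters X1, methanogens X2,
            methane M]; Monod growth, substrate uptake = growth / yield."""
            S, A, X1, X2, M = y
            r1 = mu1 * S / (K1 + S) * X1          # fermenter growth
            r2 = mu2 * A / (K2 + A) * X2          # methanogen growth
            return [-r1 / Y1,                     # phthalate consumed
                    0.9 * r1 / Y1 - r2 / Y2,      # acetate made, then used
                    r1, r2,                       # biomass growth
                    0.7 * r2 / Y2]                # methane produced

        sol = solve_ivp(rates, (0.0, 200.0), [10.0, 0.0, 0.05, 0.05, 0.0],
                        max_step=1.0)
        print(sol.y[:, -1])  # acetate passes through a transient maximum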

  15. Effective field theory approach to trans-TeV supersymmetry: covariant matching, Yukawa unification and Higgs couplings

    NASA Astrophysics Data System (ADS)

    Wells, James D.; Zhang, Zhengkang

    2018-05-01

    Dismissing traditional naturalness concerns while embracing the Higgs boson mass measurement and unification motivates careful analysis of trans-TeV supersymmetric theories. We take an effective field theory (EFT) approach, matching the Minimal Supersymmetric Standard Model (MSSM) onto the Standard Model (SM) EFT by integrating out heavy superpartners, and evolving MSSM and SMEFT parameters according to renormalization group equations in each regime. Our matching calculation is facilitated by the recent covariant diagrams formulation of functional matching techniques, with the full one-loop SUSY threshold corrections encoded in just 30 diagrams. Requiring consistent matching onto the SMEFT with its parameters (those in the Higgs potential in particular) measured at low energies, and in addition requiring unification of bottom and tau Yukawa couplings at the scale of gauge coupling unification, we detail the solution space of superpartner masses from the TeV scale to well above. We also provide detailed views of parameter space where Higgs coupling measurements have probing capability at future colliders beyond the reach of direct superpartner searches at the LHC.

  16. A Low Cost Device for Monitoring the Urine Output of Critical Care Patients

    PubMed Central

    Otero, Abraham; Palacios, Francisco; Akinfiev, Teodor; Apalkov, Andrey

    2010-01-01

    In critical care units most of the patients’ physiological parameters are sensed by commercial monitoring devices. These devices can also supervise whether the values of the parameters lie within a pre-established range set by the clinician. The automation of the sensing and supervision tasks has discharged the healthcare staff of a considerable workload and avoids human errors, which are common in repetitive and monotonous tasks. Urine output is very likely the most relevant physiological parameter that has yet to be sensed or supervised automatically. This paper presents a low cost patent-pending device capable of sensing and supervising urine output. The device uses reed switches activated by a magnetic float in order to measure the amount of urine collected in two containers which are arranged in cascade. When either of the containers fills, it is emptied automatically using a siphon mechanism and urine begins to collect again. An electronic unit sends the state of the reed switches via Bluetooth to a PC that calculates the urine output from this information and supervises the achievement of therapeutic goals. PMID:22163495

  18. High efficiency inductive output tubes with intense annular electron beams

    NASA Astrophysics Data System (ADS)

    Appanam Karakkad, J.; Matthew, D.; Ray, R.; Beaudoin, B. L.; Narayan, A.; Nusinovich, G. S.; Ting, A.; Antonsen, T. M.

    2017-10-01

    For mobile ionospheric heaters, it is necessary to develop highly efficient RF sources capable of delivering radiation in the frequency range from 3 to 10 MHz with an average power at a megawatt level. A promising source, which is capable of offering these parameters, is a grid-less version of the inductive output tube (IOT), also known as a klystrode. In this paper, studies analyzing the efficiency of grid-less IOTs are described. The basic trade-offs needed to reach high efficiency are investigated. In particular, the trade-off between the peak current and the duration of the current micro-pulse is analyzed. A particle-in-cell code is used to self-consistently calculate the distribution in axial and transverse momentum and in total electron energy from the cathode to the collector. The efficiency of IOTs with collectors of various configurations is examined. It is shown that the efficiency of IOTs can be in the 90% range even without using depressed collectors.

  19. Modeling the internal combustion engine

    NASA Technical Reports Server (NTRS)

    Zeleznik, F. J.; Mcbride, B. J.

    1985-01-01

    A flexible and computationally economical model of the internal combustion engine was developed for use on large digital computer systems. It is based on a system of ordinary differential equations for cylinder-averaged properties. The computer program is capable of multicycle calculations, with some parameters varying from cycle to cycle, and has restart capabilities. It can accommodate a broad spectrum of reactants, permits changes in physical properties, and offers a wide selection of alternative modeling functions without any reprogramming. It readily adapts to the amount of information available in a particular case because the model is in fact a hierarchy of five models. The models range from a simple model requiring only thermodynamic properties to a complex model demanding full combustion kinetics, transport properties, and poppet valve flow characteristics. Among its many features the model includes heat transfer, valve timing, supercharging, motoring, finite burning rates, cycle-to-cycle variations in air-fuel ratio, humid air, residual and recirculated exhaust gas, and full combustion kinetics.

  20. Confined turbulent swirling recirculating flow predictions. Ph.D. Thesis. Final Report

    NASA Technical Reports Server (NTRS)

    Abujelala, M. T.; Lilley, D. G.

    1985-01-01

    The capability and accuracy of the STARPIC computer code in predicting confined turbulent swirling recirculating flows is presented. Inlet flow boundary conditions were demonstrated to be extremely important in simulating a flowfield via numerical calculations. The degree of swirl strength and the expansion ratio have strong effects on the characteristics of swirling flow. In a nonswirling flow, a large corner recirculation zone exists in the flowfield with an expansion ratio greater than one. However, as the degree of inlet swirl increases, the size of this zone decreases and a central recirculation zone appears near the inlet. Generally, the size of the central zone increased with swirl strength and expansion ratio. Neither the standard k-epsilon turbulence model nor its previous extensions show effective capability for predicting confined turbulent swirling recirculating flows. However, either reduced optimum values of three parameters in the model, or the empirical Cμ formulation obtained via careful analysis of available turbulence measurements, can provide more acceptable accuracy in the prediction of these swirling flows.

  1. Finding the Needles in the Haystacks: High-Fidelity Models of the Modern and Archean Solar System for Simulating Exoplanet Observations

    NASA Technical Reports Server (NTRS)

    Roberge, Aki; Rizzo, Maxime J.; Lincowski, Andrew P.; Arney, Giada N.; Stark, Christopher C.; Robinson, Tyler D.; Snyder, Gregory F.; Pueyo, Laurent; Zimmerman, Neil T.; Jansen, Tiffany; et al.

    2017-01-01

    We present two state-of-the-art models of the solar system, one corresponding to the present day and one to the Archean Eon 3.5 billion years ago. Each model contains spatial and spectral information for the star, the planets, and the interplanetary dust, extending to 50 au from the Sun and covering the wavelength range 0.3-2.5 micron. In addition, we created a spectral image cube representative of the astronomical backgrounds that will be seen behind deep observations of extrasolar planetary systems, including galaxies and Milky Way stars. These models are intended as inputs to high-fidelity simulations of direct observations of exoplanetary systems using telescopes equipped with high-contrast capability. They will help improve the realism of observation and instrument parameters that are required inputs to statistical observatory yield calculations, as well as guide development of post-processing algorithms for telescopes capable of directly imaging Earth-like planets.

  2. Evaluation of SNS Beamline Shielding Configurations using MCNPX Accelerated by ADVANTG

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Risner, Joel M; Johnson, Seth R.; Remec, Igor

    2015-01-01

    Shielding analyses for the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory pose significant computational challenges, including highly anisotropic high-energy sources, a combination of deep penetration shielding and an unshielded beamline, and a desire to obtain well-converged, nearly global solutions for mapping of predicted radiation fields. The majority of these analyses have been performed using MCNPX with manually generated variance reduction parameters (source biasing and cell-based splitting and Russian roulette) that were largely based on the analyst's insight into the problem specifics. Development of the variance reduction parameters required extensive analyst time and was often tailored to specific portions of the model phase space. We previously applied a developmental version of the ADVANTG code to an SNS beamline study to perform a hybrid deterministic/Monte Carlo analysis and showed that we could obtain nearly global Monte Carlo solutions with essentially uniform relative errors for mesh tallies that cover extensive portions of the model with typical voxel spacing of a few centimeters. The use of weight window maps and consistent biased sources produced using the FW-CADIS methodology in ADVANTG allowed us to obtain these solutions using substantially less computer time than the previous cell-based splitting approach. While those results were promising, the process of using the developmental version of ADVANTG was somewhat laborious, requiring user-developed Python scripts to drive much of the analysis sequence. In addition, limitations imposed by the size of weight-window files in MCNPX necessitated the use of relatively coarse spatial and energy discretization for the deterministic Denovo calculations that we used to generate the variance reduction parameters. We recently applied the production version of ADVANTG to this beamline analysis, which substantially streamlined the analysis process. We also tested importance function collapsing (in space and energy) capabilities in ADVANTG. These changes, along with the support for parallel Denovo calculations using the current version of ADVANTG, give us the capability to improve the fidelity of the deterministic portion of the hybrid analysis sequence, obtain improved weight-window maps, and reduce both the analyst and computational time required for the analysis process.
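
    The FW-CADIS idea referenced above can be sketched compactly: a deterministic adjoint (importance) flux sets weight-window targets inversely proportional to importance, together with a consistently biased source. The arrays here are random placeholders, not Denovo or ADVANTG output:

        import numpy as np

        rng = np.random.default_rng(2)
        q = rng.random((50, 10))        # placeholder forward source density
        phi_adj = rng.random((50, 10))  # placeholder adjoint flux (importance)

        def cadis_maps(q, phi_adj):
            """CADIS-style consistent biased source and weight-window centers:
            sample births where importance is high; the birth weight q/q_biased
            then matches the weight-window center R/phi_adj."""
            R = np.sum(q * phi_adj)      # estimated detector response
            q_biased = q * phi_adj / R   # biased source pdf over the mesh
            ww_center = R / np.maximum(phi_adj, 1e-30)
            return q_biased, ww_center

        q_biased, ww_center = cadis_maps(q, phi_adj)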

  3. Tsunami Generation Modelling for Early Warning Systems

    NASA Astrophysics Data System (ADS)

    Annunziato, A.; Matias, L.; Ulutas, E.; Baptista, M. A.; Carrilho, F.

    2009-04-01

    In the frame of a collaboration between the European Commission Joint Research Centre and the Institute of Meteorology in Portugal, a complete analytical tool to support Early Warning Systems is being developed. The tool will be part of the Portuguese National Early Warning System and will also be used in the frame of the UNESCO North Atlantic Section of the Tsunami Early Warning System. The system, called Tsunami Analysis Tool (TAT), includes a worldwide scenario database that has been pre-calculated using the SWAN-JRC code (Annunziato, 2007). This code uses a simplified fault generation mechanism, and the hydraulic model is based on the SWAN code (Mader, 1988). In addition to the pre-defined scenarios, a system of computers is always ready to start a new calculation whenever a new earthquake is detected by the seismic networks (such as USGS or EMSC) and is judged capable of generating a tsunami. The calculation is performed using minimal parameters (the epicentre and magnitude of the earthquake): the programme calculates the rupture length and rupture width using the empirical relationships proposed by Ward (2002). The database calculations, as well as the newly generated calculations with the current conditions, are therefore available to TAT, where the real online analysis is performed. The system also allows analysis of sea level measurements available worldwide in order to compare them and decide whether a tsunami is really occurring. Although TAT, connected with the scenario database and the online calculation system, is at the moment the only software that can support tsunami analysis on a global scale, we are convinced that the fault generation mechanism is too simplified to give a correct tsunami prediction. Furthermore, short tsunami arrival times especially require earthquake source parameter data on the tectonic features of the faults, such as strike, dip, rake and slip, in order to minimize the real-time uncertainty of rupture parameters. Indeed, the earthquake parameters available right after an earthquake are preliminary and could be inaccurate. Determining which earthquake source parameters affect the initial height and time series of tsunamis will show the sensitivity of the tsunami time series to seismic source details. Therefore, a new fault generation model, based on the seismotectonic properties of the different regions, will be adopted and included in the calculation scheme. To this end, within the collaboration framework with the Portuguese authorities, a new model is being defined, starting from the seismic sources in the North Atlantic, the Caribbean and the Gulf of Cadiz. As earthquakes occurring in the North Atlantic and Caribbean sources may affect mainland Portugal and the Azores and Madeira archipelagos, these sources will also be included in the analysis. We have started by examining the geometries of those sources that spawn tsunamis to understand the effects of fault geometry and earthquake depth. References: Annunziato, A., 2007. The Tsunami Assessment Modelling System by the Joint Research Centre, Science of Tsunami Hazards, Vol. 26, pp. 70-92. Mader, C.L., 1988. Numerical Modelling of Water Waves, University of California Press, Berkeley, California. Ward, S.N., 2002. Tsunamis, Encyclopedia of Physical Science and Technology, Vol. 17, pp. 175-191, ed. Meyers, R.A., Academic Press.

  4. Grizzly Status Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spencer, Benjamin; Zhang, Yongfeng; Chakraborty, Pritam

    2014-09-01

    This report summarizes work during FY 2014 to develop capabilities to predict embrittlement of reactor pressure vessel steel and to assess the response of embrittled reactor pressure vessels to postulated accident conditions. This work has been conducted at three length scales. At the engineering scale, 3D fracture mechanics capabilities have been developed to calculate stress intensities and fracture toughnesses, to perform a deterministic assessment of whether a crack would propagate at the location of an existing flaw. This capability has been demonstrated on several types of flaws in a generic reactor pressure vessel model. Models have been developed at the scale of fracture specimens to develop a capability to determine how irradiation affects the fracture toughness of the material. Verification work has been performed on a previously-developed model to determine the sensitivity of the model to specimen geometry and size effects. The effects of irradiation on the parameters of this model have been investigated. At lower length scales, work has continued in an ongoing effort to understand how irradiation and thermal aging affect the microstructure and mechanical properties of reactor pressure vessel steel. Previously-developed atomistic kinetic Monte Carlo models have been further developed and benchmarked against experimental data. Initial work has been performed to develop models of nucleation in a phase field model. Additional modeling work has also been performed to improve the fundamental understanding of the formation mechanisms and stability of irradiation-induced matrix defects.

  5. Verification of dosimetric accuracy on the TrueBeam STx: rounded leaf effect of the high definition MLC.

    PubMed

    Kielar, Kayla N; Mok, Ed; Hsu, Annie; Wang, Lei; Luxton, Gary

    2012-10-01

    The dosimetric leaf gap (DLG) in the Varian Eclipse treatment planning system is determined during commissioning and is used to model the effect of the rounded leaf-end of the multileaf collimator (MLC). This parameter attempts to model the physical difference between the radiation and light fields and to account for inherent leakage between leaf tips. With the increased use of single-fraction high-dose treatments requiring larger monitor units comes an enhanced concern about the accuracy of leakage calculations, as leakage accounts for much of the patient dose. This study serves to verify the dosimetric accuracy of the algorithm used to model the rounded leaf effect for the TrueBeam STx, and describes a methodology for determining best-practice parameter values, given the novel capabilities of the linear accelerator such as flattening filter free (FFF) treatments and a high definition MLC (HDMLC). During commissioning, the nominal MLC position was verified and the DLG parameter was determined using MLC-defined field sizes and moving gap tests, as is common in clinical testing. Treatment plans were created, and the DLG was optimized to achieve less than 1% difference between measured and calculated dose. The DLG value found was tested on treatment plans for all energies (6 MV, 10 MV, 15 MV, 6 MV FFF, 10 MV FFF) and modalities (3D conventional, IMRT, conformal arc, VMAT) available on the TrueBeam STx. The DLG parameter found during the initial MLC testing did not match the leaf gap modeling parameter that provided the most accurate dose delivery in clinical treatment plans. Using the physical leaf gap size as the DLG for the HDMLC can lead to 5% differences between measured and calculated doses. Separate optimization of the DLG parameter using end-to-end tests must be performed to ensure dosimetric accuracy in the modeling of the rounded leaf ends for the Eclipse treatment planning system. The difference in leaf gap modeling versus physical leaf gap dimensions is more pronounced in the more recent versions of Eclipse for both the HDMLC and the Millennium MLC. Once properly commissioned and tested using a methodology based on treatment plan verification, Eclipse is able to accurately model the radiation dose delivered for SBRT treatments using the TrueBeam STx.

  6. A dynamical approach in exploring the unknown mass in the Solar system using pulsar timing arrays

    NASA Astrophysics Data System (ADS)

    Guo, Y. J.; Lee, K. J.; Caballero, R. N.

    2018-04-01

    The error in the Solar system ephemeris will lead to dipolar correlations in the residuals of a pulsar timing array for widely separated pulsars. In this paper, we utilize such correlated signals and construct a Bayesian data-analysis framework to detect the unknown mass in the Solar system and to measure the orbital parameters. The algorithm is designed to calculate the waveform of the induced pulsar-timing residuals due to the unmodelled objects following Keplerian orbits in the Solar system. The algorithm incorporates a Bayesian-analysis suite used to simultaneously analyse the pulsar-timing data of multiple pulsars to search for coherent waveforms, evaluate the detection significance of unknown objects, and measure their parameters. When the object is not detectable, our algorithm can be used to place upper limits on the mass. The algorithm is verified using simulated data sets and cross-checked with analytical calculations. We also investigate the capability of future pulsar-timing-array experiments to detect unknown objects. We expect that future pulsar-timing data can limit unknown massive objects in the Solar system to be lighter than 10⁻¹¹-10⁻¹² M⊙, or measure the mass of the Jovian system to a fractional precision of 10⁻⁸-10⁻⁹.

  7. Investigation of Patient-Specific Cerebral Aneurysm using Volumetric PIV, CFD, and In Vitro PC-MRI

    NASA Astrophysics Data System (ADS)

    Brindise, Melissa; Dickerhoff, Ben; Saloner, David; Rayz, Vitaliy; Vlachos, Pavlos

    2017-11-01

    4D PC-MRI is a modality capable of providing time-resolved velocity fields in cerebral aneurysms in vivo. The MRI-measured velocities and subsequent hemodynamic parameters such as wall shear stress, and oscillatory shear index, can help neurosurgeons decide a course of treatment for a patient, e.g. whether to treat or monitor the aneurysm. However, low spatiotemporal resolution, limited velocity dynamic range, and inherent noise of PC-MRI velocity fields can have a notable effect on subsequent calculations, and should be investigated. In this work, we compare velocity fields obtained with 4D PC-MRI, computational fluid dynamics (CFD) and volumetric particle image velocimetry (PIV), using a patient-specific model of a basilar tip aneurysm. The same in vitro model is used for all three modalities and flow input parameters are controlled. In vivo, PC-MRI data was also acquired for this patient and used for comparison. Specifically, we investigate differences in the resulting velocity fields and biases in subsequent calculations. Further, we explore the effect these errors may have on assessment of the aneurysm progression and seek to develop corrective algorithms and other methodologies that can be used to improve the accuracy of hemodynamic analysis in clinical setting.

  8. The Global Detection Capability of the IMS Seismic Network in 2013 Inferred from Ambient Seismic Noise Measurements

    NASA Astrophysics Data System (ADS)

    Gaebler, P. J.; Ceranna, L.

    2016-12-01

    All nuclear explosions - on the Earth's surface, underground, underwater or in the atmosphere - are banned by the Comprehensive Nuclear-Test-Ban Treaty (CTBT). As part of this treaty, a verification regime was put into place to detect, locate and characterize nuclear explosion testing at any time, by anyone and everywhere on the Earth. The International Monitoring System (IMS) plays a key role in the verification regime of the CTBT. Of the different monitoring techniques used in the IMS, the seismic waveform approach is the most effective technology for monitoring underground nuclear testing and for identifying and characterizing potential nuclear events. This study introduces a method of seismic threshold monitoring to assess an upper magnitude limit of a potential seismic event in a given geographical region. The method is based on ambient seismic background noise measurements at the individual IMS seismic stations as well as on global distance correction terms for body wave magnitudes, which are calculated using the seismic reflectivity method. From our investigations we conclude that a global detection threshold of around mb 4.0 can be achieved using only stations from the primary seismic network; a clear latitudinal dependence of the detection threshold can be observed between the northern and southern hemispheres. Including the stations of the auxiliary seismic IMS network results in a slight improvement of the global detection capability. However, including wave arrivals from distances greater than 120 degrees, mainly PKP-wave arrivals, leads to a significant improvement in average global detection capability; in particular, it improves the detection threshold in the southern hemisphere. We further investigate the dependence of the detection capability on spatial (latitude and longitude) and temporal (time) parameters, as well as on parameters such as source type and the percentage of operational IMS stations.
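
    A minimal sketch of the threshold-monitoring logic described above: at a given grid point, each station's magnitude threshold follows from its noise amplitude and a body-wave-magnitude distance correction, and a network threshold can be taken as the k-th smallest station threshold (the noise levels, SNR factor and Q corrections below are placeholders):

        import numpy as np

        def station_thresholds(noise_amp_nm, q_corr, snr=3.0, period_s=1.0):
            """mb threshold per station for one source grid point:
            mb = log10(A/T) + Q(distance, depth), with A the smallest
            detectable amplitude, taken here as snr times the noise level."""
            return np.log10(snr * noise_amp_nm / period_s) + q_corr

        def network_threshold(mb_station, k=3):
            """Require detections at >= k stations: the network threshold
            is the k-th smallest station threshold."""
            return np.sort(mb_station)[k - 1]

        noise = np.array([0.5, 1.2, 0.8, 2.0, 0.6])  # placeholder noise (nm)
        q = np.array([3.5, 3.2, 3.9, 3.4, 3.6])      # placeholder Q terms
        print(network_threshold(station_thresholds(noise, q)))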

  9. Evaluation of Magnetic Diagnostics for MHD Equilibrium Reconstruction of LHD Discharges

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sontag, Aaron C; Hanson, James D.; Lazerson, Sam

    2011-01-01

    Equilibrium reconstruction is the process of determining the set of parameters of an MHD equilibrium that minimize the difference between expected and experimentally observed signals. This is routinely performed in axisymmetric devices, such as tokamaks, and the reconstructed equilibrium solution is then the basis for analysis of stability and transport properties. The V3FIT code [1] has been developed to perform equilibrium reconstruction in cases where axisymmetry cannot be assumed, such as in stellarators. The present work is focused on using V3FIT to analyze plasmas in the Large Helical Device (LHD) [2], a superconducting, heliotron type device with over 25 MW of heating power that is capable of achieving both high beta (~5%) and high density (>1 × 10²¹ m⁻³). This high performance, as well as the ability to drive tens of kiloamperes of toroidal plasma current, leads to deviations of the equilibrium state from the vacuum flux surfaces. This initial study examines the effectiveness of using magnetic diagnostics as the observed signals in reconstructing experimental plasma parameters for LHD discharges. V3FIT uses the VMEC [3] 3D equilibrium solver to calculate an initial equilibrium solution with closed, nested flux surfaces based on user-specified plasma parameters. This equilibrium solution is then used to calculate the expected signals for specified diagnostics. The differences between these expected signal values and the observed values provide a starting χ² value. V3FIT then varies all of the fit parameters independently, calculating a new equilibrium and corresponding χ² for each variation. A quasi-Newton algorithm [1] is used to find the path in parameter space that leads to a minimum in χ². Effective diagnostic signals must vary in a predictable manner with the variations of the plasma parameters, and this signal variation must be of sufficient amplitude to be resolved from the signal noise. Signal effectiveness can be defined, for a specific signal and specific reconstruction parameter, as the dimensionless fractional reduction in the posterior parameter variance with respect to the signal variance; here, σi^sig is the variance of the ith signal and σj^param is the posterior variance of the jth fit parameter. The sum of all signal effectiveness values for a given reconstruction parameter is normalized to one. This quantity will be used to determine signal effectiveness for various reconstruction cases. The next section will examine the variation of the expected signals with changes in plasma pressure, and the following section will show results for reconstructing model plasmas using these signals.

  10. Estimation of process capability indices from the results of limit gauge inspection of dimensional parameters in machining industry

    NASA Astrophysics Data System (ADS)

    Masterenko, Dmitry A.; Metel, Alexander S.

    2018-03-01

    The process capability indices Cp and Cpk are widely used in modern quality management as statistical measures of the ability of a process to produce output X within specification limits. The customer's requirement to ensure Cp ≥ 1.33 is often applied in contracts. Capability index estimates may be calculated from estimates of the mean μ and the variability 6σ, which requires measuring the quality characteristic in a sample of pieces. It requires, in turn, advanced measuring devices and well-qualified staff. On the other hand, quality inspection by attributes, fulfilled with limit gauges (go/no-go), is much simpler and has a higher performance, but it does not give the numerical values of the quality characteristic. The described method allows estimating the mean and the variability of the process on the basis of the results of limit gauge inspection with a certain lower limit LCL and upper limit UCL, which separate the pieces into three groups: X < LCL (n1 pieces), LCL ≤ X < UCL (n2 pieces), and X ≥ UCL (n3 pieces). So-called Pittman-type estimates, developed by the author, are functions of n1, n2 and n3 and allow calculation of the estimated μ and σ. Thus, Cp and Cpk may also be estimated without precise measurements. The estimates can be used in quality inspection of lots of pieces as well as in monitoring and control of the manufacturing process. This is very important for improving the quality of articles in the machining industry through tolerance control.
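
    A hedged sketch of the same general idea, replacing the authors' Pittman-type estimates with a plain trinomial maximum-likelihood fit (the counts and limits are invented): estimate μ and σ from the three bin counts via the normal CDF, then form the usual indices Cp = (USL − LSL)/(6σ) and Cpk = min(USL − μ, μ − LSL)/(3σ) against the specification limits LSL and USL:

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        def fit_mu_sigma(n1, n2, n3, LCL, UCL):
            """Maximum likelihood for (mu, sigma) from gauge counts:
            n1 pieces with X < LCL, n2 with LCL <= X < UCL, n3 with X >= UCL."""
            def nll(theta):
                mu, log_s = theta
                s = np.exp(log_s)                  # keeps sigma positive
                p1 = norm.cdf(LCL, mu, s)
                p2 = norm.cdf(UCL, mu, s) - p1
                p = np.clip([p1, p2, 1 - p1 - p2], 1e-12, 1.0)
                return -(n1 * np.log(p[0]) + n2 * np.log(p[1])
                         + n3 * np.log(p[2]))
            res = minimize(nll, x0=[(LCL + UCL) / 2, np.log((UCL - LCL) / 4)])
            return res.x[0], np.exp(res.x[1])

        mu, sigma = fit_mu_sigma(12, 180, 8, LCL=9.95, UCL=10.05)
        LSL, USL = 9.94, 10.06                     # specification limits
        Cp = (USL - LSL) / (6 * sigma)
        Cpk = min(USL - mu, mu - LSL) / (3 * sigma)
        print(f"mu={mu:.4f}  sigma={sigma:.4f}  Cp={Cp:.2f}  Cpk={Cpk:.2f}")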

  11. Coupled reactors analysis: New needs and advances using Monte Carlo methodology

    DOE PAGES

    Aufiero, M.; Palmiotti, G.; Salvatores, M.; ...

    2016-08-20

    Coupled reactors and the coupling features of large or heterogeneous core reactors can be investigated with the Avery theory, which allows a physics understanding of the main features of these systems. However, the complex geometries that are often encountered in association with coupled reactors require a detailed geometry description that can be easily provided by modern Monte Carlo (MC) codes. This implies a MC calculation of the coupling parameters defined by Avery and of the sensitivity coefficients that allow further detailed physics analysis. The results presented in this paper show that the MC code SERPENT has been successfully modified to meet the required capabilities.

  12. Theoretical interpretation of the nuclear structure of 88Se within the ACM and the QPM models.

    NASA Astrophysics Data System (ADS)

    Gratchev, I. N.; Thiamova, G.; Alexa, P.; Simpson, G. S.; Ramdhane, M.

    2018-02-01

    The four-parameter algebraic collective model (ACM) Hamiltonian is used to describe the nuclear structure of 88Se. It is shown that the ACM is capable of providing a reasonable description of the excitation energies and relative positions of the ground-state band and γ band. The most probable interpretation of the nuclear structure of 88Se is that of a transitional nucleus. The Quasiparticle-plus-Phonon Model (QPM) was also applied to describe the nuclear motion in 88Se. Preliminary calculations show that the collectivity of the second excited 2⁺ state (2₂⁺) is weak and that this state contains a strong two-quasiparticle component.

  13. Adaptive Neuron Model: An architecture for the rapid learning of nonlinear topological transformations

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul (Inventor)

    1994-01-01

    A method for the rapid learning of nonlinear mappings and topological transformations using a dynamically reconfigurable artificial neural network is presented. This fully-recurrent Adaptive Neuron Model (ANM) network was applied to the highly degenerate inverse kinematics problem in robotics, and its performance evaluation is benchmarked. Once trained, the resulting neuromorphic architecture was implemented in custom analog neural network hardware and the parameters capturing the functional transformation downloaded onto the system. This neuroprocessor, capable of 10⁹ ops/sec, was interfaced directly to a three-degree-of-freedom Heathkit robotic manipulator. Calculation of the hardware feed-forward pass for this mapping was benchmarked at approximately 10 microseconds.

  14. Mission Analysis for High Specific Impulse Deep Space Exploration

    NASA Technical Reports Server (NTRS)

    Adams, Robert B.; Polsgrove, Tara; Brady, Hugh J. (Technical Monitor)

    2002-01-01

    This paper describes trajectory calculations for high specific impulse engines. Specific impulses on the order of 10,000 to 100,000 sec are predicted in a variety of fusion powered propulsion systems. This paper and its companion paper seek to build on analyses in the literature to yield an analytical routine for determining time of flight and payload fraction to a predetermined destination. The companion paper will compare the results of this analysis to the trajectories determined by several trajectory codes. The major parameters that affect time of flight and payload fraction will be identified and their sensitivities quantified. A review of existing fusion propulsion concepts and their capabilities will also be tabulated.
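
    For a sense of scale, the ideal-rocket (Tsiolkovsky) equation already shows why specific impulses of this magnitude matter: the achievable final-to-initial mass ratio for a given mission Δv improves exponentially with Isp. The Δv below is an arbitrary example, not a figure from the paper:

        import numpy as np

        g0 = 9.80665  # standard gravity, m/s^2

        def mass_ratio(delta_v, isp):
            """m_final / m_initial from the Tsiolkovsky rocket equation;
            structure and payload are lumped together in this sketch."""
            return np.exp(-delta_v / (g0 * isp))

        dv = 100e3  # example mission delta-v: 100 km/s
        for isp in (1_000, 10_000, 100_000):
            print(f"Isp = {isp:>7,} s -> mass ratio = {mass_ratio(dv, isp):.4f}")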

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kelly, Kevin J.; Parke, Stephen J.

    Quantum mechanical interactions between neutrinos and matter along the path of propagation, the Wolfenstein matter effect, are of particular importance for the upcoming long-baseline neutrino oscillation experiments, specifically the Deep Underground Neutrino Experiment (DUNE). Here, we explore specifically what about the matter density profile can be measured by DUNE, considering both the shape and normalization of the profile between the neutrinos' origin and detection. Additionally, we explore the capability of a perturbative method for calculating neutrino oscillation probabilities and whether this method is suitable for DUNE. We also briefly quantitatively explore the ability of DUNE to measure the Earth's matter density, and the impact of performing this measurement on measuring standard neutrino oscillation parameters.

  16. Effects of the gaseous and liquid water content of the atmosphere on range delay and Doppler frequency

    NASA Technical Reports Server (NTRS)

    Flock, W. L.

    1981-01-01

    When high precision is required for range measurement on Earth space paths, it is necessary to correct as accurately as possible for excess range delays due to the dry air, water vapor, and liquid water content of the atmosphere. Calculations based on representative values of atmospheric parameters are useful for illustrating the order of magnitude of the expected delays. Range delay, time delay, and phase delay are simply and directly related. Doppler frequency variations or noise are proportional to the time rate of change of excess range delay. Tropospheric effects were examined as part of an overall consideration of the capability of precision two way ranging and Doppler systems.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Zheng; Huang, Hongying; Yan, Jue

    We develop third-order maximum-principle-satisfying direct discontinuous Galerkin methods [8], [9], [19] and [21] for convection diffusion equations on unstructured triangular meshes. We carefully calculate the normal derivative numerical flux across element edges and prove that, with a proper choice of the parameter pair (β0, β1) in the numerical flux formula, the quadratic polynomial solution satisfies the strict maximum principle. The polynomial solution is bounded within the given range and third order accuracy is maintained. There is no geometric restriction on the meshes, and obtuse triangles are allowed in the partition. A sequence of numerical examples is carried out to demonstrate the accuracy and capability of the maximum-principle-satisfying limiter.

  18. Nucleation theory in Langevin's approach and lifetime of a Brownian particle in potential wells.

    PubMed

    Alekseechkin, N V

    2008-07-14

    The multivariable theory of nucleation suggested by Alekseechkin [J. Chem. Phys. 124, 124512 (2006)] is further developed in the context of Langevin's approach. The use of this approach essentially enhances the capability of the nucleation theory, because it makes it possible to consider the cases of small friction which are not taken into account by the classical Zel'dovich-Frenkel theory and its multivariable extensions. The procedure for the phenomenological determination of the nucleation parameters is described. Using the similarity of the Kramers model with that of nucleation, the lifetime of a Brownian particle in potential wells in various dimensionalities is calculated with the help of the expression for the steady state nucleation rate.
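
    For reference, the classical result that this approach generalizes is Kramers' escape rate; in the high-friction (spatial-diffusion) limit, the inverse lifetime of a particle in a well of depth ΔU is

        k = \frac{\omega_a \omega_b}{2\pi\gamma}\,\exp\!\left(-\frac{\Delta U}{k_B T}\right),

    where ω_a and ω_b are the angular frequencies at the well bottom and at the barrier top and γ is the friction coefficient. In the opposite low-friction (energy-diffusion) regime the rate instead grows in proportion to γ, which is precisely the small-friction behavior the Langevin treatment above is designed to capture.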

  19. Modeling electron emission and surface effects from diamond cathodes

    NASA Astrophysics Data System (ADS)

    Dimitrov, D. A.; Smithe, D.; Cary, J. R.; Ben-Zvi, I.; Rao, T.; Smedley, J.; Wang, E.

    2015-02-01

    We developed modeling capabilities, within the Vorpal particle-in-cell code, for three-dimensional simulations of surface effects and electron emission from semiconductor photocathodes. They include calculation of emission probabilities using general, piece-wise continuous, space-time dependent surface potentials, effective mass, and band bending field effects. We applied these models, in combination with previously implemented capabilities for modeling charge generation and transport in diamond, to investigate the emission dependence on applied electric field in the range from approximately 2 MV/m to 17 MV/m along the [100] direction. The simulation results were compared to experimental data. For the considered parameter regime, conservation of transverse electron momentum (in the plane of the emission surface) allows direct emission from only two (parallel to [100]) of the six equivalent lowest conduction band valleys. When the electron affinity χ is the only parameter varied in the simulations, the value χ = 0.31 eV leads to overall qualitative agreement with the probability of emission deduced from experiments. Including band bending in the simulations improves the agreement with the experimental data, particularly at low applied fields, but not significantly. Using surface potentials with different profiles further allows us to investigate the emission as a function of potential barrier height, width, and vacuum level position. However, adding surface patches with different levels of hydrogenation, modeled with position-dependent electron affinity, leads to the closest agreement with the experimental data.

  20. Sensitivity of NTCP parameter values against a change of dose calculation algorithm.

    PubMed

    Brink, Carsten; Berg, Martin; Nielsen, Morten

    2007-09-01

    Optimization of radiation treatment planning requires estimations of the normal tissue complication probability (NTCP). A number of models exist that estimate NTCP from a calculated dose distribution. Since different dose calculation algorithms use different approximations the dose distributions predicted for a given treatment will in general depend on the algorithm. The purpose of this work is to test whether the optimal NTCP parameter values change significantly when the dose calculation algorithm is changed. The treatment plans for 17 breast cancer patients have retrospectively been recalculated with a collapsed cone algorithm (CC) to compare the NTCP estimates for radiation pneumonitis with those obtained from the clinically used pencil beam algorithm (PB). For the PB calculations the NTCP parameters were taken from previously published values for three different models. For the CC calculations the parameters were fitted to give the same NTCP as for the PB calculations. This paper demonstrates that significant shifts of the NTCP parameter values are observed for three models, comparable in magnitude to the uncertainties of the published parameter values. Thus, it is important to quote the applied dose calculation algorithm when reporting estimates of NTCP parameters in order to ensure correct use of the models.
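
    The abstract does not name the three NTCP models; one widely used example of the kind of model being fitted is the Lyman-Kutcher-Burman (LKB) model, sketched here with illustrative (not fitted) parameter values:

        from scipy.stats import norm

        def lkb_ntcp(geud, td50, m):
            """LKB model: NTCP = Phi((gEUD - TD50) / (m * TD50)), where gEUD
            is the generalized equivalent uniform dose of the organ (Gy),
            TD50 the dose giving 50% complication probability, m the slope."""
            return norm.cdf((geud - td50) / (m * td50))

        # illustrative values only, not the paper's fitted parameters
        print(lkb_ntcp(geud=20.0, td50=30.0, m=0.35))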

  1. Sensitivity of NTCP parameter values against a change of dose calculation algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brink, Carsten; Berg, Martin; Nielsen, Morten

    2007-09-15

    Optimization of radiation treatment planning requires estimations of the normal tissue complication probability (NTCP). A number of models exist that estimate NTCP from a calculated dose distribution. Since different dose calculation algorithms use different approximations the dose distributions predicted for a given treatment will in general depend on the algorithm. The purpose of this work is to test whether the optimal NTCP parameter values change significantly when the dose calculation algorithm is changed. The treatment plans for 17 breast cancer patients have retrospectively been recalculated with a collapsed cone algorithm (CC) to compare the NTCP estimates for radiation pneumonitis with those obtained from the clinically used pencil beam algorithm (PB). For the PB calculations the NTCP parameters were taken from previously published values for three different models. For the CC calculations the parameters were fitted to give the same NTCP as for the PB calculations. This paper demonstrates that significant shifts of the NTCP parameter values are observed for three models, comparable in magnitude to the uncertainties of the published parameter values. Thus, it is important to quote the applied dose calculation algorithm when reporting estimates of NTCP parameters in order to ensure correct use of the models.

  2. Sanitation marketing: A systematic review and theoretical critique using the capability approach.

    PubMed

    Barrington, D J; Sridharan, S; Shields, K F; Saunders, S G; Souter, R T; Bartram, J

    2017-12-01

    Sanitation is a human right that benefits health. As such, technical and behavioural interventions are widely implemented to increase the number of people using sanitation facilities. These include sanitation marketing interventions (SMIs), in which external support agencies (ESAs) use a hybrid of commercial and social marketing tools to increase supply of, and demand for, sanitation products and services. However, there is little critical discourse on SMIs, or independent rigorous analysis on whether they increase or reduce well-being. Most available information is from ESAs about their own SMI implementation. We systematically reviewed the grey and peer-reviewed literature on sanitation marketing, including qualitatively analysing and calculating descriptive statistics for the parameters measured, or intended to be measured, in publications reporting on 33 SMIs. Guided by the capability approach to development we identified that publications for most SMIs (n = 31, 94%) reported on commodities, whilst fewer reported on parameters related to impacts on well-being (i.e., functionings, n = 22, 67%, and capabilities, n = 20, 61%). When evaluating future SMIs, it may be useful to develop a list of contextualised well-being indicators for the particular SMI's location, taking into account local cultural norms, with this list ideally co-produced with local stakeholders. We identified two common practices in SMIs that can reduce well-being and widen well-being inequalities; namely, the promotion of conspicuous consumption and assaults on dignity, and we discuss the mechanisms by which such impacts occur. We recommend that ESAs understand sanitation marketing's potential to reduce well-being and design SMIs to minimize such detrimental impacts. Throughout the implementation phase ESAs should continuously monitor for well-being impacts and adapt practices to optimise well-being outcomes for all involved. Copyright © 2017. Published by Elsevier Ltd.

  3. Measurement of regional compliance using 4DCT images for assessment of radiation treatment1

    PubMed Central

    Zhong, Hualiang; Jin, Jian-yue; Ajlouni, Munther; Movsas, Benjamin; Chetty, Indrin J.

    2011-01-01

    Purpose: Radiation-induced damage, such as inflammation and fibrosis, can compromise ventilation capability of local functional units (alveoli) of the lung. Ventilation function as measured with ventilation images, however, is often complicated by the underlying mechanical variations. The purpose of this study is to present a 4DCT-based method to measure the regional ventilation capability, namely, regional compliance, for the evaluation of radiation-induced lung damage. Methods: Six 4DCT images were investigated in this study: One previously used in the generation of a POPI model and the other five acquired at Henry Ford Health System. A tetrahedral geometrical model was created and scaled to encompass each of the 4DCT image domains. Image registrations were performed on each of the 4DCT images using a multiresolution Demons algorithm. The images at the end of exhalation were selected as a reference. Images at other exhalation phases were registered to the reference phase. For the POPI-modeled patient, each of these registration instances was validated using 40 landmarks. The displacement vector fields (DVFs) were used first to calculate the volumetric variation of each tetrahedron, which represents the change in the air volume. The calculated results were interpolated to generate 3D ventilation images. With the computed DVF, a finite element method (FEM) framework was developed to compute the stress images of the lung tissue. The regional compliance was then defined as the ratio of the ventilation and stress values and was calculated for each phase. Based on iterative FEM simulations, the potential range of the mechanical parameters for the lung was determined by comparing the model-computed average stress to the clinical reference value of airway pressure. The effect of the parameter variations on the computed stress distributions was estimated using Pearson correlation coefficients. Results: For the POPI-modeled patient, five exhalation phases from the start to the end of exhalation were denoted by Pi, i=1,…,5, respectively. The average lung volume variation relative to the reference phase (P5) was reduced from 18% at P1 to 4.8% at P4. The average stress at phase Pi was 1.42, 1.34, 0.74, and 0.28 kPa, and the average regional compliance was 0.19, 0.20, 0.20, and 0.24 for i=1,…,4, respectively. For the other five patients, their average Rv value at the end-inhalation phase was 21.1%, 19.6%, 22.4%, 22.5%, and 18.8%, respectively, and the regional compliance averaged over all six patients is 0.2. For elasticity parameters chosen from the potential parameter range, the resultant stress distributions were found to be similar to each other with Pearson correlation coefficients greater than 0.81. Conclusions: A 4DCT-based mechanical model has been developed to calculate the ventilation and stress images of the lung. The resultant regional compliance represents the lung’s elasticity property and is potentially useful in correlating regions of lung damage with radiation dose following a course of radiation therapy. PMID:21520868
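
    The regional compliance defined above is a per-element ratio. The sketch below (hypothetical element and stress value, chosen so the result lands near the reported average compliance of 0.2) shows how the ventilation term follows from the tetrahedron volume change under the registration-derived displacements.

    ```python
    import numpy as np

    def tet_volume(v):
        """Signed volume of a tetrahedron given a 4x3 vertex array."""
        return np.linalg.det(v[1:] - v[0]) / 6.0

    def regional_compliance(ref_verts, dvf, stress):
        """Ventilation (fractional air-volume change) over stress.

        ref_verts : 4x3 vertices at the reference (end-exhale) phase
        dvf       : 4x3 displacement vectors from the registration
        stress    : FEM-computed stress for this element (kPa)
        """
        v_ref = tet_volume(ref_verts)
        v_def = tet_volume(ref_verts + dvf)
        ventilation = (v_def - v_ref) / v_ref  # relative volume change
        return ventilation / stress            # regional compliance

    # Hypothetical element: ~5% volume expansion, 0.25 kPa stress
    verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
    dvf = 0.0167 * verts  # ~1.67% linear scale -> ~5% volume increase
    print(regional_compliance(verts, dvf, stress=0.25))  # ~0.2
    ```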

  4. Measurement of regional compliance using 4DCT images for assessment of radiation treatment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhong Hualiang; Jin Jianyue; Ajlouni, Munther

    2011-03-15

    Purpose: Radiation-induced damage, such as inflammation and fibrosis, can compromise ventilation capability of local functional units (alveoli) of the lung. Ventilation function as measured with ventilation images, however, is often complicated by the underlying mechanical variations. The purpose of this study is to present a 4DCT-based method to measure the regional ventilation capability, namely, regional compliance, for the evaluation of radiation-induced lung damage. Methods: Six 4DCT images were investigated in this study: One previously used in the generation of a POPI model and the other five acquired at Henry Ford Health System. A tetrahedral geometrical model was created and scaled to encompass each of the 4DCT image domains. Image registrations were performed on each of the 4DCT images using a multiresolution Demons algorithm. The images at the end of exhalation were selected as a reference. Images at other exhalation phases were registered to the reference phase. For the POPI-modeled patient, each of these registration instances was validated using 40 landmarks. The displacement vector fields (DVFs) were used first to calculate the volumetric variation of each tetrahedron, which represents the change in the air volume. The calculated results were interpolated to generate 3D ventilation images. With the computed DVF, a finite element method (FEM) framework was developed to compute the stress images of the lung tissue. The regional compliance was then defined as the ratio of the ventilation and stress values and was calculated for each phase. Based on iterative FEM simulations, the potential range of the mechanical parameters for the lung was determined by comparing the model-computed average stress to the clinical reference value of airway pressure. The effect of the parameter variations on the computed stress distributions was estimated using Pearson correlation coefficients. Results: For the POPI-modeled patient, five exhalation phases from the start to the end of exhalation were denoted by Pi, i=1,...,5, respectively. The average lung volume variation relative to the reference phase (P5) was reduced from 18% at P1 to 4.8% at P4. The average stress at phase Pi was 1.42, 1.34, 0.74, and 0.28 kPa, and the average regional compliance was 0.19, 0.20, 0.20, and 0.24 for i=1,...,4, respectively. For the other five patients, their average Rv value at the end-inhalation phase was 21.1%, 19.6%, 22.4%, 22.5%, and 18.8%, respectively, and the regional compliance averaged over all six patients is 0.2. For elasticity parameters chosen from the potential parameter range, the resultant stress distributions were found to be similar to each other with Pearson correlation coefficients greater than 0.81. Conclusions: A 4DCT-based mechanical model has been developed to calculate the ventilation and stress images of the lung. The resultant regional compliance represents the lung's elasticity property and is potentially useful in correlating regions of lung damage with radiation dose following a course of radiation therapy.

  5. Hot zero power reactor calculations using the Insilico code

    DOE PAGES

    Hamilton, Steven P.; Evans, Thomas M.; Davidson, Gregory G.; ...

    2016-03-18

    In this paper we describe the reactor physics simulation capabilities of the Insilico code. A description of the various capabilities of the code is provided, including detailed discussion of the geometry, meshing, cross section processing, and neutron transport options. Numerical results demonstrate that the Insilico SP_N solver with pin-homogenized cross section generation is capable of delivering highly accurate full-core simulation of various PWR problems. Comparison to both Monte Carlo calculations and measured plant data is provided.

  6. Improving flood forecasting capability of physically based distributed hydrological model by parameter optimization

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Li, J.; Xu, H.

    2015-10-01

    Physically based distributed hydrological models discretize the terrain of the whole catchment into a number of grid cells at fine resolution, assimilate terrain data and precipitation to individual cells, and are regarded as having the potential to improve the simulation and prediction of catchment hydrological processes. In the early stage, physically based distributed hydrological models were assumed to derive their parameters from terrain properties directly, so that no calibration would be needed; unfortunately, the uncertainty associated with this parameter derivation is very high, which has limited their application in flood forecasting, so parameter optimization may still be necessary. This study has two main purposes: first, to propose a parameter optimization method for physically based distributed hydrological models in catchment flood forecasting using the PSO algorithm, to test its competence, and to improve its performance; second, to explore the possibility of improving the flood forecasting capability of such models through parameter optimization. In this paper, based on the scalar concept, a general framework for parameter optimization of PBDHMs for catchment flood forecasting is first proposed that can be used for all PBDHMs. Then, with the Liuxihe model, a physically based distributed hydrological model proposed for catchment flood forecasting, as the study model, an improved Particle Swarm Optimization (PSO) algorithm is developed for its parameter optimization; the improvements include adopting a linearly decreasing inertia weight strategy and an arccosine function strategy to adjust the acceleration coefficients. The method has been tested in two catchments of different sizes in southern China, and the results show that the improved PSO algorithm can be used for Liuxihe model parameter optimization effectively and can largely improve the model's capability in catchment flood forecasting, thus proving that parameter optimization is necessary to improve the flood forecasting capability of physically based distributed hydrological models. It has also been found that the appropriate particle number and maximum evolution number of the PSO algorithm for Liuxihe model catchment flood forecasting are 20 and 30, respectively.
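
    A minimal sketch of such a PSO loop is given below, with the linearly decreasing inertia weight the authors adopt; the particle number (20) and iteration count (30) match the values the study recommends. The objective is a stand-in toy function, and since the arccosine acceleration schedule is not fully specified in the abstract, fixed default coefficients (2.0) are used instead.

    ```python
    import numpy as np

    def pso(cost, bounds, n_particles=20, n_iter=30, seed=0):
        """PSO with a linearly decreasing inertia weight (0.9 -> 0.4)."""
        rng = np.random.default_rng(seed)
        lo, hi = bounds
        x = rng.uniform(lo, hi, (n_particles, lo.size))
        v = np.zeros_like(x)
        pbest = x.copy()
        pcost = np.array([cost(p) for p in x])
        g = pbest[pcost.argmin()].copy()
        for t in range(n_iter):
            w = 0.9 - (0.9 - 0.4) * t / (n_iter - 1)  # linear decrease
            r1, r2 = rng.random((2, *x.shape))
            v = w * v + 2.0 * r1 * (pbest - x) + 2.0 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)
            c = np.array([cost(p) for p in x])
            better = c < pcost
            pbest[better], pcost[better] = x[better], c[better]
            g = pbest[pcost.argmin()].copy()
        return g, float(pcost.min())

    # Toy stand-in for a model-calibration objective:
    sphere = lambda p: float(np.sum(p ** 2))
    print(pso(sphere, (np.full(5, -10.0), np.full(5, 10.0))))
    ```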

  7. Determination of the optimal mesh parameters for Iguassu centrifuge flow and separation calculations

    NASA Astrophysics Data System (ADS)

    Romanihin, S. M.; Tronin, I. V.

    2016-09-01

    We present the method and results of determining optimal computational mesh parameters for axisymmetric modeling of flow and separation in the Iguassu gas centrifuge. The aim of this work was to determine mesh parameters that provide relatively low computational cost without loss of accuracy. We use a direct search optimization algorithm to calculate the optimal mesh parameters. The obtained parameters were tested by calculating the optimal working regime of the Iguassu GC. The separative power calculated using the optimal mesh parameters differs by less than 0.5% from the result obtained on the detailed mesh. The presented method can be used to determine optimal mesh parameters for the Iguassu GC at different rotor speeds.

  8. Net thrust calculation sensitivity of an afterburning turbofan engine to variations in input parameters

    NASA Technical Reports Server (NTRS)

    Hughes, D. L.; Ray, R. J.; Walton, J. T.

    1985-01-01

    The calculated value of net thrust of an aircraft powered by a General Electric F404-GE-400 afterburning turbofan engine was evaluated for its sensitivity to various input parameters. The effects of a 1.0-percent change in each input parameter on the calculated value of net thrust with two calculation methods are compared. This paper presents the results of these comparisons and also gives the estimated accuracy of the overall net thrust calculation as determined from the influence coefficients and estimated parameter measurement accuracies.
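
    The influence-coefficient idea can be sketched directly: perturb each input by 1%, record the relative change in computed net thrust, and combine the coefficients with the measurement accuracies in root-sum-square fashion. The thrust relation and numbers below are a hypothetical simplification, not the paper's F404 calculation methods.

    ```python
    import numpy as np

    def influence_coefficients(thrust_fn, params, delta=0.01):
        """Percent change in net thrust per 1% change in each input."""
        f0 = thrust_fn(params)
        coeffs = {}
        for name, value in params.items():
            perturbed = dict(params, **{name: value * (1.0 + delta)})
            coeffs[name] = ((thrust_fn(perturbed) - f0) / f0) / delta
        return coeffs

    def thrust_uncertainty(coeffs, accuracies):
        """Root-sum-square of influence x measurement accuracy (%)."""
        return np.sqrt(sum((coeffs[k] * accuracies[k]) ** 2 for k in coeffs))

    # Hypothetical gross-thrust-minus-ram-drag relation:
    fn = lambda p: p["mdot"] * (p["v_exit"] - p["v_flight"])
    params = {"mdot": 70.0, "v_exit": 600.0, "v_flight": 250.0}
    acc = {"mdot": 1.0, "v_exit": 0.5, "v_flight": 0.5}  # measurement %
    c = influence_coefficients(fn, params)
    print(c, thrust_uncertainty(c, acc))
    ```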

  9. Efficient full decay inversion of MRS data with a stretched-exponential approximation of the T2* distribution

    NASA Astrophysics Data System (ADS)

    Behroozmand, Ahmad A.; Auken, Esben; Fiandaca, Gianluca; Christiansen, Anders Vest; Christensen, Niels B.

    2012-08-01

    We present a new, efficient and accurate forward modelling and inversion scheme for magnetic resonance sounding (MRS) data. MRS, also called surface nuclear magnetic resonance (surface-NMR), is the only non-invasive geophysical technique that directly detects free water in the subsurface. Based on the physical principle of NMR, protons of the water molecules in the subsurface are excited at a specific frequency, and the superposition of signals from all protons within the excited earth volume is measured to estimate the subsurface water content and other hydrological parameters. In this paper, a new inversion scheme is presented in which the entire data set is used, and the multi-exponential behaviour of the NMR signal is approximated by the simple stretched-exponential approach. Compared to a mono-exponential interpretation of the decaying NMR signal, we introduce a single extra parameter, the stretching exponent, which helps describe the porosity in terms of a single relaxation time parameter and helps to determine the correct initial amplitude and relaxation time of the signal. Moreover, compared to a multi-exponential interpretation of the MRS data, the decay behaviour is approximated with considerably fewer parameters. The forward response is calculated in an efficient numerical manner in terms of magnetic field calculation, discretization and integration schemes, which allows fast computation while maintaining accuracy. A piecewise linear transmitter loop is considered for electromagnetic modelling of conductivities in the layered half-space, providing electromagnetic modelling of arbitrary loop shapes. The decaying signal is integrated over time windows, called gates, which increases the signal-to-noise ratio, particularly at late times, and the data vector is described with a minimum number of samples, that is, gates. The accuracy of the forward response is investigated by comparing an MRS forward response with responses from three other approaches, outlining significant differences between the approaches. Altogether, a full MRS forward response is calculated in about 20 s and scales so that on 10 processors the calculation time is reduced to about 3-4 s. The proposed approach is examined through synthetic data and through a field example, which demonstrate the capability of the scheme. The results of the field example agree well with the information from an on-site borehole.
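
    A sketch of the gated stretched-exponential forward model may clarify the data reduction: the decay V(t) = V0·exp(-(t/T2*)^C) is averaged over log-spaced time gates, so late-time gates pool many noisy samples. Gate edges and parameter values below are illustrative.

    ```python
    import numpy as np

    def gated_stretched_exp(v0, t2star, c, gates):
        """Gate-averaged stretched-exponential MRS decay.

        Each gate (t0, t1) averages V(t) = v0 * exp(-(t/t2star)**c)
        over the interval, raising signal-to-noise at late times.
        """
        out = []
        for t0, t1 in gates:
            t = np.linspace(t0, t1, 64)
            out.append(np.mean(v0 * np.exp(-(t / t2star) ** c)))
        return np.array(out)

    # Log-spaced gates out to 1 s (illustrative values):
    edges = np.geomspace(0.005, 1.0, 12)
    gates = list(zip(edges[:-1], edges[1:]))
    print(gated_stretched_exp(v0=100.0, t2star=0.2, c=0.8, gates=gates))
    ```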

  10. Predicting pKa values from EEM atomic charges

    PubMed Central

    2013-01-01

    The acid dissociation constant pKa is a very important molecular property, and there is strong interest in the development of reliable and fast methods for pKa prediction. We have evaluated the pKa prediction capabilities of QSPR models based on empirical atomic charges calculated by the Electronegativity Equalization Method (EEM). Specifically, we collected 18 EEM parameter sets created for 8 different quantum mechanical (QM) charge calculation schemes. Afterwards, we prepared a training set of 74 substituted phenols. Additionally, for each molecule we generated its dissociated form by removing the phenolic hydrogen. For all the molecules in the training set, we then calculated EEM charges using the 18 parameter sets, and the QM charges using the 8 above-mentioned charge calculation schemes. For each type of QM and EEM charges, we created one QSPR model employing charges from the non-dissociated molecules (three-descriptor QSPR models), and one QSPR model based on charges from both dissociated and non-dissociated molecules (five-descriptor QSPR models). Afterwards, we calculated the quality criteria and evaluated all the QSPR models obtained. We found that QSPR models employing the EEM charges proved to be a good approach for the prediction of pKa (63% of these models had R2 > 0.9, while the best had R2 = 0.924). As expected, QM QSPR models provided more accurate pKa predictions than the EEM QSPR models, but the differences were not significant. Furthermore, a big advantage of the EEM QSPR models is that their descriptors (i.e., EEM atomic charges) can be calculated markedly faster than the QM charge descriptors. Moreover, we found that the EEM QSPR models are not as strongly influenced by the selection of the charge calculation approach as the QM QSPR models. The robustness of the EEM QSPR models was subsequently confirmed by cross-validation. The applicability of EEM QSPR models to other chemical classes was illustrated by a case study focused on carboxylic acids. In summary, EEM QSPR models constitute a fast and accurate pKa prediction approach that can be used in virtual screening. PMID:23574978
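
    The QSPR step itself is ordinary linear regression on charge descriptors. A minimal sketch, with invented EEM charges and pKa values standing in for the 74-phenol training set (three descriptors, as in the non-dissociated-molecule models):

    ```python
    import numpy as np

    def fit_qspr(descriptors, pka):
        """Fit a linear QSPR model pKa ~ w . q + b by least squares."""
        X = np.hstack([descriptors, np.ones((len(pka), 1))])
        coef, *_ = np.linalg.lstsq(X, pka, rcond=None)
        return coef

    def r_squared(descriptors, pka, coef):
        X = np.hstack([descriptors, np.ones((len(pka), 1))])
        resid = pka - X @ coef
        return 1.0 - np.sum(resid ** 2) / np.sum((pka - pka.mean()) ** 2)

    # Hypothetical charges on the phenolic O, H and ipso C for a few
    # substituted phenols (all values invented for illustration):
    q = np.array([[-0.52, 0.31, 0.12],
                  [-0.49, 0.33, 0.15],
                  [-0.55, 0.29, 0.10],
                  [-0.47, 0.34, 0.17],
                  [-0.53, 0.30, 0.11]])
    pka = np.array([9.9, 9.2, 10.3, 8.6, 10.1])
    coef = fit_qspr(q, pka)
    print(coef, r_squared(q, pka, coef))
    ```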

  11. Inter-comparison of Dose Distributions Calculated by FLUKA, GEANT4, MCNP, and PHITS for Proton Therapy

    NASA Astrophysics Data System (ADS)

    Yang, Zi-Yi; Tsai, Pi-En; Lee, Shao-Chun; Liu, Yen-Chiang; Chen, Chin-Cheng; Sato, Tatsuhiko; Sheu, Rong-Jiun

    2017-09-01

    The dose distributions from proton pencil beam scanning were calculated by FLUKA, GEANT4, MCNP, and PHITS in order to investigate their applicability to proton radiotherapy. The first case studied was the integrated depth dose curves (IDDCs) from a 100-MeV and a 226-MeV proton pencil beam impinging on a water phantom. The calculated IDDCs agree with each other as long as each code employs 75 eV for the ionization potential of water. The second case considered the same conditions but with proton energies in a Gaussian distribution. Comparison to measurement indicates that the inter-code differences might be due not only to different stopping powers but also to the nuclear physics models. How the physics parameter settings affect the computation time is also discussed. In the third case, the applicability of each code to pencil beam scanning was confirmed by delivering a uniform volumetric dose distribution based on the treatment plan; the results showed general agreement among the codes, the treatment plan, and the measurement, except for some deviations in the penumbra region. This study demonstrates that the selected codes are all capable of performing dose calculations for therapeutic scanning proton beams, given proper physics settings.

  12. Influence of magnetite, ilmenite and boron carbide on radiation attenuation of polyester composites

    NASA Astrophysics Data System (ADS)

    El-Sarraf, M. A.; El-Sayed Abdo, A.

    2013-07-01

    This work is concerned with studying polyester/magnetite CUP/Mag (ρ=2.75 g cm-3) and polyester/ilmenite CUP/Ilm (ρ=2.7 g cm-3) composites for the shielding of medical facilities, laboratory hot cells, and other purposes. Mechanical and physical properties such as compressive, flexural and impact strengths, as well as a.c. electrical conductivity, specific heat, water absorption and porosity, were measured to evaluate the composites' capabilities for radiation shielding. A collimated beam from a fission 252Cf (100 µg) neutron source and a neutron-gamma spectrometer with a stilbene scintillator, based on the zero-crossover method and the pulse shape discrimination (P.S.D.) technique, were used to measure neutron and gamma-ray spectra. Fluxes of thermal neutrons were measured using a BF3 detector and a thermal neutron detection system. The attenuation parameters, namely the macroscopic effective removal cross-section ΣR for fast neutrons, the total attenuation coefficient µ for gamma rays, and the macroscopic cross-section Σ for thermal neutrons, were evaluated. Theoretical calculations with the MCNP-4C2 code were used to calculate ΣR, µ, and Σ; the MERCSF-N program was also used to calculate the macroscopic effective removal cross-section ΣR. Measured and calculated results were compared and reasonable agreement was found.
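
    The removal cross-section extraction follows the usual exponential attenuation law I/I0 = exp(-ΣR·x). A minimal sketch with hypothetical slab thicknesses and transmissions:

    ```python
    import numpy as np

    def removal_cross_section(thickness, transmission):
        """Fit Sigma_R (cm^-1) from I/I0 = exp(-Sigma_R * x) data."""
        # ln(I0/I) = Sigma_R * x -> slope of a zero-intercept line fit
        x = np.asarray(thickness)
        y = -np.log(np.asarray(transmission))
        return float(np.sum(x * y) / np.sum(x * x))

    # Hypothetical fast-neutron transmission through composite slabs:
    x_cm = [2.0, 4.0, 6.0, 8.0]
    t = [0.82, 0.67, 0.55, 0.45]
    sigma_r = removal_cross_section(x_cm, t)
    print(sigma_r, "cm^-1; half-value layer:", np.log(2) / sigma_r, "cm")
    ```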

  13. User's Manual: Routines for Radiative Heat Transfer and Thermometry

    NASA Technical Reports Server (NTRS)

    Risch, Timothy K.

    2016-01-01

    Determining the intensity and spectral distribution of radiation emanating from a heated surface has applications in many areas of science and engineering. Areas of research in which the quantification of spectral radiation is used routinely include thermal radiation heat transfer, infrared signature analysis, and radiation thermometry. In the analysis of radiation, it is helpful to be able to predict the radiative intensity and the spectral distribution of the emitted energy. Presented in this report is a set of routines written in Microsoft Visual Basic for Applications (VBA) (Microsoft Corporation, Redmond, Washington) and incorporating functions specific to Microsoft Excel (Microsoft Corporation, Redmond, Washington) that are useful for predicting the radiative behavior of heated surfaces. These routines include functions for calculating quantities of primary importance to engineers and scientists. In addition, the routines also provide the capability to use such information to determine surface temperatures from spectral intensities and for calculating the sensitivity of the surface temperature measurements to unknowns in the input parameters.
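
    The core of any such routine set is Planck's law and its inversion for radiation thermometry. A sketch in Python (the report's routines are in VBA) of both directions: spectral radiance from temperature, and brightness temperature from a measured radiance.

    ```python
    import math

    H = 6.62607015e-34   # Planck constant, J s
    C = 2.99792458e8     # speed of light, m/s
    KB = 1.380649e-23    # Boltzmann constant, J/K

    def planck_radiance(wavelength_m, temp_k):
        """Blackbody spectral radiance, W / (m^2 sr m)."""
        a = 2.0 * H * C ** 2 / wavelength_m ** 5
        return a / math.expm1(H * C / (wavelength_m * KB * temp_k))

    def brightness_temperature(wavelength_m, radiance):
        """Invert Planck's law for temperature at one wavelength."""
        a = 2.0 * H * C ** 2 / wavelength_m ** 5
        return H * C / (wavelength_m * KB * math.log1p(a / radiance))

    lam = 0.9e-6  # 0.9 um, a typical radiation-thermometry band
    L = planck_radiance(lam, 1500.0)
    print(L, brightness_temperature(lam, L))  # recovers 1500 K
    ```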

  14. Optical absorption in degenerately doped semiconductors: Mott transition or Mahan excitons?

    NASA Astrophysics Data System (ADS)

    Schleife, André; Rödl, Claudia; Hannewald, Karsten; Bechstedt, Friedhelm

    2012-02-01

    In the exploration of material properties, parameter-free calculations are a modern, sophisticated complement to cutting-edge experimental techniques. Ab-initio calculations are now capable of providing a deep understanding of the interesting physics underlying the electronic structure and optical absorption, e.g., of the transparent conductive oxides. Due to electron doping, these materials are conductive even though they have wide fundamental band gaps. The degenerate electron gas in the lowest conduction-band states drastically modifies the Coulomb interaction between the electrons and, hence, the optical properties close to the absorption edge. We describe these effects by developing an ab-initio technique which captures also the Pauli blocking and the Fermi-edge singularity at the optical absorption onset, that occur in addition to quasiparticle and excitonic effects. We answer the question whether free carriers induce an excitonic Mott transition or trigger the evolution of Wannier-Mott excitons into Mahan excitons. The prototypical n-type zinc oxide is studied as an example.

  15. Input-output model for MACCS nuclear accident impacts estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Outkin, Alexander V.; Bixler, Nathan E.; Vargas, Vanessa N

    Since the original economic model for MACCS was developed, better quality economic data (as well as the tools to gather and process it) and better computational capabilities have become available. The update of the economic impacts component of the MACCS legacy model will provide improved estimates of business disruptions through the use of Input-Output based economic impact estimation. This paper presents an updated MACCS model, based on Input-Output methodology, in which economic impacts are calculated using the Regional Economic Accounting analysis tool (REAcct) created at Sandia National Laboratories. This new GDP-based model allows quick and consistent estimation of gross domestic product (GDP) losses due to nuclear power plant accidents. This paper outlines the steps taken to combine the REAcct Input-Output-based model with the MACCS code, describes the GDP loss calculation, and discusses the parameters and modeling assumptions necessary for the estimation of long-term effects of nuclear power plant accidents.
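
    The Input-Output mechanics reduce to the Leontief relation x = (I - A)⁻¹ d. The sketch below uses total-output loss as a simple proxy for the GDP impact of idling sectors in the affected region; the coefficient matrix and demand vector are invented for illustration and are not REAcct data.

    ```python
    import numpy as np

    def output_loss(A, demand, disrupted, outage_fraction):
        """Leontief input-output sketch of a regional disruption.

        A        : inter-industry technical-coefficients matrix
        demand   : final-demand vector by sector
        disrupted: indices of sectors idled by the accident
        outage_fraction: share of the year the region is lost
        """
        n = len(demand)
        x_base = np.linalg.solve(np.eye(n) - A, demand)  # baseline output
        d = demand.copy()
        d[disrupted] *= (1.0 - outage_fraction)          # lost final demand
        x_event = np.linalg.solve(np.eye(n) - A, d)
        return x_base.sum() - x_event.sum()

    # Three hypothetical sectors (all coefficients invented):
    A = np.array([[0.10, 0.20, 0.05],
                  [0.15, 0.05, 0.10],
                  [0.05, 0.10, 0.08]])
    demand = np.array([120.0, 80.0, 50.0])
    print(output_loss(A, demand, disrupted=[0, 1], outage_fraction=0.5))
    ```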

  16. Interior and exterior ballistics coupled optimization with constraints of attitude control and mechanical-thermal conditions

    NASA Astrophysics Data System (ADS)

    Liang, Xin-xin; Zhang, Nai-min; Zhang, Yan

    2016-07-01

    To improve solid launch vehicle performance, a modeling method for coupled interior and exterior ballistics optimization under attitude control and mechanical-thermal constraints is proposed. Firstly, the interior and exterior ballistic models of the solid launch vehicle are established, along with the attitude control model for the high-wind region and stage separation, the load calculation model for the drag reduction device, and the in-flight thermal condition calculation model. Secondly, an optimization model is established to maximize range, with interior and exterior ballistic design parameters selected by sensitivity analysis as variables, and attitude control and mechanical-thermal conditions as constraints. Finally, the method is applied to the optimal design of a three-stage solid launch vehicle in simulation with a differential evolution algorithm. Simulation results show that range capability is improved by 10.8%, and both the attitude control and mechanical-thermal constraints are satisfied.

  17. Numerical prediction of transitional features of turbulent forced gas flows in circular tubes with strong heating

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ezato, K.; Shehata, A.M.; Kunugi, T.

    1999-08-01

    In order to treat strongly heated, forced gas flows at low Reynolds numbers in vertical circular tubes, the κ-ε turbulence model of Abe, Kondoh, and Nagano (1994), developed for forced turbulent flow between parallel plates with the constant property idealization, has been successfully applied. For thermal energy transport, the turbulent Prandtl number model of Kays and Crawford (1993) was adopted. The capability to handle these flows was assessed via calculations at the conditions of experiments by Shehata (1984), ranging from essentially turbulent to laminarizing due to the heating. Predictions forecast the development of turbulent transport quantities, Reynolds stress, and turbulent heat flux, as well as turbulent viscosity and turbulent kinetic energy. Overall agreement between the calculations and the measured velocity and temperature distributions is good, establishing confidence in the values of the forecast turbulence quantities, and in the model which produced them. Most importantly, the model yields predictions which compare well with the measured wall heat transfer parameters and the pressure drop.

  18. Ab initio random structure searching of organic molecular solids: assessment and validation against experimental data.

    PubMed

    Zilka, Miri; Dudenko, Dmytro V; Hughes, Colan E; Williams, P Andrew; Sturniolo, Simone; Franks, W Trent; Pickard, Chris J; Yates, Jonathan R; Harris, Kenneth D M; Brown, Steven P

    2017-10-04

    This paper explores the capability of using the DFT-D ab initio random structure searching (AIRSS) method to generate crystal structures of organic molecular materials, focusing on a system (m-aminobenzoic acid; m-ABA) that is known from experimental studies to exhibit abundant polymorphism. Within the structural constraints selected for the AIRSS calculations (specifically, centrosymmetric structures with Z = 4 for zwitterionic m-ABA molecules), the method is shown to successfully generate the two known polymorphs of m-ABA (form III and form IV) that have these structural features. We highlight various issues that are encountered in comparing crystal structures generated by AIRSS to experimental powder X-ray diffraction (XRD) data and solid-state magic-angle spinning (MAS) NMR data, demonstrating successful fitting for some of the lowest energy structures from the AIRSS calculations against experimental low-temperature powder XRD data for known polymorphs of m-ABA, and showing that comparison of computed and experimental solid-state NMR parameters allows different hydrogen-bonding motifs to be discriminated.

  19. Intelligent Flow Friction Estimation.

    PubMed

    Brkić, Dejan; Ćojbašić, Žarko

    2016-01-01

    Nowadays, the Colebrook equation is the most widely accepted relation for calculating the fluid flow friction factor. However, the Colebrook equation is implicit with respect to the friction factor (λ). In the present study, a noniterative approach using an Artificial Neural Network (ANN) was developed to calculate the friction factor. To configure the ANN model, the input parameters of the Reynolds number (Re) and the relative roughness of the pipe (ε/D) were transformed to logarithmic scales. The 90,000 data sets were fed to an ANN model involving three layers: input, hidden, and output, with 2, 50, and 1 neurons, respectively. This configuration was capable of predicting the friction factor of the Colebrook equation for any given values of the Reynolds number (Re) and the relative roughness (ε/D) ranging between 5000 and 10^8 and between 10^-7 and 0.1, respectively. The proposed ANN demonstrates a relative error of up to 0.07%, which is highly accurate compared with the vast majority of precise explicit approximations of the Colebrook equation.
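
    For reference, the implicit equation the ANN replaces can be solved by simple fixed-point iteration on 1/√λ, which is what makes a noniterative approximation attractive in bulk calculations. A minimal sketch:

    ```python
    import math

    def colebrook(re, rel_rough, tol=1e-12):
        """Iterate the implicit Colebrook equation for friction factor.

        1/sqrt(lambda) = -2 log10(rel_rough/3.7 + 2.51/(re*sqrt(lambda)))
        """
        x = 1.0 / math.sqrt(0.02)  # x = 1/sqrt(lambda), initial guess
        while True:
            x_new = -2.0 * math.log10(rel_rough / 3.7 + 2.51 * x / re)
            if abs(x_new - x) < tol:
                return 1.0 / x_new ** 2
            x = x_new

    print(colebrook(re=1e5, rel_rough=1e-4))  # ~0.0185
    ```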

  20. First principles Peierls-Boltzmann phonon thermal transport: A topical review

    DOE PAGES

    Lindsay, Lucas

    2016-08-05

    The advent of coupled thermal transport calculations with interatomic forces derived from density functional theory has ushered in a new era of fundamental microscopic insight into lattice thermal conductivity. Subsequently, significant new understanding of phonon transport behavior has been developed with these methods, and because they are parameter free and successfully benchmarked against a variety of systems, they also provide reliable predictions of thermal transport in systems for which little is known. This topical review will describe the foundation from which first principles Peierls-Boltzmann transport equation methods have been developed, and briefly describe important necessary ingredients for accurate calculations. Sample highlights of reported work will be presented to illustrate the capabilities and challenges of these techniques, and to demonstrate the suite of tools available, with an emphasis on thermal transport in micro- and nano-scale systems. In conclusion, future challenges and opportunities will be discussed, drawing attention to prospects for methods development and applications.

  1. Influence of different dose calculation algorithms on the estimate of NTCP for lung complications.

    PubMed

    Hedin, Emma; Bäck, Anna

    2013-09-06

    Due to limitations and uncertainties in dose calculation algorithms, different algorithms can predict different dose distributions and dose-volume histograms for the same treatment. This can be a problem when estimating the normal tissue complication probability (NTCP) for patient-specific dose distributions. Published NTCP model parameters are often derived for a different dose calculation algorithm than the one used to calculate the actual dose distribution. The use of algorithm-specific NTCP model parameters can prevent errors caused by differences in dose calculation algorithms. The objective of this work was to determine how to change the NTCP model parameters for lung complications derived for a simple correction-based pencil beam dose calculation algorithm, in order to make them valid for three other common dose calculation algorithms. NTCP was calculated with the relative seriality (RS) and Lyman-Kutcher-Burman (LKB) models. The four dose calculation algorithms used were the pencil beam (PB) and collapsed cone (CC) algorithms employed by Oncentra, and the pencil beam convolution (PBC) and anisotropic analytical algorithm (AAA) employed by Eclipse. Original model parameters for lung complications were taken from four published studies on different grades of pneumonitis, and new algorithm-specific NTCP model parameters were determined. The difference between original and new model parameters was presented in relation to the reported model parameter uncertainties. Three different types of treatments were considered in the study: tangential and locoregional breast cancer treatment and lung cancer treatment. Changing the algorithm without the derivation of new model parameters caused changes in the NTCP value of up to 10 percentage points for the cases studied. Furthermore, the error introduced could be of the same magnitude as the confidence intervals of the calculated NTCP values. The new NTCP model parameters were tabulated as the algorithm was varied from PB to PBC, AAA, or CC. Moving from the PB to the PBC algorithm did not require new model parameters; however, moving from PB to AAA or CC did require a change in the NTCP model parameters, with CC requiring the largest change. It was shown that the new model parameters for a given algorithm are different for the different treatment types.

  2. An R-Shiny-Based Phenology Analysis System and Case Study Using a Digital Camera Dataset

    NASA Astrophysics Data System (ADS)

    Zhou, Y. K.

    2018-05-01

    Accurate extraction of vegetation phenology information plays an important role in exploring the effects of climate change on vegetation. Repeated photos from digital cameras are a useful and voluminous data source for phenological analysis, but processing and mining these data remains a challenge: there is no single tool or universal solution for big-data processing and visualization in the field of phenology extraction. In this paper, we propose an R-Shiny-based web application for extracting and analyzing vegetation phenological parameters. Its main functions include visualization of phenological site distributions, ROI (Region of Interest) selection, vegetation index calculation and visualization, data filtering, growth trajectory fitting, and phenology parameter extraction. As an example, the long-term observational photography data from the Freemanwood site in 2013 were processed by this system. The results show that: (1) the system is capable of analyzing large datasets using a distributed framework; (2) combining multiple parameter extraction and growth-curve fitting methods effectively extracts the key phenology parameters, although different method combinations disagree in some study areas. Vegetation with a single growth peak is well suited to fitting the growth trajectory with the double logistic model, while the spline method is better suited to vegetation with multiple growth peaks.
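
    The growth-trajectory step reduces to fitting a double logistic curve to the vegetation-index series. The system itself is R-Shiny-based; the sketch below illustrates the same fit in Python on a synthetic green-index series (functional form and all values assumed for illustration):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def double_logistic(t, base, amp, k1, t1, k2, t2):
        """Greenup and senescence as two opposing logistic transitions."""
        return base + amp * (1.0 / (1.0 + np.exp(-k1 * (t - t1)))
                             - 1.0 / (1.0 + np.exp(-k2 * (t - t2))))

    # Synthetic vegetation-index series over a year (illustrative):
    doy = np.arange(1, 366, 3.0)
    true = double_logistic(doy, 0.33, 0.12, 0.10, 120.0, 0.08, 280.0)
    rng = np.random.default_rng(1)
    obs = true + rng.normal(0.0, 0.005, doy.size)

    p0 = [0.3, 0.1, 0.1, 100.0, 0.1, 250.0]
    popt, _ = curve_fit(double_logistic, doy, obs, p0=p0)
    print(popt)  # season start/end read off from the fitted t1, t2
    ```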

  3. Sensor Fault Detection and Diagnosis Simulation of a Helicopter Engine in an Intelligent Control Framework

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan; Kurtkaya, Mehmet; Duyar, Ahmet

    1994-01-01

    This paper presents an application of a fault detection and diagnosis scheme for the sensor faults of a helicopter engine. The scheme utilizes a model-based approach with real time identification and hypothesis testing which can provide early detection, isolation, and diagnosis of failures. It is an integral part of a proposed intelligent control system with health monitoring capabilities. The intelligent control system will allow for accommodation of faults, reduce maintenance cost, and increase system availability. The scheme compares the measured outputs of the engine with the expected outputs of an engine whose sensor suite is functioning normally. If the differences between the real and expected outputs exceed threshold values, a fault is detected. The isolation of sensor failures is accomplished through a fault parameter isolation technique where parameters which model the faulty process are calculated on-line with a real-time multivariable parameter estimation algorithm. The fault parameters and their patterns can then be analyzed for diagnostic and accommodation purposes. The scheme is applied to the detection and diagnosis of sensor faults of a T700 turboshaft engine. Sensor failures are induced in a T700 nonlinear performance simulation and data obtained are used with the scheme to detect, isolate, and estimate the magnitude of the faults.

  4. LSST Operations Simulator

    NASA Astrophysics Data System (ADS)

    Cook, K. H.; Delgado, F.; Miller, M.; Saha, A.; Allsman, R.; Pinto, P.; Gee, P. A.

    2005-12-01

    We have developed an operations simulator for LSST and used it to explore design and operations parameter space for this large etendue telescope and its ten year survey mission. The design is modular, with separate science programs coded in separate modules. There is a sophisticated telescope module with all motions parametrized for ease of testing different telescope capabilities, e.g. effect of acceleration capabilities of various motors on science output. Sky brightness is calculated as a function of moon phase and separation. A sophisticated exposure time calculator has been developed for LSST which is being incorporated into the simulator to allow specification of S/N requirements. All important parameters for the telescope, the site and the science programs are easily accessible in configuration files. Seeing and cloud data from the three candidate LSST sites are used for our simulations. The simulator has two broad categories of science proposals: sky coverage and transient events. Sky coverage proposals base their observing priorities on a required number of observations for each field in a particular filter with specified conditions (maximum seeing, sky brightness, etc) and one is used for a weak lensing investigation. Transient proposals are highly configurable. A transient proposal can require sequential, multiple exposures in various filters with a specified sequence of filters, and require a particular cadence for multiple revisits to complete an observation sequence. Each science proposal ranks potential observations based upon the internal logic of that proposal. We present the results of a variety of mixed science program observing simulations, showing how varied programs can be carried out simultaneously, with many observations serving multiple science goals. The simulator has shown that LSST can carry out its multiple missions under a variety of conditions. KHC's work was performed under the auspices of the US DOE, NNSA by the Univ. of California, LLNL under contract No. W-7405-Eng-48.

  5. TBIEM3D: A Computer Program for Predicting Ducted Fan Engine Noise. Version 1.1

    NASA Technical Reports Server (NTRS)

    Dunn, M. H.

    1997-01-01

    This document describes the usage of the ducted fan noise prediction program TBIEM3D (Thin duct - Boundary Integral Equation Method - 3 Dimensional). A scattering approach is adopted in which the acoustic pressure field is split into known incident and unknown scattered parts. The scattering of fan-generated noise by a finite length circular cylinder in a uniform flow field is considered. The fan noise is modeled by a collection of spinning point thrust dipoles. The program, based on a Boundary Integral Equation Method (BIEM), calculates circumferential modal coefficients of the acoustic pressure at user-specified field locations. The duct interior can be of the hard wall type or lined. The duct liner is axisymmetric, locally reactive, and can be uniform or axially segmented. TBIEM3D is written in the FORTRAN programming language. Input to TBIEM3D is minimal and consists of geometric and kinematic parameters. Discretization and numerical parameters are determined automatically by the code. Several examples are presented to demonstrate TBIEM3D capabilities.

  6. Vision System for Coarsely Estimating Motion Parameters for Unknown Fast Moving Objects in Space

    PubMed Central

    Chen, Min; Hashimoto, Koichi

    2017-01-01

    Motivated by biological interest in analyzing the navigation behaviors of flying animals, we attempt to build a system measuring their motion states. To this end, in this paper, we build a vision system to detect unknown fast-moving objects within a given space and calculate their motion parameters, represented by positions and poses. We propose a novel method to detect reliable interest points in images of moving objects, which can hardly be detected by general-purpose interest point detectors. 3D points reconstructed using these interest points are then grouped and maintained for detected objects according to a careful schedule that considers appearance and perspective changes. In the estimation step, a method is introduced to adapt the robust estimation procedure used for dense point sets to the case of sparse sets, reducing the potential risk of greatly biased estimation. Experiments are conducted on real scenes, showing the capability of the system to detect multiple unknown moving objects and estimate their positions and poses. PMID:29206189

  7. Cryptographic robustness of practical quantum cryptography: BB84 key distribution protocol

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Molotkov, S. N.

    2008-07-15

    In real fiber-optic quantum cryptography systems, the avalanche photodiodes are not perfect, the source of quantum states is not a single-photon one, and the communication channel is lossy. For these reasons, key distribution is impossible under certain conditions for the system parameters. A simple analysis is performed to find relations between the parameters of real cryptography systems and the length of the quantum channel that guarantee secure quantum key distribution when the eavesdropper's capabilities are limited only by fundamental laws of quantum mechanics while the devices employed by the legitimate users are based on current technologies. Critical values are determined for the rate of secure real-time key generation that can be reached under the current technology level. Calculations show that the upper bound on channel length can be as high as 300 km for imperfect photodetectors (avalanche photodiodes) with present-day quantum efficiency (η ≈ 20%) and dark count probability (p_dark ≈ 10^-7).

  8. Cryptographic robustness of practical quantum cryptography: BB84 key distribution protocol

    NASA Astrophysics Data System (ADS)

    Molotkov, S. N.

    2008-07-01

    In real fiber-optic quantum cryptography systems, the avalanche photodiodes are not perfect, the source of quantum states is not a single-photon one, and the communication channel is lossy. For these reasons, key distribution is impossible under certain conditions for the system parameters. A simple analysis is performed to find relations between the parameters of real cryptography systems and the length of the quantum channel that guarantee secure quantum key distribution when the eavesdropper's capabilities are limited only by fundamental laws of quantum mechanics while the devices employed by the legitimate users are based on current technologies. Critical values are determined for the rate of secure real-time key generation that can be reached under the current technology level. Calculations show that the upper bound on channel length can be as high as 300 km for imperfect photodetectors (avalanche photodiodes) with present-day quantum efficiency (η ≈ 20%) and dark count probability (p_dark ≈ 10^-7).
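
    The order of magnitude of the channel-length bound can be reproduced with a back-of-envelope calculation: key distribution becomes impossible roughly where the detection rate of attenuated signal photons falls to the dark-count level. The sketch below is illustrative bookkeeping, not the paper's security analysis; the mean photon number and fiber loss coefficient are assumed typical values.

    ```python
    import math

    def max_channel_km(eta=0.20, p_dark=1e-7, mu=0.1, alpha_db_km=0.2,
                       snr_min=1.0):
        """Back-of-envelope BB84 channel-length bound.

        Solve mu * eta * 10**(-alpha*L/10) = snr_min * p_dark for L,
        i.e. the length at which signal clicks per pulse drop to the
        dark-count probability. All parameter values are illustrative.
        """
        return 10.0 / alpha_db_km * math.log10(mu * eta / (snr_min * p_dark))

    print(max_channel_km())  # a few hundred km with these parameters
    ```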

  9. Experimental and theoretical study of Co sorption in clay montmorillonites

    NASA Astrophysics Data System (ADS)

    Gil Rebaza, A. V.; Montes, M. L.; Taylor, M. A.; Errico, L. A.; Alonso, R. E.

    2018-03-01

    Montmorillonite (MMT) clays are 2:1 layered structures which in their natural state may accommodate different hydrated cations such as M-nH2O (M = Na, Ca, Fe, etc.) in the interlayer space. Depending on their capability for ion sorption, these materials are interesting for environmental remediation. In this work we experimentally study Co sorption in a natural Na-MMT using UV-visible spectrometry and XRD on semi-oriented samples, and then analyze the sorption ability of this clay by means of ab initio calculations performed on pristine MMT. The structural properties of Na-MMT and Co-adsorbed MMT, and the hyperfine parameters at different atomic sites, were analyzed; the structural results are compared with the experimental ones, while the hyperfine parameters are presented here for the first time. The theoretical predictions based on total energy considerations confirm that Co incorporation replacing Na is energetically favorable. Also, the experimentally obtained basal spacing d001 is well reproduced.

  10. Communication: Appearance of undershoots in start-up shear: Experimental findings captured by tumbling-snake dynamics

    NASA Astrophysics Data System (ADS)

    Stephanou, Pavlos S.; Schweizer, Thomas; Kröger, Martin

    2017-04-01

    Our experimental data unambiguously show (i) a damping behavior (the appearance of an undershoot following the overshoot) in the transient shear viscosity of a concentrated polymeric solution, and (ii) the absence of a corresponding behavior in the transient normal stress coefficients. Both trends are shown to be quantitatively captured by the bead-link chain kinetic theory for concentrated polymer solutions and entangled polymer melts proposed by Curtiss and Bird, supplemented by a non-constant link tension coefficient that we relate to the nematic order parameter. The observed phenomena are attributed to the tumbling behavior of the links, triggered by rotational fluctuations, on top of reptation. Using model parameters deduced from stationary data, we calculate the transient behavior of the stress tensor for this "tumbling-snake" model after startup of shear flow efficiently via simple Brownian dynamics. The unaltered method is capable of handling arbitrary homogeneous flows and has the promising capacity to improve our understanding of the transient behavior of concentrated polymer solutions.

  11. Interpreting sea surface slicks on the basis of the normalized radar cross-section model using RADARSAT-2 copolarization dual-channel SAR images

    NASA Astrophysics Data System (ADS)

    Ivonin, D. V.; Skrunes, S.; Brekke, C.; Ivanov, A. Yu.

    2016-03-01

    A simple automatic multipolarization technique for discrimination of main types of thin oil films (of thickness less than the radio wave skin depth) from natural ones is proposed. It is based on a new multipolarization parameter related to the ratio between the damping in the slick of specially normalized resonant and nonresonant signals calculated using the normalized radar cross-section model proposed by Kudryavtsev et al. (2003a). The technique is tested on RADARSAT-2 copolarization (VV/HH) synthetic aperture radar images of slicks of a priori known provenance (mineral oils, e.g., emulsion and crude oil, and plant oil served to model a natural slick) released during annual oil-on-water exercises in the North Sea in 2011 and 2012. It has been shown that the suggested multipolarization parameter gives new capabilities in interpreting slicks visible on synthetic aperture radar images while allowing discrimination between mineral oil and plant oil slicks.

  12. Design and characterization of planar capacitive imaging probe based on the measurement sensitivity distribution

    NASA Astrophysics Data System (ADS)

    Yin, X.; Chen, G.; Li, W.; Huthchins, D. A.

    2013-01-01

    Previous work indicated that the capacitive imaging (CI) technique is a useful NDE tool which can be used on a wide range of materials, including metals, glass/carbon fibre composite materials and concrete. The imaging performance of the CI technique for a given application is determined by design parameters and characteristics of the CI probe. In this paper, a rapid method for calculating the whole probe sensitivity distribution based on the finite element model (FEM) is presented to provide a direct view of the imaging capabilities of the planar CI probe. Sensitivity distributions of CI probes with different geometries were obtained. Influencing factors on sensitivity distribution were studied. Comparisons between CI probes with point-to-point triangular electrode pair and back-to-back triangular electrode pair were made based on the analysis of the corresponding sensitivity distributions. The results indicated that the sensitivity distribution could be useful for optimising the probe design parameters and predicting the imaging performance.

  13. Estimating the Depth of Stratigraphic Units from Marine Seismic Profiles Using Nonstationary Geostatistics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chihi, Hayet; Galli, Alain; Ravenne, Christian

    2000-03-15

    The object of this study is to build a three-dimensional (3D) geometric model of the stratigraphic units of the margin of the Rhone River on the basis of geophysical investigations by a network of seismic profiles at sea. The geometry of these units is described by depth charts of each surface identified by seismic profiling, which is done by geostatistics. The modeling starts with a statistical analysis by which we determine the parameters that enable us to calculate the variograms of the identified surfaces. After having determined the statistical parameters, we calculate the variograms of the variable Depth. By analyzing the behavior of the variogram we can then deduce whether the situation is stationary and whether the variable behaves anisotropically. We tried the following two nonstationary methods to obtain our estimates: (a) the method of universal kriging if the underlying variogram was directly accessible; (b) the method of increments if the underlying variogram was not directly accessible. After having modeled the variograms of the increments and of the variable itself, we calculated the surfaces by kriging the variable Depth on a small-mesh estimation grid. The two methods are then compared and their respective advantages and disadvantages discussed, as well as their fields of application. These methods can be widely used in the earth sciences for automatic mapping of geometric surfaces or of variables such as a piezometric surface or a concentration, which are not 'stationary,' that is, which essentially possess a gradient or a tendency to develop systematically in space.
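
    The stationarity diagnosis rests on the experimental variogram. A minimal sketch follows, on synthetic horizon picks with an imposed depth trend, so the variogram keeps growing with lag instead of reaching a sill, which is the signature that motivates universal kriging or the method of increments.

    ```python
    import numpy as np

    def empirical_variogram(coords, values, lags, tol):
        """Isotropic experimental variogram of a variable such as Depth.

        gamma(h) = mean of 0.5 * (z_i - z_j)^2 over pairs whose
        separation falls within tol of each lag h.
        """
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        sq = 0.5 * (values[:, None] - values[None, :]) ** 2
        iu = np.triu_indices(len(values), k=1)
        d, sq = d[iu], sq[iu]
        return np.array([sq[np.abs(d - h) < tol].mean() for h in lags])

    # Hypothetical picks of a seismic horizon (x, y in km; depth in m):
    rng = np.random.default_rng(2)
    xy = rng.uniform(0.0, 10.0, (200, 2))
    depth = 500.0 + 12.0 * xy[:, 0] + rng.normal(0.0, 5.0, 200)  # trend
    print(empirical_variogram(xy, depth, lags=[1, 2, 4, 6], tol=0.5))
    ```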

  14. Investigating SWOT's capabilities to detect meso and submesoscale eddies in the western Mediterranean

    NASA Astrophysics Data System (ADS)

    Gomez-Navarro, Laura; Pascual, Ananda; Fablet, Ronan; Mason, Evan

    2017-04-01

    The primary oceanographic objective of the future Surface Water Ocean Topography (SWOT) altimetric satellite is to characterize the mesoscale and submesoscale ocean circulation. The aim of this study is to assess the capability of SWOT to resolve the meso- and submesoscale in the western Mediterranean. With ROMS model data as input to the SWOT simulator, pseudo-SWOT data were generated. These data were compared with the original ROMS model data and with ADT data from present-day altimetric satellites to assess the temporal and spatial resolution of SWOT in the western Mediterranean. We then addressed the removal of the satellite's noise in the pseudo-SWOT data using a Laplacian diffusion. We investigated different parameters of the filter by looking at their impact on the spatial spectra and RMSEs calculated from the simulator outputs. To further assess the satellite's capabilities, we derived absolute geostrophic velocities and relative vorticity. Our numerical experiments show that the noise patterns affect the spectral content of the pseudo-SWOT fields below 30 km. The Laplacian diffusion improves the recovery of the spectral signature of the altimetric field, especially down to 20 km. With the help of this filter, we are able to observe small-scale oceanic features in pseudo-SWOT data and in its derived variables.
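
    The noise-removal step can be sketched as explicit Laplacian diffusion iterated over the sea-surface-height field; the iteration count and diffusion coefficient below are illustrative, not the tuned values of the study.

    ```python
    import numpy as np

    def laplacian_diffusion(field, n_iter=100, kappa=0.2):
        """Smooth a 2-D SSH field by iterating explicit Laplacian diffusion.

        kappa <= 0.25 keeps the explicit scheme stable; n_iter controls
        how strongly small-scale noise is suppressed. Periodic boundaries
        (np.roll) are used here for simplicity.
        """
        f = field.astype(float).copy()
        for _ in range(n_iter):
            lap = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                   np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f)
            f += kappa * lap
        return f

    # Synthetic eddy plus instrument-like noise:
    y, x = np.mgrid[0:128, 0:128]
    ssh = 0.1 * np.exp(-((x - 64) ** 2 + (y - 64) ** 2) / 300.0)
    noisy = ssh + np.random.default_rng(3).normal(0.0, 0.02, ssh.shape)
    print(np.std(noisy - ssh), np.std(laplacian_diffusion(noisy) - ssh))
    ```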

  15. Clinical Parameters and Tools for Home-Based Assessment of Parkinson's Disease: Results from a Delphi study.

    PubMed

    Ferreira, Joaquim J; Santos, Ana T; Domingos, Josefa; Matthews, Helen; Isaacs, Tom; Duffen, Joy; Al-Jawad, Ahmed; Larsen, Frank; Artur Serrano, J; Weber, Peter; Thoms, Andrea; Sollinger, Stefan; Graessner, Holm; Maetzler, Walter

    2015-01-01

    Parkinson's disease (PD) is a neurodegenerative disorder with fluctuating symptoms. To aid the development of a system to evaluate people with PD (PwP) at home (the SENSE-PARK system), there was a need to define parameters and tools to be applied in the assessment of 6 domains: gait, bradykinesia/hypokinesia, tremor, sleep, balance and cognition. The aim was to identify relevant parameters and assessment tools for the 6 domains from the perspective of PwP, caregivers and movement disorders specialists. A 2-round Delphi study was conducted to select a core set of parameters and assessment tools to be applied. This process included PwP, caregivers and movement disorders specialists. Two hundred and thirty-three PwP, caregivers and physicians completed the first-round questionnaire, and 50 completed the second. The results allowed the identification of parameters and assessment tools to be added to the SENSE-PARK system. The most consensual parameters were: Falls and Near Falls; Capability to Perform Activities of Daily Living; Interference with Activities of Daily Living; Capability to Process Tasks; and Capability to Recall and Retrieve Information. The most cited assessment strategies included Walkers; the Evaluation of Performance Doing Fine Motor Movements; Capability to Eat; Assessment of Sleep Quality; Identification of Circumstances and Triggers for Loss of Balance; and Memory Assessment. An agreed set of measuring parameters, tests, tools and devices was achieved to form part of a system to evaluate PwP at home. A pattern of different perspectives was identified for each stakeholder.

  16. Quantum Chemical Calculations of Torsionally Mediated Hyperfine Splittings in States of E Symmetry of Acetaldehyde (CH_{3}CHO)

    NASA Astrophysics Data System (ADS)

    Xu, Li-Hong; Reid, Elias M.; Guislain, Bradley; Hougen, Jon T.; Alekseev, E. A.; Krapivin, Igor

    2017-06-01

    Hyperfine splittings in methanol have been revisited in three recent publications. (i) Coudert et al. [JCP 143 (2015) 044304] published an analysis of splittings observed in the low-J range. They calculated 32 spin-rotation, 32 spin-spin, and 16 spin-torsion hyperfine constants using the ACES2 package. Three of these constants were adjusted to fit hyperfine patterns for 12 transitions. (ii) Three present authors and collaborators [JCP 145 (2016) 024307] analyzed medium to high-J experimental Lamb-dip measurements in methanol and presented a theoretical spin-rotation explanation that was based on torsionally mediated spin-rotation hyperfine operators. These contain, in addition to the usual nuclear spin and overall rotational operators, factors in the torsional angle α of the form e^{±inα}. Such operators have non-zero matrix elements between the two components of a torsion-rotation ^{tr}E state, but have zero matrix elements within a ^{tr}A state. More than 55 hyperfine splittings were successfully fitted using three parameters, and the fitted values agree well with ab initio values obtained in (i). (iii) Lankhaar et al. [JCP 145 (2016) 244301] published a reanalysis of the data set from (i), using CFOUR-recalculated hyperfine constants based on their rederivation of the relevant expressions. They explain why their choice of fixed and floated parameters leads to numerical values for all parameters that seem to be more physical than those in (i). The results in (ii) raise the question of whether large torsionally mediated spin-rotation splittings will occur in other methyl-rotor-containing molecules. This abstract presents ab initio calculations of torsionally mediated hyperfine splittings in the E states of acetaldehyde using the same three operators as in (ii) and spin-rotation constants computed by Gaussian09. We explored the first 13 K states for J from 10 to 40 and ν_{t} = 0, 1, and 2. Our calculations indicate that hyperfine splittings in CH_{3}CHO are just below current measurement capability. This conclusion is confirmed by available experimental measurements.

  17. Creation and Characterization of an Ultrasound and CT Phantom for Non-invasive Ultrasound Thermometry Calibration

    PubMed Central

    Lai, Chun-Yen; Kruse, Dustin E.; Ferrara, Katherine W.; Caskey, Charles F.

    2014-01-01

    Ultrasound thermometry provides noninvasive two-dimensional (2-D) temperature monitoring, and in this paper, we have investigated the use of computed tomography (CT) radiodensity to characterize tissues to improve the accuracy of ultrasound thermometry. Agarose-based tissue-mimicking phantoms were created with glyceryl trioleate (a fat-mimicking material) concentration of 0, 10, 20, 30, 40, and 50%. The speed of sound (SOS) of the phantoms was measured over a temperature range of 22.1–41.1°C. CT images of the phantoms were acquired by a clinical dedicated breast CT scanner, followed by calculation of the Hounsfield units (HU). The phantom was heated with a therapeutic acoustic pulse (1.54 MHz), while RF data were acquired with a 10-MHz linear-array transducer. 2-D speckle tracking was used to calculate the thermal strain offline. The tissue dependent thermal strain parameter required for ultrasound thermometry was analyzed and correlated with CT radiodensity, followed by validation of the temperature prediction. Results showed that the change in SOS with the temperature increase was opposite in sign between the 0–10% and 20–50% trioleate phantoms. The inverse of the tissue dependent thermal strain parameter of the phantoms was correlated with the CT radiodensity (R2 = 0.99). A blinded ultrasound thermometry study on phantoms with a trioleate range of 5–35% demonstrated the capability to estimate the tissue dependent thermal strain parameter and estimate temperature with error less than ~1°C. In conclusion, CT radiodensity may provide a method for improving ultrasound thermometry in heterogeneous tissues. PMID:24107918
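
    To make the two-step logic concrete: the abstract reports a linear correlation between CT radiodensity and the inverse of the tissue-dependent thermal strain parameter k, and the temperature rise is then recovered from the measured thermal strain as dT = strain / k. The sketch below wires these steps together; the coefficients a and b are hypothetical placeholders, not the fitted values from the study.

    ```python
    # Hypothetical linear fit: 1/k = a * HU + b  (units of degC per unit strain).
    a, b = 2.0, 1100.0

    def temperature_rise(thermal_strain, hounsfield_units):
        """Estimate dT from speckle-tracked thermal strain and CT radiodensity."""
        inv_k = a * hounsfield_units + b
        return thermal_strain * inv_k

    # Example: 0.2% apparent strain in a fat-like region (HU ~ -50).
    print(f"{temperature_rise(2e-3, -50.0):.2f} degC")
    ```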

  18. A Thermodynamic Approach for Modeling H2O-CO2 Solubility in Alkali-rich Mafic Magmas at Mid-crustal Pressures

    NASA Astrophysics Data System (ADS)

    Allison, C. M.; Roggensack, K.; Clarke, A. B.

    2017-12-01

    Volatile solubility in magmas is dependent on several factors, including composition and pressure. Mafic (basaltic) magmas with high concentrations of alkali elements (Na and K) are capable of dissolving larger quantities of H2O and CO2 than low-alkali basalt. The exsolution of abundant gases dissolved in alkali-rich mafic magmas can contribute to large explosive eruptions. Existing volatile solubility models for alkali-rich mafic magmas are well calibrated below 200 MPa, but at greater pressures the experimental data is sparse. To allow for accurate interpretation of mafic magmatic systems at higher pressures, we conducted a set of mixed H2O-CO2 volatile solubility experiments between 400 and 600 MPa at 1200 °C in six mafic compositions with variable alkali contents. Compositions include magmas from volcanoes in Italy, Antarctica, and Arizona. Results from our experiments indicate that existing volatile solubility models for alkali-rich mafic magmas, if extrapolated beyond their calibrated range, over-predict CO2 solubility at mid-crustal pressures. Physically, these results suggest that volatile exsolution can occur at deeper levels than what can be resolved from the lower-pressure experimental data. Existing thermodynamic models used to calculate volatile solubility at different pressures require two experimentally derived parameters. These parameters represent the partial molar volume of the condensed volatile species in the melt and its equilibrium constant, both calculated at a standard temperature and pressure. We derived these parameters for each studied composition and the corresponding thermodynamic model shows good agreement with the CO2 solubility data of the experiments. A general alkali basalt solubility model was also constructed by establishing a relationship between magma composition and the thermodynamic parameters. We utilize cation fractions from our six compositions along with four compositions from the experimental literature in a linear regression to generate this compositional relationship. Our revised general model provides a new framework to interpret volcanic data, yielding greater depths for melt inclusion entrapment than previously calculated using other models, and it can be applied to mafic magma compositions for which no experimental data is available.
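
    For orientation, thermodynamic solubility models of this type typically tie the pressure dependence of the dissolution equilibrium constant to the partial molar volume of the dissolved species. A standard form, given here as background rather than as the paper's exact expression, is

    $$ \ln K(P,T) \;=\; \ln K_0 \;-\; \frac{\Delta V^0\,(P - P_0)}{R\,T}, $$

    where $K_0$ is the equilibrium constant at the reference pressure $P_0$ and temperature $T_0$, $\Delta V^0$ is the partial molar volume of the condensed volatile species in the melt, and $R$ is the gas constant; these correspond to the two experimentally derived parameters mentioned above.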

  19. Spectroscopic detection, characterization and dynamics of free radicals relevant to combustion processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Terry

    2015-06-04

    Combustion chemistry is enormously complex. The chemical mechanisms involve a multitude of elementary reaction steps and a comparable number of reactive intermediates, many of which are free radicals. Computer simulations based upon these mechanisms are limited by the validity of the mechanisms and the parameters characterizing the properties of the intermediates and their reactivity. Spectroscopy can provide data for sensitive and selective diagnostics to follow their reactions. Spectroscopic analysis also provides detailed parameters characterizing the properties of these intermediates. These parameters serve as experimental gold standards to benchmark predictions of these properties from large-scale electronic structure calculations. This work has demonstrated the unique capabilities of near-infrared cavity ringdown spectroscopy (NIR CRDS) to identify, characterize and monitor intermediates of key importance in complex chemical reactions. Our studies have focussed on the large family of organic peroxy radicals, which are arguably the most important intermediates in combustion chemistry and many other reactions involving the oxidation of organic compounds. Our spectroscopic studies have shown that the NIR Ã-X̃ electronic spectra of the peroxy radicals allow one to differentiate among chemical species in the organic peroxy family and also to determine their isomeric and conformeric structure in many cases. We have clearly demonstrated this capability on saturated and unsaturated peroxy radicals and β-hydroxy peroxy radicals. In addition, we have developed a unique dual-wavelength CRDS apparatus specifically for the purpose of measuring absolute absorption cross sections and following the reactions of chemical intermediates. The utility of the apparatus has been demonstrated by measuring the cross section and self-reaction rate constant for ethyl peroxy.

  20. Calculations of High-Temperature Jet Flow Using Hybrid Reynolds-Average Navier-Stokes Formulations

    NASA Technical Reports Server (NTRS)

    Abdol-Hamid, Khaled S.; Elmiligui, Alaa; Giriamaji, Sharath S.

    2008-01-01

    Two multiscale-type turbulence models are implemented in the PAB3D solver. The models are based on modifying the Reynolds-averaged Navier-Stokes equations. The first scheme is a hybrid Reynolds-averaged Navier-Stokes/large-eddy-simulation model using the two-equation k-epsilon model with a Reynolds-averaged Navier-Stokes/large-eddy-simulation transition function dependent on grid spacing and the computed turbulence length scale. The second scheme is a modified version of the partially averaged Navier-Stokes model in which the unresolved kinetic energy parameter f_k is allowed to vary as a function of grid spacing and the turbulence length scale. This parameter is estimated based on a novel two-stage procedure to efficiently estimate the level of scale resolution possible for a given flow on a given grid for partially averaged Navier-Stokes. It has been found that the prescribed scale resolution can play a major role in obtaining accurate flow solutions. The parameter f_k varies between zero and one and is equal to one in the viscous sublayer and when the Reynolds-averaged Navier-Stokes turbulent viscosity becomes smaller than the large-eddy-simulation viscosity. The formulation, usage methodology, and validation examples are presented to demonstrate the enhancement of PAB3D's time-accurate turbulence modeling capabilities. The accurate simulations of flow and turbulent quantities will provide a valuable tool for accurate jet noise predictions. Solutions from these models are compared with Reynolds-averaged Navier-Stokes results and experimental data for high-temperature jet flows. The current results show promise for the capability of hybrid Reynolds-averaged Navier-Stokes/large-eddy simulation and partially averaged Navier-Stokes in simulating such flow phenomena.

  1. Using continuous underway isotope measurements to map water residence time in hydrodynamically complex tidal environments

    USGS Publications Warehouse

    Downing, Bryan D.; Bergamaschi, Brian; Kendall, Carol; Kraus, Tamara; Dennis, Kate J.; Carter, Jeffery A.; von Dessonneck, Travis

    2016-01-01

    Stable isotopes present in water (δ2H, δ18O) have been used extensively to evaluate hydrological processes on the basis of parameters such as evaporation, precipitation, mixing, and residence time. In estuarine aquatic habitats, residence time (τ) is a major driver of biogeochemical processes, affecting trophic subsidies and conditions in fish-spawning habitats. But τ is highly variable in estuaries, owing to constant changes in river inflows, tides, wind, and water height, all of which combine to affect τ in unpredictable ways. It recently became feasible to measure δ2H and δ18O continuously, at a high sampling frequency (1 Hz), using diffusion sample introduction into a cavity ring-down spectrometer. To better understand the relationship of τ to biogeochemical processes in a dynamic estuarine system, we continuously measured δ2H and δ18O, nitrate and water quality parameters, on board a small, high-speed boat (5 to >10 m s–1) fitted with a hull-mounted underwater intake. We then calculated τ as is classically done using the isotopic signals of evaporation. The result was high-resolution (∼10 m) maps of residence time, nitrate, and other parameters that showed strong spatial gradients corresponding to geomorphic attributes of the different channels in the area. The mean measured value of τ was 30.5 d, with a range of 0–50 d. We used the measured spatial gradients in both τ and nitrate to calculate whole-ecosystem uptake rates, and the values ranged from 0.006 to 0.039 d–1. The capability to measure residence time over single tidal cycles in estuaries will be useful for evaluating and further understanding drivers of phytoplankton abundance, resolving differences attributable to mixing and water sources, explicitly calculating biogeochemical rates, and exploring the complex linkages among time-dependent biogeochemical processes in hydrodynamically complex environments such as estuaries.
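
    As a minimal sketch of the uptake-rate calculation described above: for first-order loss, nitrate declines as N(tau) = N0 * exp(-k * tau), so k falls out of a regression of log nitrate against residence time. The numbers below are synthetic, chosen only to land inside the reported 0.006–0.039 d–1 range.

    ```python
    import numpy as np

    # Synthetic along-channel samples: residence time (d) and nitrate (uM).
    tau = np.array([5.0, 10.0, 20.0, 30.0, 40.0])
    no3 = 20.0 * np.exp(-0.02 * tau)   # hypothetical first-order decline

    # Slope of ln(NO3) vs tau gives -k for first-order uptake.
    slope, intercept = np.polyfit(tau, np.log(no3), 1)
    print(f"whole-ecosystem uptake rate k = {-slope:.3f} 1/d")
    ```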

  2. Validation of SMAP Radar Vegetation Data Cubes from Agricultural Field Measurements

    NASA Astrophysics Data System (ADS)

    Tsang, L.; Xu, X.; Liao, T.; Kim, S.; Njoku, E. G.

    2012-12-01

    The NASA Soil Moisture Active/Passive (SMAP) Mission will be launched in October 2014. The objective of the SMAP mission is to provide global measurements of soil moisture and its freeze/thaw state. These measurements will be used to enhance understanding of processes that link the water, energy and carbon cycles, and to extend the capabilities of weather and climate prediction models. In the active algorithm, the retrieval is performed based on the backscattering data cube, which is characterized by two surface parameters (soil moisture and soil surface rms height) and one vegetation parameter (the vegetation water content). We have developed a physics-based forward scattering model to generate the data cube for agricultural fields. To represent the agricultural crops, we include a layer of cylinders and disks on top of the rough surface. The scattering cross section of the vegetation layer and its interaction with the underlying soil surface were calculated by the distorted Born approximation, which gives explicitly three scattering mechanisms: (a) direct volume scattering, (b) the double-bounce interaction between the vegetation layer and the rough surface, and (c) direct rough-surface scattering. The direct volume scattering is calculated by using the Body of Revolution code. The double-bounce effects, exhibited by the interaction of the rough surface with the vegetation layer, are considered by modifying the rough surface reflectivity using the coherent wave as computed by the Numerical solution of Maxwell equations of 3-Dimensional simulations (NMM3D) of bare soil scattering. The rough surface scattering of the soil was also calculated by NMM3D. We have compared the physical scattering models with field measurements. In the field campaign, measurements were made of soil moisture, rough surface rms heights and vegetation water content, as well as geometric parameters of the vegetation. The three main vegetation types are grassland, cornfield and soybean field. The corresponding data cubes are validated using the SGP99, SMEX02 and SMEX08 field experiments.

  3. Challenges in computational evaluation of redox and magnetic properties of Fe-based sulfate cathode materials of Li- and Na-ion batteries

    NASA Astrophysics Data System (ADS)

    Shishkin, Maxim; Sato, Hirofumi

    2017-06-01

    Several Fe-based sulfates have been proposed recently as cathode materials characterized by a high average operating voltage (i.e. Li2Fe(SO4)2 and Na2Fe2(SO4)3) or low fabrication temperature (e.g. Na2Fe(SO4)2·2H2O). In this work, we apply three methods to evaluate the redox potentials and magnetic properties of these materials: (1) local density functional theory (DFT) in the Perdew-Burke-Ernzerhof parametrization; (2) rotationally invariant DFT+U and (3) DFT+U with magnetic exchange, suggested herein. The U parameters used for DFT+U calculations have been evaluated by using a linear response method (this applies to DFT+U as well as DFT+U calculations with a magnetic exchange term). Moreover, we have performed adjustments of U and, for the case of magnetic exchange, J parameters, to find better agreement with experimental measurements of redox and magnetic properties. We find that a self-consistent DFT+U/linear response approach yields quite overestimated redox potentials as compared to experiment. On the other hand, we also show that DFT+U calculations are not capable of providing a reasonably accurate description of both redox and magnetic properties for the case of Li2Fe(SO4)2, even when adjusted U parameters are employed. As a solution, we demonstrate that a DFT+U methodology augmented by a magnetic exchange term potentially provides more precise values for both the redox potentials and the magnetic moments of the Fe ions in the studied materials. Thus our work shows that for a more accurate description of redox and magnetic properties, further extensions of the DFT+U method, such as inclusion of the contribution of magnetic exchange, should be considered.

  4. Optimizing Support Vector Machine Parameters with Genetic Algorithm for Credit Risk Assessment

    NASA Astrophysics Data System (ADS)

    Manurung, Jonson; Mawengkang, Herman; Zamzami, Elviawaty

    2017-12-01

    Support vector machine (SVM) is a popular classification method known to have strong generalization capabilities. SVM can solve classification and regression problems with linear or nonlinear kernels. However, SVM also has a weakness: it is difficult to determine the optimal parameter values. SVM calculates the best linear separator on the input feature space according to the training data. To classify data which are non-linearly separable, SVM uses kernel tricks to transform the data into linearly separable data in a higher-dimensional feature space. The kernel trick uses various kernel functions, such as linear, polynomial, radial basis function (RBF) and sigmoid. Each function has parameters which affect the accuracy of SVM classification. To solve this problem, genetic algorithms are proposed as the search algorithm for optimal parameter values, thus increasing the best classification accuracy of SVM. Data were taken from the UCI repository of machine learning databases: Australian Credit Approval. The results show that the combination of SVM and genetic algorithms is effective in improving classification accuracy; see the sketch below. Genetic algorithms have been shown to be effective in systematically finding optimal kernel parameters for SVM, instead of randomly selected kernel parameters. The best accuracy was improved over the baseline kernels (linear: 85.12%, polynomial: 81.76%, RBF: 77.22%, sigmoid: 78.70%). However, for bigger data sizes, this method is not practical because it takes a lot of time.
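
    A minimal sketch of the GA-over-SVM idea follows: a small population of (log10 C, log10 gamma) chromosomes is evolved against the cross-validated accuracy of an RBF SVM. The population size, truncation selection, Gaussian mutation, search ranges and synthetic data (standing in for the Australian Credit Approval set) are all illustrative choices, not the paper's configuration.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X, y = make_classification(n_samples=300, n_features=14, random_state=0)

    def fitness(ind):
        C, gamma = 10.0 ** ind            # chromosome stores log10 values
        return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

    lo, hi = np.array([-1.0, -4.0]), np.array([3.0, 0.0])  # log10 search box
    pop = rng.uniform(lo, hi, size=(12, 2))
    for _ in range(10):                                    # generations
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-6:]]             # keep the best half
        pop = parents[rng.integers(0, 6, 12)] + 0.1 * rng.standard_normal((12, 2))
        pop = np.clip(pop, lo, hi)                         # mutate and clamp
    best = max(pop, key=fitness)
    print("best (C, gamma):", 10.0 ** best, " accuracy:", round(fitness(best), 4))
    ```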

  5. Characterization of human passive muscles for impact loads using genetic algorithm and inverse finite element methods.

    PubMed

    Chawla, A; Mukherjee, S; Karthikeyan, B

    2009-02-01

    The objective of this study is to identify the dynamic material properties of human passive muscle tissues for the strain rates relevant to automobile crashes. A novel methodology involving a genetic algorithm (GA) and the finite element method is implemented to estimate the material parameters by inverse mapping the impact test data. Isolated unconfined impact tests for average strain rates ranging from 136 s⁻¹ to 262 s⁻¹ are performed on muscle tissues. Passive muscle tissues are modelled as an isotropic, linear and viscoelastic material using the three-element Zener model available in the PAMCRASH(TM) explicit finite element software. In the GA-based identification process, fitness values are calculated by comparing the estimated finite element forces with the measured experimental forces. Linear viscoelastic material parameters (bulk modulus, short-term shear modulus and long-term shear modulus) are thus identified at strain rates of 136 s⁻¹, 183 s⁻¹ and 262 s⁻¹ for modelling muscles. Extracted optimal parameters from this study are comparable with reported parameters in the literature. Bulk modulus and short-term shear modulus are found to be more influential in predicting the stress-strain response than long-term shear modulus for the considered strain rates. Variations within the set of parameters identified at different strain rates indicate the need for a new or improved material model, capable of capturing the strain-rate dependency of passive muscle response with a single set of material parameters over a wide range of strain rates.

  6. Quantum-chemical study of the effect of ligands on the structure and properties of gold clusters

    NASA Astrophysics Data System (ADS)

    Golosnaya, M. N.; Pichugina, D. A.; Oleinichenko, A. V.; Kuz'menko, N. E.

    2017-02-01

    The structures of [Au4(dpmp)2X2]2+ clusters, where X = -C≡CH, -CH3, -SCH3, -F, -Cl, -Br, -I, and dpmp is bis((diphenylphosphino)methyl)(phenyl)phosphine, are calculated at the level of density functional theory with the PBE functional and a modified Dirac-Coulomb-Breit Hamiltonian in an all-electron basis set (Λ). Using the example of [Au4(dpmp)2(C≡CC6H5)2]2+, the interatomic distances and bond angles calculated by means of PBE0/LANL2DZ, TPSS/LANL2DZ, TPSSh/LANL2DZ, and PBE/Λ are compared to X-ray crystallography data. It is shown that PBE/Λ yields the most accurate calculation of the geometrical parameters of this cluster. The ligand effect on the electronic stability of a cluster and its stability in reactions of decomposition into different fragments is studied, along with the capability of ligand exchange. Stability is predicted for [Au4(dpmp)2F2]2+ and [Au4(dpmp)2(SCH3)2]2+, while the [Au4(dpmp)2I2]2+ cluster is unstable and is expected to decompose into two identical fragments.

  7. Enhanced angular overlap model for nonmetallic f -electron systems

    NASA Astrophysics Data System (ADS)

    Gajek, Z.

    2005-07-01

    An efficient method of interpretation of the crystal field effect in nonmetallic f-electron systems, the enhanced angular overlap model (EAOM), is presented. The method is established on the ground of a perturbation expansion of the effective Hamiltonian for localized electrons and first-principles calculations related to available experimental data. The series of actinide compounds AO2, oxychalcogenides AOX, and dichalcogenides UX2, where X = S, Se, Te and A = U, Np, serve as probes of the effectiveness of the proposed method. The idea is to enhance the usual angular overlap model with ab initio calculations of those contributions to the crystal field potential which cannot be represented by the usual angular overlap model (AOM). The enhancement leads to an improved fitting and makes the approach intrinsically coherent. In addition, the ab initio calculations of the main, AOM-consistent part of the crystal field potential allow one to fix the material-specific relations for the EAOM parameters in the effective Hamiltonian. Consequently, the electronic structure interpretation based on EAOM can be extended to systems of the lowest point symmetries or/and with deficient experimental data. Several examples illustrating the promising capabilities of EAOM are given.

  8. Application of the MCNPX-McStas interface for shielding calculations and guide design at ESS

    NASA Astrophysics Data System (ADS)

    Klinkby, E. B.; Knudsen, E. B.; Willendrup, P. K.; Lauritzen, B.; Nonbøl, E.; Bentley, P.; Filges, U.

    2014-07-01

    Recently, an interface between the Monte Carlo code MCNPX and the neutron ray-tracing code McStas was developed [1, 2]. Based on the expected neutronic performance and guide geometries relevant for the ESS, the combined MCNPX-McStas code is used to calculate dose rates along neutron beam guides. The generation and moderation of neutrons is simulated using a full-scale MCNPX model of the ESS target monolith. Upon entering the neutron beam extraction region, the individual neutron states are handed to McStas via the MCNPX-McStas interface. McStas transports the neutrons through the beam guide, and by using newly developed event-logging capability, the neutron state parameters corresponding to un-reflected neutrons are recorded at each scattering. This information is handed back to MCNPX, where it serves as neutron source input for a second MCNPX simulation. This simulation enables calculation of dose rates in the vicinity of the guide. In addition, the logging mechanism is employed to record the scatterings along the guides, which is exploited to simulate the supermirror quality requirements (i.e. m-values) needed at different positions along the beam guide to transport neutrons in the same guide/source setup.

  9. A graphical approach to radio frequency quadrupole design

    NASA Astrophysics Data System (ADS)

    Turemen, G.; Unel, G.; Yasatekin, B.

    2015-07-01

    The design of a radio frequency quadrupole, an important section of all ion accelerators, and the calculation of its beam dynamics properties can be achieved using the existing computational tools. These programs, originally designed in the 1980s, show effects of aging in their user interfaces and in their output. The authors believe there is room for improvement both in design techniques using a graphical approach and in the amount of analytical calculation done before going into CPU-burning finite element analysis techniques. Additionally, an emphasis on the graphical method of controlling the evolution of the relevant parameters using the drag-to-change paradigm is bound to be beneficial to the designer. A computer code, named DEMIRCI, has been written in C++ to demonstrate these ideas. This tool has been used in the design of the Turkish Atomic Energy Authority (TAEK)'s 1.5 MeV proton beamline at Saraykoy Nuclear Research and Training Center (SANAEM). DEMIRCI starts with a simple analytical model, calculates the RFQ behavior and produces 3D design files that can be fed to a milling machine. The paper discusses the experience gained during the design process of the SANAEM Project Prometheus (SPP) RFQ and underlines some of DEMIRCI's capabilities.

  10. DelPhiPKa web server: predicting pKa of proteins, RNAs and DNAs.

    PubMed

    Wang, Lin; Zhang, Min; Alexov, Emil

    2016-02-15

    A new pKa prediction web server is released, which implements the DelPhi Gaussian dielectric function to calculate electrostatic potentials generated by charges of biomolecules. Topology parameters are extended to include atomic information of nucleotides of RNA and DNA, which extends the capability of pKa calculations beyond proteins. The web server allows the end-user to protonate the biomolecule at a particular pH based on calculated pKa values and provides a downloadable file in PQR format. Several tests are performed to benchmark the accuracy and speed of the protocol. The web server follows a client-server architecture built on PHP and HTML and utilizes the DelPhiPKa program. The computation is performed on the Palmetto supercomputer cluster and results/download links are given back to the end-user via the http protocol. The web server takes advantage of the MPI parallel implementation in DelPhiPKa and can run a single job on up to 24 CPUs. The DelPhiPKa web server is available at http://compbio.clemson.edu/pka_webserver.

  11. Performance upgrades to the MCNP6 burnup capability for large scale depletion calculations

    DOE PAGES

    Fensin, M. L.; Galloway, J. D.; James, M. R.

    2015-04-11

    The first MCNP-based inline Monte Carlo depletion capability was officially released from the Radiation Safety Information and Computational Center as MCNPX 2.6.0. With the merger of MCNPX and MCNP5, MCNP6 combined the capability of both simulation tools, as well as providing new advanced technology, in a single radiation transport code. The new MCNP6 depletion capability was first showcased at the International Congress for Advancements in Nuclear Power Plants (ICAPP) meeting in 2012. At that conference the new capabilities addressed included the combined distributive and shared memory parallel architecture for the burnup capability, improved memory management, physics enhancements, and new predictability as compared to the H.B. Robinson benchmark. At Los Alamos National Laboratory, a special-purpose cluster named "tebow" was constructed to maximize available RAM per CPU, as well as to leverage swap space with solid-state hard drives, allowing larger-scale depletion calculations (with significantly more burnable regions than previously examined). As the MCNP6 burnup capability was scaled to larger numbers of burnable regions, a noticeable slowdown was realized. This paper details two specific computational performance strategies for improving calculation speed: (1) retrieving cross sections during transport; and (2) tallying mechanisms specific to burnup in MCNP. To combat the slowdown, these performance upgrades were developed and integrated into MCNP6 1.2.

  12. Comparative assessment of five water infiltration models into the soil

    NASA Astrophysics Data System (ADS)

    Shahsavaramir, M.

    2009-04-01

    Knowledge of soil hydraulic conditions, particularly soil permeability, is an important issue in hydrological and climatic studies. Because of its high spatial and temporal variability, a soil infiltration monitoring scheme was investigated in view of its application in infiltration modelling. Several models of infiltration into the soil have been developed; in this paper, we describe and assess the capability of five of them, with the aim of selecting the best model (a fitting sketch follows this abstract). First, we designed a program in Quick Basic and coded the algorithms of five models: Kostiakov, Modified Kostiakov, Philip, S.C.S and Horton. We then supplied measured infiltration values, obtained by the double-ring method in 12 soil series of the Saveh plain, situated in Markazi province, Iran. After obtaining the model coefficients, the equations were regenerated in Excel, and the calculations of model accuracy against the observations, together with the related graphs, were done with this software. Infiltration parameters, such as cumulative infiltration and infiltration rate, were obtained from the fitted models and compared with the observed values. The results show that the Kostiakov and Modified Kostiakov models could quantify cumulative infiltration and infiltration rate over short, middle and long times. In three soil series, the Horton model determined infiltration better than the others across the three time treatments. The Philip model gave a relatively good fit for the infiltration parameters in seven series; however, in five soil series the fitted attraction coefficient (s) was less than zero, giving the fitted curve an unphysical shape. Overall, the S.C.S model had the least capability for determining the infiltration parameters.
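
    To illustrate the model-comparison step, the sketch below fits three of the five named models to synthetic double-ring infiltration-rate data and ranks them by RMSE. The functional forms are common textbook rate equations and the data are invented; neither reproduces the Saveh plain measurements or the Quick Basic/Excel workflow of the study.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Common infiltration-rate forms (t in hours, rates in cm/h); the study's
    # exact parametrizations may differ.
    def kostiakov(t, k, a):      return k * t ** (-a)
    def philip(t, s, a):         return 0.5 * s * t ** (-0.5) + a
    def horton(t, fc, f0, beta): return fc + (f0 - fc) * np.exp(-beta * t)

    # Synthetic double-ring observations with measurement noise.
    rng = np.random.default_rng(2)
    t = np.linspace(0.1, 4.0, 25)
    obs = horton(t, 1.2, 8.0, 1.5) + 0.1 * rng.standard_normal(t.size)

    for name, f, p0 in [("Kostiakov", kostiakov, (5.0, 0.5)),
                        ("Philip", philip, (5.0, 1.0)),
                        ("Horton", horton, (1.0, 8.0, 1.0))]:
        p, _ = curve_fit(f, t, obs, p0=p0, maxfev=10_000)
        rmse = np.sqrt(np.mean((f(t, *p) - obs) ** 2))
        print(f"{name:10s} RMSE = {rmse:.3f}")
    ```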

  13. M-Split: A Graphical User Interface to Analyze Multilayered Anisotropy from Shear Wave Splitting

    NASA Astrophysics Data System (ADS)

    Abgarmi, Bizhan; Ozacar, A. Arda

    2017-04-01

    Shear wave splitting analyses are commonly used to infer deep anisotropic structure. For simple cases, obtained delay times and fast-axis orientations are averaged from reliable results to define anisotropy beneath recording seismic stations. However, splitting parameters show systematic variations with back azimuth in the presence of complex anisotropy and cannot be represented by an average time delay and fast-axis orientation. Previous researchers have identified anisotropic complexities at different tectonic settings and applied various approaches to model them. Most commonly, such complexities are modeled by using multiple anisotropic layers with a priori constraints from geologic data. In this study, a graphical user interface called M-Split is developed to easily process and model multilayered anisotropy, with capabilities to properly address the inherited non-uniqueness. The M-Split program runs user-defined grid searches through the model parameter space for two-layer anisotropy using the formulation of Silver and Savage (1994) and creates sensitivity contour plots to locate local maxima and analyze all possible models with parameter tradeoffs. In order to minimize model ambiguity and identify the robust model parameters, various misfit calculation procedures are also developed and embedded in M-Split, which can be used depending on the quality of the observations and their back-azimuthal coverage. Case studies were carried out to evaluate the reliability of the program using real noisy data, and for this purpose stations from two different networks were utilized. The first seismic network is the Kandilli Observatory and Earthquake Research Institute (KOERI), which includes long-running permanent stations; the second comprises seismic stations deployed temporarily as part of the "Continental Dynamics-Central Anatolian Tectonics (CD-CAT)" project funded by NSF. It is also worth noting that M-Split is designed as an open-source program which can be modified by users for additional capabilities or for other applications.

  14. An algorithm for hyperspectral remote sensing of aerosols: 1. Development of theoretical framework

    NASA Astrophysics Data System (ADS)

    Hou, Weizhen; Wang, Jun; Xu, Xiaoguang; Reid, Jeffrey S.; Han, Dong

    2016-07-01

    This paper describes the first part of a series of investigations to develop algorithms for simultaneous retrieval of aerosol parameters and surface reflectance from a newly developed hyperspectral instrument, the GEOstationary Trace gas and Aerosol Sensor Optimization (GEO-TASO), by taking full advantage of available hyperspectral measurement information in the visible bands. We describe the theoretical framework of an inversion algorithm for the hyperspectral remote sensing of the aerosol optical properties, in which the major principal components (PCs) of surface reflectance are assumed known, and the spectrally dependent aerosol refractive indices are assumed to follow a power-law approximation with four unknown parameters (two for the real and two for the imaginary part of the refractive index). New capabilities for computing the Jacobians of the four Stokes parameters of reflected solar radiation at the top of the atmosphere with respect to these unknown aerosol parameters and the weighting coefficients for each PC of surface reflectance are added into the UNified Linearized Vector Radiative Transfer Model (UNL-VRTM), which in turn facilitates the optimization in the inversion process. Theoretical derivations of the formulas for these new capabilities are provided, and the analytical solutions of the Jacobians are validated against finite-difference calculations with relative error less than 0.2%. Finally, a self-consistency check of the inversion algorithm is conducted for the idealized green-vegetation and rangeland surfaces that were spectrally characterized by the U.S. Geological Survey digital spectral library. It shows that the first six PCs can yield the reconstruction of spectral surface reflectance with errors less than 1%. Assuming that aerosol properties can be accurately characterized, the inversion yields a retrieval of hyperspectral surface reflectance with an uncertainty of 2% (and root-mean-square error of less than 0.003), which suggests self-consistency in the inversion framework. The next step of using this framework to study the aerosol information content in GEO-TASO measurements is also discussed.
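
    The power-law approximation mentioned above is not written out in this summary; one parametrization consistent with the description of two real and two imaginary unknowns (an assumption here, not necessarily the paper's exact form) is

    $$ m_r(\lambda) = a_r \left(\frac{\lambda}{\lambda_0}\right)^{-b_r}, \qquad m_i(\lambda) = a_i \left(\frac{\lambda}{\lambda_0}\right)^{-b_i}, $$

    with $(a_r, b_r)$ governing the real part and $(a_i, b_i)$ the imaginary part of the spectrally dependent refractive index, relative to a reference wavelength $\lambda_0$.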

  15. A NTCP approach for estimating the outcome in radioiodine treatment of hyperthyroidism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strigari, L.; Sciuto, R.; Benassi, M.

    2008-09-15

    Radioiodine has been in use for over 60 years as a treatment for hyperthyroidism. Major changes in clinical practice have led to accurate dosimetry capable of avoiding the risks of adverse effects and to the optimization of the treatment. The aim of this study was to test the capability of a radiobiological model, based on normal tissue complication probability (NTCP), to predict the outcome after oral therapeutic ¹³¹I administration. Following dosimetric study, 79 patients underwent treatment for hyperthyroidism using radioiodine, and 67 then had at least a one-year follow-up. The delivered dose was calculated using the MIRD formula, taking into account the measured maximum uptake of administered iodine transferred to the thyroid, U0, the effective clearance rate, T_eff, and the target mass. The dose was converted to the normalized total dose delivered at 2 Gy per fraction (NTD2). Furthermore, a method to take into account the reduction of the mass of the gland during radioiodine therapy was also applied. The clinical outcome and dosimetric parameters were analyzed in order to study the dose-response relationship for hypothyroidism. The TD50 and m parameters of the NTCP model were then estimated using the likelihood method. The TD50, expressed as NTD2, was 60 Gy (95% C.I.: 45-75 Gy) and 96 Gy (95% C.I.: 86-109 Gy) for patients affected by Graves or autonomous/multinodular disease, respectively. This supports the clinical evidence that Graves' disease should be characterized by more radiosensitive cells compared to autonomous nodules. The m parameter for all patients was 0.27 (95% C.I.: 0.22-0.36). These parameters were compared with those reported in the literature for hypothyroidism induced after external beam radiotherapy. The NTCP model correctly predicted the clinical outcome after the therapeutic administration of radioiodine in our series.
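
    The probit-style NTCP dose-response implied by the TD50 and m parameters has the standard form NTCP(D) = Phi((D - TD50) / (m * TD50)), with Phi the standard normal CDF. The sketch below evaluates it with the Graves-disease parameters reported in the abstract; the functional form is the usual one from the NTCP literature, assumed here rather than quoted from the paper.

    ```python
    from math import erf, sqrt

    def ntcp(dose_gy, td50, m):
        """Probit NTCP model: Phi((D - TD50) / (m * TD50))."""
        z = (dose_gy - td50) / (m * td50)
        return 0.5 * (1.0 + erf(z / sqrt(2.0)))

    # Graves disease parameters reported above: TD50 = 60 Gy, m = 0.27.
    for d in (45.0, 60.0, 75.0, 90.0):
        print(f"NTD2 = {d:5.1f} Gy  ->  NTCP = {ntcp(d, 60.0, 0.27):.2f}")
    ```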

  16. Improvements of the Ray-Tracing Based Method Calculating Hypocentral Loci for Earthquake Location

    NASA Astrophysics Data System (ADS)

    Zhao, A. H.

    2014-12-01

    Hypocentral loci are very useful for reliable and visual earthquake location. However, they can hardly be expressed analytically when the velocity model is complex. One method of calculating them numerically is based on a minimum traveltime tree algorithm for tracing rays: a focal locus is represented in terms of ray paths in its residual field from the minimum point (namely the initial point) to low-residual points (referred to as reference points of the focal locus). The method has no restrictions on the complexity of the velocity model but still lacks the ability to deal correctly with multi-segment loci. Additionally, it is rather laborious to set calculation parameters for obtaining loci with satisfying completeness and fineness. In this study, we improve the ray-tracing-based numerical method to overcome these shortcomings. (1) Reference points of a hypocentral locus are selected from nodes of the model cells that it passes through, by means of a so-called peeling method. (2) The calculation domain of a hypocentral locus is defined as a low-residual area whose connected regions each include one segment of the locus; all the focal locus segments are then calculated with the minimum traveltime tree algorithm for tracing rays by repeatedly assigning the minimum-residual reference point among those that have not been traced as an initial point. (3) Short ray paths without branching are removed to make the calculated locus finer. Numerical tests show that the improved method is capable of efficiently calculating complete and fine hypocentral loci of earthquakes in a complex model.

  17. New criteria for isotropic and textured metals

    NASA Astrophysics Data System (ADS)

    Cazacu, Oana

    2018-05-01

    In this paper an isotropic criterion expressed in terms of both invariants of the stress deviator, J2 and J3, is proposed. This criterion involves a unique parameter, α, which depends only on the ratio between the yield stresses in uniaxial tension and pure shear. If this parameter is zero, the von Mises yield criterion is recovered; if α is positive the yield surface is interior to the von Mises yield surface, whereas when α is negative the new yield surface is exterior to it. Comparison with polycrystalline calculations using the Taylor-Bishop-Hill model [1] for randomly oriented face-centered-cubic (FCC) polycrystalline metallic materials shows that this new criterion captures well the numerical yield points. Furthermore, the criterion reproduces well yielding under combined tension-shear loadings for a variety of isotropic materials. An extension of this isotropic yield criterion to account for orthotropy in yielding is developed using the generalized invariants approach of Cazacu and Barlat [2]. This new orthotropic criterion is general and applicable to three-dimensional stress states. The procedure for the identification of the material parameters is outlined. The predictive capabilities of the new orthotropic criterion are demonstrated through comparison between the model predictions and data on aluminum sheet samples.
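
    To make the ingredients concrete, the sketch below computes the deviatoric invariants J2 and J3 for a stress state and evaluates a one-parameter criterion of the general shape described above, f = J2^(3/2) - alpha * J3. The exact expression belongs to the paper; here alpha = 0 simply checks that the von Mises limit is recovered.

    ```python
    import numpy as np

    def j2_j3(sigma):
        """Second and third invariants of the stress deviator."""
        s = sigma - np.trace(sigma) / 3.0 * np.eye(3)
        return 0.5 * np.trace(s @ s), np.linalg.det(s)

    sigma = np.diag([200.0, 50.0, 0.0])     # principal stresses, MPa
    J2, J3 = j2_j3(sigma)
    alpha = 0.0                             # alpha = 0: von Mises limit
    print("von Mises stress:", round(float(np.sqrt(3.0 * J2)), 1), "MPa")
    print("criterion value :", round(float(J2 ** 1.5 - alpha * J3), 1))
    ```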

  18. The application of neural networks to myoelectric signal analysis: a preliminary study.

    PubMed

    Kelly, M F; Parker, P A; Scott, R N

    1990-03-01

    Two neural network implementations are applied to myoelectric signal (MES) analysis tasks. The motivation behind this research is to explore more reliable methods of deriving control for multidegree of freedom arm prostheses. A discrete Hopfield network is used to calculate the time series parameters for a moving average MES model. It is demonstrated that the Hopfield network is capable of generating the same time series parameters as those produced by the conventional sequential least squares (SLS) algorithm. Furthermore, it can be extended to applications utilizing larger amounts of data, and possibly to higher order time series models, without significant degradation in computational efficiency. The second neural network implementation involves using a two-layer perceptron for classifying a single site MES based on two features, specifically the first time series parameter, and the signal power. Using these features, the perceptron is trained to distinguish between four separate arm functions. The two-dimensional decision boundaries used by the perceptron classifier are delineated. It is also demonstrated that the perceptron is able to rapidly compensate for variations when new data are incorporated into the training set. This adaptive quality suggests that perceptrons may provide a useful tool for future MES analysis.

  19. Evaluation of RAPID for a UNF cask benchmark problem

    NASA Astrophysics Data System (ADS)

    Mascolino, Valerio; Haghighat, Alireza; Roskoff, Nathan J.

    2017-09-01

    This paper examines the accuracy and performance of the RAPID (Real-time Analysis for Particle transport and In-situ Detection) code system for the simulation of a used nuclear fuel (UNF) cask. RAPID is capable of determining the eigenvalue, subcritical multiplication, and pin-wise, axially dependent fission density throughout a UNF cask. We study the source convergence based on an analysis of the different parameters used in an eigenvalue calculation in the MCNP Monte Carlo code. For this study, we consider a single assembly surrounded by absorbing plates with reflective boundary conditions. Based on the best combination of eigenvalue parameters, a reference MCNP solution for the single assembly is obtained. RAPID results are in excellent agreement with the reference MCNP solutions, while requiring significantly less computation time (i.e., minutes vs. days). A similar set of eigenvalue parameters is used to obtain a reference MCNP solution for the whole UNF cask. Because of time limitations, the MCNP results near the cask boundaries have significant uncertainties. Except for these, the RAPID results are in excellent agreement with the MCNP predictions, and its computation time is significantly lower: 35 seconds on 1 core versus 9.5 days on 16 cores.

  20. Power and energy ratios in mechanical CVT drive control

    NASA Astrophysics Data System (ADS)

    Balakin, P. D.; Stripling, L. O.

    2017-06-01

    Based on the principle of providing systems with the property of adapting to real parameters and operating conditions, a mechanical system capable of automatically controlling the components of the converted power is proposed; it provides stationary operation of the vehicle engine under variable external loading. This is achieved by drive control integrated into the power transmission, which implements an additional degree of freedom and operates, on the basis of the laws of motion, with the energy of the main power flow by automatically changing the kinematic characteristics of the power transmission; this system is named a CVT. The power and energy ratios found allow performing the necessary design calculations of the sections and links of the mechanical CVT scheme.

  1. Temperature and composition dependence of the refractive indices of the 2-chloroethanol + 2-methoxyethanol binary mixtures.

    PubMed

    Cocchi, Marina; Manfredini, Matteo; Marchetti, Andrea; Pigani, Laura; Seeber, Renato; Tassi, Lorenzo; Ulrici, Alessandro; Vignali, Moris; Zanardi, Chiara; Zannini, Paolo

    2002-03-01

    Measurements of the refractive index n for the binary mixtures 2-chloroethanol + 2-methoxyethanol in the 0 ≤ t/°C ≤ 70 temperature range have been carried out with the purpose of checking the capability of empirical models to express the physical quantity as a function of temperature and volume fraction, both separately and together, i.e., in a two-independent-variable expression. Furthermore, the experimental data have been used to calculate excess properties such as the excess refractive index, the excess molar refraction, and the excess Kirkwood parameter δg over the whole composition range. The quantities obtained have been discussed and interpreted in terms of the type and nature of the specific intermolecular interactions between the components.
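
    The excess quantities named above are deviations of the measured property from an ideal-mixing baseline. A minimal sketch, assuming a volume-fraction-weighted baseline for the refractive index (the paper's actual mixing rule may differ), is:

    ```python
    def excess_n(n_mix, phi1, n1, n2):
        """Excess refractive index relative to ideal volume-fraction mixing."""
        return n_mix - (phi1 * n1 + (1.0 - phi1) * n2)

    # Hypothetical values for a 2-chloroethanol (1) + 2-methoxyethanol (2) mixture.
    print(f"{excess_n(n_mix=1.4305, phi1=0.4, n1=1.4419, n2=1.4024):+.4f}")
    ```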

  2. PIV/HPIV Film Analysis Software Package

    NASA Technical Reports Server (NTRS)

    Blackshire, James L.

    1997-01-01

    A PIV/HPIV film analysis software system was developed that calculates the 2-dimensional spatial autocorrelations of subregions of Particle Image Velocimetry (PIV) or Holographic Particle Image Velocimetry (HPIV) film recordings. The software controls three hardware subsystems including (1) a Kodak Megaplus 1.4 camera and EPIX 4MEG framegrabber subsystem, (2) an IEEE/Unidex 11 precision motion control subsystem, and (3) an Alacron I860 array processor subsystem. The software runs on an IBM PC/AT host computer running either the Microsoft Windows 3.1 or Windows 95 operating system. It is capable of processing five PIV or HPIV displacement vectors per second, and is completely automated with the exception of user input to a configuration file prior to analysis execution for update of various system parameters.

  3. Semiempirical models of shear modulus at shock temperatures and pressures

    NASA Astrophysics Data System (ADS)

    Elkin, Vaytcheslav; Mikhaylov, Vadim; Mikhaylova, Tatiana

    2011-06-01

    The work compares the capabilities of the Steinberg-Cochran-Guinan (SCG) and Burakovsky-Preston models of the shear modulus for describing experimental and calculated (ab initio) data at temperatures and pressures representative of the solid state behind a shock front. The SCG model is also modified by changing from the (P,V) variables to the (V,T) ones and adding a free parameter; the resulting model is referred to as the (V,T)-model. The three models are tested for 9 metals (Al, Be, Cu, K, Na, Mg, Mo, W, Ta) using ab initio and experimental values of the shear modulus over a wide range of pressures, as well as longitudinal sound velocities behind the shock front.
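
    For reference, the standard Steinberg-Cochran-Guinan pressure- and temperature-dependent shear modulus, as usually quoted in the literature (the paper's (V,T)-modified version differs as described above), reads

    $$ G(P,T) \;=\; G_0\left[\,1 + \frac{G'_P}{G_0}\,\frac{P}{\eta^{1/3}} + \frac{G'_T}{G_0}\,\bigl(T - 300\ \mathrm{K}\bigr)\right], \qquad \eta = \rho/\rho_0, $$

    where $G_0$, $G'_P$ and $G'_T$ are the ambient shear modulus and its pressure and temperature derivatives, and $\eta$ is the compression.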

  4. Acoustic environmental accuracy requirements for response determination

    NASA Technical Reports Server (NTRS)

    Pettitt, M. R.

    1983-01-01

    A general purpose computer program was developed for the prediction of vehicle interior noise. This program, named VIN, has both modal and statistical energy analysis capabilities for structural/acoustic interaction analysis. The analytic models and their computer implementation were verified through simple test cases with well-defined experimental results. The model was also applied in a space shuttle payload bay launch acoustics prediction study. The computer program processes large and small problems with equal efficiency because all arrays are dynamically sized by program input variables at run time. A data base is built and easily accessed for design studies. The data base significantly reduces the computational costs of such studies by allowing the reuse of the still-valid calculated parameters of previous iterations.

  5. Solving iTOUGH2 simulation and optimization problems using the PEST protocol

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finsterle, S.A.; Zhang, Y.

    2011-02-01

    The PEST protocol has been implemented into the iTOUGH2 code, allowing the user to link any simulation program (with ASCII-based inputs and outputs) to iTOUGH2's sensitivity analysis, inverse modeling, and uncertainty quantification capabilities. These application models can be pre- or post-processors of the TOUGH2 non-isothermal multiphase flow and transport simulator, or programs that are unrelated to the TOUGH suite of codes. PEST-style template and instruction files are used, respectively, to pass input parameters updated by the iTOUGH2 optimization routines to the model, and to retrieve the model-calculated values that correspond to observable variables. We summarize the iTOUGH2 capabilities and demonstrate the flexibility added by the PEST protocol for the solution of a variety of simulation-optimization problems. In particular, the combination of loosely coupled and tightly integrated simulation and optimization routines provides both the flexibility and control needed to solve challenging inversion problems for the analysis of multiphase subsurface flow and transport systems.

  6. Manual for Getdata Version 3.1: a FORTRAN Utility Program for Time History Data

    NASA Technical Reports Server (NTRS)

    Maine, Richard E.

    1987-01-01

    This report documents version 3.1 of the GetData computer program. GetData is a utility program for manipulating files of time history data, i.e., data giving the values of parameters as functions of time. The most fundamental capability of GetData is extracting selected signals and time segments from an input file and writing the selected data to an output file. Other capabilities include converting file formats, merging data from several input files, time skewing, interpolating to common output times, and generating calculated output signals as functions of the input signals. This report also documents the interface standards for the subroutines used by GetData to read and write the time history files. All interface to the data files is through these subroutines, keeping the main body of GetData independent of the precise details of the file formats. Different file formats can be supported by changes restricted to these subroutines. Other computer programs conforming to the interface standards can call the same subroutines to read and write files in compatible formats.

  7. Adjoint-Based Mesh Adaptation for the Sonic Boom Signature Loudness

    NASA Technical Reports Server (NTRS)

    Rallabhandi, Sriram K.; Park, Michael A.

    2017-01-01

    The mesh adaptation functionality of FUN3D is utilized to obtain a mesh optimized to calculate sonic boom ground signature loudness. During this process, the coupling between the discrete-adjoints of the computational fluid dynamics tool FUN3D and the atmospheric propagation tool sBOOM is exploited to form the error estimate. This new mesh adaptation methodology will allow generation of suitable meshes adapted to reduce the estimated errors in the ground loudness, which is an optimization metric employed in supersonic aircraft design. This new output-based adaptation could allow new insights into meshing for sonic boom analysis and design, and complements existing output-based adaptation techniques such as adaptation to reduce estimated errors in off-body pressure functional. This effort could also have implications for other coupled multidisciplinary adjoint capabilities (e.g., aeroelasticity) as well as inclusion of propagation specific parameters such as prevailing winds or non-standard atmospheric conditions. Results are discussed in the context of existing methods and appropriate conclusions are drawn as to the efficacy and efficiency of the developed capability.

  8. An algorithm for charge-integration, pulse-shape discrimination and estimation of neutron/photon misclassification in organic scintillators

    NASA Astrophysics Data System (ADS)

    Polack, J. K.; Flaska, M.; Enqvist, A.; Sosa, C. S.; Lawrence, C. C.; Pozzi, S. A.

    2015-09-01

    Organic scintillators are frequently used for measurements that require sensitivity to both photons and fast neutrons because of their pulse shape discrimination capabilities. In these measurement scenarios, particle identification is commonly handled using the charge-integration pulse shape discrimination method. This method works particularly well for high-energy depositions, but is prone to misclassification for relatively low-energy depositions. A novel algorithm has been developed for automatically performing charge-integration pulse shape discrimination in a consistent and repeatable manner. The algorithm is able to estimate the photon and neutron misclassification corresponding to the calculated discrimination parameters, and is capable of doing so using only the information measured by a single organic scintillator. This paper describes the algorithm and assesses its performance by comparing algorithm-estimated misclassification to values computed via a more traditional time-of-flight estimation. A single data set was processed using four different low-energy thresholds: 40, 60, 90, and 120 keVee. Overall, the results compared well between the two methods; in most cases, the algorithm-estimated values fell within the uncertainties of the TOF-estimated values.
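
    In its simplest form, charge-integration pulse shape discrimination reduces each digitized pulse to a tail-to-total integral ratio, which is larger for neutrons than for photons in organic scintillators. The sketch below shows that reduction on a toy two-exponential pulse; the gate offsets are illustrative placeholders and are precisely the quantities the paper's algorithm chooses automatically.

    ```python
    import numpy as np

    def psd_ratio(pulse, peak_idx, tail_start=10, gate_end=60):
        """Tail-to-total charge ratio; gates are in samples relative to the peak."""
        total = pulse[max(peak_idx - 5, 0): peak_idx + gate_end].sum()
        tail = pulse[peak_idx + tail_start: peak_idx + gate_end].sum()
        return tail / total

    # Toy pulse: fast scintillation decay plus a slower, neutron-like component.
    t = np.arange(200, dtype=float)
    pulse = np.exp(-t / 5.0) + 0.1 * np.exp(-t / 50.0)
    print(f"PSD parameter: {psd_ratio(pulse, peak_idx=0):.3f}")
    ```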

  9. Constructing the matrix

    NASA Astrophysics Data System (ADS)

    Elliott, John

    2012-09-01

    As part of our 'toolkit' for analysing an extraterrestrial signal, the facility for calculating structural affinity to known phenomena must be part of our core capabilities. Without such a resource, we risk compromising our potential for detection and decipherment or at least causing significant delay in the process. To create such a repository for assessing structural affinity, all known systems (language parameters) need to be structurally analysed to 'place' their 'system' within a relational communication matrix. This will need to include all known variants of language structure, whether 'living' (in current use) or ancient; this must also include endeavours to incorporate yet undeciphered scripts and non-human communication, to provide as complete a picture as possible. In creating such a relational matrix, post-detection decipherment will be assisted by a structural 'map' that will have the potential for 'placing' an alien communication with its nearest known 'neighbour', to assist subsequent categorisation of basic parameters as a precursor to decipherment. 'Universal' attributes and behavioural characteristics of known communication structure will form a range of templates (Elliott, 2001 [1] and Elliott et al., 2002 [2]), to support and optimise our attempt at categorising and deciphering the content of an extraterrestrial signal. Detection of the hierarchical layers, which comprise intelligent, complex communication, will then form a matrix of calculations that will ultimately score affinity through a relational matrix of structural comparison. In this paper we develop the rationales and demonstrate functionality with initial test results.

  10. Numerical simulation of unmanned aerial vehicle under centrifugal load and optimization of milling and planing

    NASA Astrophysics Data System (ADS)

    Chen, Yunsheng; Lu, Xinghua

    2018-05-01

    The mechanical parts of the fuselage surface of the UAV are easily fractured by the action of the centrifugal load. In order to improve the compressive strength of the UAV and guide the milling and planing of mechanical parts, a numerical simulation method for UAV fuselage compression under centrifugal load, based on the discrete element analysis method, is proposed. The three-dimensional discrete element method is used to establish the splitting tensile force analysis model of the UAV fuselage under centrifugal loading. The micro-contact connection parameters of the UAV fuselage are calculated, and the yield tensile model of the mechanical components is established. The dynamic and static mechanical model of the aircraft fuselage milling is analyzed by the combined axial amplitude vibration frequency method, and the correlation between cutting depth and tool wear is obtained. The centrifugal load stress spectrum of the surface of the UAV is calculated. Meshing and finite element simulation of the rotor blade of the unmanned aerial vehicle are carried out to optimize the milling process. The test results show that the proposed method yields a more accurate compression simulation of the UAV, and that the milling and planing optimization improves the fatigue-damage resistance of the fuselage, effectively increasing the mechanical strength of the unmanned aerial vehicle.

  11. Adiabatically describing rare earths using microscopic deformations

    NASA Astrophysics Data System (ADS)

    Nobre, Gustavo; Dupuis, Marc; Herman, Michal; Brown, David

    2017-09-01

    Recent works showed that reactions on well-deformed nuclei in the rare-earth region are very well described by an adiabatic method. This assumes a spherical optical potential (OP) accounting for non-rotational degrees of freedom while the deformed configuration is described by couplings to states of the g.s. rotational band. This method has, apart from the global OP, only the deformation parameters as inputs, with no additional fitted variables. For this reason, it has only been applied to nuclei with well-measured deformations. With the new computational capabilities, microscopic large-scale calculations of deformation parameters within the HFB method based on the D1S Gogny force are available in the literature. We propose to use such microscopic deformations in our adiabatic method, allowing us to reproduce the cross-section agreement observed for stable nuclei, and to reliably extend this description to nuclei far from stability, describing the whole rare-earth region. Since all cross sections, such as capture and charge exchange, strongly depend on the correct calculation of absorption from the incident channel (from direct reaction mechanisms), this approach significantly improves the accuracy of cross sections and transitions relevant to astrophysical studies. The work at BNL was sponsored by the Office of Nuclear Physics, Office of Science of the US Department of Energy, under Contract No. DE-AC02-98CH10886 with Brookhaven Science Associates, LLC.

  12. Influence of different dose calculation algorithms on the estimate of NTCP for lung complications

    PubMed Central

    Bäck, Anna

    2013-01-01

    Due to limitations and uncertainties in dose calculation algorithms, different algorithms can predict different dose distributions and dose‐volume histograms for the same treatment. This can be a problem when estimating the normal tissue complication probability (NTCP) for patient‐specific dose distributions. Published NTCP model parameters are often derived for a different dose calculation algorithm than the one used to calculate the actual dose distribution. The use of algorithm‐specific NTCP model parameters can prevent errors caused by differences in dose calculation algorithms. The objective of this work was to determine how to change the NTCP model parameters for lung complications derived for a simple correction‐based pencil beam dose calculation algorithm, in order to make them valid for three other common dose calculation algorithms. NTCP was calculated with the relative seriality (RS) and Lyman‐Kutcher‐Burman (LKB) models. The four dose calculation algorithms used were the pencil beam (PB) and collapsed cone (CC) algorithms employed by Oncentra, and the pencil beam convolution (PBC) and anisotropic analytical algorithm (AAA) employed by Eclipse. Original model parameters for lung complications were taken from four published studies on different grades of pneumonitis, and new algorithm‐specific NTCP model parameters were determined. The difference between original and new model parameters was presented in relation to the reported model parameter uncertainties. Three different types of treatments were considered in the study: tangential and locoregional breast cancer treatment and lung cancer treatment. Changing the algorithm without the derivation of new model parameters caused changes in the NTCP value of up to 10 percentage points for the cases studied. Furthermore, the error introduced could be of the same magnitude as the confidence intervals of the calculated NTCP values. The new NTCP model parameters were tabulated as the algorithm was varied from PB to PBC, AAA, or CC. Moving from the PB to the PBC algorithm did not require new model parameters; however, moving from PB to AAA or CC did require a change in the NTCP model parameters, with CC requiring the largest change. It was shown that the new model parameters for a given algorithm are different for the different treatment types. PACS numbers: 87.53.‐j, 87.53.Kn, 87.55.‐x, 87.55.dh, 87.55.kd PMID:24036865
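    For orientation, one of the two models named above, the Lyman‐Kutcher‐Burman model, can be sketched as follows. The gEUD reduction and probit form are standard, but the parameter values passed in (TD50, m, n) are exactly the algorithm-specific quantities this study derives; the code is an illustration, not the study's implementation.

      import numpy as np
      from math import erf, sqrt

      def lkb_ntcp(dose, vol_frac, td50, m, n):
          """dose, vol_frac: differential DVH arrays; td50, m, n: LKB parameters."""
          # generalized equivalent uniform dose (gEUD) reduction of the DVH
          geud = (np.sum(vol_frac * dose**(1.0 / n)))**n
          t = (geud - td50) / (m * td50)
          return 0.5 * (1.0 + erf(t / sqrt(2.0)))   # probit (normal CDF)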

  13. Comparison of Dorris-Gray and Schultz methods for the calculation of surface dispersive free energy by inverse gas chromatography.

    PubMed

    Shi, Baoli; Wang, Yue; Jia, Lina

    2011-02-11

    Inverse gas chromatography (IGC) is an important technique for the characterization of surface properties of solid materials. A standard approach to surface characterization is to first determine the surface dispersive free energy of the solid stationary phase using a series of linear alkanes as molecular probes, and then calculate the acid-base parameters from the dispersive parameters. However, two different methods are generally used to calculate the surface dispersive free energy: the Dorris-Gray method and the Schultz method. In this paper, the results of the two methods are compared by calculating their ratio from the underlying equations and parameters. It can be concluded that the dispersive parameters calculated with the Dorris-Gray method will always be larger than those calculated with the Schultz method, and that the ratio grows as the measurement temperature increases. Comparison with the parameters in solvent handbooks suggests that the traditional surface free energy parameters of n-alkanes listed in papers using the Schultz method are not sufficiently accurate, a conclusion supported by a published IGC experimental result. © 2010 Elsevier B.V. All rights reserved.
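    A hedged sketch of the two estimates is given below; the CH2 group cross-sectional area and the temperature correlation for gamma_CH2 are commonly quoted literature values, assumed here rather than taken from this paper.

      import numpy as np

      R = 8.314         # J/(mol K)
      N_A = 6.022e23    # 1/mol
      a_CH2 = 6.0e-20   # m^2, cross-sectional area of a -CH2- group (assumed)

      def gamma_d_dorris_gray(T, Vn):
          """Vn: net retention volumes of consecutive n-alkanes at temperature T (K)."""
          dG_CH2 = R * T * np.mean(np.diff(np.log(Vn)))     # per-CH2 increment, J/mol
          gamma_CH2 = (35.6 + 0.058 * (293.0 - T)) * 1e-3   # J/m^2 (assumed correlation)
          return dG_CH2**2 / (4.0 * N_A**2 * a_CH2**2 * gamma_CH2)

      def gamma_d_schultz(T, Vn, a_gld):
          """a_gld: probe values of a*(gamma_l^d)**0.5 in m^2*(J/m^2)**0.5."""
          # slope of RT*ln(Vn) vs. a*(gamma_l^d)**0.5 equals 2*N_A*(gamma_s^d)**0.5
          slope = np.polyfit(a_gld, R * T * np.log(Vn), 1)[0]
          return (slope / (2.0 * N_A))**2

    The ratio gamma_d_dorris_gray(...) / gamma_d_schultz(...) is the quantity whose temperature dependence the paper examines.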

  14. The r-Java 2.0 code: nuclear physics

    NASA Astrophysics Data System (ADS)

    Kostka, M.; Koning, N.; Shand, Z.; Ouyed, R.; Jaikumar, P.

    2014-08-01

    Aims: We present r-Java 2.0, a nucleosynthesis code for open use that performs r-process calculations, along with a suite of other analysis tools. Methods: Equipped with a straightforward graphical user interface, r-Java 2.0 is capable of simulating nuclear statistical equilibrium (NSE), calculating r-process abundances for a wide range of input parameters and astrophysical environments, computing the mass fragmentation from neutron-induced fission and studying individual nucleosynthesis processes. Results: In this paper we discuss enhancements to this version of r-Java, especially the ability to solve the full reaction network. The sophisticated fission methodology incorporated in r-Java 2.0, which includes three fission channels (beta-delayed, neutron-induced, and spontaneous fission) along with computation of the mass fragmentation, is compared to the upper limit on mass fission approximation. The effects of including beta-delayed neutron emission on r-process yield are studied. The role of Coulomb interactions in NSE abundances is shown to be significant, supporting previous findings. A comparative analysis was undertaken during the development of r-Java 2.0 whereby we reproduced the results found in the literature from three other r-process codes. This code is capable of simulating the physical environment of the high-entropy wind around a proto-neutron star, the ejecta from a neutron star merger, or the relativistic ejecta from a quark nova. Likewise, users of r-Java 2.0 are given the freedom to define a custom environment. This software provides a platform for comparing proposed r-process sites.

  15. skelesim: an extensible, general framework for population genetic simulation in R.

    PubMed

    Parobek, Christian M; Archer, Frederick I; DePrenger-Levin, Michelle E; Hoban, Sean M; Liggins, Libby; Strand, Allan E

    2017-01-01

    Simulations are a key tool in molecular ecology for inference and forecasting, as well as for evaluating new methods. Due to growing computational power and a diversity of software with different capabilities, simulations are becoming increasingly powerful and useful. However, the widespread use of simulations by geneticists and ecologists is hindered by difficulties in understanding these softwares' complex capabilities, composing code and input files, a daunting bioinformatics barrier and a steep conceptual learning curve. skelesim (an R package) guides users in choosing appropriate simulations, setting parameters, calculating genetic summary statistics and organizing data output, in a reproducible pipeline within the R environment. skelesim is designed to be an extensible framework that can 'wrap' around any simulation software (inside or outside the R environment) and be extended to calculate and graph any genetic summary statistics. Currently, skelesim implements coalescent and forward-time models available in the fastsimcoal2 and rmetasim simulation engines to produce null distributions for multiple population genetic statistics and marker types, under a variety of demographic conditions. skelesim is intended to make simulations easier while still allowing full model complexity to ensure that simulations play a fundamental role in molecular ecology investigations. skelesim can also serve as a teaching tool: demonstrating the outcomes of stochastic population genetic processes; teaching general concepts of simulations; and providing an introduction to the R environment with a user-friendly graphical user interface (using shiny). © 2016 John Wiley & Sons Ltd.

  16. skeleSim: an extensible, general framework for population genetic simulation in R

    PubMed Central

    Parobek, Christian M.; Archer, Frederick I.; DePrenger-Levin, Michelle E.; Hoban, Sean M.; Liggins, Libby; Strand, Allan E.

    2016-01-01

    Simulations are a key tool in molecular ecology for inference and forecasting, as well as for evaluating new methods. Due to growing computational power and a diversity of software with different capabilities, simulations are becoming increasingly powerful and useful. However, the widespread use of simulations by geneticists and ecologists is hindered by difficulties in understanding these softwares’ complex capabilities, composing code and input files, a daunting bioinformatics barrier, and a steep conceptual learning curve. skeleSim (an R package) guides users in choosing appropriate simulations, setting parameters, calculating genetic summary statistics, and organizing data output, in a reproducible pipeline within the R environment. skeleSim is designed to be an extensible framework that can ‘wrap’ around any simulation software (inside or outside the R environment) and be extended to calculate and graph any genetic summary statistics. Currently, skeleSim implements coalescent and forward-time models available in the fastsimcoal2 and rmetasim simulation engines to produce null distributions for multiple population genetic statistics and marker types, under a variety of demographic conditions. skeleSim is intended to make simulations easier while still allowing full model complexity to ensure that simulations play a fundamental role in molecular ecology investigations. skeleSim can also serve as a teaching tool: demonstrating the outcomes of stochastic population genetic processes; teaching general concepts of simulations; and providing an introduction to the R environment with a user-friendly graphical user interface (using shiny). PMID:27736016

  17. Characterization and Design of Spiral Frequency Steerable Acoustic Transducers

    NASA Astrophysics Data System (ADS)

    Repale, Rohan

    Structural Health Monitoring (SHM) is an emerging research area devoted to improving the safety and maintainability of civil structures. The guided-wave structural testing method is an effective approach for SHM of plate-like structures using piezoelectric transducers. These transducers are attached to the surface of the structure and sense its health using surface waves. Transducers with beam steering, i.e., electronic scanning, capabilities can interrogate surfaces with higher precision and ease. A frequency steerable acoustic transducer (FSAT) is capable of beam steering and directional surface wave sensing to detect and localize damage in structures. The objective of this research is to further explore the possibilities of FSAT technology by designing and testing new FSAT designs. The beam steering capability of the FSAT can be controlled by manipulating its design parameters, and these design parameters therefore play a significant role in FSAT performance. Studying the design parameters and documenting the performance improvements resulting from parameter variation is the primary goal of this research. Design and characterization of the spiral FSAT were performed and the results simulated. Previously documented array FSAT results were validated. Modified designs were modeled based on design parameter variations, characterized, and their performance recorded. Plate simulation results confirm a direct relationship between design parameters and beam steering. A set of guidelines for future designs is also proposed. Two designs developed from these guidelines were sent to our collaborator Genziko Inc. for fabrication.

  18. Numerical calculation of the parameters of the efflux from a helium dewar used for cooling of heat shields in a satellite

    NASA Technical Reports Server (NTRS)

    Brendley, K.; Chato, J. C.

    1982-01-01

    The parameters of the efflux from a helium dewar in space were numerically calculated. The flow was modeled as a one-dimensional compressible ideal gas with variable properties. The primary boundary conditions are flow with friction and flow with heat transfer and friction. Two PASCAL programs were developed to calculate the efflux parameters: EFFLUXD and EFFLUXM. EFFLUXD calculates the minimum mass flow for the given shield temperatures and shield heat inputs. It then calculates the pipe lengths, diameter, and fluid parameters which satisfy all boundary conditions. Since the diameter returned by EFFLUXD is only rarely of nominal size, EFFLUXM calculates the mass flow and shield heat exchange for given pipe lengths, diameter, and shield temperatures.

  19. Modeling electron emission and surface effects from diamond cathodes

    DOE PAGES

    Dimitrov, D. A.; Smithe, D.; Cary, J. R.; ...

    2015-02-05

    We developed modeling capabilities, within the Vorpal particle-in-cell code, for three-dimensional (3D) simulations of surface effects and electron emission from semiconductor photocathodes. They include calculation of emission probabilities using general, piece-wise continuous, space-time dependent surface potentials, effective mass and band bending field effects. We applied these models, in combination with previously implemented capabilities for modeling charge generation and transport in diamond, to investigate the emission dependence on applied electric field in the range from approximately 2 MV/m to 17 MV/m along the [100] direction. The simulation results were compared to experimental data. For the considered parameter regime, conservation of transverse electron momentum (in the plane of the emission surface) allows direct emission from only two (parallel to [100]) of the six equivalent lowest conduction band valleys. When the electron affinity χ is the only parameter varied in the simulations, the value χ = 0.31 eV leads to overall qualitative agreement with the probability of emission deduced from experiments. Including band bending in the simulations improves the agreement with the experimental data, particularly at low applied fields, but not significantly. In this study, using surface potentials with different profiles further allows us to investigate the emission as a function of potential barrier height, width, and vacuum level position. However, adding surface patches with different levels of hydrogenation, modeled with position-dependent electron affinity, leads to the closest agreement with the experimental data.

  20. Dependence of acoustic levitation capabilities on geometric parameters.

    PubMed

    Xie, W J; Wei, B

    2002-08-01

    A two-cylinder model incorporating boundary element method simulations is developed, which builds up the relationship between the levitation capabilities and the geometric parameters of a single-axis acoustic levitator with reference to wavelength. This model proves to be successful in predicting resonant modes of the acoustic field and explaining axial symmetry deviation of the levitated samples near the reflector and emitter. Concave reflecting surfaces of a spherical cap, a paraboloid, and a hyperboloid of revolution are investigated systematically with regard to the dependence of the levitation force on the section radius R(b) and curvature radius R (or depth D) of the reflector. It is found that the levitation force can be remarkably enhanced by choosing an optimum value of R or D, and the possible degree of this enhancement is largest for spherically curved reflectors. The degree of levitation force enhancement by this means can also be facilitated by enlarging R(b) and employing a lower resonant mode. The deviation of the sample near the reflector is found to be likely for smaller R(b), larger D, and higher resonant modes. The calculated dependence of the levitation force on R, R(b), and the resonant mode is verified experimentally and demonstrated to be in good agreement with measurements, in which a levitation force strong enough to levitate an iridium sphere (density 22.6 g/cm3, among the largest of all elements) is achieved.

  1. An MR-compatible gyroscope-based arm movement tracking system.

    PubMed

    Shirinbayan, S Iman; Rieger, Jochem W

    2017-03-15

    Functional magnetic resonance imaging is well suited to link neural population activation with movement parameters of complex natural arm movements. However, currently existing MR-compatible arm tracking devices are not constructed to measure arm joint movement parameters of unrestricted movements. Therefore, to date most research focuses on simple arm movements or includes very little knowledge about the actual movement kinematics. We developed a low cost gyroscope-based arm movement tracking system (GAMTS) that features MR-compatibility. The system consists of dual-axis analogue gyroscopes that measure rotations of upper and lower arm joints. After MR artifact reduction, the rotation angles of the individual arm joints are calculated and used to animate a realistic arm model that is implemented in the OpenSim platform. The OpenSim platform can then provide the kinematics of any point on the arm model. In order to demonstrate the capabilities of the system, we first assessed the quality of reconstructed wrist movements in a low-noise environment where typical MR-related problems are absent and finally, we validated the reconstruction in the MR environment. The system provides the kinematics of the whole arm when natural unrestricted arm movements are performed inside the MR-scanner. The GAMTS is reliably capable of reconstructing the kinematics of trajectories and the reconstruction error is small in comparison with the movement induced variation of speed, displacement, and rotation. Moreover, the system can be used to probe brain areas for their correlation with movement kinematics. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Pyrolysis Model Development for a Multilayer Floor Covering

    PubMed Central

    McKinnon, Mark B.; Stoliarov, Stanislav I.

    2015-01-01

    Comprehensive pyrolysis models that are integral to computational fire codes have improved significantly over the past decade as the demand for improved predictive capabilities has increased. High fidelity pyrolysis models may improve the design of engineered materials for better fire response, the design of the built environment, and may be used in forensic investigations of fire events. A major limitation to widespread use of comprehensive pyrolysis models is the large number of parameters required to fully define a material and the lack of effective methodologies for measurement of these parameters, especially for complex materials. The work presented here details a methodology used to characterize the pyrolysis of a low-pile carpet tile, an engineered composite material that is common in commercial and institutional occupancies. The studied material includes three distinct layers of varying composition and physical structure. The methodology utilized a comprehensive pyrolysis model (ThermaKin) to conduct inverse analyses on data collected through several experimental techniques. Each layer of the composite was individually parameterized to identify its contribution to the overall response of the composite. The set of properties measured to define the carpet composite were validated against mass loss rate curves collected at conditions outside the range of calibration conditions to demonstrate the predictive capabilities of the model. The mean error between the predicted curve and the mean experimental mass loss rate curve was calculated as approximately 20% on average for heat fluxes ranging from 30 to 70 kW·m⁻², which is within the mean experimental uncertainty. PMID:28793556

  3. Correlation of emission capability and longevity of dispenser cathodes with characteristics of tungsten powders

    NASA Astrophysics Data System (ADS)

    Melnikova, Irina P.; Vorozheikin, Victor G.; Usanov, Dmitry A.

    2003-06-01

    The intercorrelation of tungsten powder properties, such as grain size, distribution and morphology, and porous matrix parameters with the electron emission capability and longevity of Ba dispenser cathodes is investigated for three different grain morphologies. The longest tungsten cathode life was found for the iso-axis polyhedron morphology in combination with certain powder and matrix parameters.

  4. Air Force Handbook. 109th Congress

    DTIC Science & Technology

    2009-01-01

    FY06 Combat Survivor Evader Locator (CSEL): Acquisition Status, Capabilities/Profile, Functions/Performance Parameters • Air Force's primary source for... Broadcast Service (GBS): Capabilities/Profile, Acquisition Status, Functions/Performance Parameters • Purchase Requirements (Phase 2): 3 primary... Operations (AF CONOPS) that support the CSAF and joint vision of combat operations • AF CONOPS describe key Air Force mission and/or functional areas

  5. A Serum miR Signature Specific to Low-Risk Prostate Cancer

    DTIC Science & Technology

    2017-09-01

    There are several useful pre-treatment risk calculators that use clinical parameters (age, biopsy grade, PSA). These calculators accurately identify high-risk patients... aggressive disease.

  6. Parameter estimation accuracies of Galactic binaries with eLISA

    NASA Astrophysics Data System (ADS)

    Błaut, Arkadiusz

    2018-09-01

    We study the parameter estimation accuracy of nearly monochromatic sources of gravitational waves with future eLISA-like detectors. eLISA will be capable of observing millions of such signals generated by orbiting pairs of compact objects (white dwarfs, neutron stars or black holes), and of resolving and estimating the parameters of several thousand of them, providing crucial information regarding their orbital dynamics, formation rates and evolutionary paths. Using the Fisher matrix analysis we compare the accuracies of the estimated parameters for different mission designs defined by the GOAT advisory team established to assess the scientific capabilities and technological issues of the eLISA-like missions.
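    As a sketch of the Fisher-matrix machinery referred to above (simplified to white noise; the waveform derivatives and PSD value are placeholders, not an eLISA response model):

      import numpy as np

      def fisher_errors(dh, dt, Sn):
          """dh: list of arrays d h(t)/d theta_i; dt: sample spacing;
          Sn: one-sided noise PSD, assumed white here for brevity."""
          n = len(dh)
          F = np.empty((n, n))
          for i in range(n):
              for j in range(n):
                  # white-noise approximation of the noise-weighted inner product
                  F[i, j] = (2.0 / Sn) * dt * np.dot(dh[i], dh[j])
          cov = np.linalg.inv(F)         # Cramer-Rao bound on the covariance
          return np.sqrt(np.diag(cov))   # 1-sigma parameter accuracies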

  7. Improving flood forecasting capability of physically based distributed hydrological models by parameter optimization

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Li, J.; Xu, H.

    2016-01-01

    Physically based distributed hydrological models (hereafter referred to as PBDHMs) divide the terrain of the whole catchment into a number of grid cells at fine resolution and assimilate different terrain data and precipitation to different cells. They are regarded as having the potential to improve catchment hydrological process simulation and prediction capability. Early on, physically based distributed hydrological models were assumed to derive model parameters from the terrain properties directly, so that there would be no need to calibrate model parameters. Unfortunately, the uncertainties associated with this derivation are very high, which has limited their application in flood forecasting, so parameter optimization may still be necessary. There are two main purposes for this study: the first is to propose a parameter optimization method for physically based distributed hydrological models in catchment flood forecasting using the particle swarm optimization (PSO) algorithm, to test its competence and to improve its performance; the second is to explore the possibility of improving physically based distributed hydrological model capability in catchment flood forecasting by parameter optimization. In this paper, based on the scalar concept, a general framework for parameter optimization of the PBDHMs for catchment flood forecasting is first proposed that could be used for all PBDHMs. Then, with the Liuxihe model as the study model, which is a physically based distributed hydrological model proposed for catchment flood forecasting, an improved PSO algorithm is developed for the parameter optimization of the Liuxihe model in catchment flood forecasting. The improvements include adoption of the linearly decreasing inertia weight strategy to change the inertia weight and the arccosine function strategy to adjust the acceleration coefficients. This method has been tested in two catchments of different sizes in southern China, and the results show that the improved PSO algorithm can be used for Liuxihe model parameter optimization effectively and can substantially improve the model's capability in catchment flood forecasting, confirming that parameter optimization is necessary to improve the flood forecasting capability of physically based distributed hydrological models. It has also been found that the appropriate particle number and maximum evolution number of the PSO algorithm for Liuxihe model catchment flood forecasting are 20 and 30, respectively.
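    A minimal PSO sketch with a linearly decreasing inertia weight and an arccosine-based acceleration schedule is shown below; the objective function, bounds, and the exact direction of the arccosine schedule are illustrative assumptions, not the Liuxihe setup. The defaults use the swarm settings the abstract recommends (20 particles, 30 iterations).

      import numpy as np

      def pso(objective, bounds, n_particles=20, n_iter=30, w_max=0.9, w_min=0.4):
          dim = len(bounds)
          lo, hi = np.array(bounds, dtype=float).T
          x = lo + np.random.rand(n_particles, dim) * (hi - lo)
          v = np.zeros_like(x)
          pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
          gbest = pbest[pbest_f.argmin()].copy()
          for k in range(n_iter):
              w = w_max - (w_max - w_min) * k / n_iter            # linear decrease
              frac = np.arccos(-1.0 + 2.0 * k / n_iter) / np.pi   # arccos schedule (assumed form)
              c1, c2 = 2.0 * frac, 2.0 * (1.0 - frac)             # acceleration coefficients
              r1, r2 = np.random.rand(2)
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
              x = np.clip(x + v, lo, hi)
              f = np.array([objective(p) for p in x])
              better = f < pbest_f
              pbest[better], pbest_f[better] = x[better], f[better]
              gbest = pbest[pbest_f.argmin()].copy()
          return gbest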

  8. Calculating the mounting parameters for Taylor Spatial Frame correction using computed tomography.

    PubMed

    Kucukkaya, Metin; Karakoyun, Ozgur; Armagan, Raffi; Kuzgun, Unal

    2011-07-01

    The Taylor Spatial Frame uses a computer program-based six-axis deformity analysis. However, there is often a residual deformity after the initial correction, especially in deformities with a rotational component. This problem can be resolved by recalculating the parameters and inputting all new deformity and mounting parameters. However, this may necessitate repeated x-rays and delay treatment. We believe that error in the mounting parameters is the main reason for most residual deformities. To prevent these problems, we describe a new calculation technique for determining the mounting parameters that uses computed tomography. This technique is especially advantageous for deformities with a rotational component. Using this technique, exact calculation of the mounting parameters is possible and the residual deformity and number of repeated x-rays can be minimized. This new technique is an alternative method for accurately calculating the mounting parameters.

  9. Two statistics for evaluating parameter identifiability and error reduction

    USGS Publications Warehouse

    Doherty, John; Hunt, Randall J.

    2009-01-01

    Two statistics are presented that can be used to rank input parameters utilized by a model in terms of their relative identifiability based on a given or possible future calibration dataset. Identifiability is defined here as the capability of model calibration to constrain parameters used by a model. Both statistics require that the sensitivity of each model parameter be calculated for each model output for which there are actual or presumed field measurements. Singular value decomposition (SVD) of the weighted sensitivity matrix is then undertaken to quantify the relation between the parameters and observations that, in turn, allows selection of calibration solution and null spaces spanned by unit orthogonal vectors. The first statistic presented, "parameter identifiability", is quantitatively defined as the direction cosine between a parameter and its projection onto the calibration solution space. This varies between zero and one, with zero indicating complete non-identifiability and one indicating complete identifiability. The second statistic, "relative error reduction", indicates the extent to which the calibration process reduces error in estimation of a parameter from its pre-calibration level where its value must be assigned purely on the basis of prior expert knowledge. This is more sophisticated than identifiability, in that it takes greater account of the noise associated with the calibration dataset. Like identifiability, it has a maximum value of one (which can only be achieved if there is no measurement noise). Conceptually it can fall to zero, and even below zero if a calibration problem is poorly posed. An example, based on a coupled groundwater/surface-water model, is included that demonstrates the utility of the statistics. © 2009 Elsevier B.V.
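    A sketch of the first statistic follows: the direction cosine between a parameter axis and its projection onto the calibration solution space. The Jacobian, observation weights, and truncation level passed in are placeholders; this illustrates the SVD construction described above, not the authors' code.

      import numpy as np

      def identifiability(J, W_sqrt, n_solution):
          """J: (n_obs, n_par) sensitivity matrix; W_sqrt: square root of the
          observation weight matrix; n_solution: solution-space dimension."""
          _, _, Vt = np.linalg.svd(W_sqrt @ J, full_matrices=False)
          V1 = Vt[:n_solution].T   # unit vectors spanning the solution space
          # length of each parameter unit vector's projection onto that space:
          # 0 = completely non-identifiable, 1 = completely identifiable
          return np.sqrt((V1**2).sum(axis=1))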

  10. UCODE_2005 and six other computer codes for universal sensitivity analysis, calibration, and uncertainty evaluation constructed using the JUPITER API

    USGS Publications Warehouse

    Poeter, Eileen E.; Hill, Mary C.; Banta, Edward R.; Mehl, Steffen; Christensen, Steen

    2006-01-01

    This report documents the computer codes UCODE_2005 and six post-processors. Together the codes can be used with existing process models to perform sensitivity analysis, data needs assessment, calibration, prediction, and uncertainty analysis. Any process model or set of models can be used; the only requirements are that models have numerical (ASCII or text only) input and output files, that the numbers in these files have sufficient significant digits, that all required models can be run from a single batch file or script, and that simulated values are continuous functions of the parameter values. Process models can include pre-processors and post-processors as well as one or more models related to the processes of interest (physical, chemical, and so on), making UCODE_2005 extremely powerful. An estimated parameter can be a quantity that appears in the input files of the process model(s), or a quantity used in an equation that produces a value that appears in the input files. In the latter situation, the equation is user-defined. UCODE_2005 can compare observations and simulated equivalents. The simulated equivalents can be any simulated value written in the process-model output files or can be calculated from simulated values with user-defined equations. The quantities can be model results or dependent variables. For example, for ground-water models they can be heads, flows, concentrations, and so on. Prior, or direct, information on estimated parameters also can be considered. Statistics are calculated to quantify the comparison of observations and simulated equivalents, including a weighted least-squares objective function. In addition, data-exchange files are produced that facilitate graphical analysis. UCODE_2005 can be used fruitfully in model calibration through its sensitivity analysis capabilities and its ability to estimate parameter values that result in the best possible fit to the observations. Parameters are estimated using nonlinear regression: a weighted least-squares objective function is minimized with respect to the parameter values using a modified Gauss-Newton method or a double-dogleg technique. Sensitivities needed for the method can be read from files produced by process models that can calculate sensitivities, such as MODFLOW-2000, or can be calculated by UCODE_2005 using a more general, but less accurate, forward- or central-difference perturbation technique. Problems resulting from inaccurate sensitivities and solutions related to the perturbation techniques are discussed in the report. Statistics are calculated and printed for use in (1) diagnosing inadequate data and identifying parameters that probably cannot be estimated; (2) evaluating estimated parameter values; and (3) evaluating how well the model represents the simulated processes. Results from UCODE_2005 and codes RESIDUAL_ANALYSIS and RESIDUAL_ANALYSIS_ADV can be used to evaluate how accurately the model represents the processes it simulates. Results from LINEAR_UNCERTAINTY can be used to quantify the uncertainty of model simulated values if the model is sufficiently linear. Results from MODEL_LINEARITY and MODEL_LINEARITY_ADV can be used to evaluate model linearity and, thereby, the accuracy of the LINEAR_UNCERTAINTY results. UCODE_2005 can also be used to calculate nonlinear confidence and prediction intervals, which quantify the uncertainty of model simulated values when the model is not linear.
CORFAC_PLUS can be used to produce factors that allow intervals to account for model intrinsic nonlinearity and small-scale variations in system characteristics that are not explicitly accounted for in the model or the observation weighting. The six post-processing programs are independent of UCODE_2005 and can use the results of other programs that produce the required data-exchange files. UCODE_2005 and the other six codes are intended for use on any computer operating system. The programs con
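    A sketch of the perturbation sensitivities that UCODE-style tools fall back on when a process model cannot supply analytic sensitivities is given below; the step size and the model function are illustrative assumptions.

      import numpy as np

      def central_difference_jacobian(model, params, rel_step=0.01):
          """model: callable mapping a parameter vector to simulated values."""
          params = np.asarray(params, dtype=float)
          base = np.asarray(model(params))
          J = np.empty((base.size, params.size))
          for i, p in enumerate(params):
              h = rel_step * max(abs(p), 1e-12)   # guard against zero parameters
              up, dn = params.copy(), params.copy()
              up[i] += h
              dn[i] -= h
              # central difference: second-order accurate, two model runs per parameter
              J[:, i] = (np.asarray(model(up)) - np.asarray(model(dn))) / (2.0 * h)
          return J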

  11. EMPIRE: Nuclear Reaction Model Code System for Data Evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herman, M.; Capote, R.; Carlson, B.V.

    EMPIRE is a modular system of nuclear reaction codes, comprising various nuclear models, and designed for calculations over a broad range of energies and incident particles. A projectile can be a neutron, proton, any ion (including heavy-ions) or a photon. The energy range extends from the beginning of the unresolved resonance region for neutron-induced reactions (~ keV) and goes up to several hundred MeV for heavy-ion induced reactions. The code accounts for the major nuclear reaction mechanisms, including direct, pre-equilibrium and compound nucleus ones. Direct reactions are described by a generalized optical model (ECIS03) or by the simplified coupled-channels approach (CCFUS). The pre-equilibrium mechanism can be treated by a deformation dependent multi-step direct (ORION + TRISTAN) model, by a NVWY multi-step compound one or by either a pre-equilibrium exciton model with cluster emission (PCROSS) or by another with full angular momentum coupling (DEGAS). Finally, the compound nucleus decay is described by the full featured Hauser-Feshbach model with γ-cascade and width-fluctuations. Advanced treatment of the fission channel takes into account transmission through a multiple-humped fission barrier with absorption in the wells. The fission probability is derived in the WKB approximation within the optical model of fission. Several options for nuclear level densities include the EMPIRE-specific approach, which accounts for the effects of the dynamic deformation of a fast rotating nucleus, the classical Gilbert-Cameron approach and pre-calculated tables obtained with a microscopic model based on HFB single-particle level schemes with collective enhancement. A comprehensive library of input parameters covers nuclear masses, optical model parameters, ground state deformations, discrete levels and decay schemes, level densities, fission barriers, moments of inertia and γ-ray strength functions. The results can be converted into ENDF-6 formatted files using the accompanying code EMPEND and completed with neutron resonances extracted from the existing evaluations. The package contains the full EXFOR (CSISRS) library of experimental reaction data that are automatically retrieved during the calculations. Publication quality graphs can be obtained using the powerful and flexible plotting package ZVView. The graphic user interface, written in Tcl/Tk, provides for easy operation of the system. This paper describes the capabilities of the code, outlines physical models and indicates parameter libraries used by EMPIRE to predict reaction cross sections and spectra, mainly for nucleon-induced reactions. Selected applications of EMPIRE are discussed, the most important being an extensive use of the code in evaluations of neutron reactions for the new US library ENDF/B-VII.0. Future extensions of the system are outlined, including neutron resonance module as well as capabilities of generating covariances, using both KALMAN and Monte-Carlo methods, that are still being advanced and refined.

  12. High-pressure, high-temperature Raman spectroscopy of Ca2GeO4 (olivine form): some insights on anharmonicity

    NASA Astrophysics Data System (ADS)

    Gillet, Philippe; Guyot, Francois; Malezieux, Jean-Marie

    1989-12-01

    High pressure (up to 2.7 GPa) and high temperature (up to 1000 K) Raman spectra of Ca2GeO4 (olivine form) have been recorded. Measurements of the pressure- and temperature-induced frequency shifts of 14 modes have been performed. The classical mode Grüneisen parameter and a corresponding parameter related to temperature variation are calculated. For the high frequency modes (GeO stretching) we calculate these parameters with local tetrahedral elastic parameters. From these parameters anharmonic parameters are calculated for each Raman active mode. The effect of anharmonicity on the specific heat is calculated and compared with calorimetric data. Taking anharmonicity into account leads to a departure from the Dulong and Petit limit of the order of 2% at 1000 K and more than 6% at 2000 K, in good accord with experimental data. We propose that, eventually, such effects might be significant in the calculations of thermodynamic properties of mantle silicates like forsterite and its polymorphs.
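    For reference, the quantities named above are commonly related as follows (standard notation, not reproduced from the paper): the mode Grüneisen parameter and the intrinsic anharmonic parameter of mode i are

      \gamma_i = -\left(\frac{\partial \ln\nu_i}{\partial \ln V}\right)_T, \qquad
      a_i = \left(\frac{\partial \ln\nu_i}{\partial T}\right)_V
          = \left(\frac{\partial \ln\nu_i}{\partial T}\right)_P + \alpha\,\gamma_i,

    where \alpha is the volume thermal expansivity. The parameter a_i combines the measured isobaric temperature shift with the pressure shift through \gamma_i, and a nonzero a_i is what produces the departure from the Dulong-Petit limit quoted above.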

  13. EPR, optical and superposition model study of Mn2+ doped L+ glutamic acid

    NASA Astrophysics Data System (ADS)

    Kripal, Ram; Singh, Manju

    2015-12-01

    An electron paramagnetic resonance (EPR) study of a Mn2+ doped L+ glutamic acid single crystal is performed at room temperature. Four interstitial sites are observed, and the spin Hamiltonian parameters are calculated with the help of a large number of resonant lines for various angular positions of the external magnetic field. An optical absorption study is also performed at room temperature. The energy values for different orbital levels are calculated, and the observed bands are assigned as transitions from the 6A1g(s) ground state to various excited states. With the help of these assigned bands, the Racah inter-electronic repulsion parameters B = 869 cm-1, C = 2080 cm-1 and the cubic crystal field splitting parameter Dq = 730 cm-1 are calculated. Zero field splitting (ZFS) parameters D and E are calculated by the perturbation formulae and crystal field parameters obtained using the superposition model. The calculated values of the ZFS parameters are in good agreement with the experimental values obtained by EPR.

  14. Global volcanic aerosol properties derived from emissions, 1990-2014, using CESM1(WACCM)

    NASA Astrophysics Data System (ADS)

    Mills, Michael J.; Schmidt, Anja; Easter, Richard; Solomon, Susan; Kinnison, Douglas E.; Ghan, Steven J.; Neely, Ryan R.; Marsh, Daniel R.; Conley, Andrew; Bardeen, Charles G.; Gettelman, Andrew

    2016-03-01

    Accurate representation of global stratospheric aerosols from volcanic and nonvolcanic sulfur emissions is key to understanding the cooling effects and ozone losses that may be linked to volcanic activity. Attribution of climate variability to volcanic activity is of particular interest in relation to the post-2000 slowing in the rate of global average temperature increases. We have compiled a database of volcanic SO2 emissions and plume altitudes for eruptions from 1990 to 2014 and developed a new prognostic capability for simulating stratospheric sulfate aerosols in the Community Earth System Model. We used these combined with other nonvolcanic emissions of sulfur sources to reconstruct global aerosol properties from 1990 to 2014. Our calculations show remarkable agreement with ground-based lidar observations of stratospheric aerosol optical depth (SAOD) and with in situ measurements of stratospheric aerosol surface area density (SAD). These properties are key parameters in calculating the radiative and chemical effects of stratospheric aerosols. Our SAOD calculations represent a clear improvement over available satellite-based analyses, which generally ignore aerosol extinction below 15 km, a region that can contain the vast majority of stratospheric aerosol extinction at middle and high latitudes. Our SAD calculations greatly improve on that provided for the Chemistry-Climate Model Initiative, which misses about 60% of the SAD measured in situ on average during both volcanically active and volcanically quiescent periods.

  15. Hansen solubility parameters for polyethylene glycols by inverse gas chromatography.

    PubMed

    Adamska, Katarzyna; Voelkel, Adam

    2006-11-03

    Inverse gas chromatography (IGC) has been applied to determine the solubility parameter and its components for nonionic surfactants: polyethylene glycols (PEG) of different molecular weights. The Flory-Huggins interaction parameter (chi) and solubility parameter (delta(2)) were calculated according to the DiPaola-Baranyi and Guillet method from experimentally collected retention data for a series of carefully selected test solutes. Hansen's three-dimensional solubility parameter concept was applied to determine the components (delta(d), delta(p), delta(h)) of the corrected solubility parameter (delta(T)). Both the molecular weight and the measurement temperature influence the solubility parameter values, whether estimated from the slope, the intercept, or the total solubility parameter. The solubility parameters calculated from the intercept are lower than those calculated from the slope. Temperature and structural dependences of the entropic factor (chi(S)) are presented and discussed.
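    For reference, the Hansen decomposition referred to above writes the total (corrected) solubility parameter in terms of its dispersive, polar and hydrogen-bonding components; this is the standard relation, not a result of this paper:

      \delta_T^2 = \delta_d^2 + \delta_p^2 + \delta_h^2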

  16. Unleashing Empirical Equations with "Nonlinear Fitting" and "GUM Tree Calculator"

    NASA Astrophysics Data System (ADS)

    Lovell-Smith, J. W.; Saunders, P.; Feistel, R.

    2017-10-01

    Empirical equations having large numbers of fitted parameters, such as the international standard reference equations published by the International Association for the Properties of Water and Steam (IAPWS), which form the basis of the "Thermodynamic Equation of Seawater—2010" (TEOS-10), provide the means to calculate many quantities very accurately. The parameters of these equations are found by least-squares fitting to large bodies of measurement data. However, the usefulness of these equations is limited since uncertainties are not readily available for most of the quantities able to be calculated, the covariance of the measurement data is not considered, and further propagation of the uncertainty in the calculated result is restricted since the covariance of calculated quantities is unknown. In this paper, we present two tools developed at MSL that are particularly useful in unleashing the full power of such empirical equations. "Nonlinear Fitting" enables propagation of the covariance of the measurement data into the parameters using generalized least-squares methods. The parameter covariance then may be published along with the equations. Then, when using these large, complex equations, "GUM Tree Calculator" enables the simultaneous calculation of any derived quantity and its uncertainty, by automatic propagation of the parameter covariance into the calculated quantity. We demonstrate these tools in exploratory work to determine and propagate uncertainties associated with the IAPWS-95 parameters.
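    A minimal sketch of the principle follows, assuming a generic fitted equation and its parameter covariance; it illustrates first-order (GUM-style) propagation and is not the GUM Tree Calculator's actual interface.

      import numpy as np

      def propagate(f, theta, cov_theta, eps=1e-7):
          """f: scalar function of the parameter vector; theta: fitted values;
          cov_theta: parameter covariance from generalized least squares."""
          theta = np.asarray(theta, dtype=float)
          g = np.empty_like(theta)
          for i in range(theta.size):               # numerical gradient of f
              d = np.zeros_like(theta)
              d[i] = eps * max(abs(theta[i]), 1.0)
              g[i] = (f(theta + d) - f(theta - d)) / (2 * d[i])
          var = g @ cov_theta @ g                   # u^2(y) = g^T V g
          return f(theta), np.sqrt(var)

    Publishing cov_theta alongside the fitted equation is what allows the uncertainty of any derived quantity to be computed downstream, which is the workflow the paper describes.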

  17. Experimental analysis and simulation calculation of the inductances of loosely coupled transformer

    NASA Astrophysics Data System (ADS)

    Kerui, Chen; Yang, Han; Yan, Zhang; Nannan, Gao; Ying, Pei; Hongbo, Li; Pei, Li; Liangfeng, Guo

    2017-11-01

    An iron-core wireless power transmission system is designed, and an experimental model of a loosely coupled transformer is built. The inductance parameters are measured with a 15 mm air gap on each side of the transformer. The feasibility of using the finite element method to calculate the coil inductance parameters of the loosely coupled transformer is analyzed. The system is modeled in ANSYS, the magnetic field is calculated by the finite element method, and the inductance parameters are extracted. The finite-element calculation of the inductance parameters of the loosely coupled transformer establishes a basis for accurate compensation-capacitance design in wireless power transmission systems.
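    As an illustration of why the inductance values matter here (the operating frequency and a series-compensation topology are assumed, not stated in the abstract): for series resonant compensation at frequency f, the compensation capacitance follows directly from the inductance,

      C = \frac{1}{(2\pi f)^2 L},

    so any error in the calculated L propagates directly into the detuning of the wireless power transfer link.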

  18. Bending of an Infinite beam on a base with two parameters in the absence of a part of the base

    NASA Astrophysics Data System (ADS)

    Aleksandrovskiy, Maxim; Zaharova, Lidiya

    2018-03-01

    Currently, in connection with the rapid development of high-rise construction and the improvement of models of the joint operation of high-rise structures and their bases, questions connected with the choice of calculation method are becoming topical. The rigor of analytical methods allows a more detailed and accurate characterization of structural behavior, which improves the reliability of structures and can reduce their cost. In this article, a model with two parameters is used as the computational model of the base; it can effectively account for the distributive properties of the base by varying the coefficient reflecting the shear parameter. The paper constructs an effective analytical solution for a beam of infinite length interacting with a two-parameter base containing a void. Using Fourier integral transforms, the original differential equation is reduced to a Fredholm integral equation of the second kind with a degenerate kernel, and all the integrals are evaluated analytically and in closed form, which increases the accuracy of the computations in comparison with approximate methods. The paper considers the problem of a beam loaded with a concentrated force applied at the origin, with a fixed length of the void section. An analysis of the results obtained is given for various values of the coefficient accounting for ground cohesion.
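    For context, the governing equation of an infinite Euler-Bernoulli beam on a two-parameter (Pasternak-type) base can be written in standard notation as

      EI\,\frac{d^4 w}{dx^4} - k_2\,\frac{d^2 w}{dx^2} + k_1\,w = q(x),

    where k_1 is the subgrade-reaction coefficient and k_2 the shear (distributive) parameter; over the section where the base is absent both coefficients vanish. This is the standard model class, stated here as an assumption about the formulation rather than an equation quoted from the paper.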

  19. Performance and blood monitoring in sports: the artificial intelligence evoking target testing in antidoping (AR.I.E.T.T.A.) project.

    PubMed

    Manfredini, A F; Malagoni, A M; Litmanen, H; Zhukovskaja, L; Jeannier, P; Dal Follo, D; Felisatti, M; Besseberg, A; Geistlinger, M; Bayer, P; Carrabre, J E

    2011-03-01

    Substances and methods used to increase blood oxygen transport and physical performance can be detected in the blood, but the screening of the athletes to be tested remains a critical issue for the International Federations. This project, AR.I.E.T.T.A., aimed to develop software capable of analysing athletes' hematological and performance profiles to detect abnormal patterns. One-hundred eighty athletes belonging to the International Biathlon Union gave written informed consent to have their hematological data, previously collected according to anti-doping rules, used to develop the AR.I.E.T.T.A. software. The software was developed with the following sections: 1) log-in; 2) data-entry: where data are loaded, stored and grouped; 3) analysis: where data are analysed, validated scores are calculated, and parameters are simultaneously displayed as statistics, tables and graphs, and individual or subpopulation profiles; 4) screening: where an immediate evaluation of the risk score of the present sample and/or the athlete under study is obtained. The sample risk score, or AR.I.E.T.T.A. score, is calculated by a simple computational system combining different parameters (absolute values and intra-individual variations) considered concurrently. The AR.I.E.T.T.A. score is obtained as the sum of the deviation units derived from each parameter, considering the shift of the present value from the reference values, based on the number of standard deviations. AR.I.E.T.T.A. enables a quick evaluation of blood results, assisting surveillance programs and allowing the International Federations to perform timely target-testing controls on athletes. Future studies aimed at validating the AR.I.E.T.T.A. score and improving its diagnostic accuracy will further improve the system.
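    A sketch of a deviation-unit score of the kind described, where each parameter contributes its shift from the reference mean in units of reference standard deviations; the parameter names, reference values, and any flagging threshold are invented for illustration.

      import numpy as np

      def arietta_like_score(values, ref_mean, ref_sd):
          z = np.abs((np.asarray(values) - np.asarray(ref_mean))
                     / np.asarray(ref_sd))
          return z.sum()    # higher totals flag a sample for target testing

      # e.g. score = arietta_like_score([hb_g_dl, retic_pct], [14.5, 1.0], [1.0, 0.4])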

  20. Fully Polarimetric Passive W-band Millimeter Wave Imager for Wide Area Search

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tedeschi, Jonathan R.; Bernacki, Bruce E.; Sheen, David M.

    2013-09-27

    We describe the design and phenomenology imaging results of a fully polarimetric W-band millimeter wave (MMW) radiometer developed by Pacific Northwest National Laboratory for wide-area search. Operating from 92-94 GHz, the W-band radiometer employs a Dicke switching heterodyne design isolating the horizontal and vertical mm-wave components with 40 dB of polarization isolation. Design results are presented for both infinite conjugate off-axis parabolic and finite conjugate off-axis elliptical fore-optics using optical ray tracing and diffraction calculations. The received linear polarizations are down-converted to a microwave frequency band and recombined in a phase-shifting network to produce all six orthogonal polarization states of light simultaneously, which are used to calculate the Stokes parameters for display and analysis. The resulting system performance produces a heterodyne receiver noise equivalent delta temperature (NEDT) of less than 150 mK. The radiometer provides novel imaging capability by producing all four of the Stokes parameters of light, which are used to create imagery based on the polarization states associated with unique scattering geometries and their interaction with the downwelling MMW energy. The polarization states can be exploited in such a way that man-made objects can be located and highlighted in a cluttered scene using methods such as image comparison, color encoding of Stokes parameters, multivariate image analysis, and image fusion with visible and infrared imagery. We also present initial results using a differential imaging approach used to highlight polarization features and reduce common-mode noise. Persistent monitoring of a scene using the polarimetric passive mm-wave technique shows great promise for anomaly detection caused by human activity.
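    For reference, the Stokes parameters follow from the six orthogonal polarization intensities as below; the variable names are generic, not the instrument's data interface.

      def stokes(I_h, I_v, I_p45, I_m45, I_rcp, I_lcp):
          S0 = I_h + I_v         # total intensity
          S1 = I_h - I_v         # horizontal vs. vertical linear
          S2 = I_p45 - I_m45     # +45 deg vs. -45 deg linear
          S3 = I_rcp - I_lcp     # right vs. left circular
          return S0, S1, S2, S3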

  1. Broadband Ground Motion Synthesis of the 1999 Turkey Earthquakes Based On: 3-D Velocity Inversion, Finite Difference Calculations and Empirical Green's Functions

    NASA Astrophysics Data System (ADS)

    Gok, R.; Kalafat, D.; Hutchings, L.

    2003-12-01

    We analyze over 3,500 aftershocks recorded by several seismic networks during the 1999 Marmara, Turkey earthquakes. The analysis provides source parameters of the aftershocks, a three-dimensional velocity structure from tomographic inversion, an input three-dimensional velocity model for a finite difference wave propagation code (E3D, Larsen 1998), and records available for use as empirical Green's functions. Ultimately our goal is to model the 1999 earthquakes from DC to 25 Hz and study fault rupture mechanics and kinematic rupture models. We performed the simultaneous inversion for hypocenter locations and three-dimensional P- and S-wave velocity structure of the Marmara Region using SIMULPS14, along with 2,500 events with more than eight P-readings and an azimuthal gap of less than 180°. The resolution of the calculated velocity structure is better in the eastern Marmara than the western Marmara region due to the denser ray coverage. We used the obtained velocity structure as input into the finite difference algorithm and validated the model by using M < 4 earthquakes as point sources and matching long-period waveforms (f < 0.5 Hz). We also obtained Mo, fc and individual station kappa values for over 500 events by performing a simultaneous inversion to fit these parameters with a Brune source model. We used the results of the source inversion to deconvolve out a Brune model from small to moderate size earthquakes (M < 4.0) to obtain empirical Green's functions (EGFs) for the higher frequency range of ground motion synthesis (0.5 < f < 25 Hz). We additionally obtained the source scaling relation (energy-moment) of these aftershocks. We have generated several scenarios constrained by a priori knowledge of the Izmit and Duzce rupture parameters to validate our prediction capability.

  2. Principles of Quantitative MR Imaging with Illustrated Review of Applicable Modular Pulse Diagrams.

    PubMed

    Mills, Andrew F; Sakai, Osamu; Anderson, Stephan W; Jara, Hernan

    2017-01-01

    Continued improvements in diagnostic accuracy using magnetic resonance (MR) imaging will require development of methods for tissue analysis that complement traditional qualitative MR imaging studies. Quantitative MR imaging is based on measurement and interpretation of tissue-specific parameters independent of experimental design, compared with qualitative MR imaging, which relies on interpretation of tissue contrast that results from experimental pulse sequence parameters. Quantitative MR imaging represents a natural next step in the evolution of MR imaging practice, since quantitative MR imaging data can be acquired using currently available qualitative imaging pulse sequences without modifications to imaging equipment. The article presents a review of the basic physical concepts used in MR imaging and how quantitative MR imaging is distinct from qualitative MR imaging. Subsequently, the article reviews the hierarchical organization of the major applicable pulse sequences, organized into conventional, hybrid, and multispectral sequences capable of calculating the main tissue parameters of T1, T2, and proton density. While this new concept offers the potential for improved diagnostic accuracy and workflow, awareness of this extension to qualitative imaging is generally low. This article reviews the basic physical concepts in MR imaging, describes commonly measured tissue parameters in quantitative MR imaging, and presents the major available pulse sequences used for quantitative MR imaging, with a focus on the hierarchical organization of these sequences. © RSNA, 2017.

  3. A computational framework for automation of point defect calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goyal, Anuj; Gorai, Prashun; Peng, Haowei

    We have developed a complete and rigorously validated open-source Python framework to automate point defect calculations using density functional theory. Furthermore, the framework provides an effective and efficient method for defect structure generation, and creation of simple yet customizable workflows to analyze defect calculations. This package provides the capability to compute widely-accepted correction schemes to overcome finite-size effects, including (1) potential alignment, (2) image-charge correction, and (3) band filling correction to shallow defects. Using Si, ZnO and In2O3 as test examples, we demonstrate the package capabilities and validate the methodology.

  4. A computational framework for automation of point defect calculations

    DOE PAGES

    Goyal, Anuj; Gorai, Prashun; Peng, Haowei; ...

    2017-01-13

    We have developed a complete and rigorously validated open-source Python framework to automate point defect calculations using density functional theory. Furthermore, the framework provides an effective and efficient method for defect structure generation, and creation of simple yet customizable workflows to analyze defect calculations. This package provides the capability to compute widely-accepted correction schemes to overcome finite-size effects, including (1) potential alignment, (2) image-charge correction, and (3) band filling correction to shallow defects. Using Si, ZnO and In2O3 as test examples, we demonstrate the package capabilities and validate the methodology.
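
    For orientation, the central quantity such a framework automates is the defect formation energy, with the three corrections above entering as additive terms; the sketch below uses hypothetical variable names, not the package's actual API:

      def formation_energy(e_defect, e_bulk, added_atoms, chem_pots, charge,
                           e_fermi, e_vbm, pot_align, e_image, e_bandfill):
          """E_f = E_D - E_bulk - sum_i(n_i * mu_i) + q*(E_F + E_VBM + dV)
                   + E_image + E_bandfill  (all energies in eV)."""
          e_f = e_defect - e_bulk
          for species, n in added_atoms.items():          # n > 0 added, n < 0 removed
              e_f -= n * chem_pots[species]
          e_f += charge * (e_fermi + e_vbm + pot_align)   # correction (1)
          return e_f + e_image + e_bandfill               # corrections (2) and (3)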

  5. MODFLOW-2000 : the U.S. Geological Survey modular ground-water model--documentation of the Advective-Transport Observation (ADV2) Package

    USGS Publications Warehouse

    Anderman, Evan R.; Hill, Mary Catherine

    2001-01-01

    Observations of the advective component of contaminant transport in steady-state flow fields can provide important information for the calibration of ground-water flow models. This report documents the Advective-Transport Observation (ADV2) Package, version 2, which allows advective-transport observations to be used in the three-dimensional ground-water flow parameter-estimation model MODFLOW-2000. The ADV2 Package is compatible with some of the features in the Layer-Property Flow and Hydrogeologic-Unit Flow Packages, but is not compatible with the Block-Centered Flow or Generalized Finite-Difference Packages. The particle-tracking routine used in the ADV2 Package duplicates the semi-analytical method of MODPATH, as shown in a sample problem. Particles can be tracked in a forward or backward direction, and effects such as retardation can be simulated through manipulation of the effective-porosity value used to calculate velocity. Particles can be discharged at cells that are considered to be weak sinks, in which the sink applied does not capture all the water flowing into the cell, using one of two criteria: (1) if there is any outflow to a boundary condition such as a well or surface-water feature, or (2) if the outflow exceeds a user specified fraction of the cell budget. Although effective porosity could be included as a parameter in the regression, this capability is not included in this package. The weighted sum-of-squares objective function, which is minimized in the Parameter-Estimation Process, was augmented to include the square of the weighted x-, y-, and z-components of the differences between the simulated and observed advective-front locations at defined times, thereby including the direction of travel as well as the overall travel distance in the calibration process. The sensitivities of the particle movement to the parameters needed to minimize the objective function are calculated for any particle location using the exact sensitivity-equation approach; the equations are derived by taking the partial derivatives of the semi-analytical particle-tracking equation with respect to the parameters. The ADV2 Package is verified by showing that parameter estimation using advective-transport observations produces the true parameter values in a small but complicated test case when exact observations are used. To demonstrate how the ADV2 Package can be used in practice, a field application is presented. In this application, the ADV2 Package is used first in the Sensitivity-Analysis mode of MODFLOW-2000 to calculate measures of the importance of advective-transport observations relative to head-dependent flow observations when either or both are used in conjunction with hydraulic-head observations in a simulation of the sewage-discharge plume at Cape Cod, Massachusetts. The ADV2 Package is then used in the Parameter-Estimation mode of MODFLOW-2000 to determine best-fit parameter values. It is concluded that, for this problem, advective-transport observations improved the calibration of the model and the estimation of ground-water flow parameters, and the use of formal parameter-estimation methods and related techniques produced significant insight into the physical system.
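
    In outline, the augmented objective function described above combines weighted head residuals with the weighted x-, y-, and z-components of the advective-front misfit; a schematic sketch (our notation, not MODFLOW source code):

      import numpy as np

      def objective(head_obs, head_sim, w_head, front_obs, front_sim, w_front):
          """front_obs, front_sim: (n_times, 3) observed and simulated particle
          positions; weights are typically inverse squared observation errors."""
          s = np.sum(w_head * (head_obs - head_sim) ** 2)       # head terms
          s += np.sum(w_front * (front_obs - front_sim) ** 2)   # x-, y-, z-components
          return s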

  6. Interpretation of Source Parameters from Total Gradient of Gravity and Magnetic Anomalies Caused by Thin Dyke using Nonlinear Global Optimization Technique

    NASA Astrophysics Data System (ADS)

    Biswas, A.

    2016-12-01

    An efficient approach to estimating model parameters from the total gradient of gravity and magnetic data based on Very Fast Simulated Annealing (VFSA) is presented. This is the first application of VFSA to the interpretation of the total gradient of potential field data, with a new formulation for anomalies caused by isolated causative sources embedded in the subsurface. The model parameters interpreted here are the amplitude coefficient (k), the exact origin of the causative source (x0), the depth (z0), and the shape factor (q). The outcome of the VFSA optimization demonstrates that it can determine all the model parameters very well when the shape factor is fixed. The model parameters estimated by the present method, in particular the shape and depth of the buried structures, were found to be in excellent agreement with the true parameters. The technique is also capable of avoiding highly noisy data points and improves the interpretation results. Histogram and cross-plot analyses likewise suggest that the interpretation lies within the estimated uncertainty. Inversion of noise-free and noisy synthetic data for single structures, as well as of field data, demonstrates the viability of the approach. The procedure has been carefully and effectively applied to real field cases (the Leona anomaly, Senegal, for gravity and the Pima copper deposit, USA, for magnetics) in the presence of mineral bodies. The present method is therefore highly applicable to mineral exploration for ore bodies of dyke-like structure emplaced in the shallow and deeper subsurface. The computation time for the entire procedure is short.
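
    The generic VFSA loop underlying such an inversion can be sketched as follows (an illustrative implementation of the standard algorithm; the misfit function, parameter bounds, and cooling constants are problem-specific assumptions):

      import numpy as np

      def vfsa(misfit, lo, hi, n_iter=2000, t0=1.0, c=1.0, seed=0):
          rng = np.random.default_rng(seed)
          lo, hi = np.asarray(lo, float), np.asarray(hi, float)
          d = lo.size
          x = rng.uniform(lo, hi)                    # random starting model
          e = misfit(x)
          best_x, best_e = x.copy(), e
          for k in range(1, n_iter + 1):
              t = t0 * np.exp(-c * k ** (1.0 / d))   # VFSA cooling schedule
              u = rng.uniform(size=d)                # VFSA generating function
              y = np.sign(u - 0.5) * t * ((1.0 + 1.0 / t) ** np.abs(2 * u - 1) - 1.0)
              x_new = np.clip(x + y * (hi - lo), lo, hi)
              e_new = misfit(x_new)
              if e_new < e or rng.uniform() < np.exp(-(e_new - e) / t):
                  x, e = x_new, e_new                # Metropolis acceptance
                  if e < best_e:
                      best_x, best_e = x.copy(), e
          return best_x, best_e

    For the thin-dyke problem, misfit would compare the observed total gradient with the forward response computed from the candidate (k, x0, z0, q).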

  7. A retrospective analysis for patient-specific quality assurance of volumetric-modulated arc therapy plans.

    PubMed

    Li, Guangjun; Wu, Kui; Peng, Guang; Zhang, Yingjie; Bai, Sen

    2014-01-01

    Volumetric-modulated arc therapy (VMAT) is now widely used clinically, as it is capable of delivering a highly conformal dose distribution in a short time interval. We retrospectively analyzed patient-specific quality assurance (QA) of VMAT and examined the relationships between the planning parameters and the QA results. A total of 118 clinical VMAT cases underwent pretreatment QA. All plans had 3-dimensional diode array measurements, and 69 also had ion chamber measurements. Dose distribution and isocenter point dose were evaluated by comparing the measurements and the treatment planning system (TPS) calculations. In addition, the relationship between QA results and several planning parameters, such as dose level, control points (CPs), monitor units (MUs), average field width, and average leaf travel, were also analyzed. For delivered dose distribution, a gamma analysis passing rate greater than 90% was obtained for all plans and greater than 95% for 100 of 118 plans with the 3%/3-mm criteria. The difference (mean ± standard deviation) between the point doses measured by the ion chamber and those calculated by TPS was 0.9% ± 2.0% for all plans. For all cancer sites, nasopharyngeal carcinoma and gastric cancer have the lowest and highest average passing rates, respectively. From multivariate linear regression analysis, the dose level (p = 0.001) and the average leaf travel (p < 0.001) showed negative correlations with the passing rate, and the average field width (p = 0.003) showed a positive correlation with the passing rate, all indicating a correlation between the passing rate and the plan complexity. No statistically significant correlation was found between MU or CP and the passing rate. Analysis of the results of dosimetric pretreatment measurements as a function of VMAT plan parameters can provide important information to guide the plan parameter setting and optimization in TPS. Copyright © 2014 American Association of Medical Dosimetrists. Published by Elsevier Inc. All rights reserved.
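
    For reference, the 3%/3-mm gamma passing rate quoted above can be computed by a brute-force global gamma analysis of the following kind (an illustrative 2D sketch of ours, adequate for small grids; clinical tools use accelerated searches):

      import numpy as np

      def gamma_pass_rate(dose_eval, dose_ref, spacing_mm, dd=0.03, dta_mm=3.0):
          ny, nx = dose_ref.shape
          yy, xx = np.mgrid[0:ny, 0:nx] * spacing_mm
          d_tol = dd * dose_ref.max()                # global dose criterion
          gammas = np.empty_like(dose_ref, dtype=float)
          for i in range(ny):
              for j in range(nx):
                  dist2 = (yy - yy[i, j]) ** 2 + (xx - xx[i, j]) ** 2
                  ddiff2 = (dose_eval - dose_ref[i, j]) ** 2
                  gammas[i, j] = np.sqrt(dist2 / dta_mm ** 2
                                         + ddiff2 / d_tol ** 2).min()
          return 100.0 * np.mean(gammas <= 1.0)      # percent of points passing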

  8. A retrospective analysis for patient-specific quality assurance of volumetric-modulated arc therapy plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Guangjun; Wu, Kui; Peng, Guang

    2014-01-01

    Volumetric-modulated arc therapy (VMAT) is now widely used clinically, as it is capable of delivering a highly conformal dose distribution in a short time interval. We retrospectively analyzed patient-specific quality assurance (QA) of VMAT and examined the relationships between the planning parameters and the QA results. A total of 118 clinical VMAT cases underwent pretreatment QA. All plans had 3-dimensional diode array measurements, and 69 also had ion chamber measurements. Dose distribution and isocenter point dose were evaluated by comparing the measurements and the treatment planning system (TPS) calculations. In addition, the relationship between QA results and several planning parameters, such as dose level, control points (CPs), monitor units (MUs), average field width, and average leaf travel, were also analyzed. For delivered dose distribution, a gamma analysis passing rate greater than 90% was obtained for all plans and greater than 95% for 100 of 118 plans with the 3%/3-mm criteria. The difference (mean ± standard deviation) between the point doses measured by the ion chamber and those calculated by TPS was 0.9% ± 2.0% for all plans. For all cancer sites, nasopharyngeal carcinoma and gastric cancer have the lowest and highest average passing rates, respectively. From multivariate linear regression analysis, the dose level (p = 0.001) and the average leaf travel (p < 0.001) showed negative correlations with the passing rate, and the average field width (p = 0.003) showed a positive correlation with the passing rate, all indicating a correlation between the passing rate and the plan complexity. No statistically significant correlation was found between MU or CP and the passing rate. Analysis of the results of dosimetric pretreatment measurements as a function of VMAT plan parameters can provide important information to guide the plan parameter setting and optimization in TPS.

  9. Strategies and Approaches to TPS Design

    NASA Technical Reports Server (NTRS)

    Kolodziej, Paul

    2005-01-01

    Thermal protection systems (TPS) insulate planetary probes and Earth re-entry vehicles from the aerothermal heating experienced during hypersonic deceleration to the planet's surface. The systems are typically designed with some additional capability to compensate for both variations in the TPS material and uncertainties in the heating environment. This additional capability, or robustness, also provides a surge capability for operating under abnormally severe conditions for a short period of time, and for unexpected events, such as meteoroid impact damage, that would detract from the nominal performance. Strategies and approaches to developing robust designs must also minimize mass because an extra kilogram of TPS displaces one kilogram of payload. Because aircraft structures must be optimized for minimum mass, reliability-based design approaches for mechanical components exist that minimize mass. Adapting these existing approaches to TPS component design takes advantage of the extensive work, knowledge, and experience from nearly fifty years of reliability-based design of mechanical components. A Non-Dimensional Load Interference (NDLI) method for calculating the thermal reliability of TPS components is presented in this lecture and applied to several examples. A sensitivity analysis from an existing numerical simulation of a carbon phenolic TPS provides insight into the effects of the various design parameters, and is used to demonstrate how sensitivity analysis may be used with NDLI to develop reliability-based designs of TPS components.
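
    The core of any load-interference calculation is the probability that capability exceeds load; for independent normal distributions this reduces to a single closed form, sketched below (a textbook illustration, not the NDLI formulation itself, which is non-dimensional):

      from math import sqrt
      from scipy.stats import norm

      def reliability(mu_load, sd_load, mu_cap, sd_cap):
          """P(capability > load) for independent normal load and capability."""
          z = (mu_cap - mu_load) / sqrt(sd_cap ** 2 + sd_load ** 2)
          return norm.cdf(z)

      # e.g. reliability(1200.0, 150.0, 1800.0, 200.0) -> ~0.992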

  10. Assessment of Satellite Capabilities to Detect Impacts of Oil and Natural Gas Activity by Analysis of SONGNEX 2015 Aircraft Measurements

    NASA Astrophysics Data System (ADS)

    Thayer, M. P.; Keutsch, F. N.; Wolfe, G.; St Clair, J. M.; Hanisco, T. F.; Aikin, K. C.; Brown, S. S.; Dubé, W.; Eilerman, S. J.; Gilman, J.; De Gouw, J. A.; Koss, A.; Lerner, B. M.; Neuman, J. A.; Peischl, J.; Ryerson, T. B.; Thompson, C. R.; Veres, P. R.; Warneke, C.; Washenfelder, R. A.; Wild, R. J.; Womack, C.; Yuan, B.; Zarzana, K. J.

    2017-12-01

    In the last decade, the rate of domestic energy production from oil and natural gas has grown dramatically, resulting in increased concurrent emissions of methane and other volatile organic compounds (VOCs). Products of VOC oxidation and radical cycling, such as tropospheric ozone (O3) and secondary organic aerosols (SOA), have detrimental impacts on human health and climate. The ability to monitor these emissions and their impact on atmospheric composition from remote-sensing platforms will benefit public health by improving air quality forecasts and identifying localized drivers of tropospheric pollution. New satellite-based instruments, such as TROPOMI (October 2017 launch) and TEMPO (2019-2021 projected launch), will be capable of measuring chemical species related to energy drilling and production on unprecedented spatial and temporal scales; however, there is a need for improved assessments of their capabilities with respect to specific applications. We use chemical and physical parameters measured via aircraft in the boundary layer and free troposphere during the Shale Oil and Natural Gas Nexus (SONGNEX 2015) field campaign to view chemical enhancements over tight oil and shale gas basins from a satellite perspective. Our in-situ data are used to calculate the planetary boundary layer contributions to the column densities for formaldehyde, glyoxal, O3, and NO2. We assess the spatial resolution and chemical precisions necessary to resolve various chemical features, and compare these limits to TEMPO and TROPOMI capabilities to show the degree to which their retrievals will be able to discern the signatures of oil and natural gas activity.
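
    The conversion from in-situ mixing ratios to partial column densities is straightforward for a well-mixed boundary layer; a sketch (our simplification, assuming a uniform layer rather than the campaign's full vertical integration):

      def partial_column(mix_ratio_ppb, pbl_depth_m, pressure_pa, temp_k):
          """Boundary-layer partial column in molecules/cm^2."""
          k_b = 1.380649e-23                     # Boltzmann constant [J/K]
          n_air = pressure_pa / (k_b * temp_k)   # air number density [m^-3]
          col = mix_ratio_ppb * 1e-9 * n_air * pbl_depth_m   # [m^-2]
          return col * 1e-4                      # convert to cm^-2

    With p = 1000 hPa, T = 290 K, and a 1 km layer, 1 ppb corresponds to roughly 2.5e15 molecules/cm^2.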

  11. Injection Molding Parameters Calculations by Using Visual Basic (VB) Programming

    NASA Astrophysics Data System (ADS)

    Tony, B. Jain A. R.; Karthikeyen, S.; Alex, B. Jeslin A. R.; Hasan, Z. Jahid Ali

    2018-03-01

    Nowadays the manufacturing industry plays a vital role in production sectors. Fabricating a component requires many design calculations, and there is a chance of human error during these calculations. The aim of this project is to create a special module using Visual Basic (VB) programming to calculate injection molding parameters and thereby avoid human errors. To create an injection mold for a spur gear component, the following parameters have to be calculated: cooling capacity, cooling channel diameter, cooling channel length, runner length and runner diameter, and gate diameter and gate pressure. To calculate these injection molding parameters, a separate module has been created using Visual Basic (VB) programming to reduce human errors. The output of the module is the dimensions of the injection molding components, such as the mold cavity and core design and the ejector plate design.

  12. 3D Human cartilage surface characterization by optical coherence tomography.

    PubMed

    Brill, Nicolai; Riedel, Jörn; Schmitt, Robert; Tingart, Markus; Truhn, Daniel; Pufe, Thomas; Jahr, Holger; Nebelung, Sven

    2015-10-07

    Early diagnosis and treatment of cartilage degeneration is of high clinical interest. Loss of surface integrity is considered one of the earliest and most reliable signs of degeneration, but cannot currently be evaluated objectively. Optical Coherence Tomography (OCT) is an arthroscopically available light-based non-destructive real-time imaging technology that allows imaging at micrometre resolutions to millimetre depths. As OCT-based surface evaluation standards remain to be defined, the present study investigated the diagnostic potential of 3D surface profile parameters in the comprehensive evaluation of cartilage degeneration. To this end, 45 cartilage samples of different degenerative grades were obtained from total knee replacements (2 males, 10 females; mean age 63.8 years), cut to standard size and imaged using a spectral-domain OCT device (Thorlabs, Germany). 3D OCT datasets of 8 × 8, 4 × 4 and 1 × 1 mm (width × length) were obtained and pre-processed (image adjustments, morphological filtering). Subsequent automated surface identification algorithms were used to obtain the 3D primary profiles, which were then filtered and processed using established algorithms employing ISO standards. The 3D surface profile thus obtained was used to calculate a set of 21 3D surface profile parameters, i.e. height (e.g. Sa), functional (e.g. Sk), hybrid (e.g. Sdq) and segmentation-related parameters (e.g. Spd). Samples underwent reference histological assessment according to the Degenerative Joint Disease classification. Statistical analyses included calculation of Spearman's rho and assessment of inter-group differences using the Kruskal-Wallis test. Overall, the majority of 3D surface profile parameters revealed significant degeneration-dependent differences and correlations with the exception of severe end-stage degeneration and were of distinct diagnostic value in the assessment of surface integrity. None of the 3D surface profile parameters investigated were capable of reliably differentiating healthy from early-degenerative cartilage, while scan area sizes considerably affected parameter values. In conclusion, cartilage surface integrity may be adequately assessed by 3D surface profile parameters, which should be used in combination for the comprehensive and thorough evaluation and overall improved diagnostic performance. OCT- and image-based surface assessment could become a valuable adjunct tool to standard arthroscopy.
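
    Two of the height parameters named above are simple moments of the levelled height map; a minimal sketch:

      import numpy as np

      def surface_height_params(z):
          """z: 2D array of surface heights on a regular grid (levelled)."""
          zc = z - z.mean()                  # reference plane at the mean height
          sa = np.abs(zc).mean()             # Sa: arithmetic mean height
          sq = np.sqrt((zc ** 2).mean())     # Sq: root-mean-square height
          return sa, sq

    The functional, hybrid and segmentation-related parameters (Sk, Sdq, Spd) additionally require the filtering and segmentation steps of the ISO procedures and are not reproduced here.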

  13. 3D Human cartilage surface characterization by optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Brill, Nicolai; Riedel, Jörn; Schmitt, Robert; Tingart, Markus; Truhn, Daniel; Pufe, Thomas; Jahr, Holger; Nebelung, Sven

    2015-10-01

    Early diagnosis and treatment of cartilage degeneration is of high clinical interest. Loss of surface integrity is considered one of the earliest and most reliable signs of degeneration, but cannot currently be evaluated objectively. Optical Coherence Tomography (OCT) is an arthroscopically available light-based non-destructive real-time imaging technology that allows imaging at micrometre resolutions to millimetre depths. As OCT-based surface evaluation standards remain to be defined, the present study investigated the diagnostic potential of 3D surface profile parameters in the comprehensive evaluation of cartilage degeneration. To this end, 45 cartilage samples of different degenerative grades were obtained from total knee replacements (2 males, 10 females; mean age 63.8 years), cut to standard size and imaged using a spectral-domain OCT device (Thorlabs, Germany). 3D OCT datasets of 8 × 8, 4 × 4 and 1 × 1 mm (width × length) were obtained and pre-processed (image adjustments, morphological filtering). Subsequent automated surface identification algorithms were used to obtain the 3D primary profiles, which were then filtered and processed using established algorithms employing ISO standards. The 3D surface profile thus obtained was used to calculate a set of 21 3D surface profile parameters, i.e. height (e.g. Sa), functional (e.g. Sk), hybrid (e.g. Sdq) and segmentation-related parameters (e.g. Spd). Samples underwent reference histological assessment according to the Degenerative Joint Disease classification. Statistical analyses included calculation of Spearman's rho and assessment of inter-group differences using the Kruskal-Wallis test. Overall, the majority of 3D surface profile parameters revealed significant degeneration-dependent differences and correlations with the exception of severe end-stage degeneration and were of distinct diagnostic value in the assessment of surface integrity. None of the 3D surface profile parameters investigated were capable of reliably differentiating healthy from early-degenerative cartilage, while scan area sizes considerably affected parameter values. In conclusion, cartilage surface integrity may be adequately assessed by 3D surface profile parameters, which should be used in combination for the comprehensive and thorough evaluation and overall improved diagnostic performance. OCT- and image-based surface assessment could become a valuable adjunct tool to standard arthroscopy.

  14. Third order maximum-principle-satisfying direct discontinuous Galerkin methods for time dependent convection diffusion equations on unstructured triangular meshes

    DOE PAGES

    Chen, Zheng; Huang, Hongying; Yan, Jue

    2015-12-21

    We develop 3rd order maximum-principle-satisfying direct discontinuous Galerkin methods [8], [9], [19] and [21] for convection diffusion equations on unstructured triangular mesh. We carefully calculate the normal derivative numerical flux across element edges and prove that, with proper choice of parameter pair (β 0,β 1) in the numerical flux formula, the quadratic polynomial solution satisfies strict maximum principle. The polynomial solution is bounded within the given range and third order accuracy is maintained. There is no geometric restriction on the meshes and obtuse triangles are allowed in the partition. As a result, a sequence of numerical examples are carried out to demonstrate the accuracy and capability of the maximum-principle-satisfying limiter.

  15. Finite Element Simulation of Shot Peening: Prediction of Residual Stresses and Surface Roughness

    NASA Astrophysics Data System (ADS)

    Gariépy, Alexandre; Perron, Claude; Bocher, Philippe; Lévesque, Martin

    Shot peening is a surface treatment that consists of bombarding a ductile surface with numerous small and hard particles. Each impact creates localized plastic strains that permanently stretch the surface. Since the underlying material constrains this stretching, compressive residual stresses are generated near the surface. This process is commonly used in the automotive and aerospace industries to improve fatigue life. Finite element analyses can be used to predict residual stress profiles and surface roughness created by shot peening. This study investigates further the parameters and capabilities of a random impact model by evaluating the representative volume element and the calculated stress distribution. Using an isotropic-kinematic hardening constitutive law to describe the behaviour of AA2024-T351 aluminium alloy, promising results were achieved in terms of residual stresses.

  16. Bands dispersion and charge transfer in β-BeH2

    NASA Astrophysics Data System (ADS)

    Trivedi, D. K.; Galav, K. L.; Joshi, K. B.

    2018-04-01

    Predictive capabilities of the ab-initio method are utilised to explore band dispersion and charge transfer in β-BeH2. Investigations are carried out using the linear combination of atomic orbitals method at the level of density functional theory. The crystal structure and related parameters are settled by coupling total energy calculations with the Murnaghan equation of state. The electronic band dispersion from PBE-GGA is reported. The PBE-GGA and PBE0 hybrid functionals show that β-BeH2 is a direct-gap semiconductor with band gaps of 1.18 and 2.40 eV, respectively. The band gap slowly decreases with pressure, and beyond 100 GPa an overlap of the conduction and valence bands at the Γ point is observed. Charge transfer is studied by means of Mulliken population analysis.

  17. Numerical simulation of narrow bipolar electromagnetic pulses generated by thunderstorm discharges

    NASA Astrophysics Data System (ADS)

    Bochkov, E. I.; Babich, L. P.; Kutsyk, I. M.

    2013-07-01

    Using the concept of avalanche relativistic runaway electrons (REs), we perform numerical simulations of compact intracloud discharge (CID) as a generator of powerful natural electromagnetic pulses (EMPs) in the HF-VHF range, called narrow bipolar pulses (NBPs). For several values of the field overvoltage and altitude at which the discharge develops, the numbers of seed electrons initiating the avalanche are evaluated, with which the calculated EMP characteristics are consistent with the measured NBP parameters. We note shortcomings in the hypothesis assuming participation of cosmic ray air showers in avalanche initiation. The discharge capable of generating NBPs produces REs in numbers close to those in the source of terrestrial γ-ray flashes (TGFs), which can be an argument in favor of a unified NBP and TGF source.

  18. Accurate simulations of helium pick-up experiments using a rejection-free Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Dutra, Matthew; Hinde, Robert

    2018-04-01

    In this paper, we present Monte Carlo simulations of helium droplet pick-up experiments with the intention of developing a robust and accurate theoretical approach for interpreting experimental helium droplet calorimetry data. Our approach is capable of capturing the evaporative behavior of helium droplets following dopant acquisition, allowing for a more realistic description of the pick-up process. Furthermore, we circumvent the traditional assumption of bulk helium behavior by utilizing density functional calculations of the size-dependent helium droplet chemical potential. The results of this new Monte Carlo technique are compared to commonly used Poisson pick-up statistics for simulations that reflect a broad range of experimental parameters. We conclude by offering an assessment of both of these theoretical approaches in the context of our observed results.
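
    The Poisson pick-up statistics referred to above give the probability of acquiring k dopants when the mean number of pick-up collisions is lam; a one-line reference implementation:

      from math import exp, factorial

      def poisson_pickup(k, lam):
          """P(k dopants | mean lam) under Poisson pick-up statistics."""
          return exp(-lam) * lam ** k / factorial(k)

    The Monte Carlo approach replaces this fixed distribution with trajectories in which each pick-up event triggers evaporation, shrinking the droplet and altering subsequent capture.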

  19. Quantifying Void Ratio in Granular Materials Using Voronoi Tessellation

    NASA Technical Reports Server (NTRS)

    Alshibli, Khalid A.; El-Saidany, Hany A.; Rose, M. Franklin (Technical Monitor)

    2000-01-01

    The Voronoi technique was used to calculate the local void ratio distribution of granular materials. It was implemented in an application-oriented image processing and analysis algorithm capable of extracting object edges, separating adjacent particles, obtaining the centroid of each particle, generating Voronoi polygons, and calculating the local void ratio. Details of the algorithm's capabilities and features are presented. Verification calculations included manual digitization of synthetic images using Oda's method and the Voronoi polygon system. The developed algorithm yielded very accurate measurements of the local void ratio distribution. Voronoi tessellation has the advantage, compared to Oda's method, of offering a well-defined polygon generation criterion that can be implemented in an algorithm to automatically calculate the local void ratio of particulate materials.
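
    In 2D the idea reduces to comparing each Voronoi cell area with the particle area it contains; a sketch using scipy (illustrative, not the original algorithm):

      import numpy as np
      from scipy.spatial import Voronoi

      def polygon_area(pts):
          """Shoelace formula for a polygon given as an (n, 2) vertex array."""
          x, y = pts[:, 0], pts[:, 1]
          return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

      def local_void_ratios(centroids, particle_areas):
          vor = Voronoi(centroids)
          ratios = {}
          for i, region_idx in enumerate(vor.point_region):
              region = vor.regions[region_idx]
              if -1 in region or not region:     # skip unbounded boundary cells
                  continue
              a_cell = polygon_area(vor.vertices[region])
              ratios[i] = (a_cell - particle_areas[i]) / particle_areas[i]
          return ratios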

  20. Novel Gemini cationic surfactants as anti-corrosion for X-65 steel dissolution in oilfield produced water under sweet conditions: Combined experimental and computational investigations

    NASA Astrophysics Data System (ADS)

    Migahed, M. A.; elgendy, Amr.; EL-Rabiei, M. M.; Nady, H.; Zaki, E. G.

    2018-05-01

    Two new series of Gemini di-quaternary ammonium salts were synthesized, characterized by FTIR and 1H NMR spectroscopic techniques, and evaluated as corrosion inhibitors for X-65 steel dissolution in deep oil well formation water saturated with CO2. The anti-corrosion performance of these compounds was studied by different electrochemical techniques (potentiodynamic polarization and AC impedance methods), surface morphology (SEM and EDX) analysis, and quantum chemical calculations. Results showed that the synthesized compounds were mixed-type inhibitors and that the inhibition capability was influenced by the inhibitor dose and the spacer substitution in their structure, as indicated by Tafel plots. Surface-active parameters were determined from the surface tension profile. The synthesized compounds adsorbed via the Langmuir adsorption model with physiochemical adsorption, as inferred from the standard free energy (ΔG°ads) values. Surface morphology (SEM and EDX) data for inhibitor (II) show the development of an adsorbed film on the steel specimen. Finally, the experimental results were supported by quantum chemical calculations using DFT.

  1. One vs two primary LOX feedline configuration study for the National Launch System

    NASA Technical Reports Server (NTRS)

    Dill, K.; Davis, D.; Bates, R.; Tarwater, R.

    1992-01-01

    Six single LOX feedline designs were evaluated for use on the National Launch Vehicle. A single feedline design, designated the 'Spider', was chosen and compared to the baseline system. The baseline configuration employs two 20-inch I.D. lines, each supplying LOX to three 650,000 lbf thrust Space Transportation Main Engines. Five single feedline diameters were examined for the spider configuration: 22, 24, 26, 28, and 30-inch I.D. System dry weights and LOX residuals were estimated. These parameters, along with the calculated staged mass for the different single-line and baseline configurations, were used to calculate the payload mass to orbit. For the cases where LOX is drained to minimum NPSP conditions, none of the single lines performed as well as the dual-line system, although the 22-inch diameter single line compared well. However, for the cases where LOX is drained to operating levels (LOX level at the booster and spider manifolds for the dual- and single-line configurations, respectively), the 22- to 26-inch I.D. single-line systems show a greater payload capability.

  2. Intelligent Flow Friction Estimation

    PubMed Central

    Brkić, Dejan; Ćojbašić, Žarko

    2016-01-01

    Nowadays, the Colebrook equation is used as a mostly accepted relation for the calculation of the fluid flow friction factor. However, the Colebrook equation is implicit with respect to the friction factor (λ). In the present study, a noniterative approach using an Artificial Neural Network (ANN) was developed to calculate the friction factor. To configure the ANN model, the input parameters of the Reynolds number (Re) and the relative roughness of pipe (ε/D) were transformed to logarithmic scales. The 90,000 sets of data were fed to the ANN model involving three layers: input, hidden, and output layers with 2, 50, and 1 neurons, respectively. This configuration was capable of predicting the values of the friction factor in the Colebrook equation for any given values of the Reynolds number (Re) and the relative roughness (ε/D) ranging between 5,000 and 10^8 and between 10^-7 and 0.1, respectively. The proposed ANN demonstrates a relative error of up to 0.07%, which is highly accurate compared with the vast majority of the precise explicit approximations of the Colebrook equation.
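
    For reference, the implicit Colebrook equation that the ANN replaces can be solved by fixed-point iteration; the ANN trades this iteration for a single forward pass:

      from math import log10

      def colebrook(re, rel_rough, tol=1e-10):
          """Solve 1/sqrt(f) = -2 log10(eps/(3.7 D) + 2.51/(Re sqrt(f)))."""
          x = 0.02 ** -0.5                   # initial guess for 1/sqrt(f)
          while True:
              x_new = -2.0 * log10(rel_rough / 3.7 + 2.51 * x / re)
              if abs(x_new - x) < tol:
                  return 1.0 / x_new ** 2    # friction factor
              x = x_new

      # e.g. colebrook(1e5, 1e-4) -> ~0.0185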

  3. Update on developments at SNIF

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zacks, J., E-mail: jamie.zacks@ccfe.ac.uk; Turner, I.; Day, I.

    The Small Negative Ion Facility (SNIF) at CCFE has been undergoing continuous development and enhancement to both improve operational reliability and increase diagnostic capability. SNIF uses a CW 13.56 MHz, 5 kW RF driven volume source with a 30 kV triode accelerator. Improvement and characterisation work includes: Installation of a new “L” type RF matching unit, used to calculate the load on the RF generator. Use of the electron suppressing biased insert as a Langmuir probe under different beam extraction conditions. Measurement of the hydrogen Fulcher molecular spectrum, used to calculate gas temperature in the source. Beam optimisation through parameter scans, using copper target plate and visible cameras, with results compared with AXCEL-INP to provide a beam current estimate. Modelling of the beam power density profile on the target plate using ANSYS to estimate beam power and provide another estimate of beam current. This work is described, and has allowed an estimation of the extracted beam current of approximately 6 mA (4 mA/cm2) at 3.5 kW RF power and a source pressure of 0.6 Pa.

  4. New features and improved uncertainty analysis in the NEA nuclear data sensitivity tool (NDaST)

    NASA Astrophysics Data System (ADS)

    Dyrda, J.; Soppera, N.; Hill, I.; Bossant, M.; Gulliford, J.

    2017-09-01

    Following the release and initial testing period of the NEA's Nuclear Data Sensitivity Tool [1], new features have been designed and implemented in order to expand its uncertainty analysis capabilities. The aim is to provide a free online tool for integral benchmark testing, that is both efficient and comprehensive, meeting the needs of the nuclear data and benchmark testing communities. New features include access to P1 sensitivities for neutron scattering angular distribution [2] and constrained Chi sensitivities for the prompt fission neutron energy sampling. Both of these are compatible with covariance data accessed via the JANIS nuclear data software, enabling propagation of the resultant uncertainties in keff to a large series of integral experiment benchmarks. These capabilities are available using a number of different covariance libraries e.g., ENDF/B, JEFF, JENDL and TENDL, allowing comparison of the broad range of results it is possible to obtain. The IRPhE database of reactor physics measurements is now also accessible within the tool in addition to the criticality benchmarks from ICSBEP. Other improvements include the ability to determine and visualise the energy dependence of a given calculated result in order to better identify specific regions of importance or high uncertainty contribution. Sorting and statistical analysis of the selected benchmark suite is now also provided. Examples of the plots generated by the software are included to illustrate such capabilities. Finally, a number of analytical expressions, for example Maxwellian and Watt fission spectra will be included. This will allow the analyst to determine the impact of varying such distributions within the data evaluation, either through adjustment of parameters within the expressions, or by comparison to a more general probability distribution fitted to measured data. The impact of such changes is verified through calculations which are compared to a `direct' measurement found by adjustment of the original ENDF format file.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dolly, S; University of Missouri, Columbia, MO; Chen, H

    Purpose: Local noise power spectrum (NPS) properties are significantly affected by calculation variables and CT acquisition and reconstruction parameters, but a thoughtful analysis of these effects is absent. In this study, we performed a complete analysis of the effects of calculation and imaging parameters on the NPS. Methods: The uniformity module of a Catphan phantom was scanned with a Philips Brilliance 64-slice CT simulator using various scanning protocols. Images were reconstructed using both FBP and iDose4 reconstruction algorithms. From these images, local NPS were calculated for regions of interest (ROI) of varying locations and sizes, using four image background removal methods. Additionally, using a predetermined ground truth, NPS calculation accuracy for various calculation parameters was compared for computer simulated ROIs. A complete analysis of the effects of calculation, acquisition, and reconstruction parameters on the NPS was conducted. Results: The local NPS varied with ROI size and image background removal method, particularly at low spatial frequencies. The image subtraction method was the most accurate according to the computer simulation study, and was also the most effective at removing low frequency background components in the acquired data. However, first-order polynomial fitting using residual sum of squares and principal component analysis provided comparable accuracy under certain situations. Similar general trends were observed when comparing the NPS for FBP to that of iDose4 while varying other calculation and scanning parameters. However, while iDose4 reduces the noise magnitude compared to FBP, this reduction is spatial-frequency dependent, further affecting NPS variations at low spatial frequencies. Conclusion: The local NPS varies significantly depending on calculation parameters, image acquisition parameters, and reconstruction techniques. Appropriate local NPS calculation should be performed to capture spatial variations of noise; calculation methodology should be selected with consideration of image reconstruction effects and the desired purpose of CT simulation for radiotherapy tasks.
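
    The image-subtraction NPS estimate found most accurate above follows the standard definition; a sketch (our notation; ROI handling and ensemble averaging conventions vary between implementations):

      import numpy as np

      def local_nps(rois_a, rois_b, pixel_mm):
          """rois_a, rois_b: matched ROI stacks (n, N, N) from repeated scans."""
          diff = (rois_a - rois_b) / np.sqrt(2.0)  # subtraction removes background
          n, N, _ = diff.shape
          dft2 = np.abs(np.fft.fft2(diff)) ** 2
          nps = dft2.mean(axis=0) * pixel_mm * pixel_mm / (N * N)
          return np.fft.fftshift(nps)              # zero frequency centred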

  6. Measurement and calculation of ternary oxide mixtures for thin films for ultra short pulse laser optics

    NASA Astrophysics Data System (ADS)

    Jupé, M.; Mende, M.; Kolleck, C.; Ristau, D.; Gallais, L.; Mangote, B.

    2011-12-01

    Femtosecond technology is gaining importance in industrial applications. In this context, a new generation of compact and low-cost laser sources has to be provided on a commercial basis. Typical pulse durations of these sources are specified in the range from a few hundred femtoseconds up to a few picoseconds, and typical wavelengths are centered around 1030-1080 nm. As a consequence, the demands imposed on high power optical components for these laser sources are also rapidly increasing, especially with respect to their power handling capability in the ultra-short pulse range. The present contribution is dedicated to some aspects of improving this quality parameter of optical coatings. The study is based on a set of hafnia and silica mixtures with different compositions and optical band gaps. Under ultra-short pulse laser irradiation this material combination displays effects which are typical of thermal processes. For instance, melting has been observed in the morphology of damage sites. In this context, models for the prediction of laser damage thresholds and scaling laws are scrutinized, and have been modified to calculate the energy of the electron ensemble. Furthermore, a simple first-order approach for the calculation of the temperature was included.

  7. Modeling RF Fields in Hot Plasmas with Parallel Full Wave Code

    NASA Astrophysics Data System (ADS)

    Spencer, Andrew; Svidzinski, Vladimir; Zhao, Liangji; Galkin, Sergei; Kim, Jin-Soo

    2016-10-01

    FAR-TECH, Inc. is developing a suite of full wave RF plasma codes. It is based on a meshless formulation in configuration space with adapted cloud of computational points (CCP) capability and using the hot plasma conductivity kernel to model the nonlocal plasma dielectric response. The conductivity kernel is calculated by numerically integrating the linearized Vlasov equation along unperturbed particle trajectories. Work has been done on the following calculations: 1) the conductivity kernel in hot plasmas, 2) a monitor function based on analytic solutions of the cold-plasma dispersion relation, 3) an adaptive CCP based on the monitor function, 4) stencils to approximate the wave equations on the CCP, 5) the solution to the full wave equations in the cold-plasma model in tokamak geometry for ECRH and ICRH range of frequencies, and 6) the solution to the wave equations using the calculated hot plasma conductivity kernel. We will present results on using a meshless formulation on adaptive CCP to solve the wave equations and on implementing the non-local hot plasma dielectric response to the wave equations. The presentation will include numerical results of wave propagation and absorption in the cold and hot tokamak plasma RF models, using DIII-D geometry and plasma parameters. Work is supported by the U.S. DOE SBIR program.

  8. Nonlinear Rayleigh wave inversion based on the shuffled frog-leaping algorithm

    NASA Astrophysics Data System (ADS)

    Sun, Cheng-Yu; Wang, Yan-Yan; Wu, Dun-Shi; Qin, Xiao-Jun

    2017-12-01

    At present, near-surface shear wave velocities are mainly calculated through Rayleigh wave dispersion-curve inversions in engineering surface investigations, but the required calculations pose a highly nonlinear global optimization problem. In order to alleviate the risk of falling into a local optimal solution, this paper introduces a new global optimization method, the shuffled frog-leaping algorithm (SFLA), into the Rayleigh wave dispersion-curve inversion process. SFLA is a swarm-intelligence-based algorithm that simulates a group of frogs searching for food. It uses few parameters, achieves rapid convergence, and is capable of effective global searching. In order to test the reliability and calculation performance of SFLA, noise-free and noisy synthetic datasets were inverted. We conducted a comparative analysis with other established algorithms using the noise-free dataset, and then tested the ability of SFLA to cope with data noise. Finally, we inverted a real-world example to examine the applicability of SFLA. Results from both synthetic and field data demonstrated the effectiveness of SFLA in the interpretation of Rayleigh wave dispersion curves. We found that SFLA is superior to the established methods in terms of both reliability and computational efficiency, so it offers great potential to improve our ability to solve geophysical inversion problems.
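
    A compact version of the SFLA search loop (an illustrative implementation of the common formulation; population sizes and the leap rule are assumptions, not the paper's exact settings):

      import numpy as np

      def sfla(cost, lo, hi, n_frogs=30, n_memeplex=5, n_gen=100, seed=0):
          rng = np.random.default_rng(seed)
          lo, hi = np.asarray(lo, float), np.asarray(hi, float)
          frogs = rng.uniform(lo, hi, size=(n_frogs, lo.size))
          for _ in range(n_gen):
              frogs = frogs[np.argsort([cost(f) for f in frogs])]  # best first
              g_best = frogs[0]
              for m in range(n_memeplex):            # deal frogs into memeplexes
                  idx = np.arange(m, n_frogs, n_memeplex)
                  b, w = frogs[idx[0]], frogs[idx[-1]]
                  new = np.clip(w + rng.uniform() * (b - w), lo, hi)
                  if cost(new) >= cost(w):           # else leap toward global best
                      new = np.clip(w + rng.uniform() * (g_best - w), lo, hi)
                      if cost(new) >= cost(w):
                          new = rng.uniform(lo, hi)  # random reset
                  frogs[idx[-1]] = new
          return frogs[np.argmin([cost(f) for f in frogs])]

    For the dispersion-curve problem, cost would measure the misfit between observed and forward-modelled Rayleigh wave phase velocities over frequency.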

  9. [Principles of the EOS™ X-ray machine and its use in daily orthopedic practice].

    PubMed

    Illés, Tamás; Somoskeöy, Szabolcs

    2012-02-26

    The EOS™ X-ray machine, based on a Nobel prize-winning invention in Physics in the field of particle detection, is capable of simultaneously capturing biplanar X-ray images by slot scanning of the whole body in an upright, physiological load-bearing position, using ultra low radiation doses. The simultaneous capture of spatially calibrated anteroposterior and lateral images allows the performance of a three-dimensional (3D) surface reconstruction of the skeletal system by special software. Parts of the skeletal system in X-ray images and 3D-reconstructed models appear in true 1:1 scale for size and volume, thus spinal and vertebral parameters, lower limb axis lengths and angles, as well as any relevant clinical parameters in orthopedic practice can be very precisely measured and calculated. Visualization of 3D reconstructed models in various views by the sterEOS 3D software enables the presentation of top view images, through which one can analyze the rotational conditions of lower limbs, joints and spine deformities in the horizontal plane, and this provides revolutionary novel possibilities in orthopedic surgery, especially in spine surgery.

  10. Fast Reactions of Aluminum and Explosive Decomposition Products in a Post-Detonation Environment

    NASA Astrophysics Data System (ADS)

    Tappan, Bryce; Manner, Virginia; Lloyd, Joseph; Pemberton, Steven; Explosives Applications; Special Projects Team

    2011-06-01

    In order to determine the reaction behavior of Al in HMX/cast-cured binder formulations shortly after the passage of the detonation, a series of cylinder tests was performed on formulations with varying amounts of 2 μm spherical Al as well as LiF (an inert surrogate for Al). In these studies, both detonation velocity and cylinder expansion velocity are measured in order to determine exactly how and when Al contributes to the explosive event, particularly in the presence of oxidizing/energetic binders. The U.S. Army ARDEC at Picatinny has recently coined the term "combined effects explosives" for these materials as they demonstrate both high metal pushing capability and high blast ability. This study is aimed at developing a fundamental understanding of the reaction of Al with explosives decomposition products, where both the detonation and post-detonation environment are analyzed. Reaction rates of Al metal are determined via comparison of predicted performance based on thermoequilibrium calculations. The JWL equation of state, detonation velocities, wall velocities, and parameters at the C-J plane are some of the parameters that will be discussed.

  11. Capability of applying morphometric parameters of relief in river basins for geomorphological zoning of a territory

    NASA Astrophysics Data System (ADS)

    Ivanov, M. A.; Yermolaev, O. P.

    2018-01-01

    Information about the morphometric characteristics of relief is necessary for studies devoted to the geographic characterization of a territory, its zoning, and the assessment of erosion processes, geoecological conditions and other factors. For the Volga Federal District, a spatial database of geomorphometric parameters at 1:200,000 scale was created for the first time, based on a river-basin approach. Watersheds are used as the spatial units, created by a semi-automated method using the terrain and hydrological modeling techniques implemented in TAS GIS and WhiteBox GIS. SRTM and ASTER GDEM digital elevation models, together with the hydrographic network vectorized from topographic maps, were used as input data. Using these DEMs, basic morphometric relief characteristics such as mean height, slope steepness, slope length, height range, river network density and the LS factor were calculated for each river basin. The assignment of basins to geomorphological regions and landscape zones was determined according to the map of geomorphological zoning and the landscape map. Analysis of variance revealed a statistically significant relationship between these characteristics and the geomorphological regions and landscape zones. Consequently, spatial trends in the analyzed morphometric characteristics were revealed.

  12. Computational screening of organic materials towards improved photovoltaic properties

    NASA Astrophysics Data System (ADS)

    Dai, Shuo; Olivares-Amaya, Roberto; Amador-Bedolla, Carlos; Aspuru-Guzik, Alan; Borunda, Mario

    2015-03-01

    The world today faces an energy crisis that is an obstruction to the development of human civilization. One of the most promising solutions is solar energy harvested by economical solar cells. As the third generation of solar cell materials, organic photovoltaic (OPV) materials are now under active development from both theoretical and experimental points of view. In this study, we constructed a parameter to select desired molecules based on the performance of their optical spectra. We applied it to investigate a large collection of potential OPV materials from the CEPDB database, set up by the Harvard Clean Energy Project. Time-dependent density functional theory (TD-DFT) modeling was used to calculate the absorption spectra of the molecules. Based on the parameter, we then screened out the top-performing molecules for potential OPV usage and suggested experimental efforts toward their synthesis. In addition, from those molecules, we summarized the functional groups that give molecules a particular spectral capability. It is hoped that useful information can be mined to provide hints for the molecular design of OPV materials.

  13. Final Technical Report: Mathematical Foundations for Uncertainty Quantification in Materials Design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plechac, Petr; Vlachos, Dionisios G.

    We developed path-wise information theory-based and goal-oriented sensitivity analysis and parameter identification methods for complex high-dimensional dynamics, in particular of non-equilibrium extended molecular systems. The combination of these novel methodologies provided the first methods in the literature capable of handling UQ questions for stochastic complex systems with some or all of the following features: (a) multi-scale stochastic models such as (bio)chemical reaction networks, with a very large number of parameters, (b) spatially distributed systems such as Kinetic Monte Carlo or Langevin Dynamics, (c) non-equilibrium processes typically associated with coupled physico-chemical mechanisms, driven boundary conditions, hybrid micro-macro systems, etc. A particular computational challenge arises in simulations of multi-scale reaction networks and molecular systems. Mathematical techniques were applied to in silico prediction of novel materials with emphasis on the effect of microstructure on model uncertainty quantification (UQ). We outline acceleration methods to make calculations of real chemistry feasible, followed by two complementary tasks on structure optimization and microstructure-induced UQ.

  14. Determination of the main parameters of the cyclone separator of the flue gas produced during the smelting of secondary aluminum

    NASA Astrophysics Data System (ADS)

    Matusov, Jozef; Gavlas, Stanislav

    2016-06-01

    One way to separate solid particulate pollutants from flue gas is to use cyclone separators. Cyclone separators are very frequently used due to the simplicity of their design and their low operating costs. Separation of pollutants in the form of solids is carried out using three types of forces: inertial force, centrifugal force, and gravity. The main advantage is that the cyclone consists of parts which are resistant to wear and have a long lifetime, since it contains no rotating or sliding components. Cyclones are mostly used as pre-separators, because they have low efficiency in the separation of small particles. Their function is to separate larger particles from the flue gases, which are subsequently cleaned in another device capable of removing particles smaller than 1 µm, the limiting particle size for separation. The article deals with the calculation of the basic dimensions and main parameters of the cyclone separator for the flue gas produced during the smelting of secondary aluminum.
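
    One classical relation of the kind used in such sizing calculations is the Lapple cut diameter, the particle size collected with 50% efficiency (a textbook sketch, not necessarily the article's exact procedure):

      from math import pi, sqrt

      def lapple_cut_diameter(mu, b, n_e, v_i, rho_p, rho_g):
          """mu: gas viscosity [Pa s]; b: inlet width [m]; n_e: effective
          number of gas revolutions; v_i: inlet velocity [m/s];
          rho_p, rho_g: particle and gas densities [kg/m^3]."""
          return sqrt(9.0 * mu * b / (2.0 * pi * n_e * v_i * (rho_p - rho_g)))

      # e.g. lapple_cut_diameter(1.8e-5, 0.2, 6, 15, 2700, 1.2) -> ~4.6e-6 m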

  15. Analytical Calculation of Sensing Parameters on Carbon Nanotube Based Gas Sensors

    PubMed Central

    Akbari, Elnaz; Buntat, Zolkafle; Ahmad, Mohd Hafizi; Enzevaee, Aria; Yousof, Rubiyah; Iqbal, Syed Muhammad Zafar; Ahmadi, Mohammad Taghi.; Sidik, Muhammad Abu Bakar; Karimi, Hediyeh

    2014-01-01

    Carbon Nanotubes (CNTs) are generally nano-scale tubes comprising a network of carbon atoms in a cylindrical setting that compared with silicon counterparts present outstanding characteristics such as high mechanical strength, high sensing capability and large surface-to-volume ratio. These characteristics, in addition to the fact that CNTs experience changes in their electrical conductance when exposed to different gases, make them appropriate candidates for use in sensing/measuring applications such as gas detection devices. In this research, a model for a Field Effect Transistor (FET)-based structure has been developed as a platform for a gas detection sensor in which the CNT conductance change resulting from the chemical reaction between NH3 and CNT has been employed to model the sensing mechanism with proposed sensing parameters. The research implements the same FET-based structure as in the work of Peng et al. on nanotube-based NH3 gas detection. With respect to this conductance change, the I–V characteristic of the CNT is investigated. Finally, a comparative study shows satisfactory agreement between the proposed model and the experimental data from the mentioned research. PMID:24658617

  16. Effect of frictional heating on radiative ferrofluid flow over a slendering stretching sheet with aligned magnetic field

    NASA Astrophysics Data System (ADS)

    Ramana Reddy, J. V.; Sugunamma, V.; Sandeep, N.

    2017-01-01

    The pivotal objective of this paper is to look into the flow of ferrofluids past a variable-thickness surface with velocity slip. Magnetite (Fe3O4) nanoparticles are embedded in the regular fluid. The occurrence of frictional heating in the flow is also taken into account, so the flow equations are coupled and nonlinear. These are remodelled into dimensionless form with the support of suitable transformations. The solution of the transformed equations is determined with the support of an effective Runge-Kutta (RK)-based shooting technique. Ultimately, the effects of a few flow-modulating quantities on fluid motion and heat transport were explored through plots procured using the MATLAB toolbox. Owing to the engineering applications, we also calculated the friction factor and the heat transfer coefficient for the influencing parameters. The results are presented comparatively for both the regular fluid (water) and the water-based ferrofluid. This study enables us to deduce that an increase in the aligned angle or surface thickness reduces the fluid velocity. The radiation and dissipation parameters are capable of providing heat energy to the flow.
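
    The shooting idea itself is compact enough to sketch on a model problem (illustrative only; the paper's coupled momentum and energy equations are more involved). For f''(x) = -f(x) with f(0) = 0 and f(1) = 1, one guesses the unknown initial slope, integrates with an RK solver, and corrects the guess until the far boundary condition is met:

      from scipy.integrate import solve_ivp
      from scipy.optimize import brentq

      def residual(slope):
          sol = solve_ivp(lambda x, y: [y[1], -y[0]], (0.0, 1.0),
                          [0.0, slope], rtol=1e-8)
          return sol.y[0, -1] - 1.0      # miss distance at x = 1

      slope0 = brentq(residual, 0.1, 5.0)
      # exact solution is f(x) = sin(x)/sin(1), so slope0 ~ 1/sin(1) ~ 1.1884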

  17. Modeling an alkaline electrolysis cell through reduced-order and loss-estimate approaches

    NASA Astrophysics Data System (ADS)

    Milewski, Jaroslaw; Guandalini, Giulio; Campanari, Stefano

    2014-12-01

    The paper presents two approaches to the mathematical modeling of an Alkaline Electrolyzer Cell. The presented models were compared and validated against available experimental results taken from a laboratory test and against literature data. The first modeling approach is based on the analysis of estimated losses due to the different phenomena occurring inside the electrolytic cell, and requires careful calibration of several specific parameters (e.g. those related to the electrochemical behavior of the electrodes) some of which could be hard to define. An alternative approach is based on a reduced-order equivalent circuit, resulting in only two fitting parameters (electrodes specific resistance and parasitic losses) and calculation of the internal electric resistance of the electrolyte. Both models yield satisfactory results with an average error limited below 3% vs. the considered experimental data and show the capability to describe with sufficient accuracy the different operating conditions of the electrolyzer; the reduced-order model could be preferred thanks to its simplicity for implementation within plant simulation tools dealing with complex systems, such as electrolyzers coupled with storage facilities and intermittent renewable energy sources.
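
    A heavily simplified polarization sketch in the spirit of the reduced-order model, with the two resistive quantities named in the abstract (the form and values are our illustrative assumptions, not the paper's calibrated model):

      def cell_voltage(i_dens, r_electrodes, r_electrolyte, v_rev=1.229):
          """i_dens [A/cm^2]; area-specific resistances [ohm cm^2] -> volts."""
          return v_rev + i_dens * (r_electrodes + r_electrolyte)

    The parasitic-loss parameter then enters on the current side, reducing the hydrogen production attributable to the measured stack current.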

  18. Development of Bio-impedance Analyzer (BIA) for Body Fat Calculation

    NASA Astrophysics Data System (ADS)

    Riyadi, Munawar A.; Nugraha, A.; Santoso, M. B.; Septaditya, D.; Prakoso, T.

    2017-04-01

    Common weight scales cannot assess body composition or determine the fat mass and fat-free mass that make up the body weight. This research proposes a bio-impedance analysis (BIA) tool capable of body composition assessment. The tool uses four electrodes: two are used to pass a 50 kHz sine-wave current through the body, and the other two measure the voltage produced by the body for impedance analysis. Parameters such as height, weight, age, and gender are provided individually. These parameters, together with the impedance measurements, are then processed to produce a body fat percentage. The experimental results show impressive repeatability for successive measurements (stdev ≤ 0.25% fat mass). Moreover, results for the hand-to-hand electrode scheme reveal an average absolute difference between the two analyzer tools of 0.48% fat mass, with a maximum absolute discrepancy of 1.22% fat mass. In addition, the relative error normalized to the Omron HBF-306 comparison tool is less than 2%. As a result, the system offers a good evaluation tool for body fat mass.
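
    The usual single-frequency BIA pipeline feeds the measured impedance, together with height, weight, age, and sex, into a regression for fat-free mass; the sketch below uses placeholder coefficients, NOT the calibrated values of this analyzer or of the Omron comparison unit:

      def fat_percentage(z_ohm, height_cm, weight_kg, age_yr, male,
                         a=0.50, b=0.18, c=-0.11, d=4.0, e=5.0):
          """Body fat percentage from a generic FFM regression
          (coefficients a..e are hypothetical placeholders)."""
          ffm = (a * height_cm ** 2 / z_ohm + b * weight_kg
                 + c * age_yr + d * (1 if male else 0) + e)
          return 100.0 * (weight_kg - ffm) / weight_kg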

  19. The Hildebrand solubility parameters of ionic liquids-part 2.

    PubMed

    Marciniak, Andrzej

    2011-01-01

    The Hildebrand solubility parameters have been calculated for eight ionic liquids. Retention data from the inverse gas chromatography measurements of the activity coefficients at infinite dilution were used for the calculation. From the solubility parameters, the enthalpies of vaporization of ionic liquids were estimated. Results are compared with solubility parameters estimated by different methods.
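
    The link between the solubility parameter and the enthalpy of vaporization follows from the cohesive energy density, delta^2 = (dHvap - R*T)/Vm; a minimal sketch with illustrative magnitudes:

      # Hedged sketch: once a Hildebrand parameter delta is known, the enthalpy of
      # vaporization follows from delta^2 = (dHvap - R*T) / Vm.
      R = 8.314          # J/(mol*K)
      T = 298.15         # K

      def dh_vap(delta_mpa_sqrt, molar_volume_cm3):
          # delta in MPa^0.5, molar volume in cm^3/mol -> dHvap in kJ/mol
          ced = delta_mpa_sqrt ** 2          # cohesive energy density; MPa = J/cm^3
          return (ced * molar_volume_cm3 + R * T) / 1000.0

      # Illustrative values of the order reported for imidazolium ionic liquids.
      print(f"{dh_vap(25.0, 200.0):.1f} kJ/mol")   # ~127.5 kJ/mol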

  20. The Numerical Calculation and Experimental Measurement of the Inductance Parameters for Permanent Magnet Synchronous Motor in Electric Vehicle

    NASA Astrophysics Data System (ADS)

    Jiang, Chao; Qiao, Mingzhong; Zhu, Peng

    2017-12-01

    A permanent magnet synchronous motor with a radial magnetic circuit and built-in permanent magnets is designed for the electric vehicle. Finite element numerical calculation and experimental measurement are adopted to obtain the direct-axis and quadrature-axis inductance parameters of the motor, which are vitally important for motor control. The calculation method is simple, the measuring principle is clear, and the results of the numerical calculation and the experimental measurement confirm each other. A quick and effective method is thus provided to obtain the direct-axis and quadrature-axis inductance parameters of the motor, and in turn to improve the motor design or tune the control parameters of the motor controller.

  1. Simulation and analysis of tape spring for deployed space structures

    NASA Astrophysics Data System (ADS)

    Chang, Wei; Cao, DongJing; Lian, MinLong

    2018-03-01

    The tape spring is an open cylindrical shell, and the mechanical properties of the structure are significantly affected by changes in its geometrical parameters, yet few studies have addressed this influence. The bending process of a single tape spring was simulated with simulation software. The variations of the critical moment, unfolding moment, and maximum strain energy during bending were investigated, and the effects of the section radius angle, thickness, and length on the driving capability of the simple tape spring were studied through these parameters. Results show that, in the bending process of the single tape spring, the driving capability and disturbance-resisting capacity grow with increasing section radius angle, decrease with increasing length, and grow with increasing thickness. The research has reference value for improving the kinematic accuracy and reliability of deployable structures.

  2. Assessment of the Effects of Entrainment and Wind Shear on Nuclear Cloud Rise Modeling

    NASA Astrophysics Data System (ADS)

    Zalewski, Daniel; Jodoin, Vincent

    2001-04-01

    Accurate modeling of nuclear cloud rise is critical in hazard prediction following a nuclear detonation. This thesis recommends improvements to the model currently used by DOD. It considers a single-term versus a three-term entrainment equation, the value of the entrainment and eddy viscous drag parameters, as well as the effect of wind shear in the cloud rise following a nuclear detonation. It examines departures from the 1979 version of the Department of Defense Land Fallout Interpretive Code (DELFIC) with the current code used in the Hazard Prediction and Assessment Capability (HPAC) code version 3.2. The recommendation for a single-term entrainment equation, with constant value parameters, without wind shear corrections, and without cloud oscillations is based on both a statistical analysis using 67 U.S. nuclear atmospheric test shots and the physical representation of the modeling. The statistical analysis optimized the parameter values of interest for four cases: the three-term entrainment equation with wind shear and without wind shear as well as the single-term entrainment equation with and without wind shear. The thesis then examines the effect of cloud oscillations as a significant departure in the code. Modifications to user input atmospheric tables are identified as a potential problem in the calculation of stabilized cloud dimensions in HPAC.

  3. The efficiency of parameter estimation of latent path analysis using summated rating scale (SRS) and method of successive interval (MSI) for transformation of score to scale

    NASA Astrophysics Data System (ADS)

    Solimun; Fernandes, Adji Achmad Rinaldo; Arisoesilaningsih, Endang

    2017-12-01

    Research in various fields commonly investigates systems involving latent variables, and one method to analyze a model representing such a system is path analysis. Latent variables measured with attitude-scale questionnaires yield data in the form of scores, which should be transformed to scale data before analysis. Path coefficients, the parameter estimates, are calculated from scale data obtained using the method of successive interval (MSI) and the summated rating scale (SRS). This research identifies which data transformation method is better: path coefficients with smaller variances are more efficient, so the transformation method whose scaled data produce path coefficients (parameter estimates) with smaller variances is the better one. Analysis of real data shows that, for the influence of the Attitude variable on Entrepreneurship Intention, the relative efficiency is ER = 1, indicating that MSI and SRS transformations are equally efficient. On the other hand, for simulation data with high correlation between items (0.7-0.9), the MSI method is about 1.3 times more efficient than the SRS method.
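
    A minimal sketch of the efficiency comparison, assuming relative efficiency is taken as the ratio of the variances of the path-coefficient estimates under the two methods; the replicate data here are simulated placeholders.

      # Hedged sketch: relative efficiency of two transformation methods, taken
      # here as the ratio of the variances of the path coefficients they produce.
      import numpy as np

      def relative_efficiency(coefs_msi, coefs_srs):
          # ER = var(SRS-based estimates) / var(MSI-based estimates);
          # ER > 1 means MSI is the more efficient method.
          return np.var(coefs_srs, ddof=1) / np.var(coefs_msi, ddof=1)

      # Hypothetical replicates of one path coefficient under each method.
      rng = np.random.default_rng(0)
      msi = rng.normal(0.42, 0.050, size=200)
      srs = rng.normal(0.42, 0.057, size=200)
      print(f"ER = {relative_efficiency(msi, srs):.2f}")   # ~1.3 in this simulated case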

  4. Computer-aided mathematical analysis of probability of intercept for ground-based communication intercept system

    NASA Astrophysics Data System (ADS)

    Park, Sang Chul

    1989-09-01

    We develop a mathematical analysis model to calculate the probability of intercept (POI) for the ground-based communication intercept (COMINT) system. The POI is a measure of the effectiveness of the intercept system. We define the POI as the product of the probability of detection and the probability of coincidence. The probability of detection is a measure of the receiver's capability to detect a signal in the presence of noise. The probability of coincidence is the probability that an intercept system is available, actively listening in the proper frequency band, in the right direction and at the same time that the signal is received. We investigate the behavior of the POI with respect to the observation time, the separation distance, antenna elevations, the frequency of the signal, and the receiver bandwidths. We observe that the coincidence characteristic between the receiver scanning parameters and the signal parameters is the key factor to determine the time to obtain a given POI. This model can be used to find the optimal parameter combination to maximize the POI in a given scenario. We expand this model to a multiple system. This analysis is conducted on a personal computer to provide the portability. The model is also flexible and can be easily implemented under different situations.
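
    The multiplicative POI structure can be sketched as below; the detection curve and the scanning-overlap model are simplified stand-ins for the paper's receiver and scenario models, and all numbers are illustrative.

      # Hedged sketch of the multiplicative POI model: POI = P_detection *
      # P_coincidence. The sub-models below are simplified stand-ins.
      import math

      def p_detection(snr_db, threshold_db=10.0, slope=0.5):
          # Illustrative smooth detection curve rising with SNR above a threshold.
          return 1.0 / (1.0 + math.exp(-slope * (snr_db - threshold_db)))

      def p_coincidence(dwell_s, revisit_s, signal_duration_s):
          # Probability that a scanning receiver overlaps an intermittent signal:
          # fraction of the scan period spent watching the band, stretched by the
          # signal's own duration (clipped to 1).
          return min(1.0, (dwell_s + signal_duration_s) / revisit_s)

      poi = p_detection(14.0) * p_coincidence(dwell_s=0.05, revisit_s=1.0,
                                              signal_duration_s=0.2)
      print(f"POI = {poi:.3f}")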

  5. A Performance Map for Ideal Air Breathing Pulse Detonation Engines

    NASA Technical Reports Server (NTRS)

    Paxson, Daniel E.

    2001-01-01

    The performance of an ideal, air breathing Pulse Detonation Engine is described in a manner that is useful for application studies (e.g., as a stand-alone propulsion system, in combined cycles, or in hybrid turbomachinery cycles). It is shown that the Pulse Detonation Engine may be characterized by an averaged total pressure ratio, which is a unique function of the inlet temperature, the fraction of the inlet flow containing a reacting mixture, and the stoichiometry of the mixture. The inlet temperature and stoichiometry (equivalence ratio) may in turn be combined to form a nondimensional heat addition parameter. For each value of this parameter, the average total enthalpy ratio and total pressure ratio across the device are functions of only the reactant fill fraction. Performance over the entire operating envelope can thus be presented on a single plot of total pressure ratio versus total enthalpy ratio for families of the heat addition parameter. Total pressure ratios are derived from thrust calculations obtained from an experimentally validated, reactive Euler code capable of computing complete Pulse Detonation Engine limit cycles. Results are presented which demonstrate the utility of the described method for assessing performance of the Pulse Detonation Engine in several potential applications. Limitations and assumptions of the analysis are discussed. Details of the particular detonative cycle used for the computations are described.

  6. Preparation of AgInS2 nanoparticles by a facile microwave heating technique; study of effective parameters, optical and photovoltaic characteristics

    NASA Astrophysics Data System (ADS)

    Tadjarodi, Azadeh; Cheshmekhavar, Amir Hossein; Imani, Mina

    2012-12-01

    In this work, AgInS2 (AIS) semiconductor nanoparticles were synthesized by an efficient and facile microwave heating technique using several sulfur sources and solvents at different reaction times. The SEM images show the particle morphology for all products obtained under the arranged reaction conditions. A particle size of 70 nm was obtained using thioacetamide (TAA) as the sulfur source and ethylene glycol (EG) as the solvent at a reaction time of 5 min. It was found that changing the mentioned parameters alters the particle size of the resulting products. The average particle size was estimated using a microstructure measurement program and Minitab statistical software. An optical band gap energy of 1.96 eV for the synthesized AIS nanoparticles was determined by diffuse reflectance spectroscopy (DRS). An AgInS2/CdS/CuInSe2 heterojunction solar cell was constructed, and the photovoltaic parameters, i.e., open-circuit voltage (Voc), short-circuit current (Jsc) and fill factor (FF), were estimated from the photocurrent-voltage (I-V) curve. The calculated fill factor of 30% and energy conversion efficiency of 1.58% revealed the capability of AIS nanoparticles for use in solar cell devices.
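
    The quoted photovoltaic figures of merit follow from the I-V curve as FF = Pmax/(Voc*Jsc) and efficiency = Pmax/Pin; a short sketch with made-up I-V samples:

      # Sketch of the photovoltaic figures of merit quoted in the abstract; the
      # I-V samples below are made up for illustration.
      import numpy as np

      def fill_factor_and_efficiency(v, j, p_in_mw_cm2=100.0):
          p = v * j                                   # power density along the I-V curve
          p_max = p.max()
          ff = p_max / (v.max() * j.max())            # FF = Pmax / (Voc * Jsc)
          eta = 100.0 * p_max / p_in_mw_cm2           # efficiency vs. incident power
          return ff, eta

      v = np.linspace(0.0, 0.45, 10)                  # V; the last point (j = 0) plays Voc
      j = np.array([11.7, 11.6, 11.4, 11.0, 10.3, 9.2, 7.6, 5.4, 2.9, 0.0])  # mA/cm^2
      ff, eta = fill_factor_and_efficiency(v, j)
      print(f"FF = {ff:.2f}, efficiency = {eta:.2f} %")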

  7. Importance of dispersion and electron correlation in ab initio protein folding.

    PubMed

    He, Xiao; Fusti-Molnar, Laszlo; Cui, Guanglei; Merz, Kenneth M

    2009-04-16

    Dispersion is well-known to be important in biological systems, but the effect of electron correlation in such systems remains unclear. In order to assess the relationship between the structure of a protein and its electron correlation energy, we employed both full system Hartree-Fock (HF) and second-order Møller-Plesset perturbation (MP2) calculations in conjunction with the Polarizable Continuum Model (PCM) on the native structures of two proteins and their corresponding computer-generated decoy sets. Because of the expense of the MP2 calculation, we have utilized the fragment molecular orbital method (FMO) in this study. We show that the sum of the Hartree-Fock (HF) energy and force field (LJ6)-derived dispersion energy (HF + LJ6) is well correlated with the energies obtained using second-order Møller-Plesset perturbation (MP2) theory. In one of the two examples studied, the correlation energy as well as the empirical dispersive energy term was able to discriminate between native and decoy structures. On the other hand, for the second protein we studied, neither the correlation energy nor dispersion energy showed discrimination capabilities; however, the ab initio MP2 energy and the HF+LJ6 both ranked the native structure correctly. Furthermore, when we randomly scrambled the Lennard-Jones parameters, the correlation between the MP2 energy and the sum of the HF energy and dispersive energy (HF+LJ6) significantly drops, which indicates that the choice of Lennard-Jones parameters is important.

  8. NMR crystallography of α-poly(L-lactide).

    PubMed

    Pawlak, Tomasz; Jaworska, Magdalena; Potrzebowski, Marek J

    2013-03-07

    A complementary approach that combines NMR measurements, analysis of X-ray and neutron powder diffraction data and advanced quantum mechanical calculations was employed to study the α-polymorph of L-polylactide. Such a strategy, which is known as NMR crystallography, to the best of our knowledge, is used here for the first time for the fine refinement of the crystal structure of a synthetic polymer. The GIPAW method was used to compute the NMR shielding parameters for the different models, which included the α-PLLA structure obtained by 2-dimensional wide-angle X-ray diffraction (WAXD) at -150 °C (model M1) and at 25 °C (model M2), neutron diffraction (WAND) measurements (model M3) and the fully optimized geometry of the PLLA chains in the unit cell with defined size (model M4). The influence of changes in the chain conformation on the (13)C σ(ii) NMR shielding parameters is shown. The correlation between the σ(ii) and δ(ii) values for the M1-M4 models revealed that the M4 model provided the best fit. Moreover, a comparison of the experimental (13)C NMR spectra with the spectra calculated using the M1-M4 models strongly supports the data for the M4 model. The GIPAW method, via verification using NMR measurements, was shown to be capable of the fine refinement of the crystal structures of polymers when coarse X-ray diffraction data for powdered samples are available.

  9. Seismic analysis for translational failure of landfills with retaining walls.

    PubMed

    Feng, Shi-Jin; Gao, Li-Ya

    2010-11-01

    In the seismic impact zone, seismic force can be a major triggering mechanism for translational failures of landfills. The scope of this paper is to develop a three-part wedge method for seismic analysis of translational failures of landfills with retaining walls, from which an approximate solution for the factor of safety can be calculated. Unlike previous conventional limit equilibrium methods, the new method is capable of revealing the effects of both the solid waste shear strength and the retaining wall on translational failures of landfills during earthquakes. Parameter studies of the developed method show that the factor of safety decreases with increasing seismic coefficient, while it increases quickly with increasing minimum friction angle beneath the waste mass for various horizontal seismic coefficients. Increasing the minimum friction angle beneath the waste mass appears to be more effective than any other parameter for increasing the factor of safety under the considered conditions; thus, selecting liner materials with a higher friction angle will considerably reduce the potential for translational failures of landfills during earthquakes. The factor of safety gradually increases with the height of the retaining wall for various horizontal seismic coefficients: a higher retaining wall is beneficial to the seismic stability of the landfill, and simply ignoring the retaining wall leads to serious underestimation of the factor of safety. An approximate solution for the yield acceleration coefficient of the landfill is also presented based on the developed method. Copyright © 2010 Elsevier Ltd. All rights reserved.

  10. User's guide to PHREEQC, a computer program for speciation, reaction-path, advective-transport, and inverse geochemical calculations

    USGS Publications Warehouse

    Parkhurst, D.L.

    1995-01-01

    PHREEQC is a computer program written in the C programming language that is designed to perform a wide variety of aqueous geochemical calculations. PHREEQC is based on an ion-association aqueous model and has capabilities for (1) speciation and saturation-index calculations, (2) reaction-path and advective-transport calculations involving specified irreversible reactions, mixing of solutions, mineral and gas equilibria, surface-complexation reactions, and ion-exchange reactions, and (3) inverse modeling, which finds sets of mineral and gas mole transfers that account for composition differences between waters, within specified compositional uncertainties. PHREEQC is derived from the Fortran program PHREEQE, but it has been completely rewritten in C with the addition of many new capabilities. New features include the capabilities to use redox couples to distribute redox elements among their valence states in speciation calculations; to model ion-exchange and surface-complexation reactions; to model reactions with a fixed-pressure, multicomponent gas phase (that is, a gas bubble); to calculate the mass of water in the aqueous phase during reaction and transport calculations; to keep track of the moles of minerals present in the solid phases and determine automatically the thermodynamically stable phase assemblage; to simulate advective transport in combination with PHREEQC's reaction-modeling capability; and to make inverse modeling calculations that allow for uncertainties in the analytical data. The user interface is improved through the use of a simplified approach to redox reactions, which includes explicit mole-balance equations for hydrogen and oxygen; the use of a revised input that is modular and completely free format; and the use of mineral names and standard chemical symbolism rather than index numbers. The use of C eliminates nearly all limitations on array sizes, including numbers of elements, aqueous species, solutions, phases, and lengths of character strings. A new equation solver that optimizes a set of equalities subject to both equality and inequality constraints is used to determine the thermodynamically stable set of phases in equilibrium with a solution. A more complete Newton-Raphson formulation, master-species switching, and scaling of the algebraic equations reduce the number of failures of the numerical method in PHREEQC relative to PHREEQE. This report presents the equations that are the basis for chemical equilibrium and inverse-modeling calculations in PHREEQC, describes the input for the program, and presents twelve examples that demonstrate most of the program's capabilities.

  11. Relationship between electronic properties and drug activity of seven quinoxaline compounds: A DFT study

    NASA Astrophysics Data System (ADS)

    Behzadi, Hadi; Roonasi, Payman; Assle taghipour, Khatoon; van der Spoel, David; Manzetti, Sergio

    2015-07-01

    Quantum chemical calculations at the DFT/B3LYP level of theory were carried out on seven quinoxaline compounds, which have been synthesized as anti-Mycobacterium tuberculosis agents. Three conformers were optimized for each compound, and the lowest-energy structure was found and used in further calculations. The electronic properties, including EHOMO, ELUMO and related parameters, as well as the electron density around the oxygen and nitrogen atoms, were calculated for each compound. The relationship between the calculated electronic parameters and the biological activity of the studied compounds was investigated. Six similar quinoxaline derivatives with possibly higher drug activity were suggested based on the calculated electronic descriptors. A mechanism was proposed and discussed based on the calculated electronic parameters and bond dissociation energies.

  12. High fidelity studies of exploding foil initiator bridges, Part 3: ALEGRA MHD simulations

    NASA Astrophysics Data System (ADS)

    Neal, William; Garasi, Christopher

    2017-01-01

    Simulations of high voltage detonators, such as Exploding Bridgewire (EBW) and Exploding Foil Initiators (EFI), have historically been simple, often empirical, one-dimensional models capable of predicting parameters such as current, voltage, and in the case of EFIs, flyer velocity. Experimental methods have correspondingly generally been limited to the same parameters. With the advent of complex, first principles magnetohydrodynamic codes such as ALEGRA and ALE-MHD, it is now possible to simulate these components in three dimensions, and predict a much greater range of parameters than before. A significant improvement in experimental capability was therefore required to ensure these simulations could be adequately verified. In this third paper of a three part study, the experimental results presented in part 2 are compared against 3-dimensional MHD simulations. This improved experimental capability, along with advanced simulations, offer an opportunity to gain a greater understanding of the processes behind the functioning of EBW and EFI detonators.

  13. The Hildebrand Solubility Parameters of Ionic Liquids—Part 2

    PubMed Central

    Marciniak, Andrzej

    2011-01-01

    The Hildebrand solubility parameters have been calculated for eight ionic liquids. Retention data from the inverse gas chromatography measurements of the activity coefficients at infinite dilution were used for the calculation. From the solubility parameters, the enthalpies of vaporization of ionic liquids were estimated. Results are compared with solubility parameters estimated by different methods. PMID:21747694

  14. Subgroup Benchmark Calculations for the Intra-Pellet Nonuniform Temperature Cases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Kang Seog; Jung, Yeon Sang; Liu, Yuxuan

    A benchmark suite has been developed by Seoul National University (SNU) for intrapellet nonuniform temperature distribution cases based on the practical temperature profiles according to the thermal power levels. Though a new subgroup capability for nonuniform temperature distribution was implemented in MPACT, no validation calculation has been performed for the new capability. This study focuses on benchmarking the new capability through a code-to-code comparison. Two continuous-energy Monte Carlo codes, McCARD and CE-KENO, are engaged in obtaining reference solutions, and the MPACT results are compared to the SNU nTRACER using a similar cross section library and subgroup method to obtain self-shielded cross sections.

  15. Properties of C4F7N–CO2 thermal plasmas: thermodynamic properties, transport coefficients and emission coefficients

    NASA Astrophysics Data System (ADS)

    Wu, Yi; Wang, Chunlin; Sun, Hao; Murphy, Anthony B.; Rong, Mingzhe; Yang, Fei; Chen, Zhexin; Niu, Chunpin; Wang, Xiaohua

    2018-04-01

    The thermophysical properties, including composition, thermodynamic properties, transport coefficients and net emission coefficients, of thermal plasmas formed from pure iso-C4 perfluoronitrile C4F7N and C4F7N–CO2 mixtures are calculated for temperatures from 300 to 30 000 K and pressures from 0.1 to 20 atm. These gases have received much attention as alternatives to SF6 for use in circuit breakers, due to the low global warming potential and good dielectric properties of C4F7N. Since the parameters of the large molecules formed in the dissociation of C4F7N are unavailable, the partition function and enthalpy of formation were calculated using computational chemistry methods. From the equilibrium composition calculations, it was found that when C4F7N is mixed with CO2, CO2 can capture C atoms from C4F7N, producing CO, since the system consisting of small molecules such as CF4 and CO has lower energy at room temperature. This is in agreement with previous experimental results, which show that CO dominates the decomposition products of C4F7N–CO2 mixtures; it could limit the repeated breaking performance of C4F7N. From the point of view of chemical stability, the mixing ratio of CO2 should therefore be chosen carefully. Through comparison with common arc quenching gases (including SF6, CF3I and C5F10O), it is found that for the temperature range for which electrical conductivity remains low, pure C4F7N has similar ρCp (product of mass density and specific heat) properties to SF6, and higher radiative emission coefficient, properties that are correlated with good arc extinguishing capability. For C4F7N–CO2 mixtures, the electrical conductivity is very close to that of SF6, while the ρCp peak at 7000 K caused by decomposition of CO implies inferior interruption capability to that of SF6. The calculated properties will be useful in arc simulations.

  16. A generalized time-frequency subtraction method for robust speech enhancement based on wavelet filter banks modeling of human auditory system.

    PubMed

    Shao, Yu; Chang, Chip-Hong

    2007-08-01

    We present a new speech enhancement scheme for a single-microphone system to meet the demand for quality noise reduction algorithms capable of operating at a very low signal-to-noise ratio. A psychoacoustic model is incorporated into the generalized perceptual wavelet denoising method to reduce the residual noise and improve the intelligibility of speech. The proposed method is a generalized time-frequency subtraction algorithm, which advantageously exploits the wavelet multirate signal representation to preserve the critical transient information. Simultaneous masking and temporal masking of the human auditory system are modeled by the perceptual wavelet packet transform via the frequency and temporal localization of speech components. The wavelet coefficients are used to calculate the Bark spreading energy and temporal spreading energy, from which a time-frequency masking threshold is deduced to adaptively adjust the subtraction parameters of the proposed method. An unvoiced speech enhancement algorithm is also integrated into the system to improve the intelligibility of speech. Through rigorous objective and subjective evaluations, it is shown that the proposed speech enhancement system is capable of reducing noise with little speech degradation in adverse noise environments and the overall performance is superior to several competitive methods.

  17. Combined Delivery of Consolidating Pulps to the Remote Sites of Deposits

    NASA Astrophysics Data System (ADS)

    Golik, V. I.; Efremenkov, A. B.

    2017-07-01

    A problem of modern mining production is the limited applicability of environmental and resource-saving technologies that use consolidating pulps when developing sites of the ore field remote from the stowing complexes, which significantly reduces the performance indicators of underground mining of metallic ores. The experimental approach to this problem demonstrates the technological feasibility and efficiency of a combined vibration-pneumatic-gravity-flowing method of pulp delivery over distances exceeding the capacity of current delivery methods, by studying the vibration phenomenon in an industrial special-structure pipeline. The results of a full-scale experiment confirm the theoretical calculations that consolidating stowing of common composition can be delivered over distances exceeding the capacity of the usual pneumatic-gravity-flowing method, owing to the reduced frictional resistance of the consolidating stowing to movement along the pipeline. The interaction parameters of the consolidating stowing components improve during pipeline delivery, increasing the stowing strength; completeness of subsurface use improves, land is saved for agricultural application, and the environmental stress is relieved.

  18. Sunspot: A program to model the behavior of hypervelocity impact damaged multilayer insulation in the Sunspot thermal vacuum chamber of Marshall Space Flight Center

    NASA Technical Reports Server (NTRS)

    Rule, W. K.; Hayashida, K. B.

    1992-01-01

    The development of a computer program to predict the degradation of the insulating capabilities of the multilayer insulation (MLI) blanket of Space Station Freedom due to a hypervelocity impact with a space debris particle is described. A finite difference scheme is used for the calculations. The computer program was written in Microsoft BASIC. Also described is a test program that was undertaken to validate the numerical model. Twelve MLI specimens were impacted at hypervelocities with simulated debris particles using a light gas gun at Marshall Space Flight Center. The impact-damaged MLI specimens were then tested for insulating capability in the space environment of the Sunspot thermal vacuum chamber at MSFC. Two undamaged MLI specimens were also tested for comparison with the test results of the damaged specimens. The numerical model was found to adequately predict behavior of the MLI specimens in the Sunspot chamber. A parameter, called diameter ratio, was developed to relate the nominal MLI impact damage to the apparent (for thermal analysis purposes) impact damage based on the hypervelocity impact conditions of a specimen.

  19. Yong-Ki Kim — His Life and Recent Work

    NASA Astrophysics Data System (ADS)

    Stone, Philip M.

    2007-08-01

    Dr. Kim made internationally recognized contributions in many areas of atomic physics research and applications, and was still very active when he was killed in an automobile accident. He joined NIST in 1983 after 17 years at the Argonne National Laboratory following his Ph.D. work at the University of Chicago. Much of his early work at Argonne and especially at NIST was the elucidation and detailed analysis of the structure of highly charged ions. He developed a sophisticated, fully relativistic atomic structure theory that accurately predicts atomic energy levels, transition wavelengths, lifetimes, and transition probabilities for a large number of ions. This information has been vital to model the properties of the hot interior of fusion research plasmas, where atomic ions must be described with relativistic atomic structure calculations. In recent years, Dr. Kim worked on the precise calculation of ionization and excitation cross sections of numerous atoms, ions, and molecules that are important in fusion research and in plasma processing for manufacturing semiconductor chips. Dr. Kim greatly advanced the state-of-the-art of calculations for these cross sections through development and implementation of highly innovative methods, including his Binary-Encounter-Bethe (BEB) theory and a scaled plane wave Born (scaled PWB) theory. His methods, using closed quantum mechanical formulas and no adjustable parameters, avoid tedious large-scale computations with main-frame computers. His calculations closely reproduce the results of benchmark experiments as well as large-scale calculations requiring hours of computer time. This recent work on BEB and scaled PWB is reviewed and examples of its capabilities are shown.

  20. Calculation of Weibull strength parameters and Batdorf flaw-density constants for volume- and surface-flaw-induced fracture in ceramics

    NASA Technical Reports Server (NTRS)

    Pai, Shantaram S.; Gyekenyesi, John P.

    1989-01-01

    The calculation of the shape and scale parameters of the two-parameter Weibull distribution is described using least-squares analysis and maximum likelihood methods for volume- and surface-flaw-induced fracture in ceramics with complete and censored samples. Detailed procedures are given for evaluating 90 percent confidence intervals for maximum likelihood estimates of shape and scale parameters, the unbiased estimates of the shape parameters, and the Weibull mean values and corresponding standard deviations. Furthermore, the necessary steps are described for detecting outliers and for calculating the Kolmogorov-Smirnov and the Anderson-Darling goodness-of-fit statistics and 90 percent confidence bands about the Weibull distribution. It is also shown how to calculate the Batdorf flaw-density constants by using the Weibull distribution statistical parameters. The techniques described were verified with several example problems from the open literature and were coded in the Structural Ceramics Analysis and Reliability Evaluation (SCARE) design program.
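
    A minimal sketch of the maximum likelihood part of this workflow, using SciPy's two-parameter Weibull fit (location fixed at zero) and a Kolmogorov-Smirnov check on hypothetical strength data:

      # Hedged sketch of a two-parameter Weibull MLE fit with a KS goodness-of-fit
      # check; the strength values are made up for illustration.
      import numpy as np
      from scipy.stats import weibull_min, kstest

      # Hypothetical fracture strengths (MPa) for a ceramic sample set.
      strengths = np.array([312., 348., 355., 371., 380., 393., 401., 415., 427., 450.])

      shape, loc, scale = weibull_min.fit(strengths, floc=0.0)
      print(f"Weibull modulus m = {shape:.2f}, characteristic strength = {scale:.1f} MPa")

      # Kolmogorov-Smirnov statistic against the fitted distribution.
      stat, pvalue = kstest(strengths, weibull_min(shape, loc, scale).cdf)
      print(f"KS statistic = {stat:.3f} (p = {pvalue:.2f})")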

  1. Atomic Radius and Charge Parameter Uncertainty in Biomolecular Solvation Energy Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xiu; Lei, Huan; Gao, Peiyuan

    Atomic radii and charges are two major parameters used in implicit solvent electrostatics and energy calculations. The optimization problem for charges and radii is under-determined, leading to uncertainty in the values of these parameters and in the results of solvation energy calculations using these parameters. This paper presents a method for quantifying this uncertainty in solvation energies using surrogate models based on generalized polynomial chaos (gPC) expansions. There are relatively few atom types used to specify radii parameters in implicit solvation calculations; therefore, surrogate models for these low-dimensional spaces could be constructed using least-squares fitting. However, there are many more types of atomic charges; therefore, construction of surrogate models for the charge parameter space required compressed sensing combined with an iterative rotation method to enhance problem sparsity. We present results for the uncertainty in small molecule solvation energies based on these approaches. Additionally, we explore the correlation between uncertainties due to radii and charges which motivates the need for future work in uncertainty quantification methods for high-dimensional parameter spaces.

  2. Electrostatics of cysteine residues in proteins: Parameterization and validation of a simple model

    PubMed Central

    Salsbury, Freddie R.; Poole, Leslie B.; Fetrow, Jacquelyn S.

    2013-01-01

    One of the most popular and simple models for the calculation of pKas from a protein structure is the semi-macroscopic electrostatic model MEAD. This model requires empirical parameters for each residue to calculate pKas. Analysis of current, widely used empirical parameters for cysteine residues showed that they did not reproduce expected cysteine pKas; thus, we set out to identify parameters consistent with the CHARMM27 force field that capture both the behavior of typical cysteines in proteins and the behavior of cysteines which have perturbed pKas. The new parameters were validated in three ways: (1) calculation across a large set of typical cysteines in proteins (where the calculations are expected to reproduce expected ensemble behavior); (2) calculation across a set of perturbed cysteines in proteins (where the calculations are expected to reproduce the shifted ensemble behavior); and (3) comparison to experimentally determined pKa values (where the calculation should reproduce the pKa within experimental error). Both the general behavior of cysteines in proteins and the perturbed pKa in some proteins can be predicted reasonably well using the newly determined empirical parameters within the MEAD model for protein electrostatics. This study provides the first general analysis of the electrostatics of cysteines in proteins, with specific attention paid to capturing both the behavior of typical cysteines in a protein and the behavior of cysteines whose pKa should be shifted, and validation of force field parameters for cysteine residues. PMID:22777874

  3. The Easy Way of Finding Parameters in IBM (EWofFP-IBM)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turkan, Nureddin

    E2/M1 multipole mixing ratios of even-even nuclei in the transitional region can be calculated, together with B(E2) and B(M1) values, by using the PHINT and/or NP-BOS codes. Correct calculations of the energies must be obtained to produce such results, and correct parameter values are needed to calculate the energies. The logic of the codes is based on the mathematical and physical statements describing the interacting boson model (IBM), one of the models of nuclear structure physics. The big problem here is to find the best-fitted parameter values of the model. Using the Easy Way of Finding Parameters in IBM (EWofFP-IBM), the best parameter values of the IBM Hamiltonian for 102-110Pd and 102-110Ru isotopes were first obtained and the energies were then calculated. In the end, the calculated results were seen to be in good agreement with the experimental ones. In addition, it was shown that the energy values obtained using the EWofFP-IBM are dominantly better than the previous theoretical data.

  4. Power capability evaluation for lithium iron phosphate batteries based on multi-parameter constraints estimation

    NASA Astrophysics Data System (ADS)

    Wang, Yujie; Pan, Rui; Liu, Chang; Chen, Zonghai; Ling, Qiang

    2018-01-01

    The battery power capability is intimately correlated with the climbing, braking and accelerating performance of electric vehicles. Accurate power capability prediction can not only guarantee safety but also regulate driving behavior and optimize battery energy usage. However, the nonlinearity of the battery model is very complex, especially for lithium iron phosphate batteries, and the hysteresis loop in the open-circuit voltage curve can easily cause large errors in model prediction. In this work, a multi-parameter-constraint dynamic estimation method is proposed to predict the battery's continuous-period power capability. A high-fidelity battery model that considers the battery polarization and hysteresis phenomena is presented to approximate the strong nonlinearity of the lithium iron phosphate battery. Explicit analyses of power capability under multiple constraints are elaborated; in particular, the state-of-energy is considered in the power capability assessment. Furthermore, to solve the nonlinear system state-estimation problem and suppress noise interference, a UKF-based state observer is employed for power capability prediction. The performance of the proposed methodology is demonstrated by experiments under different dynamic characterization schedules. The charge and discharge power capabilities of the lithium iron phosphate batteries are quantitatively assessed under different time scales and temperatures.
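
    A much-simplified sketch of a voltage-, current- and state-of-charge-constrained peak power estimate over a horizon dt, using a first-order OCV-R model; the paper's full multi-constraint UKF scheme is considerably richer, and all numbers are illustrative.

      # Hedged sketch of a constrained peak-power estimate with a simple OCV-R model.
      def peak_discharge_power(ocv, r0, v_min, i_max, soc, soc_min, capacity_ah, dt_s):
          # Current allowed by the terminal-voltage limit of the OCV-R model.
          i_volt = (ocv - v_min) / r0
          # Current allowed by the state-of-charge (energy) constraint over dt.
          i_soc = (soc - soc_min) * capacity_ah * 3600.0 / dt_s
          i = min(i_volt, i_soc, i_max)          # most restrictive constraint wins
          return v_min * i                       # power at the limiting condition, W

      print(f"{peak_discharge_power(3.3, 0.002, 2.5, 300.0, 0.6, 0.1, 20.0, 10.0):.0f} W")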

  5. System and method for calibrating a rotary absolute position sensor

    NASA Technical Reports Server (NTRS)

    Davis, Donald R. (Inventor); Permenter, Frank Noble (Inventor); Radford, Nicolaus A (Inventor)

    2012-01-01

    A system includes a rotary device, a rotary absolute position (RAP) sensor generating encoded pairs of voltage signals describing positional data of the rotary device, a host machine, and an algorithm. The algorithm calculates calibration parameters usable to determine an absolute position of the rotary device using the encoded pairs, and is adapted for linearly-mapping an ellipse defined by the encoded pairs to thereby calculate the calibration parameters. A method of calibrating the RAP sensor includes measuring the rotary position as encoded pairs of voltage signals, linearly-mapping an ellipse defined by the encoded pairs to thereby calculate the calibration parameters, and calculating an absolute position of the rotary device using the calibration parameters. The calibration parameters include a positive definite matrix (A) and a center point (q) of the ellipse. The voltage signals may include an encoded sine and cosine of a rotary angle of the rotary device.
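
    A hedged sketch of the ellipse-mapping idea: fit an axis-aligned ellipse to the encoded (sine, cosine) voltage pairs by linear least squares to obtain the center and axes (playing the role of the calibration parameters), then recover the absolute angle from the centered, normalized signals. The exact linear mapping used by the inventors may differ.

      # Hedged sketch: ellipse-based calibration of offset, unequal-gain
      # sine/cosine channels, then angle recovery from the corrected signals.
      import numpy as np

      def calibrate(x, y):
          # Fit a*x^2 + b*y^2 + c*x + d*y = 1 (axis-aligned ellipse) by least squares.
          A = np.column_stack([x * x, y * y, x, y])
          a, b, c, d = np.linalg.lstsq(A, np.ones_like(x), rcond=None)[0]
          x0, y0 = -c / (2 * a), -d / (2 * b)                 # ellipse center
          rhs = 1 + a * x0 ** 2 + b * y0 ** 2
          ax_x, ax_y = np.sqrt(rhs / a), np.sqrt(rhs / b)     # semi-axes
          return x0, y0, ax_x, ax_y

      def absolute_angle(x, y, cal):
          x0, y0, ax_x, ax_y = cal
          return np.arctan2((y - y0) / ax_y, (x - x0) / ax_x)

      # Synthetic sensor data: offset, unequal-gain channels plus small noise.
      rng = np.random.default_rng(1)
      theta = np.linspace(0, 2 * np.pi, 400, endpoint=False)
      x = 2.5 + 1.2 * np.cos(theta) + 0.002 * rng.standard_normal(theta.size)
      y = 2.4 + 0.9 * np.sin(theta) + 0.002 * rng.standard_normal(theta.size)

      cal = calibrate(x, y)
      err = np.angle(np.exp(1j * (absolute_angle(x, y, cal) - theta)))
      print(f"max angle error: {np.degrees(np.abs(err)).max():.3f} deg")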

  6. The Solubility Parameters of Ionic Liquids

    PubMed Central

    Marciniak, Andrzej

    2010-01-01

    The Hildebrand solubility parameters have been calculated for 18 ionic liquids from inverse gas chromatography measurements of the activity coefficients at infinite dilution. Retention data were used for the calculation. The solubility parameters are helpful for predicting solubility in binary solvent mixtures. From the solubility parameters, the standard enthalpies of vaporization of the ionic liquids were estimated. PMID:20559495

  7. Cloud Inhomogeneity from MODIS

    NASA Technical Reports Server (NTRS)

    Oreopoulos, Lazaros; Cahalan, Robert F.

    2004-01-01

    Two full months (July 2003 and January 2004) of MODIS Atmosphere Level-3 data from the Terra and Aqua satellites are analyzed in order to characterize the horizontal variability of cloud optical thickness and water path at global scales. Various options to derive cloud variability parameters are discussed. The climatology of cloud inhomogeneity is built by first calculating daily parameter values at spatial scales of 1 degree x 1 degree, and then at zonal and global scales, followed by averaging over monthly time scales. Geographical, diurnal, and seasonal changes of inhomogeneity parameters are examined separately for the two cloud phases, and separately over land and ocean. We find that cloud inhomogeneity is weaker in summer than in winter, weaker over land than ocean for liquid clouds, weaker for local morning than local afternoon, about the same for liquid and ice clouds on a global scale, but with wider probability distribution functions (PDFs) and larger latitudinal variations for ice, and relatively insensitive to whether water path or optical thickness products are used. Typical mean values at hemispheric and global scales of the inhomogeneity parameter nu (roughly the mean over the standard deviation of water path or optical thickness) range from approximately 2.5 to 3, while for the inhomogeneity parameter chi (the ratio of the logarithmic to linear mean) from approximately 0.7 to 0.8. Values of chi for zonal averages can occasionally fall below 0.6, and for individual gridpoints below 0.5. Our results demonstrate that MODIS is capable of revealing significant fluctuations in cloud horizontal inhomogeneity and stress the need to model their global radiative effect in future studies.
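
    The two inhomogeneity parameters can be computed as sketched below, taking chi as the ratio of the logarithmic to the linear mean and nu, per the rough description above, as a mean-over-standard-deviation measure; the retrievals are simulated placeholders.

      # Hedged sketch of the two inhomogeneity parameters described above.
      import numpy as np

      def inhomogeneity(tau):
          tau = np.asarray(tau, dtype=float)
          chi = np.exp(np.mean(np.log(tau))) / np.mean(tau)   # log-to-linear mean ratio
          nu = np.mean(tau) / np.std(tau, ddof=1)             # "roughly mean over std"
          return chi, nu

      # Hypothetical 1-degree grid-box optical thickness retrievals.
      rng = np.random.default_rng(2)
      tau = rng.lognormal(mean=2.0, sigma=0.5, size=500)
      chi, nu = inhomogeneity(tau)
      print(f"chi = {chi:.2f}, nu = {nu:.2f}")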

  8. Application of troposphere model from NWP and GNSS data into real-time precise positioning

    NASA Astrophysics Data System (ADS)

    Wilgan, Karina; Hadas, Tomasz; Kazmierski, Kamil; Rohm, Witold; Bosy, Jaroslaw

    2016-04-01

    The tropospheric delay empirical models are usually functions of meteorological parameters (temperature, pressure and humidity). The application of standard atmosphere parameters or global models, such as GPT (global pressure/temperature) model or UNB3 (University of New Brunswick, version 3) model, may not be sufficient, especially for positioning in non-standard weather conditions. The possible solution is to use regional troposphere models based on real-time or near-real time measurements. We implement a regional troposphere model into the PPP (Precise Point Positioning) software GNSS-WARP (Wroclaw Algorithms for Real-time Positioning) developed at Wroclaw University of Environmental and Life Sciences. The software is capable of processing static and kinematic multi-GNSS data in real-time and post-processing mode and takes advantage of final IGS (International GNSS Service) products as well as IGS RTS (Real-Time Service) products. A shortcoming of PPP technique is the time required for the solution to converge. One of the reasons is the high correlation among the estimated parameters: troposphere delay, receiver clock offset and receiver height. To efficiently decorrelate these parameters, a significant change in satellite geometry is required. Alternative solution is to introduce the external high-quality regional troposphere delay model to constrain troposphere estimates. The proposed model consists of zenith total delays (ZTD) and mapping functions calculated from meteorological parameters from Numerical Weather Prediction model WRF (Weather Research and Forecasting) and ZTDs from ground-based GNSS stations using the least-squares collocation software COMEDIE (Collocation of Meteorological Data for Interpretation and Estimation of Tropospheric Pathdelays) developed at ETH Zurich.

  9. Power extraction calculation improvement when local parameters are included

    NASA Astrophysics Data System (ADS)

    Flores-Mateos, L. M.; Hartnett, M.

    2016-02-01

    The improvement of tidal resource assessment is studied by comparing two approaches in a two-dimensional, finite-difference hydrodynamic model, DIVAST-ADI, for a channel of non-varying cross-sectional area connecting two large basins. The first strategy considers a constant thrust coefficient; the second uses the local field parameters around the turbine. These parameters are obtained by applying open-channel theory to the tidal stream and treating the turbine as a linear momentum actuator disk. The parameters are the upstream and downstream (with respect to the turbine) speeds and depths, as well as the blockage ratio, the wake velocity and the bypass coefficients, and they have already been incorporated in the model. Figure (a) shows the numerical configuration at high tide developed with DIVAST-ADI. The experiment uses two open boundary conditions: a sinusoidal forcing introduced as a water level at (I, J=1), and a zero velocity with constant water depth maintained at (I, J=362); when the turbine is introduced, it is placed in the middle of the channel (I=161, J=181). The influence of the turbine on the velocity and elevation around the turbine region is evident; figures (b) and (c) show that the turbine produces a discontinuity in the depth and velocity profiles along a transect of the channel. Finally, the implemented configuration reproduced the quasi-steady flow condition with satisfactory accuracy, even without shock-capturing capability. Also, the range of the parameters 0.01<α4<0.55, $0

  10. A physiology-based model describing heterogeneity in glucose metabolism: the core of the Eindhoven Diabetes Education Simulator (E-DES).

    PubMed

    Maas, Anne H; Rozendaal, Yvonne J W; van Pul, Carola; Hilbers, Peter A J; Cottaar, Ward J; Haak, Harm R; van Riel, Natal A W

    2015-03-01

    Current diabetes education methods are costly, time-consuming, and do not actively engage the patient. Here, we describe the development and verification of the physiological model for healthy subjects that forms the basis of the Eindhoven Diabetes Education Simulator (E-DES). E-DES shall provide diabetes patients with an individualized virtual practice environment incorporating the main factors that influence glycemic control: food, exercise, and medication. The physiological model consists of 4 compartments for which the inflow and outflow of glucose and insulin are calculated using 6 nonlinear coupled differential equations and 14 parameters. These parameters are estimated on 12 sets of oral glucose tolerance test (OGTT) data (226 healthy subjects) obtained from literature. The resulting parameter set is verified on 8 separate literature OGTT data sets (229 subjects). The model is considered verified if 95% of the glucose data points lie within an acceptance range of ±20% of the corresponding model value. All glucose data points of the verification data sets lie within the predefined acceptance range. Physiological processes represented in the model include insulin resistance and β-cell function. Adjusting the corresponding parameters allows to describe heterogeneity in the data and shows the capabilities of this model for individualization. We have verified the physiological model of the E-DES for healthy subjects. Heterogeneity of the data has successfully been modeled by adjusting the 4 parameters describing insulin resistance and β-cell function. Our model will form the basis of a simulator providing individualized education on glucose control. © 2014 Diabetes Technology Society.
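
    The verification criterion lends itself to a direct sketch: a model is accepted if at least 95% of measured glucose points fall within ±20% of the corresponding model values (the OGTT numbers below are made up).

      # Hedged sketch of the +/-20% acceptance-range verification described above.
      import numpy as np

      def verified(model_glucose, measured_glucose, tol=0.20, required=0.95):
          model = np.asarray(model_glucose, dtype=float)
          data = np.asarray(measured_glucose, dtype=float)
          within = np.abs(data - model) <= tol * model   # point-wise acceptance test
          return within.mean() >= required, within.mean()

      # Hypothetical OGTT trace (mmol/L): model values vs. measured values.
      model = np.array([5.0, 7.8, 8.9, 8.2, 7.0, 6.1, 5.4])
      data = np.array([5.2, 7.1, 9.6, 8.9, 6.5, 5.8, 5.5])
      ok, frac = verified(model, data)
      print(f"verified: {ok} ({frac:.0%} of points within the acceptance range)")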

  11. Molecular dynamics simulations of fluid cyclopropane with MP2/CBS-fitted intermolecular interaction potentials

    NASA Astrophysics Data System (ADS)

    Ho, Yen-Ching; Wang, Yi-Siang; Chao, Sheng D.

    2017-08-01

    Modeling fluid cycloalkanes with molecular dynamics simulations has proven to be a very challenging task partly because of lacking a reliable force field based on quantum chemistry calculations. In this paper, we construct an ab initio force field for fluid cyclopropane using the second-order Møller-Plesset perturbation theory. We consider 15 conformers of the cyclopropane dimer for the orientation sampling. Single-point energies at important geometries are calibrated by the coupled cluster with single, double, and perturbative triple excitation method. Dunning's correlation consistent basis sets (up to aug-cc-pVTZ) are used in extrapolating the interaction energies at the complete basis set limit. The force field parameters in a 9-site Lennard-Jones model are regressed by the calculated interaction energies without using empirical data. With this ab initio force field, we perform molecular dynamics simulations of fluid cyclopropane and calculate both the structural and dynamical properties. We compare the simulation results with those using an empirical force field and obtain a quantitative agreement for the detailed atom-wise radial distribution functions. The experimentally observed gross radial distribution function (extracted from the neutron scattering measurements) is well reproduced in our simulation. Moreover, the calculated self-diffusion coefficients and shear viscosities are in good agreement with the experimental data over a wide range of thermodynamic conditions. To the best of our knowledge, this is the first ab initio force field which is capable of competing with empirical force fields for simulating fluid cyclopropane.
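
    A hedged sketch of the site-site pair-energy evaluation in a 9-site Lennard-Jones model: the dimer interaction is a sum of 12-6 terms over all site pairs. The geometry, epsilon, and sigma below are placeholders, not the paper's MP2/CBS-fitted, site-type-specific parameters.

      # Hedged sketch: dimer energy as a sum of 12-6 Lennard-Jones site-site terms.
      import numpy as np

      def lj(r, eps, sigma):
          sr6 = (sigma / r) ** 6
          return 4.0 * eps * (sr6 * sr6 - sr6)

      def dimer_energy(sites_a, sites_b, eps, sigma):
          # sites_*: (9, 3) arrays of site coordinates for each monomer (angstrom).
          d = np.linalg.norm(sites_a[:, None, :] - sites_b[None, :, :], axis=-1)
          return lj(d, eps, sigma).sum()

      # Stand-in monomer geometry: a flat 3x3 grid of nine sites.
      grid = np.array([[i, j, 0.0] for i in (-1.5, 0.0, 1.5) for j in (-1.5, 0.0, 1.5)])
      e = dimer_energy(grid, grid + [0.0, 0.0, 4.0], eps=0.1, sigma=3.0)
      print(f"dimer energy = {e:.3f} (in units of eps)")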

  12. Magnetic Exchange Couplings from Semilocal Functionals Evaluated Nonself-Consistently on Hybrid Densities: Insights on Relative Importance of Exchange, Correlation, and Delocalization.

    PubMed

    Phillips, Jordan J; Peralta, Juan E

    2012-09-11

    Semilocal functionals generally yield poor magnetic exchange couplings for transition-metal complexes, typically overpredicting in magnitude the experimental values. Here we show that semilocal functionals evaluated nonself-consistently on densities from hybrid functionals can yield magnetic exchange couplings that are greatly improved with respect to their self-consistent semilocal values. Furthermore, when semilocal functionals are evaluated nonself-consistently on densities from a "half-and-half" hybrid, their errors with respect to experimental values can actually be lower than those from self-consistent calculations with standard hybrid functionals such as PBEh or TPSSh. This illustrates that despite their notoriously poor performance for exchange couplings, for many systems semilocal functionals are capable of delivering accurate relative energies for magnetic states provided that their electron delocalization error is corrected. However, while self-consistent calculations with hybrids uniformly improve results for all complexes, evaluating nonself-consistently with semilocal functionals does not give a balanced improvement for both ferro- and antiferromagnetically coupled complexes, indicating that there is more at play with the overestimation problem than simply the delocalization error. Additionally, we show that for some systems the conventional wisdom of choice of exchange functional mattering more than correlation does not hold. This combined with results from the nonself-consistent calculations provide insight on clarifying the relative roles of exchange, correlation, and delocalization in calculating magnetic exchange coupling parameters in Kohn-Sham Density Functional Theory.

  13. Molecular Properties by Quantum Monte Carlo: An Investigation on the Role of the Wave Function Ansatz and the Basis Set in the Water Molecule

    PubMed Central

    Zen, Andrea; Luo, Ye; Sorella, Sandro; Guidoni, Leonardo

    2014-01-01

    Quantum Monte Carlo methods are accurate and promising many body techniques for electronic structure calculations which, in the last years, are encountering a growing interest thanks to their favorable scaling with the system size and their efficient parallelization, particularly suited for the modern high performance computing facilities. The ansatz of the wave function and its variational flexibility are crucial points for both the accurate description of molecular properties and the capabilities of the method to tackle large systems. In this paper, we extensively analyze, using different variational ansatzes, several properties of the water molecule, namely, the total energy, the dipole and quadrupole momenta, the ionization and atomization energies, the equilibrium configuration, and the harmonic and fundamental frequencies of vibration. The investigation mainly focuses on variational Monte Carlo calculations, although several lattice regularized diffusion Monte Carlo calculations are also reported. Through a systematic study, we provide a useful guide to the choice of the wave function, the pseudopotential, and the basis set for QMC calculations. We also introduce a new method for the computation of forces with finite variance on open systems and a new strategy for the definition of the atomic orbitals involved in the Jastrow-Antisymmetrised Geminal power wave function, in order to drastically reduce the number of variational parameters. This scheme significantly improves the efficiency of QMC energy minimization in case of large basis sets. PMID:24526929

  14. Flexibly imposing periodicity in kernel independent FMM: A multipole-to-local operator approach

    NASA Astrophysics Data System (ADS)

    Yan, Wen; Shelley, Michael

    2018-02-01

    An important but missing component in the application of the kernel independent fast multipole method (KIFMM) is the capability for flexibly and efficiently imposing singly, doubly, and triply periodic boundary conditions. In most popular packages such periodicities are imposed with the hierarchical repetition of periodic boxes, which may give an incorrect answer due to the conditional convergence of some kernel sums. Here we present an efficient method to properly impose periodic boundary conditions using a near-far splitting scheme. The near-field contribution is directly calculated with the KIFMM method, while the far-field contribution is calculated with a multipole-to-local (M2L) operator which is independent of the source and target point distribution. The M2L operator is constructed with the far-field portion of the kernel function to generate the far-field contribution with the downward equivalent source points in KIFMM. This method guarantees the sum of the near-field & far-field converge pointwise to results satisfying periodicity and compatibility conditions. The computational cost of the far-field calculation observes the same O (N) complexity as FMM and is designed to be small by reusing the data computed by KIFMM for the near-field. The far-field calculations require no additional control parameters, and observes the same theoretical error bound as KIFMM. We present accuracy and timing test results for the Laplace kernel in singly periodic domains and the Stokes velocity kernel in doubly and triply periodic domains.

  15. Fast Simulation of the Impact Parameter Calculation of Electrons through Pair Production

    NASA Astrophysics Data System (ADS)

    Bang, Hyesun; Kweon, MinJung; Huh, Kyoung Bum; Pachmayer, Yvonne

    2018-05-01

    A fast simulation method is introduced that tremendously reduces the time required for the impact parameter calculation, a key observable in physics analyses of high energy physics experiments and in detector optimisation studies. The impact parameter of electrons produced through pair production was calculated considering the key related processes, using the Bethe-Heitler formula, the Tsai formula and a simple geometric model. The calculations were performed at various conditions and the results were compared with those from full GEANT4 simulations. The computation time using this fast simulation method is 10^4 times shorter than that of the full GEANT4 simulation.

  16. Analysis of Pull-In Instability of Geometrically Nonlinear Microbeam Using Radial Basis Artificial Neural Network Based on Couple Stress Theory

    PubMed Central

    Heidari, Mohammad; Heidari, Ali; Homaei, Hadi

    2014-01-01

    The static pull-in instability of beam-type microelectromechanical systems (MEMS) is theoretically investigated. Two engineering cases, a cantilever and a double-cantilever microbeam, are considered. Considering the midplane stretching as the source of the nonlinearity in the beam behavior, a nonlinear size-dependent Euler-Bernoulli beam model is used based on a modified couple stress theory, capable of capturing the size effect. By selecting a range of geometric parameters such as beam length, width, thickness, gap, and size effect, we identify the static pull-in instability voltage. A MAPLE package is employed to solve the nonlinear governing differential equations and obtain the static pull-in instability voltage of the microbeams. A radial basis function artificial neural network with two functions has been used for modeling the static pull-in instability of the microcantilever beam. The network has four inputs, the length, width, gap, and the ratio of height to scale parameter of the beam, as the independent process variables, and the output is the static pull-in voltage of the microbeam. Numerical data were employed for training the network, and the capabilities of the model in predicting the pull-in instability behavior were verified. The output obtained from the neural network model is compared with the numerical results, and the relative error has been calculated. Based on this verification error, it is shown that the radial basis function neural network has an average error of 4.55% in predicting the pull-in voltage of the cantilever microbeam. Further analysis of the pull-in instability of the beam under different input conditions has been investigated, and comparison of the modeling results with the numerical ones shows good agreement, which also proves the feasibility and effectiveness of the adopted approach. The results reveal significant influences of the size effect and the geometric parameters on the static pull-in instability voltage of MEMS.
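
    A minimal sketch of an RBF-network regression of pull-in voltage on the four inputs, with Gaussian basis functions and linear output weights solved by least squares; the centers, widths, and synthetic training function are illustrative assumptions.

      # Hedged sketch: Gaussian RBF network with least-squares output weights.
      import numpy as np

      def rbf_design(X, centers, width):
          # Design matrix of Gaussian basis responses for all input/center pairs.
          d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
          return np.exp(-d2 / (2.0 * width ** 2))

      rng = np.random.default_rng(4)
      # Inputs: [length, width, gap, height/length-scale ratio], scaled to [0, 1].
      X = rng.uniform(size=(200, 4))
      # Stand-in "numerical" pull-in voltages from a smooth synthetic function.
      y = 20 + 30 * X[:, 2] - 10 * X[:, 0] + 5 * X[:, 1] * X[:, 3]

      centers = X[rng.choice(len(X), 25, replace=False)]   # 25 training points as centers
      Phi = rbf_design(X, centers, width=0.5)
      w, *_ = np.linalg.lstsq(Phi, y, rcond=None)          # linear output weights

      y_hat = rbf_design(X, centers, 0.5) @ w
      rel_err = 100 * np.mean(np.abs(y_hat - y) / y)
      print(f"mean relative error: {rel_err:.2f} %")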

  17. Reference intervals for 24 laboratory parameters determined in 24-hour urine collections.

    PubMed

    Curcio, Raffaele; Stettler, Helen; Suter, Paolo M; Aksözen, Jasmin Barman; Saleh, Lanja; Spanaus, Katharina; Bochud, Murielle; Minder, Elisabeth; von Eckardstein, Arnold

    2016-01-01

    Reference intervals for many laboratory parameters determined in 24-h urine collections are either not publicly available, or are based on small numbers, not sex-specific, or not derived from a representative sample. Osmolality and concentrations or enzymatic activities of sodium, potassium, chloride, glucose, creatinine, citrate, cortisol, pancreatic α-amylase, total protein, albumin, transferrin, immunoglobulin G, α1-microglobulin, α2-macroglobulin, as well as porphyrins and their precursors (δ-aminolevulinic acid and porphobilinogen) were determined in 241 24-h urine samples of a population-based cohort of asymptomatic adults (121 men and 120 women). For 16 of these 24 parameters creatinine-normalized ratios were calculated based on 24-h urine creatinine. The reference intervals for these parameters were calculated according to the CLSI C28-A3 statistical guidelines. In contrast to most published reference intervals, which do not stratify for sex, reference intervals of 12 of 24 laboratory parameters in 24-h urine collections and of eight of 16 parameters as creatinine-normalized ratios differed significantly between men and women. For six parameters calculated as 24-h urine excretion and four parameters calculated as creatinine-normalized ratios no reference intervals had been published before. For some parameters we found significant and relevant deviations from previously reported reference intervals, most notably for 24-h urine cortisol in women. Ten 24-h urine parameters showed weak or moderate sex-specific correlations with age. By applying up-to-date analytical methods and clinical chemistry analyzers to 24-h urine collections from a large population-based cohort we provide the most comprehensive set to date of sex-specific reference intervals calculated according to CLSI guidelines for parameters determined in 24-h urine collections.
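
    For illustration, a CLSI-style nonparametric reference interval is simply the central 95% of the reference sample, computed per sex when the distributions differ. The sketch below uses synthetic values, not the study's data.

        import numpy as np

        def reference_interval(values, lo=2.5, hi=97.5):
            # Nonparametric central 95% interval (CLSI C28-A3 style).
            return np.percentile(values, [lo, hi])

        rng = np.random.default_rng(1)
        creatinine_men = rng.normal(16.0, 3.0, 121)    # synthetic 24-h values
        creatinine_women = rng.normal(11.0, 2.0, 120)  # synthetic 24-h values
        print(reference_interval(creatinine_men))
        print(reference_interval(creatinine_women))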

  18. Impact of process parameters on the breakage kinetics of poorly water-soluble drugs during wet stirred media milling: a microhydrodynamic view.

    PubMed

    Afolabi, Afolawemi; Akinlabi, Olakemi; Bilgili, Ecevit

    2014-01-23

    Wet stirred media milling has proven to be a robust process for producing nanoparticle suspensions of poorly water-soluble drugs. As the process is expensive and energy-intensive, it is important to study the breakage kinetics, which determines the cycle time and production rate for a desired fineness. Although the impact of process parameters on the properties of final product suspensions has been investigated, scant information is available regarding their impact on the breakage kinetics. Here, we elucidate the impact of stirrer speed, bead concentration, and drug loading on the breakage kinetics via a microhydrodynamic model for the bead-bead collisions. Suspensions of griseofulvin, a model poorly water-soluble drug, were prepared in the presence of two stabilizers: hydroxypropyl cellulose and sodium dodecyl sulfate. Laser diffraction, scanning electron microscopy, and rheometry were used to characterize them. Various microhydrodynamic parameters, including a newly defined milling intensity factor, were calculated. An increase in either the stirrer speed or the bead concentration led to an increase in the specific energy and the milling intensity factor, and consequently faster breakage. On the other hand, an increase in the drug loading led to a decrease in these parameters and consequently slower breakage. While all microhydrodynamic parameters provided significant physical insight, only the milling intensity factor was capable of explaining the influence of all parameters directly through its strong correlation with the process time constant. Besides guiding process optimization, the analysis rationalizes the preparation of a single high drug-loaded batch (20% or higher) instead of multiple dilute batches. Copyright © 2013 Elsevier B.V. All rights reserved.

  19. Concurrent validity and reliability of wireless instrumented insoles measuring postural balance and temporal gait parameters.

    PubMed

    Oerbekke, Michiel S; Stukstette, Mirelle J; Schütte, Kurt; de Bie, Rob A; Pisters, Martijn F; Vanwanseele, Benedicte

    2017-01-01

    The OpenGo seems promising for taking gait analysis out of laboratory settings because of its mobility and capability for long-term measurements. However, the OpenGo's concurrent validity and reliability need to be assessed to determine if the instrument is suitable for validation in patient samples. Twenty healthy volunteers participated. Center of pressure data were collected under eyes open and closed conditions with participants performing unilateral stance trials on the gold standard (AMTI OR6-7 force plate) while wearing the OpenGo. Temporal gait data (stance time, gait cycle time, and cadence) were collected at a self-selected comfortable walking speed with participants performing test-retest trials on an instrumented treadmill while wearing the OpenGo. Validity was assessed using Bland-Altman plots. Reliability was assessed with the Intraclass Correlation Coefficient (2,1), and smallest detectable changes were calculated. Negative means of differences were found in all measured parameters, illustrating lower scores for the OpenGo on average. The OpenGo showed negative upper limits of agreement in center of pressure parameters on the mediolateral axis. Temporal reliability ICCs ranged from 0.90 to 0.93. Smallest detectable changes for both stance times were 0.04 (left) and 0.05 (right) seconds, for gait cycle time 0.08 s, and for cadence 4.5 steps per minute. The OpenGo is valid and reliable for the measurement of temporal gait parameters during walking. Measurements of center of pressure parameters during unilateral stance are not considered valid. The OpenGo seems a promising instrument for clinically screening and monitoring temporal gait parameters in patients; however, validation in patient populations is needed. Copyright © 2016 Elsevier B.V. All rights reserved.
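
    To make the reliability quantities concrete, the sketch below computes the two-way random-effects, single-measure ICC(2,1) from a subjects-by-sessions table and derives the smallest detectable change as 1.96·√2·SEM with SEM = SD·√(1 − ICC); the stance-time data are synthetic, not the study's.

        import numpy as np

        def icc_2_1(data):
            # data: n subjects x k sessions (two-way random, single measure).
            n, k = data.shape
            grand = data.mean()
            ms_rows = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)
            ms_cols = n * ((data.mean(axis=0) - grand) ** 2).sum() / (k - 1)
            resid = (data - data.mean(1, keepdims=True)
                          - data.mean(0, keepdims=True) + grand)
            ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
            return (ms_rows - ms_err) / (
                ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

        rng = np.random.default_rng(2)
        true = rng.normal(0.60, 0.05, 20)           # synthetic stance times (s)
        table = np.column_stack([true + rng.normal(0, 0.01, 20) for _ in (1, 2)])
        icc = icc_2_1(table)
        sem = table.std(ddof=1) * np.sqrt(1 - icc)  # standard error of measurement
        print(round(icc, 3), round(1.96 * np.sqrt(2) * sem, 3))  # ICC, SDC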

  20. Statistical evaluation of stability data: criteria for change-over-time and data variability.

    PubMed

    Bar, Raphael

    2003-01-01

    In a recently issued ICH Q1E guidance on the evaluation of stability data of drug substances and products, the need to perform a statistical extrapolation of a shelf-life of a drug product or a retest period for a drug substance is based heavily on whether the data exhibit a change-over-time and/or variability. However, this document suggests neither measures nor acceptance criteria for these two parameters. This paper demonstrates a useful application of simple statistical parameters for determining whether sets of stability data from either accelerated or long-term storage programs exhibit a change-over-time and/or variability. These parameters are all derived from a simple linear regression analysis first performed on the stability data. The p-value of the slope of the regression line is taken as a measure of change-over-time, and a value of 0.25 is suggested as the limit for an insignificant change in the monitored quantitative stability attributes. The minimal process capability index, Cpk, calculated from the standard deviation of the regression line, is suggested as a measure of variability, with a value of 2.5 as the limit for insignificant variability. The usefulness of the above two parameters, the p-value and Cpk, was demonstrated on stability data of a refrigerated drug product and on pooled data of three batches of a drug substance. In both cases, the determined parameters allowed characterization of the data in terms of change-over-time and variability. Consequently, complete evaluation of the stability data could be pursued according to the ICH guidance. It is believed that the application of the above two parameters with their acceptance criteria will allow a more unified evaluation of stability data.
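
    A minimal sketch of the two proposed measures, assuming made-up assay data and specification limits: the slope p-value comes from a simple linear regression of the attribute on time, and Cpk is formed from the distance to the nearer specification limit divided by three times the regression standard deviation.

        import numpy as np
        from scipy import stats

        months = np.array([0, 3, 6, 9, 12, 18, 24])
        assay = np.array([100.1, 99.8, 99.9, 99.5, 99.7, 99.4, 99.3])  # % claim
        lsl, usl = 95.0, 105.0             # assumed specification limits

        fit = stats.linregress(months, assay)
        pred = fit.intercept + fit.slope * months
        s = np.sqrt(((assay - pred) ** 2).sum() / (len(months) - 2))
        cpk = min(usl - assay.mean(), assay.mean() - lsl) / (3 * s)

        print(f"slope p-value = {fit.pvalue:.3f}  (change-over-time if < 0.25)")
        print(f"Cpk = {cpk:.1f}  (variability insignificant if >= 2.5)")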

  1. Nuclide Depletion Capabilities in the Shift Monte Carlo Code

    DOE PAGES

    Davidson, Gregory G.; Pandya, Tara M.; Johnson, Seth R.; ...

    2017-12-21

    A new depletion capability has been developed in the Exnihilo radiation transport code suite. This capability enables massively parallel domain-decomposed coupling between the Shift continuous-energy Monte Carlo solver and the nuclide depletion solvers in ORIGEN to perform high-performance Monte Carlo depletion calculations. This paper describes this new depletion capability and discusses its various features, including a multi-level parallel decomposition, high-order transport-depletion coupling, and energy-integrated power renormalization. Several test problems are presented to validate the new capability against other Monte Carlo depletion codes, and the parallel performance of the new capability is analyzed.

  2. Bushland Reference ET Calculator with QA/QC capabilities and iPhone/iPad application

    USDA-ARS?s Scientific Manuscript database

    Accurate daily reference evapotranspiration (ET) values are needed to estimate crop water demand for irrigation management and hydrologic modeling purposes. The USDA-ARS Conservation and Production Research Laboratory at Bushland, Texas developed the Bushland Reference ET (BET) Calculator for calcul...

  3. Experimental validation of a new heterogeneous mechanical test design

    NASA Astrophysics Data System (ADS)

    Aquino, J.; Campos, A. Andrade; Souto, N.; Thuillier, S.

    2018-05-01

    Standard material parameter identification strategies generally use an extensive number of classical tests to collect the required experimental data. However, a great effort has recently been made by the scientific and industrial communities to base this experimental database on heterogeneous tests. These tests can provide richer information on the material behavior, allowing the identification of a more complete set of material parameters. This is a result of the recent development of full-field measurement techniques, like digital image correlation (DIC), that can capture the heterogeneous deformation fields on the specimen surface during the test. Recently, new specimen geometries were designed to enhance the richness of the strain field and capture supplementary strain states. The butterfly specimen is an example of these new geometries, designed through a numerical optimization procedure based on an indicator capable of evaluating the heterogeneity and richness of the strain information. However, no experimental validation has yet been performed. The aim of this work is to experimentally validate the heterogeneous butterfly mechanical test in the parameter identification framework. For this aim, the DIC technique and a Finite Element Model Updating inverse strategy are used together for the parameter identification of a DC04 steel, as well as for the calculation of the indicator. The experimental tests are carried out in a universal testing machine with the ARAMIS measuring system to provide the strain states on the specimen surface. The identification strategy is accomplished with the data obtained from the experimental tests, and the results are compared to a reference numerical solution.

  4. TU-D-209-05: Automatic Calculation of Organ and Effective Dose for CBCT and Interventional Fluoroscopic Procedures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiong, Z; Vijayan, S; Oines, A

    Purpose: To compare PCXMC and EGSnrc calculated organ and effective radiation doses from cone-beam computed tomography (CBCT) and interventional fluoroscopically-guided procedures using automatic exposure-event grouping. Methods: For CBCT, we used PCXMC20Rotation.exe to automatically calculate the doses and compared the results to those calculated using EGSnrc with the Zubal patient phantom. For interventional procedures, we use the dose tracking system (DTS) which we previously developed to produce a log file of all geometry and exposure parameters for every x-ray pulse during a procedure, and the data in the log file is input into PCXMC and EGSnrc for dose calculation. A MATLAB program reads data from the log files and groups similar exposures to reduce calculation time. The definition files are then automatically generated in the format used by PCXMC and EGSnrc. Processing is done at the end of the procedure after all exposures are completed. Results: For the Toshiba Infinix CBCT LCI-Middle-Abdominal protocol, most organ doses calculated with PCXMC20Rotation closely matched those calculated with EGSnrc. The effective doses were 33.77 mSv with PCXMC20Rotation and 32.46 mSv with EGSnrc. For a simulated interventional cardiac procedure, similar close agreement in organ dose was obtained between the two codes; the effective doses were 12.02 mSv with PCXMC and 11.35 mSv with EGSnrc. The calculations can be completed on a PC without manual intervention in less than 15 minutes with PCXMC and in about 10 hours with EGSnrc, depending on the level of data grouping and accuracy desired. Conclusion: Effective dose and most organ doses in CBCT and interventional radiology calculated by PCXMC closely match those calculated by EGSnrc. Data grouping, which can be done automatically, makes the calculation time with PCXMC on a standard PC acceptable. This capability expands the dose information that can be provided by the DTS. Partial support from NIH Grant R01-EB002873 and Toshiba Medical Systems Corp.

  5. Comparison of diagnostic capability of macular ganglion cell complex and retinal nerve fiber layer among primary open angle glaucoma, ocular hypertension, and normal population using Fourier-domain optical coherence tomography and determining their functional correlation in Indian population

    PubMed Central

    Barua, Nabanita; Sitaraman, Chitra; Goel, Sonu; Chakraborti, Chandana; Mukherjee, Sonai; Parashar, Hemandra

    2016-01-01

    Context: Analysis of the diagnostic ability of the macular ganglion cell complex and retinal nerve fiber layer (RNFL) in glaucoma. Aim: To correlate functional and structural parameters and to compare the predictive value of each of the structural parameters using Fourier-domain (FD) optical coherence tomography (OCT) among primary open angle glaucoma (POAG) and ocular hypertension (OHT) versus a normal population. Setting and Design: Single-center, cross-sectional study of 234 eyes. Materials and Methods: Patients were enrolled in three groups: POAG, ocular hypertensive, and normal (40 patients in each group). After comprehensive ophthalmological examination, patients underwent standard automated perimetry and FD-OCT scans in optic nerve head and ganglion cell mode. The relationship was assessed by correlating ganglion cell complex (GCC) parameters with mean deviation. Results were compared with RNFL parameters. Statistical Analysis: Data were analyzed with SPSS, analysis of variance, t-test, Pearson's coefficient, and receiver operating characteristic curves. Results: All parameters showed strong correlation with visual field (P < 0.001). Inferior GCC had the highest area under the curve (AUC) for detecting glaucoma (0.827) in POAG versus the normal population. However, the difference was not statistically significant (P > 0.5) when compared with other parameters. None of the parameters showed significant diagnostic capability for distinguishing OHT from the normal population. In diagnosing early glaucoma from the OHT and normal populations, only inferior GCC had a statistically significant AUC value (0.715). Conclusion: In this study, GCC and RNFL parameters showed equal predictive capability in the perimetric versus normal group. In the early stage, inferior GCC was the best parameter. In the OHT population, single-day cross-sectional imaging was not valuable. PMID:27221682
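
    The AUROC comparison in such studies reduces to scoring each structural parameter against the diagnostic label; a minimal sketch with synthetic thickness values (thinner tissue indicating disease, hence the negated score) is shown below.

        import numpy as np
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(3)
        y = np.r_[np.ones(40), np.zeros(40)]   # 1 = glaucoma, 0 = normal
        inferior_gcc = np.r_[rng.normal(80, 12, 40), rng.normal(100, 10, 40)]
        rnfl = np.r_[rng.normal(85, 14, 40), rng.normal(100, 12, 40)]

        # Thinner layers indicate disease, so score with the negated thickness.
        print(roc_auc_score(y, -inferior_gcc), roc_auc_score(y, -rnfl))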

  6. Simulation of breaking waves using the high-order spectral method with laboratory experiments: wave-breaking energy dissipation

    NASA Astrophysics Data System (ADS)

    Seiffert, Betsy R.; Ducrozet, Guillaume

    2018-01-01

    We examine the implementation of a wave-breaking mechanism into a nonlinear potential flow solver. The success of the mechanism will be studied by implementing it into the numerical model HOS-NWT, which is a computationally efficient, open source code that solves for the free surface in a numerical wave tank using the high-order spectral (HOS) method. Once the breaking mechanism is validated, it can be implemented into other nonlinear potential flow models. To solve for wave-breaking, first a wave-breaking onset parameter is identified, and then a method for computing wave-breaking associated energy loss is determined. Wave-breaking onset is calculated using a breaking criteria introduced by Barthelemy et al. (J Fluid Mech https://arxiv.org/pdf/1508.06002.pdf, submitted) and validated with the experiments of Saket et al. (J Fluid Mech 811:642-658, 2017). Wave-breaking energy dissipation is calculated by adding a viscous diffusion term computed using an eddy viscosity parameter introduced by Tian et al. (Phys Fluids 20(6): 066,604, 2008, Phys Fluids 24(3), 2012), which is estimated based on the pre-breaking wave geometry. A set of two-dimensional experiments is conducted to validate the implemented wave breaking mechanism at a large scale. Breaking waves are generated by using traditional methods of evolution of focused waves and modulational instability, as well as irregular breaking waves with a range of primary frequencies, providing a wide range of breaking conditions to validate the solver. Furthermore, adjustments are made to the method of application and coefficient of the viscous diffusion term with negligible difference, supporting the robustness of the eddy viscosity parameter. The model is able to accurately predict surface elevation and corresponding frequency/amplitude spectrum, as well as energy dissipation when compared with the experimental measurements. This suggests the model is capable of calculating wave-breaking onset and energy dissipation successfully for a wide range of breaking conditions. The model is also able to successfully calculate the transfer of energy between frequencies due to wave focusing and wave breaking. This study is limited to unidirectional waves but provides a valuable basis for future application of the wave-breaking model to a multidirectional wave field. By including parameters for removing energy due to wave-breaking into a nonlinear potential flow solver, the risk of developing numerical instabilities due to an overturning wave is decreased, thereby increasing the application range of the model, including calculating more extreme sea states. A computationally efficient and accurate model for the generation of a nonlinear random wave field is useful for predicting the dynamic response of offshore vessels and marine renewable energy devices, predicting loads on marine structures, and in the study of open ocean wave generation and propagation in a realistic environment.

  7. Circular polarization of light by planet Mercury and enantiomorphism of its surface minerals.

    PubMed

    Meierhenrich, Uwe J; Thiemann, Wolfram H P; Barbier, Bernard; Brack, André; Alcaraz, Christian; Nahon, Laurent; Wolstencroft, Ray

    2002-04-01

    Different mechanisms for the generation of circular polarization by the surfaces of planets and satellites are described. The observed values for Venus, the Moon, Mars, and Jupiter, obtained by photo-polarimetric measurements with Earth-based telescopes, were in accordance with theory. However, for planet Mercury, asymmetric parameters in the circular polarization were measured that do not fit the calculations. For BepiColombo, the ESA cornerstone mission 5 to Mercury, we propose to investigate this phenomenon using a concept which includes two instruments. The first instrument is a high-resolution optical polarimeter, capable of determining and mapping the circular polarization by remote scanning of Mercury's surface from the Mercury Planetary Orbiter MPO. The second instrument is an in situ sensor for the detection of the enantiomorphism of surface crystals and minerals, proposed to be included in the Mercury Lander MSE.

  8. Economic study of multipurpose advanced high-speed transport configurations

    NASA Technical Reports Server (NTRS)

    1979-01-01

    A nondimensional economic examination of a parametrically-derived set of supersonic transport aircraft was conducted. The measure of economic value was the surcharge relative to subsonic airplane tourist-class yield. Ten airplanes were defined according to size, payload, and speed. The price, range capability, fuel burned, and block time were determined for each configuration, then operating costs and surcharges were calculated. The parameter with the most noticeable influence on nominal surcharge was found to be the real (constant-dollar) fuel price increase. A change in SST design Mach number from Mach 2.4 to Mach 2.7 showed a very small surcharge advantage (on the order of 1 percent for the faster aircraft). Configuration design compromises required for an airplane to operate overland at supersonic speeds without causing sonic boom annoyance result in severe performance penalties and require high (more than 100 percent) surcharges.

  9. The joined wing - An overview. [aircraft tandem wings in diamond configurations

    NASA Technical Reports Server (NTRS)

    Wolkovitch, J.

    1985-01-01

    The joined wing is a new type of aircraft configuration which employs tandem wings arranged to form diamond shapes in plan view and front view. Wind-tunnel tests and finite-element structural analyses have shown that the joined wing provides the following advantages over a comparable wing-plus-tail system; lighter weight and higher stiffness, higher span-efficiency factor, higher trimmed maximum lift coefficient, lower wave drag, plus built-in direct lift and direct sideforce control capability. A summary is given of research performed on the joined wing. Calculated joined wing weights are correlated with geometric parameters to provide simple weight estimation methods. The results of low-speed and transonic wind-tunnel tests are summarized, and guidelines for design of joined-wing aircraft are given. Some example joined-wing designs are presented and related configurations having connected wings are reviewed.

  10. Entropy from State Probabilities: Hydration Entropy of Cations

    PubMed Central

    2013-01-01

    Entropy is an important energetic quantity determining the progression of chemical processes. We propose a new approach to obtain hydration entropy directly from probability density functions in state space. We demonstrate the validity of our approach for a series of cations in aqueous solution. Extensive validation of simulation results was performed. Our approach does not make prior assumptions about the shape of the potential energy landscape and is capable of calculating accurate hydration entropy values. Sampling times in the low nanosecond range are sufficient for the investigated ionic systems. Although the presented strategy is at the moment limited to systems for which a scalar order parameter can be derived, this is not a principal limitation of the method. The strategy presented is applicable to any chemical system where sufficient sampling of conformational space is accessible, for example, by computer simulations. PMID:23651109
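
    A minimal sketch of the idea: estimate the probability density of a scalar order parameter from sampled states, then apply the discretized Gibbs formula S = -kB Σ p ln p. The order-parameter samples below are synthetic, and the simple histogram estimate (whose result depends on the bin width) stands in for the paper's density estimation.

        import numpy as np

        kB = 1.380649e-23  # Boltzmann constant, J/K

        def entropy_from_samples(samples, bins=100):
            # Histogram estimate of p(s), then S = -kB * sum p ln p.
            dens, edges = np.histogram(samples, bins=bins, density=True)
            p = dens * np.diff(edges)      # convert density to bin probabilities
            p = p[p > 0]
            return -kB * np.sum(p * np.log(p))

        samples = np.random.default_rng(4).normal(0.0, 1.0, 100_000)
        print(entropy_from_samples(samples))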

  11. A second-order closure analysis of turbulent diffusion flames. [combustion physics

    NASA Technical Reports Server (NTRS)

    Varma, A. K.; Fishburne, E. S.; Beddini, R. A.

    1977-01-01

    A complete second-order closure computer program for the investigation of compressible, turbulent, reacting shear layers was developed. The equations for the means and the second order correlations were derived from the time-averaged Navier-Stokes equations and contain third order and higher order correlations, which have to be modeled in terms of the lower-order correlations to close the system of equations. In addition to fluid mechanical turbulence models and parameters used in previous studies of a variety of incompressible and compressible shear flows, a number of additional scalar correlations were modeled for chemically reacting flows, and a typical eddy model developed for the joint probability density function for all the scalars. The program which is capable of handling multi-species, multistep chemical reactions, was used to calculate nonreacting and reacting flows in a hydrogen-air diffusion flame.

  12. A study on directional resistivity logging-while-drilling based on self-adaptive hp-FEM

    NASA Astrophysics Data System (ADS)

    Liu, Dejun; Li, Hui; Zhang, Yingying; Zhu, Gengxue; Ai, Qinghui

    2014-12-01

    Numerical simulation of resistivity logging-while-drilling (LWD) tool response provides guidance for designing novel logging instruments and interpreting real-time logging data. In this paper, based on a self-adaptive hp-finite element method (hp-FEM) algorithm, we analyze LWD tool response against model parameters and briefly illustrate the geosteering capabilities of directional resistivity LWD. Numerical simulation results indicate that the source spacing has an obvious influence on the investigation depth and detection precision of the resistivity LWD tool, while the operating frequency can improve the resolution of low-resistivity and high-resistivity formations. The simulation results also indicate that the self-adaptive hp-FEM algorithm has good convergence speed and calculation accuracy, and that it is suitable for guiding geosteering and simulating the response of resistivity LWD tools.

  13. A tubular hybrid Halbach/axially-magnetized permanent-magnet linear machine

    NASA Astrophysics Data System (ADS)

    Sui, Yi; Liu, Yong; Cheng, Luming; Liu, Jiaqi; Zheng, Ping

    2017-05-01

    A single-phase tubular permanent-magnet linear machine (PMLM) with hybrid Halbach/axially-magnetized PM arrays is proposed for free-piston Stirling power generation system. Machine topology and operating principle are elaborately illustrated. With the sinusoidal speed characteristic of the free-piston Stirling engine considered, the proposed machine is designed and calculated by finite-element analysis (FEA). The main structural parameters, such as outer radius of the mover, radial length of both the axially-magnetized PMs and ferromagnetic poles, axial length of both the middle and end radially-magnetized PMs, etc., are optimized to improve both the force capability and power density. Compared with the conventional PMLMs, the proposed machine features high mass and volume power density, and has the advantages of simple control and low converter cost. The proposed machine topology is applicable to tubular PMLMs with any phases.

  14. LANL compact laser pumping simulation. Final task report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feldman, B.S.; White, J.

    1987-09-28

    Rockwell has been tasked with the objective of both qualitatively and quantitatively defining the performance of LANL Compact Laser coupling systems. The performance criteria of the system will be based upon the magnitude and uniformity of the energy distribution in the laser pumping rod. Once this is understood, it will then be possible to improve the device performance via changes in the system's component parameters. For this study, the authors have chosen to use the Los Alamos Radiometry Code (LARC), which was previously developed by Rockwell. LARC, as an analysis tool, is well suited for this problem because the code contains the needed photometric calculation capability and easily handles the three-dimensionality of the problem. Also, LARC's internal graphics can provide very informative visual displays of the optical system.

  15. Error protection capability of space shuttle data bus designs

    NASA Technical Reports Server (NTRS)

    Proch, G. E.

    1974-01-01

    The role of error protection in assuring the reliability of digital data communications is discussed. The need for error protection on the space shuttle data bus system has been recognized and specified as a hardware requirement. The error protection techniques of particular concern are those designed into the Shuttle Main Engine Interface (MEI) and the Orbiter Multiplex Interface Adapter (MIA). The techniques and circuit design details proposed for this hardware are analyzed in this report to determine their error protection capability. The capability is calculated in terms of the probability of an undetected word error. Calculated results are reported for a noise environment that ranges from the nominal noise level stated in the hardware specifications to burst levels which may occur in extreme or anomalous conditions.
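
    Under a strongly simplified model, the flavor of such a calculation can be shown in a few lines: assume a single-parity check that catches all odd-weight error patterns on a binary symmetric channel, so a word error goes undetected only when a nonzero, even number of bits flip. The word length and error rates below are assumptions, not the MEI/MIA design values.

        from math import comb

        def p_undetected(n_bits, ber):
            # Probability of a nonzero, even number of bit errors in a word
            # (single-parity model on a binary symmetric channel).
            return sum(comb(n_bits, i) * ber**i * (1 - ber)**(n_bits - i)
                       for i in range(2, n_bits + 1, 2))

        for ber in (1e-6, 1e-4, 1e-2):      # nominal through burst-like noise
            print(ber, p_undetected(28, ber))   # 28-bit word is an assumption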

  16. Predicting the payload capability of cable logging systems including the effect of partial suspension

    Treesearch

    Gary D. Falk

    1981-01-01

    A systematic procedure for predicting the payload capability of running, live, and standing skylines is presented. Three hand-held calculator programs are used to predict payload capability that includes the effect of partial suspension. The programs allow for predictions for downhill yarding and for yarding away from the yarder. The equations and basic principles...

  17. Teaching the Concept of Gibbs Energy Minimization through Its Application to Phase-Equilibrium Calculation

    ERIC Educational Resources Information Center

    Privat, Romain; Jaubert, Jean-Noe¨l; Berger, Etienne; Coniglio, Lucie; Lemaitre, Ce´cile; Meimaroglou, Dimitrios; Warth, Vale´rie

    2016-01-01

    Robust and fast methods for chemical or multiphase equilibrium calculation are routinely needed by chemical-process engineers working on sizing or simulation aspects. Yet, while industrial applications essentially require calculation tools capable of discriminating between stable and nonstable states and converging to nontrivial solutions,…

  18. Computational study of some fluoroquinolones: Structural, spectral and docking investigations

    NASA Astrophysics Data System (ADS)

    Sayin, Koray; Karakaş, Duran; Kariper, Sultan Erkan; Sayin, Tuba Alagöz

    2018-03-01

    Quantum chemical calculations are performed on norfloxacin, tosufloxacin and levofloxacin. The most stable structures for each molecule are determined by thermodynamic parameters. Then the best level for the calculations is determined by benchmark analysis. The M062X/6-31 + G(d) level is used in the calculations. IR, UV-VIS and NMR spectra are calculated and examined in detail. Some quantum chemical parameters are calculated and a trend in activity is suggested. Additionally, molecular docking calculations are performed between the related compounds and a protein (ID: 2J9N).

  19. COUPLED FREE AND DISSOLVED PHASE TRANSPORT: NEW SIMULATION CAPABILITIES AND PARAMETER INVERSION

    EPA Science Inventory

    The vadose zone free-phase simulation capabilities of the US EPA Hydrocarbon Spill Screening Model (HSSM) have been linked with the 3-D multi-species dissolved-phase contaminant transport simulator MT3DMS.

  20. Quantum chemical calculations of Cr2O3/SnO2 using density functional theory method

    NASA Astrophysics Data System (ADS)

    Jawaher, K. Rackesh; Indirajith, R.; Krishnan, S.; Robert, R.; Das, S. Jerome

    2018-03-01

    Quantum chemical calculations have been employed to study the molecular effects produced by the optimised Cr2O3/SnO2 structure. The theoretical parameters of the transparent conducting metal oxides were calculated using the DFT / B3LYP / LANL2DZ method. The optimised bond parameters, such as bond lengths, bond angles and dihedral angles, were calculated at the same level of theory. The non-linear optical property of the title compound was calculated using a first-order hyperpolarisability calculation. The calculated HOMO-LUMO analysis explains the charge transfer interaction within the molecule. In addition, MEP and Mulliken atomic charges were also calculated and analysed.

  1. Improvement of calculation method for electrical parameters of short network of ore-thermal furnaces

    NASA Astrophysics Data System (ADS)

    Aliferov, A. I.; Bikeev, R. A.; Goreva, L. P.

    2017-10-01

    The paper describes a new calculation method for the active and inductive resistances of split interleaved current lead packages in ore-thermal electric furnaces. The method is developed on the basis of a regression analysis of the dependencies of the active and inductive resistances of the packages on their geometrical parameters, mutual disposition and interleaving pattern. These multi-parametric calculations have been performed with ANSYS software. The proposed method allows solving the minimization and balancing problems for the electrical parameters of split current leads in ore-thermal furnaces.

  2. Electrostatics of cysteine residues in proteins: parameterization and validation of a simple model.

    PubMed

    Salsbury, Freddie R; Poole, Leslie B; Fetrow, Jacquelyn S

    2012-11-01

    One of the most popular and simple models for the calculation of pKa values from a protein structure is the semi-macroscopic electrostatic model MEAD. This model requires empirical parameters for each residue to calculate pKa values. Analysis of current, widely used empirical parameters for cysteine residues showed that they did not reproduce expected cysteine pKa values; thus, we set out to identify parameters consistent with the CHARMM27 force field that capture both the behavior of typical cysteines in proteins and the behavior of cysteines which have perturbed pKa values. The new parameters were validated in three ways: (1) calculation across a large set of typical cysteines in proteins (where the calculations are expected to reproduce expected ensemble behavior); (2) calculation across a set of perturbed cysteines in proteins (where the calculations are expected to reproduce the shifted ensemble behavior); and (3) comparison to experimentally determined pKa values (where the calculation should reproduce the pKa within experimental error). Both the general behavior of cysteines in proteins and the perturbed pKa in some proteins can be predicted reasonably well using the newly determined empirical parameters within the MEAD model for protein electrostatics. This study provides the first general analysis of the electrostatics of cysteines in proteins, with specific attention paid to capturing both the behavior of typical cysteines in a protein and the behavior of cysteines whose pKa should be shifted, and validation of force field parameters for cysteine residues. Copyright © 2012 Wiley Periodicals, Inc.

  3. Density functional calculations of the Mössbauer parameters in hexagonal ferrite SrFe12O19

    NASA Astrophysics Data System (ADS)

    Ikeno, Hidekazu

    2018-03-01

    Mössbauer parameters in a magnetoplumbite-type hexagonal ferrite, SrFe12O19, are computed using the all-electron band structure calculation based on the density functional theory. The theoretical isomer shift and quadrupole splitting are consistent with experimentally obtained values. The absolute values of hyperfine splitting parameters are found to be underestimated, but the relative scale can be reproduced. The present results validate the site-dependence of Mössbauer parameters obtained by analyzing experimental spectra of hexagonal ferrites. The results also show the usefulness of theoretical calculations for increasing the reliability of interpretation of the Mössbauer spectra.

  4. Thermal-hydraulic analysis capabilities and methods development at NYPA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feltus, M.A.

    1987-01-01

    The operation of a nuclear power plant must be regularly supported by various thermal-hydraulic (T/H) analyses that may include final safety analysis report (FSAR) design basis calculations and licensing evaluations and conservative and best-estimate analyses. The development of in-house T/H capabilities provides the following advantages: (a) it leads to a better understanding of the plant design basis and operating characteristics; (b) methods developed can be used to optimize plant operations and enhance plant safety; (c) such a capability can be used for design reviews, checking vendor calculations, and evaluating proposed plant modifications; and (d) in-house capability reduces the cost of analysis. This paper gives an overview of the T/H capabilities and current methods development activity within the engineering department of the New York Power Authority (NYPA) and will focus specifically on reactor coolant system (RCS) transients and plant dynamic response for non-loss-of-coolant accident events. This paper describes NYPA experience in performing T/H analyses in support of pressurized water reactor plant operation.

  5. Calculation of Weibull strength parameters and Batdorf flaw-density constants for volume- and surface-flaw-induced fracture in ceramics

    NASA Technical Reports Server (NTRS)

    Pai, Shantaram S.; Gyekenyesi, John P.

    1988-01-01

    The calculation of shape and scale parameters of the two-parameter Weibull distribution is described using the least-squares analysis and maximum likelihood methods for volume- and surface-flaw-induced fracture in ceramics with complete and censored samples. Detailed procedures are given for evaluating 90 percent confidence intervals for maximum likelihood estimates of shape and scale parameters, the unbiased estimates of the shape parameters, and the Weibull mean values and corresponding standard deviations. Furthermore, the necessary steps are described for detecting outliers and for calculating the Kolmogorov-Smirnov and the Anderson-Darling goodness-of-fit statistics and 90 percent confidence bands about the Weibull distribution. It also shows how to calculate the Batdorf flaw-density constants by using the Weibull distribution statistical parameters. The techniques described were verified with several example problems from the open literature and were coded in the Structural Ceramics Analysis and Reliability Evaluation (SCARE) design program.
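
    As a small illustration of the maximum likelihood step, the sketch below fits the two-parameter Weibull distribution to synthetic fracture strengths with scipy, fixing the location at zero; it is not the SCARE implementation.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(5)
        strengths = 400.0 * rng.weibull(10.0, 30)  # synthetic strengths (MPa)

        # Maximum likelihood estimates of the 2-parameter Weibull form.
        shape, loc, scale = stats.weibull_min.fit(strengths, floc=0)
        print(f"Weibull modulus m = {shape:.2f}, "
              f"characteristic strength = {scale:.1f} MPa")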

  6. Prediction of the explosion effect of aluminized explosives

    NASA Astrophysics Data System (ADS)

    Zhang, Qi; Xiang, Cong; Liang, HuiMin

    2013-05-01

    We present an approach to predict the explosion load for aluminized explosives using a numerical calculation. A code to calculate the species of detonation products of high energy ingredients and those of the secondary reaction of aluminum and the detonation products, velocity of detonation, pressure, temperature and JWL parameters of aluminized explosives has been developed in this study. Through numerical calculations carried out with this code, the predicted JWL parameters for aluminized explosives have been compared with those measured by the cylinder test. The predicted JWL parameters with this code agree with those measured by the cylinder test. Furthermore, the load of explosion for the aluminized explosive was calculated using the numerical simulation by using the JWL equation of state. The loads of explosion for the aluminized explosive obtained using the predicted JWL parameters have been compared with those using the measured JWL parameters. Both of them are almost the same. The numerical results using the predicted JWL parameters show that the explosion air shock wave is the strongest when the mass fraction of aluminum powder in the explosive mixtures is 30%. This result agrees with the empirical data.
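
    The JWL equation of state referenced above has a standard closed form, P(V) = A(1 − ω/(R1·V))e^(−R1·V) + B(1 − ω/(R2·V))e^(−R2·V) + ωE0/V, with V the relative volume. A direct transcription is sketched below; the coefficients are placeholders of roughly TNT-like magnitude, not the paper's fitted values for aluminized explosives.

        import numpy as np

        def jwl_pressure(V, A, B, R1, R2, omega, E0):
            # Standard JWL form: V is relative volume, E0 the detonation
            # energy per unit volume; A, B, R1, R2, omega are fit parameters.
            return (A * (1 - omega / (R1 * V)) * np.exp(-R1 * V)
                    + B * (1 - omega / (R2 * V)) * np.exp(-R2 * V)
                    + omega * E0 / V)

        V = np.linspace(0.5, 5.0, 10)
        print(jwl_pressure(V, A=371.2, B=3.231, R1=4.15, R2=0.95,
                           omega=0.30, E0=7.0))   # GPa-scale placeholders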

  7. Density-Functional Theory with Dispersion-Correcting Potentials for Methane: Bridging the Efficiency and Accuracy Gap between High-Level Wave Function and Classical Molecular Mechanics Methods.

    PubMed

    Torres, Edmanuel; DiLabio, Gino A

    2013-08-13

    Large clusters of noncovalently bonded molecules can only be efficiently modeled by classical mechanics simulations. One prominent challenge associated with this approach is obtaining force-field parameters that accurately describe noncovalent interactions. High-level correlated wave function methods, such as CCSD(T), are capable of correctly predicting noncovalent interactions, and are widely used to produce reference data. However, high-level correlated methods are generally too computationally costly to generate the critical reference data required for good force-field parameter development. In this work we present an approach to generate Lennard-Jones force-field parameters to accurately account for noncovalent interactions. We propose the use of a computational step that is intermediate to CCSD(T) and classical molecular mechanics, that can bridge the accuracy and computational efficiency gap between them, and demonstrate the efficacy of our approach with methane clusters. On the basis of CCSD(T)-level binding energy data for a small set of methane clusters, we develop methane-specific, atom-centered, dispersion-correcting potentials (DCPs) for use with the PBE0 density-functional and 6-31+G(d,p) basis sets. We then use the PBE0-DCP approach to compute a detailed map of the interaction forces associated with the removal of a single methane molecule from a cluster of eight methane molecules and use this map to optimize the Lennard-Jones parameters for methane. The quality of the binding energies obtained by the Lennard-Jones parameters we obtained is assessed on a set of methane clusters containing from 2 to 40 molecules. Our Lennard-Jones parameters, used in combination with the intramolecular parameters of the CHARMM force field, are found to closely reproduce the results of our dispersion-corrected density-functional calculations. The approach outlined can be used to develop Lennard-Jones parameters for any kind of molecular system.
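
    The final fitting step can be sketched compactly: given reference interaction energies as a function of separation, fit the ε and σ of the 12-6 Lennard-Jones form by least squares. The "reference" points below are synthetic stand-ins for the dispersion-corrected DFT data used in the paper.

        import numpy as np
        from scipy.optimize import curve_fit

        def lj(r, eps, sigma):
            # 12-6 Lennard-Jones pair potential.
            return 4 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)

        r_ref = np.linspace(3.4, 6.0, 12)          # separations (angstrom)
        e_ref = lj(r_ref, 0.29, 3.7) \
                + np.random.default_rng(6).normal(0, 0.005, 12)  # synthetic

        (eps, sigma), _ = curve_fit(lj, r_ref, e_ref, p0=(0.3, 3.5))
        print(f"epsilon = {eps:.3f} kcal/mol, sigma = {sigma:.3f} angstrom")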

  8. Development, Test, and Evaluation of Microwave Radar Water Level (MWWL) Sensors' Wave Measurement Capability

    NASA Astrophysics Data System (ADS)

    Iyer, S. K.; Heitsenrether, R.

    2015-12-01

    Waves can have a significant impact on many coastal operations including navigational safety, recreation, and even the economy. Despite this, as of 2009, there were only 181 in situ real-time wave observation networks nationwide (IOOS 2009). There has recently been interest in adding real-time wave measurement systems to already existing NOAA Center for Operational Oceanographic Products and Services (CO-OPS) stations. Several steps have already been taken in order to achieve this, such as integrating information from existing wave measurement buoys and initial testing of multiple different wave measurement systems (Heitsenrether et al. 2012). Since wave observations can be derived from high frequency water level changes, we will investigate water level sensors' capability to measure waves. Recently, CO-OPS has been transitioning to new microwave radar water level (MWWL) sensors which have higher resolution and theoretically a greater potential wave measurement capability than the acoustic sensors in stilling wells. In this study, we analyze the wave measurement capability of MWWL sensors in two high-energy wave environments, Duck, NC and La Jolla, CA, and compare results to two "reference" sensors (a Nortek acoustic waves and currents profiler (AWAC) at Duck and a single point pressure sensor at La Jolla). A summary of results from the two field test sites will be presented, including comparisons of wave energy spectra, significant wave height, and peak period measured by the test MWWL sensors and both reference AWAC and pressure sensors. In addition, relationships between MWWL versus reference wave sensor differences and specific wave conditions will be discussed. Initial results from spectral analysis and the calculation of bulk wave parameters indicate that MWWL sensors set to the "NoFilter" processing setting can produce wave measurements that compare well with the two reference sensors. These results support continued development to enable the installation of MWWL sensors at CO-OPS locations as a method of measuring waves.
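
    The bulk wave parameters mentioned above follow from the spectral moments of the water-level record: with m0 the zeroth moment of the energy spectrum, the significant wave height is Hs = 4√m0 and the peak period is the inverse of the spectral-peak frequency. A sketch on a synthetic record (sampling rate and wave content assumed) is shown below.

        import numpy as np
        from scipy.signal import welch

        fs = 4.0                               # sampling rate (Hz), assumed
        t = np.arange(0, 1200, 1 / fs)
        rng = np.random.default_rng(7)
        eta = 0.5 * np.sin(2 * np.pi * 0.1 * t) \
              + 0.1 * rng.normal(size=t.size)  # synthetic water level (m)

        f, S = welch(eta, fs=fs, nperseg=1024) # wave energy spectrum
        m0 = np.trapz(S, f)                    # zeroth spectral moment
        print("Hs =", 4 * np.sqrt(m0), "m; Tp =", 1 / f[np.argmax(S)], "s")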

  9. Improving high-altitude emp modeling capabilities by using a non-equilibrium electron swarm model to monitor conduction electron evolution

    NASA Astrophysics Data System (ADS)

    Pusateri, Elise Noel

    An Electromagnetic Pulse (EMP) can severely disrupt the use of electronic devices in its path causing a significant amount of infrastructural damage. EMP can also cause breakdown of the surrounding atmosphere during lightning discharges. This makes modeling EMP phenomenon an important research effort in many military and atmospheric physics applications. EMP events include high-energy Compton electrons or photoelectrons that ionize air and produce low energy conduction electrons. A sufficient number of conduction electrons will damp or alter the EMP through conduction current. Therefore, it is important to understand how conduction electrons interact with air in order to accurately predict the EMP evolution and propagation in the air. It is common for EMP simulation codes to use an equilibrium ohmic model for computing the conduction current. Equilibrium ohmic models assume the conduction electrons are always in equilibrium with the local instantaneous electric field, i.e. for a specific EMP electric field, the conduction electrons instantaneously reach steady state without a transient process. An equilibrium model will work well if the electrons have time to reach their equilibrium distribution with respect to the rise time or duration of the EMP. If the time to reach equilibrium is comparable or longer than the rise time or duration of the EMP then the equilibrium model would not accurately predict the conduction current necessary for the EMP simulation. This is because transport coefficients used in the conduction current calculation will be found based on equilibrium reactions rates which may differ significantly from their non-equilibrium values. We see this deficiency in Los Alamos National Laboratory's EMP code, CHAP-LA (Compton High Altitude Pulse-Los Alamos), when modeling certain EMP scenarios at high altitudes, such as upward EMP, where the ionization rate by secondary electrons is over predicted by the equilibrium model, causing the EMP to short abruptly. The objective of the PhD research is to mitigate this effect by integrating a conduction electron model into CHAP-LA which can calculate the conduction current based on a non-equilibrium electron distribution. We propose to use an electron swarm model to monitor the time evolution of conduction electrons in the EMP environment which is characterized by electric field and pressure. Swarm theory uses various collision frequencies and reaction rates to study how the electron distribution and the resultant transport coefficients change with time, ultimately reaching an equilibrium distribution. Validation of the swarm model we develop is a necessary step for completion of the thesis work. After validation, the swarm model is integrated in the air chemistry model CHAP-LA employs for conduction electron simulations. We test high altitude EMP simulations with the swarm model option in the air chemistry model to show improvements in the computational capability of CHAP-LA. A swarm model has been developed that is based on a previous swarm model developed by Higgins, Longmire and O'Dell 1973, hereinafter HLO. The code used for the swarm model calculation solves a system of coupled differential equations for electric field, electron temperature, electron number density, and drift velocity. Important swarm parameters, including the momentum transfer collision frequency, energy transfer collision frequency, and ionization rate, are recalculated and compared to the previously reported empirical results given by HLO. 
These swarm parameters are found using BOLSIG+, a two term Boltzmann solver developed by Hagelaar and Pitchford 2005. BOLSIG+ utilizes updated electron scattering cross sections that are defined over an expanded energy range found in the atomic and molecular cross section database published by Phelps in the Phelps Database 2014 on the LXcat website created by Pancheshnyi et al. 2012. The swarm model is also updated from the original HLO model by including additional physical parameters such as the O2 electron attachment rate, recombination rate, and mutual neutralization rate. This necessitates tracking the positive and negative ion densities in the swarm model. Adding these parameters, especially electron attachment, is important at lower EMP altitudes where atmospheric density is high. We compare swarm model equilibrium temperatures and times using the HLO and BOLSIG+ coefficients for a uniform electric field of 1 StatV/cm for a range of atmospheric heights. This is done in order to test sensitivity to the swarm parameters used in the swarm model. It is shown that the equilibrium temperature and time are sensitive to the modifications in the collision frequency and ionization rate based on the updated electron interaction cross sections. We validate the swarm model by comparing ionization coefficients and equilibrium drift velocities to experimental results over a wide range of reduced electric field values. The final part of the PhD thesis work includes integrating the swarm model into CHAP-LA. We discuss the physics included in the CHAP-LA EMP model and demonstrate EMP damping behavior caused by the ohmic model at high altitudes. We report on numerical techniques for incorporation of the swarm model into CHAP-LA's Maxwell solver. This includes a discussion of integration techniques for Maxwell's equations in CHAP-LA using the swarm model calculated conduction current. We show improvements on EMP parameter calculations when modeling a high altitude, upward EMP scenario. This provides a novel computational capability that will have an important impact on the atmospheric and EMP research community.
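
    The structure of such a swarm calculation, stripped to a toy, is a small system of coupled ODEs for the electron temperature and number density driven by a fixed field; the sketch below uses made-up rate coefficients purely to show the form of the integration, not the HLO or BOLSIG+ values.

        import numpy as np
        from scipy.integrate import solve_ivp

        def swarm_rhs(t, y, E):
            Te, ne = y                      # electron temperature (eV), density
            Te = max(Te, 1e-3)
            heating = 0.1 * E ** 2 / Te             # made-up ohmic heating term
            cooling = np.sqrt(Te) * (Te - 0.025)    # made-up energy-loss term
            ionization = 5.0 * np.exp(-5.0 / Te)    # made-up ionization rate
            attachment = 0.05                       # made-up attachment rate
            return [heating - cooling, (ionization - attachment) * ne]

        sol = solve_ivp(swarm_rhs, [0.0, 50.0], [0.025, 1.0], args=(1.0,),
                        method='LSODA')
        print(sol.y[:, -1])   # quasi-equilibrium temperature, relative density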

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liang, Bin; Li, Yongbao; Liu, Bo

    Purpose: The CyberKnife system is initially equipped with fixed circular cones for stereotactic radiosurgery. Two dose calculation algorithms, Ray-Tracing and Monte Carlo, are available in the supplied treatment planning system. A multileaf collimator system was recently introduced in the latest generation of the system, capable of arbitrarily shaped treatment fields. The purpose of this study is to develop a model-based dose calculation algorithm to better handle the lateral scatter in an irregularly shaped small field for the CyberKnife system. Methods: A pencil beam dose calculation algorithm widely used in linac-based treatment planning systems was modified. The kernel parameters and intensity profile were systematically determined by fitting to the commissioning data. The model was tuned using only a subset of measured data (4 out of 12 cones) and applied to all fixed circular cones for evaluation. The root mean square (RMS) of the difference between the measured and calculated tissue-phantom-ratios (TPRs) and off-center-ratio (OCR) was compared. Three cone size correction techniques were developed to better fit the OCRs at the penumbra region, which are further evaluated by the output factors (OFs). The pencil beam model was further validated against measurement data on the variable dodecagon-shaped Iris collimators and a half-beam blocked field. Comparison with Ray-Tracing and Monte Carlo methods was also performed on a lung SBRT case. Results: The RMS between the measured and calculated TPRs is 0.7% averaged for all cones, with the descending region at 0.5%. The RMSs of OCR at infield and outfield regions are both at 0.5%. The distance to agreement (DTA) at the OCR penumbra region is 0.2 mm. All three cone size correction models achieve the same improvement in OCR agreement, with the effective source shift model (SSM) preferred, due to their ability to predict more accurately the OF variations with the source to axis distance (SAD). In noncircular field validation, the pencil beam calculated results agreed well with the film measurement of both Iris collimators and the half-beam blocked field, and fared much better than the Ray-Tracing calculation. Conclusions: The authors have developed a pencil beam dose calculation model for the CyberKnife system. The dose calculation accuracy is better than the standard linac based system because the model parameters were specifically tuned to the CyberKnife system and geometry correction factors. The model handles better the lateral scatter and has the potential to be used for the irregularly shaped fields. Comprehensive validations on MLC equipped system are necessary for its clinical implementation. It is reasonably fast enough to be used during plan optimization.
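
    The heart of a pencil beam algorithm is a 2D convolution of the incident fluence with a dose-spread kernel; the schematic below uses a single Gaussian kernel via FFT convolution, whereas the model described above uses commissioned multi-component kernels and corrections. Grid, field size, and kernel width are assumptions.

        import numpy as np
        from scipy.signal import fftconvolve

        grid = np.linspace(-30, 30, 121)                 # mm
        X, Y = np.meshgrid(grid, grid)
        fluence = ((np.abs(X) < 10) & (np.abs(Y) < 10)).astype(float)  # 20x20 mm
        sigma = 2.0                                      # kernel width (mm)
        kernel = np.exp(-(X ** 2 + Y ** 2) / (2 * sigma ** 2))
        kernel /= kernel.sum()                           # normalize the kernel

        dose = fftconvolve(fluence, kernel, mode='same') # dose = fluence * kernel
        print(dose[60, 60], dose.max())                  # central-axis dose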

  11. Rail Inspection Systems Analysis and Technology Survey

    DOT National Transportation Integrated Search

    1977-09-01

    The study was undertaken to identify existing rail inspection system capabilities and methods which might be used to improve these capabilities. Task I was a study to quantify existing inspection parameters and Task II was a cost effectiveness study ...

  12. Local order parameters for use in driving homogeneous ice nucleation with all-atom models of water

    NASA Astrophysics Data System (ADS)

    Reinhardt, Aleks; Doye, Jonathan P. K.; Noya, Eva G.; Vega, Carlos

    2012-11-01

    We present a local order parameter based on the standard Steinhardt-Ten Wolde approach that is capable both of tracking and of driving homogeneous ice nucleation in simulations of all-atom models of water. We demonstrate that it is capable of forcing the growth of ice nuclei in supercooled liquid water simulated using the TIP4P/2005 model using over-biased umbrella sampling Monte Carlo simulations. However, even with such an order parameter, the dynamics of ice growth in deeply supercooled liquid water in all-atom models of water are shown to be very slow, and so the computation of free energy landscapes and nucleation rates remains extremely challenging.
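
    For reference, the standard (unmodified) Steinhardt q_l for one particle can be computed from its neighbor bond vectors as q_l = sqrt(4π/(2l+1) Σ_m |⟨Y_lm⟩|²); a minimal sketch is below. The local variant used in the paper differs in its construction, so this is only the textbook form.

        import numpy as np
        from scipy.special import sph_harm

        def steinhardt_q(l, bonds):
            # Standard Steinhardt q_l from a particle's neighbor bond vectors.
            v = np.asarray(bonds, dtype=float)
            v /= np.linalg.norm(v, axis=1, keepdims=True)
            theta = np.arctan2(v[:, 1], v[:, 0])       # azimuthal angle
            phi = np.arccos(np.clip(v[:, 2], -1, 1))   # polar angle
            qlm = np.array([sph_harm(m, l, theta, phi).mean()
                            for m in range(-l, l + 1)])
            return np.sqrt(4 * np.pi / (2 * l + 1) * np.sum(np.abs(qlm) ** 2))

        # Six simple-cubic neighbors as a quick sanity check.
        sc = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
        print(steinhardt_q(6, sc), steinhardt_q(4, sc))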

  13. High fidelity studies of exploding foil initiator bridges, Part 1: Experimental method

    NASA Astrophysics Data System (ADS)

    Bowden, Mike; Neal, William

    2017-01-01

    Simulations of high voltage detonators, such as Exploding Bridgewire (EBW) and Exploding Foil Initiators (EFI), have historically been simple, often empirical, one-dimensional models capable of predicting parameters such as current, voltage and in the case of EFIs, flyer velocity. Correspondingly, experimental methods have in general been limited to the same parameters. With the advent of complex, first principles magnetohydrodynamic codes such as ALEGRA and ALE-MHD, it is now possible to simulate these components in three dimensions, predicting a much greater range of parameters than before. A significant improvement in experimental capability was therefore required to ensure these simulations could be adequately validated. In this first paper of a three part study, the experimental method for determining the current, voltage, flyer velocity and multi-dimensional profile of detonator components is presented. This improved capability, along with high fidelity simulations, offer an opportunity to gain a greater understanding of the processes behind the functioning of EBW and EFI detonators.

  14. Analysis capabilities for plutonium-238 programs

    NASA Astrophysics Data System (ADS)

    Wong, A. S.; Rinehart, G. H.; Reimus, M. H.; Pansoy-Hjelvik, M. E.; Moniz, P. F.; Brock, J. C.; Ferrara, S. E.; Ramsey, S. S.

    2000-07-01

    In this presentation, an overview of the analysis capabilities that support 238Pu programs will be discussed. These capabilities include neutron emission rate and calorimetric measurements, metallography/ceramography, ultrasonic examination, particle size determination, and chemical analyses. The data obtained from these measurements provide baseline parameters for fuel clad impact testing, fuel processing, product certifications, and waste disposal. Several in-line analysis capabilities will also be utilized for process control in the full-scale 238Pu Aqueous Scrap Recovery line in FY01.

  15. BIREFRINGENT FILTER MODEL

    NASA Technical Reports Server (NTRS)

    Cross, P. L.

    1994-01-01

    Birefringent filters are often used as line-narrowing components in solid state lasers. The Birefringent Filter Model program generates a stand-alone model of a birefringent filter for use in designing and analyzing a birefringent filter. It was originally developed to aid in the design of solid state lasers to be used on aircraft or spacecraft to perform remote sensing of the atmosphere. The model is general enough to allow the user to address problems such as temperature stability requirements, manufacturing tolerances, and alignment tolerances. The input parameters for the program are divided into 7 groups: 1) general parameters which refer to all elements of the filter; 2) wavelength related parameters; 3) filter, coating and orientation parameters; 4) input ray parameters; 5) output device specifications; 6) component related parameters; and 7) transmission profile parameters. The program can analyze a birefringent filter with up to 12 different components, and can calculate the transmission and summary parameters for multiple passes as well as a single pass through the filter. The Jones matrix, which is calculated from the input parameters of Groups 1 through 4, is used to calculate the transmission. Output files containing the calculated transmission or the calculated Jones matrix as a function of wavelength can be created. These output files can then be used as inputs for user-written programs, for example, to plot the transmission or to calculate the eigen-transmittances and the corresponding eigen-polarizations of the Jones matrix. The Birefringent Filter Model is written in Microsoft FORTRAN 2.0. The program format is interactive. It was developed on an IBM PC XT equipped with an 8087 math coprocessor, and has a central memory requirement of approximately 154K. Since Microsoft FORTRAN 2.0 does not support complex arithmetic, matrix routines for addition, subtraction, and multiplication of complex, double precision variables are included. The Birefringent Filter Model was written in 1987.
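
    As an illustration of the underlying calculation, a single birefringent plate between parallel polarizers has transmission cos²(Γ/2) with retardance Γ = 2πΔn·d/λ, which falls out of a small Jones-matrix product. The sketch below is a generic one-stage example with assumed plate thickness and birefringence, not a reproduction of this program's model.

        import numpy as np

        def rot(a):
            return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])

        def retarder(delta):
            # Jones matrix of a waveplate with retardance delta, fast axis at 0.
            return np.diag([np.exp(1j * delta / 2), np.exp(-1j * delta / 2)])

        def stage_transmission(wavelength, d, dn, theta=np.pi / 4):
            # One plate at angle theta between parallel x-polarizers.
            delta = 2 * np.pi * dn * d / wavelength
            polarizer = np.array([[1, 0], [0, 0]])
            J = polarizer @ rot(theta) @ retarder(delta) @ rot(-theta) @ polarizer
            return np.abs((J @ np.array([1, 0]))[0]) ** 2

        for wl in np.linspace(1.00e-6, 1.01e-6, 5):       # wavelengths (m)
            print(wl, stage_transmission(wl, d=2e-3, dn=0.0091))  # quartz-like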

  16. Rheological characterization of neutral and anionic polysaccharides with reduced mucociliary transport rates.

    PubMed

    Shah, Ankur J; Donovan, Maureen D

    2007-04-20

    The purpose of this research was to compare the viscoelastic properties of several neutral and anionic polysaccharide polymers with their mucociliary transport rates (MTR) across explants of ciliated bovine tracheal tissue to identify rheologic parameters capable of predicting the extent of reduction in mucociliary transport. The viscoelastic properties of the polymer gels and gels mixed with mucus were quantified using controlled stress rheometry. In general, the anionic polysaccharides were more efficient at decreasing the mucociliary transport rate than were the neutral polymers, and a concentration threshold, where no further decreases in mucociliary transport occurred with increasing polymer concentration, was observed for several of the neutral polysaccharides. No single rheologic parameter (eta, G', G'', tan delta, G*) was a good predictor of the extent of mucociliary transport reduction, but a combination of the apparent viscosity (eta), tangent of the phase angle (tan delta), and complex modulus (G*) was found to be useful in the identification of formulations capable of decreasing MTR. The relative values of each of the rheologic parameters were unique for each polymer, yet once the relationships between the rheologic parameters and mucociliary transport rate reduction were determined, formulations capable of resisting mucociliary clearance could be rapidly optimized.

  17. Microphysical modelling of volcanic plumes / Comparisons against groundbased and spaceborne lidar data

    NASA Astrophysics Data System (ADS)

    Jumelet, Julien; Bekki, Slimane; Keckhut, Philippe

    2017-04-01

    We present a high-resolution isentropic microphysical transport model dedicated to stratospheric aerosols and clouds. The model is based on the MIMOSA model (Modélisation Isentrope du transport Méso-échelle de l'Ozone Stratosphérique par Advection) and adds several modules: a fully explicit size-resolving microphysical scheme that transports the aerosol granulometry as passive tracers, and an optical module able to calculate the scattering and extinction properties of particles at given wavelengths. Originally designed for polar stratospheric clouds (composed of sulfuric acid, nitric acid and water vapor), the model is fully capable of rendering the structure and properties of volcanic plumes at fine scales, assuming complete SO2 oxidation. This link between microphysics and optics also enables the model to take advantage of spaceborne lidar data (i.e. CALIOP) by calculating the 532nm aerosol backscatter coefficient, taking it as the control variable to provide microphysical constraints during the transport. This methodology has been applied to simulate volcanic plumes during relatively recent volcanic eruptions, from the 2010 Merapi to the 2015 Calbuco eruption. Optical calculations are also used for direct comparisons between the model and groundbased lidar stations for validation as well as characterization purposes. We will present the model and the simulation results, along with a focus on the sensitivity to initialisation parameters, considering the need for quasi-real time modelling and forecasts in the case of future eruptions.

  18. Composition dependent band offsets of ZnO and its ternary alloys

    NASA Astrophysics Data System (ADS)

    Yin, Haitao; Chen, Junli; Wang, Yin; Wang, Jian; Guo, Hong

    2017-01-01

    We report the calculated fundamental band gaps of wurtzite ternary alloys Zn1-xMxO (M = Mg, Cd) and the band offsets of the ZnO/Zn1-xMxO heterojunctions; these II-VI materials are important for electronics and optoelectronics. Our calculation is based on density functional theory within the linear muffin-tin orbital (LMTO) approach, where the modified Becke-Johnson (MBJ) semi-local exchange is used to accurately produce the band gaps, and the coherent potential approximation (CPA) is applied to deal with the configurational average for the ternary alloys. The combined LMTO-MBJ-CPA approach allows one to simultaneously determine both the conduction band and valence band offsets of the heterojunctions. The calculated band gap data of the ZnO alloys scale as Eg = 3.35 + 2.33x and Eg = 3.36 - 2.33x + 1.77x2 for Zn1-xMgxO and Zn1-xCdxO, respectively, where x is the impurity concentration. These scaling relations, as well as the composition-dependent band offsets, are quantitatively compared to the available experimental data. The capability of predicting the band parameters and band alignments of ZnO and its ternary alloys with the LMTO-MBJ-CPA approach indicates the promising application of this method in the design of emerging electronics and optoelectronics.
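
    Since the fitted band-gap expressions are given explicitly, they can be evaluated directly; the short sketch below simply encodes the two scaling relations from the abstract (the composition values are arbitrary examples).

    ```python
    def eg_znmgo(x):
        """Fitted band gap (eV) of Zn1-xMgxO from the reported linear scaling."""
        return 3.35 + 2.33 * x

    def eg_zncdo(x):
        """Fitted band gap (eV) of Zn1-xCdxO from the reported quadratic scaling."""
        return 3.36 - 2.33 * x + 1.77 * x ** 2

    for x in (0.0, 0.1, 0.2, 0.3):
        print(f"x = {x:.1f}: Eg(ZnMgO) = {eg_znmgo(x):.2f} eV, "
              f"Eg(ZnCdO) = {eg_zncdo(x):.2f} eV")
    ```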

  19. The use of computational thermodynamics for the determination of surface tension and Gibbs-Thomson coefficient of multicomponent alloys

    NASA Astrophysics Data System (ADS)

    Ferreira, D. J. S.; Bezerra, B. N.; Collyer, M. N.; Garcia, A.; Ferreira, I. L.

    2018-04-01

    The simulation of casting processes demands accurate information on the thermophysical properties of the alloy; however, such information is scarce in the literature for multicomponent alloys. Generally, metallic alloys applied in industry have more than three solute components. In the present study, a general solution of Butler's formulation for surface tension is presented for multicomponent alloys and is applied in quaternary Al-Cu-Si-Fe alloys, thus permitting the Gibbs-Thomson coefficient to be determined. This coefficient is a determining factor in the reliability of predictions furnished by microstructure growth models and by numerical computations of solidification thermal parameters, which will depend on the thermophysical properties assumed in the calculations. The Gibbs-Thomson coefficient for ternary and quaternary alloys is seldom reported in the literature. A numerical model based on Powell's hybrid algorithm and a finite difference Jacobian approximation has been coupled to a Thermo-Calc TCAPI interface to assess the excess Gibbs energy of the liquid phase, permitting liquidus temperature, latent heat, alloy density, surface tension and Gibbs-Thomson coefficient for Al-Cu-Si-Fe hypoeutectic alloys to be calculated, as an example of the proposed method's calculation capabilities for multicomponent alloys. The computed results are compared with thermophysical properties of binary Al-Cu and ternary Al-Cu-Si alloys found in the literature and presented as a function of the Cu solute composition.
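
    For orientation, the Gibbs-Thomson coefficient in its simplest single-component form is the ratio of the solid-liquid interfacial energy to the volumetric entropy of fusion, Gamma = sigma*T_m/L_v. The sketch below evaluates that textbook relation with assumed aluminium-like values; the paper's full multicomponent treatment, which couples Butler's formulation to Thermo-Calc through TCAPI, is not reproduced here.

    ```python
    def gibbs_thomson(sigma, t_m, latent_heat_vol):
        """Gibbs-Thomson coefficient (K*m), simplified single-component form:
        Gamma = sigma / DeltaS_v with DeltaS_v = L_v / T_m."""
        return sigma * t_m / latent_heat_vol

    # Illustrative aluminium-like numbers (assumed, for order of magnitude only)
    sigma = 0.16     # J/m^2, solid-liquid interfacial energy
    t_m = 933.0      # K, melting temperature
    l_v = 9.5e8      # J/m^3, volumetric latent heat of fusion
    print(f"Gamma = {gibbs_thomson(sigma, t_m, l_v):.2e} K*m")  # ~1.6e-7 K*m
    ```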

  20. MPACT Standard Input User's Manual, Version 2.2.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, Benjamin S.; Downar, Thomas; Fitzgerald, Andrew

    The MPACT (Michigan PArallel Characteristics based Transport) code is designed to perform high-fidelity light water reactor (LWR) analysis using whole-core pin-resolved neutron transport calculations on modern parallel-computing hardware. The code consists of several libraries which provide the functionality necessary to solve steady-state eigenvalue problems. Several transport capabilities are available within MPACT including both 2-D and 3-D Method of Characteristics (MOC). A three-dimensional whole core solution based on the 2D-1D solution method provides the capability for full core depletion calculations.

  1. A water quality index model using stepwise regression and neural networks models for the Piabanha River basin in Rio de Janeiro, Brazil

    NASA Astrophysics Data System (ADS)

    Villas Boas, M. D.; Olivera, F.; Azevedo, J. S.

    2013-12-01

    The evaluation of water quality through 'indexes' is widely used in environmental sciences. There are a number of methods available for calculating water quality indexes (WQI), usually based on site-specific parameters. In Brazil, WQIs were initially used in the 1970s and were adapted from the methodology developed in association with the National Science Foundation (Brown et al., 1970). Specifically, the WQI 'IQA/SCQA', developed by the Institute of Water Management of Minas Gerais (IGAM), is estimated based on nine parameters: Temperature Range, Biochemical Oxygen Demand, Fecal Coliforms, Nitrate, Phosphate, Turbidity, Dissolved Oxygen, pH and Electrical Conductivity. The goal of this study was to develop a model for calculating the IQA/SCQA for the Piabanha River basin in the State of Rio de Janeiro (Brazil), using only the parameters measurable by a Multiparameter Water Quality Sonde (MWQS) available in the study area. These parameters are: Dissolved Oxygen, pH and Electrical Conductivity. The use of this model will allow the water quality monitoring network in the basin to be expanded without requiring significant additional resources, since water quality measurement with a MWQS is less expensive than the laboratory analysis required for the other parameters. The water quality data used in the study were obtained by the Geological Survey of Brazil in partnership with other public institutions (i.e., universities and environmental institutes) as part of the project "Integrated Studies in Experimental and Representative Watersheds". Two models were developed to correlate the values of the three measured parameters and the IQA/SCQA values calculated based on all nine parameters. The results were evaluated according to the following validation statistics: coefficient of determination (R2), Root Mean Square Error (RMSE), Akaike information criterion (AIC) and Final Prediction Error (FPE). The first model was a linear stepwise regression between three independent variables (input) and one dependent variable (output) to establish an equation relating input to output. This model produced the following statistics: R2 = 0.85, RMSE = 6.19, AIC = 0.65 and FPE = 1.93. The second model was a Feedforward Neural Network with one tan-sigmoid hidden layer (4 neurons) and one linear output layer. The neural network was trained based on a backpropagation algorithm using the input as predictors and the output as target. The following statistics were found: R2 = 0.95, RMSE = 4.86, AIC = 0.33 and FPE = 1.39. The second model produced a better fit than the first one, having a greater R2 and smaller RMSE, AIC and FPE. The better performance of the second model can be attributed to the fact that water quality parameters often exhibit nonlinear behavior, and neural networks are capable of representing nonlinear relationships efficiently, while the regression is limited to linear relationships. References: Brown, R.M., McLelland, N.I., Deininger, R.A., Tozer, R.G. 1970. A Water Quality Index - Do we dare? Water & Sewage Works, October: 339-343.
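
    A compact reproduction of the two-model comparison might look like the scikit-learn sketch below. The data are synthetic stand-ins (the basin measurements are not public), and the network mirrors the described architecture: one tan-sigmoid hidden layer with 4 neurons and a linear output.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import r2_score, mean_squared_error

    rng = np.random.default_rng(0)
    # Placeholder data: dissolved oxygen, pH, conductivity -> WQI (synthetic)
    X = rng.uniform([2, 6, 50], [10, 9, 500], size=(200, 3))
    y = 5 * X[:, 0] - 3 * X[:, 1] + 0.02 * X[:, 2] + rng.normal(0, 2, 200)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    linear = LinearRegression().fit(X_tr, y_tr)
    mlp = MLPRegressor(hidden_layer_sizes=(4,), activation="tanh",
                       solver="lbfgs", max_iter=5000,
                       random_state=0).fit(X_tr, y_tr)

    for name, model in [("regression", linear), ("neural net", mlp)]:
        pred = model.predict(X_te)
        print(f"{name}: R2 = {r2_score(y_te, pred):.2f}, "
              f"RMSE = {mean_squared_error(y_te, pred) ** 0.5:.2f}")
    ```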

  2. Orbit/attitude estimation with LANDSAT Landmark data

    NASA Technical Reports Server (NTRS)

    Hall, D. L.; Waligora, S.

    1979-01-01

    The use of LANDSAT landmark data for orbit/attitude and camera bias estimation was studied. The preliminary results of these investigations are presented. The Goddard Trajectory Determination System (GTDS) error analysis capability was used to perform error analysis studies. A number of questions were addressed, including parameter observability and sensitivity, and the effects on the solve-for parameter errors of data span, density, and distribution, and a priori covariance weighting. The use of the GTDS differential correction capability with actual landmark data was examined. The rms line and element observation residuals were studied as a function of the solve-for parameter set, a priori covariance weighting, force model, attitude model and data characteristics. Sample results are presented. Finally, verification and preliminary system evaluation of the LANDSAT NAVPAK system for sequential (extended Kalman filter) estimation of orbit and camera bias parameters is given.

  3. Modeling Aircraft Position and Conservatively Calculating Airspace Violations for an Autonomous Collision Awareness System for Unmanned Aerial Systems

    NASA Astrophysics Data System (ADS)

    Ueunten, Kevin K.

    With the scheduled 30 September 2015 integration of Unmanned Aerial Systems (UAS) into the national airspace, the Federal Aviation Administration (FAA) is concerned with UAS capabilities to sense and avoid conflicts. Since the operator is outside the cockpit, the proposed collision awareness plugin (CAPlugin), based on probability and error propagation, conservatively predicts potential conflicts with other aircraft and airspaces, thus increasing the operator's situational awareness. The conflict predictions are calculated using a forward state estimator (FSE) and a conflict calculator. Predicting an aircraft's position, modeled as a mixed Gaussian distribution, is the FSE's responsibility. Furthermore, the FSE supports aircraft engaged in the following three flight modes: free flight, flight path following and orbits. The conflict calculator uses the FSE result to calculate the conflict probability between an aircraft and airspace or another aircraft. Finally, the CAPlugin determines the highest conflict probability and warns the operator. In addition to discussing the FSE free flight, FSE orbit and the airspace conflict calculator, this thesis describes how each algorithm is implemented and tested. Lastly, two simulations demonstrate the CAPlugin's capabilities.
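
    One way to realize this kind of conservative conflict check is to sample the Gaussian position estimate against an airspace volume. The sketch below is an illustration under assumed geometry, not the CAPlugin's actual algorithm: it estimates the probability that an aircraft with a 2-D Gaussian position estimate lies inside a circular restricted airspace.

    ```python
    import numpy as np

    def airspace_conflict_probability(mean, cov, center, radius, n=100_000, seed=0):
        """Monte Carlo estimate of P(aircraft inside a circular airspace).
        mean/cov: Gaussian position estimate; center/radius: airspace disk (m)."""
        rng = np.random.default_rng(seed)
        samples = rng.multivariate_normal(mean, cov, size=n)
        inside = np.linalg.norm(samples - np.asarray(center), axis=1) <= radius
        return inside.mean()

    # Aircraft estimated 1.2 km east of a 1 km-radius airspace, 500 m uncertainty
    p = airspace_conflict_probability(mean=[1200.0, 0.0],
                                      cov=[[500.0**2, 0.0], [0.0, 500.0**2]],
                                      center=[0.0, 0.0], radius=1000.0)
    print(f"conflict probability ~ {p:.3f}")
    ```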

  4. Mitigation of Engine Inlet Distortion Through Adjoint-Based Design

    NASA Technical Reports Server (NTRS)

    Ordaz, Irian; Rallabhandi, Sriram; Nielsen, Eric J.; Diskin, Boris

    2017-01-01

    The adjoint-based design capability in FUN3D is extended to allow efficient gradient-based optimization and design of concepts with highly integrated aero-propulsive systems. A circumferential distortion calculation, along with the derivatives needed to perform adjoint-based design, have been implemented in FUN3D. This newly implemented distortion calculation can be used not only for design but also to drive the existing mesh adaptation process and reduce the error associated with the fan distortion calculation. The design capability is demonstrated by the shape optimization of an in-house aircraft concept equipped with an aft fuselage propulsor. The optimization objective is the minimization of flow distortion at the aerodynamic interface plane of this aft fuselage propulsor.

  5. Application of DYNA3D in large scale crashworthiness calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benson, D.J.; Hallquist, J.O.; Igarashi, M.

    1986-01-01

    This paper presents an example of an automobile crashworthiness calculation. Based on our experiences with the example calculation, we make recommendations to those interested in performing crashworthiness calculations. The example presented in this paper was supplied by Suzuki Motor Co., Ltd., and provided a significant shakedown for the new large deformation shell capability of the DYNA3D code. 15 refs., 3 figs.

  6. Advanced materials for 193-nm resists

    NASA Astrophysics Data System (ADS)

    Ushirogouchi, Tohru; Asakawa, Koji; Shida, Naomi; Okino, Takeshi; Saito, Satoshi; Funaki, Yoshinori; Takaragi, Akira; Tsutsumi, Kentaro; Nakano, Tatsuya

    2000-06-01

    Acrylate monomers containing alicyclic side chains featuring a series of polar substituent groups were taken as model compounds. Solubility parameters were calculated for the corresponding acrylate polymers. These acrylate monomers were synthesized using a novel aerobic oxidation reaction employing N-hydroxyphthalimide (NHPI) as a catalyst, and then polymerized. These reactions were confirmed to be applicable to the mass production of those compounds. The calculation results agreed with the hydrophilic parameters measured experimentally. Moreover, the relationship between the resist performance and the above-mentioned solubility parameter has been studied. As a result, a correlation between the resist performance and the calculated solubility parameter was observed. Finally, 0.13-micron patterns, based on the 1G DRAM design rule, could be successfully fabricated by optimizing the solubility parameter and the resist composition.

  7. Diagnostic capability of scanning laser polarimetry with and without enhanced corneal compensation and optical coherence tomography.

    PubMed

    Benítez-del-Castillo, Javier; Martinez, Antonio; Regi, Teresa

    2011-01-01

    To compare the abilities of the current commercially available versions of scanning laser polarimetry (SLP) and optical coherence tomography (OCT), SLP-variable corneal compensation (VCC), SLP-enhanced corneal compensation (ECC), and high-definition (HD) OCT, in discriminating between healthy eyes and those with early-to-moderate glaucomatous visual field loss. Healthy volunteers and patients with glaucoma who met the eligibility criteria were consecutively enrolled in this prospective, cross-sectional, observational study. Subjects underwent complete eye examination, automated perimetry, SLP-ECC, SLP-VCC, and HD-OCT. Scanning laser polarimetry parameters were recalculated in 90-degree segments (quadrants) of the calculation circle for comparison. Areas under the receiver operating characteristic curve (AUROCs) were calculated for every parameter in order to compare the ability of each imaging modality to differentiate between normal and glaucomatous eyes. Fifty-five normal volunteers (mean age 59.1 years) and 33 patients with glaucoma (mean age 63.8 years) were enrolled. Average visual field mean deviation was -6.69 dB (95% confidence interval -8.07 to -5.31) in the glaucoma group. The largest AUROCs were associated with the nerve fiber indicator (0.880 and 0.888) for the SLP-VCC and SLP-ECC, respectively, and with the average thickness in the HD-OCT (0.897). The best performing indices for the SLP-VCC, SLP-ECC, and HD-OCT gave similar AUROCs, showing moderate diagnostic accuracy in patients with early to moderate glaucoma. Further studies are needed to evaluate the ability of these technologies to discriminate between normal and glaucomatous eyes.
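
    The AUROC comparison underlying these results is easy to reproduce with scikit-learn; the values below are synthetic placeholders for the per-eye measurements (55 healthy, 33 glaucomatous, as in the study).

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(1)
    labels = np.r_[np.zeros(55, dtype=int), np.ones(33, dtype=int)]  # 0=healthy

    # Synthetic stand-ins for two parameters (e.g., NFI and average thickness);
    # a real analysis would load the measured values per eye.
    nfi = np.r_[rng.normal(20, 8, 55), rng.normal(45, 12, 33)]
    avg_thickness = np.r_[rng.normal(95, 10, 55), rng.normal(75, 12, 33)]

    print("AUROC (NFI):", round(roc_auc_score(labels, nfi), 3))
    # Thickness decreases with disease, so score with its negative
    print("AUROC (avg thickness):", round(roc_auc_score(labels, -avg_thickness), 3))
    ```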

  8. Assessing flow paths in a karst aquifer based on multiple dye tracing tests using stochastic simulation and the MODFLOW-CFP code

    NASA Astrophysics Data System (ADS)

    Assari, Amin; Mohammadi, Zargham

    2017-09-01

    Karst systems show high spatial variability of hydraulic parameters over small distances and this makes their modeling a difficult task with several uncertainties. Interconnections of fractures have a major role on the transport of groundwater, but many of the stochastic methods in use do not have the capability to reproduce these complex structures. A methodology is presented for the quantification of tortuosity using the single normal equation simulation (SNESIM) algorithm and a groundwater flow model. A training image was produced based on the statistical parameters of fractures and then used in the simulation process. The SNESIM algorithm was used to generate 75 realizations of the four classes of fractures in a karst aquifer in Iran. The results from six dye tracing tests were used to assign hydraulic conductivity values to each class of fractures. In the next step, the MODFLOW-CFP and MODPATH codes were consecutively implemented to compute the groundwater flow paths. The 9,000 flow paths obtained from the MODPATH code were further analyzed to calculate the tortuosity factor. Finally, the hydraulic conductivity values calculated from the dye tracing experiments were refined using the actual flow paths of groundwater. The key outcomes of this research are: (1) a methodology for the quantification of tortuosity; (2) hydraulic conductivities, that are incorrectly estimated (biased low) with empirical equations that assume Darcian (laminar) flow with parallel rather than tortuous streamlines; and (3) an understanding of the scale-dependence and non-normal distributions of tortuosity.
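
    The tortuosity factor referred to above is conventionally the ratio of the length traveled along a flow path to the straight-line distance between its endpoints. A minimal computation over MODPATH-style coordinate sequences could look like the following sketch (the example path is synthetic).

    ```python
    import numpy as np

    def tortuosity(path):
        """Tortuosity = traveled length / straight-line distance.
        path: (n, 3) array of particle positions along one flow path."""
        path = np.asarray(path, dtype=float)
        steps = np.diff(path, axis=0)
        traveled = np.linalg.norm(steps, axis=1).sum()
        straight = np.linalg.norm(path[-1] - path[0])
        return traveled / straight

    # A synthetic wiggly path from (0, 0, 0) to roughly (10, 0, 0)
    t = np.linspace(0, 10, 200)
    path = np.c_[t, np.sin(2 * t), 0.1 * np.cos(3 * t)]
    print(f"tortuosity ~ {tortuosity(path):.2f}")  # > 1 for non-straight paths
    ```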

  9. Friction coefficient of skin in real-time.

    PubMed

    Sivamani, Raja K; Goodman, Jack; Gitis, Norm V; Maibach, Howard I

    2003-08-01

    Friction studies are useful in quantitatively investigating the skin surface. Previous studies utilized different apparatuses and materials for these investigations, but there was no real-time test parameter control or monitoring. Our studies incorporated the commercially available UMT Series Micro-Tribometer, a tribology instrument that permits real-time monitoring and calculation of the important parameters in friction studies, increasing the accuracy over previous tribology and friction measurement devices used on skin. Our friction tests were performed on four healthy volunteers and on abdominal skin samples. A stainless steel ball was pressed onto the skin at a pre-set load and then moved across the skin at a constant velocity of 5 mm/min. The UMT continuously monitored the friction force of the skin and the normal force of the ball to calculate the friction coefficient in real-time. Tests investigated the applicability of Amontons' law, the impact of increased and decreased hydration, and the effect of the application of moisturizers. The friction coefficient depends on the normal load applied, and Amontons' law does not provide an accurate description for the skin surface. Application of water to the skin increased the friction coefficient and application of isopropyl alcohol decreased it. Fast acting moisturizers immediately increased the friction coefficient, but did not have the prolonged effect of the slow, long lasting moisturizers. The UMT is capable of making real-time measurements on the skin and can be used as an effective tool to study friction properties. Results from the UMT measurements agree closely with theory regarding the skin surface.
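
    The real-time coefficient reported by such an instrument is simply the per-sample ratio of the friction force to the normal force. A stripped-down version of that calculation, with synthetic force traces standing in for the UMT output, is sketched below.

    ```python
    import numpy as np

    def friction_coefficient(friction_force, normal_force):
        """Per-sample friction coefficient mu = F_friction / F_normal."""
        friction_force = np.asarray(friction_force, dtype=float)
        normal_force = np.asarray(normal_force, dtype=float)
        return friction_force / normal_force

    # Synthetic traces: ~0.2 N normal load, mu drifting around 0.4
    t = np.linspace(0, 60, 600)  # 60 s test
    fn = 0.2 + 0.005 * np.random.default_rng(2).normal(size=t.size)
    ff = (0.4 + 0.02 * np.sin(0.3 * t)) * fn
    mu = friction_coefficient(ff, fn)
    print(f"mean mu = {mu.mean():.2f} +/- {mu.std():.2f}")
    ```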

  10. Steering characteristic of an articulated bus under quasi steady maneuvering

    NASA Astrophysics Data System (ADS)

    Ubaidillah, Setiawan, Budi Agus; Aridharma, Airlangga Putra; Lenggana, Bhre Wangsa; Caesar, Bernardus Placenta Previo

    2018-02-01

    Articulated buses have been preferred as public transportation modes due to their operational capacity. Therefore, passenger safety must be the priority of this public service vehicle. This research focused on the analytical approach to the steering characteristics of an articulated bus when it maneuvered steadily. Such a turning condition can be regarded as a stability parameter of the bus for preliminary handling assessment. The analytical approach employed the kinematic relationship between the front and rear bodies as well as the steering capabilities. A quasi-steady model was developed to determine steering parameters such as turning radius, oversteer, and understeer. The mathematical model was useful for determining both the understeer and oversteer coefficients. The dimensions of the articulated bus followed a commonly used bus as utilized in Trans Jakarta buses. Based on the simulation, for one minimum center of the body, the turning radius was calculated to be about 8.8 m and 7.6 m at a steady turning speed of 10 km/h. In the neutral condition, the minimum road radius should be 6.5 m at 10 km/h and 6.9 m at 40 km/h. For two centers of the body under the oversteer condition, the front body has a turning radius of 8.8 m, while the rear body has a turning radius of 9.8 m, both at a turning speed of 40 km/h. The other steering parameters were discussed accordingly.
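
    As a quick sanity check on numbers of this kind, the single-track (bicycle) approximation gives the low-speed turning radius directly from the wheelbase and steer angle, R ~ L/tan(delta). The sketch below uses that simplified relation with an assumed wheelbase; it is not the paper's full two-body articulated model.

    ```python
    import math

    def turning_radius(wheelbase_m, steer_angle_deg):
        """Low-speed kinematic turning radius of a single-track model."""
        return wheelbase_m / math.tan(math.radians(steer_angle_deg))

    # Assumed front-body wheelbase of 5.5 m at a few steer angles
    for delta in (20, 30, 40):
        print(f"steer {delta} deg -> R = {turning_radius(5.5, delta):.1f} m")
    ```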

  11. Extension of TOPAS for the simulation of proton radiation effects considering molecular and cellular endpoints

    NASA Astrophysics Data System (ADS)

    Polster, Lisa; Schuemann, Jan; Rinaldi, Ilaria; Burigo, Lucas; McNamara, Aimee L.; Stewart, Robert D.; Attili, Andrea; Carlson, David J.; Sato, Tatsuhiko; Ramos Méndez, José; Faddegon, Bruce; Perl, Joseph; Paganetti, Harald

    2015-07-01

    The aim of this work is to extend a widely used proton Monte Carlo tool, TOPAS, towards the modeling of relative biological effect (RBE) distributions in experimental arrangements as well as patients. TOPAS provides a software core which users configure by writing parameter files to, for instance, define application specific geometries and scoring conditions. Expert users may further extend TOPAS scoring capabilities by plugging in their own additional C++ code. This structure was utilized for the implementation of eight biophysical models suited to calculate proton RBE. As far as physics parameters are concerned, four of these models are based on the proton linear energy transfer, while the others are based on DNA double strand break induction and the frequency-mean specific energy, lineal energy, or delta electron generated track structure. The biological input parameters for all models are typically inferred from fits of the models to radiobiological experiments. The model structures have been implemented in a coherent way within the TOPAS architecture. Their performance was validated against measured experimental data on proton RBE in a spread-out Bragg peak using V79 Chinese Hamster cells. This work is an important step in bringing biologically optimized treatment planning for proton therapy closer to the clinical practice as it will allow researchers to refine and compare pre-defined as well as user-defined models.

  12. Extension of TOPAS for the simulation of proton radiation effects considering molecular and cellular endpoints

    PubMed Central

    Polster, Lisa; Schuemann, Jan; Rinaldi, Ilaria; Burigo, Lucas; McNamara, Aimee L.; Stewart, Robert D.; Attili, Andrea; Carlson, David J.; Sato, Tatsuhiko; Méndez, José Ramos; Faddegon, Bruce; Perl, Joseph; Paganetti, Harald

    2015-01-01

    The aim of this work is to extend a widely used proton Monte Carlo tool, TOPAS, towards the modeling of relative biological effect (RBE) distributions in experimental arrangements as well as patients. TOPAS provides a software core which users configure by writing parameter files to, for instance, define application specific geometries and scoring conditions. Expert users may further extend TOPAS scoring capabilities by plugging in their own additional C++ code. This structure was utilized for the implementation of eight biophysical models suited to calculate proton RBE. As far as physics parameters are concerned, four of these models are based on the proton linear energy transfer (LET), while the others are based on DNA Double Strand Break (DSB) induction and the frequency-mean specific energy, lineal energy, or delta electron generated track structure. The biological input parameters for all models are typically inferred from fits of the models to radiobiological experiments. The model structures have been implemented in a coherent way within the TOPAS architecture. Their performance was validated against measured experimental data on proton RBE in a spread-out Bragg peak using V79 Chinese Hamster cells. This work is an important step in bringing biologically optimized treatment planning for proton therapy closer to the clinical practice as it will allow researchers to refine and compare pre-defined as well as user-defined models. PMID:26061666

  13. Parameter Calibration of GTN Damage Model and Formability Analysis of 22MnB5 in Hot Forming Process

    NASA Astrophysics Data System (ADS)

    Ying, Liang; Liu, Wenquan; Wang, Dantong; Hu, Ping

    2017-11-01

    Hot forming of high strength steel at elevated temperatures is an attractive technology to reduce the weight of the vehicle body. The mechanical behavior of boron steel 22MnB5 strongly depends on the variation of temperature, which makes the process design more difficult. In this paper, the Gurson-Tvergaard-Needleman (GTN) model is used to study the formability of 22MnB5 sheet at different temperatures. Firstly, the rheological behavior of 22MnB5 is analyzed through a series of hot tensile tests at a temperature range of 600-800 °C. Then, a detailed process to calibrate the damage parameters is given based on the response surface methodology and a genetic algorithm method. The GTN model, together with the calibrated damage parameters, is then implemented to simulate the deformation and damage evolution of 22MnB5 in the high-temperature Nakazima test. The capability of the GTN model as a suitable tool to evaluate the sheet formability is confirmed by comparing experimental and calculated results. Finally, as a practical application, the forming limit diagram of 22MnB5 at 700 °C is constructed using the Nakazima simulation and the Marciniak-Kuczynski (M-K) model, respectively. Comparing the predicted results of these two approaches with the experimental ones shows that the simulation integrating the GTN model is more reliable.

  14. [Mathematical modeling and experimental validation of macrostate quality expression for multicomponents in Chinese materia medica].

    PubMed

    He, Fuyuan; Deng, Kaiwen; Shi, Jilian; Liu, Wenlong; Pi, Fengjuan

    2011-11-01

    To establish a unified multicomponent quality system for Chinese materia medica (CMM) that bridges macrostate mathematical model parameters of material quality and microstate component concentrations. Based on the laws of thermodynamics, state functions describing the macrostate quality of CMM were established. A validation test was carried out using an alcohol extract of Radix Rhozome (AERR) as the model drug: its enthalpy of combustion was determined, its entropy and information capability were assayed by chromatographic fingerprint, and the apparent biologic macrostate parameters were then calculated. Biologic macrostate mathematical models for CMM quality control were established with the apparent equilibrium constant, biologic enthalpy, Gibbs free energy, and biologic entropy as parameters. The total molarity of the 10 batches of AERR was 0.1534 mmol x g(-1) with an RSD of 28.26%; the averages of the apparent equilibrium constant, biologic enthalpy, Gibbs free energy, and biologic entropy were 0.03965, 8005 J x mol(-1), -2.408 x 10(7) J x mol(-1), and -8.078 x 10(4) J x K(-1), with RSDs of 6.020%, 1.860%, 42.32%, and 42.31%, respectively. The macrostate quality models can represent the intrinsic quality of a multicomponent dynamic system such as CMM, characterizing it as a whole (the forest) rather than through individual components (the trees).

  15. Optimization of GATE and PHITS Monte Carlo code parameters for spot scanning proton beam based on simulation with FLUKA general-purpose code

    NASA Astrophysics Data System (ADS)

    Kurosu, Keita; Das, Indra J.; Moskvin, Vadim P.

    2016-01-01

    Spot scanning, owing to its superior dose-shaping capability, provides unsurpassed dose conformity, in particular for complex targets. However, the robustness of the delivered dose distribution and prescription has to be verified. Monte Carlo (MC) simulation has the potential to generate significant advantages for high-precision particle therapy, especially for media containing inhomogeneities. However, the choice of computational parameters in the MC simulation codes GATE, PHITS and FLUKA, previously established for uniform scanning proton beams, needs to be evaluated for spot scanning. This means that the relationship between the input parameters and the calculation results should be carefully scrutinized. The objective of this study was, therefore, to determine the optimal parameters for the spot scanning proton beam for both the GATE and PHITS codes by using data from FLUKA simulation as a reference. The proton beam scanning system of the Indiana University Health Proton Therapy Center was modeled in FLUKA, and the geometry was subsequently and identically transferred to GATE and PHITS. Although the beam transport is managed by the spot scanning system, the spot location is always set at the center of a water phantom of 600 × 600 × 300 mm3, which is placed after the treatment nozzle. The percentage depth dose (PDD) is computed along the central axis using 0.5 × 0.5 × 0.5 mm3 voxels in the water phantom. The PDDs and the proton ranges obtained with several computational parameters are then compared to those of FLUKA, and optimal parameters are determined from the accuracy of the proton range, suppressed dose deviation, and computational time minimization. Our results indicate that the optimized parameters are different from those for uniform scanning, suggesting that a single gold standard for setting computational parameters for any proton therapy application cannot be determined, since the impact of the parameter settings depends on the proton irradiation technique. We therefore conclude that the parameters must be customized with reference to the optimized parameters of the corresponding irradiation technique in order to achieve artifact-free MC simulation for use in computational experiments and clinical treatments.
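
    When tuning MC parameters against a reference PDD, the usual figures of merit are the proton range (for example the distal depth of 80% of the peak dose, R80) and a point-wise dose deviation. The helper below sketches both for depth-dose arrays like those scored here; the curves are idealized Gaussian stand-ins, and R80 as the range metric is our assumption, since the abstract does not name one.

    ```python
    import numpy as np

    def r80(depth_mm, dose):
        """Distal depth (mm) at which dose falls to 80% of its maximum,
        interpolated on the falling edge beyond the peak."""
        dose = np.asarray(dose, float) / np.max(dose)
        i_peak = int(np.argmax(dose))
        distal_d = dose[i_peak:][::-1]  # made increasing for np.interp
        distal_z = np.asarray(depth_mm, float)[i_peak:][::-1]
        return float(np.interp(0.8, distal_d, distal_z))

    def max_dose_deviation(dose_test, dose_ref):
        """Maximum point-wise deviation as % of the reference peak dose."""
        dose_test = np.asarray(dose_test, float)
        dose_ref = np.asarray(dose_ref, float)
        return 100.0 * np.max(np.abs(dose_test - dose_ref)) / np.max(dose_ref)

    # Idealized Bragg-peak-like curves standing in for reference and test PDDs
    z = np.arange(0.0, 300.0, 0.5)
    ref = np.exp(-((z - 200.0) / 8.0) ** 2)
    test = np.exp(-((z - 201.0) / 8.0) ** 2)
    print(f"R80(ref) = {r80(z, ref):.1f} mm, R80(test) = {r80(z, test):.1f} mm")
    print(f"max deviation = {max_dose_deviation(test, ref):.1f}% of peak dose")
    ```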

  16. The Role of the Programmable Calculator in the Medical Environment

    PubMed Central

    Winner, P.; Moller, J.

    1981-01-01

    A general approach to the establishment of programmable calculators as tools in health care is presented. We discuss capabilities, applicability, disadvantages and future trends. The specific applicability to creatinine clearance programs is also given.
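
    A creatinine clearance program of the kind mentioned is only a few lines on any programmable device. The sketch below implements the Cockcroft-Gault estimate, the common bedside formula of that era; the choice of formula is our assumption, as the abstract does not specify one.

    ```python
    def creatinine_clearance(age_years, weight_kg, serum_creatinine_mg_dl,
                             female=False):
        """Cockcroft-Gault estimated creatinine clearance (mL/min)."""
        crcl = (140 - age_years) * weight_kg / (72 * serum_creatinine_mg_dl)
        return crcl * 0.85 if female else crcl

    print(f"{creatinine_clearance(60, 72, 1.1):.0f} mL/min")  # male example
    print(f"{creatinine_clearance(60, 72, 1.1, female=True):.0f} mL/min")
    ```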

  17. Surface retention and photochemical reactivity of the diphenylether herbicide oxyfluorfen.

    PubMed

    Scrano, Laura; Bufo, Sabino A; Cataldi, Tommaso R I; Albanis, Triantafyllos A

    2004-01-01

    The photochemical behavior of oxyfluorfen [2-chloro-1-(3-ethoxy-4-nitrophenoxy)-4-(trifluoromethyl)benzene] on two Greek soils was investigated. Soils were sampled from the Nea Malgara and Preveza regions, which are characterized by different organic matter contents. Soils were spiked with the diphenyl-ether herbicide, and irradiation experiments were performed either in the laboratory with a solar simulator (xenon lamp) or outside, under natural sunlight irradiation; other soil samples were kept in the dark as controls for the retention reaction. Kinetic parameters of both retention and photochemical reactions were calculated using zero-, first- and second-order (Langmuir-Hinshelwood) equations, and the best fit was checked through statistical analysis. The soil behaviors were qualitatively similar but quantitatively different, with the soil sampled from the Nea Malgara region much more sorbent than the Preveza soil. All studied reactions followed second-order kinetics, and the photochemical reactions were influenced by the retaining capability of the soils. The contributions of the photochemical processes to the global dissipation rates were also calculated. Two main metabolites were identified as 2-chloro-1-(3-ethoxy-4-hydroxyphenoxy)-4-(trifluoromethyl)benzene and 2-chloro-1-(3-hydroxy-4-nitrophenoxy)-4-(trifluoromethyl)benzene.
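
    Fitting dissipation data to an integrated second-order rate law, as done here for both the dark and irradiated series, can be sketched with scipy; the residue values below are synthetic placeholders, not the paper's measurements.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def second_order(t, c0, k):
        """Integrated second-order decay: C(t) = C0 / (1 + k*C0*t)."""
        return c0 / (1.0 + k * c0 * t)

    # Synthetic residue data (mg/kg) over days, standing in for measured values
    t = np.array([0, 2, 5, 10, 20, 40, 60], dtype=float)
    c = np.array([10.0, 8.4, 6.9, 5.1, 3.4, 2.0, 1.5])

    (c0, k), cov = curve_fit(second_order, t, c, p0=(10.0, 0.01))
    resid = c - second_order(t, c0, k)
    r2 = 1 - resid.var() / c.var()
    print(f"C0 = {c0:.2f} mg/kg, k = {k:.4f} kg mg^-1 day^-1, R^2 = {r2:.3f}")
    ```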

  18. Self-aligning exoskeleton hip joint: Kinematic design with five revolute, three prismatic and one ball joint.

    PubMed

    Beil, Jonas; Marquardt, Charlotte; Asfour, Tamim

    2017-07-01

    Kinematic compatibility is of paramount importance in wearable robotic and exoskeleton design. Misalignments between exoskeletons and anatomical joints of the human body result in interaction forces that make wearing the exoskeleton uncomfortable and even dangerous for the human. In this paper, we present a kinematically compatible design of an exoskeleton hip to reduce kinematic incompatibilities, so-called macro- and micro-misalignments, between the human's and exoskeleton's joint axes, which are caused by inter-subject variability and articulation. The resulting design consists of five revolute, three prismatic and one ball joint. Design parameters such as range of motion and joint velocities are calculated based on the analysis of human motion data acquired by motion capture systems. We show that the resulting design is capable of self-aligning to the human hip joint in all three anatomical planes during operation and can be adapted along the dorsoventral and mediolateral axes prior to operation. Calculation of the forward kinematics and FEM simulation considering kinematic and musculoskeletal constraints demonstrated sufficient mobility and stiffness of the system regarding the range of motion, angular velocity and torque admissibility needed to provide 50% assistance for an 80 kg person.

  19. Extension of the TDCR model to compute counting efficiencies for radionuclides with complex decay schemes.

    PubMed

    Kossert, K; Cassette, Ph; Carles, A Grau; Jörg, G; Gostomski, Christoph Lierse V; Nähle, O; Wolf, Ch

    2014-05-01

    The triple-to-double coincidence ratio (TDCR) method is frequently used to measure the activity of radionuclides decaying by pure β emission or electron capture (EC). Some radionuclides with more complex decays have also been studied, but accurate calculations of decay branches which are accompanied by many coincident γ transitions have not yet been investigated. This paper describes recent extensions of the model to make efficiency computations for more complex decay schemes possible. In particular, the MICELLE2 program that applies a stochastic approach of the free parameter model was extended. With an improved code, efficiencies for β(-), β(+) and EC branches with up to seven coincident γ transitions can be calculated. Moreover, a new parametrization for the computation of electron stopping powers has been implemented to compute the ionization quenching function of 10 commercial scintillation cocktails. In order to demonstrate the capabilities of the TDCR method, the following radionuclides are discussed: (166m)Ho (complex β(-)/γ), (59)Fe (complex β(-)/γ), (64)Cu (β(-), β(+), EC and EC/γ) and (229)Th in equilibrium with its progenies (decay chain with many α, β and complex β(-)/γ transitions). © 2013 Published by Elsevier Ltd.
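
    At the heart of the free-parameter model is the link between the mean number of photoelectrons and the triple and double coincidence efficiencies of a symmetric three-PMT counter. The monoenergetic sketch below illustrates only that core; production codes such as MICELLE2 integrate over the beta spectrum and apply the ionization quenching function, both omitted here.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    def efficiencies(m):
        """Triple and double coincidence efficiencies of a symmetric 3-PMT
        counter for a mean of m photoelectrons shared equally (Poisson model)."""
        p = 1.0 - np.exp(-m / 3.0)       # detection probability in one PMT
        eff_t = p ** 3                    # all three PMTs fire
        eff_d = 3 * p ** 2 - 2 * p ** 3   # at least two PMTs fire
        return eff_t, eff_d

    def solve_free_parameter(tdcr_measured):
        """Find m such that eff_T / eff_D equals the measured TDCR ratio."""
        f = lambda m: np.divide(*efficiencies(m)) - tdcr_measured
        return brentq(f, 1e-3, 100.0)

    m = solve_free_parameter(0.977)
    eff_t, eff_d = efficiencies(m)
    print(f"m = {m:.1f} photoelectrons, eff_D = {eff_d:.4f}")
    # activity estimate would then be (double-coincidence count rate) / eff_D
    ```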

  20. The ITER ICRF Antenna Design with TOPICA

    NASA Astrophysics Data System (ADS)

    Milanesio, Daniele; Maggiora, Riccardo; Meneghini, Orso; Vecchi, Giuseppe

    2007-11-01

    TOPICA (Torino Polytechnic Ion Cyclotron Antenna) code is an innovative tool for the 3D/1D simulation of Ion Cyclotron Radio Frequency (ICRF), i.e. accounting for antennas in a realistic 3D geometry and with an accurate 1D plasma model [1]. The TOPICA code has been deeply parallelized and has been already proved to be a reliable tool for antennas design and performance prediction. A detailed analysis of the 24 straps ITER ICRF antenna geometry has been carried out, underlining the strong dependence and asymmetries of the antenna input parameters due to the ITER plasma response. We optimized the antenna array geometry dimensions to maximize loading, lower mutual couplings and mitigate sheath effects. The calculated antenna input impedance matrices are TOPICA results of a paramount importance for the tuning and matching system design. Electric field distributions have been also calculated and they are used as the main input for the power flux estimation tool. The designed optimized antenna is capable of coupling 20 MW of power to plasma in the 40 -- 55 MHz frequency range with a maximum voltage of 45 kV in the feeding coaxial cables. [1] V. Lancellotti et al., Nuclear Fusion, 46 (2006) S476-S499

  1. Study on the medical meteorological forecast of the number of hypertension inpatients based on SVR

    NASA Astrophysics Data System (ADS)

    Zhai, Guangyu; Chai, Guorong; Zhang, Haifeng

    2017-06-01

    The purpose of this study is to build a hypertension prediction model by examining the meteorological factors for hypertension incidence. The method selects the standardized data of relative humidity, air temperature, visibility, wind speed and air pressure of Lanzhou from 2010 to 2012 (calculating the maximum, minimum and average values over 5-day units) as the input variables of Support Vector Regression (SVR), and the standardized data of hypertension incidence of the same period as the output dependent variables, obtaining the optimal prediction parameters by a cross-validation algorithm; then, by SVR learning and training, an SVR forecast model for hypertension incidence is built. The result shows that the hypertension prediction model is composed of 15 input independent variables, the training accuracy is 0.005, and the final error is 0.0026389. The forecast accuracy based on the SVR model is 97.1429%, which is higher than that of a statistical forecast equation and a neural network prediction method. It is concluded that the SVR model provides a new method for hypertension prediction, with simple calculation, small error, good fitting of historical samples and strong independent-sample forecast capability.
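
    A compact version of this pipeline, selecting SVR hyperparameters by cross-validation as described, might read as follows; the hospital admission series is not public, so the data are synthetic stand-ins.

    ```python
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.model_selection import GridSearchCV
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(3)
    # 15 meteorological predictors (5-day max/min/mean of 5 variables), synthetic
    X = rng.normal(size=(200, 15))
    y = X[:, 0] - 0.5 * X[:, 4] + 0.3 * X[:, 9] + rng.normal(0, 0.2, 200)

    model = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
    grid = GridSearchCV(model,
                        {"svr__C": [1, 10, 100],
                         "svr__epsilon": [0.01, 0.1, 0.5]},
                        cv=5, scoring="r2")
    grid.fit(X, y)
    print("best params:", grid.best_params_,
          "CV R2:", round(grid.best_score_, 3))
    ```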

  2. Analytical Approach in DeCoM

    NASA Technical Reports Server (NTRS)

    Patel, Deepak

    2011-01-01

    Many papers describe a loop heat pipe (LHP) as an overall system, but few give detail on the condenser section. The DeCoM (Deepak Condenser Model) method utilizes user-set initial parameters to simulate a condenser by calculating the interactions between the fluid and the wall. Equations are derived for two sections of the condenser: a two-phase section and a subcooled (liquid) section. All equations are based upon conservation of energy, from which fluid temperature and fluid quality values are solved. To solve for the heat transfer between the fluid and the wall in the two-phase section, the Lockhart-Martinelli correlation method was implemented as the solution approach. For the liquid phase, the Reynolds number was used to differentiate the flow state, either turbulent or laminar, and the Nusselt number was used to solve for the film coefficient. To represent these calculations for both sections, a flow chart is presented to display the execution process of DeCoM. The benefit of DeCoM is that it is capable of performing preliminary analysis without requiring a license or extensive user knowledge of condensers.
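
    The liquid-section logic (the Reynolds number selects the regime, the Nusselt number gives the film coefficient) reduces to a few lines. In the sketch below, Nu = 3.66 for fully developed laminar flow and the Dittus-Boelter correlation for turbulent flow are standard textbook choices; the paper does not name its correlations, so treat both, and the property values, as assumptions.

    ```python
    import math

    def film_coefficient(m_dot, d, mu, k, cp, cooling=True):
        """Single-phase film coefficient h (W/m^2/K) in a round tube.
        Laminar: Nu = 3.66 (constant wall T); turbulent: Dittus-Boelter."""
        re = 4.0 * m_dot / (math.pi * d * mu)  # Reynolds number
        pr = cp * mu / k                        # Prandtl number
        if re < 2300.0:                         # laminar regime
            nu = 3.66
        else:                                   # turbulent regime
            n = 0.3 if cooling else 0.4
            nu = 0.023 * re ** 0.8 * pr ** n
        return nu * k / d

    # Illustrative liquid properties in a 3 mm condenser line (assumed values)
    h = film_coefficient(m_dot=2e-3, d=3e-3, mu=1.3e-4, k=0.5, cp=4700.0)
    print(f"h ~ {h:.0f} W/m^2/K")
    ```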

  3. A general low frequency acoustic radiation capability for NASTRAN

    NASA Technical Reports Server (NTRS)

    Everstine, G. C.; Henderson, F. M.; Schroeder, E. A.; Lipman, R. R.

    1986-01-01

    A new capability called NASHUA is described for calculating the radiated acoustic sound pressure field exterior to a harmonically-excited arbitrary submerged 3-D elastic structure. The surface fluid pressures and velocities are first calculated by coupling a NASTRAN finite element model of the structure with a discretized form of the Helmholtz surface integral equation for the exterior fluid. After the fluid impedance is calculated, most of the required matrix operations are performed using the general matrix manipulation package (DMAP) available in NASTRAN. Far field radiated pressures are then calculated from the surface solution using the Helmholtz exterior integral equation. Other output quantities include the maximum sound pressure levels in each of the three coordinate planes, the rms and average surface pressures and normal velocities, the total radiated power and the radiation efficiency. The overall approach is illustrated and validated using known analytic solutions for submerged spherical shells subjected to both uniform and nonuniform applied loads.

  4. Changing the formula of residents' work hours in internal medicine: moving from "years in training" to "hours in training".

    PubMed

    Mansi, Ishak A

    2011-03-01

    In a recent report, the Institute of Medicine recommended more restrictions on residents' working hours. Several problems exist with a system that places a weekly limit on resident duty hours: (1) it assumes the presence of a linear relationship between hours of work and patient safety; (2) it fails to consider differences in intensity among programs; and (3) it does not address increases in the scientific content of medicine, and it places the burden of enforcing the duty hour limits on the Accreditation Council for Graduate Medical Education. An innovative method of calculating credit hours for graduate medical education would shift the focus from "years of residency" to "hours of residency." For example, internal medicine residents would be requested to complete 8640 total training hours (assuming 60 hours per week for 48 weeks annually, i.e., 60 × 48 × 3 = 8640 hours over 3 years) instead of the traditional 3 years. This method of counting training hours is used by other professions, such as the Intern Development Program of the National Council of Architectural Registration Boards. The proposed approach would allow residents and program directors to pace training based on individual capabilities. Standards for resident education should include the average number of patients treated in each setting (inpatient or outpatient). A possible set of "multipliers" based on these parameters, and possibly others such as resident evaluation, is devised to calculate the "final adjusted accredited hours" that count toward graduation. Substituting "years of training" with "hours of training" may resolve many of the concerns with the current residency education model, as well as adapt to the demands of residents' personal lives. It also may allow residents to pace their training according to their capabilities and learning styles, and contribute to reflective learning and better quality education.

  5. Plan delivery quality assurance for CyberKnife: Statistical process control analysis of 350 film-based patient-specific QAs.

    PubMed

    Bellec, J; Delaby, N; Jouyaux, F; Perdrieux, M; Bouvier, J; Sorel, S; Henry, O; Lafond, C

    2017-07-01

    Robotic radiosurgery requires plan delivery quality assurance (DQA), but there has never been a published comprehensive analysis of a patient-specific DQA process in a clinic. We proposed to evaluate 350 consecutive film-based patient-specific DQAs using statistical process control. We evaluated the performance of the process to propose achievable tolerance criteria for DQA validation, and we sought to identify suboptimal DQAs using control charts. DQAs were performed on a CyberKnife-M6 using Gafchromic-EBT3 films. The signal-to-dose conversion was performed using a multichannel correction and a scanning protocol that combined measurement and calibration in a single scan. The DQA analysis comprised a gamma-index analysis at 3%/1.5mm and a separate evaluation of spatial and dosimetric accuracy of the plan delivery. Each parameter was plotted on a control chart and control limits were calculated. A capability index (Cpm) was calculated to evaluate the ability of the process to produce results within specifications. The analysis of capability showed that a gamma pass rate of 85% at 3%/1.5mm was highly achievable as an acceptance criterion for DQA validation using a film-based protocol (Cpm>1.33). 3.4% of DQAs were outside a control limit of 88% for gamma pass rate. The analysis of the out-of-control DQAs helped identify a dosimetric error in our institute for a specific treatment type. We have defined initial tolerance criteria for DQA validations. We have shown that the implementation of a film-based patient-specific DQA protocol with the use of control charts is an effective method to improve patient treatment safety on CyberKnife. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
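
    The control-chart and capability arithmetic used in such an analysis is straightforward to reproduce: an individuals chart takes its limits from the average moving range, and the Taguchi index Cpm penalizes deviation of the mean from a target. The sketch below applies a one-sided variant to synthetic gamma pass rates; the 85% specification limit echoes the paper's criterion, but the data and target value are made up.

    ```python
    import numpy as np

    def individuals_limits(x):
        """Individuals (X) chart limits from the average moving range."""
        x = np.asarray(x, float)
        mr_bar = np.mean(np.abs(np.diff(x)))
        center = x.mean()
        return center - 2.66 * mr_bar, center, center + 2.66 * mr_bar

    def cpm_lower(x, lsl, target):
        """One-sided (lower-spec) Taguchi capability index:
        Cpm = (mean - LSL) / (3 * sqrt(var + (mean - target)^2))."""
        x = np.asarray(x, float)
        tau = np.sqrt(x.var(ddof=1) + (x.mean() - target) ** 2)
        return (x.mean() - lsl) / (3 * tau)

    rng = np.random.default_rng(4)
    pass_rates = np.clip(rng.normal(96, 2.5, 350), 0, 100)  # synthetic (%)
    lcl, center, ucl = individuals_limits(pass_rates)
    print(f"LCL = {lcl:.1f}%, mean = {center:.1f}%")
    print(f"Cpm (LSL=85%, target=100%) = {cpm_lower(pass_rates, 85, 100):.2f}")
    ```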

  6. MODTRAN3: Suitability as a flux-divergence code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, G.P.; Chetwynd, J.H.; Wang, J.

    1995-04-01

    The Moderate Resolution Atmospheric Radiance and Transmittance Model (MODTRAN3) is the developmental version of MODTRAN and MODTRAN2. The Geophysics Directorate, Phillips Laboratory, released a beta version of this model in October 1994. It encompasses all the capabilities of LOWTRAN7, the historic 20 cm⁻¹ resolution (full width at half maximum, FWHM) radiance code, but incorporates a much more sensitive molecular band model with 2 cm⁻¹ resolution. The band model is based directly upon the HITRAN spectral parameters, including both temperature and pressure (line shape) dependencies. Validation against full Voigt line-by-line calculations (e.g., FASCODE) has shown excellent agreement. In addition, simple timing runs demonstrate potential improvement of more than a factor of 100 for a typical 500 cm⁻¹ spectral interval and comparable vertical layering. Not only is MODTRAN an excellent band model for "full path" calculations (that is, radiance and/or transmittance from point A to point B), but it replicates layer-specific quantities to a very high degree of accuracy. Such layer quantities, derived from ratios and differences of longer path MODTRAN calculations from point A to adjacent layer boundaries, can be used to provide inversion algorithm weighting functions or similarly formulated quantities. One of the most exciting new applications is the rapid calculation of reliable IR cooling rates, including species, altitude, and spectral distinctions, as well as the standard spectrally integrated quantities. Comparisons with prior line-by-line cooling rate calculations are excellent, and the techniques can be extended to incorporate global climatologies of both standard and trace atmospheric species.

  7. Accelerator shield design of KIPT neutron source facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhong, Z.; Gohar, Y.

    Argonne National Laboratory (ANL) of the United States and Kharkov Institute of Physics and Technology (KIPT) of Ukraine have been collaborating on the design development of a neutron source facility at KIPT utilizing an electron-accelerator-driven subcritical assembly. Electron beam power is 100 kW, using 100 MeV electrons. The facility is designed to perform basic and applied nuclear research, produce medical isotopes, and train young nuclear specialists. The biological shield of the accelerator building is designed to reduce the biological dose to less than 0.5-mrem/hr during operation. The main source of the biological dose is the photons and the neutrons generated by interactions of leaked electrons from the electron gun and accelerator sections with the surrounding concrete and accelerator materials. The Monte Carlo code MCNPX serves as the calculation tool for the shield design, due to its capability to solve coupled electron, photon, and neutron transport problems. The direct photon dose can be tallied by MCNPX calculation, starting with the leaked electrons. However, it is difficult to accurately tally the neutron dose directly from the leaked electrons. The neutron yield per electron from the interactions with the surrounding components is less than 0.01 neutron per electron. This causes difficulties for Monte Carlo analyses and consumes tremendous computation time for tallying with acceptable statistics the neutron dose outside the shield boundary. To avoid these difficulties, the SOURCE and TALLYX user subroutines of MCNPX were developed for the study. The generated neutrons are banked, together with all related parameters, for a subsequent MCNPX calculation to obtain the neutron and secondary photon doses. The weight windows variance reduction technique is utilized for both neutron and photon dose calculations. Two shielding materials, i.e., heavy concrete and ordinary concrete, were considered for the shield design. The main goal is to maintain the total dose outside the shield boundary at less than 0.5-mrem/hr. The shield configuration and parameters of the accelerator building have been determined and are presented in this paper. (authors)

  8. ZMOTTO- MODELING THE INTERNAL COMBUSTION ENGINE

    NASA Technical Reports Server (NTRS)

    Zeleznik, F. J.

    1994-01-01

    The ZMOTTO program was developed to model mathematically a spark-ignited internal combustion engine. ZMOTTO is a large, general purpose program whose calculations can be established at five levels of sophistication. These five models range from an ideal cycle requiring only thermodynamic properties, to a very complex representation demanding full combustion kinetics, transport properties, and poppet valve flow characteristics. ZMOTTO is a flexible and computationally economical program based on a system of ordinary differential equations for cylinder-averaged properties. The calculations assume that heat transfer is expressed in terms of a heat transfer coefficient and that the cylinder average of kinetic plus potential energies remains constant. During combustion, the pressures of burned and unburned gases are assumed equal and their heat transfer areas are assumed proportional to their respective mass fractions. Even the simplest ZMOTTO model provides for residual gas effects, spark advance, exhaust gas recirculation, supercharging, and throttling. In the more complex models, 1) finite rate chemistry replaces equilibrium chemistry in descriptions of both the flame and the burned gases, 2) poppet valve formulas represent fluid flow instead of a zero pressure drop flow, and 3) flame propagation is modeled by mass burning equations instead of as an instantaneous process. Input to ZMOTTO is determined by the model chosen. Thermodynamic data is required for all models. Transport properties and chemical kinetics data are required only as the model complexity grows. Other input includes engine geometry, working fluid composition, operating characteristics, and intake/exhaust data. ZMOTTO accommodates a broad spectrum of reactants. The program will calculate many Otto cycle performance parameters for a number of consecutive cycles (a cycle being an interval of 720 crankangle degrees). A typical case will have a number of initial ideal cycles and progress through levels of nonideal cycles. ZMOTTO has restart capabilities and permits multicycle calculations with parameters varying from cycle to cycle. ZMOTTO is written in FORTRAN IV (IBM Level H) but has also been compiled with IBM VSFORTRAN (1977 standard). It was developed on an IBM 3033 under the TSS operating system and has also been implemented under MVS. Approximately 412K of 8 bit bytes of central memory are required in a nonpaging environment. ZMOTTO was developed in 1985.
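
    At the simplest of the five modeling levels, an ideal air-standard Otto cycle needs nothing beyond thermodynamic properties. The sketch below illustrates only that level, and is not the ZMOTTO FORTRAN: it computes state points and thermal efficiency from an assumed compression ratio and heat addition.

    ```python
    def ideal_otto(cr, t1=300.0, p1=101.325, q_in=1800.0, gamma=1.4, cv=0.718):
        """Air-standard Otto cycle state points and efficiency.
        cr: compression ratio; t1 [K]; p1 [kPa]; q_in [kJ/kg]; cv [kJ/kg/K]."""
        t2 = t1 * cr ** (gamma - 1)           # isentropic compression
        t3 = t2 + q_in / cv                   # constant-volume heat addition
        t4 = t3 / cr ** (gamma - 1)           # isentropic expansion
        eta = 1.0 - 1.0 / cr ** (gamma - 1)   # thermal efficiency
        return {"T2": t2, "T3": t3, "T4": t4, "efficiency": eta}

    for cr in (8, 10, 12):
        res = ideal_otto(cr)
        print(f"r = {cr}: eta = {res['efficiency']:.3f}, T3 = {res['T3']:.0f} K")
    ```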

  9. Dependence of Excited State Potential Energy Surfaces on the Spatial Overlap of the Kohn-Sham Orbitals and the Amount of Nonlocal Hartree-Fock Exchange in Time-Dependent Density Functional Theory.

    PubMed

    Plötner, Jürgen; Tozer, David J; Dreuw, Andreas

    2010-08-10

    Time-dependent density functional theory (TDDFT) with standard GGA or hybrid exchange-correlation functionals is not capable of describing the potential energy surface of the S1 state of Pigment Yellow 101 correctly; an additional local minimum is observed at a twisted geometry with substantial charge transfer (CT) character. To investigate the influence of nonlocal exact orbital (Hartree-Fock) exchange on the shape of the potential energy surface of the S1 state in detail, it has been computed along the twisting coordinate employing the standard BP86, B3LYP, and BHLYP xc-functionals as well as the long-range separated (LRS) exchange-correlation (xc)-functionals LC-BOP, ωB97X, ωPBE, and CAM-B3LYP and compared to RI-CC2 benchmark results. Additionally, a recently suggested Λ-parameter has been employed that measures the amount of CT in an excited state by calculating the spatial overlap of the occupied and virtual molecular orbitals involved in the transition. Here, the error in the calculated S1 potential energy curves at BP86, B3LYP, and BHLYP can be clearly related to the Λ-parameter, i.e., to the extent of charge transfer. Additionally, it is demonstrated that the CT problem is largely alleviated when the BHLYP xc-functional is employed, although it still exhibits a weak tendency to underestimate the energy of CT states. The situation improves drastically when LRS-functionals are employed within TDDFT excited state calculations. All tested LRS-functionals give qualitatively the correct potential energy curves of the energetically lowest excited states of P. Y. 101 along the twisting coordinate. While LC-BOP and ωB97X overcorrect the CT problem and now tend to give too large excitation energies compared to other non-CT states, ωPBE and CAM-B3LYP are in excellent agreement with the RI-CC2 results, with respect to both the correct shape of the potential energy curve as well as the absolute values of the calculated excitation energies.

  10. A kinetic and thermochemical database for organic sulfur and oxygen compounds.

    PubMed

    Class, Caleb A; Aguilera-Iparraguirre, Jorge; Green, William H

    2015-05-28

    Potential energy surfaces and reaction kinetics were calculated for 40 reactions involving sulfur and oxygen. This includes 11 H2O addition, 8 H2S addition, 11 hydrogen abstraction, 7 beta scission, and 3 elementary tautomerization reactions, which are potentially relevant in the combustion and desulfurization of sulfur compounds found in various fuel sources. Geometry optimizations and frequencies were calculated for reactants and transition states using B3LYP/CBSB7, and potential energies were calculated using CBS-QB3 and CCSD(T)-F12a/VTZ-F12. Rate coefficients were calculated using conventional transition state theory, with corrections for internal rotations and tunneling. Additionally, thermochemical parameters were calculated for each of the compounds involved in these reactions. With few exceptions, rate parameters calculated using the two potential energy methods agreed reasonably well, with calculated activation energies differing by less than 5 kJ mol(-1). The computed rate coefficients and thermochemical parameters are expected to be useful for kinetic modeling.
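
    The conventional transition state theory step reduces, at a single temperature, to the Eyring expression k = (kB*T/h)*exp(-DeltaG‡/RT). The sketch below evaluates it for a unimolecular step with a placeholder barrier; the internal-rotation and tunneling corrections applied in the paper are omitted.

    ```python
    import math

    KB = 1.380649e-23   # J/K, Boltzmann constant
    H = 6.62607015e-34  # J*s, Planck constant
    R = 8.314462618     # J/mol/K, gas constant

    def tst_rate(delta_g_dagger_kj_mol, temperature):
        """Eyring TST rate constant (s^-1) for a unimolecular step:
        k = (kB*T/h) * exp(-DeltaG_act / (R*T)). No tunneling correction."""
        dg = delta_g_dagger_kj_mol * 1e3
        return (KB * temperature / H) * math.exp(-dg / (R * temperature))

    # Placeholder 120 kJ/mol barrier over combustion-relevant temperatures
    for t in (800, 1000, 1200):
        print(f"T = {t} K: k = {tst_rate(120.0, t):.3e} s^-1")
    ```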

  11. Aromaticity Parameters in Asphalt Binders Calculated From Profile Fitting X-ray Line Spectra Using Pearson VII and Pseudo-Voigt Functions

    NASA Astrophysics Data System (ADS)

    Shirokoff, J.; Lewis, J. Courtenay

    2010-10-01

    The aromaticity and crystallite parameters in asphalt binders are calculated from data obtained after profile fitting x-ray line spectra using Pearson VII and pseudo-Voigt functions. The results are presented and discussed in terms of the peak profile fit parameters used, the peak deconvolution procedure, and the differences in calculated values that can arise owing to peak shape and additional peaks present in the pattern. These results have implications for the evaluation and performance of asphalt binders used in highway and road applications.
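
    Both profile functions named in the title have closed forms, sketched below under the usual parameterizations (peak centre, width, and either a mixing fraction eta or a shape exponent m); this is a generic illustration, not the authors' fitting code.

        import numpy as np

        def pseudo_voigt(x, x0, fwhm, eta, amplitude=1.0):
            """Weighted sum of a Lorentzian and a Gaussian of equal FWHM (0 <= eta <= 1)."""
            sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
            gauss = np.exp(-0.5 * ((x - x0) / sigma) ** 2)
            lorentz = 1.0 / (1.0 + ((x - x0) / (fwhm / 2.0)) ** 2)
            return amplitude * (eta * lorentz + (1.0 - eta) * gauss)

        def pearson_vii(x, x0, w, m, amplitude=1.0):
            """Pearson VII profile: m = 1 is Lorentzian; m -> infinity approaches Gaussian."""
            return amplitude * (1.0 + ((x - x0) / w) ** 2 * (2.0 ** (1.0 / m) - 1.0)) ** (-m)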

  12. Studies on transonic Double Circular Arc (DCA) profiles of axial flow compressor calculations of profile design

    NASA Astrophysics Data System (ADS)

    Rugun, Y.; Zhaoyan, Q.

    1986-05-01

    In this paper, the concepts and methods for the design of high-Mach-number airfoils for axial flow compressors are described. Correlation equations are provided for the main parameters, such as airfoil and cascade geometry, stream parameters, and wake characteristic parameters of the compressor. Several curves and charts are provided by the authors for obtaining the total pressure loss coefficients of the cascade with a simplified calculation method. Test results and calculated values are compared and found to be in good agreement.

  13. Formulation of a parametric systems design framework for disaster response planning

    NASA Astrophysics Data System (ADS)

    Mma, Stephanie Weiya

    The occurrence of devastating natural disasters in the past several years has prompted communities, responding organizations, and governments to seek ways to improve disaster preparedness capabilities locally, regionally, nationally, and internationally. A holistic approach to design used in the aerospace and industrial engineering fields enables efficient allocation of resources through applied parametric changes within a particular design to improve performance metrics to selected standards. In this research, this methodology is applied to disaster preparedness, using a community's time to restoration after a disaster as the response metric. A review of the responses to Hurricane Katrina and the 2010 Haiti earthquake, among other prominent disasters, provides observations leading to some current capability benchmarking. A need for holistic assessment and planning exists for communities, but the current response planning infrastructure lacks a standardized framework and standardized assessment metrics. Within the humanitarian logistics community, several different metrics exist, enabling quantification and measurement of a particular area's vulnerability. These metrics, combined with design and planning methodologies from related fields, such as engineering product design, military response planning, and business process redesign, provide insight and a framework from which to begin developing a methodology to enable holistic disaster response planning. The developed methodology was applied to the communities of Shelby County, TN and pre-Hurricane-Katrina Orleans Parish, LA. Available literature and reliable media sources provide information about the values of system parameters within the decomposition of the community aspects and about relationships among the parameters. The community was modeled as a system dynamics model and was tested in the implementation of two-, five-, and ten-year improvement plans for Preparedness, Response, and Development capabilities, and combinations of these capabilities. For Shelby County and for Orleans Parish, the Response improvement plan reduced restoration time the most. For the combined capabilities, Shelby County experienced the greatest reduction in restoration time with the implementation of Development and Response capability improvements, while for Orleans Parish it was the Preparedness and Response capability improvements. Optimization of restoration time over the community parameters was performed using a Particle Swarm Optimization algorithm. Fifty different optimized restoration times were generated using the Particle Swarm Optimization algorithm and ranked using the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS). The optimization results indicate that the greatest reduction in restoration time for a community is achieved with a particular combination of different parameter values rather than the maximization of each parameter.
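
    As a sketch of the optimization step only, the fragment below implements a generic particle swarm search over normalized community parameters against a placeholder objective; the study's real objective is the system-dynamics restoration-time model, and all names and coefficients here are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)

        def restoration_time(x):
            """Placeholder objective; the study evaluates a system-dynamics model."""
            return float(np.sum((x - 0.7) ** 2))

        def pso(objective, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
            pos = rng.uniform(0.0, 1.0, (n_particles, dim))  # parameters scaled to [0, 1]
            vel = np.zeros_like(pos)
            pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
            gbest = pbest[np.argmin(pbest_val)].copy()
            for _ in range(iters):
                r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
                vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
                pos = np.clip(pos + vel, 0.0, 1.0)
                val = np.array([objective(p) for p in pos])
                better = val < pbest_val
                pbest[better], pbest_val[better] = pos[better], val[better]
                gbest = pbest[np.argmin(pbest_val)].copy()
            return gbest, float(pbest_val.min())

        best_params, best_time = pso(restoration_time, dim=5)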

  14. Numerical Study of the Features of Ti-Nb Alloy Crystallization during Selective Laser Sintering

    NASA Astrophysics Data System (ADS)

    Dmitriev, A. I.; Nikonov, A. Y.

    2016-07-01

    The demand for implants with individualized shapes requires the development of new methods and approaches to their production. The obvious advantages of additive technologies and selective laser sintering are the capabilities to form both the external shape of the product and its internal structure. Titanium-niobium beta alloys, which have appeared recently and are attractive from the perspective of biomechanical compatibility, have mechanical properties similar to those of cortical bone. This paper studies the processes occurring at different stages of laser sintering using computer simulation on the atomic scale. The effect of cooling rate on the resulting crystal structure of the Ti-Nb alloy was analysed, and the dependence of the tensile strength of sintered particles on heating time and cooling rate was studied. It was shown that the main parameter determining the adhesive properties of sintered particles is the contact area obtained during the sintering process. The simulation results can both help define the technological parameters of the process that provide the desired mechanical properties of the resulting products and serve as a necessary basis for calculations at larger scale levels in order to study the behaviour of actually used implants.

  15. Radiative Transfer Model for Operational Retrieval of Cloud Parameters from DSCOVR-EPIC Measurements

    NASA Astrophysics Data System (ADS)

    Yang, Y.; Molina Garcia, V.; Doicu, A.; Loyola, D. G.

    2016-12-01

    The Earth Polychromatic Imaging Camera (EPIC) onboard the Deep Space Climate Observatory (DSCOVR) measures the radiance in the backscattering region. To make sure that all details in the backward glory are covered, a large number of streams is required by a standard radiative transfer model based on the discrete ordinates method; even the use of the delta-M scaling and the TMS correction does not substantially reduce the number of streams. The aim of this work is to analyze the capability of a fast radiative transfer model to retrieve cloud parameters operationally from EPIC measurements. The radiative transfer model combines the discrete ordinates method with the matrix exponential for the computation of radiances and the matrix operator method for the calculation of the reflection and transmission matrices. Standard acceleration techniques, such as the use of the normalized right and left eigenvectors, the telescoping technique, the Pade approximation, and the successive-order-of-scattering approximation, are implemented. In addition, the model may compute the reflection matrix of the cloud by means of asymptotic theory and may use the equivalent Lambertian cloud model. The various approximations are analyzed from the point of view of efficiency and accuracy.
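
    To illustrate the core numerical idea only: for a homogeneous layer the discrete-ordinates equations reduce to a linear system dI/dtau = A I, whose layer response is a matrix exponential. The toy sketch below uses a random placeholder matrix, not a physical phase-function discretization.

        import numpy as np
        from scipy.linalg import expm

        rng = np.random.default_rng(0)
        n_streams = 4
        A = 0.1 * rng.normal(size=(n_streams, n_streams))  # placeholder system matrix
        intensity_in = np.array([1.0, 0.5, 0.2, 0.1])      # discrete-ordinate intensities

        # Propagate through a layer of optical depth tau = 2 in a single step.
        intensity_out = expm(2.0 * A) @ intensity_in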

  16. Master-slave system with force feedback based on dynamics of virtual model

    NASA Technical Reports Server (NTRS)

    Nojima, Shuji; Hashimoto, Hideki

    1994-01-01

    A master-slave system can extend the manipulating and sensing capabilities of a human operator to a remote environment. But the master-slave system has two serious problems: one is the mechanically large impedance of the system; the other is the mechanical complexity of the slave for complex remote tasks. These two problems reduce the efficiency of the system. If the slave has local intelligence, it can help the human operator by using its strengths, such as fast calculation and large memory. The authors suggest that the slave be a dextrous hand with many degrees of freedom able to manipulate an object of known shape. It is further suggested that the dimensions of the remote work space be shared by the human operator and the slave. The effect of the large impedance of the system can be reduced in a virtual model: a physical model constructed in a computer with physical parameters as if it were in the real world. A method to determine the damping parameter dynamically for the virtual model is proposed. Experimental results show that this virtual model is better than a virtual model with fixed damping.
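
    A minimal sketch of such a virtual model is given below: a mass-spring-damper integrated in the computer, with the damping coefficient chosen at each step. The critical-damping rule shown is an illustrative assumption; the paper's actual rule for determining the damping parameter dynamically is not reproduced here.

        import math

        def step_virtual_model(x, v, f_operator, m=1.0, k=50.0, dt=1e-3):
            """Advance the virtual mass-spring-damper one time step."""
            b = 2.0 * math.sqrt(m * k)        # damping chosen dynamically (here: critical)
            a = (f_operator - b * v - k * x) / m
            v_new = v + a * dt                # semi-implicit Euler integration
            x_new = x + v_new * dt
            return x_new, v_new

        x, v = 0.0, 0.0
        for _ in range(1000):                 # constant operator force as a test input
            x, v = step_virtual_model(x, v, f_operator=5.0)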

  17. Fast reactions of aluminum and explosive decomposition products in a post-detonation environment

    NASA Astrophysics Data System (ADS)

    Tappan, Bryce C.; Manner, Virginia W.; Lloyd, Joseph M.; Pemberton, Steven J.

    2012-03-01

    In order to determine the reaction behavior of Al in RDX or HMX/cast-cured binder formulations shortly after the passage of the detonation, a series of cylinder tests was performed on formulations comprising varying binder systems and either 3.5 μm spherical Al or LiF (an inert salt with a molecular weight and density similar to Al). In these studies, both detonation velocity and cylinder expansion velocity are measured in order to determine exactly how and when Al contributes to the explosive event, particularly in the presence of oxidizing/energetic binders. The U.S. Army Research, Development and Engineering Laboratory at Picatinny has recently coined the term "combined effects" explosives for materials such as these, as they demonstrate both high metal-pushing capability and high blast ability. This study is aimed at developing a fundamental understanding of the reaction of Al with explosive decomposition products, where both the detonation and the early post-detonation environment are analyzed. Reaction rates of Al metal are investigated via comparison with predicted performance based on thermoequilibrium calculations. The detonation velocities, wall velocities, and parameters at the CJ plane are among the quantities discussed.

  18. A similarity based learning framework for interim analysis of outcome prediction of acupuncture for neck pain.

    PubMed

    Zhang, Gang; Liang, Zhaohui; Yin, Jian; Fu, Wenbin; Li, Guo-Zheng

    2013-01-01

    Chronic neck pain is a common disorder in modern society. Acupuncture has long been administered as an alternative therapy for treating chronic pain, with its effectiveness supported by the latest clinical evidence. However, potential differences in effectiveness across syndrome types remain in question due to the limits of sample size and statistical methods. We applied machine learning methods in an attempt to solve this problem. Through a multi-objective sorting of subjective measurements, outstanding samples are selected to form the base of our kernel-oriented model. By calculating similarities between the sample of interest and the base samples, we are able to make full use of the information contained in the known samples, which is especially effective in the case of a small sample set. To tackle the parameter selection problem in similarity learning, we propose an ensemble of models with slightly different parameter settings to obtain stronger learning. The experimental result on a real data set shows that, compared to some previous well-known methods, the proposed algorithm is capable of discovering the underlying difference among syndrome types and is feasible for predicting the effective tendency in clinical trials with large samples.

  19. Machine learning-based kinetic modeling: a robust and reproducible solution for quantitative analysis of dynamic PET data

    NASA Astrophysics Data System (ADS)

    Pan, Leyun; Cheng, Caixia; Haberkorn, Uwe; Dimitrakopoulou-Strauss, Antonia

    2017-05-01

    A variety of compartment models are used for the quantitative analysis of dynamic positron emission tomography (PET) data. Traditionally, these models use an iterative fitting (IF) method to minimize the least-squares difference between measured and calculated values over time, which may encounter problems such as overfitting of model parameters and a lack of reproducibility, especially when handling noisy or erroneous data. In this paper, a machine learning (ML) based kinetic modeling method is introduced, which can fully utilize a historical reference database to build a moderate kinetic model that deals with noisy data directly rather than trying to smooth the noise in the image. Also, owing to the database, the presented method is capable of automatically adjusting the models using a multi-thread grid parameter searching technique. Furthermore, a candidate competition concept is proposed to combine the advantages of the ML and IF modeling methods, which can find a balance between fitting the historical data and fitting the unseen target curve. The machine learning based method provides a robust and reproducible solution that is user-independent for VOI-based and pixel-wise quantitative analysis of dynamic PET data.
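
    For context, the iterative-fitting baseline that the ML method is compared against can be as simple as a one-tissue compartment model fitted by nonlinear least squares. The sketch below is that generic baseline with a synthetic input function; the names, rate constants, and noise level are assumptions, not the paper's data.

        import numpy as np
        from scipy.optimize import curve_fit

        t = np.linspace(0.0, 60.0, 120)            # minutes
        cp = 10.0 * t * np.exp(-t / 2.0)           # assumed plasma input function

        def one_tissue(t, k1, k2):
            """C_t(t) = K1 * [exp(-k2 t) convolved with C_p(t)], discrete approximation."""
            dt = t[1] - t[0]
            return k1 * np.convolve(np.exp(-k2 * t), cp)[: t.size] * dt

        rng = np.random.default_rng(2)
        measured = one_tissue(t, 0.3, 0.1) + rng.normal(0.0, 0.2, t.size)
        (k1_fit, k2_fit), _ = curve_fit(one_tissue, t, measured, p0=[0.1, 0.05])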

  20. Thermal affected zone obtained in machining steel XC42 by high-power continuous CO2 laser

    NASA Astrophysics Data System (ADS)

    Jebbari, Neila; Jebari, Mohamed Mondher; Saadallah, Faycal; Tarrats-Saugnac, Annie; Bennaceur, Raouf; Longuemard, Jean Paul

    2008-09-01

    A high-power continuous CO2 laser (4 kW) can provide energy capable of causing melting or even, with a special treatment of the surface, vaporization of an XC42-steel sample. The laser-metal interaction causes an energetic machining mechanism, which takes place according to the assumption that the melting front precedes the laser beam, such that the laser beam interacts with a preheated surface whose temperature is near the melting point. The proposed model, obtained from the energy balance during the interaction time, concerns the case of machining with an inert gas jet and permits the calculation of the characteristic parameters of the groove according to the characteristic laser parameters (absorbed laser energy and impact diameter of the laser beam) and allows the estimation of the quantity of the energy causing the thermal affected zone (TAZ). This energy is equivalent to the heat quantity that must be injected in the heat propagation equation. In the case of a semi-infinite medium with fusion temperature at the surface, the resolution of the heat propagation equation gives access to the width of the TAZ.
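
    As a hedged sketch of that last step: for a semi-infinite solid whose surface is held near the fusion temperature, the classical solution of the heat propagation equation is an erfc profile, and the TAZ width is the depth at which a threshold temperature is reached. The material constants and threshold below are assumed values, not those fitted in the paper.

        import math
        from scipy.optimize import brentq

        ALPHA = 1.2e-5       # thermal diffusivity of steel, m^2/s (assumed)
        T_FUSION = 1500.0    # surface held near the melting point, deg C
        T_AMBIENT = 20.0
        T_THRESHOLD = 720.0  # assumed metallurgical-change temperature for XC42, deg C

        def temperature(depth, t):
            """T(x, t) for a semi-infinite solid with fixed surface temperature."""
            return T_AMBIENT + (T_FUSION - T_AMBIENT) * math.erfc(depth / (2.0 * math.sqrt(ALPHA * t)))

        def taz_width(t):
            """Depth at which the threshold temperature is reached after time t."""
            return brentq(lambda x: temperature(x, t) - T_THRESHOLD, 1e-9, 0.1)

        print(f"TAZ width after 10 ms: {taz_width(0.01) * 1e6:.0f} um")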

  1. Application of unsteady aeroelastic analysis techniques on the national aerospace plane

    NASA Technical Reports Server (NTRS)

    Pototzky, Anthony S.; Spain, Charles V.; Soistmann, David L.; Noll, Thomas E.

    1988-01-01

    A presentation provided at the Fourth National Aerospace Plane Technology Symposium held in Monterey, California, in February 1988 is discussed. The objective is to provide current results of ongoing investigations to develop a methodology for predicting the aerothermoelastic characteristics of NASP-type (hypersonic) flight vehicles. Several existing subsonic and supersonic unsteady aerodynamic codes applicable to the hypersonic class of flight vehicles that are generally available to the aerospace industry are described. These codes were evaluated by comparing calculated results with measured wind-tunnel aeroelastic data. The agreement was quite good in the subsonic speed range but mixed in the supersonic range. In addition, a future endeavor to extend the aeroelastic analysis capability to hypersonic speeds is outlined. An investigation to identify the critical parameters affecting the aeroelastic characteristics of a hypersonic vehicle, to define and understand the various flutter mechanisms, and to develop trends for the important parameters using a simplified finite element model of the vehicle is summarized. This study showed the value of performing inexpensive and timely aeroelastic wind-tunnel tests, using simple to complex models representative of the NASP configurations, to expand the experimental data base required for code validation; root boundary conditions are also discussed.

  2. Machine learning-based kinetic modeling: a robust and reproducible solution for quantitative analysis of dynamic PET data.

    PubMed

    Pan, Leyun; Cheng, Caixia; Haberkorn, Uwe; Dimitrakopoulou-Strauss, Antonia

    2017-05-07

    A variety of compartment models are used for the quantitative analysis of dynamic positron emission tomography (PET) data. Traditionally, these models use an iterative fitting (IF) method to minimize the least-squares difference between measured and calculated values over time, which may encounter problems such as overfitting of model parameters and a lack of reproducibility, especially when handling noisy or erroneous data. In this paper, a machine learning (ML) based kinetic modeling method is introduced, which can fully utilize a historical reference database to build a moderate kinetic model that deals with noisy data directly rather than trying to smooth the noise in the image. Also, owing to the database, the presented method is capable of automatically adjusting the models using a multi-thread grid parameter searching technique. Furthermore, a candidate competition concept is proposed to combine the advantages of the ML and IF modeling methods, which can find a balance between fitting the historical data and fitting the unseen target curve. The machine learning based method provides a robust and reproducible solution that is user-independent for VOI-based and pixel-wise quantitative analysis of dynamic PET data.

  3. Analysis of LH Launcher Arrays (Like the ITER One) Using the TOPLHA Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maggiora, R.; Milanesio, D.; Vecchi, G.

    2009-11-26

    TOPLHA (Torino Polytechnic Lower Hybrid Antenna) code is an innovative tool for the 3D/1D simulation of Lower Hybrid (LH) antennas, i.e. accounting for realistic 3D waveguide geometry and for accurate 1D plasma models, without restrictions on waveguide shape, including curvature. This tool provides a detailed performance prediction of any LH launcher by computing the antenna scattering parameters, the current distribution, electric field maps, and power spectra for any user-specified waveguide excitation. In addition, a fully parallelized and multi-cavity version of TOPLHA permits the analysis of large and complex waveguide arrays in a reasonable simulation time. A detailed analysis of the performance of the proposed ITER LH antenna geometry has been carried out, underlining the strong dependence of the antenna input parameters on plasma conditions. A preliminary optimization of the antenna dimensions has also been accomplished. The electric current distribution on conductors, the electric field distribution at the interface with the plasma, and power spectra have been calculated as well. The analysis shows the strong capabilities of the TOPLHA code as a predictive tool and its usefulness for the detailed design of LH launcher arrays.

  4. Can we trust the calculation of texture indices of CT images? A phantom study.

    PubMed

    Caramella, Caroline; Allorant, Adrien; Orlhac, Fanny; Bidault, Francois; Asselain, Bernard; Ammari, Samy; Jaranowski, Patricia; Moussier, Aurelie; Balleyguier, Corinne; Lassau, Nathalie; Pitre-Champagnat, Stephanie

    2018-04-01

    Texture analysis is an emerging tool in the field of medical imaging analysis. However, many issues have been raised regarding its use in assessing patient images, and it is crucial to harmonize and standardize this new imaging measurement tool. This study was designed to evaluate the reliability of texture indices of CT images on a phantom, including a reproducibility study, to assess the discriminatory capacity of indices potentially relevant in CT medical images, and to determine their redundancy. For the reproducibility and discriminatory analysis, eight identical CT acquisitions were performed on a phantom including one homogeneous insert and two close heterogeneous inserts. Texture indices were selected for their high reproducibility and capability of discriminating different textures. For the redundancy analysis, 39 acquisitions of the same phantom were performed using varying acquisition parameters, and a correlation matrix was used to explore the pairwise relationships. LIFEx software was used to explore 34 different parameters, including first-order and texture indices. Only eight of the 34 indices exhibited high reproducibility and discriminated textures from each other. Skewness and kurtosis from the histogram were independent of the other six indices but were correlated with each other; the other six indices were intercorrelated to varying degrees (entropy, dissimilarity, and contrast of the co-occurrence matrix; contrast of the Neighborhood Gray Level Difference Matrix; SZE and ZLNU of the Gray-Level Size Zone Matrix). Care should be taken when using texture analysis as a tool to characterize CT images, because changes in quantitation may be primarily due to internal variability rather than real physio-pathological effects. Some textural indices appear to be sufficiently reliable and capable of discriminating close textures on CT images.

  5. Dynamic contrast-enhanced MRI versus 18F-FDG PET/CT: Which is better in differentiation between malignant and benign solitary pulmonary nodules?

    PubMed

    Feng, Feng; Qiang, Fulin; Shen, Aijun; Shi, Donghui; Fu, Aiyan; Li, Haiming; Zhang, Mingzhu; Xia, Ganlin; Cao, Peng

    2018-02-01

    To prospectively compare the discriminative capacity of dynamic contrast enhanced-magnetic resonance imaging (DCE-MRI) with that of 18F-fluorodeoxyglucose (18F-FDG) positron emission tomography/computed tomography (PET/CT) in the differentiation of malignant and benign solitary pulmonary nodules (SPNs). Forty-nine patients with SPNs were included in this prospective study. Thirty-two of the patients had malignant SPNs, while the other 17 had benign SPNs. All these patients underwent DCE-MRI and 18F-FDG PET/CT examinations. The quantitative MRI pharmacokinetic parameters, including the trans-endothelial transfer constant (Ktrans), redistribution rate constant (Kep), and fractional volume (Ve), were calculated using the Extended-Tofts Linear two-compartment model. The 18F-FDG PET/CT parameter, maximum standardized uptake value (SUVmax), was also measured. Spearman's correlations were calculated between the MRI pharmacokinetic parameters and the SUVmax of each SPN. These parameters were statistically compared between the malignant and benign nodules. Receiver operating characteristic (ROC) analyses were used to compare the diagnostic capability between the DCE-MRI and 18F-FDG PET/CT indexes. Positive correlations were found between Ktrans and SUVmax, and between Kep and SUVmax (P<0.05). There were significant differences between the malignant and benign nodules in terms of the Ktrans, Kep and SUVmax values (P<0.05). The areas under the ROC curve (AUC) of Ktrans, Kep and SUVmax between the malignant and benign nodules were 0.909, 0.838 and 0.759, respectively. The sensitivity and specificity in differentiating malignant from benign SPNs were 90.6% and 82.4% for Ktrans; 87.5% and 76.5% for Kep; and 75.0% and 70.6% for SUVmax, respectively. The sensitivity and specificity of Ktrans and Kep were higher than those of SUVmax, but there was no significant difference between them (P>0.05). DCE-MRI can be used to differentiate between benign and malignant SPNs and has the advantage of being radiation free.

  6. Dynamic contrast-enhanced MRI versus 18F-FDG PET/CT: Which is better in differentiation between malignant and benign solitary pulmonary nodules?

    PubMed Central

    Feng, Feng; Qiang, Fulin; Shen, Aijun; Shi, Donghui; Fu, Aiyan; Li, Haiming; Zhang, Mingzhu; Xia, Ganlin; Cao, Peng

    2018-01-01

    Objective: To prospectively compare the discriminative capacity of dynamic contrast enhanced-magnetic resonance imaging (DCE-MRI) with that of 18F-fluorodeoxyglucose (18F-FDG) positron emission tomography/computed tomography (PET/CT) in the differentiation of malignant and benign solitary pulmonary nodules (SPNs). Methods: Forty-nine patients with SPNs were included in this prospective study. Thirty-two of the patients had malignant SPNs, while the other 17 had benign SPNs. All these patients underwent DCE-MRI and 18F-FDG PET/CT examinations. The quantitative MRI pharmacokinetic parameters, including the trans-endothelial transfer constant (Ktrans), redistribution rate constant (Kep), and fractional volume (Ve), were calculated using the Extended-Tofts Linear two-compartment model. The 18F-FDG PET/CT parameter, maximum standardized uptake value (SUVmax), was also measured. Spearman's correlations were calculated between the MRI pharmacokinetic parameters and the SUVmax of each SPN. These parameters were statistically compared between the malignant and benign nodules. Receiver operating characteristic (ROC) analyses were used to compare the diagnostic capability between the DCE-MRI and 18F-FDG PET/CT indexes. Results: Positive correlations were found between Ktrans and SUVmax, and between Kep and SUVmax (P<0.05). There were significant differences between the malignant and benign nodules in terms of the Ktrans, Kep and SUVmax values (P<0.05). The areas under the ROC curve (AUC) of Ktrans, Kep and SUVmax between the malignant and benign nodules were 0.909, 0.838 and 0.759, respectively. The sensitivity and specificity in differentiating malignant from benign SPNs were 90.6% and 82.4% for Ktrans; 87.5% and 76.5% for Kep; and 75.0% and 70.6% for SUVmax, respectively. The sensitivity and specificity of Ktrans and Kep were higher than those of SUVmax, but there was no significant difference between them (P>0.05). Conclusions: DCE-MRI can be used to differentiate between benign and malignant SPNs and has the advantage of being radiation free. PMID:29545716

  7. Performance evaluation and optimization of the MiniPET-II scanner

    NASA Astrophysics Data System (ADS)

    Lajtos, Imre; Emri, Miklos; Kis, Sandor A.; Opposits, Gabor; Potari, Norbert; Kiraly, Beata; Nagy, Ferenc; Tron, Lajos; Balkay, Laszlo

    2013-04-01

    This paper presents results of the performance of a small animal PET system (MiniPET-II) installed at our Institute. MiniPET-II is a full ring camera that includes 12 detector modules in a single ring comprised of 1.27×1.27×12 mm3 LYSO scintillator crystals. The axial field of view and the inner ring diameter are 48 mm and 211 mm, respectively. The goal of this study was to determine the NEMA-NU4 performance parameters of the scanner. In addition, we also investigated how the calculated parameters depend on the coincidence time window (τ=2, 3 and 4 ns) and the low threshold settings of the energy window (Elt=250, 350 and 450 keV). Independent measurements supported optimization of the effective system radius and the coincidence time window of the system. We found that the optimal coincidence time window and low threshold energy window are 3 ns and 350 keV, respectively. The spatial resolution was close to 1.2 mm in the center of the FOV with an increase of 17% at the radial edge. The maximum value of the absolute sensitivity was 1.37% for a point source. Count rate tests resulted in peak values for the noise equivalent count rate (NEC) curve and scatter fraction of 14.2 kcps (at 36 MBq) and 27.7%, respectively, using the rat phantom. Numerical values of the same parameters obtained for the mouse phantom were 55.1 kcps (at 38.8 MBq) and 12.3%, respectively. The recovery coefficients of the image quality phantom ranged from 0.1 to 0.87. Altering the τ and Elt resulted in substantial changes in the NEC peak and the sensitivity while the effect on the image quality was negligible. The spatial resolution proved to be, as expected, independent of the τ and Elt. The calculated optimal effective system radius (resulting in the best image quality) was 109 mm. Although the NEC peak parameters do not compare favorably with those of other small animal scanners, it can be concluded that under normal counting situations the MiniPET-II imaging capability assures remarkably good image quality, sensitivity and spatial resolution.
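
    For reference, the noise-equivalent count rate and scatter fraction reported above follow from simple count-rate formulas, sketched below; k = 2 corresponds to randoms estimated from a delayed coincidence window, which is an assumption here since the paper's exact convention is not restated.

        def nec(trues, scatter, randoms, k=2.0):
            """Noise-equivalent count rate: NEC = T**2 / (T + S + k*R), all in cps."""
            return trues ** 2 / (trues + scatter + k * randoms)

        def scatter_fraction(trues, scatter):
            """Scatter fraction: SF = S / (S + T)."""
            return scatter / (scatter + trues)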

  8. Analysis and Optimization of the Recovered ESA Huygens Mission

    NASA Astrophysics Data System (ADS)

    Kazeminejad, Bobby

    2002-06-01

    The Huygens Probe is the ESA-provided element of the joint NASA/ESA Cassini-Huygens mission to Saturn and Titan. A recently discovered design flaw in the Huygens radio receiver onboard Cassini led to a significantly different mission geometry, redesigned and implemented by both the ESA Huygens and NASA Cassini project teams. A numerical integration of the Orbiter trajectory and the Huygens descent profile, with simplified assumptions for Probe attitude and correlated aerodynamic aspects, offered the opportunity to re-calculate key mission parameters that depend on the relative geometry and motion of the bodies. This was a crucial step in verifying that science-imposed constraints were not violated. A review of existing Titan wind and atmosphere models and their physical background led to a subsequent parametric study of their impact on the supersonic entry phase, the parachute descent, and finally the body-fixed landing coordinates of the Probe. In addition to the deterministic (nominal) Probe trajectory, it is important to quantify the influence on the results of various uncertainties that enter into the equations of motion (e.g., state vectors, physical parameters of the environment and of the Probe itself). This was done by propagating the system covariance matrix together with the nominal state vectors. A Monte Carlo technique developed to save computation time was then used to determine statistical percentiles of the key parameters. The Probe-Orbiter link geometry was characterized by evaluating the link budget and the received frequency at receiver level. In this calculation the spin of the Probe and the asymmetric gain pattern of the transmitting antennas were taken into account. The results were then used in a mathematical model that describes the tracking capability of the receiver symbol synchronizer, which allowed the loss of data during the mission to be quantified. A subsequent parametric study of different sets of mission parameters, with the goal of minimizing data losses and maximizing overall mission robustness, resulted in the recommendation to change the flyby altitude of the Orbiter from 65,000 km down to 60,000 km.

  9. Linewidth and lineshift parameters of rotation-vibration transitions of linear molecule perturbed by inert gas

    NASA Astrophysics Data System (ADS)

    Johri, Manoj; Johri, Gajendra K.; Rishishwar, Rajendra P.

    1990-12-01

    The study of spectral lineshapes is important for understanding intermolecular forces [1-5]. We have calculated the linewidth and the lineshift for different rotation-vibration transitions of linear molecules (CO and HCl) perturbed by argon, using a generalized interaction potential [4]. The Murphy-Boggs (MB) [6], Mehrotra-Boggs (MEB) [7], and perturbation theories have been used for the linewidth calculations. The lineshift parameters have been calculated using the MEB theory [7], including the phase-shift effect and ignoring Ji-to-Ji and Jf-to-Jf transitions. In these calculations the variation of the rotational constant with the vibrational quantum number has been taken into account. The calculated lineshift parameters decrease with an increase in the initial rotational quantum number Ji; they remain positive for lower values of Ji and become negative for higher values of Ji, whereas the measured [8] values are negative for all transitions. The linewidth parameters calculated using the MEB theory [7] are lower by about 15% than the measured values for CO-Ar collisions. The vibrational dependence in CO-Ar collisions shows a significant change in the lineshift. For HCl-Ar collisions, the discrepancy between the linewidth parameters calculated using the Mehrotra-Boggs theory and the measured [9] values is about 46% for the J=0-1 transition and decreases to 22% for the J=8-9 transition. The results of the perturbation theory do not show a regular variation of the linewidth parameters with rotational state. The linewidth parameters from the Murphy-Boggs theory are lower than the measured [9] values by about 50% for all transitions considered. The contribution of diabatic collisions, as included in the perturbative and Mehrotra-Boggs approaches, is found to be important. Further, if the pressure-broadening method is to be used to probe the anisotropy of intermolecular forces, the existing theoretical models and experimental techniques need to be modified.

  10. The MSRC ab initio methods benchmark suite: A measurement of hardware and software performance in the area of electronic structure methods

    NASA Astrophysics Data System (ADS)

    Feller, D. F.

    1993-07-01

    This collection of benchmark timings represents a snapshot of the hardware and software capabilities available for ab initio quantum chemical calculations at Pacific Northwest Laboratory's Molecular Science Research Center in late 1992 and early 1993. The 'snapshot' nature of these results should not be underestimated, because of the speed with which both hardware and software are changing. Even during the brief period of this study, we were presented with newer, faster versions of several of the codes. However, the deadline for completing this edition of the benchmarks precluded updating all the relevant entries in the tables. As will be discussed below, a similar situation occurred with the hardware. The timing data included in this report are subject to all the normal failures, omissions, and errors that accompany any human activity. In an attempt to mimic the manner in which calculations are typically performed, we have run the calculations with the maximum number of defaults provided by each program and a near minimum amount of memory. This approach may not produce the fastest performance that a particular code can deliver. It is not known to what extent improved timings could be obtained for each code by varying the run parameters. If sufficient interest exists, it might be possible to compile a second list of timing data corresponding to the fastest observed performance from each application, using an unrestricted set of input parameters. Improvements in I/O might have been possible by fine tuning the Unix kernel, but we resisted the temptation to make changes to the operating system. Due to the large number of possible variations in levels of operating system, compilers, speed of disks and memory, versions of applications, etc., readers of this report may not be able to exactly reproduce the times indicated. Copies of the output files from individual runs are available if questions arise about a particular set of timings.

  11. Spectral analysis of shielded gamma ray sources using precalculated library data

    NASA Astrophysics Data System (ADS)

    Holmes, Thomas Wesley; Gardner, Robin P.

    2015-11-01

    In this work, an approach has been developed for determining the intensity of a shielded source by first determining the thicknesses of three different shielding materials from a passively collected gamma-ray spectrum through comparisons with predetermined shielded spectra. These evaluations depend on the accuracy and validity of the predetermined library spectra, which were created by changing the thicknesses of the three chosen materials (lead, aluminum, and wood) used to simulate any actual shielding. Each of the spectra was generated using MCNP5 with a sufficiently large number of histories to ensure a low relative error in each channel. The materials were held in the same respective order from source to detector, and each material took on three individual thicknesses plus a null condition. This produced two separate data sets of 27 total shielding situations, and the corresponding predetermined libraries were created for each radionuclide source used. The technique used to calculate the material thicknesses implements a Levenberg-Marquardt nonlinear search that employs tri-linear interpolation within the respective predetermined libraries at each channel for the supplied unknown input spectrum. Given that the nonlinear parameters require an initial guess, the approach demonstrates first that when the correct values are input, the correct thicknesses are found. It then demonstrates that when multiple trials of random values are input for each of the nonlinear parameters, the average of the calculated solutions that successfully converge also produces the correct thicknesses. In situations with sufficient information known about the detection scenario at hand, the method was shown to behave in a manner that produces reasonable results and can serve as a good preliminary solution. This technique has the capability to be used in a variety of full-spectrum inverse analysis problems, including homeland security applications.
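
    A minimal sketch of the described search follows, assuming a small library of channel spectra on a regular thickness grid; linear interpolation on a three-dimensional regular grid is exactly the tri-linear scheme, and scipy's Levenberg-Marquardt driver stands in for the authors' implementation. All arrays here are random placeholders.

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator
        from scipy.optimize import least_squares

        grid = np.array([0.0, 1.0, 2.0, 3.0])                        # thickness grid, cm
        library = np.random.default_rng(3).random((4, 4, 4, 1024))   # channel spectra

        # Linear interpolation over three grid axes is the tri-linear scheme.
        interp = RegularGridInterpolator((grid, grid, grid), library)

        measured = library[2, 1, 3]   # pretend unknown: Pb = 2, Al = 1, wood = 3 cm

        def residuals(thicknesses):
            return interp(thicknesses)[0] - measured

        fit = least_squares(residuals, x0=[1.5, 1.5, 1.5], method="lm")
        print("estimated thicknesses (cm):", fit.x)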

  12. Building a Predictive Capability for Decision-Making that Supports MultiPEM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carmichael, Joshua Daniel

    Multi-phenomenological explosion monitoring (multiPEM) is a developing science that uses multiple geophysical signatures of explosions to better identify and characterize their sources. MultiPEM researchers seek to integrate explosion signatures together to provide stronger detection, parameter estimation, or screening capabilities between different sources or processes. This talk will address forming a predictive capability for screening waveform explosion signatures to support multiPEM.

  13. Annual Fuze Conference and Munitions Technology Symposium VI (43rd)

    DTIC Science & Technology

    1999-04-07

    part manufacture and assembly and identify the parameters that we must control through production. Analyzing the coefficients of variation and the...processing energetic materials. The extruder is equipped with four independent temperature control zones, segmented screws, a jacketed die block capable of...and has vacuum capability. Data monitoring capabilities include melt temperature and pressure, torque, screw speed, and temperatures in all of the

  14. Real-Gas Correction Factors for Hypersonic Flow Parameters in Helium

    NASA Technical Reports Server (NTRS)

    Erickson, Wayne D.

    1960-01-01

    The real-gas hypersonic flow parameters for helium have been calculated for stagnation temperatures from 0 F to 600 F and stagnation pressures up to 6,000 pounds per square inch absolute. The results of these calculations are presented in the form of simple correction factors which must be applied to the tabulated ideal-gas parameters. It has been shown that the deviations from the ideal-gas law which exist at high pressures may cause a corresponding significant error in the hypersonic flow parameters when calculated as an ideal gas. For example, the ratio of the free-stream static to stagnation pressure as calculated from the thermodynamic properties of helium for a stagnation temperature of 80 F and a pressure of 4,000 pounds per square inch absolute was found to be approximately 13 percent greater than that determined from the ideal-gas tabulation with a specific heat ratio of 5/3.
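
    For concreteness, the ideal-gas value being corrected is the usual isentropic relation with gamma = 5/3 for helium; a real-gas correction factor then multiplies the tabulated ideal value. A minimal sketch, with the quoted ~13 percent case used as an assumed example factor:

        GAMMA = 5.0 / 3.0  # monatomic helium

        def static_to_stagnation_pressure_ratio(mach: float) -> float:
            """Ideal-gas isentropic ratio p / p0 at the free-stream Mach number."""
            return (1.0 + 0.5 * (GAMMA - 1.0) * mach ** 2) ** (-GAMMA / (GAMMA - 1.0))

        ideal = static_to_stagnation_pressure_ratio(10.0)
        corrected = 1.13 * ideal  # assumed correction factor for 80 F, 4,000 psia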

  15. Equilibrium cycle pin by pin transport depletion calculations with DeCART

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kochunas, B.; Downar, T.; Taiwo, T.

    As the Advanced Fuel Cycle Initiative (AFCI) program has matured, it has become more important to utilize more advanced simulation methods. The work reported here was performed as part of the AFCI fellowship program to develop and demonstrate the capability of performing high fidelity equilibrium cycle calculations. As part of this work, a new multi-cycle analysis capability was implemented in the DeCART code, which included modifying the depletion modules to perform nuclide decay calculations, implementing an assembly shuffling pattern description, and modifying iteration schemes. During the work, stability issues were uncovered with respect to simultaneously converging the neutron flux, isotopics, and fluid density and temperature distributions in 3-D. Relaxation factors were implemented, which considerably improved the stability of the convergence. To demonstrate the capability, two core designs were utilized: a reference UOX core and a CORAIL core. Full core equilibrium cycle calculations were performed on both cores and the discharge isotopics were compared. From this comparison it was noted that the improved modeling capability was not drastically different in its prediction of the discharge isotopics when compared to 2-D single assembly or 2-D core models. For fissile isotopes such as U-235, Pu-239, and Pu-241, the relative differences were 1.91%, 1.88%, and 0.59%, respectively. While this difference may not seem large, it translates to mass differences on the order of tens of grams per assembly, which may be significant for the purposes of accounting of special nuclear material. (authors)

  16. A case study: application of statistical process control tool for determining process capability and sigma level.

    PubMed

    Chopra, Vikram; Bairagi, Mukesh; Trivedi, P; Nagar, Mona

    2012-01-01

    Statistical process control is the application of statistical methods to the measurement and analysis of process variation. Various regulatory authorities such as Validation Guidance for Industry (2011), International Conference on Harmonisation ICH Q10 (2009), the Health Canada guidelines (2009), Health Science Authority, Singapore: Guidance for Product Quality Review (2008), and International Organization for Standardization ISO-9000:2005 provide regulatory support for the application of statistical process control for better process control and understanding. In this study, risk assessments, normal probability distributions, control charts, and capability charts are employed for the selection of critical quality attributes and the determination of the normal probability distribution, statistical stability, and capability of production processes, respectively. The objective of this study is to determine tablet production process quality in the form of sigma process capability. By interpreting data and graph trends, forecasting of critical quality attributes, sigma process capability, and stability of the process were studied. The overall study contributes an assessment of the process at the sigma level with respect to out-of-specification attributes produced. Finally, the study points to an area where the application of quality improvement and quality risk assessment principles can achieve six sigma-capable processes. Statistical process control is the most advantageous tool for determining the quality of any production process, and it is new for the pharmaceutical tablet production process, where the quality control parameters act as quality assessment parameters. Application of risk assessment provides selection of critical quality attributes from among the quality control parameters. Sequential application of normality distributions, control charts, and capability analyses provides a valid statistical process control study of the process. Interpretation of such a study provides information about stability, process variability, changing trends, and quantification of process ability against defective production. Comparative evaluation of critical quality attributes by Pareto charts identifies the least capable and most variable process, which is the candidate for improvement. Statistical process control thus proves to be an important tool for six sigma-capable process development and continuous quality improvement.
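
    The sigma-level assessment described above rests on standard capability indices; the sketch below computes Cp and Cpk from sample data, with tablet-weight numbers and specification limits invented for illustration (short-term sigma level is commonly taken as 3 times Cpk).

        import numpy as np

        def capability(data, lsl, usl):
            """Process capability indices from a sample and specification limits."""
            mu, sigma = np.mean(data), np.std(data, ddof=1)
            cp = (usl - lsl) / (6.0 * sigma)                 # potential capability
            cpk = min(usl - mu, mu - lsl) / (3.0 * sigma)    # penalizes an off-centre mean
            return cp, cpk

        weights = np.random.default_rng(1).normal(250.0, 2.0, 200)  # tablet weights, mg
        cp, cpk = capability(weights, lsl=237.5, usl=262.5)
        print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}, sigma level ~ {3.0 * cpk:.1f}")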

  17. Kinematic analysis of crank -cam mechanism of process equipment

    NASA Astrophysics Data System (ADS)

    Podgornyj, Yu I.; Skeeba, V. Yu; Martynova, T. G.; Pechorkina, N. S.; Skeeba, P. Yu

    2018-03-01

    This article discusses how to determine the kinematic parameters of a crank-cam mechanism. Using the mechanism design, the authors have developed a calculation model and a calculation algorithm that allow the determination of the kinematic parameters of the mechanism, including crank displacements, angular velocities, and accelerations, as well as the angular speeds and accelerations of the driven link (rocker arm). All calculations were performed using the Mathcad mathematical package, and the results are reported as numerical values.
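
    As a hedged sketch of this kind of calculation: given a driven-link displacement law phi(theta) (here an assumed harmonic law, not the paper's cam profile), the angular velocity and acceleration of the rocker follow by differentiation at constant crank speed.

        import numpy as np

        OMEGA = 2.0 * np.pi * 2.0                            # crank speed, rad/s (assumed 2 rev/s)
        theta = np.linspace(0.0, 2.0 * np.pi, 721)           # crank angle grid
        phi = np.deg2rad(15.0) * (1.0 - np.cos(theta)) / 2.0 # assumed rocker displacement law

        dphi = np.gradient(phi, theta)                       # d(phi)/d(theta)
        d2phi = np.gradient(dphi, theta)
        rocker_velocity = dphi * OMEGA                       # rad/s
        rocker_acceleration = d2phi * OMEGA ** 2             # rad/s^2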

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Selle, J.E.

    A modification was made to the Kaufman method of calculating binary phase diagrams to permit calculation of intra-rare earth diagrams. Atomic volumes for all phases, real or hypothetical, are necessary to determine interaction parameters for calculation of complete diagrams. The procedures used to determine unknown atomic volumes are described, as are procedures for determining lattice stability parameters for unknown transformations. Results are presented on the calculation of intra-rare earth diagrams between both trivalent and divalent rare earths. 13 refs., 36 figs., 11 tabs.

  19. Enhancement of the Feature Extraction Capability in Global Damage Detection Using Wavelet Theory

    NASA Technical Reports Server (NTRS)

    Saleeb, Atef F.; Ponnaluru, Gopi Krishna

    2006-01-01

    The main objective of this study is to assess the specific capabilities of the defect energy parameter technique for global damage detection developed by Saleeb and coworkers. Feature extraction is the most important capability in any damage-detection technique; features are any parameters extracted from the processed measurement data in order to enhance damage detection. The damage feature extraction capability was studied extensively by analyzing various simulation results. The practical significance for structural health monitoring is that early detection of small defects is always desirable, so the magnitude of the changes in the structure's response due to such small defects was determined to establish the level of accuracy needed in the experimental methods. Arranging a fine, extensive sensor network to measure the required data is effectively unrestricted in simulation, but placing a large number of sensors on a real structure is difficult; therefore, an investigation was conducted using the measurements of a coarse sensor network. White and pink noise, which cover most of the frequency ranges typically encountered in common measuring devices (e.g., accelerometers and strain gauges), were added to the displacements to investigate the effect of noisy measurements on the detection technique. The noisy displacements and the noisy damage parameter values were used to study signal feature reconstruction using wavelets. The enhancement of the feature extraction capability was successfully achieved with wavelet theory.
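
    As a sketch of wavelet-based signal feature reconstruction, the fragment below denoises a measured trace with the PyWavelets package, assuming the standard universal soft-threshold rule; the wavelet family, decomposition level, and threshold are illustrative choices, not necessarily those of the study.

        import numpy as np
        import pywt

        def wavelet_denoise(signal, wavelet="db4", level=4):
            """Soft-threshold the detail coefficients, then reconstruct the signal."""
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise estimate
            thresh = sigma * np.sqrt(2.0 * np.log(signal.size))     # universal threshold
            coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
            return pywt.waverec(coeffs, wavelet)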

  20. A comparative study of gamma-ray interaction and absorption in some building materials using Zeff-toolkit

    NASA Astrophysics Data System (ADS)

    Mann, Kulwinder Singh; Heer, Manmohan Singh; Rani, Asha

    2016-07-01

    The gamma-ray shielding behaviour of a material can be investigated by determining its various interaction and energy-absorption parameters (such as mass attenuation coefficients, mass energy absorption coefficients, and the corresponding effective atomic numbers and electron densities). A literature review indicates that the effective atomic number (Zeff) has been widely used as a parameter for evaluating the effects and defects caused in a chosen material by ionising radiation (X-rays and gamma-rays). A computer program (Zeff-toolkit) has been designed to obtain the mean value of the effective atomic number calculated by three different methods. Good agreement has been observed between the results obtained with Zeff-toolkit, the Auto_Zeff software, and experimentally measured values of Zeff. The Zeff-toolkit computes effective atomic numbers for both photon interaction (Zeff,PI) and energy absorption (Zeff,En), using three methods for each; no similar program available in the literature computes both parameters simultaneously. The computed parameters have been compared and correlated over a wide energy range (0.001-20 MeV) for 10 commonly used building materials. Prominent variations of these parameters with gamma-ray photon energy have been observed, owing to the dominance of various absorption and scattering phenomena. The mean values of the two effective atomic numbers (Zeff,PI and Zeff,En) are equivalent at energies below 0.002 MeV and above 0.3 MeV, indicating the dominance of gamma-ray absorption (photoelectric and pair production) over scattering (Compton) at these energies; conversely, in the energy range 0.002-0.3 MeV, Compton scattering dominates absorption. Of the 10 chosen building materials, the 2 soils showed better shielding behaviour than the other 8 materials.
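
    For illustration, one common way to estimate an effective atomic number is a power-law average over a mixture's electron fractions, sketched below; the toolkit averages three different methods, of which this is only a representative stand-in, and the exponent is the value often quoted for the photoelectric-dominated regime.

        M_EXP = 2.94  # exponent often used in the photoelectric-dominated regime

        def z_eff_power_law(weight_fractions, z, a, m=M_EXP):
            """Power-law effective atomic number from weight fractions, Z and A per element."""
            electrons = [w * zi / ai for w, zi, ai in zip(weight_fractions, z, a)]
            total = sum(electrons)
            alpha = [e / total for e in electrons]  # electron fractions
            return sum(ai * zi ** m for ai, zi in zip(alpha, z)) ** (1.0 / m)

        # Example: water, 11.19% H and 88.81% O by weight -> Zeff close to 7.4
        print(z_eff_power_law([0.1119, 0.8881], z=[1, 8], a=[1.008, 15.999]))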
