Poggi, L A; Malizia, A; Ciparisse, J F; Gaudio, P
2016-10-01
An open issue still under investigation by several international entities working in the safety and security field for the foreseen nuclear fusion reactors is the estimation of source terms that are a hazard for operators and the public, and for the machine itself in terms of efficiency and integrity in severe accident scenarios. Source term estimation is a key safety issue to be addressed in future reactor safety assessments, and the estimates currently available are not sufficiently satisfactory. The lack of neutronic data, along with the insufficiently accurate methodologies used until now, calls for an integrated methodology for source term estimation that can provide predictions with adequate accuracy. This work proposes a complete methodology to estimate dust source terms, starting from a broad information-gathering effort. The wide range of parameters that can influence dust source term production is reduced with statistical tools using a combination of screening, sensitivity analysis, and uncertainty analysis. Finally, a preliminary, simplified methodology for predicting dust source term production in future devices is presented.
Evaluating Uncertainty in Integrated Environmental Models: A Review of Concepts and Tools
This paper reviews concepts for evaluating integrated environmental models and discusses a list of relevant software-based tools. A simplified taxonomy for sources of uncertainty and a glossary of key terms with standard definitions are provided in the context of integrated appro...
Watershed nitrogen and phosphorus balance: The upper Potomac River basin
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jaworski, N.A.; Groffman, P.M.; Keller, A.A.
1992-01-01
Nitrogen and phosphorus mass balances were estimated for the portion of the Potomac River basin watershed located above Washington, D.C. The total nitrogen (N) balance included seven input source terms, six sinks, and one 'change-in-storage' term, but was simplified to five input terms and three output terms. The phosphorus (P) balance had four input and three output terms. The estimated balances are based on watershed data from seven information sources. Major sources of nitrogen are animal waste and atmospheric deposition. The major sources of phosphorus are animal waste and fertilizer. The major sink for nitrogen is combined denitrification, volatilization, and change-in-storage. The major sink for phosphorus is change-in-storage. River exports of N and P were 17% and 8%, respectively, of the total N and P inputs. Over 60% of the N and P were volatilized or stored. The major input and output terms in the budget are estimated from direct measurements, but the change-in-storage term is calculated by difference. The factors regulating retention and storage processes are discussed and research needs are identified.
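The budget arithmetic described in the abstract above (measured inputs and outputs, with the combined sink obtained by difference) can be sketched as follows. All numbers here are hypothetical placeholders, not the published Potomac values:

```python
# Illustrative watershed nutrient balance in the spirit of the record above.
# All quantities are hypothetical (units: kt/yr); the real budget draws on
# seven information sources for the upper Potomac basin.
n_inputs = {
    "animal_waste": 60.0,
    "atmospheric_deposition": 40.0,
    "fertilizer": 25.0,
    "point_sources": 10.0,
    "biological_fixation": 15.0,
}
n_outputs = {
    "river_export": 25.0,   # measured at the basin outlet
    "crop_harvest": 30.0,
}

total_in = sum(n_inputs.values())
measured_out = sum(n_outputs.values())
# The combined denitrification/volatilization/change-in-storage sink is
# calculated by difference, as the abstract describes.
residual_sink = total_in - measured_out
export_fraction = n_outputs["river_export"] / total_in

print(f"total inputs: {total_in} kt/yr")
print(f"residual sink (by difference): {residual_sink} kt/yr")
print(f"river export fraction: {export_fraction:.0%}")
```

With these made-up numbers the river exports about 17% of total inputs, mirroring the N export fraction reported in the study.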
Source-term development for a contaminant plume for use by multimedia risk assessment models
NASA Astrophysics Data System (ADS)
Whelan, Gene; McDonald, John P.; Taira, Randal Y.; Gnanapragasam, Emmanuel K.; Yu, Charley; Lew, Christine S.; Mills, William B.
2000-02-01
Multimedia modelers from the US Environmental Protection Agency (EPA) and US Department of Energy (DOE) are collaborating to conduct a comprehensive and quantitative benchmarking analysis of four intermedia models: MEPAS, MMSOILS, PRESTO, and RESRAD. These models represent typical analytically based tools that are used in human-risk and endangerment assessments at installations containing radioactive and hazardous contaminants. The objective is to demonstrate an approach for developing an adequate source term by simplifying an existing, real-world, 90Sr plume at DOE's Hanford installation in Richland, WA, for use in a multimedia benchmarking exercise between MEPAS, MMSOILS, PRESTO, and RESRAD. Source characteristics and a release mechanism are developed and described; also described is a typical process and procedure that an analyst would follow in developing a source term for using this class of analytical tool in a preliminary assessment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chatzidakis, Stylianos; Greulich, Christopher
A cosmic ray Muon Flexible Framework for Spectral GENeration for Monte Carlo Applications (MUFFSgenMC) has been developed to support state-of-the-art cosmic ray muon tomographic applications. The flexible framework allows for easy and fast creation of source terms for popular Monte Carlo applications like GEANT4 and MCNP. This code framework simplifies the process of simulations used for cosmic ray muon tomography.
NASA Technical Reports Server (NTRS)
Yee, H. C.; Shinn, J. L.
1986-01-01
Some numerical aspects of finite-difference algorithms for nonlinear multidimensional hyperbolic conservation laws with stiff nonhomogeneous (source) terms are discussed. If the stiffness is entirely dominated by the source term, a semi-implicit shock-capturing method is proposed, provided that the Jacobian of the source terms possesses certain properties. The proposed semi-implicit method can be viewed as a variant of the Bussing and Murman point-implicit scheme with a more appropriate numerical dissipation for the computation of strong shock waves. However, if the stiffness is not solely dominated by the source terms, a fully implicit method would be a better choice. The situation is more complicated for problems in more than one dimension, and the presence of stiff source terms further complicates the solution procedures for alternating direction implicit (ADI) methods. Several alternatives are discussed. The primary motivation for constructing these schemes was to address thermally and chemically nonequilibrium flows in the hypersonic regime. Due to the unique structure of the eigenvalues and eigenvectors for fluid flows of this type, the computation can be simplified, thus providing a more efficient solution procedure than one might have anticipated.
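The point-implicit treatment of a stiff source term mentioned above can be illustrated on a scalar model problem. This is a minimal sketch, not the Bussing-Murman scheme itself: the convective part is omitted, and the linear source S(u) = -k*u with its Jacobian J = dS/du are chosen purely for illustration:

```python
# Point-implicit update for a stiff scalar source term S(u) = -k*u:
# solve (1 - dt*J) * du = dt * S(u_n) with J = dS/du, then u_{n+1} = u_n + du.
# The implicit treatment of the source keeps the update stable even when
# dt is far larger than the source time scale 1/k.
def point_implicit_step(u, dt, k):
    s = -k * u          # source evaluated at the old state
    jac = -k            # source Jacobian dS/du
    du = dt * s / (1.0 - dt * jac)
    return u + du

k = 1.0e4    # stiff rate constant (hypothetical)
dt = 1.0e-2  # time step, 100x larger than 1/k
u = 1.0
for _ in range(5):
    u = point_implicit_step(u, dt, k)
print(u)     # decays monotonically toward 0 without oscillation
```

An explicit update u + dt*S(u) with the same dt would amplify the solution by a factor of |1 - dt*k| = 99 per step; the point-implicit form instead damps it by 1/(1 + dt*k) per step.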
A microchip laser source with stable intensity and frequency used for self-mixing interferometry.
Zhang, Shaohui; Zhang, Shulian; Tan, Yidong; Sun, Liqun
2016-05-01
We present a stable 40 × 40 × 30 mm³ laser-diode (LD)-pumped microchip laser (ML) source for self-mixing interferometry, which can measure non-cooperative targets. We simplify the coupling process of the pump light in order to make its polarization and intensity robust against environmental disturbance. Thermal frequency stabilization technology is used to stabilize the laser frequency of both the LD and the ML. A frequency stability of about 1 × 10⁻⁷ and a short-term intensity fluctuation of 0.1% are achieved. The theoretical long-term displacement accuracy limited by frequency and intensity fluctuation is about 10 nm when the measuring range is 0.1 m. The linewidth of this laser is about 25 kHz, corresponding to a 12 km coherence length and a 6 km measurement range for self-mixing interference. The laser source has been fitted to a self-mixing interferometer, and it works very well.
1994-12-01
INTRODUCTION. In simplified terms, several current acquisition themes should be familiar: best value source selection, processes and metrics, continuous improvement, MIL-STD-1379D, the systems approach to training, and concurrent practices; proposed processes and metrics are placed in the contract for a training acquisition. Early identification and correction of errors are critical to software product correctness and quality. Sections include: 5 Continuous Process Improvement; 6 Training.
An extension of the Lighthill theory of jet noise to encompass refraction and shielding
NASA Technical Reports Server (NTRS)
Ribner, Herbert S.
1995-01-01
A formalism for jet noise prediction is derived that includes the refractive 'cone of silence' and other effects; outside the cone it approximates the simple Lighthill format. A key step is deferral of the simplifying assumption of uniform density in the dominant 'source' term. The result is conversion to a convected wave equation retaining the basic Lighthill source term. The main effect is to amend the Lighthill solution to allow for refraction by mean flow gradients, achieved via a frequency-dependent directional factor. A general formula for the power spectral density emitted from unit volume is developed as the Lighthill-based value multiplied by a squared 'normalized' Green's function (the directional factor), referred to a stationary point source. The convective motion of the sources, with its powerful amplifying effect, also directional, is already accounted for in the Lighthill format: wave convection and source convection are decoupled. The normalized Green's function appears to be near unity outside the refraction-dominated 'cone of silence'; this validates our long-term practice of using Lighthill-based approaches outside the cone, with extension inside via the Green's function. The function is obtained either experimentally (injected 'point' source) or numerically (computational aeroacoustics). Approximation by unity seems adequate except near the cone and except when there are shrouding jets: in that case the difference from unity quantifies the shielding effect. Further extension yields dipole and monopole source terms (cf. Morfey, Mani, and others) when the mean flow possesses density gradients (e.g., hot jets).
A simplified model of the source channel of the Leksell GammaKnife tested with PENELOPE.
Al-Dweri, Feras M O; Lallena, Antonio M; Vilches, Manuel
2004-06-21
Monte Carlo simulations using the code PENELOPE have been performed to test a simplified model of the source channel geometry of the Leksell GammaKnife. The characteristics of the radiation passing through the treatment helmets are analysed in detail. We have found that only primary particles emitted from the source with polar angles smaller than 3° with respect to the beam axis are relevant for the dosimetry of the Gamma Knife. The photon trajectories reaching the output helmet collimators at (x, y, z = 236 mm) show strong correlations between ρ = (x² + y²)^(1/2) and their polar angle θ, on one side, and between tan⁻¹(y/x) and their azimuthal angle φ, on the other. This enables us to propose a simplified model which treats the full source channel as a mathematical collimator. This simplified model produces doses in good agreement with those found for the full geometry. In the region of maximal dose, the relative differences between the two calculations are within 3% for the 18 and 14 mm helmets, and 10% for the 8 and 4 mm ones. In addition, the simplified model permits a strong reduction (larger than a factor of 15) in the computational time.
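The efficiency gain of the "mathematical collimator" idea above comes from the fact that only a tiny angular fraction of source emissions matters. A minimal sketch of that angular cut (assuming, for illustration, isotropic emission over the forward hemisphere; all numbers are hypothetical, not PENELOPE geometry):

```python
import math
import random

# Sketch of the angular cut behind the "mathematical collimator" model:
# keep only photons emitted within 3 degrees of the beam axis, since the
# abstract reports that only those are dosimetrically relevant.
random.seed(1)
theta_max = math.radians(3.0)
cos_max = math.cos(theta_max)

n = 100_000
accepted = 0
for _ in range(n):
    # Isotropic forward-hemisphere emission: cos(theta) uniform on (0, 1).
    cos_theta = random.random()
    if cos_theta > cos_max:
        accepted += 1

fraction = accepted / n
# Analytic solid-angle fraction of the 3-degree cone within the hemisphere:
expected = 1.0 - cos_max
print(f"accepted: {fraction:.5f}, analytic: {expected:.5f}")
```

Only about 0.14% of forward-hemisphere emissions survive the cut, which is why replacing the full channel transport with this angular filter can speed the simulation up by more than a factor of 15.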
NASA Astrophysics Data System (ADS)
Sarmah, Ratan; Tiwari, Shubham
2018-03-01
An analytical solution is developed for predicting two-dimensional transient seepage into a ditch drainage network receiving water from a non-uniform, steady ponding field at the soil surface under the influence of a source/sink in the flow domain. The flow domain is assumed to be saturated, homogeneous and anisotropic, with finite extents in the horizontal and vertical directions. The drains are assumed to be vertical and to penetrate to the impervious layer. The water levels in the drains are unequal and invariant with time. The flow field is also assumed to be under the continuous influence of a time- and space-dependent arbitrary source/sink term. The correctness of the proposed model is checked against a numerical code and against an existing analytical solution for a simplified case. The study highlights the significance of the source/sink influence on subsurface flow. With the imposition of the source and sink terms in the flow domain, the pathlines and travel times of water particles deviate from their original positions; moreover, the side and top discharges to the drains are also observed to be strongly influenced by the source/sink terms. The travel times and pathlines of water particles are also observed to depend on the height of water in the ditches and on the location of the source/sink activation area.
An adaptive grid scheme using the boundary element method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Munipalli, R.; Anderson, D.A.
1996-09-01
A technique to solve the Poisson grid generation equations by Green's function related methods has been proposed, with the source terms being purely position dependent. The use of distributed singularities in the flow domain coupled with the boundary element method (BEM) formulation is presented in this paper as a natural extension of the Green's function method. This scheme greatly simplifies the adaption process. The BEM reduces the dimensionality of the given problem by one. Internal grid-point placement can be achieved for a given boundary distribution by adding continuous and discrete source terms in the BEM formulation. A distribution of vortex doublets is suggested as a means of controlling grid-point placement and grid-line orientation. Examples for sample adaption problems are presented and discussed. 15 refs., 20 figs.
Accuracy of a simplified method for shielded gamma-ray skyshine sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bassett, M.S.; Shultis, J.K.
1989-11-01
Rigorous transport or Monte Carlo methods for estimating far-field gamma-ray skyshine doses are generally computationally intensive. Consequently, several simplified techniques, such as point-kernel methods and methods based on beam response functions, have been proposed. For unshielded skyshine sources, these simplified methods have been shown to be quite accurate from comparisons to benchmark problems and benchmark experimental results. For shielded sources, the simplified methods typically use exponential attenuation and photon buildup factors to describe the effect of the shield. However, the energy and directional redistribution of photons scattered in the shield is usually ignored, i.e., scattered photons are assumed to emerge from the shield with the same energy and direction as the uncollided photons. The accuracy of this shield treatment is largely unknown due to the paucity of benchmark results for shielded sources. In this paper, the validity of such a shield treatment is assessed by comparison to a composite method, which accurately calculates the energy and angular distribution of photons penetrating the shield.
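The simplified shield treatment the abstract describes, exponential attenuation scaled by a buildup factor with no energy or angular redistribution, reduces to a one-line point kernel. The sketch below uses entirely hypothetical values for the source strength, attenuation coefficient, and buildup factor:

```python
import math

# Point-kernel flux with the simplified shield treatment described above:
# uncollided attenuation exp(-mu*t) times a buildup factor B, ignoring the
# energy/direction change of scattered photons. All numbers are illustrative.
def shielded_flux(S, mu, t, r, B=1.0):
    """Flux at distance r from a point source behind a slab of thickness t."""
    return S * B * math.exp(-mu * t) / (4.0 * math.pi * r**2)

S = 1.0e10   # source strength, photons/s (hypothetical)
mu = 0.5     # attenuation coefficient, 1/cm (hypothetical)
t = 10.0     # shield thickness, cm
r = 100.0    # source-to-detector distance, cm

unshielded = shielded_flux(S, 0.0, 0.0, r)
# A buildup factor B > 1 partially offsets the exponential attenuation:
shielded = shielded_flux(S, mu, t, r, B=3.0)
print(unshielded, shielded)
```

The composite method the paper compares against would additionally track how the shield redistributes photon energy and direction, which this kernel deliberately ignores.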
JET DT Scenario Extrapolation and Optimization with METIS
NASA Astrophysics Data System (ADS)
Urban, Jakub; Jaulmes, Fabien; Artaud, Jean-Francois
2017-10-01
Prospective JET (Joint European Torus) DT operation scenarios are modelled by the fast integrated code METIS. METIS combines scaling laws, e.g. for global and pedestal energy or density peaking, with simplified transport and source models, while retaining fundamental nonlinear couplings, in particular in the fusion power. We have tuned METIS parameters to match JET-ILW high-performance experiments, including baseline and hybrid scenarios. Based on recent observations, we assume a weaker input power scaling than IPB98 and a 10% confinement improvement due to the higher ion mass. The rapidity of METIS is exploited to scan the performance of JET DT scenarios with respect to fundamental parameters, such as plasma current, magnetic field, density or heating power. Simplified, easily parameterized waveforms are used to study the effect of ramp-up speed or heating timing. Finally, an efficient Bayesian optimizer is employed to seek the best-performing scenarios in terms of fusion power or gain.
Medical Decision-Making Among Elderly People in Long Term Care.
ERIC Educational Resources Information Center
Tymchuk, Alexander J.; And Others
1988-01-01
Presented informed consent information on high- and low-risk medical procedures to elderly persons in a long-term care facility in standard, simplified, or storybook format. Comprehension was significantly better for the simplified and storybook formats. Ratings of decision-making ability approximated comprehension test results. Comprehension test…
Rosenbach, Misha; English, Joseph C
2015-07-01
The terms "palisaded neutrophilic and granulomatous dermatitis," "interstitial granulomatous dermatitis," and the subset "interstitial granulomatous drug reaction" are a source of confusion. There exists substantial overlap among the entities with few strict distinguishing features. We review the literature and highlight areas of distinction and overlap, and propose a streamlined diagnostic workup for patients presenting with this cutaneous reaction pattern. Because the systemic disease associations and requisite workup are similar, and the etiopathogenesis is poorly understood but likely similar among these entities, we propose the simplified unifying term "reactive granulomatous dermatitis" to encompass these entities. Copyright © 2015 Elsevier Inc. All rights reserved.
We developed a simplified spreadsheet modeling approach for characterizing and prioritizing sources of sediment loadings from watersheds in the United States. A simplified modeling approach was developed to evaluate sediment loadings from watersheds and selected land segments. ...
Hypersonic Vehicle Propulsion System Simplified Model Development
NASA Technical Reports Server (NTRS)
Stueber, Thomas J.; Raitano, Paul; Le, Dzu K.; Ouzts, Peter
2007-01-01
This document addresses the modeling task plan for the hypersonic GN&C GRC team members. The overall propulsion system modeling task plan is a multi-step process, and the task plan identified in this document addresses the first steps (short-term modeling goals). The procedures and tools produced from this effort will be useful for creating simplified dynamic models applicable to a hypersonic vehicle propulsion system. The document continues with the GRC short-term modeling goal. Next, a general description of the desired simplified model is presented, along with simulations that are available to varying degrees. The simulations may be available in electronic form (FORTRAN, CFD, MatLab,...) or in paper form in published documents. Finally, roadmaps outlining possible avenues towards realizing the simplified model are presented.
75 FR 48743 - Mandatory Reporting of Greenhouse Gases
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-11
...EPA is proposing to amend specific provisions in the GHG reporting rule to clarify certain provisions, to correct technical and editorial errors, and to address certain questions and issues that have arisen since promulgation. These proposed changes include providing additional information and clarity on existing requirements, allowing greater flexibility or simplified calculation methods for certain sources in a facility, amending data reporting requirements to provide additional clarity on when different types of GHG emissions need to be calculated and reported, clarifying terms and definitions in certain equations, and technical corrections.
75 FR 79091 - Mandatory Reporting of Greenhouse Gases
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-17
...EPA is amending specific provisions in the greenhouse gas reporting rule to clarify certain provisions, to correct technical and editorial errors, and to address certain questions and issues that have arisen since promulgation. These final changes include generally providing additional information and clarity on existing requirements, allowing greater flexibility or simplified calculation methods for certain sources, amending data reporting requirements to provide additional clarity on when different types of greenhouse gas emissions need to be calculated and reported, clarifying terms and definitions in certain equations and other technical corrections and amendments.
NASA Technical Reports Server (NTRS)
Goldstein, Marvin E.; Leib, Stewart J.
1999-01-01
An approximate method for calculating the noise generated by a turbulent flow within a semi-infinite duct of arbitrary cross section is developed. It is based on a previously derived high-frequency solution to Lilley's equation, which describes the sound propagation in a transversely-sheared mean flow. The source term is simplified by assuming the turbulence to be axisymmetric about the mean flow direction. Numerical results are presented for the special case of a ring source in a circular duct with an axisymmetric mean flow. They show that the internally generated noise is suppressed at sufficiently large upstream angles in a hard walled duct, and that acoustic liners can significantly reduce the sound radiated in both the upstream and downstream regions, depending upon the source location and Mach number of the flow.
Simplified path integral for supersymmetric quantum mechanics and type-A trace anomalies
NASA Astrophysics Data System (ADS)
Bastianelli, Fiorenzo; Corradini, Olindo; Iacconi, Laura
2018-05-01
Particles in a curved space are classically described by a nonlinear sigma model action that can be quantized through path integrals. The latter require a precise regularization to deal with the derivative interactions arising from the nonlinear kinetic term. Recently, for maximally symmetric spaces, simplified path integrals have been developed: they make it possible to trade the nonlinear kinetic term for a purely quadratic kinetic term (linear sigma model). This happens at the expense of introducing a suitable effective scalar potential, which contains the information on the curvature of the space. The simplified path integral provides a sensible gain in the efficiency of perturbative calculations. Here we extend the construction to models with N = 1 supersymmetry on the worldline, which are applicable to the first quantized description of a Dirac fermion. As an application we use the simplified worldline path integral to compute the type-A trace anomaly of a Dirac fermion in d dimensions up to d = 16.
Three-dimensional calculations of rotor-airframe interaction in forward flight
NASA Technical Reports Server (NTRS)
Zori, Laith A. J.; Mathur, Sanjay R.; Rajagopalan, R. G.
1992-01-01
A method for analyzing the mutual aerodynamic interaction between a rotor and an airframe model has been developed. This technique models the rotor implicitly through the source terms of the momentum equations. A three-dimensional, incompressible, laminar, Navier-Stokes solver in cylindrical coordinates was developed for analyzing the rotor/airframe problem. The calculations are performed on a simplified model at an advance ratio of 0.1. The airframe surface pressure predictions are found to be in good agreement with wind tunnel test data. Results are presented for velocity and pressure field distributions in the wake of the rotor.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paap, Scott M.; West, Todd H.; Manley, Dawn Kataoka
2013-01-01
In the current study, processes to produce either ethanol or a representative fatty acid ethyl ester (FAEE) via the fermentation of sugars liberated from lignocellulosic materials pretreated in acid or alkaline environments are analyzed in terms of economic and environmental metrics. Simplified process models are introduced and employed to estimate process performance, and Monte Carlo analyses were carried out to identify key sources of uncertainty and variability. We find that the near-term performance of processes to produce FAEE is significantly worse than that of ethanol production processes for all metrics considered, primarily due to poor fermentation yields and higher electricity demands for aerobic fermentation. In the longer term, the reduced cost and energy requirements of FAEE separation processes will be at least partially offset by inherent limitations in the relevant metabolic pathways that constrain the maximum yield potential of FAEE from biomass-derived sugars.
Brown, J. F.; Hendy, Steve
2001-01-01
In spite of repeated efforts to explain itself to a wider audience, behavior analysis remains a largely misunderstood and isolated discipline. In this article we argue that this situation is in part due to the terms we use in our technical discussions. In particular, reinforcement and punishment, with their vernacular associations of reward and retribution, are a source of much misunderstanding. Although contemporary thinking within behavior analysis holds that reinforcement and punishment are Darwinian processes whereby behavioral variants are selected and deselected by their consequences, the continued use of the terms reinforcement and punishment to account for behavioral evolution obscures this fact. To clarify and simplify matters, we propose replacing the terms reinforcement and punishment with selection and deselection, respectively. These changes would provide a terminological meeting point with other selectionist sciences, thereby increasing the likelihood that behavior analysis will contribute to Darwinian science. PMID:22478361
NASA Astrophysics Data System (ADS)
Capozzoli, Amedeo; Curcio, Claudio; Liseno, Angelo; Savarese, Salvatore; Schipani, Pietro
2016-07-01
This communication presents an innovative method for the diagnosis of reflector antennas in radio astronomical applications. The approach is based on optimizing the number and distribution of the far-field sampling points exploited to retrieve the antenna status in terms of feed misalignments, in order to drastically reduce the duration of the measurement process, minimize the effects of variable environmental conditions, and simplify tracking of the source. The feed misplacement is modeled in terms of an aberration function of the aperture field. The relationship between the unknowns and the far-field pattern samples is linearized thanks to a Principal Component Analysis. The number and position of the field samples are then determined by optimizing the behaviour of the singular values of the relevant operator.
NASA Astrophysics Data System (ADS)
Yang, Yang; Li, Xiukun
2016-06-01
Separation of the components of rigid acoustic scattering by underwater objects is essential for obtaining the structural characteristics of such objects. To overcome the problem of rigid structures appearing to have the same spectral structure in the time domain, time-frequency Blind Source Separation (BSS) can be used in combination with image morphology to separate the rigid scattering components of different objects. Based on a highlight model, the separation of the rigid scattering structure of objects with a time-frequency distribution is deduced. Using a morphological filter, the different characteristics observed in the Wigner-Ville Distribution (WVD) for auto terms and cross terms can be exploited to remove cross-term interference. By selecting the time and frequency points of the auto-term signal, the accuracy of BSS can be improved. An experimental simulation has been used, with changes in the pulse width of the transmitted signal, the relative amplitude and the time-delay parameter, to analyze the feasibility of the new method. Simulation results show that the new method is not only able to separate rigid scattering components, but can also separate the components when elastic and rigid scattering exist at the same time. Experimental results confirm that the new method can be used to separate the rigid scattering structure of underwater objects.
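The morphological-filtering step above relies on auto terms forming extended ridges in the time-frequency plane while cross terms appear as small isolated patches. A minimal sketch of that idea on a synthetic binary mask (a real pipeline would first compute and threshold the WVD; the mask geometry here is invented for illustration):

```python
import numpy as np

# Morphological opening (erosion then dilation) on a thresholded
# time-frequency mask: small cross-term specks are removed, while the
# extended auto-term ridge survives. 3x3 structuring element throughout.
def erode(m, k=3):
    p = k // 2
    pad = np.pad(m, p, constant_values=0)
    out = np.zeros_like(m)
    for i in range(m.shape[0]):
        for j in range(m.shape[1]):
            out[i, j] = pad[i:i + k, j:j + k].min()
    return out

def dilate(m, k=3):
    p = k // 2
    pad = np.pad(m, p, constant_values=0)
    out = np.zeros_like(m)
    for i in range(m.shape[0]):
        for j in range(m.shape[1]):
            out[i, j] = pad[i:i + k, j:j + k].max()
    return out

mask = np.zeros((32, 32), dtype=int)
mask[10:13, 2:30] = 1   # extended auto-term ridge (3 bins wide)
mask[25, 25] = 1        # isolated cross-term speck

opened = dilate(erode(mask))
print(opened[25, 25], opened[11, 10])  # speck removed, ridge kept
```

Selecting only the time-frequency points that survive the opening is what lets the subsequent BSS step work on (nearly) cross-term-free auto-term samples.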
iGen: An automated generator of simplified models with provable error bounds.
NASA Astrophysics Data System (ADS)
Tang, D.; Dobbie, S.
2009-04-01
Climate models employ various simplifying assumptions and parameterisations in order to increase execution speed. However, in order to draw conclusions about the Earth's climate from the results of a climate simulation, it is necessary to have information about the error that these assumptions and parameterisations introduce. A novel computer program, called iGen, is being developed which automatically generates fast, simplified models by analysing the source code of a slower, high resolution model. The resulting simplified models have provable bounds on error compared to the high resolution model and execute at speeds that are typically orders of magnitude faster. iGen's input is a definition of the prognostic variables of the simplified model, a set of bounds on acceptable error and the source code of a model that captures the behaviour of interest. In the case of an atmospheric model, for example, this would be a global cloud resolving model with very high resolution. Although such a model would execute far too slowly to be used directly in a climate model, iGen never executes it. Instead, it converts the code of the resolving model into a mathematical expression which is then symbolically manipulated and approximated to form a simplified expression. This expression is then converted back into a computer program and output as a simplified model. iGen also derives and reports formal bounds on the error of the simplified model compared to the resolving model. These error bounds are always maintained below the user-specified acceptable error. Results will be presented illustrating the success of iGen's analysis of a number of example models. These extremely encouraging results have led to work, currently underway, to analyse a cloud resolving model and so produce an efficient parameterisation of moist convection with formally bounded error.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 48 Federal Acquisition Regulations System 1 2013-10-01 2013-10-01 false Policy. 13.003 Section 13... CONTRACT TYPES SIMPLIFIED ACQUISITION PROCEDURES 13.003 Policy. (a) Agencies shall use simplified...). This policy does not apply if an agency can meet its requirement using— (1) Required sources of supply...
NASA Astrophysics Data System (ADS)
Navas-Montilla, A.; Murillo, J.
2016-07-01
In this work, an arbitrary order HLL-type numerical scheme is constructed using the flux-ADER methodology. The proposed scheme is based on an augmented Derivative Riemann solver that was used for the first time in Navas-Montilla and Murillo (2015) [1]. This solver, hereafter referred to as the Flux-Source (FS) solver, was conceived as a high order extension of the augmented Roe solver and led to the generation of a novel numerical scheme called the AR-ADER scheme. Here, we provide a general definition of the FS solver independently of the Riemann solver used in it. Moreover, a simplified version of the solver, referred to as the Linearized-Flux-Source (LFS) solver, is presented. This novel version of the FS solver allows the solution to be computed without reconstructing derivatives of the fluxes, although it has some drawbacks. In contrast to other previously defined Derivative Riemann solvers, the proposed FS and LFS solvers take into account the presence of the source term in the resolution of the Derivative Riemann Problem (DRP), which is of particular interest when dealing with geometric source terms. When applied to the shallow water equations, the proposed HLLS-ADER and AR-ADER schemes can be constructed to fulfill the exactly well-balanced property, showing that an arbitrary quadrature of the integral of the source inside the cell does not ensure energy balanced solutions. As a result of this work, energy balanced flux-ADER schemes that provide the exact solution for steady cases and that converge to the exact solution with arbitrary order for transient cases are constructed.
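For context, a plain first-order HLL flux for the 1-D shallow water equations, without the source-term treatment or ADER reconstruction the paper develops, can be sketched as follows. The wave-speed estimates are the usual simple two-wave bounds, and all names are illustrative:

```python
import math

G = 9.81  # gravitational acceleration (m/s^2)

def swe_flux(h, hu):
    """Physical flux of the 1-D shallow water equations for state (h, hu)."""
    u = hu / h
    return (hu, hu * u + 0.5 * G * h * h)

def hll_flux(hl, hul, hr, hur):
    """First-order HLL numerical flux with simple two-wave speed estimates."""
    ul, ur = hul / hl, hur / hr
    cl, cr = math.sqrt(G * hl), math.sqrt(G * hr)
    sl = min(ul - cl, ur - cr)  # leftmost wave-speed estimate
    sr = max(ul + cl, ur + cr)  # rightmost wave-speed estimate
    fl, fr = swe_flux(hl, hul), swe_flux(hr, hur)
    if sl >= 0.0:
        return fl            # supersonic to the right: upwind on the left state
    if sr <= 0.0:
        return fr            # supersonic to the left: upwind on the right state
    ql, qr = (hl, hul), (hr, hur)
    return tuple((sr * a - sl * b + sl * sr * (uR - uL)) / (sr - sl)
                 for a, b, uL, uR in zip(fl, fr, ql, qr))
```

By construction the flux is consistent: for equal left and right states it returns the physical flux, which is the kind of sanity check the well-balanced analysis in the paper goes far beyond.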
NASA Astrophysics Data System (ADS)
Kies, Alexander; von Bremen, Lüder; Schyska, Bruno; Chattopadhyay, Kabitri; Lorenz, Elke; Heinemann, Detlev
2016-04-01
The transition of the European power system from fossil generation towards renewable sources is driven by different reasons like decarbonisation and sustainability. Renewable power sources like wind and solar have, due to their weather dependency, fluctuating feed-in profiles, which make their system integration a difficult task. To overcome this issue, several solutions have been investigated in the past, like the optimal mix of wind and PV [1] and the extension of the transmission grid or storage [2]. In this work, the optimal distribution of wind turbines and solar modules in Europe is investigated. For this purpose, feed-in data with an hourly temporal resolution and a spatial resolution of 7 km covering Europe for the renewable sources wind, photovoltaics and hydro were used. Together with historical load data and a transmission model, a simplified pan-European power system was simulated. Under the cost assumptions of [3], the levelized cost of electricity (LCOE) for this simplified system consisting of generation, consumption, transmission and backup units is calculated. With respect to the LCOE, the optimal distribution of generation facilities in Europe is derived. It is shown that, by optimal placement of renewable generation facilities, the LCOE can be reduced by more than 10% compared to a meta-study scenario [4] and a self-sufficient scenario (every country produces on average as much from renewable sources as it consumes). This is mainly caused by a shift of generation facilities towards highly suitable locations, reduced backup need and increased transmission need. The results of the optimization will be shown and implications for the extension of renewable shares in the European power mix will be discussed. The work is part of the RESTORE 2050 project (Wuppertal Institute, Next Energy, University of Oldenburg), which is financed by the Federal Ministry of Education and Research (BMBF, Fkz. 03SFF0439A).
[1] Kies, Alexander, et al. "Investigation of balancing effects in long term renewable energy feed-in with respect to the transmission grid." Advances in Science and Research 12.1 (2015): 91-95, doi:10.5194/asr-12-91-2015. [2] Heide, Dominik, et al. "Reduced storage and balancing needs in a fully renewable European power system with excess wind and solar power generation." Renewable Energy 36.9 (2011): 2515-2523. [3] Rodriguez, R. A. Weather-driven power transmission in a highly renewable European electricity network. PhD thesis, Aarhus University, November 2014. [4] Pfluger, B., et al. Tangible ways towards climate protection in the European Union (EU long-term scenarios 2050). Fraunhofer ISI, Karlsruhe, September 2011.
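As a rough illustration of the cost metric used above, LCOE is the total annualized system cost divided by the energy served. The cost figures below are invented placeholders for a toy European-scale system, not the study's assumptions from [3]:

```python
def lcoe_eur_per_mwh(annualized_costs_eur, annual_energy_mwh):
    """Levelized cost of electricity: annualized system cost per MWh served."""
    return sum(annualized_costs_eur.values()) / annual_energy_mwh

# Purely hypothetical annualized costs (EUR/yr) for a toy continental system.
costs = {"wind": 90e9, "solar": 40e9, "transmission": 15e9, "backup": 25e9}
value = lcoe_eur_per_mwh(costs, annual_energy_mwh=3.2e9)  # ~3200 TWh/yr demand
```

Shifting cost between the categories (less backup, more transmission) while holding energy served fixed is exactly the trade-off that optimal placement of generation facilities exploits.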
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pawloski, G A; Tompson, A F B; Carle, S F
The objectives of this report are to develop, summarize, and interpret a series of detailed unclassified simulations that forecast the nature and extent of radionuclide release and near-field migration in groundwater away from the CHESHIRE underground nuclear test at Pahute Mesa at the NTS over 1000 yrs. Collectively, these results are called the CHESHIRE Hydrologic Source Term (HST). The CHESHIRE underground nuclear test was one of 76 underground nuclear tests that were fired below or within 100 m of the water table between 1965 and 1992 in Areas 19 and 20 of the NTS. These areas now comprise the Pahute Mesa Corrective Action Unit (CAU) for which a separate subregional scale flow and transport model is being developed by the UGTA Project to forecast the larger-scale migration of radionuclides from underground tests on Pahute Mesa. The current simulations are being developed, on one hand, to more fully understand the complex coupled processes involved in radionuclide migration, with a specific focus on the CHESHIRE test. While remaining unclassified, they are as site specific as possible and involve a level of modeling detail that is commensurate with the most fundamental processes, conservative assumptions, and representative data sets available. However, the simulation results are also being developed so that they may be simplified and interpreted for use as a source term boundary condition at the CHESHIRE location in the Pahute Mesa CAU model. In addition, the processes of simplification and interpretation will provide generalized insight as to how the source term behavior at other tests may be considered or otherwise represented in the Pahute Mesa CAU model.
ORIGAMI Automator Primer. Automated ORIGEN Source Terms and Spent Fuel Storage Pool Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wieselquist, William A.; Thompson, Adam B.; Bowman, Stephen M.
2016-04-01
Source terms and spent nuclear fuel (SNF) storage pool decay heat load analyses for operating nuclear power plants require a large number of Oak Ridge Isotope Generation and Depletion (ORIGEN) calculations. SNF source term calculations also require a significant amount of bookkeeping to track quantities such as core and assembly operating histories, spent fuel pool (SFP) residence times, heavy metal masses, and enrichments. The ORIGEN Assembly Isotopics (ORIGAMI) module in the SCALE code system provides a simple scheme for entering these data. However, given the large scope of the analysis, extensive scripting is necessary to convert formats and process data to create thousands of ORIGAMI input files (one per assembly) and to process the results into formats readily usable by follow-on analysis tools. This primer describes a project within the SCALE Fulcrum graphical user interface (GUI) called ORIGAMI Automator that was developed to automate the scripting and bookkeeping in large-scale source term analyses. The ORIGAMI Automator enables the analyst to (1) easily create, view, and edit the reactor site and assembly information, (2) automatically create and run ORIGAMI inputs, and (3) analyze the results from ORIGAMI. ORIGAMI Automator uses the standard ORIGEN binary concentrations files produced by ORIGAMI, with concentrations available at all time points in each assembly's life. The GUI plots results such as mass, concentration, activity, and decay heat using a powerful new ORIGEN Post-Processing Utility for SCALE (OPUS) GUI component. This document includes a description and user guide for the GUI, a step-by-step tutorial for a simplified scenario, and appendices that document the file structures used.
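The bookkeeping problem (one input file per assembly, generated from tabulated operating data) has roughly the following shape. The file format and field names below are invented for illustration and are not ORIGAMI's actual input syntax:

```python
import os
import tempfile  # used only by the accompanying usage check

def write_assembly_inputs(assembly_rows, out_dir):
    """Write one illustrative input file per assembly record (hypothetical format)."""
    paths = []
    for row in assembly_rows:
        path = os.path.join(out_dir, f"{row['assembly_id']}.inp")
        with open(path, "w") as f:
            f.write(f"% assembly {row['assembly_id']}\n")
            f.write(f"enrichment_wt_pct={row['enrichment']}\n")
            f.write(f"burnup_gwd_per_mtu={row['burnup']}\n")
            f.write(f"sfp_residence_years={row['sfp_years']}\n")
        paths.append(path)
    return paths
```

Multiplied across thousands of assemblies, plus format conversion of the results, this is the scripting burden the Automator removes.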
Numerical simulations of internal wave generation by convection in water.
Lecoanet, Daniel; Le Bars, Michael; Burns, Keaton J; Vasil, Geoffrey M; Brown, Benjamin P; Quataert, Eliot; Oishi, Jeffrey S
2015-06-01
Water's density maximum at 4°C makes it well suited to study internal gravity wave excitation by convection: an increasing temperature profile is unstable to convection below 4°C, but stably stratified above 4°C. We present numerical simulations of a waterlike fluid near its density maximum in a two-dimensional domain. We successfully model the damping of waves in the simulations using linear theory, provided we do not take the weak damping limit typically used in the literature. To isolate the physical mechanism exciting internal waves, we use the spectral code Dedalus to run several simplified model simulations of our more detailed simulation. We use data from the full simulation as source terms in two simplified models of internal-wave excitation by convection: bulk excitation by convective Reynolds stresses, and interface forcing via the mechanical oscillator effect. We find excellent agreement between the waves generated in the full simulation and the simplified simulation implementing the bulk excitation mechanism. The interface forcing simulations overexcite high-frequency waves because they assume the excitation is by the "impulsive" penetration of plumes, which spreads energy to high frequencies. However, we find that the real excitation is instead by the "sweeping" motion of plumes parallel to the interface. Our results imply that the bulk excitation mechanism is a very accurate heuristic for internal-wave generation by convection.
The equilibrium-diffusion limit for radiation hydrodynamics
Ferguson, J. M.; Morel, J. E.; Lowrie, R.
2017-07-27
The equilibrium-diffusion approximation (EDA) is used to describe certain radiation-hydrodynamic (RH) environments. When this is done, the RH equations reduce to a simplified set of equations. The EDA can be derived by asymptotically analyzing the full set of RH equations in the equilibrium-diffusion limit. Here, we derive the EDA this way and show that it and the associated set of simplified equations are both first-order accurate with transport corrections occurring at second order. Having established the EDA's first-order accuracy, we then analyze the grey nonequilibrium-diffusion approximation and the grey Eddington approximation and show that they both preserve this first-order accuracy. Further, these approximations preserve the EDA's first-order accuracy when made in either the comoving-frame (CMF) or the lab-frame (LF). And while analyzing the Eddington approximation, we found that the CMF and LF radiation-source equations are equivalent when neglecting O(β²) terms and compared in the LF. Of course, the radiation pressures are not equivalent. It is expected that simplified physical models and numerical discretizations of the RH equations that do not preserve this first-order accuracy will not retain the correct equilibrium-diffusion solutions. As a practical example, we show that nonequilibrium-diffusion radiative-shock solutions devolve to equilibrium-diffusion solutions when the asymptotic parameter is small.
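In standard grey radiation-hydrodynamics notation (generic symbols, not necessarily the authors'), the leading-order equilibrium-diffusion closure sets the radiation field to its Planckian value and gives a Fick's-law flux:

```latex
% Leading-order equilibrium-diffusion closure (standard grey notation):
%   E: radiation energy density, T: material temperature, F: radiation flux,
%   a: radiation constant, c: speed of light, \sigma_t: total opacity.
\begin{align}
  E &= a T^4, &
  F &= -\frac{c}{3\sigma_t}\,\frac{\partial}{\partial x}\!\left(a T^4\right).
\end{align}
```

The paper's asymptotic analysis shows these relations, and the simplified RH equations built on them, are first-order accurate, with transport corrections entering at second order.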
48 CFR 1515.302 - Applicability.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 15.3 and this subpart apply to the selection of source or sources in competitive negotiation acquisitions in excess of the simplified acquisition threshold, except architect-engineering services which are...
48 CFR 1515.302 - Applicability.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 15.3 and this subpart apply to the selection of source or sources in competitive negotiation acquisitions in excess of the simplified acquisition threshold, except architect-engineering services which are...
48 CFR 1515.302 - Applicability.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 15.3 and this subpart apply to the selection of source or sources in competitive negotiation acquisitions in excess of the simplified acquisition threshold, except architect-engineering services which are...
48 CFR 1515.302 - Applicability.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 15.3 and this subpart apply to the selection of source or sources in competitive negotiation acquisitions in excess of the simplified acquisition threshold, except architect-engineering services which are...
Simplified contaminant source depletion models as analogs of multiphase simulators
NASA Astrophysics Data System (ADS)
Basu, Nandita B.; Fure, Adrian D.; Jawitz, James W.
2008-04-01
Four simplified dense non-aqueous phase liquid (DNAPL) source depletion models recently introduced in the literature are evaluated for the prediction of long-term effects of source depletion under natural gradient flow. These models are simple in form (a power function equation is an example) but are shown here to serve as mathematical analogs to complex multiphase flow and transport simulators. The spill and subsequent dissolution of DNAPLs was simulated in domains having different hydrologic characteristics (variance of the log conductivity field = 0.2, 1 and 3) using the multiphase flow and transport simulator UTCHEM. The dissolution profiles were fitted using four analytical models: the equilibrium streamtube model (ESM), the advection dispersion model (ADM), the power law model (PLM) and the Damkohler number model (DaM). All four models, though very different in their conceptualization, include two basic parameters that describe the mean DNAPL mass and the joint variability in the velocity and DNAPL distributions. The variability parameter was observed to be strongly correlated with the variance of the log conductivity field in the ESM and ADM but weakly correlated in the PLM and DaM. The DaM also includes a third parameter that describes the effect of rate-limited dissolution, but here this parameter was held constant as the numerical simulations were found to be insensitive to local-scale mass transfer. All four models were able to emulate the characteristics of the dissolution profiles generated from the complex numerical simulator, but the one-parameter PLM fits were the poorest, especially for the low heterogeneity case.
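Of the four models, the power law model (PLM) is the simplest to sketch: flux-averaged source concentration is tied to the remaining DNAPL mass by C/C0 = (M/M0)^Γ, and mass depletes at the dissolved-flux rate. A minimal explicit-Euler illustration follows; the variable names and numbers are assumptions for illustration, not the authors' code:

```python
def plm_depletion(m0, c0, q, gamma, dt, n_steps):
    """Power-law source model: C = c0*(M/m0)**gamma, dM/dt = -q*C.
    Explicit-Euler sketch; q is the water flux through the source zone."""
    m, history = m0, []
    for _ in range(n_steps):
        c = c0 * (m / m0) ** gamma          # flux-averaged concentration
        m = max(m - q * c * dt, 0.0)        # deplete mass by the dissolved flux
        history.append((c, m))
    return history
```

With Γ = 0 the concentration stays at c0 until exhaustion (step-function depletion); larger Γ produces the long dissolution tails associated with joint variability in the velocity and DNAPL distributions.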
Jonnalagadda, Siddhartha; Gonzalez, Graciela
2010-11-13
BioSimplify is an open source tool written in Java that introduces and facilitates the use of a novel model for sentence simplification tuned for automatic discourse analysis and information extraction (as opposed to sentence simplification for improving human readability). The model is based on a "shot-gun" approach that produces many different (simpler) versions of the original sentence by combining variants of its constituent elements. This tool is optimized for processing biomedical scientific literature such as the abstracts indexed in PubMed. We tested the tool's impact on the task of protein-protein interaction (PPI) extraction: it improved the F-score of the PPI tool by around 7%, with an improvement in recall of around 20%. The BioSimplify tool and test corpus can be downloaded from https://biosimplify.sourceforge.net.
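The "shot-gun" idea, emitting many simpler variants by recombining constituent elements, can be caricatured with a toy generator. BioSimplify operates on parsed biomedical sentences, so everything below is an illustration of the combinatorics only, not its actual rewriting rules:

```python
from itertools import product

def shotgun_variants(core, optional_modifiers):
    """Generate simplified sentence variants by including or excluding each
    optional modifier (a toy stand-in for parser-based clause recombination)."""
    variants = []
    for mask in product([False, True], repeat=len(optional_modifiers)):
        parts = [core] + [m for m, keep in zip(optional_modifiers, mask) if keep]
        variants.append(" ".join(parts))
    return variants
```

Each variant is simpler than the original sentence, which gives downstream extraction tools several easier targets instead of one hard one.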
NASA Astrophysics Data System (ADS)
Bebout, B.; Bebout, L. E.; Detweiler, A. M.; Everroad, R. C.; Lee, J.; Pett-Ridge, J.; Weber, P. K.
2014-12-01
Microbial mats are famously amongst the most diverse microbial ecosystems on Earth, inhabiting some of the most inclement environments known, including hypersaline, dry, hot, cold, nutrient poor, and high UV environments. The high microbial diversity of microbial mats makes studies of microbial ecology notably difficult. To address this challenge, we have been using a combination of metagenomics, metatranscriptomics, iTags and culture-based simplified microbial mats to study biogeochemical cycling (H2 production, N2 fixation, and fermentation) in microbial mats collected from Elkhorn Slough, Monterey Bay, California. Metatranscriptomes of microbial mats incubated over a diel cycle have revealed that a number of gene systems activate only during the day in Cyanobacteria, while the remaining appear to be constitutive. The dominant cyanobacterium in the mat (Microcoleus chthonoplastes) expresses several pathways for nitrogen scavenging undocumented in cultured strains, as well as the expression of two starch storage and utilization cycles. Community composition shifts in response to long term manipulations of mats were assessed using iTags. Changes in community diversity were observed as hydrogen fluxes increased in response to a lowering of sulfate concentrations. To produce simplified microbial mats, we have isolated members of 13 of the 15 top taxa from our iTag libraries into culture. Simplified microbial mats and simple co-cultures and consortia constructed from these isolates reproduce many of the natural patterns of biogeochemical cycling in the parent natural microbial mats, but against a background of far lower overall diversity, simplifying studies of changes in gene expression (over the short term), interactions between community members, and community composition changes (over the longer term), in response to environmental forcing.
New Open-Source Version of FLORIS Released | News | NREL
New Open-Source Version of FLORIS Released — January 26, 2018. National Renewable Energy Laboratory (NREL) researchers recently released an updated open-source version of the FLORIS utility that has been simplified and documented. Because of the living, open-source nature of the newly updated utility, NREL…
Refraction and Shielding of Noise in Non-Axisymmetric Jets
NASA Technical Reports Server (NTRS)
Khavaran, Abbas
1996-01-01
This paper examines the shielding effect of the mean flow and refraction of sound in non-axisymmetric jets. A general three-dimensional ray-acoustic approach is applied. The methodology is independent of the exit geometry and may account for jet spreading and transverse as well as streamwise flow gradients. We assume that noise is dominated by small-scale turbulence. The source correlation terms, as described by the acoustic analogy approach, are simplified, and a model is proposed that relates the source strength to the 7/2 power of the turbulence kinetic energy. Local characteristics of the source such as its strength, time- or length-scale, convection velocity and characteristic frequency are inferred from mean flow considerations. The compressible Navier-Stokes equations are solved with a k-ε turbulence model. Numerical predictions are presented for a Mach 1.5, aspect ratio 2:1 elliptic jet. The predicted sound pressure level directivity demonstrates favorable agreement with reported data, indicating a relative quiet zone on the side of the major axis of the elliptic jet.
Measuring Phantom Recollection in the Simplified Conjoint Recognition Paradigm
ERIC Educational Resources Information Center
Stahl, Christoph; Klauer, Karl Christoph
2009-01-01
False memories are sometimes strong enough to elicit recollective experiences. This phenomenon has been termed Phantom Recollection (PR). The Conjoint Recognition (CR) paradigm has been used to empirically separate PR from other memory processes. Recently, a simplification of the CR procedure has been proposed. We herein extend the simplified CR…
77 FR 5228 - Summer Food Service Program; 2012 Reimbursement Rates
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-02
... of rates to highlight simplified cost accounting procedures. The 2012 rates are also presented... review by the Office of Management and Budget under Executive Order 12866. Definitions The terms used in... reimbursement rates are presented as a combined set of rates to highlight simplified cost accounting procedures...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Majda, Andrew J.; Xing, Yulong; Mohammadian, Majid
Determining the finite-amplitude preconditioned states in the hurricane embryo, which lead to tropical cyclogenesis, is a central issue in contemporary meteorology. In the embryo there is competition between different preconditioning mechanisms involving hydrodynamics and moist thermodynamics, which can lead to cyclogenesis. Here, systematic asymptotic methods from applied mathematics are utilized to develop new simplified moist multi-scale models starting from the moist anelastic equations. Three interesting multi-scale models emerge in the analysis. The balanced mesoscale vortex (BMV) dynamics and the microscale balanced hot tower (BHT) dynamics involve simplified balanced equations without gravity waves for vertical vorticity amplification due to moist heat sources and incorporate nonlinear advective fluxes across scales. The BMV model is the central one for tropical cyclogenesis in the embryo. The moist mesoscale wave (MMW) dynamics involves simplified equations for mesoscale moisture fluctuations, as well as linear hydrostatic waves driven by heat sources from moisture and eddy flux divergences. A simplified cloud physics model for deep convection is introduced here and used to study moist axisymmetric plumes in the BHT model. A simple application in periodic geometry involving the effects of mesoscale vertical shear and moist microscale hot towers on vortex amplification is developed here to illustrate features of the coupled multi-scale models. These results illustrate the use of these models in isolating key mechanisms in the embryo in a simplified context.
Software support for SBGN maps: SBGN-ML and LibSBGN.
van Iersel, Martijn P; Villéger, Alice C; Czauderna, Tobias; Boyd, Sarah E; Bergmann, Frank T; Luna, Augustin; Demir, Emek; Sorokin, Anatoly; Dogrusoz, Ugur; Matsuoka, Yukiko; Funahashi, Akira; Aladjem, Mirit I; Mi, Huaiyu; Moodie, Stuart L; Kitano, Hiroaki; Le Novère, Nicolas; Schreiber, Falk
2012-08-01
LibSBGN is a software library for reading, writing and manipulating Systems Biology Graphical Notation (SBGN) maps stored using the recently developed SBGN-ML file format. The library (available in C++ and Java) makes it easy for developers to add SBGN support to their tools, whereas the file format facilitates the exchange of maps between compatible software applications. The library also supports validation of maps, which simplifies the task of ensuring compliance with the detailed SBGN specifications. With this effort we hope to increase the adoption of SBGN in bioinformatics tools, ultimately enabling more researchers to visualize biological knowledge in a precise and unambiguous manner. Milestone 2 was released in December 2011. Source code, example files and binaries are freely available under the terms of either the LGPL v2.1+ or Apache v2.0 open source licenses from http://libsbgn.sourceforge.net. sbgn-libsbgn@lists.sourceforge.net.
A simplified Mach number scaling law for helicopter rotor noise
NASA Technical Reports Server (NTRS)
Aravamudan, K. S.; Lee, A.; Harris, W. L.
1978-01-01
Mach number scaling laws are derived for the rotational and the high-frequency broadband noise from helicopter rotors. The rotational scaling law is obtained directly from the theory of Lowson and Ollerhead (1969) by exploiting the properties of the dominant terms in the expression for the complex Fourier coefficients of sound radiation from a point source. The scaling law for the high-frequency broadband noise is obtained by assuming that the noise sources are acoustically compact and computing the instantaneous pressure due to an element on an airfoil where vortices are shed. Experimental results on the correlation lengths for stationary airfoils are extended to rotating airfoils. On the assumption that the correlation length varies as the boundary layer displacement thickness, it is found that the Mach number scaling law contains a factor of Mach number raised to the exponent 5.8. Both scaling laws were verified by model tests.
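As a quick consequence of the M^5.8 factor: if that factor multiplies acoustic intensity (an assumption made here purely for the arithmetic), the level change between two tip Mach numbers is 58·log10(M2/M1) dB:

```python
import math

def delta_spl_db(m1, m2, exponent=5.8):
    """SPL change implied by an intensity scaling I ~ M**exponent
    (assumes the Mach-number factor applies to intensity, for illustration)."""
    return 10.0 * exponent * math.log10(m2 / m1)

gain = delta_spl_db(0.3, 0.6)  # effect of doubling the tip Mach number
```

Doubling the tip Mach number under this scaling raises the broadband level by about 17.5 dB, which shows why small rotor speed changes matter so much acoustically.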
6 Source Categories - Boilers (Proposed Action)
EPA is proposing options to simplify the Clean Air Act permitting process for certain smaller sources of air pollution commonly found in Indian country. This action would ensure that air quality in Indian country is protected.
ERIC Educational Resources Information Center
Rangachari, P. K.; Rangachari, Usha
2007-01-01
In this article, we describe a simplified approach to teach students to assess information obtained from diverse sources. Three broad categories (credibility, content, and currency; 3C) were used to evaluate information from textbooks, monographs, popular magazines, scholarly journals, and the World Wide Web. This 3C approach used in an inquiry…
Ferrie, Suzie
2006-04-01
Ethical dilemmas can be challenging for the nutrition support clinician who is accustomed to evidence-based practice. The emotional and personal nature of ethical decision making can present difficulties, and conflict can arise when people have different ethical perspectives. An understanding of ethical terms and ethical theories can be helpful in clarifying the source of this conflict. These may include prominent ethical theories such as moral relativism, utilitarianism, Kantian absolutism, Aristotle's virtue ethics and ethics of care, as well as the key ethical principles in healthcare (autonomy, beneficence, nonmaleficence, and justice). Adopting a step-by-step approach can simplify the process of resolving ethical problems.
[Simplified laparoscopic gastric bypass. Initial experience].
Hernández-Miguelena, Luis; Maldonado-Vázquez, Angélica; Cortes-Romano, Pablo; Ríos-Cruz, Daniel; Marín-Domínguez, Raúl; Castillo-González, Armando
2014-01-01
Obesity surgery includes various gastrointestinal procedures. Roux-en-Y gastric bypass is the prototype of mixed procedures and the most practiced worldwide. A similar and novel technique, called the "simplified bypass", has been adopted by Dr. Almino Cardoso Ramos and Dr. Manoel Galvao and has been accepted due to its greater ease and results very similar to those of the conventional technique. The aim of this study is to describe the results of the simplified gastric bypass for treatment of morbid obesity in our institution. We performed a descriptive, retrospective study of all patients undergoing simplified gastric bypass from January 2008 to July 2012 in the obesity clinic of a private hospital in Mexico City. A total of 90 patients diagnosed with morbid obesity underwent simplified gastric bypass. Complications occurred in 10% of patients; the most frequent were bleeding and internal hernia. Mortality in the study period was 0%. The average weight loss at 12 months was 72.7%. Simplified gastric bypass surgery is safe, with good mid-term results and adequate weight loss in 71% of cases.
Temperature distribution of a simplified rotor due to a uniform heat source
NASA Astrophysics Data System (ADS)
Welzenbach, Sarah; Fischer, Tim; Meier, Felix; Werner, Ewald; kyzy, Sonun Ulan; Munz, Oliver
2018-03-01
In gas turbines, high combustion efficiency as well as operational safety are required. Thus, labyrinth seal systems with honeycomb liners are commonly used. In the case of rubbing events in the seal system, the components can be damaged due to cyclic thermal and mechanical loads. Temperature differences occurring at labyrinth seal fins during rubbing events can be determined by considering a single heat source acting periodically on the surface of a rotating cylinder. Existing literature analysing the temperature distribution on rotating cylindrical bodies due to a stationary heat source is reviewed. The temperature distribution on the circumference of a simplified labyrinth seal fin is calculated using an available and easy to implement analytical approach. A finite element model of the simplified labyrinth seal fin is created and the numerical results are compared to the analytical results. The temperature distributions calculated by the analytical and the numerical approaches coincide for low sliding velocities, while there are discrepancies of the calculated maximum temperatures for higher sliding velocities. The use of the analytical approach allows the conservative estimation of the maximum temperatures arising in labyrinth seal fins during rubbing events. At the same time, high calculation costs can be avoided.
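For the fast-sliding regime discussed above, there is a textbook high-Peclet estimate: treat each surface point as a semi-infinite solid heated at constant flux for the contact time. This is a generic Jaeger-style sketch under those stated assumptions, not the analytical model the paper evaluates:

```python
import math

def flash_temperature_rise(q_flux, v, half_width, k, alpha):
    """High-Peclet band-source estimate: each surface point sees flux q_flux
    (W/m^2) for the contact time t_c = 2*half_width/v, so the 1-D
    semi-infinite-solid result dT = (2*q/k)*sqrt(alpha*t_c/pi) applies.
    k: conductivity (W/m/K), alpha: thermal diffusivity (m^2/s)."""
    t_contact = 2.0 * half_width / v
    return (2.0 * q_flux / k) * math.sqrt(alpha * t_contact / math.pi)
```

Note the 1/sqrt(v) dependence: faster sliding shortens the contact time and lowers the peak rise for a fixed flux, consistent with the velocity sensitivity discussed in the abstract.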
Multimodal modeling and validation of simplified vocal tract acoustics for sibilant /s/
NASA Astrophysics Data System (ADS)
Yoshinaga, T.; Van Hirtum, A.; Wada, S.
2017-12-01
To investigate the acoustic characteristics of sibilant /s/, multimodal theory is applied to a simplified vocal tract geometry derived from a CT scan of a single speaker for whom the sound spectrum was gathered. The vocal tract was represented by a concatenation of waveguides with rectangular cross-sections and constant width, and a sound source was placed either at the inlet of the vocal tract or downstream from the constriction representing the sibilant groove. The modeled pressure amplitude was validated experimentally using an acoustic driver or airflow supply at the vocal tract inlet. Results showed that the spectrum predicted with the source at the inlet and including higher-order modes matched the spectrum measured with the acoustic driver at the inlet. Spectra modeled with the source downstream from the constriction captured the first characteristic peak observed for the speaker at 4 kHz. By positioning the source near the upper teeth wall, the higher frequency peak observed for the speaker at 8 kHz was predicted with the inclusion of higher-order modes. At the frequencies of the characteristic peaks, nodes and antinodes of the pressure amplitude were observed in the simplified vocal tract when the source was placed downstream from the constriction. These results indicate that the multimodal approach makes it possible to capture the amplitude and frequency of the peaks in the spectrum as well as the nodes and antinodes of the pressure distribution due to /s/ inside the vocal tract.
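The plane-wave core of such a waveguide concatenation can be sketched with lossless ABCD (chain) matrices. This toy omits precisely the higher-order modes, losses, and source placement that the study shows matter at sibilant frequencies, and all dimensions are illustrative:

```python
import math

RHO, C_SOUND = 1.2, 343.0  # air density (kg/m^3), speed of sound (m/s)

def tube_abcd(length, area, freq):
    """Lossless plane-wave chain matrix of one uniform duct segment."""
    k = 2.0 * math.pi * freq / C_SOUND
    zc = RHO * C_SOUND / area                       # characteristic impedance
    return ((math.cos(k * length), 1j * zc * math.sin(k * length)),
            (1j * math.sin(k * length) / zc, math.cos(k * length)))

def chain(m1, m2):
    """2x2 matrix product for cascading segments."""
    (a1, b1), (c1, d1) = m1
    (a2, b2), (c2, d2) = m2
    return ((a1 * a2 + b1 * c2, a1 * b2 + b1 * d2),
            (c1 * a2 + d1 * c2, c1 * b2 + d1 * d2))

def velocity_gain(segments, freq):
    """|U_out/U_in| for concatenated tubes with an ideal open end (p_out = 0)."""
    m = ((1.0, 0.0), (0.0, 1.0))
    for length, area in segments:
        m = chain(m, tube_abcd(length, area, freq))
    return abs(1.0 / m[1][1])  # gain = 1/D when the outlet pressure is zero
```

For a single 17 cm tube this peaks near c/4L ≈ 500 Hz, the classic quarter-wave resonance; splitting the tube into equal-area segments leaves the result unchanged, as the chain formalism requires.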
An Improved Neutron Transport Algorithm for Space Radiation
NASA Technical Reports Server (NTRS)
Heinbockel, John H.; Clowdsley, Martha S.; Wilson, John W.
2000-01-01
A low-energy neutron transport algorithm for use in space radiation protection is developed. The algorithm is based upon a multigroup analysis of the straight-ahead Boltzmann equation by using a mean value theorem for integrals. This analysis is accomplished by solving a realistic but simplified neutron transport test problem. The test problem is analyzed by using numerical and analytical procedures to obtain an accurate solution within specified error bounds. Results from the test problem are then used for determining mean values associated with rescattering terms that are associated with a multigroup solution of the straight-ahead Boltzmann equation. The algorithm is then coupled to the Langley HZETRN code through the evaporation source term. Evaluation of the neutron fluence generated by the solar particle event of February 23, 1956, for a water and an aluminum-water shield-target configuration is then compared with LAHET and MCNPX Monte Carlo code calculations for the same shield-target configuration. The algorithm developed showed a great improvement in results over the unmodified HZETRN solution. In addition, a two-directional solution of the evaporation source showed even further improvement of the fluence near the front of the water target where diffusion from the front surface is important.
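The straight-ahead multigroup idea can be illustrated with a toy downscatter-only marching calculation. This is a hand-rolled sketch with made-up cross sections, not the HZETRN algorithm or real nuclear data.

```python
import numpy as np

# Toy 3-group, downscatter-only, straight-ahead attenuation sketch.
# Groups ordered high -> low energy; cross sections are illustrative only.
sigma_t = np.array([0.10, 0.15, 0.25])      # total macroscopic XS, 1/cm
# sigma_s[g2, g1]: scattering from group g1 into lower-energy group g2
sigma_s = np.array([[0.00, 0.00, 0.00],
                    [0.04, 0.00, 0.00],
                    [0.02, 0.06, 0.00]])

phi = np.array([1.0, 0.0, 0.0])             # incident flux, group 1 only
dx, depth = 0.5, 20.0                        # step and slab depth, cm
for _ in range(int(depth / dx)):
    scatter_in = sigma_s @ phi               # downscatter source term
    phi = phi + dx * (scatter_in - sigma_t * phi)  # explicit marching step

print(phi)  # group fluxes after 20 cm of slab
```

The marching step is a forward-Euler solve of d(phi_g)/dx = -Sigma_t,g phi_g + sum of downscatter sources, the simplest instance of the straight-ahead form the abstract refers to.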
Organic thin film transistor with a simplified planar structure
NASA Astrophysics Data System (ADS)
Zhang, Lei; Yu, Jungsheng; Zhong, Jian; Jiang, Yadong
2009-05-01
An organic thin film transistor (OTFT) with a simplified planar structure is described. The gate electrode and the source/drain electrodes of the OTFT are processed in one plane: all three electrodes are deposited on the glass substrate by DC sputtering from a Cr/Ni target, and electrode layouts with different width-to-length ratios are then patterned by photolithography at the same time. Only one deposition step and one photolithography step are needed, whereas the conventional process takes at least two of each: metal is first prepared on the other side of the glass substrate and the gate electrode is formed by photolithography, and the source/drain electrodes are then prepared by deposition and photolithography on the side with the insulation layer. After the three electrodes are prepared, the insulation layer is made by spin coating, with polyimide used as the insulating material. A small-molecule material, pentacene, is evaporated onto the insulation layer by vacuum deposition as the active layer. The whole OTFT process thus needs only three steps. A semi-automatic probe stage is used to connect the three electrodes to the probes of the test instrument. A charge carrier mobility of 0.3 cm^2/(V·s) and an on/off current ratio of 10^5 are obtained from OTFTs on glass substrates. The planar structure and simplified process reduce device complexity and fabrication cost.
Shielding analyses of an AB-BNCT facility using Monte Carlo simulations and simplified methods
NASA Astrophysics Data System (ADS)
Lai, Bo-Lun; Sheu, Rong-Jiun
2017-09-01
Accurate Monte Carlo simulations and simplified methods were used to investigate the shielding requirements of a hypothetical accelerator-based boron neutron capture therapy (AB-BNCT) facility that included an accelerator room and a patient treatment room. The epithermal neutron beam for BNCT purposes was generated by coupling a neutron production target with a specially designed beam shaping assembly (BSA), which was embedded in the partition wall between the two rooms. Neutrons were produced from a beryllium target bombarded by 1-mA 30-MeV protons. MCNP6-generated surface sources around all the exterior surfaces of the BSA were established to facilitate repeated Monte Carlo shielding calculations. In addition, three simplified models based on a point-source line-of-sight approximation were developed and their predictions were compared with the reference Monte Carlo results. The comparison determined which model gave the better dose estimates, forming the basis of future design activities for the first AB-BNCT facility in Taiwan.
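A point-source line-of-sight approximation of the kind compared here is the classic point-kernel estimate: inverse-square geometry, exponential attenuation through the shield, and a buildup factor for scattered radiation. The numbers below are assumptions for illustration, not the facility's design values.

```python
import math

# Point-kernel line-of-sight sketch: flux behind a slab shield,
#   phi = S * B * exp(-mu * t) / (4 * pi * r^2).
# All parameter values are assumed for illustration.
S = 1.0e12          # source strength, particles/s (assumed)
mu = 0.065          # attenuation coefficient of the shield, 1/cm (assumed)
t = 100.0           # shield thickness along the line of sight, cm
r = 500.0           # source-to-receptor distance, cm
B = 5.0             # buildup factor lumping in scattered contribution (assumed)

flux = S * B * math.exp(-mu * t) / (4.0 * math.pi * r ** 2)
print(f"{flux:.3e} particles/cm^2/s at the receptor")
```

Monte Carlo results such as the MCNP6 surface-source calculations serve as the reference against which this kind of closed-form estimate is benchmarked.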
Analysis of CERN computing infrastructure and monitoring data
NASA Astrophysics Data System (ADS)
Nieke, C.; Lassnig, M.; Menichetti, L.; Motesnitsalis, E.; Duellmann, D.
2015-12-01
Optimizing a computing infrastructure on the scale of the LHC requires a quantitative understanding of a complex network of many different resources and services. For this purpose the CERN IT department and the LHC experiments collect a large number of logs and performance probes, which are already successfully used for short-term analysis (e.g. operational dashboards) within each group. The IT analytics working group has been created with the goal of bringing together data sources from different services and different abstraction levels, and of implementing a suitable infrastructure for mid- to long-term statistical analysis. It further provides a forum for joint optimization across single-service boundaries and for the exchange of analysis methods and tools. To simplify access to the collected data, we implemented an automated repository for cleaned and aggregated data sources based on the Hadoop ecosystem. This contribution describes some of the challenges encountered, such as dealing with heterogeneous data formats and selecting an efficient storage format for map-reduce and external access, and describes the repository user interface. Using this infrastructure we were able to quantitatively analyze the relationships between the CPU/wall-time fraction, the latency and throughput constraints of network and disk, and the effective job throughput. We first describe the design of the shared analysis infrastructure and then present a summary of first analysis results from the combined data sources.
Measurement of erosion in helicon plasma thrusters using the VASIMR® VX-CR device
NASA Astrophysics Data System (ADS)
Del Valle Gamboa, Juan Ignacio; Castro-Nieto, Jose; Squire, Jared; Carter, Mark; Chang-Diaz, Franklin
2015-09-01
The helicon plasma source is one of the principal stages of the high-power VASIMR® electric propulsion system. The VASIMR® VX-CR experiment focuses solely on this stage, exploring the erosion and long-term operation effects of the VASIMR helicon source. We report on the design and operational parameters of the VX-CR experiment, and the development of modeling tools and characterization techniques allowing the study of erosion phenomena in helicon plasma sources in general, and stand-alone helicon plasma thrusters (HPTs) in particular. A thorough understanding of the erosion phenomena within HPTs will enable better predictions of their behavior as well as more accurate estimations of their expected lifetime. We present a simplified model of the plasma-wall interactions within HPTs based on current models of the plasma density distributions in helicon discharges. Results from this modeling tool are used to predict the erosion within the plasma-facing components of the VX-CR device. Experimental techniques to measure actual erosion, including the use of coordinate-measuring machines and microscopy, will be discussed.
Stochastic Short-term High-resolution Prediction of Solar Irradiance and Photovoltaic Power Output
DOE Office of Scientific and Technical Information (OSTI.GOV)
Melin, Alexander M.; Olama, Mohammed M.; Dong, Jin
The increased penetration of solar photovoltaic (PV) energy sources into electric grids has increased the need for accurate modeling and prediction of solar irradiance and power production. Existing modeling and prediction techniques focus on long-term low-resolution prediction over minutes to years. This paper examines the stochastic modeling and short-term high-resolution prediction of solar irradiance and PV power output. We propose a stochastic state-space model to characterize the behaviors of solar irradiance and PV power output. This prediction model is suitable for the development of optimal power controllers for PV sources. A filter-based expectation-maximization and Kalman filtering mechanism is employed to estimate the parameters and states in the state-space model. The mechanism results in a finite-dimensional filter which only uses the first- and second-order statistics. The structure of the scheme contributes to a direct prediction of the solar irradiance and PV power output without any linearization process or simplifying assumptions about the signal model. This enables the system to accurately predict small as well as large fluctuations of the solar signals. The mechanism is recursive, allowing the solar irradiance and PV power to be predicted online from measurements. The mechanism is tested using solar irradiance and PV power measurement data collected locally in our lab.
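The flavor of recursive, measurement-driven prediction described here can be sketched with a minimal scalar Kalman filter on a synthetic irradiance-like signal. This is an illustrative stand-in with a simple random-walk state model, not the filter-based expectation-maximization scheme of the paper; all noise variances are assumed.

```python
import numpy as np

# Minimal scalar random-walk Kalman filter for one-step-ahead prediction of a
# synthetic irradiance-like signal. Illustrative only.
rng = np.random.default_rng(0)
true = np.cumsum(rng.normal(0.0, 2.0, 200)) + 500.0   # synthetic "truth"
meas = true + rng.normal(0.0, 10.0, 200)              # noisy measurements

q, r = 4.0, 100.0          # assumed process / measurement noise variances
x, p = meas[0], 1.0        # state estimate and its variance
preds = []
for z in meas[1:]:
    p = p + q              # predict (random walk: x_k = x_{k-1} + w_k)
    preds.append(x)        # one-step-ahead prediction before seeing z
    k = p / (p + r)        # Kalman gain
    x = x + k * (z - x)    # update with the new measurement
    p = (1 - k) * p

rmse = float(np.sqrt(np.mean((np.array(preds) - true[1:]) ** 2)))
print(f"one-step prediction RMSE: {rmse:.1f}")
```

Even this crude filter predicts the signal with error well below the raw measurement noise, which is the property the paper's richer state-space mechanism exploits at high resolution.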
Analysis of temperature distribution in liquid-cooled turbine blades
NASA Technical Reports Server (NTRS)
Livingood, John N B; Brown, W Byron
1952-01-01
The temperature distribution in liquid-cooled turbine blades determines the amount of cooling required to reduce the blade temperature to permissible values at specified locations. This report presents analytical methods for computing temperature distributions in liquid-cooled turbine blades, or in simplified shapes used to approximate sections of the blade. The individual analyses are first presented in terms of their mathematical development. By means of numerical examples, comparisons are made between simplified and more complete solutions and the effects of several variables are examined. Nondimensional charts to simplify some temperature-distribution calculations are also given.
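One of the simplified shapes used to approximate a blade section is the one-dimensional convectively cooled fin, whose excess-temperature distribution has a closed form. The sketch below uses the standard insulated-tip fin solution with assumed property values, not numbers from the report.

```python
import math

# 1-D fin sketch of a blade section: excess temperature along a convectively
# cooled fin with an insulated tip,
#   theta(x) / theta_b = cosh(m (L - x)) / cosh(m L),  m = sqrt(h P / (k A)).
# All property values are assumed for illustration.
h, P, k, A = 600.0, 0.20, 20.0, 4.0e-4   # W/m^2K, m, W/mK, m^2
L, theta_b = 0.05, 400.0                  # fin length (m), base excess temp (K)

m = math.sqrt(h * P / (k * A))
theta_tip = theta_b * math.cosh(0.0) / math.cosh(m * L)   # x = L
print(f"tip excess temperature: {theta_tip:.1f} K")
```

Evaluating theta(x) at several stations reproduces the kind of nondimensional temperature-distribution chart the report provides.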
Comparison of two trajectory based models for locating particle sources for two rural New York sites
NASA Astrophysics Data System (ADS)
Zhou, Liming; Hopke, Philip K.; Liu, Wei
Two back-trajectory-based statistical models, simplified quantitative transport bias analysis (QTBA) and residence-time weighted concentrations (RTWC), have been compared for their capabilities of identifying likely locations of source emissions contributing to observed particle concentrations at Potsdam and Stockton, New York. Quantitative transport bias analysis attempts to take into account the distribution of concentrations around the directions of the back trajectories. The full QTBA approach also considers wet and dry deposition processes; simplified QTBA omits deposition and is best used with multiple-site data. Similarly, the RTWC approach uses concentrations measured at different sites along with the back trajectories to distribute the concentration contributions across the spatial domain of the trajectories. In this study, these models are used in combination with the source contribution values obtained by a previous positive matrix factorization analysis of particle composition data from Potsdam and Stockton. The six sources common to the two sites (sulfate, soil, zinc smelter, nitrate, wood smoke, and copper smelter) were analyzed. The results of the two methods are consistent and locate large, clearly defined sources well. The RTWC approach can find more minor sources but may also give unrealistic estimates of source locations.
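The RTWC idea reduces to a weighted average: each back trajectory spreads its receptor concentration over the grid cells it crossed, weighted by residence time. The toy sketch below uses fabricated trajectories with one hour of residence per cell; it illustrates the bookkeeping, not the published implementation.

```python
from collections import defaultdict

# Toy residence-time weighted concentration (RTWC) sketch. Each entry pairs
# the grid cells a back trajectory crossed with the concentration observed
# at the receptor when it arrived. Data are fabricated for illustration.
trajectories = [
    ([(0, 0), (0, 1), (1, 1)], 12.0),
    ([(0, 0), (1, 0), (1, 1)], 4.0),
    ([(0, 1), (1, 1)], 8.0),
]

num = defaultdict(float)   # sum of conc * residence time (1 h per cell here)
den = defaultdict(float)   # sum of residence time
for cells, conc in trajectories:
    for cell in cells:
        num[cell] += conc * 1.0
        den[cell] += 1.0

rtwc = {cell: num[cell] / den[cell] for cell in num}
print(rtwc[(1, 1)])  # cell crossed by all three trajectories
```

Cells that are repeatedly upwind when high concentrations arrive acquire high RTWC values, flagging them as likely source areas.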
First principles cable braid electromagnetic penetration model
Warne, Larry Kevin; Langston, William L.; Basilio, Lorena I.; ...
2016-01-01
The model for penetration of a wire braid is rigorously formulated. Integral formulas are developed from energy principles for both self and transfer immittances in terms of potentials for the fields. The detailed boundary value problem for the wire braid is also set up in a very efficient manner; the braid wires act as sources for the potentials in the form of a sequence of line multi-poles with unknown coefficients that are determined by means of conditions arising from the wire surface boundary conditions. Approximations are introduced to relate the local properties of the braid wires to a simplified infinite periodic planar geometry. Furthermore, this is used to treat nonuniform coaxial geometries including eccentric interior coaxial arrangements and an exterior ground plane.
DEVELOPMENT OF A MODEL FOR REAL TIME CO CONCENTRATIONS NEAR ROADWAYS
Although emission standards for mobile sources continue to be tightened, tailpipe emissions in urban areas continue to be a major source of human exposure to air toxics. Current human exposure models using simplified assumptions based on fixed air monitoring stations and region...
Simplifying microbial electrosynthesis reactor design.
Giddings, Cloelle G S; Nevin, Kelly P; Woodward, Trevor; Lovley, Derek R; Butler, Caitlyn S
2015-01-01
Microbial electrosynthesis, an artificial form of photosynthesis, can efficiently convert carbon dioxide into organic commodities; however, this process has only previously been demonstrated in reactors with features likely to be a barrier to scale-up. Therefore, the possibility of simplifying reactor design by both eliminating potentiostatic control of the cathode and removing the membrane separating the anode and cathode was investigated with biofilms of Sporomusa ovata. S. ovata reduces carbon dioxide to acetate and acts as the microbial catalyst, with plain graphite stick cathodes serving as the electron donor. In traditional 'H-cell' reactors, where the anode and cathode chambers were separated with a proton-selective membrane, the rates and coulombic efficiencies of microbial electrosynthesis remained high when electron delivery at the cathode was powered with a direct-current power source rather than with the potentiostat-poised cathode utilized in previous studies. A membrane-less reactor with a direct-current power source, with the cathode and anode positioned to avoid oxygen exposure at the cathode, retained high rates of acetate production as well as high coulombic and energetic efficiencies. The finding that microbial electrosynthesis is feasible without a membrane separating the anode from the cathode, coupled with a direct-current power source supplying the energy for electron delivery, is expected to greatly simplify future reactor design and lower construction costs.
Broadening and Simplifying the First SETI Protocol
NASA Astrophysics Data System (ADS)
Michaud, M. A. G.
The Declaration of Principles Concerning Activities Following the Detection of Extraterrestrial Intelligence, known informally as the First SETI Protocol, is the primary existing international guidance on this subject. During the fifteen years since the document was issued, several people have suggested revisions or additional protocols. This article proposes a broadened and simplified text that would apply to the detection of alien technology in our solar system as well as to electromagnetic signals from more remote sources.
The influence of a wind tunnel on helicopter rotational noise: Formulation of analysis
NASA Technical Reports Server (NTRS)
Mosher, M.
1984-01-01
An analytical model is discussed that can be used to examine the effects of wind tunnel walls on helicopter rotational noise. A complete physical model of an acoustic source in a wind tunnel is described and a simplified version is then developed. This simplified model retains the important physical processes involved, yet it is more amenable to analysis. The simplified physical model is then modeled as a mathematical problem. An inhomogeneous partial differential equation with mixed boundary conditions is set up and then transformed into an integral equation. Details of generating a suitable Green's function and integral equation are included and the equation is discussed and also given for a two-dimensional case.
CACTI: free, open-source software for the sequential coding of behavioral interactions.
Glynn, Lisa H; Hallgren, Kevin A; Houck, Jon M; Moyers, Theresa B
2012-01-01
The sequential analysis of client and clinician speech in psychotherapy sessions can help to identify and characterize potential mechanisms of treatment and behavior change. Previous studies required coding systems that were time-consuming, expensive, and error-prone. Existing software can be expensive and inflexible, and furthermore, no single package allows for pre-parsing, sequential coding, and assignment of global ratings. We developed a free, open-source, and adaptable program to meet these needs: The CASAA Application for Coding Treatment Interactions (CACTI). Without transcripts, CACTI facilitates the real-time sequential coding of behavioral interactions using WAV-format audio files. Most elements of the interface are user-modifiable through a simple XML file, and can be further adapted using Java through the terms of the GNU Public License. Coding with this software yields interrater reliabilities comparable to previous methods, but at greatly reduced time and expense. CACTI is a flexible research tool that can simplify psychotherapy process research, and has the potential to contribute to the improvement of treatment content and delivery.
48 CFR 570.301 - Market survey.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 48 Federal Acquisition Regulations System 4 2012-10-01 2012-10-01 false Market survey. 570.301... Real Property Over the Simplified Lease Acquisition Threshold 570.301 Market survey. Conduct a market survey to identify potential sources. Use information available in GSA or from other sources to identify...
48 CFR 570.301 - Market survey.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 48 Federal Acquisition Regulations System 4 2014-10-01 2014-10-01 false Market survey. 570.301... Real Property Over the Simplified Lease Acquisition Threshold 570.301 Market survey. Conduct a market survey to identify potential sources. Use information available in GSA or from other sources to identify...
48 CFR 570.301 - Market survey.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 4 2011-10-01 2011-10-01 false Market survey. 570.301... Real Property Over the Simplified Lease Acquisition Threshold 570.301 Market survey. Conduct a market survey to identify potential sources. Use information available in GSA or from other sources to identify...
48 CFR 570.301 - Market survey.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 48 Federal Acquisition Regulations System 4 2013-10-01 2013-10-01 false Market survey. 570.301... Real Property Over the Simplified Lease Acquisition Threshold 570.301 Market survey. Conduct a market survey to identify potential sources. Use information available in GSA or from other sources to identify...
48 CFR 570.304 - General source selection procedures.
Code of Federal Regulations, 2011 CFR
2011-10-01
... procedures. 570.304 Section 570.304 Federal Acquisition Regulations System GENERAL SERVICES ADMINISTRATION... Leasehold Interests in Real Property Over the Simplified Lease Acquisition Threshold 570.304 General source... disadvantaged business concerns in performance of the contract, and other factors as required by FAR 15.304 as...
Comparative toxicity assessment of particulate matter (PM) from different sources will potentially inform the understanding of regional differences in PM-induced cardiac health effects by identifying PM sources linked to highest potency components. Conventional low-throughput in...
Prediction of Down-Gradient Impacts of DNAPL Source Depletion Using Tracer Techniques
NASA Astrophysics Data System (ADS)
Basu, N. B.; Fure, A. D.; Jawitz, J. W.
2006-12-01
Four simplified DNAPL source depletion models that have been discussed in the literature recently are evaluated for the prediction of long-term effects of source depletion under natural gradient flow. These models are simple in form (a power function equation is an example) but are shown here to serve as mathematical analogs to complex multiphase flow and transport simulators. One of the source depletion models, the equilibrium streamtube model, is shown to be relatively easily parameterized using non-reactive and reactive tracers. Non-reactive tracers are used to characterize the aquifer heterogeneity, while reactive tracers are used to describe the mean DNAPL mass and its distribution. This information is then used in a Lagrangian framework to predict source remediation performance. In the Lagrangian approach the source zone is conceptualized as a collection of non-interacting streamtubes, with hydrodynamic and DNAPL heterogeneity represented by the variation of travel time and DNAPL saturation among the streamtubes. The travel time statistics are estimated from the non-reactive tracer data, while the DNAPL distribution statistics are estimated from the reactive tracer data. The combined statistics are used to define an analytical solution for contaminant dissolution under natural gradient flow. The tracer-based predictions compared favorably with results from the multiphase flow and transport simulator UTCHEM in domains with different hydrodynamic heterogeneity (variance of the log conductivity field = 0.2, 1, and 3).
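The power-function model mentioned in the abstract couples a source-strength relation, C/C0 = (M/M0)^Gamma, to a mass balance on the source zone. The sketch below integrates that coupled system with made-up parameter values; it shows the form of such models, not a calibrated prediction.

```python
# Power-function source depletion sketch: flux-averaged source concentration
# scales with remaining DNAPL mass as C/C0 = (M/M0)**Gamma, and the mass
# balance is dM/dt = -Q * C. All parameter values are assumed.
C0, M0, Gamma = 50.0, 1000.0, 1.0   # mg/L, kg, depletion exponent (assumed)
Q = 10.0                             # water flux through the source, m^3/day
dt, t, M = 1.0, 0.0, M0

while M > 0.01 * M0:                 # deplete to 1% of the initial mass
    C = C0 * (M / M0) ** Gamma       # current source concentration, mg/L
    M -= Q * C * dt / 1000.0         # mg/L * m^3 = g; /1000 -> kg
    t += dt
print(f"~{t:.0f} days to deplete 99% of the source mass")
```

With Gamma = 1 the depletion is exponential; Gamma above or below 1 yields the accelerating or long-tailed dissolution histories that distinguish the simplified models the paper compares.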
Simplified aerosol representations in global modeling
NASA Astrophysics Data System (ADS)
Kinne, Stefan; Peters, Karsten; Stevens, Bjorn; Rast, Sebastian; Schutgens, Nick; Stier, Philip
2015-04-01
The detailed treatment of aerosol in global modeling is complex and time-consuming. Thus simplified approaches are investigated, which prescribe 4D (space and time) distributions of aerosol optical properties and of aerosol microphysical properties. Aerosol optical properties are required to assess aerosol direct radiative effects and aerosol microphysical properties (in terms of their ability as aerosol nuclei to modify cloud droplet concentrations) are needed to address the indirect aerosol impact on cloud properties. Following the simplifying concept of the monthly gridded (1x1 lat/lon) aerosol climatology (MAC), new approaches are presented and evaluated against more detailed methods, including comparisons to detailed simulations with complex aerosol component modules.
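A prescribed climatology like MAC replaces online aerosol computation with a lookup into gridded monthly fields. The sketch below shows one plausible form of such a lookup with time interpolation between month midpoints; the array is random placeholder data and the interpolation details are assumptions, not the actual MAC product.

```python
import numpy as np

# Sketch of a prescribed monthly 1x1-degree aerosol optical depth (AOD)
# climatology lookup, in the spirit of the MAC approach. Placeholder data.
rng = np.random.default_rng(1)
aod = rng.uniform(0.05, 0.5, size=(12, 180, 360))   # month, lat, lon

def aod_at(day_of_year, lat, lon):
    """Linearly interpolate AOD in time between monthly midpoints."""
    month = day_of_year / 30.4375 - 0.5              # fractional month index
    m0 = int(np.floor(month)) % 12
    m1 = (m0 + 1) % 12
    w = month - np.floor(month)
    i, j = int(lat + 90.0), int(lon % 360.0)         # 1x1-degree bins
    return (1 - w) * aod[m0, i, j] + w * aod[m1, i, j]

print(round(aod_at(200, 48.1, 11.6), 3))             # mid-July, central Europe
```

The attraction of the approach is exactly this cheapness: a radiation or cloud scheme can query prescribed optical and microphysical fields instead of running a full aerosol module.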
Weather data for simplified energy calculation methods. Volume II. Middle United States: TRY data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olsen, A.R.; Moreno, S.; Deringer, J.
1984-08-01
The objective of this report is to provide a source of weather data for direct use with a number of simplified energy calculation methods available today. Complete weather data for a number of cities in the United States are provided for use in the following methods: degree hour, modified degree hour, bin, modified bin, and variable degree day. This report contains sets of weather data for 22 cities in the continental United States using Test Reference Year (TRY) source weather data. The weather data for each city have been summarized in a number of ways to provide the differing levels of detail necessary for alternative simplified energy calculation methods. Weather variables summarized include dry bulb and wet bulb temperature, percent relative humidity, humidity ratio, wind speed, percent possible sunshine, percent diffuse solar radiation, total solar radiation on horizontal and vertical surfaces, and solar heat gain through standard DSA glass. Monthly and annual summaries, in some cases by time of day, are available. These summaries are produced in a series of nine computer-generated tables.
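The bin method named here pairs tabulated hours-per-temperature-bin data (the kind of summary this report provides) with a simple building load model. The sketch below uses fabricated bin counts and an assumed balance point and loss coefficient, purely to illustrate how the tabulated weather data feed the calculation.

```python
# Bin-method sketch: binned weather data drive a linear heating-load model.
# Bin counts, balance point, and UA value are fabricated for illustration.
bins_hours = {  # outdoor dry-bulb bin midpoint (deg F) -> hours per year
    12: 80, 22: 300, 32: 900, 42: 1400, 52: 1700, 62: 1900,
}
balance_F = 65.0      # balance-point temperature, deg F (assumed)
UA = 450.0            # overall loss coefficient, Btu/h per deg F (assumed)

heating_btu = sum(UA * max(balance_F - t, 0.0) * hours
                  for t, hours in bins_hours.items())
print(f"annual heating load: {heating_btu / 1e6:.1f} MMBtu")
```

The degree-hour and variable degree-day methods use the same weather summaries collapsed one step further, into a single weighted temperature-difference total.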
Dual Telecentric Lens System For Projection Onto Tilted Toroidal Screen
NASA Technical Reports Server (NTRS)
Gold, Ronald S.; Hudyma, Russell M.
1995-01-01
A system of two optical assemblies projects an image onto a tilted toroidal screen. One projection lens is optimized for the red and green spectral region, the other for blue. The dual-channel approach offers several advantages: simplified color filtering, simplified chromatic aberration correction, a less complex polarizing prism arrangement, and increased throughput of blue light energy. Though usable in conjunction with any source of imagery, the system is designed especially to project images formed by reflection of light from a liquid-crystal light valve (LCLV).
Schemel, Laurence E.
2001-01-01
This article presents a simplified conversion to salinity units for use with specific conductance data from monitoring stations that have been normalized to a standard temperature of 25 °C and an equation for the reverse calculation. Although these previously undocumented methods have been shared with many IEP agencies over the last two decades, the sources of the equations and data are identified here so that the original literature can be accessed.
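Conversions of this kind typically take the polynomial form of the PSS-78 practical salinity scale evaluated at the 25 °C reference temperature. The sketch below uses the standard PSS-78 a-coefficients with conductance expressed relative to standard seawater; it illustrates the form of such conversions and is not necessarily the exact equation documented in the article.

```python
# PSS-78-style polynomial converting 25-degC-normalized specific conductance
# to practical salinity. R is conductance relative to standard seawater
# (~53.087 mS/cm at 25 degC, an assumed reference here); A holds the
# standard PSS-78 a-coefficients, which sum to 35 at R = 1.
A = (0.0080, -0.1692, 25.3851, 14.0941, -7.0261, 2.7081)

def salinity(sc_mS_cm, c_ref=53.087):
    R = sc_mS_cm / c_ref
    return sum(a * R ** (i / 2.0) for i, a in enumerate(A))

print(round(salinity(53.087), 4))  # standard seawater -> 35.0
```

The reverse calculation mentioned in the article would invert this polynomial, e.g. numerically, to recover specific conductance from salinity.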
Chinese Localisation of Evergreen: An Open Source Integrated Library System
ERIC Educational Resources Information Center
Zou, Qing; Liu, Guoying
2009-01-01
Purpose: The purpose of this paper is to investigate various issues related to Chinese language localisation in Evergreen, an open source integrated library system (ILS). Design/methodology/approach: A Simplified Chinese version of Evergreen was implemented and tested and various issues such as encoding, indexing, searching, and sorting…
The energy release in earthquakes, and subduction zone seismicity and stress in slabs. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Vassiliou, M. S.
1983-01-01
Energy release in earthquakes is discussed. Dynamic energy from source time function, a simplified procedure for modeling deep focus events, static energy estimates, near source energy studies, and energy and magnitude are addressed. Subduction zone seismicity and stress in slabs are also discussed.
Community LINE Source Model (C-LINE)
This presentation provides an introduction for the live demo and explains the purpose of C-LINE and its key features. C-LINE is a web-based model designed to inform the community user of local air quality impacts due to mobile-sources in their region of interest using a simplifie...
Code of Federal Regulations, 2011 CFR
2011-10-01
...-source justification (excluding brand name) in accordance with 8.405-6; or (ii) Task or delivery orders... Innovation Development Act of 1982 (Pub. L. 97-219); (3) The contract action is an order placed under subpart... one source is available; (6) The contract action— (i) Is for an amount not greater than the simplified...
Application of the DG-1199 methodology to the ESBWR and ABWR.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kalinich, Donald A.; Gauntt, Randall O.; Walton, Fotini
2010-09-01
Appendix A-5 of Draft Regulatory Guide DG-1199, 'Alternative Radiological Source Term for Evaluating Design Basis Accidents at Nuclear Power Reactors', provides guidance, applicable to RADTRAD MSIV leakage models, for scaling containment aerosol concentration to the expected steam dome concentration in order to preserve the simplified use of the Accident Source Term (AST) in assessing containment performance under assumed design basis accident (DBA) conditions. In this study, Economic and Safe Boiling Water Reactor (ESBWR) and Advanced Boiling Water Reactor (ABWR) RADTRAD models are developed using the DG-1199, Appendix A-5 guidance. The models were run using RADTRAD v3.03. Low Population Zone (LPZ), control room (CR), and worst-case 2-hr Exclusion Area Boundary (EAB) doses were calculated and compared to the relevant accident dose criteria in 10 CFR 50.67. For the ESBWR, the dose results were all lower than the MSIV leakage doses calculated by General Electric/Hitachi (GEH) in their licensing technical report. There are no comparable ABWR MSIV leakage doses; however, it should be noted that the ABWR doses are lower than the ESBWR doses. In addition, sensitivity cases were evaluated to ascertain the influence/importance of key input parameters/features of the models.
NASA Astrophysics Data System (ADS)
Ringenberg, Hunter; Rogers, Dylan; Wei, Nathaniel; Krane, Michael; Wei, Timothy
2017-11-01
The objective of this study is to apply experimental data to the theoretical framework of Krane (2013), in which the principal aeroacoustic source is expressed in terms of vocal fold drag, glottal jet dynamic head, and glottal exit volume flow, reconciling formal theoretical aeroacoustic descriptions of phonation with more traditional lumped-element descriptions. These quantities appear in the integral equations of motion for phonatory flow. In this way, time-resolved velocity field measurements can be used to compute time-resolved estimates of the relevant terms in the integral equations of motion, including phonation aeroacoustic source strength. A simplified 10x scale vocal fold model from Krane et al. (2007) was used to examine symmetric, i.e. 'healthy', oscillatory motion of the vocal folds. By using water as the working fluid, very high spatial and temporal resolution was achieved. Temporal variation of transglottal pressure was measured simultaneously with the flow at the vocal fold model mid-height. Experiments were dynamically scaled to examine a range of frequencies corresponding to male and female voice. The simultaneity of the pressure and flow measurements provides new insights into the aeroacoustics associated with vocal fold oscillations. Supported by NIH Grant No. 2R01 DC005642-11.
RADTRAD: A simplified model for RADionuclide Transport and Removal And Dose estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humphreys, S.L.; Miller, L.A.; Monroe, D.K.
1998-04-01
This report documents the RADTRAD computer code developed for the U.S. Nuclear Regulatory Commission (NRC) Office of Nuclear Reactor Regulation (NRR) to estimate transport and removal of radionuclides and dose at selected receptors. The document includes a users' guide to the code, a description of the technical basis for the code, the quality assurance and code acceptance testing documentation, and a programmers' guide. The RADTRAD code can be used to estimate the containment release using either the NRC TID-14844 or NUREG-1465 source terms and assumptions, or a user-specified table. In addition, the code can account for a reduction in the quantity of radioactive material due to containment sprays, natural deposition, filters, and other natural and engineered safety features. The RADTRAD code uses a combination of tables and/or numerical models of source term reduction phenomena to determine the time-dependent dose at user-specified locations for a given accident scenario. The code system also provides the inventory, decay chain, and dose conversion factor tables needed for the dose calculation. The RADTRAD code can be used to assess occupational radiation exposures, typically in the control room; to estimate site boundary doses; and to estimate dose attenuation due to modification of a facility or accident sequence.
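The removal mechanisms listed (sprays, natural deposition, leakage) can be pictured as competing first-order sinks on a single well-mixed compartment. The sketch below is a one-compartment caricature with assumed rates, not RADTRAD's multi-compartment model or its default parameters.

```python
import math

# Single-compartment sketch of a RADTRAD-style balance: airborne activity is
# removed by several first-order paths, and the environmental release is the
# time-integral of the leakage path. Rates are assumed for illustration.
lam_spray, lam_dep, lam_leak = 2.0, 0.1, 0.001   # removal rates, 1/h
A0 = 1.0e6                                        # initial airborne activity
lam = lam_spray + lam_dep + lam_leak              # total removal rate

# Activity released over 24 h: integral of lam_leak * A0 * exp(-lam t) dt
released = lam_leak * A0 / lam * (1.0 - math.exp(-lam * 24.0))
print(f"released fraction: {released / A0:.2e}")
```

Because sprays and deposition compete with leakage, the released fraction is far below lam_leak times 24 h; engineered removal features suppress the source term exactly as in this competition of rates.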
ERIC Educational Resources Information Center
Herron, J. Dudley
1977-01-01
Presents short articles on: recycling disposable plastics for laboratory use; an inexpensive source of atomic and molecular models; a simplified Boyle's Law demonstration; and a lab demonstrating energy transformation. (MLH)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grabaskas, Dave; Brunett, Acacia J.; Bucknor, Matthew
GE Hitachi Nuclear Energy (GEH) and Argonne National Laboratory are currently engaged in a joint effort to modernize and develop probabilistic risk assessment (PRA) techniques for advanced non-light water reactors. At a high level, the primary outcome of this project will be the development of next-generation PRA methodologies that will enable risk-informed prioritization of safety- and reliability-focused research and development, while also identifying gaps that may be resolved through additional research. A subset of this effort is the development of PRA methodologies to conduct a mechanistic source term (MST) analysis for event sequences that could result in the release of radionuclides. The MST analysis seeks to realistically model and assess the transport, retention, and release of radionuclides from the reactor to the environment. The MST methods developed during this project seek to satisfy the requirements of the Mechanistic Source Term element of the ASME/ANS Non-LWR PRA standard. The MST methodology consists of separate analysis approaches for risk-significant and non-risk significant event sequences that may result in the release of radionuclides from the reactor. For risk-significant event sequences, the methodology focuses on a detailed assessment, using mechanistic models, of radionuclide release from the fuel, transport through and release from the primary system, transport in the containment, and finally release to the environment. The analysis approach for non-risk significant event sequences examines the possibility of large radionuclide releases due to events such as re-criticality or the complete loss of radionuclide barriers. This paper provides details on the MST methodology, including the interface between the MST analysis and other elements of the PRA, and provides a simplified example MST calculation for a sodium fast reactor.
Governmental Writers and African Readers in Rhodesia
ERIC Educational Resources Information Center
Cripwell, Kenneth R.
1975-01-01
The simplified documents produced by the British and Rhodesian governments to explain the settlement proposals to Africans are compared in terms of syntactic complexity and lexical choice, and in terms of the audience to which the documents are addressed. (Author/RM)
48 CFR 1513.000 - Scope of part.
Code of Federal Regulations, 2010 CFR
2010-10-01
... EPA policies and procedures for the acquisition of supplies, nonpersonal services, and construction from commercial sources, the aggregate amount of which does not exceed the simplified acquisition...
The influence of wind-tunnel walls on discrete frequency noise
NASA Technical Reports Server (NTRS)
Mosher, M.
1984-01-01
This paper describes an analytical model that can be used to examine the effects of wind-tunnel walls on discrete frequency noise. First, a complete physical model of an acoustic source in a wind tunnel is described, and a simplified version is then developed. This simplified model retains the important physical processes involved, yet it is more amenable to analysis. Second, the simplified physical model is formulated as a mathematical problem. An inhomogeneous partial differential equation with mixed boundary conditions is set up and then transformed into an integral equation. The integral equation has been solved with a panel program on a computer. Preliminary results from a simple model problem will be shown and compared with the approximate analytic solution.
Bauer, Timothy J
2013-06-15
The Jack Rabbit Test Program was sponsored in April and May 2010 by the Department of Homeland Security Transportation Security Administration to generate source data for large releases of chlorine and ammonia from transport tanks. In addition to a variety of data types measured at the release location, concentration versus time data was measured using sensors at distances up to 500 m from the tank. Release data were used to create accurate representations of the vapor flux versus time for the ten releases. This study was conducted to determine the importance of source terms and meteorological conditions in predicting downwind concentrations and the accuracy that can be obtained in those predictions. Each source representation was entered into an atmospheric transport and dispersion model using simplifying assumptions regarding the source characterization and meteorological conditions, and statistics for cloud duration and concentration at the sensor locations were calculated. A detailed characterization for one of the chlorine releases predicted 37% of concentration values within a factor of two, but cannot be considered representative of all the trials. Predictions of toxic effects at 200 m are relevant to incidents involving 1-ton chlorine tanks commonly used in parts of the United States and internationally. Published by Elsevier B.V.
General analytical solutions for DC/AC circuit-network analysis
NASA Astrophysics Data System (ADS)
Rubido, Nicolás; Grebogi, Celso; Baptista, Murilo S.
2017-06-01
In this work, we present novel general analytical solutions for the currents that develop in the edges of network-like circuits when some nodes of the network act as sources/sinks of DC or AC current. We assume that Ohm's law is valid at every edge and that charge at every node is conserved (with the exception of the source/sink nodes). The resistive, capacitive, and/or inductive properties of the lines in the circuit define a complex network structure with given impedances for each edge. Our solution for the currents at each edge is derived in terms of the eigenvalues and eigenvectors of the Laplacian matrix of the network defined from the impedances. This derivation also allows us to compute the equivalent impedance between any two nodes of the circuit and relate it to currents in a closed circuit which has a single voltage generator instead of many input/output source/sink nodes. This simplifies the treatment that could be done via Thévenin's theorem. Contrary to solving Kirchhoff's equations, our derivation allows us to easily calculate the redistribution of currents that occurs when the location of sources and sinks changes within the network. Finally, we show that our solutions are identical to the ones found from Circuit Theory nodal analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warne, Larry K.; Langston, William L.; Basilio, Lorena I.
The model for penetration of a wire braid is rigorously formulated. Integral formulas are developed from energy principles and reciprocity for both self and transfer immittances in terms of potentials for the fields. The detailed boundary value problem for the wire braid is also set up in a very efficient manner; the braid wires act as sources for the potentials in the form of a sequence of line multipoles with unknown coefficients that are determined by means of conditions arising from the wire surface boundary conditions. Approximations are introduced to relate the local properties of the braid wires to a simplified infinite periodic planar geometry. This is used in a simplified application of reciprocity to be able to treat nonuniform coaxial geometries including eccentric interior coaxial arrangements and an exterior ground plane.
Microphysical response of cloud droplets in a fluctuating updraft. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Harding, D. D.
1977-01-01
The effect of a fluctuating updraft upon a distribution of cloud droplets is examined. Computations are performed for fourteen vertical velocity patterns; each allows a closed parcel of cloud air to undergo downward as well as upward motion. Droplet solution and curvature effects are included. The classical equations for the growth rate of an individual droplet by vapor condensation rely on simplifying assumptions. Those assumptions are isolated and examined. A unique approach is presented in which all energy sources and sinks of a droplet may be considered; it is termed the explicit model. It is speculated that the explicit model may enhance the growth of large droplets at greater heights. Such a model is beneficial to studies of pollution scavenging and acid rain.
Interaction between air pollution dispersion and residential heating demands
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lipfert, F.W.; Moskowitz, P.D.; Dungan, J.
The effect of the short-term correlation of a specific emission (sulfur dioxide) from residential space heating with air pollution dispersion rates on the accuracy of model estimates of urban air pollution on a seasonal or annual basis is analyzed. Hourly climatological and residential emission estimates for six U.S. cities and a simplified area source-dispersion model based on a circular receptor grid are used. The effect on annual average concentration estimates is found to be slight (approximately + or - 12 percent), while the maximum hourly concentrations are shown to vary considerably more, since maximum heat demand and worst-case dispersion are not coincident. Accounting for the correlations between heating demand and dispersion makes possible a differentiation in air pollution potential between coastal and interior cities.
NASA Astrophysics Data System (ADS)
Flanagan, Éanna É.; Kumar, Naresh; Wasserman, Ira; Vanderveld, R. Ali
2012-01-01
We study the fluctuations in luminosity distances due to gravitational lensing by large scale (≳35Mpc) structures, specifically voids and sheets. We use a simplified “Swiss cheese” model consisting of a ΛCDM Friedmann-Robertson-Walker background in which a number of randomly distributed nonoverlapping spherical regions are replaced by mass-compensating comoving voids, each with a uniform density interior and a thin shell of matter on the surface. We compute the distribution of magnitude shifts using a variant of the method of Holz and Wald, which includes the effect of lensing shear. The standard deviation of this distribution is ˜0.027 magnitudes and the mean is ˜0.003 magnitudes for voids of radius 35 Mpc, sources at redshift zs=1.0, with the voids chosen so that 90% of the mass is on the shell today. The standard deviation varies from 0.005 to 0.06 magnitudes as we vary the void size, source redshift, and fraction of mass on the shells today. If the shell walls are given a finite thickness of ˜1Mpc, the standard deviation is reduced to ˜0.013 magnitudes. This standard deviation due to voids is a factor ˜3 smaller than that due to galaxy scale structures. We summarize our results in terms of a fitting formula that is accurate to ˜20%, and also build a simplified analytic model that reproduces our results to within ˜30%. Our model also allows us to explore the domain of validity of weak-lensing theory for voids. We find that for 35 Mpc voids, corrections to the dispersion due to lens-lens coupling are of order ˜4%, and corrections due to shear are ˜3%. Finally, we estimate the bias due to source-lens clustering in our model to be negligible.
Prioritizing environmental justice and equality: diesel emissions in southern California.
Marshall, Julian D; Swor, Kathryn R; Nguyen, Nam P
2014-04-01
Existing environmental policies aim to reduce emissions but lack standards for addressing environmental justice. Environmental justice research documents disparities in exposure to air pollution; however, little guidance currently exists on how to make improvements or on how specific emission-reduction scenarios would improve or deteriorate environmental justice conditions. Here, we quantify how emission reductions from specific sources would change various measures of environmental equality and justice. We evaluate potential emission reductions for fine diesel particulate matter (DPM) in Southern California for five sources: on-road mobile, off-road mobile, ships, trains, and stationary. Our approach employs state-of-the-science dispersion and exposure models. We compare four environmental goals: impact, efficiency, equality, and justice. Results indicate potential trade-offs among those goals. For example, reductions in train emissions produce the greatest improvements in terms of efficiency, equality, and justice, whereas off-road mobile source reductions can have the greatest total impact. Reductions in on-road emissions produce improvements in impact, equality, and justice, whereas emission reductions from ships would widen existing population inequalities. Results are similar for complex versus simplified exposure analyses. The approach employed here could usefully be applied elsewhere to evaluate opportunities for improving environmental equality and justice in other locations.
Simplified power processing for ion-thruster subsystems
NASA Technical Reports Server (NTRS)
Wessel, F. J.; Hancock, D. J.
1983-01-01
A design for a greatly simplified power-processing unit (SPPU) for the 8-cm diameter mercury-ion-thruster subsystem is discussed. This SPPU design will provide a tenfold reduction in parts count, a decrease in system mass and cost, and an increase in system reliability compared to the existing power-processing unit (PPU) used in the Hughes/NASA Lewis Research Center Ion Auxiliary Propulsion Subsystem. The simplifications achieved in this design will greatly increase the attractiveness of ion propulsion in near-term and future spacecraft propulsion applications. A description of a typical ion-thruster subsystem is given. An overview of the thruster/power-processor interface requirements is given. Simplified thruster power processing is discussed.
NASA Astrophysics Data System (ADS)
Hu, Shujuan; Cheng, Jianbo; Xu, Ming; Chou, Jifan
2018-04-01
The three-pattern decomposition of global atmospheric circulation (TPDGAC) partitions three-dimensional (3D) atmospheric circulation into horizontal, meridional and zonal components to study the 3D structures of global atmospheric circulation. This paper incorporates the three-pattern decomposition model (TPDM) into primitive equations of atmospheric dynamics and establishes a new set of dynamical equations of the horizontal, meridional and zonal circulations in which the operator properties are studied and energy conservation laws are preserved, as in the primitive equations. The physical significance of the newly established equations is demonstrated. Our findings reveal that the new equations are essentially the 3D vorticity equations of atmosphere and that the time evolution rules of the horizontal, meridional and zonal circulations can be described from the perspective of 3D vorticity evolution. The new set of dynamical equations includes decomposed expressions that can be used to explore the source terms of large-scale atmospheric circulation variations. A simplified model is presented to demonstrate the potential applications of the new equations for studying the dynamics of the Rossby, Hadley and Walker circulations. The model shows that the horizontal air temperature anomaly gradient (ATAG) induces changes in meridional and zonal circulations and promotes the baroclinic evolution of the horizontal circulation. The simplified model also indicates that the absolute vorticity of the horizontal circulation is not conserved, and its changes can be described by changes in the vertical vorticities of the meridional and zonal circulations. Moreover, the thermodynamic equation shows that the induced meridional and zonal circulations and advection transport by the horizontal circulation in turn cause a redistribution of the air temperature. 
The simplified model reveals the fundamental rules between the evolution of the air temperature and the horizontal, meridional and zonal components of global atmospheric circulation.
NASA Technical Reports Server (NTRS)
Pankine, A. A.; Ingersoll, Andrew P.
2002-01-01
We present simulations of the interannual variability of martian global dust storms (GDSs) with a simplified low-order model (LOM) of the general circulation. The simplified model allows one to conduct computationally fast long-term simulations of the martian climate system. The LOM is constructed by Galerkin projection of a 2D (zonally averaged) general circulation model (GCM) onto a truncated set of basis functions. The resulting LOM consists of 12 coupled nonlinear ordinary differential equations describing atmospheric dynamics and dust transport within the Hadley cell. The forcing of the model is described by simplified physics based on Newtonian cooling and Rayleigh friction. The atmosphere and surface are coupled: atmospheric heating depends on the dustiness of the atmosphere, and the surface dust source depends on the strength of the atmospheric winds. Parameters of the model are tuned to fit the output of the NASA AMES GCM and the fit is generally very good. Interannual variability of GDSs is possible in the LOM, but only when stochastic forcing is added to the model. The stochastic forcing could be provided by transient weather systems or some surface process such as redistribution of the sand particles in storm-generating zones on the surface. The results are sensitive to the value of the saltation threshold, which hints at a possible feedback between saltation threshold and dust storm activity. According to this hypothesis, erodable material builds up as a result of a local process whose effect is to lower the saltation threshold until a GDS occurs. The saltation threshold adjusts its value so that dust storms are barely able to occur.
Oak Ridge Spallation Neutron Source (ORSNS) target station design integration
DOE Office of Scientific and Technical Information (OSTI.GOV)
McManamy, T.; Booth, R.; Cleaves, J.
1996-06-01
The conceptual design for a 1- to 3-MW short pulse spallation source with a liquid mercury target has been started recently. The design tools and methods being developed to define requirements, integrate the work, and provide early cost guidance will be presented with a summary of the current target station design status. The initial design point was selected with performance and cost estimate projections by a systems code. This code was developed recently using cost estimates from the Brookhaven Pulsed Spallation Neutron Source study and experience from the Advanced Neutron Source Project's conceptual design. It will be updated and improved as the design develops. Performance was characterized by a simplified figure of merit based on a ratio of neutron production to costs. A work breakdown structure was developed, with simplified systems diagrams used to define interfaces and system responsibilities. A risk assessment method was used to identify potential problems, to identify required research and development (R&D), and to aid contingency development. Preliminary 3-D models of the target station are being used to develop remote maintenance concepts and to estimate costs.
NASA Astrophysics Data System (ADS)
Ragon, Théa; Sladen, Anthony; Simons, Mark
2018-05-01
The ill-posed nature of earthquake source estimation derives from several factors including the quality and quantity of available observations and the fidelity of our forward theory. Observational errors are usually accounted for in the inversion process. Epistemic errors, which stem from our simplified description of the forward problem, are rarely dealt with despite their potential to bias the estimate of a source model. In this study, we explore the impact of uncertainties related to the choice of a fault geometry in source inversion problems. The geometry of a fault structure is generally reduced to a set of parameters, such as position, strike and dip, for one or a few planar fault segments. While some of these parameters can be solved for, more often they are fixed to an uncertain value. We propose a practical framework to address this limitation by following a previously implemented method exploring the impact of uncertainties on the elastic properties of our models. We develop a sensitivity analysis to small perturbations of fault dip and position. The uncertainties in fault geometry are included in the inverse problem under the formulation of the misfit covariance matrix that combines both prediction and observation uncertainties. We validate this approach with the simplified case of a fault that extends infinitely along strike, using both Bayesian and optimization formulations of a static inversion. If epistemic errors are ignored, predictions are overconfident in the data and source parameters are not reliably estimated. In contrast, inclusion of uncertainties in fault geometry allows us to infer a robust posterior source model. Epistemic uncertainties can be many orders of magnitude larger than observational errors for great earthquakes (Mw > 8). Not accounting for uncertainties in fault geometry may partly explain observed shallow slip deficits for continental earthquakes. 
Similarly, ignoring the impact of epistemic errors can also bias estimates of near-surface slip and predictions of tsunamis induced by megathrust earthquakes (Mw > 8).
On defense strategies for system of systems using aggregated correlations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S.; Imam, Neena; Ma, Chris Y. T.
2017-04-01
We consider a System of Systems (SoS) wherein each system Si, i = 1, 2, ..., N, is composed of discrete cyber and physical components which can be attacked and reinforced. We characterize the disruptions using aggregate failure correlation functions given by the conditional failure probability of the SoS given the failure of an individual system. We formulate the problem of ensuring the survival of the SoS as a game between an attacker and a provider, each with a utility function composed of a survival probability term and a cost term, both expressed in terms of the number of components attacked and reinforced. The survival probabilities of systems satisfy simple product-form, first-order differential conditions, which simplify the Nash Equilibrium (NE) conditions. We derive the sensitivity functions that highlight the dependence of SoS survival probability at NE on cost terms, correlation functions, and individual system survival probabilities. We apply these results to a simplified model of distributed cloud computing infrastructure.
Simplified Least Squares Shadowing sensitivity analysis for chaotic ODEs and PDEs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chater, Mario, E-mail: chaterm@mit.edu; Ni, Angxiu, E-mail: niangxiu@mit.edu; Wang, Qiqi, E-mail: qiqi@mit.edu
This paper develops a variant of the Least Squares Shadowing (LSS) method, which has successfully computed the derivative for several chaotic ODEs and PDEs. The development in this paper aims to simplify Least Squares Shadowing method by improving how time dilation is treated. Instead of adding an explicit time dilation term as in the original method, the new variant uses windowing, which can be more efficient and simpler to implement, especially for PDEs.
Simplified ISCCP cloud regimes for evaluating cloudiness in CMIP5 models
NASA Astrophysics Data System (ADS)
Jin, Daeho; Oreopoulos, Lazaros; Lee, Dongmin
2017-01-01
We take advantage of ISCCP simulator data available for many models that participated in CMIP5, in order to introduce a framework for comparing model cloud output with corresponding ISCCP observations based on the cloud regime (CR) concept. Simplified global CRs are employed, derived from the co-variations of three variables, namely cloud optical thickness, cloud top pressure and cloud fraction (τ, pc, CF). Following evaluation criteria established in a companion paper of ours (Jin et al. 2016), we assess model cloud simulation performance based on how well the simplified CRs are simulated in terms of similarity of centroids, global values and map correlations of relative frequency of occurrence, and long-term total cloud amounts. Mirroring prior results, modeled clouds tend to be too optically thick and not as extensive as in observations. CRs with high-altitude clouds from storm activity are not as well simulated here compared to the previous study, but other regimes containing near-overcast low clouds show improvement. Models that performed well in the companion paper against CRs defined by joint τ-pc histograms distinguish themselves again here, but improvements for previously underperforming models are also seen. Averaging across models does not yield a drastically better picture, except for cloud geographical locations. Cloud evaluation with simplified regimes thus seems more forgiving than that using histogram-based CRs, while still strict enough to reveal model weaknesses.
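Cloud regimes of this kind are conventionally obtained by clustering grid-cell (τ, pc, CF) triplets. A minimal k-means sketch on synthetic stand-in data illustrates the idea; the sample distributions and the choice of four regimes are assumptions for the example, not ISCCP values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical grid-cell samples of (optical thickness tau, cloud-top
# pressure pc [hPa], cloud fraction CF) -- synthetic stand-ins for
# ISCCP-simulator output, standardized before clustering.
samples = np.column_stack([
    rng.gamma(2.0, 4.0, 500),          # tau
    rng.uniform(150.0, 1000.0, 500),   # pc
    rng.uniform(0.0, 1.0, 500),        # CF
])
X = (samples - samples.mean(0)) / samples.std(0)

def kmeans(X, k, iters=50):
    """Plain k-means: assign points to the nearest centroid, then
    recompute each centroid as the mean of its assigned points."""
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(0)
    return centroids, labels

centroids, labels = kmeans(X, k=4)   # four simplified "cloud regimes"
```

Model evaluation then amounts to comparing the centroids and the relative frequency of occurrence of each regime between simulated and observed data.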
NASA Astrophysics Data System (ADS)
Pappas, E. P.; Moutsatsos, A.; Pantelis, E.; Zoros, E.; Georgiou, E.; Torrens, M.; Karaiskos, P.
2016-02-01
This work presents a comprehensive Monte Carlo (MC) simulation model for the Gamma Knife Perfexion (PFX) radiosurgery unit. Model-based dosimetry calculations were benchmarked in terms of relative dose profiles (RDPs) and output factors (OFs), against corresponding EBT2 measurements. To reduce the rather prolonged computational time associated with the comprehensive PFX model MC simulations, two approximations were explored and evaluated on the grounds of dosimetric accuracy. The first consists in directional biasing of the 60Co photon emission while the second refers to the implementation of simplified source geometric models. The effect of the dose scoring volume dimensions in OF calculations accuracy was also explored. RDP calculations for the comprehensive PFX model were found to be in agreement with corresponding EBT2 measurements. Output factors of 0.819 ± 0.004 and 0.8941 ± 0.0013 were calculated for the 4 mm and 8 mm collimator, respectively, which agree, within uncertainties, with corresponding EBT2 measurements and published experimental data. Volume averaging was found to affect OF results by more than 0.3% for scoring volume radii greater than 0.5 mm and 1.4 mm for the 4 mm and 8 mm collimators, respectively. Directional biasing of photon emission resulted in a time efficiency gain factor of up to 210 with respect to the isotropic photon emission. Although no considerable effect on relative dose profiles was detected, directional biasing led to OF overestimations which were more pronounced for the 4 mm collimator and increased with decreasing emission cone half-angle, reaching up to 6% for a 5° angle. Implementation of simplified source models revealed that omitting the sources’ stainless steel capsule significantly affects both OF results and relative dose profiles, while the aluminum-based bushing did not exhibit considerable dosimetric effect. 
In conclusion, the results of this work suggest that any PFX simulation model should be benchmarked in terms of both RDP and OF results.
NASA Astrophysics Data System (ADS)
Yoshinaga, Tsukasa; Nozaki, Kazunori; Wada, Shigeo
2018-03-01
The sound generation mechanisms of sibilant fricatives were investigated with experimental measurements and large-eddy simulations using a simplified vocal tract model. The vocal tract geometry was simplified to a three-dimensional rectangular channel, and differences in the geometries while pronouncing the fricatives /s/ and /ʃ/ were expressed by shifting the position of the tongue and its constricted flow channel. Experimental results showed that the characteristic peak frequency of the fricatives decreased when the distance between the tongue and teeth increased. Numerical simulations revealed that the jet flow generated from the constriction impinged on the upper teeth wall and caused the main sound source upstream and downstream from the gap between the teeth. While magnitudes of the sound source decreased with increments of the frequency, amplitudes of the pressure downstream from the constriction increased at the peak frequencies of the corresponding tongue position. These results indicate that the sound pressures at the peak frequencies increased by acoustic resonance in the channel downstream from the constriction, and the different frequency characteristics between /s/ and /ʃ/ were produced by changing the constriction and the acoustic node positions inside the vocal tract.
Weather data for simplified energy calculation methods. Volume IV. United States: WYEC data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olsen, A.R.; Moreno, S.; Deringer, J.
The objective of this report is to provide a source of weather data for direct use with a number of simplified energy calculation methods available today. Complete weather data for a number of cities in the United States are provided for use in the following methods: degree hour, modified degree hour, bin, modified bin, and variable degree day. This report contains sets of weather data for 23 cities using Weather Year for Energy Calculations (WYEC) source weather data. Considerable overlap is present in cities (21) covered by both the TRY and WYEC data. The weather data at each city have been summarized in a number of ways to provide differing levels of detail necessary for alternative simplified energy calculation methods. Weather variables summarized include dry bulb and wet bulb temperature, percent relative humidity, humidity ratio, wind speed, percent possible sunshine, percent diffuse solar radiation, total solar radiation on horizontal and vertical surfaces, and solar heat gain through standard DSA glass. Monthly and annual summaries, in some cases by time of day, are available. These summaries are produced in a series of nine computer-generated tables.
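As a toy illustration of the simplest method in this family, the degree-hour approach reduces hourly temperatures to a single accumulated deficit below a balance-point temperature. The 65 °F base and the hourly values below are illustrative assumptions, not data from the report.

```python
# Conventional balance-point (base) temperature, deg F (an assumption
# for this sketch; methods differ in how the base is chosen).
BASE_F = 65.0

def heating_degree_hours(hourly_temps_f):
    """Sum of (base - T) over all hours with T below the base."""
    return sum(max(0.0, BASE_F - t) for t in hourly_temps_f)

# Illustrative 24-hour trace: cold night, mild afternoon (made-up data).
temps = [40, 38, 37, 36, 35, 35, 38, 44, 50, 56, 60, 63,
         66, 68, 68, 66, 62, 57, 52, 48, 46, 44, 42, 41]
dh = heating_degree_hours(temps)   # 376.0 degree-hours for this day

# A simplified seasonal estimate then scales this total by the building
# envelope conductance (UA) and divides by system efficiency.
```

The modified degree-hour and bin methods refine this by weighting hours (e.g. by occupancy period or by temperature bin) rather than summing uniformly.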
Conceptual Model Scenarios for the Vapor Intrusion Pathway
This report provides simplified simulation examples to illustrate graphically how subsurface conditions and building-specific characteristics determine the chemical distribution and indoor air concentration relative to a source concentration.
Transfer function analysis of thermospheric perturbations
NASA Technical Reports Server (NTRS)
Mayr, H. G.; Harris, I.; Varosi, F.; Herrero, F. A.; Spencer, N. W.
1986-01-01
Applying perturbation theory, a spectral model in terms of vector spherical harmonics (Legendre polynomials) is used to describe the short-term thermospheric perturbations originating in the auroral regions. The source may be Joule heating, particle precipitation, or ExB ion drift-momentum coupling. A multiconstituent atmosphere is considered, allowing for the collisional momentum exchange between species including Ar, O2, N2, O, He and H. The coupled equations of energy, mass and momentum conservation are solved simultaneously for the major species N2 and O. Applying homogeneous boundary conditions, the integration is carried out from the Earth's surface up to 700 km. In the analysis, the spherical harmonics are treated as eigenfunctions, assuming that the Earth's rotation (and prevailing circulation) do not significantly affect perturbations with periods which are typically much less than one day. Under these simplifying assumptions, and given a particular source distribution in the vertical, a two-dimensional transfer function is constructed to describe the three-dimensional response of the atmosphere. In the order of increasing horizontal wave numbers (order of polynomials), this transfer function reveals five components. To compile the transfer function, the numerical computations are very time consuming (about 100 hours on a VAX for one particular vertical source distribution). However, given the transfer function, the atmospheric response in space and time (using a Fourier integral representation) can be constructed in a few seconds of central processing unit time. This model is applied in a case study of wind and temperature measurements from Dynamics Explorer B, which show features characteristic of a ringlike excitation source in the auroral oval. The data can be interpreted as gravity waves which are focused (and amplified) in the polar region and then are reflected to propagate toward lower latitudes.
A simplified memory network model based on pattern formations
NASA Astrophysics Data System (ADS)
Xu, Kesheng; Zhang, Xiyun; Wang, Chaoqing; Liu, Zonghua
2014-12-01
Many experiments have demonstrated the transition, with different time scales, from short-term memory (STM) to long-term memory (LTM) in mammalian brains, while its theoretical understanding is still under debate. To understand the underlying mechanism, it has recently been shown that long-period rhythmic synchronous firing can occur in a scale-free network, provided that both high-degree hubs and loops formed by low-degree nodes exist. We here present a simplified memory network model to show that self-sustained synchronous firing can be observed even without these two necessary conditions. This simplified network consists of two loops of coupled excitable neurons with different synaptic conductances, with one node acting as the sensory neuron that receives an external stimulus signal. This model can further show how a diversity of firing patterns can be selectively formed by varying the signal frequency, the duration of the stimulus, and the network topology, corresponding to the patterns of STM and LTM with different time scales. A theoretical analysis is presented to explain the underlying mechanism of the firing patterns.
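As a toy illustration of self-sustained firing on a loop of excitable units (a discrete-state sketch under our own assumptions, not the coupled-ODE neuron model of the paper), a single stimulus injected into a directed ring keeps circulating, producing periodic firing with period equal to the loop length:

```python
# Minimal discrete excitable-loop sketch (illustrative only).
# States: 0 = rest, 1 = excited, 2 = refractory.  Excitation advances one
# cell per tick around a directed ring, so one external stimulus yields
# self-sustained periodic firing with period equal to the loop length.

def step(states):
    n = len(states)
    new = []
    for i in range(n):
        s = states[i]
        if s == 1:
            new.append(2)          # excited -> refractory
        elif s == 2:
            new.append(0)          # refractory -> rest
        else:
            # rest -> excited if the upstream neighbour fired last tick
            new.append(1 if states[(i - 1) % n] == 1 else 0)
    return new

def run(n_cells=10, n_steps=40):
    states = [0] * n_cells
    states[0] = 1                  # one external stimulus at the "sensory" node
    firing_times = {0: [0]}
    for t in range(1, n_steps + 1):
        states = step(states)
        for i, s in enumerate(states):
            if s == 1:
                firing_times.setdefault(i, []).append(t)
    return firing_times

times = run()
```

Here the firing period is set purely by the loop topology, loosely echoing how loop length can encode a memory time scale.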
Comparison between a typical and a simplified model for blast load-induced structural response
NASA Astrophysics Data System (ADS)
Abd-Elhamed, A.; Mahmoud, S.
2017-02-01
Explosive blasts continue to cause severe damage and casualties in both civil and military environments, so there is a pressing need to understand the behavior of structural elements under such extremely short-duration dynamic loads. Because of the complexity of the typical blast pressure profile model, and in order to reduce modelling and computational effort, the simplified triangular model for the blast load profile is used to analyze the structural response. This simplified model considers only the positive phase and ignores the suction phase that characterizes the typical profile. The closed-form solution of the equation of motion, with the blast load as a forcing term modelled by either the typical or the simplified profile, has been derived. The two approaches considered herein are compared using results from a response-analysis simulation of a building structure under an applied blast load, and the error in the response computed with the simplified model relative to the typical one is evaluated. In general, both the simplified and the typical model can reproduce the dynamic blast-induced response of building structures. However, the simplified model shows remarkably different response behavior compared with the typical one, despite its simplicity and its use of only the positive phase to represent the explosive load. The prediction of the dynamic system response using the simplified model is not satisfactory, owing to the larger errors obtained compared with the response from the typical model.
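A minimal sketch of the simplified triangular blast model applied to a linear single-degree-of-freedom structure illustrates the approach; all parameter values (m, c, k, F0, td) below are illustrative assumptions, not values from the paper:

```python
# Hedged sketch: linear SDOF response m*u'' + c*u' + k*u = F(t) to the
# simplified triangular blast pulse F(t) = F0*(1 - t/td) on [0, td]
# (positive phase only, no suction phase), via central-difference stepping.

def triangular_pulse(t, F0, td):
    return F0 * (1.0 - t / td) if 0.0 <= t <= td else 0.0

def sdof_response(m=1.0, c=0.05, k=100.0, F0=50.0, td=0.05, dt=1e-4, t_end=1.0):
    """Peak displacement magnitude, starting from rest."""
    n_steps = int(t_end / dt)
    u_prev = 0.0                                   # u at t = 0
    a0 = triangular_pulse(0.0, F0, td) / m         # u''(0) from at-rest ICs
    u = 0.5 * a0 * dt * dt                         # Taylor start: u at t = dt
    peak = abs(u)
    for i in range(1, n_steps):
        t = i * dt
        F = triangular_pulse(t, F0, td)
        a = (F - c * (u - u_prev) / dt - k * u) / m
        u_prev, u = u, 2.0 * u - u_prev + a * dt * dt
        peak = max(peak, abs(u))
    return peak

peak = sdof_response()
```

For this short pulse (td much smaller than the natural period), the peak is close to the impulse-response value I/(m*omega) with I = F0*td/2, a useful sanity check on the integration.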
Scalable Video Transmission Over Multi-Rate Multiple Access Channels
2007-06-01
“Rate-compatible punctured convolutional codes (RCPC codes) and their applications,” IEEE...source encoded using the MPEG-4 video codec. The source-encoded bitstream is then channel encoded with Rate-Compatible Punctured Convolutional (RCPC...Clark, and J. M. Geist, “Punctured convolutional codes of rate (n-1)/n and simplified maximum likelihood decoding,” IEEE Transactions on
Simplified power processing for ion-thruster subsystems
NASA Technical Reports Server (NTRS)
Wessel, F. J.; Hancock, D. J.
1983-01-01
Compared to chemical propulsion, ion propulsion offers distinct payload-mass increases for many future low-thrust Earth-orbital and deep-space missions. Despite this advantage, the high initial cost and complexity of ion-propulsion subsystems reduce their attractiveness for most present and near-term spacecraft missions. Investigations have therefore been conducted with the objective of simplifying the power-processing unit (PPU), the single most complex and expensive component in the thruster subsystem. The present investigation concerns a program to simplify the design of the PPU employed in an 8-cm mercury-ion-thruster subsystem. In this program, a dramatic simplification of the PPU design was achieved while retaining essential thruster control and subsystem operational flexibility.
Phan, Kevin; Xie, Ashleigh; Kumar, Narendra; Wong, Sophia; Medi, Caroline; La Meir, Mark; Yan, Tristan D
2015-08-01
Simplified maze procedures involving radiofrequency, cryoenergy and microwave energy sources have been increasingly utilized for the surgical treatment of atrial fibrillation as an alternative to the traditional cut-and-sew approach. In the absence of direct comparisons, a Bayesian network meta-analysis is an alternative means of assessing the relative effect of different treatments, using indirect evidence. A Bayesian meta-analysis of indirect evidence was performed using 16 published randomized trials identified from 6 databases. Rank probability analysis was used to rank each intervention in terms of its probability of having the best outcome. Sinus rhythm prevalence beyond the 12-month follow-up was similar between the cut-and-sew, microwave and radiofrequency approaches, all of which were ranked better than cryoablation (39, 36 and 25 vs 1%, respectively). The cut-and-sew maze was ranked worst in terms of mortality outcomes compared with microwave, radiofrequency and cryoenergy (2 vs 19, 34 and 24%, respectively). The cut-and-sew maze procedure was associated with significantly lower stroke rates compared with microwave ablation [odds ratio <0.01; 95% confidence interval 0.00, 0.82], and ranked best in terms of pacemaker requirements compared with microwave, radiofrequency and cryoenergy (81 vs 14, 1 and <0.01%, respectively). Bayesian rank probability analysis shows that the cut-and-sew approach is associated with the best outcomes in terms of sinus rhythm prevalence and stroke, and it remains the gold-standard approach for AF treatment. Given the limitations of indirect comparison analysis, these results should be viewed with caution and not over-interpreted. © The Author 2014. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
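The rank-probability idea can be sketched simply: given posterior draws of an outcome for each treatment, a treatment's rank probability of being best is the fraction of draws in which it has the highest value. The draws below are synthetic placeholders, not the meta-analysis posteriors:

```python
import random

# Illustrative rank-probability calculation over posterior draws of an
# outcome (e.g. 12-month sinus-rhythm rate) for each treatment.
# The synthetic means/SDs below are assumptions for demonstration only.

def rank_probabilities(draws):
    """draws: dict mapping treatment name -> list of posterior samples."""
    names = list(draws)
    n = len(next(iter(draws.values())))
    wins = {name: 0 for name in names}
    for i in range(n):
        best = max(names, key=lambda name: draws[name][i])
        wins[best] += 1
    return {name: wins[name] / n for name in names}

random.seed(0)
synthetic = {
    "cut-and-sew":    [random.gauss(0.80, 0.05) for _ in range(10000)],
    "radiofrequency": [random.gauss(0.78, 0.05) for _ in range(10000)],
    "cryoablation":   [random.gauss(0.65, 0.05) for _ in range(10000)],
}
probs = rank_probabilities(synthetic)
```

Because the treatments are compared draw-by-draw within one joint posterior sample, overlapping credible intervals still translate into a clean probability ranking.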
1981-06-05
source is a fairly limited outcrop of calcareous sandstone classified as dolomite rock (Do). Class RBIb Sources: Four basin-fill sources within the study...Paleozoic rocks consist of limestone, dolomite, and quartzite with interbedded sandstone and shale. These units are generally exposed along the northern...categories simplify discussion and presentation without altering the conclusions of the study. 2.2.1 Rock Units: Dolomite rocks (Do) and carbonate rocks
NASA Astrophysics Data System (ADS)
Trugman, Daniel Taylor
The complexity of the earthquake rupture process makes earthquakes inherently unpredictable. Seismic hazard forecasts often presume that the rate of earthquake occurrence can be adequately modeled as a space-time homogeneous or stationary Poisson process and that the relation between the dynamical source properties of small and large earthquakes obeys self-similar scaling relations. While these simplified models provide useful approximations and encapsulate the first-order statistical features of the historical seismic record, they are inconsistent with the complexity underlying earthquake occurrence and can lead to misleading assessments of seismic hazard when applied in practice. The six principal chapters of this thesis explore the extent to which the behavior of real earthquakes deviates from these simplified models, and the implications that the observed deviations have for our understanding of earthquake rupture processes and seismic hazard. Chapter 1 provides a brief thematic overview and introduction to the scope of this thesis. Chapter 2 examines the complexity of the 2010 M7.2 El Mayor-Cucapah earthquake, focusing on the relation between its unexpected and unprecedented occurrence and anthropogenic stresses from the nearby Cerro Prieto Geothermal Field. Chapter 3 compares long-term changes in seismicity within California's three largest geothermal fields in an effort to characterize the relative influence of natural and anthropogenic stress transients on local seismic hazard. Chapter 4 describes a hybrid, hierarchical clustering algorithm that can be used to relocate earthquakes using waveform cross-correlation, and applies the new algorithm to study the spatiotemporal evolution of two recent seismic swarms in western Nevada.
Chapter 5 describes a new spectral decomposition technique that can be used to analyze the dynamic source properties of large datasets of earthquakes, and applies this approach to revisit the question of self-similar scaling of southern California seismicity. Chapter 6 builds upon these results and applies the same spectral decomposition technique to examine the source properties of several thousand recent earthquakes in southern Kansas that are likely human-induced by massive oil and gas operations in the region. Chapter 7 studies the connection between source spectral properties and earthquake hazard, focusing on spatial variations in dynamic stress drop and its influence on ground motion amplitudes. Finally, Chapter 8 provides a summary of the key findings of and relations between these studies, and outlines potential avenues of future research.
CACTI: Free, Open-Source Software for the Sequential Coding of Behavioral Interactions
Glynn, Lisa H.; Hallgren, Kevin A.; Houck, Jon M.; Moyers, Theresa B.
2012-01-01
The sequential analysis of client and clinician speech in psychotherapy sessions can help to identify and characterize potential mechanisms of treatment and behavior change. Previous studies required coding systems that were time-consuming, expensive, and error-prone. Existing software can be expensive and inflexible, and furthermore, no single package allows for pre-parsing, sequential coding, and assignment of global ratings. We developed a free, open-source, and adaptable program to meet these needs: The CASAA Application for Coding Treatment Interactions (CACTI). Without transcripts, CACTI facilitates the real-time sequential coding of behavioral interactions using WAV-format audio files. Most elements of the interface are user-modifiable through a simple XML file, and can be further adapted using Java through the terms of the GNU Public License. Coding with this software yields interrater reliabilities comparable to previous methods, but at greatly reduced time and expense. CACTI is a flexible research tool that can simplify psychotherapy process research, and has the potential to contribute to the improvement of treatment content and delivery. PMID:22815713
Weisshaar, D.; Bazin, D.; Bender, P. C.; ...
2016-12-03
The gamma-ray tracking array GRETINA was coupled to the S800 magnetic spectrometer for spectroscopy with fast beams of rare isotopes at the National Superconducting Cyclotron Laboratory on the campus of Michigan State University. We describe the technical details of this powerful setup and report on GRETINA's performance achieved with source and in-beam measurements. The gamma-ray multiplicity encountered in experiments with fast beams is usually low, allowing for a simplified and efficient treatment of the data in the gamma-ray analysis in terms of Doppler reconstruction and spectral quality. Finally, the results reported in this work were obtained from GRETINA consisting of 8 detector modules hosting four high-purity germanium crystals each. Currently, GRETINA consists of 10 detector modules.
NASA Astrophysics Data System (ADS)
Limbach, P.; Müller, T.; Skoda, R.
2015-12-01
Commonly, incompressible flow solvers with VOF-type cavitation models are applied for the simulation of cavitation in centrifugal pumps. Since the source/sink terms of the void-fraction transport equation are based on simplified bubble dynamics, empirical parameters may need to be adjusted to the particular pump operating point. In the present study, a barotropic cavitation model, which is based solely on thermodynamic fluid properties and does not include any empirical parameters, is applied to a single flow channel of a pump impeller in combination with a time-explicit viscous compressible flow solver. The suction head curves (head drop) are compared with the results of an incompressible implicit standard industrial CFD tool and are predicted qualitatively correctly by the barotropic model.
Correlation of recent fission product release data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kress, T.S.; Lorenz, R.A.; Nakamura, T.
For the calculation of source terms associated with severe accidents, it is necessary to model the release of fission products from fuel as it heats and melts. Perhaps the most definitive model for fission product release is that of the FASTGRASS computer code developed at Argonne National Laboratory. There is persuasive evidence that these processes, as well as additional chemical and gas-phase mass transport processes, are important in the release of fission products from fuel. Nevertheless, it has been found convenient to have simplified fission product release correlations that may not be as definitive as models like FASTGRASS but which attempt in some simple way to capture the essence of the mechanisms. One of the most widely used of these correlations is CORSOR-M, the present fission product/aerosol release model used in the NRC Source Term Code Package. CORSOR has been criticized as having too much uncertainty in the calculated releases and as not accurately reproducing some experimental data. It is currently believed that these discrepancies between CORSOR and the more recent data have arisen because the more recent data have better time resolution than the data base that went into the CORSOR correlation. This document discusses a simple correlational model for use in connection with NUREG risk uncertainty exercises. 8 refs., 4 figs., 1 tab.
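A CORSOR-style correlation can be sketched as an Arrhenius fractional release rate integrated over the fuel temperature history; the coefficients k0 and Q below are illustrative placeholders, not evaluated CORSOR-M values:

```python
import math

# Sketch of a CORSOR-M-style release correlation: fractional release rate
#   k(T) = k0 * exp(-Q / (R * T)),
# with the released fraction evolving as df/dt = k(T(t)) * (1 - f).
# k0 and Q are placeholder values for illustration only.

R = 8.314e-3     # gas constant, kJ/(mol K)

def release_rate(T, k0=2.0e5, Q=300.0):
    """Fractional release rate (1/min) at fuel temperature T (K)."""
    return k0 * math.exp(-Q / (R * T))

def fraction_released(temps, dt=1.0):
    """Forward-Euler integration of df/dt = k(T)(1 - f) over a temperature
    history sampled every dt minutes."""
    f = 0.0
    for T in temps:
        f += release_rate(T) * (1.0 - f) * dt
        f = min(f, 1.0)
    return f

# illustrative monotone heat-up from 1500 K to 2500 K over 100 minutes
history = [1500.0 + 10.0 * i for i in range(101)]
f_total = fraction_released(history)
```

The exponential temperature sensitivity is the point: almost all of the release accumulates near the peak of the temperature transient, which is why time resolution of the underlying data matters for fitting such correlations.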
Flow of GE90 Turbofan Engine Simulated
NASA Technical Reports Server (NTRS)
Veres, Joseph P.
1999-01-01
The objective of this task was to create and validate a three-dimensional model of the GE90 turbofan engine (General Electric) using the APNASA (average passage) flow code. This was a joint effort between GE Aircraft Engines and the NASA Lewis Research Center. The goal was to perform an aerodynamic analysis of the engine primary flow path, in under 24 hours of CPU time, on a parallel distributed workstation system. Enhancements were made to the APNASA Navier-Stokes code to make it faster and more robust and to allow for the analysis of more arbitrary geometry. The resulting simulation exploited the use of parallel computations by using two levels of parallelism, with extremely high efficiency. The primary flow path of the GE90 turbofan consists of a nacelle and inlet, 49 blade rows of turbomachinery, and an exhaust nozzle. Secondary flows entering and exiting the primary flow path (such as bleed, purge, and cooling flows) were modeled macroscopically as source terms to accurately simulate the engine. The information on these source terms came from detailed descriptions of the cooling flow and from thermodynamic cycle system simulations. These provided boundary condition data to the three-dimensional analysis. A simplified combustor was used to feed boundary conditions to the turbomachinery. Flow simulations of the fan, high-pressure compressor, and high- and low-pressure turbines were completed with the APNASA code.
NASA Astrophysics Data System (ADS)
Ba, Yan; Liu, Haihu; Li, Qing; Kang, Qinjun; Sun, Jinju
2016-08-01
In this paper we propose a color-gradient lattice Boltzmann (LB) model for simulating two-phase flows with high density ratio and high Reynolds number. The model applies a multirelaxation-time (MRT) collision operator to enhance the stability of the simulation. A source term, which is derived by the Chapman-Enskog analysis, is added into the MRT LB equation so that the Navier-Stokes equations can be exactly recovered. Also, a form of the equilibrium density distribution function is used to simplify the source term. To validate the proposed model, steady flows of a static droplet and the layered channel flow are first simulated with density ratios up to 1000. Small values of spurious velocities and interfacial tension errors are found in the static droplet test, and improved profiles of velocity are obtained by the present model in simulating channel flows. Then, two cases of unsteady flows, Rayleigh-Taylor instability and droplet splashing on a thin film, are simulated. In the former case, the density ratio of 3 and Reynolds numbers of 256 and 2048 are considered. The interface shapes and spike and bubble positions are in good agreement with the results of previous studies. In the latter case, the droplet spreading radius is found to obey the power law proposed in previous studies for the density ratio of 100 and Reynolds number up to 500.
Murphy, D A; O'Keefe, Z H; Kaufman, A H
1999-10-01
A simplified version of the prototype HIV vaccine material was developed through (a) reducing reading grade level, (b) restructuring of the organization and categorization of the material, (c) adding pictures designed to emphasize key concepts, and (d) obtaining feedback on the simplified version through focus groups with the target population. Low-income women at risk for HIV (N = 141) recruited from a primary care clinic were randomly assigned to be presented the standard or the simplified version. There were no significant differences between the groups in terms of education or Vocabulary, Block Design, and Passage Comprehension scores. Women who received the simplified version had significantly higher comprehension scores immediately following presentation of the material than did women who received the standard version and were also significantly more likely to recall study benefits and risks. These findings were maintained at 3-month follow-up. Implications for informed consent are discussed.
An Evolving Worldview: Making Open Source Easy
NASA Technical Reports Server (NTRS)
Rice, Zachary
2017-01-01
NASA Worldview is an interactive interface for browsing full-resolution, global satellite imagery. Worldview supports an open data policy so that academia, private industry and the general public can use NASA's satellite data to address Earth-science issues. Worldview was open sourced in 2014. By shifting to an open-source approach, the Worldview application has evolved to better serve end users. Project developers are able to have discussions with end users and community developers to understand issues and develop new features. New developers are able to track upcoming features, collaborate on them and make their own contributions. Getting new developers to contribute to the project has been one of the most important and difficult aspects of open sourcing Worldview. A focus has been placed on making the installation of Worldview simple, to reduce the initial learning curve and make contributing code easy. One way we have addressed this is through a simplified setup process: our setup documentation includes a set of prerequisites and a set of straightforward commands to clone, configure, install and run. This presentation will emphasize our focus on simplifying and standardizing Worldview's open-source code so that more people are able to contribute. The more people who contribute, the better the application will become over time.
United States Navy Contracting Officer Warranting Process
2011-03-01
by 30% or more of the respondents: Contract Law, Cost Analysis, Market Research, Contract Source Selection, Simplified Acquisition Procedures, and...that the majority of AOs found the following courses at least somewhat important: Contract Law, Cost Analysis, Market Research, Contract Source...the budget and appropriation cycle, 4. Ethics and conduct standards, 5. Basic contract laws and regulations, 6. Socio-economic requirements in
Development of the ICD-10 simplified version and field test.
Paoin, Wansa; Yuenyongsuwan, Maliwan; Yokobori, Yukiko; Endo, Hiroyoshi; Kim, Sukil
2018-05-01
The International Statistical Classification of Diseases and Related Health Problems, 10th Revision (ICD-10) has been used in various Asia-Pacific countries for more than 20 years. Although ICD-10 is a powerful tool, clinical coding processes are complex; therefore, many developing countries have not been able to implement ICD-10-based health statistics (WHO-FIC APN, 2007). This study aimed to simplify ICD-10 clinical coding processes, to modify index terms to facilitate computer searching, and to provide a simplified version of ICD-10 for use in developing countries. The World Health Organization Family of International Classifications Asia-Pacific Network (APN) developed a simplified version of ICD-10 and conducted field testing in Cambodia during February and March 2016. Ten hospitals were selected to participate. Each hospital sent a team to join a training workshop before using the ICD-10 simplified version to code 100 cases. All hospitals subsequently sent their coded records to the researchers. Overall, there were 1038 coded records with a total of 1099 ICD clinical codes assigned. The average accuracy rate was 80.71% (range 66.67-93.41%). Three types of clinical coding errors were found: errors relating to the coder (14.56%), errors resulting from physician documentation (1.27%), and system errors (3.46%). The field-trial results demonstrated that the APN ICD-10 simplified version is feasible to implement and is an effective tool for ICD-10 clinical coding in hospitals. Developing countries may consider adopting the APN ICD-10 simplified version for ICD-10 code assignment in hospitals and health care centres. The simplified version can be viewed as an introductory tool that leads to implementation of the full ICD-10 and may support subsequent ICD-11 adoption.
Modeling the nonlinear hysteretic response in DAE experiments of Berea sandstone: A case-study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pecorari, Claudio, E-mail: claudio.pecorari@hotmail.com
2015-03-31
Dynamic acousto-elasticity (DAE) allows probing the instantaneous state of a material while the latter is slowly and periodically changed by an external, dynamic source. In DAE investigations of geo-materials, hysteresis of the material's modulus defect displays intriguing features which have not yet been interpreted in terms of any specific mechanism occurring at the atomic scale or mesoscale. Here, experimental results on dry Berea sandstone, the rock type best investigated by means of a DAE technique, are analyzed in terms of three rheological models providing simplified representations of mechanisms involving dislocations interacting with point defects distributed along the dislocations' core or glide planes, and microcracks with finite stiffness in compression. Constitutive relations linking macroscopic strain and stress are derived. From the latter, the modulus defect associated with each mechanism is recovered. These models are employed to construct a composite model which is capable of reproducing several of the main features observed in the experimental data. The limitations of the present approach and, possibly, of the current implementation of DAE are discussed.
Evaluating bacterial gene-finding HMM structures as probabilistic logic programs.
Mørk, Søren; Holmes, Ian
2012-03-01
Probabilistic logic programming offers a powerful way to describe and evaluate structured statistical models. To investigate the practicality of probabilistic logic programming for structure learning in bioinformatics, we undertook a simplified bacterial gene-finding benchmark in PRISM, a probabilistic dialect of Prolog. We evaluate Hidden Markov Model structures for bacterial protein-coding gene potential, including a simple null model structure, three structures based on existing bacterial gene finders and two novel model structures. We test standard versions as well as ADPH length-modeling and three-state versions of the five model structures. The models are all represented as probabilistic logic programs and evaluated using the PRISM machine learning system in terms of statistical information criteria and gene-finding prediction accuracy, in two bacterial genomes. Neither of our implementations of the two currently most used model structures is the best performing in terms of statistical information criteria or prediction performance, suggesting that better-fitting models might be achievable. The source code of all PRISM models, data and additional scripts are freely available for download at: http://github.com/somork/codonhmm. Supplementary data are available at Bioinformatics online.
Large Eddy Simulation of Bubbly Ship Wakes
2005-08-01
[Discretized momentum equations (2.28) and (2.29) garbled in extraction] where Ci and Ei represent the convective terms, Bi is the discrete operator for the pressure gradient term, and DE and DI (FE and FI) are discrete operators for the explicitly and implicitly treated off-diagonal terms. The...Adams-Bashforth scheme is employed for all the other terms. The off-diagonal viscous terms (DE) are treated explicitly in order to simplify the LHS matrix of the
Simplified analysis and optimization of space base and space shuttle heat rejection systems
NASA Technical Reports Server (NTRS)
Wulff, W.
1972-01-01
A simplified radiator system analysis was performed to predict steady state radiator system performance. The system performance was found to be describable in terms of five non-dimensional system parameters. The governing differential equations are integrated numerically to yield the enthalpy rejection for the coolant fluid. The simplified analysis was extended to produce the derivatives of the coolant exit temperature with respect to the governing system parameters. A procedure was developed to find the optimum set of system parameters which yields the lowest possible coolant exit temperature for either a given projected area or a given total mass. The process can be inverted to yield either the minimum area or the minimum mass, together with the optimum geometry, for a specified heat rejection rate.
Redesigned Electron-Beam Furnace Boosts Productivity
NASA Technical Reports Server (NTRS)
Williams, Gary A.
1995-01-01
The redesigned electron-beam furnace features a carousel of greater capacity, so that more experiments can be conducted per loading and the time spent on reloading and vacuum pump-down is reduced. A common mounting plate for the electron source and carousel simplifies installation and reduces vibration.
Simplified combustion noise theory yielding a prediction of fluctuating pressure level
NASA Technical Reports Server (NTRS)
Huff, R. G.
1984-01-01
The first-order equations for the conservation of mass and momentum in differential form are combined for an ideal gas to yield a single second-order partial differential equation in one dimension and time. Small-perturbation analysis is applied. A Fourier transformation is performed that results in a second-order, constant-coefficient, nonhomogeneous equation. The driving function is taken to be the source of combustion noise. A simplified model describing the energy addition via the combustion process gives the required source information for substitution in the driving function. This enables the particular-integral solution of the nonhomogeneous equation to be found. This solution, multiplied by the acoustic pressure efficiency, predicts the acoustic pressure spectrum measured in turbine engine combustors. The prediction was compared with the overall sound pressure levels measured in a CF6-50 turbofan engine combustor and found to be in excellent agreement.
NASA Astrophysics Data System (ADS)
Iwaki, Sunao; Ueno, Shoogo
1998-06-01
The weighted minimum-norm estimation (wMNE) is a popular method to obtain the source distribution in the human brain from magneto- and electro-encephalographic measurements when detailed information about the generator profile is not available. We propose a method to reconstruct current distributions in the human brain based on the wMNE technique, with the weighting factors defined by a simplified multiple signal classification (MUSIC) prescan. In this method, in addition to the conventional depth-normalization technique, the weighting factors of the wMNE were determined from cost values previously calculated by a simplified MUSIC scan, which contains the temporal information of the measured data. We performed computer simulations of this method and compared it with the conventional wMNE method. The results show that the proposed method is effective for the reconstruction of current distributions from noisy data.
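The wMNE solution can be sketched as j = W G^T (G W G^T + lam*I)^(-1) b, where G is the lead field, W a diagonal weighting matrix (in the proposed method its entries would combine depth normalization with the simplified-MUSIC cost values), and b the measurement vector. The toy 2-sensor / 3-source numbers below are illustrative only:

```python
# Pure-Python wMNE sketch on a toy lead field (illustrative assumptions).

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def solve(A, rhs):
    """Gaussian elimination with partial pivoting; A is n x n, rhs length n."""
    n = len(A)
    M = [row[:] + [rhs[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def wmne(G, w, b, lam=1e-3):
    """Weighted minimum-norm estimate: W G^T (G W G^T + lam I)^-1 b.
    w holds the diagonal of W (one weight per source)."""
    WGt = [[w[i] * G[j][i] for j in range(len(G))] for i in range(len(w))]
    GWGt = matmul(G, WGt)
    for i in range(len(GWGt)):
        GWGt[i][i] += lam                       # Tikhonov regularization
    y = solve(GWGt, list(b))
    return [sum(WGt[i][j] * y[j] for j in range(len(y))) for i in range(len(w))]

G = [[1.0, 0.5, 0.2],
     [0.2, 0.5, 1.0]]        # 2 sensors x 3 sources (toy lead field)
b = [1.0, 0.2]               # measurements
j_hat = wmne(G, [1.0, 1.0, 1.0], b)
```

Raising a source's weight in w biases the inverse toward placing current there, which is exactly the lever the MUSIC-derived weights would pull.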
RANS modeling of scalar dispersion from localized sources within a simplified urban-area model
NASA Astrophysics Data System (ADS)
Rossi, Riccardo; Capra, Stefano; Iaccarino, Gianluca
2011-11-01
The dispersion of a passive scalar downstream of a localized source within a simplified urban-like geometry is examined by means of RANS scalar-flux models. The computations are conducted under conditions of neutral stability and for three different incoming wind directions (0°, 45°, 90°) at a roughness Reynolds number of Ret = 391. A Reynolds stress transport model is used to close the flow governing equations, whereas both the standard eddy-diffusivity closure and algebraic flux models are employed to close the transport equation for the passive scalar. The comparison with a DNS database shows that algebraic scalar-flux models are more reliable in predicting both the mean concentration and the plume structure. Since algebraic flux models do not substantially increase the computational effort, the results indicate that the use of tensorial diffusivity can be a promising tool for dispersion simulations in the urban environment.
Multi-Fidelity Uncertainty Propagation for Cardiovascular Modeling
NASA Astrophysics Data System (ADS)
Fleeter, Casey; Geraci, Gianluca; Schiavazzi, Daniele; Kahn, Andrew; Marsden, Alison
2017-11-01
Hemodynamic models are successfully employed in the diagnosis and treatment of cardiovascular disease with increasing frequency. However, their widespread adoption is hindered by our inability to account for uncertainty stemming from multiple sources, including boundary conditions, vessel material properties, and model geometry. In this study, we propose a stochastic framework which leverages three cardiovascular model fidelities: 3D, 1D and 0D models. 3D models are generated from patient-specific medical imaging (CT and MRI) of aortic and coronary anatomies using the SimVascular open-source platform, with fluid structure interaction simulations and Windkessel boundary conditions. 1D models consist of a simplified geometry automatically extracted from the 3D model, while 0D models are obtained from equivalent circuit representations of blood flow in deformable vessels. Multi-level and multi-fidelity estimators from Sandia's open-source DAKOTA toolkit are leveraged to reduce the variance in our estimated output quantities of interest while maintaining a reasonable computational cost. The performance of these estimators in terms of computational cost reductions is investigated for a variety of output quantities of interest, including global and local hemodynamic indicators. Sandia National Labs is a multimission laboratory managed and operated by NTESS, LLC, for the U.S. DOE under contract DE-NA0003525. Funding for this project provided by NIH-NIBIB R01 EB018302.
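A two-fidelity control-variate estimator of the kind used in such frameworks can be sketched with synthetic stand-ins for the expensive and cheap models; the functions below are assumptions for illustration, not SimVascular or DAKOTA output:

```python
import math
import random

# Control-variate sketch: a few expensive "high-fidelity" solves correct the
# mean of many cheap "low-fidelity" solves, reducing variance at fixed cost.
# Both model functions are synthetic and chosen to be strongly correlated.

def f_hi(x):
    """Expensive high-fidelity model of the quantity of interest (synthetic)."""
    return math.sin(x) + 0.1 * x * x

def f_lo(x):
    """Cheap correlated low-fidelity surrogate (sin x ~ x near 0)."""
    return x + 0.1 * x * x

def mean(v):
    return sum(v) / len(v)

random.seed(1)
xs = [random.gauss(0.0, 0.5) for _ in range(20000)]   # uncertain input samples
n_hi = 200                                            # affordable hi-fi budget

# multifidelity estimate: hi-fi correction term + lo-fi mean over all samples
q_mf = mean([f_hi(x) - f_lo(x) for x in xs[:n_hi]]) + mean([f_lo(x) for x in xs])
q_mc = mean([f_hi(x) for x in xs[:n_hi]])             # plain MC, same hi-fi cost
# for x ~ N(0, 0.5), the true E[f_hi] is 0.1 * Var(x) = 0.025
```

Because f_hi - f_lo has much smaller variance than f_hi itself, the correction term converges with few expensive samples, mirroring how 0D/1D surrogates offload work from the 3D model.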
Indirect current control with separate IZ drop compensation for voltage source converters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kanetkar, V.R.; Dawande, M.S.; Dubey, G.K.
1995-12-31
Indirect Current Control (ICC) of boost-type Voltage Source Converters (VSCs) using separate compensation of the line IZ voltage drop is presented. A separate bi-directional VSC is used to produce the compensation voltage. This simplifies the ICC regulator scheme, as the power flow is controlled through a single modulation index. Experimental verification is provided for bi-directional control of the power flow.
Chandan, Sanjay; Halli, Rajshekhar; Joshi, Samir; Chhabaria, Gaurav; Setiya, Sneha
2013-11-01
Management of pediatric mandibular fractures presents a unique challenge to surgeons in terms of its numerous variations compared to adults. Both conservative and open methods have been advocated with their obvious limitations and complications. However, conservative modalities may not be possible in grossly displaced fractures, which necessitate the open method of fixation. We present a novel and simplified technique of transosseous fixation of displaced pediatric mandibular fractures with polyglactin resorbable suture, which provides adequate stability without any interference with tooth buds and which is easy to master.
Glossary of Seed Germination Terms for Tree Seed Workers
F.T. Bonner
1984-01-01
This glossary is designed for scientists and technicians concerned with testing or research with forest tree seeds. Definitions have been simplified as much as possible without sacrificing their technical meanings.
NASA Astrophysics Data System (ADS)
Nozu, A.
2013-12-01
A new simplified source model is proposed to explain strong ground motions from a mega-thrust earthquake. The proposed model is simpler, and involves fewer model parameters, than the conventional characterized source model, which is itself a simplified expression of the actual earthquake source. In the proposed model, the spatio-temporal distribution of slip within a subevent is not modeled. Instead, the source spectrum associated with the rupture of a subevent is modeled and is assumed to follow the omega-square model. By multiplying the source spectrum by the path effect and the site amplification factor, the Fourier amplitude at a target site can be obtained. Then, combining it with the Fourier phase characteristics of a smaller event, the time history of strong ground motions from the subevent can be calculated. Finally, by summing up contributions from the subevents, strong ground motions from the entire rupture can be obtained. The source model consists of six parameters for each subevent, namely, longitude, latitude, depth, rupture time, seismic moment, and corner frequency of the subevent. The finite size of the subevent can be taken into account in the model because the corner frequency of the subevent, which is inversely proportional to the length of the subevent, is included in the model. Thus, the proposed model is referred to as the 'pseudo point-source model'. To examine the applicability of the model, a pseudo point-source model was developed for the 2011 Tohoku earthquake. The model comprises nine subevents, located off Miyagi Prefecture through Ibaraki Prefecture. The velocity waveforms (0.2-1 Hz), the velocity envelopes (0.2-10 Hz), and the Fourier spectra (0.2-10 Hz) at 15 sites calculated with the pseudo point-source model agree well with the observed ones, indicating the applicability of the model.
Then the results were compared with those of a super-asperity (SPGA) model of the same earthquake (Nozu, 2012, AGU), which can be considered an example of a characterized source model. Although the pseudo point-source model involves far fewer model parameters than the super-asperity model, the errors associated with the former were comparable to those for the latter for velocity waveforms and envelopes. Furthermore, the errors associated with the former were much smaller than those for the latter for Fourier spectra. These results indicate the usefulness of the pseudo point-source model. Figure: comparison of the observed (black) and synthetic (red) Fourier spectra; the spectra are the composition of the two horizontal components, smoothed with a Parzen window with a band width of 0.05 Hz.
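The synthesis recipe described in the abstract (omega-square subevent spectrum, multiplied by path effect and site amplification, combined with a smaller event's Fourier phase, then summed over subevents) can be sketched as follows. All subevent parameters and the small-event record below are toy values, not those of the Tohoku model:

```python
import numpy as np

def omega_square_spectrum(f, m0, fc):
    # far-field omega-square source displacement spectrum: ~ M0 / (1 + (f/fc)^2)
    return m0 / (1.0 + (f / fc) ** 2)

def synthesize(subevents, small_event_wave, dt):
    n = len(small_event_wave)
    f = np.fft.rfftfreq(n, dt)
    # Fourier phase taken from a smaller event recorded at the target site
    phase = np.angle(np.fft.rfft(small_event_wave))
    total = np.zeros(n)
    for ev in subevents:
        amp = omega_square_spectrum(f, ev["m0"], ev["fc"])
        amp *= ev["path"] * ev["site"]        # path effect and site amplification
        wave = np.fft.irfft(amp * np.exp(1j * phase), n)
        shift = int(round(ev["rupture_time"] / dt))
        # delay each subevent by its rupture time (np.roll wraps around;
        # acceptable for a sketch, not for production use)
        total += np.roll(wave, shift)
    return total

dt = 0.01
t = np.arange(4096) * dt
small = np.exp(-t) * np.sin(2 * np.pi * 1.0 * t)   # toy small-event record
subevents = [
    {"m0": 1.0, "fc": 0.5, "path": 1.0, "site": 1.0, "rupture_time": 0.0},
    {"m0": 2.0, "fc": 0.3, "path": 0.8, "site": 1.2, "rupture_time": 5.0},
]
motion = synthesize(subevents, small, dt)
print(motion.shape)
```

Each subevent contributes six scalars, matching the parameter count quoted in the abstract (location, depth, rupture time, seismic moment, corner frequency), which is what makes the model so much leaner than a characterized source model with explicit slip distributions.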
The Internet: A Primer. House Research Information Brief.
ERIC Educational Resources Information Center
Dalton, Pat
This paper, one in a series of information briefs related to the Internet and taxation, contains a simplified overview of the Internet and a glossary of terms that are commonly encountered in discussing the Internet. Terms that are included in this glossary are italicized when they are used elsewhere in the paper. A series of questions are asked…
NASA Technical Reports Server (NTRS)
Barth, Timothy; Saini, Subhash (Technical Monitor)
1999-01-01
This talk considers simplified finite element discretization techniques for first-order systems of conservation laws equipped with a convex (entropy) extension. Using newly developed techniques in entropy symmetrization theory, simplified forms of the Galerkin least-squares (GLS) and discontinuous Galerkin (DG) finite element methods have been developed and analyzed. The use of symmetrization variables yields numerical schemes that inherit the global entropy stability properties of the PDE system. Central to the development of the simplified GLS and DG methods is the Eigenvalue Scaling Theorem, which characterizes right symmetrizers of an arbitrary first-order hyperbolic system in terms of scaled eigenvectors of the corresponding flux Jacobian matrices. A constructive proof is provided for the Eigenvalue Scaling Theorem, with detailed consideration given to the Euler, Navier-Stokes, and magnetohydrodynamic (MHD) equations. Linear and nonlinear energy stability is proven for the simplified GLS and DG methods. Spatial convergence properties of the simplified GLS and DG methods are numerically evaluated via the computation of Ringleb flow on a sequence of successively refined triangulations. Finally, we consider a posteriori error estimates for the GLS and DG discretizations, assuming error functionals related to the integrated lift and drag of a body. Sample calculations in 2D are shown to validate the theory and implementation.
Analysis of Multivariate Experimental Data Using A Simplified Regression Model Search Algorithm
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert M.
2013-01-01
A new regression model search algorithm was developed that may be applied to both general multivariate experimental data sets and wind tunnel strain-gage balance calibration data. The algorithm is a simplified version of a more complex algorithm that was originally developed for the NASA Ames Balance Calibration Laboratory. The new algorithm performs regression model term reduction to prevent overfitting of data. It has the advantage that it needs only about one tenth of the original algorithm's CPU time for the completion of a regression model search. In addition, extensive testing showed that the prediction accuracy of math models obtained from the simplified algorithm is similar to the prediction accuracy of math models obtained from the original algorithm. The simplified algorithm, however, cannot guarantee that search constraints related to a set of statistical quality requirements are always satisfied in the optimized regression model. Therefore, the simplified algorithm is not intended to replace the original algorithm. Instead, it may be used to generate an alternate optimized regression model of experimental data whenever the application of the original search algorithm fails or requires too much CPU time. Data from a machine calibration of NASA's MK40 force balance is used to illustrate the application of the new search algorithm.
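A minimal sketch of regression model term reduction in the spirit described above is greedy backward elimination under an information criterion. A BIC-based criterion is assumed here purely for illustration; the actual algorithm's search constraints and statistical quality requirements are not reproduced:

```python
import numpy as np

def bic(y, X):
    # Bayesian information criterion for an ordinary least-squares fit
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    n, k = X.shape
    return n * np.log(rss / n) + k * np.log(n)

def backward_search(y, X, names):
    # greedily drop the regression term whose removal most improves BIC;
    # stopping when no single removal helps prevents overfitting the data
    keep = list(range(X.shape[1]))
    best = bic(y, X[:, keep])
    improved = True
    while improved and len(keep) > 1:
        improved = False
        for j in list(keep):
            trial = [c for c in keep if c != j]
            score = bic(y, X[:, trial])
            if score < best:
                best, keep, improved = score, trial, True
                break
    return [names[c] for c in keep]

# toy calibration-style data: true model has an intercept, x1, and x1*x2
rng = np.random.default_rng(1)
x1, x2 = rng.normal(size=(2, 120))
y = 2.0 + 1.5 * x1 + 0.5 * x1 * x2 + rng.normal(scale=0.1, size=120)
X = np.column_stack([np.ones(120), x1, x2, x1 * x2, x1**2])
print(backward_search(y, X, ["1", "x1", "x2", "x1*x2", "x1^2"]))
```

A search like this trades exhaustiveness for speed, which mirrors the abstract's point: the simplified algorithm cannot guarantee every quality constraint is met, but it is far cheaper than the full search.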
A Simplified Theory of Coupled Oscillator Array Phase Control
NASA Technical Reports Server (NTRS)
Pogorzelski, R. J.; York, R. A.
1997-01-01
Linear and planar arrays of coupled oscillators have been proposed as means of achieving high power rf sources through coherent spatial power combining. In such applications, a uniform phase distribution over the aperture is desired. However, it has been shown that by detuning some of the oscillators away from the oscillation frequency of the ensemble of oscillators, one may achieve other useful aperture phase distributions. Notable among these are linear phase distributions resulting in steering of the output rf beam away from the broadside direction. The theory describing the operation of such arrays of coupled oscillators is quite complicated since the phenomena involved are inherently nonlinear. This has made it difficult to develop an intuitive understanding of the impact of oscillator tuning on phase control and has thus impeded practical application. In this work a simplified theory is developed which facilitates intuitive understanding by establishing an analog of the phase control problem in terms of electrostatics.
Analysis of energy recovery potential using innovative technologies of waste gasification.
Lombardi, Lidia; Carnevale, Ennio; Corti, Andrea
2012-04-01
In this paper, two alternative thermo-chemical processes for waste treatment were analysed: high temperature gasification and gasification associated to plasma process. The two processes were analysed from the thermodynamic point of view, trying to reconstruct two simplified models, using appropriate simulation tools and some support data from existing/planned plants, able to predict the energy recovery performances by process application. In order to carry out a comparative analysis, the same waste stream input was considered as input to the two models and the generated results were compared. The performances were compared with those that can be obtained from conventional combustion with energy recovery process by means of steam turbine cycle. Results are reported in terms of energy recovery performance indicators as overall energy efficiency, specific energy production per unit of mass of entering waste, primary energy source savings, specific carbon dioxide production. Copyright © 2011 Elsevier Ltd. All rights reserved.
Four-Wave-Mixing Oscillations in a simplified Boltzmannian semiconductor model with LO-phonons
NASA Astrophysics Data System (ADS)
Tamborenea, P. I.; Bányai, L.; Haug, H.
1996-03-01
The recently discovered oscillations of the integrated four-wave-mixing signal in semiconductors due to electron-LO-phonon scattering (L. Bányai, D. B. Tran Thoai, E. Reitsamer, H. Haug, D. Steinbach, M. U. Wehner, M. Wegener, T. Marschner and W. Stolz, Phys. Rev. Lett. 75, 2188 (1995)) are studied within a simplified Boltzmann-type model. Although several aspects of the experimental results require a description within the framework of non-Markovian quantum-kinetic theory, our simplified Boltzmannian model is well suited to analyze the origin of the observed novel oscillations of frequency (1+m_e/m_h) hbarω_LO. To this end, we developed a third-order, analytic solution of the semiconductor Bloch equations (SBE) with Boltzmann-type, LO-phonon collision terms. Results of this theory along with numerical solutions of the SBE will be presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
Transformations, Inc., has extensive experience building high-performance homes - production and custom - in a variety of Massachusetts locations and uses mini-split heat pumps (MSHPs) for space conditioning in most of its homes. The use of MSHPs for simplified space-conditioning distribution provides significant first-cost savings, which offsets the increased investment in the building enclosure. In this project, the U.S. Department of Energy Building America team Building Science Corporation evaluated the long-term performance of MSHPs in 8 homes during a period of 3 years. The work examined electrical use of MSHPs, distributions of interior temperatures and humidity when using simplified (two-point) heating systems in high-performance housing, and the impact of open-door/closed-door status on temperature distributions.
Long-Term Monitoring of Mini-Split Ductless Heat Pumps in the Northeast
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ueno, K.; Loomis, H.
Transformations, Inc. has extensive experience building their high performance housing at a variety of Massachusetts locations, in both a production and custom home setting. The majority of their construction uses mini-split heat pumps (MSHPs) for space conditioning. This research covered the long-term performance of MSHPs in Zone 5A; it is the culmination of up to 3 years' worth of monitoring in a set of eight houses. This research examined electricity use of MSHPs, distributions of interior temperatures and humidity when using simplified (two-point) heating systems in high-performance housing, and the impact of open-door/closed-door status on temperature distributions. The use of simplified space conditioning distribution (through use of MSHPs) provides significant first cost savings, which are used to offset the increased investment in the building enclosure.
Modified off-midline closure of pilonidal sinus disease.
Saber, Aly
2014-05-01
Numerous surgical procedures have been described for pilonidal sinus disease, but treatment failure and disease recurrence are frequent. Conventional off-midline flap closures have relatively favorable surgical outcomes but relatively unfavorable cosmetic outcomes. The author reports outcomes of a new simplified off-midline technique for closure of the defect after complete excision of the sinus tracts. Two hundred patients of both sexes were enrolled; modified D-shaped excisions were used to include all sinuses and their ramifications, with a simplified procedure to close the defect. The overall wound infection rate was 12% (12.2% for males and 11.1% for females). Wound disruption necessitated laying the whole wound open and managing it as in the open technique. The overall wound disruption rate was 6% (6.1% for males and 5.5% for females), and the overall recurrence rate was 7%. The simplified off-midline closure without a flap appeared comparable to conventional off-midline closure with a flap in terms of wound infection, wound dehiscence, and recurrence. Advantages of the simplified procedure include potentially reduced surgical complexity, reduced operating time, and improved cosmetic outcome.
NASA Technical Reports Server (NTRS)
Lakeotes, Christopher D.
1990-01-01
DEVECT (CYBER-205 Devectorizer) is a CYBER-205 FORTRAN source-language preprocessor that reduces vector statements to standard FORTRAN. In addition, DEVECT has many other standard and optional features that simplify conversion of vector-processor programs for the CYBER 200 to other computers. Written in FORTRAN IV.
Simplified aerosol modeling for variational data assimilation
NASA Astrophysics Data System (ADS)
Huneeus, N.; Boucher, O.; Chevallier, F.
2009-11-01
We have developed a simplified aerosol model together with its tangent linear and adjoint versions for the ultimate aim of optimizing global aerosol and aerosol precursor emissions using variational data assimilation. The model was derived from the general circulation model LMDz; it groups the 24 aerosol species simulated in LMDz into 4 species, namely gaseous precursors, fine mode aerosols, coarse mode desert dust and coarse mode sea salt. The emissions have been kept as in the original model. Modifications, however, were introduced in the computation of aerosol optical depth and in the processes of sedimentation, dry and wet deposition, and sulphur chemistry to ensure consistency with the new set of species and their composition. The simplified model successfully reproduces the main features of the aerosol distribution in LMDz. The largest differences in aerosol load are observed for fine mode aerosols and gaseous precursors. Differences between the original and simplified models are mainly associated with the new deposition and sedimentation velocities, consistent with the definition of species in the simplified model, and with the simplification of the sulphur chemistry. Furthermore, simulated aerosol optical depth remains within the variability of monthly AERONET observations for all aerosol types and all sites throughout most of the year. The largest differences are observed over sites with strong desert dust influence. In terms of daily aerosol variability, the model is less able to reproduce the observed variability in the AERONET data, with larger discrepancies at stations affected by industrial aerosols. The simplified model, however, closely follows the daily simulation from LMDz. Sensitivity analyses with the tangent linear version show that the simplified sulphur chemistry is the dominant process responsible for the strong non-linearity of the model.
Supersonic Localized Excitations Mediate Microscopic Dynamic Failure
NASA Astrophysics Data System (ADS)
Ghaffari, H. O.; Griffith, W. A.; Pec, M.
2017-12-01
A moving rupture front activates a fault patch by increasing stress above a threshold strength level. Subsequent failure yields fast slip which releases stored energy in the rock. A fraction of the released energy is radiated as seismic waves carrying information about the earthquake source. While this simplified model is widely accepted, the detailed evolution from the onset of dynamic failure to eventual re-equilibration is still poorly understood. To study dynamic failure of brittle solids we indented thin sheets of single mineral crystals and recorded the emitted ultrasound signals (high frequency analogues to seismic waves) using an array of 8 to 16 ultrasound probes. The simple geometry of the experiments allows us to unravel details of dynamic stress history of the laboratory earthquake sources. A universal pattern of failure is observed. First, stress increases over a short time period (1 - 2 µs), followed by rapid weakening (≈ 15 µs). Rapid weakening is followed by two distinct relaxation phases: a temporary quasi-steady state phase (10 µs) followed by a long-term relaxation phase (> 50 µs). We demonstrate that the dynamic stress history during failure is governed by formation and interaction of local non-dispersive excitations, or solitons. The formation and annihilation of solitons mediates the microscopic fast weakening phase, during which extreme acceleration and collision of solitons lead to non-Newtonian behavior and Lorentz contraction, i.e. shortening of solitons' characteristic length. Interestingly, a soliton can propagate as fast as 37 km/s, much faster than the p-wave velocity, implying that a fraction of the energy transmits through soliton excitations. The quasi-steady state phase delays the long-term ageing of the damaged crystal, implying a potentially weaker material. Our results open new horizons for understanding the complexity of earthquake sources, and, more generally, non-equilibrium relaxation of many body systems.
NASA Astrophysics Data System (ADS)
Kelbert, A.; Egbert, G. D.; Sun, J.
2011-12-01
Poleward of 45-50 degrees (geomagnetic latitude), observatory data are significantly influenced by auroral ionospheric current systems, invalidating the simplifying zonal dipole source assumption traditionally used for long-period (T > 2 days) geomagnetic induction studies. Previous efforts to use these data to obtain the global electrical conductivity distribution in Earth's mantle have omitted high-latitude sites (further thinning an already sparse dataset) and/or corrected the affected transfer functions using a highly simplified model of auroral source currents. Although these strategies are partly effective, there remain clear suggestions of source contamination in most recent 3D inverse solutions - specifically, bands of conductive features are found near auroral latitudes. We report on a new approach to this problem, based on adjusting both external field structure and 3D Earth conductivity to fit observatory data. As an initial step towards full joint inversion we are using a two-step procedure. In the first stage, we adopt a simplified conductivity model, with a thin sheet of variable conductance (to represent the oceans) overlying a 1D Earth, to invert observed magnetic fields for external source spatial structure. Input data for this inversion are obtained from frequency-domain principal components (PC) analysis of geomagnetic observatory hourly mean values. To make this (essentially linear) inverse problem well-posed we regularize using covariances for source field structure that are consistent with well-established properties of auroral ionospheric (and magnetospheric) current systems, and basic physics of the EM fields. In the second stage, we use a 3D finite difference inversion code, with source fields estimated from the first stage, to further fit the observatory PC modes.
We incorporate higher-latitude data into the inversion, and maximize the amount of available information by directly inverting the magnetic field components of the PC modes, instead of transfer functions such as the C-responses used previously. Recent improvements in accuracy and speed of the forward and inverse finite difference codes (a secondary field formulation and parallelization over frequencies) allow us to use a finer computational grid for inversion, and thus to model finer-scale features, making full use of the expanded data set. Overall, our approach presents an improvement over earlier observatory data interpretation techniques, making better use of the available data and allowing us to explore the trade-offs between complications in source structure and heterogeneities in mantle conductivity. We will also report on progress towards applying the same approach to simultaneous source/conductivity inversion of shorter-period observatory data, focusing especially on the daily variation band.
Table of Comparison of 2012 CDR v 2006 IUR Definitions
In forming 40 CFR 711, EPA sought to simplify the definition section and remove unnecessary duplication of regulatory terms. This table is a comparison of 2006 IUR Definitions and 2012 CDR Definitions.
Spectra of turbulently advected scalars that have small Schmidt number
NASA Astrophysics Data System (ADS)
Hill, Reginald J.
2017-09-01
Exact statistical equations are derived for turbulent advection of a passive scalar having diffusivity much larger than the kinematic viscosity, i.e., small Schmidt number. The equations contain all terms needed for precise direct numerical simulation (DNS) quantification. In the appropriate limit, the equations reduce to the classical theory for which the scalar spectrum is proportional to the energy spectrum multiplied by k^-4, which, in turn, results in the inertial-diffusive range power law, k^-17/3. The classical theory was derived for the case of isotropic velocity and scalar fields. The exact equations are simplified for less restrictive cases: (1) locally isotropic scalar fluctuations at dissipation scales with no restriction on symmetry of the velocity field, (2) isotropic velocity field with averaging over all wave-vector directions with no restriction on the symmetry of the scalar, motivated by that average being used for DNS, and (3) isotropic velocity field with axisymmetric scalar fluctuations, motivated by the mean-scalar-gradient-source case. The equations are applied to recently published DNSs of passive scalars for the cases of a freely decaying scalar and a mean-scalar-gradient source. New terms in the exact equations are estimated for those cases and are found to be significant; those terms cause the deviations from the classical theory found by the DNS studies. A new formula for the mean-scalar-gradient case explains the variation of the scalar spectra for the DNS of the smallest Schmidt-number cases. Expansion in Legendre polynomials reveals the effect of axisymmetry. Inertial-diffusive-range formulas for both the zero- and second-order Legendre contributions are given. Exact statistical equations reveal what must be quantified using DNS to determine what causes deviations from asymptotic relationships.
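The classical inertial-diffusive exponent quoted above follows directly from combining the Kolmogorov inertial-range energy spectrum with the k^-4 factor; a one-line derivation:

```latex
E_\theta(k) \;\propto\; E(k)\,k^{-4},
\qquad E(k) \propto \varepsilon^{2/3} k^{-5/3}
\;\;\Longrightarrow\;\;
E_\theta(k) \;\propto\; k^{-5/3}\,k^{-4} \;=\; k^{-17/3}.
```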
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ba, Yan; Liu, Haihu; Li, Qing
2016-08-15
In this paper, we propose a color-gradient lattice Boltzmann (LB) model for simulating two-phase flows with high density ratio and high Reynolds number. The model applies a multi-relaxation-time (MRT) collision operator to enhance the stability of the simulation. A source term, which is derived by the Chapman-Enskog analysis, is added into the MRT LB equation so that the Navier-Stokes equations can be exactly recovered. Also, a new form of the equilibrium density distribution function is used to simplify the source term. To validate the proposed model, steady flows of a static droplet and the layered channel flow are first simulated with density ratios up to 1000. Small values of spurious velocities and interfacial tension errors are found in the static droplet test, and improved profiles of velocity are obtained by the present model in simulating channel flows. Then, two cases of unsteady flows, Rayleigh-Taylor instability and droplet splashing on a thin film, are simulated. In the former case, the density ratio of 3 and Reynolds numbers of 256 and 2048 are considered. The interface shapes and spike/bubble positions are in good agreement with the results of previous studies. In the latter case, the droplet spreading radius is found to obey the power law proposed in previous studies for the density ratio of 100 and Reynolds number up to 500.
Three Microstructural Exercises for Students.
ERIC Educational Resources Information Center
Means, Winthrop D.
1986-01-01
Describes laboratory exercises which demonstrate a new simplified technique for deforming thin samples of crystalline materials on the stage of a petrographic microscope. Discusses how this process allows students to see the development of microstructures resulting from cracking, slipping, thinning, and recrystallization. References and sources of…
Lovelace simplifies, saves big with single-source imaging equipment service contract.
1997-11-01
Lovelace Health System traded in its disorganized mess of service contracts for imaging and cardiology equipment for one umbrella contract--and is now saving more than $200,000 a year as a result. Find out how to achieve similar savings.
NASA Technical Reports Server (NTRS)
DeLoach, Richard
2012-01-01
This paper reviews the derivation of an equation for scaling response surface modeling experiments. The equation represents the smallest number of data points required to fit a linear regression polynomial so as to achieve certain specified model adequacy criteria. Specific criteria are proposed which simplify an otherwise rather complex equation, generating a practical rule of thumb for the minimum volume of data required to adequately fit a polynomial with a specified number of terms in the model. This equation and the simplified rule of thumb it produces can be applied to minimize the cost of wind tunnel testing.
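The paper's specific scaling equation is not reproduced in the abstract. As a hypothetical illustration of the kind of rule of thumb involved, one can count the terms of a full polynomial model in d factors and scale that count by a safety factor to leave residual degrees of freedom; the factor 1.5 below is an assumption for illustration, not the paper's criterion:

```python
from math import comb

def n_terms(d, n):
    # number of terms in a full polynomial of degree n in d factors,
    # including the intercept: C(n + d, d)
    return comb(n + d, d)

def min_points(d, n, safety=1.5):
    # hypothetical rule of thumb: scale the term count by a safety factor
    # so the fit retains residual degrees of freedom for adequacy checks
    # (illustrative only; the paper derives a specific criterion)
    return int(safety * n_terms(d, n) + 0.5)

print(n_terms(3, 2))     # full quadratic in 3 factors has 10 terms
print(min_points(3, 2))  # illustrative minimum data volume
```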
Simplified model of mean double step (MDS) in human body movement
NASA Astrophysics Data System (ADS)
Dusza, Jacek J.; Wawrzyniak, Zbigniew M.; Mugarra González, C. Fernando
In this paper we present a simplified and useful model of the human body movement based on the full gait cycle description, called the Mean Double Step (MDS). It enables the parameterization and simplification of the human movement. Furthermore it allows a description of the gait cycle by providing standardized estimators to transform the gait cycle into a periodical movement process. Moreover, the method of simplifying the MDS model and its compression are demonstrated. The simplification is achieved by reducing the number of bars of the spectrum and/or by reducing the number of samples describing the MDS, reducing both the computational burden and the data-storage requirements. Our MDS model, which is applicable to the gait cycle method for examining patients, is non-invasive and provides the additional advantage of featuring a functional characterization of the relative or absolute movement of any part of the body.
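The spectral compression described above (reducing the number of bars of the spectrum) can be sketched as follows; the gait-like signal and the harmonic count are toy choices, not values from the paper:

```python
import numpy as np

def compress_mds(mds, n_harmonics):
    # keep only the DC component and the first n_harmonics spectral bars
    # of one averaged gait cycle, then reconstruct the time-domain signal
    spec = np.fft.rfft(mds)
    spec[n_harmonics + 1:] = 0.0
    return np.fft.irfft(spec, len(mds))

# toy periodic 'MDS': two strong low harmonics plus a weak high one
t = np.linspace(0.0, 1.0, 200, endpoint=False)
mds = (np.sin(2 * np.pi * t)
       + 0.3 * np.sin(4 * np.pi * t)
       + 0.05 * np.sin(20 * np.pi * t))

approx = compress_mds(mds, 3)   # keep DC + 3 harmonics -> 10th harmonic dropped
err = np.max(np.abs(approx - mds))
print(round(err, 3))
```

Because the MDS is by construction periodic, truncating its Fourier series bounds the reconstruction error by the energy of the discarded harmonics, which is what makes this a controlled compression rather than a lossy guess.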
Sousa, Marcelo R; Jones, Jon P; Frind, Emil O; Rudolph, David L
2013-01-01
In contaminant travel from ground surface to groundwater receptors, the time taken in travelling through the unsaturated zone is known as the unsaturated zone time lag. Depending on the situation, this time lag may or may not be significant within the context of the overall problem. A method is presented for assessing the importance of the unsaturated zone in the travel time from source to receptor in terms of estimates of both the absolute and the relative advective times. A choice of different techniques for both unsaturated and saturated travel time estimation is provided. This method may be useful for practitioners to decide whether to incorporate unsaturated processes in conceptual and numerical models and can also be used to roughly estimate the total travel time between points near ground surface and a groundwater receptor. This method was applied to a field site located in a glacial aquifer system in Ontario, Canada. Advective travel times were estimated using techniques with different levels of sophistication. The application of the proposed method indicates that the time lag in the unsaturated zone is significant at this field site and should be taken into account. For this case, sophisticated and simplified techniques lead to similar assessments when the same knowledge of the hydraulic conductivity field is assumed. When there is significant uncertainty regarding the hydraulic conductivity, simplified calculations did not lead to a conclusive decision. Copyright © 2012 Elsevier B.V. All rights reserved.
1991-06-01
algorithms (for the analysis of mechanisms), traditional numerical simulation methods, and algorithms that examine the simulation results and reinterpret them in qualitative terms. Moreover, the Workbench can use symbolic procedures to help guide or simplify the task
Acoustic reciprocity: An extension to spherical harmonics domain.
Samarasinghe, Prasanga; Abhayapala, Thushara D; Kellermann, Walter
2017-10-01
Acoustic reciprocity is a fundamental property of acoustic wavefields that is commonly used to simplify the measurement process of many practical applications. Traditionally, the reciprocity theorem is defined between a monopole point source and a point receiver. Intuitively, it must apply to more complex transducers than monopoles. In this paper, the authors formulate the acoustic reciprocity theory in the spherical harmonics domain for directional sources and directional receivers with higher order directivity patterns.
Empirical Modeling Of Single-Event Upset
NASA Technical Reports Server (NTRS)
Zoutendyk, John A.; Smith, Lawrence S.; Soli, George A.; Thieberger, Peter; Smith, Stephen L.; Atwood, Gregory E.
1988-01-01
Experimental study presents examples of empirical modeling of single-event upset (SEU) in negatively-doped-source/drain metal-oxide-semiconductor static random-access memory cells. The data support adoption of a simplified worst-case model in which the cross section of SEU by an ion above threshold energy equals the area of the memory cell.
Culture optimization for the emergent zooplanktonic model organism Oikopleura dioica
Bouquet, Jean-Marie; Spriet, Endy; Troedsson, Christofer; Otterå, Helen; Chourrout, Daniel; Thompson, Eric M.
2009-01-01
The pan-global marine appendicularian, Oikopleura dioica, shows considerable promise as a candidate model organism for cross-disciplinary research ranging from chordate genetics and evolution to molecular ecology research. This urochordate has a simplified anatomical organization, remains transparent throughout an exceptionally short life cycle of less than 1 week, and exhibits high fecundity. At 70 Mb, the compact, sequenced genome ranks among the smallest known metazoan genomes, with both gene regulatory and intronic regions highly reduced in size. The organism occupies an important trophic role in marine ecosystems and is a significant contributor to global vertical carbon flux. Among the short list of bona fide biological model organisms, all share the property that they are amenable to long-term maintenance in laboratory cultures. Here, we tested diet regimes, spawn densities and dilutions and seawater treatment, leading to optimization of a detailed culture protocol that permits sustainable long-term maintenance of O. dioica, allowing continuous, uninterrupted production of source material for experimentation. The culture protocol can be quickly adapted in both coastal and inland laboratories and should promote rapid development of the many original research perspectives the animal offers. PMID:19461862
The role of nuclear energy in mitigating greenhouse warming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krakowski, R.A.
1997-12-31
A behavioral, top-down, forced-equilibrium market model of long-term (~2100) global energy-economics interactions has been modified with a bottom-up nuclear energy model and used to construct consistent scenarios describing future impacts of civil nuclear materials flows in an expanding, multi-regional (13) world economy. The relative measures and tradeoffs between economic (GNP, tax impacts, productivity, etc.), environmental (greenhouse gas accumulations, waste accumulation, proliferation risk), and energy (resources, energy mixes, supply-side versus demand-side attributes) interactions that emerge from these analyses are focused herein on advancing understanding of the role that nuclear energy (and other non-carbon energy sources) might play in mitigating greenhouse warming. Two ostensibly opposing scenario drivers are investigated: (a) demand-side (non-price-induced) autonomous energy efficiency improvements; and (b) supply-side carbon-tax inducements to shift energy mixes towards reduced- or non-carbon forms. In terms of stemming greenhouse warming for minimal cost of greenhouse-gas abatement, and within the limitations of the simplified taxing schedule used, a symbiotic combination of these two approaches may offer advantages not found if each is applied separately.
A method for obtaining a statistically stationary turbulent free shear flow
NASA Technical Reports Server (NTRS)
Timson, Stephen F.; Lele, S. K.; Moser, R. D.
1994-01-01
The long-term goal of the current research is the study of Large-Eddy Simulation (LES) as a tool for aeroacoustics. New algorithms and developments in computer hardware are making possible a new generation of tools for aeroacoustic predictions, which rely on the physics of the flow rather than empirical knowledge. LES, in conjunction with an acoustic analogy, holds the promise of predicting the statistics of noise radiated to the far field of a turbulent flow. LES's predictive ability will be tested through extensive comparison of acoustic predictions based on a Direct Numerical Simulation (DNS) and an LES of the same flow, as well as a priori testing of DNS results. The method presented here is aimed at allowing simulation of a turbulent flow field that is both simple and amenable to acoustic predictions. A free shear flow that is homogeneous in both the streamwise and spanwise directions and statistically stationary will be simulated using equations based on the Navier-Stokes equations with a small number of added terms. Studying a free shear flow eliminates the need to consider flow-surface interactions as an acoustic source. The homogeneous directions and the flow's statistically stationary nature greatly simplify the application of an acoustic analogy.
Radiation source with shaped emission
Kubiak, Glenn D.; Sweatt, William C.
2003-05-13
Employing a source of radiation, such as an electric discharge source, that is equipped with a capillary region configured into some predetermined shape, such as an arc or slit, can significantly improve the amount of flux delivered to the lithographic wafers while maintaining high efficiency. The source is particularly suited for photolithography systems that employ a ringfield camera. The invention permits the condenser, which delivers critical illumination to the reticle, to be simplified from five or more reflective elements to a total of three or four reflective elements, thereby increasing condenser efficiency. It maximizes the flux delivered and maintains a high coupling efficiency. This architecture couples EUV radiation from the discharge source into a ringfield lithography camera.
Comparison of Online Dementia Information in Chinese and in English Languages
Tsiang, John T
2017-01-01
Introduction There is a deficit of avenues for obtaining dementia information in the Asian American community. This study aims to compare the content and quality differences between websites providing information on dementia as found by a Google search conducted both in simplified Chinese characters and in English. Methods A Google search was performed for the phrase “dementia” in simplified Chinese characters and in English. The resultant websites were categorized by whether they were commercial in nature, the type of website, and whether the website provided an explanation of dementia signs and symptoms. The quality of the websites was assessed via readability and the Health on the Net Code of Conduct (HONcode). Chi-squared analyses were performed to establish differences between the English and simplified Chinese results. Results The simplified Chinese search websites were more likely to be commercial (p=0.045) and more likely not to meet HONcode standards (p=0.008). No statistically significant differences were observed in the type of website (p=0.127), the prevalence of signs-and-symptoms explanations (p=0.073), or the readability of the websites (p=0.151). Conclusion The quality of websites obtained from the simplified Chinese character Google search was lower than that of websites obtained from searches in English. Given the limited sources of linguistically and culturally appropriate information on dementia, improvement of Internet resources may help to improve health outcomes of dementia patients in the Asian American population. PMID:29308336
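The chi-squared comparisons reported above are straightforward to reproduce on a 2x2 contingency table. The sketch below computes the statistic from first principles; the counts are invented for illustration and are not the study's data.

```python
# Chi-squared test of independence for a 2x2 contingency table,
# computed from first principles. Counts below are hypothetical.

def chi_squared_2x2(table):
    """Return the chi-squared statistic for a 2x2 table of observed counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_totals[i] * col_totals[j] / total
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

# Rows: English search, Chinese search; columns: meets HONcode, does not
observed = [[30, 10],
            [20, 20]]
stat = chi_squared_2x2(observed)
# With df = (2-1)*(2-1) = 1, the 5% critical value is 3.841
print(round(stat, 3), stat > 3.841)  # → 5.333 True
```

With real data this comparison is usually done with scipy.stats.chi2_contingency, which by default also applies Yates' continuity correction for 2x2 tables.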
A Simplified Decision Support Approach for Evaluating Wetlands Ecosystem Services NABS11
State-level managers and environmental advocates often must justify their restoration actions in terms of tangible beneficial outcomes. Wetlands functional assessment tools (e.g., Wetland Evaluation Technique (WET), Habitat Evaluation Procedures (HEP), Hydrogeomorphic Method (HGM)...
History of Science in the Physics Curriculum: A Directed Content Analysis of Historical Sources
NASA Astrophysics Data System (ADS)
Seker, Hayati; Guney, Burcu G.
2012-05-01
Although the history of science is a potential resource for instructional materials, teachers do not have a tendency to use historical materials in their lessons. Studies showed that instructional materials should be adaptable and consistent with the curriculum. This study purports to examine the alignment between the history of science and the curriculum in the light of the facilitator model on the use of history of science in science teaching, and to expose possible difficulties in preparing historical materials. For this purpose, a qualitative content analysis method was employed. Codes and themes were defined beforehand, with respect to the levels and sublevels of the model. The analysis revealed several problems with the alignment of historical sources with the physics curriculum: limited information about scientists' personal lives, the difficulty of linking with content knowledge, the lack of emphasis on scientific process in the physics curriculum, differences between chronology and the sequence of topics, and the lack of information about scientists' reasoning. Based on the findings of the analysis, it would be difficult to use original historical sources directly; educators are needed to simplify historical knowledge within a pedagogical perspective. There is a need for historical sources, like the Harvard Case Histories in Experimental Science, since historical information appropriate to the curriculum objectives can only be obtained by simplifying the complex original material. The curriculum should leave opportunities for educators interested in the history of science, even when historical sources provide only a limited amount of information for some concepts in the curriculum.
An Evolving Worldview: Making Open Source Easy
NASA Astrophysics Data System (ADS)
Rice, Z.
2017-12-01
NASA Worldview is an interactive interface for browsing full-resolution, global satellite imagery. Worldview supports an open data policy so that academia, private industry, and the general public can use NASA's satellite data to address Earth science related issues. Worldview was open sourced in 2014. By shifting to an open source approach, the Worldview application has evolved to better serve end users. Project developers are able to have discussions with end users and community developers to understand issues and develop new features. Community developers are able to track upcoming features, collaborate on them, and make their own contributions. Developers who discover issues are able to address those issues and submit a fix. This reduces the time it takes for a project developer to reproduce an issue or develop a new feature. Getting new developers to contribute to the project has been one of the most important and difficult aspects of open sourcing Worldview. After witnessing potential outside contributors struggle, the team focused on making the installation of Worldview simple to reduce the initial learning curve and make contributing code easy. One way we have addressed this is through a simplified setup process. Our setup documentation includes a set of prerequisites and a set of straightforward commands to clone, configure, install, and run. This presentation will emphasize our focus on simplifying and standardizing Worldview's open source code so that more people are able to contribute. The more people who contribute, the better the application will become over time.
Modified Off-Midline Closure of Pilonidal Sinus Disease
Saber, Aly
2014-01-01
Background: Numerous surgical procedures have been described for pilonidal sinus disease, but treatment failure and disease recurrence are frequent. Conventional off-midline flap closures have relatively favorable surgical outcomes, but relatively unfavorable cosmetic outcomes. Aim: The author reported outcomes of a new simplified off-midline technique for closure of the defect after complete excision of the sinus tracts. Patients and Methods: Two hundred patients of both sexes were enrolled; modified D-shaped excisions were used to include all sinuses and their ramifications, with a simplified procedure to close the defect. Results: The overall wound infection rate was 12% (12.2% for males and 11.1% for females). Wound disruption necessitated laying the whole wound open and managing it as in the open technique. The overall wound disruption rate was 6% (6.1% for males and 5.5% for females), and the overall recurrence rate was 7%. Conclusion: Our simplified off-midline closure without flap appeared to be comparable to conventional off-midline closure with flap in terms of wound infection, wound dehiscence, and recurrence. Advantages of the simplified procedure include potentially reduced surgery complexity, reduced surgery time, and improved cosmetic outcome. PMID:24926445
48 CFR 8.405-6 - Limiting sources.
Code of Federal Regulations, 2013 CFR
2013-10-01
... or BPA with an estimated value exceeding the micro-purchase threshold not placed or established in... Schedule ordering procedures. The original order or BPA must not have been previously issued under sole... order or BPA exceeding the simplified acquisition threshold. (2) Posting. (i) Within 14 days after...
48 CFR 8.405-6 - Limiting sources.
Code of Federal Regulations, 2011 CFR
2011-10-01
... or BPA with an estimated value exceeding the micro-purchase threshold not placed or established in... Schedule ordering procedures. The original order or BPA must not have been previously issued under sole... order or BPA exceeding the simplified acquisition threshold. (2) Posting. (i) Within 14 days after...
48 CFR 8.405-6 - Limiting sources.
Code of Federal Regulations, 2012 CFR
2012-10-01
... or BPA with an estimated value exceeding the micro-purchase threshold not placed or established in... Schedule ordering procedures. The original order or BPA must not have been previously issued under sole... order or BPA exceeding the simplified acquisition threshold. (2) Posting. (i) Within 14 days after...
48 CFR 8.405-6 - Limiting sources.
Code of Federal Regulations, 2014 CFR
2014-10-01
... or BPA with an estimated value exceeding the micro-purchase threshold not placed or established in... Schedule ordering procedures. The original order or BPA must not have been previously issued under sole... order or BPA exceeding the simplified acquisition threshold. (2) Posting. (i) Within 14 days after...
Mass Media and Socialization: A Selected Bibliography.
ERIC Educational Resources Information Center
Gordon, Thomas F.; Verna, Mary Ellen
Given the growing interest in the area of socialization and mass media, this bibliography of articles and books is intended to simplify reference to the literature and to stimulate research. Three primary sources are used: "Psychological Abstracts" (1950-1972), "Sociological Abstracts" (1953-1972), and the "Cumulative Book…
A 2005 biomass burning (wildfire, prescribed, and agricultural) emission inventory has been developed for the contiguous United States using a newly developed simplified method of combining information from multiple sources for use in the US EPA’s national Emission Inventory (NEI...
Tenke, Craig E.; Kayser, Jürgen
2012-01-01
The topographic ambiguity and reference-dependency that have plagued EEG/ERP research throughout its history are largely attributable to volume conduction, which may be concisely described by a vector form of Ohm’s Law. This biophysical relationship is common to popular algorithms that infer neuronal generators via inverse solutions. It may be further simplified as Poisson’s source equation, which identifies underlying current generators from estimates of the second spatial derivative of the field potential (Laplacian transformation). Intracranial current source density (CSD) studies have dissected the “cortical dipole” into intracortical sources and sinks, corresponding to physiologically meaningful patterns of neuronal activity at a sublaminar resolution, much of which is locally cancelled (i.e., closed field). By virtue of the macroscopic scale of the scalp-recorded EEG, a surface Laplacian reflects the radial projections of these underlying currents, representing a unique, unambiguous measure of neuronal activity at the scalp. Although the surface Laplacian requires minimal assumptions compared to complex, model-sensitive inverses, the resulting waveform topographies faithfully summarize and simplify essential constraints that must be placed on putative generators of a scalp potential topography, even if they arise from deep or partially closed fields. CSD methods thereby provide a global empirical and biophysical context for generator localization, spanning scales from intracortical to scalp recordings. PMID:22796039
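The Laplacian-as-source relationship can be illustrated with a toy computation: for a potential whose analytic Laplacian is known, a 5-point finite-difference stencil (the discrete analogue of the surface Laplacian) recovers it. This is a minimal numerical sketch of the biophysics described above, not an EEG processing pipeline.

```python
# Discrete 2-D Laplacian via the 5-point stencil, the finite-difference
# analogue of the surface Laplacian used in CSD estimation.
# Toy potential: V(x, y) = x^2 + y^2, whose analytic Laplacian is 4.

def V(x, y):
    return x * x + y * y

def laplacian(V, x, y, h):
    """Approximate d2V/dx2 + d2V/dy2 with a 5-point stencil of spacing h."""
    return (V(x + h, y) + V(x - h, y) + V(x, y + h) + V(x, y - h)
            - 4.0 * V(x, y)) / (h * h)

lap = laplacian(V, 0.3, -0.2, h=0.01)
# Poisson's source equation identifies the current source density as
# proportional to the negative Laplacian of the potential.
print(round(lap, 6))  # → 4.0
```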
Comprehensive Numerical Simulation of Filling and Solidification of Steel Ingots
Pola, Annalisa; Gelfi, Marcello; La Vecchia, Giovina Marina
2016-01-01
In this paper, a complete three-dimensional numerical model of mold filling and solidification of steel ingots is presented. The risk of powder entrapment and defect formation during filling is analyzed in detail, demonstrating the importance of using a comprehensive geometry, with trumpet and runner, compared to conventional simplified models. By using a case study, it was shown that the simplified model significantly underestimates the defect sources, reducing the utility of simulations in supporting mold and process design. An experimental test was also performed on an instrumented mold, and the measurements were compared to the calculation results. The good agreement between calculation and trial allowed the simulation to be validated. PMID:28773890
The Profit-Maximizing Firm: Old Wine in New Bottles.
ERIC Educational Resources Information Center
Felder, Joseph
1990-01-01
Explains and illustrates a simplified use of graphical analysis for analyzing the profit-maximizing firm. Believes that graphical analysis helps college students gain a deeper understanding of marginalism and an increased ability to formulate economic problems in marginalist terms. (DB)
A simplified indexing of F-region geophysical noise at low latitudes
NASA Technical Reports Server (NTRS)
Aggarwal, S.; Lakshmi, D. R.; Reddy, B. M.
1979-01-01
A simple method of deriving an F-region index that can warn the prediction users at low latitudes as to the specific months when they have to be more careful in using the long term predictions is described.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Everett, M.; Davis, P.
LLNL and Optiphase researched fiber-optic-based fluorescence lifetime instrumentation which, through the incorporation of innovative technology supplied by Optiphase Inc., could lead to a reliable, simplified, and low-cost system mutually compatible with future interests of the company and the long-term stability requirements of ESP sensors.
On the joint inversion of geophysical data for models of the coupled core-mantle system
NASA Technical Reports Server (NTRS)
Voorhies, Coerte V.
1991-01-01
Joint inversion of magnetic, earth rotation, geoid, and seismic data for a unified model of the coupled core-mantle system is proposed and shown to be possible. A sample objective function is offered and simplified by targeting results from independent inversions and summary travel time residuals instead of original observations. These data are parameterized in terms of a very simple, closed model of the topographically coupled core-mantle system. Minimization of the simplified objective function leads to a nonlinear inverse problem; an iterative method for solution is presented. Parameterization and method are emphasized; numerical results are not presented.
A VLSI architecture for simplified arithmetic Fourier transform algorithm
NASA Technical Reports Server (NTRS)
Reed, Irving S.; Shih, Ming-Tang; Truong, T. K.; Hendon, E.; Tufts, D. W.
1992-01-01
The arithmetic Fourier transform (AFT) is a number-theoretic approach to Fourier analysis which has been shown to perform competitively with the classical FFT in terms of accuracy, complexity, and speed. Theorems developed in a previous paper for the AFT algorithm are used here to derive the original AFT algorithm which Bruns found in 1903. This is shown to yield an algorithm of less complexity and of improved performance over certain recent AFT algorithms. A VLSI architecture is suggested for this simplified AFT algorithm. This architecture uses a butterfly structure which reduces the number of additions by 25 percent of that used in the direct method.
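The core of the AFT, the Bruns alternating average combined with Möbius inversion over the odd integers, can be sketched in a few lines. The snippet below recovers the cosine coefficients of a band-limited even signal; it illustrates the number-theoretic idea only, not the VLSI butterfly architecture proposed in the paper.

```python
import math

def mobius(n):
    """Mobius function mu(n), by trial division."""
    if n == 1:
        return 1
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:   # repeated prime factor => mu = 0
                return 0
            result = -result
        p += 1
    if n > 1:
        result = -result
    return result

def bruns_average(f, n):
    """Bruns alternating average B(2n) = a_n + a_3n + a_5n + ... for an
    even period-1 signal f with cosine coefficients a_k."""
    return sum((-1) ** m * f(m / (2 * n)) for m in range(2 * n)) / (2 * n)

def aft_coefficient(f, n, band_limit):
    """Recover a_n by Mobius inversion over odd multiples, assuming
    all coefficients above band_limit vanish."""
    a, j = 0.0, 1
    while j * n <= band_limit:
        a += mobius(j) * bruns_average(f, j * n)
        j += 2               # only odd j contribute
    return a

# Even test signal with known coefficients a_1 = 1.0 and a_3 = 0.5
f = lambda t: math.cos(2 * math.pi * t) + 0.5 * math.cos(6 * math.pi * t)
print(aft_coefficient(f, 1, 8), aft_coefficient(f, 3, 8))  # ≈ 1.0, 0.5
```

The attraction of the approach is that the Bruns averages need only additions and scalings by 1/(2n), pushing multiplications to a small final stage — the property the VLSI architecture exploits.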
CHEMICAL MARKERS OF HUMAN WASTE CONTAMINATION IN SOURCE WATERS: A SIMPLIFIED ANALYTICAL APPROACH
Giving public water authorities a tool to monitor and measure levels of human waste contamination of waters simply and rapidly would enhance public protection. This methodology, using both urobilin and azithromycin (or any other human-use pharmaceutical) could be used to give pub...
Computational Fluids Domain Reduction to a Simplified Fluid Network
2012-04-19
readily available read/write software library. Code components from the open source projects OpenFoam and Paraview were explored for their adaptability...to the project. Both Paraview and OpenFoam read polyhedral mesh. OpenFoam does not read results data. Paraview actually allows for user “filters
The Big Book of Library Grant Money.
ERIC Educational Resources Information Center
Taft Group, Rockville, MD.
Libraries facing diminishing budgets and increasing demand for services must explore all funding sources, especially the more than $6 billion available in annual foundation and corporate giving. The easier and greater access to information on prospective givers that this volume provides simplifies the task. It profiles 1,471 library grant givers, compiled from…
NASA Astrophysics Data System (ADS)
Krzempek, K.; Abramski, K. M.; Nikodem, M.
2017-09-01
A widely tunable, fully monolithic, mid-infrared difference frequency generation source and its application to dispersion-spectroscopy-based laser trace gas detection of methane and ethane, near 2938 and 2998 cm⁻¹, is presented. Utilizing a fiber-pigtailed nonlinear crystal module radically simplified the optical setup, while maintaining a superb conversion efficiency of 20% W⁻¹. Seeded directly from two laser diodes, the source delivered ~0.5 mW of tunable radiation, which was used in a chirped laser dispersion spectroscopy setup, enabling the highly sensitive detection of hydrocarbons.
A Framework for Empirical Discovery.
1986-09-24
history of science reveal distinct classes of defined terms. Some systems have focused on one subset of these classes, while other programs have...the operators in detail, presenting examples of each from the history of science. 2.1 Defining Numeric Terms The most obvious operator for defining...laws; they can also simplify the process of discovering such laws. Let us consider some examples from the history of science in which the definition of
A Simple Explanation of Complexation
ERIC Educational Resources Information Center
Elliott, J. Richard
2010-01-01
The topics of solution thermodynamics, activity coefficients, and complex formation are introduced through computational exercises and sample applications. The presentation is designed to be accessible to freshmen in a chemical engineering computations course. The MOSCED model is simplified to explain complex formation in terms of hydrogen…
Effect of an overhead shield on gamma-ray skyshine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stedry, M.H.; Shultis, J.K.; Faw, R.E.
1996-06-01
A hybrid Monte Carlo and integral line-beam method is used to determine the effect of a horizontal slab shield above a gamma-ray source on the resulting skyshine doses. A simplified Monte Carlo procedure is used to determine the energy and angular distribution of photons escaping the source shield into the atmosphere. The escaping photons are then treated as a bare, point, skyshine source, and the integral line-beam method is used to estimate the skyshine dose at various distances from the source. From results for arbitrarily collimated and shielded sources, the skyshine dose is found to depend primarily on the mean-free-path thickness of the shield and only very weakly on the shield material.
Planets as background noise sources in free space optical communications
NASA Technical Reports Server (NTRS)
Katz, J.
1986-01-01
Background noise generated by planets is the dominant noise source in most deep space direct detection optical communications systems. Earlier approximate analyses of this problem are based on simplified blackbody calculations and can yield results that may be inaccurate by up to an order of magnitude. Various other factors that need to be taken into consideration in order to obtain a more accurate estimate of the noise magnitude, such as the phase angle and the actual spectral dependence of the planet albedo, are examined.
Herbst, Christian T; Oh, Jinook; Vydrová, Jitka; Švec, Jan G
2015-07-01
In this short report we introduce DigitalVHI, a free open-source software application for obtaining Voice Handicap Index (VHI) and other questionnaire data, which can be put on a computer in clinics and used in clinical practice. The software can simplify performing clinical studies since it makes the VHI scores directly available for analysis in a digital form. It can be downloaded from http://www.christian-herbst.org/DigitalVHI/.
Analysis of time-of-flight spectra
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gibson, E.M.; Foxon, C.T.; Zhang, J.
1990-07-01
A simplified method of data analysis for time-of-flight measurements of the velocity of molecular beam sources is described. This method does not require the complex data fitting previously used in such studies. The method is applied to the study of Pb molecular beams from a true Knudsen source and has been used to show that a VG Quadrupoles SXP300H mass spectrometer, when fitted with an open cross-beam ionizer, acts as an ideal density detector over a wide range of operating conditions.
Westgate, John N; Wania, Frank
2011-10-15
Air mass origin as determined by back trajectories often aids in explaining some of the short-term variability in the atmospheric concentrations of semivolatile organic contaminants. Airsheds, constructed by amalgamating large numbers of back trajectories, capture average air mass origins over longer time periods and thus have found use in interpreting air concentrations obtained by passive air samplers. To explore some of their key characteristics, airsheds for 54 locations on Earth were constructed and compared for roundness, seasonality, and interannual variability. To avoid the so-called "pole problem" and to simplify the calculation of roundness, a "geodesic grid" was used to bin the back-trajectory end points. Departures from roundness were seen to occur at all latitudes and to correlate significantly with local slope but no strong relationship between latitude and roundness was revealed. Seasonality and interannual variability vary widely enough to imply that static models of transport are not sufficient to describe the proximity of an area to potential sources of contaminants. For interpreting an air measurement an airshed should be generated specifically for the deployment time of the sampler, especially when investigating long-term trends. Samples taken in a single season may not represent the average annual atmosphere, and samples taken in linear, as opposed to round, airsheds may not represent the average atmosphere in the area. Simple methods are proposed to ascertain the significance of an airshed or individual cell. It is recommended that when establishing potential contaminant source regions only end points with departure heights of less than ∼700 m be considered.
Pteros: fast and easy to use open-source C++ library for molecular analysis.
Yesylevskyy, Semen O
2012-07-15
An open-source Pteros library for molecular modeling and analysis of molecular dynamics trajectories for C++ programming language is introduced. Pteros provides a number of routine analysis operations ranging from reading and writing trajectory files and geometry transformations to structural alignment and computation of nonbonded interaction energies. The library features asynchronous trajectory reading and parallel execution of several analysis routines, which greatly simplifies development of computationally intensive trajectory analysis algorithms. Pteros programming interface is very simple and intuitive while the source code is well documented and easily extendible. Pteros is available for free under open-source Artistic License from http://sourceforge.net/projects/pteros/. Copyright © 2012 Wiley Periodicals, Inc.
Simplified models of flue instruments: Influence of mouth geometry on the sound source
NASA Astrophysics Data System (ADS)
Dequand, S.; Willems, J. F. H.; Leroux, M.; Vullings, R.; van Weert, M.; Thieulot, C.; Hirschberg, A.
2003-03-01
Flue instruments such as the recorder flute and the transverse flute have different mouth geometries and acoustical response. The effect of the mouth geometry is studied by considering the aeroacoustical response of a simple whistle. The labium of a transverse flute has a large edge angle (60°) compared to that of a recorder flute (15°). Furthermore, the ratio W/h of the mouth width W to the jet thickness h can be varied in the transverse flute (lips of the musician) while it is fixed to a value W/h~4 in a recorder flute. A systematic experimental study of the steady oscillation behavior has been carried out. Results of acoustical pressure measurements and flow visualization are presented. The sharp edge of the recorder provides a sound source which is rich in harmonics at the cost of stability. The larger angle of the labium of the flute seems to be motivated by a better stability of the oscillations for thick jets but could also be motivated by a reduction of broadband turbulence noise. We propose two simplified sound source models which could be used for sound synthesis: a jet-drive model for W/h>2 and a discrete-vortex model for W/h<2.
A Composite Source Model With Fractal Subevent Size Distribution
NASA Astrophysics Data System (ADS)
Burjanek, J.; Zahradnik, J.
A composite source model, incorporating different-sized subevents, provides a possible description of complex rupture processes during earthquakes. The number of subevents with characteristic dimension greater than R is proportional to R⁻². The subevents do not overlap with each other, and the sum of their areas equals the area of the target event (e.g., mainshock). The subevents are distributed randomly over the fault. Each subevent is modeled as a finite source, using a kinematic approach (radial rupture propagation, constant rupture velocity, boxcar slip-velocity function, with constant rise time on the subevent). The final slip at each subevent is related to its characteristic dimension, using constant stress-drop scaling. Variation of rise time with subevent size is a free parameter of the modeling. The nucleation point of each subevent is taken as the point closest to the mainshock hypocentre. The synthetic Green's functions are calculated by the discrete-wavenumber method in a 1D horizontally layered crustal model in a relatively coarse grid of points covering the fault plane. The Green's functions needed for the kinematic model in a fine grid are obtained by cubic spline interpolation. As different frequencies may be efficiently calculated with different sampling, the interpolation simplifies and speeds up the procedure significantly. The composite source model described above allows interpretation in terms of a kinematic model with non-uniform final slip and rupture velocity spatial distributions. The 1994 Northridge earthquake (Mw = 6.7) is used as a validation event. Strong-ground-motion modeling of the 1999 Athens earthquake (Mw = 5.9) is also performed.
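A population with N(>R) proportional to R⁻² can be generated by inverse-transform sampling, adding subevents until their summed areas fill the target rupture area. The sketch below follows that recipe with arbitrary illustrative parameters; it is not the modeling code used for the Northridge or Athens events.

```python
import math
import random

def sample_radius(r_min, r_max, u):
    """Inverse-transform sample of a subevent dimension R for a fractal
    population with N(>R) ~ R**-2, truncated to [r_min, r_max]."""
    a, b = r_min ** -2, r_max ** -2
    return (a - u * (a - b)) ** -0.5

def build_subevents(target_area, r_min, r_max, rng):
    """Draw subevent radii until their summed circular areas fill the
    target event's area (the last draw may slightly overshoot; a real
    implementation would trim or resample it). Constant stress-drop
    scaling means final slip on each subevent grows linearly with its
    dimension."""
    radii, area = [], 0.0
    while area < target_area:
        r = sample_radius(r_min, r_max, rng.random())
        radii.append(r)
        area += math.pi * r * r
    return radii, area

rng = random.Random(42)
radii, area = build_subevents(target_area=100.0, r_min=0.5, r_max=5.0, rng=rng)
print(len(radii), area >= 100.0, all(0.5 <= r <= 5.0 for r in radii))
```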
Treatment of solid tumors by interstitial release of recoiling short-lived alpha emitters
NASA Astrophysics Data System (ADS)
Arazi, L.; Cooks, T.; Schmidt, M.; Keisari, Y.; Kelson, I.
2007-08-01
A new method utilizing alpha particles to treat solid tumors is presented. Tumors are treated with interstitial radioactive sources which continually release short-lived alpha-emitting atoms from their surface. The atoms disperse inside the tumor, delivering a high dose through their alpha decays. We implement this scheme using thin wire sources impregnated with 224Ra, which release by recoil 220Rn, 216Po and 212Pb atoms. This work aims to demonstrate the feasibility of our method by measuring the activity patterns of the released radionuclides in experimental tumors. Sources carrying 224Ra activities in the range 10-130 kBq were used in experiments on murine squamous cell carcinoma tumors. These included gamma spectroscopy of the dissected tumors and major organs, Fuji-plate autoradiography of histological tumor sections, and tissue damage detection by Hematoxylin-Eosin staining. The measurements focused on 212Pb and 212Bi. The 220Rn/216Po distribution was treated theoretically using a simple diffusion model. A simplified scheme was used to convert measured 212Pb activities to absorbed dose estimates. Both physical and histological measurements confirmed the formation of a 5-7 mm diameter necrotic region receiving a therapeutic alpha-particle dose around the source. The shape of the necrotic region closely corresponded to the measured activity patterns. 212Pb was found to leave the tumor through the blood at a rate which decreased with tumor mass. Our results suggest that the proposed method, termed DART (diffusing alpha-emitters radiation therapy), may potentially be useful for the treatment of human patients.
NASA Technical Reports Server (NTRS)
Powell, W. B.
1973-01-01
Thrust chamber performance is evaluated in terms of an analytical model incorporating all the loss processes that occur in a real rocket motor. The important loss processes in the real thrust chamber were identified, and a methodology and recommended procedure for predicting real thrust chamber vacuum specific impulse were developed. Simplified equations for the calculation of vacuum specific impulse are developed to relate the delivered performance (both vacuum specific impulse and characteristic velocity) to the ideal performance as degraded by the losses corresponding to a specified list of loss processes. These simplified equations enable the various performance loss components, and the corresponding efficiencies, to be quantified separately (except that interaction effects are arbitrarily assigned in the process). The loss and efficiency expressions presented can be used to evaluate experimentally measured thrust chamber performance, to direct development effort into the areas most likely to yield improvements in performance, and as a basis to predict performance of related thrust chamber configurations.
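The simplified bookkeeping described, ideal performance degraded by separable loss processes, amounts to a multiplicative chain of efficiencies. The sketch below uses invented placeholder values for the ideal vacuum specific impulse and the individual efficiencies; the loss categories are typical ones, not the report's exact list.

```python
# Delivered vacuum specific impulse as the ideal value degraded by a set of
# loss-process efficiencies. All numbers are hypothetical placeholders.

def delivered_isp(isp_ideal, efficiencies):
    """Multiply the ideal vacuum Isp by each loss-process efficiency."""
    isp = isp_ideal
    for eta in efficiencies.values():
        isp *= eta
    return isp

losses = {
    "energy_release": 0.99,    # incomplete combustion / mixing
    "divergence":     0.99,    # two-dimensional (nozzle divergence) loss
    "boundary_layer": 0.985,   # wall friction and heat transfer
    "two_phase":      0.995,   # condensed-phase lag in the exhaust
    "kinetics":       0.99,    # finite-rate chemistry (frozen-flow) loss
}
isp = delivered_isp(450.0, losses)  # 450 s: illustrative ideal vacuum Isp
print(round(isp, 1))  # → 427.9
```

Treating the efficiencies as independent factors ignores interaction effects among losses, which is the same simplification the abstract notes is handled by assigning them arbitrarily.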
Discontinuous Galerkin Methods for NonLinear Differential Systems
NASA Technical Reports Server (NTRS)
Barth, Timothy; Mansour, Nagi (Technical Monitor)
2001-01-01
This talk considers simplified finite element discretization techniques for first-order systems of conservation laws equipped with a convex (entropy) extension. Using newly developed techniques in entropy symmetrization theory, simplified forms of the discontinuous Galerkin (DG) finite element method have been developed and analyzed. The use of symmetrization variables yields numerical schemes which inherit global entropy stability properties of the PDE (partial differential equation) system. Central to the development of the simplified DG methods is the Eigenvalue Scaling Theorem which characterizes right symmetrizers of an arbitrary first-order hyperbolic system in terms of scaled eigenvectors of the corresponding flux Jacobian matrices. A constructive proof is provided for the Eigenvalue Scaling Theorem with detailed consideration given to the Euler equations of gas dynamics and extended conservation law systems derivable as moments of the Boltzmann equation. Using results from kinetic Boltzmann moment closure theory, we then derive and prove energy stability for several approximate DG fluxes which have practical and theoretical merit.
Stecher, David; Bronkers, Glenn; Noest, Jappe O.T.; Tulleken, Cornelis A.F.; Hoefer, Imo E.; van Herwerden, Lex A.; Pasterkamp, Gerard; Buijsrogge, Marc P.
2014-01-01
To simplify and facilitate beating heart (i.e., off-pump), minimally invasive coronary artery bypass surgery, a new coronary anastomotic connector, the Trinity Clip, is developed based on the excimer laser-assisted nonocclusive anastomosis technique. The Trinity Clip connector enables simplified, sutureless, and nonocclusive connection of the graft to the coronary artery, and an excimer laser catheter laser-punches the opening of the anastomosis. Consequently, owing to the complete nonocclusive anastomosis construction, coronary conditioning (i.e., occluding or shunting) is not necessary, in contrast to the conventional anastomotic technique, hence simplifying the off-pump bypass procedure. Prior to clinical application in coronary artery bypass grafting, the safety and quality of this novel connector will be evaluated in a long-term experimental porcine off-pump coronary artery bypass (OPCAB) study. In this paper, we describe how to evaluate the coronary anastomosis in the porcine OPCAB model using various techniques to assess its quality. Representative results are summarized and visually demonstrated. PMID:25490000
Mathematical and computational model for the analysis of micro hybrid rocket motor
NASA Astrophysics Data System (ADS)
Stoia-Djeska, Marius; Mingireanu, Florin
2012-11-01
Hybrid rockets use a two-phase propellant system. In the present work we first develop a simplified model of the coupling of the hybrid combustion process with the complete unsteady flow, starting from the combustion port and ending with the nozzle. The physical and mathematical models are adapted to the simulation of micro hybrid rocket motors. The flow model is based on the one-dimensional Euler equations with source terms. The flow equations and the fuel regression rate law are solved in a coupled manner. The platform of the numerical simulations is an implicit fourth-order Runge-Kutta, second-order cell-centred finite volume method. The numerical results obtained with this model show a good agreement with published experimental and numerical results. The computational model developed in this work is simple, computationally efficient and offers the advantage of taking into account a large number of functional and constructive parameters that are used by the engineers.
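The structure of a cell-centred finite volume update with a source term can be sketched for a scalar 1D conservation law. The paper's scheme is an implicit Runge-Kutta method on the full Euler system, so this explicit upwind sketch is only a structural illustration under simplified assumptions.

```python
import numpy as np

def fv_step(u, dt, dx, a, source):
    """One explicit cell-centred finite-volume update for u_t + a*u_x = s(u)
    on a periodic domain, using first-order upwind fluxes (valid for a > 0)
    and a pointwise source term evaluated at cell centres."""
    flux = a * u
    dudt = -(flux - np.roll(flux, 1)) / dx + source(u)
    return u + dt * dudt

# Sanity check: a uniform field with zero source is preserved exactly.
u = np.ones(50)
u1 = fv_step(u, dt=0.01, dx=0.1, a=1.0, source=lambda u: np.zeros_like(u))
print(np.allclose(u1, u))
```

Coupling a regression-rate law, as in the paper, would amount to making the source term depend on the local flow state at the fuel surface.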
Diffendorfer, James E.; Beston, Julie A.; Merrill, Matthew; Stanton, Jessica C.; Corum, Margo D.; Loss, Scott R.; Thogmartin, Wayne E.; Johnson, Douglas H.; Erickson, Richard A.; Heist, Kevin W.
2016-01-01
For this study, a methodology was developed for assessing impacts of wind energy generation on populations of birds and bats at regional to national scales. The approach combines existing methods in applied ecology for prioritizing species in terms of their potential risk from wind energy facilities and estimating impacts of fatalities on population status and trend caused by collisions with wind energy infrastructure. Methods include a qualitative prioritization approach, demographic models, and potential biological removal. The approach can be used to prioritize species in need of more thorough study as well as to identify species with minimal risk. However, the components of this methodology require simplifying assumptions and the data required may be unavailable or of poor quality for some species. These issues should be carefully considered before using the methodology. The approach will increase in value as more data become available and will broaden the understanding of anthropogenic sources of mortality on bird and bat populations.
Yang, Defu; Wang, Lin; Chen, Dongmei; Yan, Chenggang; He, Xiaowei; Liang, Jimin; Chen, Xueli
2018-05-17
The reconstruction of bioluminescence tomography (BLT) is severely ill-posed due to the insufficient measurements and the diffuse nature of light propagation. A predefined permissible source region (PSR) combined with regularization terms is one common strategy to reduce such ill-posedness. However, the PSR is usually hard to determine and can easily be affected by subjective judgment. Hence, we theoretically developed a filtered maximum likelihood expectation maximization (fMLEM) method for BLT. Our method avoids predefining the PSR and provides a robust and accurate result for global reconstruction. In the method, the simplified spherical harmonics approximation (SP_N) was applied to characterize diffuse light propagation in the medium, and the statistical estimation-based MLEM algorithm combined with a filter function was used to solve the inverse problem. We systematically demonstrated the performance of our method with regular geometry- and digital mouse-based simulations and a liver cancer-based in vivo experiment. Graphical abstract: The filtered MLEM-based global reconstruction method for BLT.
Small Hydropower in the United States
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hadjerioua, Boualem; Johnson, Kurt
Small hydropower, defined in this report as hydropower with a generating capacity of up to 10 MW, typically built using existing dams, pipelines, and canals, has substantial opportunity for growth. Existing small hydropower comprises about 75% of the current US hydropower fleet in terms of number of plants. The economic feasibility of developing new small hydropower projects has substantially improved recently, making small hydropower the type of new hydropower development most likely to occur. In 2013, Congress unanimously approved changes to simplify federal permitting requirements for small hydropower, lowering costs and reducing the amount of time required to receive federal approvals. In 2014, Congress funded a new federal incentive payment program for hydropower, currently worth approximately 1.5 cents/kWh. Federal and state grant and loan programs for small hydropower are becoming available. Pending changes in federal climate policy could benefit all renewable energy sources, including small hydropower. Notwithstanding remaining barriers, development of new small hydropower is expected to accelerate in response to recent policy changes.
Psychopathology of catatonic speech disorders and the dilemma of catatonia: a selective review.
Ungvari, G S; White, E; Pang, A H
1995-12-01
Over the past decade there has been an upsurge of interest in the prevalence, nosological position, treatment response and pathophysiology of catatonia. However, the psychopathology of catatonia has received only scant attention. Once the hallmark of catatonia, speech disorders--particularly logorrhoea, verbigeration and echolalia--seem to have been neglected in modern literature. The aims of the present paper are to outline the conceptual history of catatonic speech disorders and to follow their development in contemporary clinical research. The English-language psychiatric literature for the last 60 years on logorrhoea, verbigeration and echolalia was searched through Medline and cross-referencing. Kahlbaum, Wernicke, Jaspers, Kraepelin, Bleuler, Kleist and Leonhard's oft cited classical texts supplemented the search. In contrast to classical psychopathological sources, very few recent papers were found on catatonic speech disorders. Current clinical research failed to incorporate the observations of traditional descriptive psychopathology. Modern catatonia research operates with simplified versions of psychopathological terms devised and refined by generations of classical writers.
Simplified analysis about horizontal displacement of deep soil under tunnel excavation
NASA Astrophysics Data System (ADS)
Tian, Xiaoyan; Gu, Shuancheng; Huang, Rongbin
2017-11-01
Most domestic scholars have focused on the law of soil settlement caused by subway tunnel excavation; studies of the law of horizontal displacement are lacking, and horizontal displacement data at arbitrary depth are difficult to obtain in practice. Several formulas exist for calculating soil layer settlement; compared with the integral solutions of Mindlin's classical elastic theory, stochastic medium theory, and source-sink theory, the Peck empirical formula is relatively simple and widely applicable. Considering the incompressibility of the rock and soil mass, and based on the plane strain principle, a formula for the horizontal displacement of the soil along the tunnel cross section was derived from the Peck settlement formula. The applicability of the formula is verified by comparison with existing engineering cases, and a simple, rapid analytical method for predicting horizontal displacement is presented.
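The Peck trough and a derived horizontal displacement can be sketched as follows. The relation u(x) = (x/z0)·S(x) is the standard simplified form obtained by assuming surface displacement vectors point toward the tunnel axis at depth z0; it is a stand-in for, not necessarily identical to, the formula derived in the paper.

```python
import math

def peck_settlement(x, s_max, i):
    """Peck surface settlement trough: S(x) = S_max * exp(-x^2 / (2 i^2)),
    with x the horizontal offset from the tunnel axis and i the trough width."""
    return s_max * math.exp(-x**2 / (2 * i**2))

def horizontal_displacement(x, s_max, i, z0):
    """Horizontal surface displacement under the common assumption that
    displacement vectors point toward the tunnel axis at depth z0:
    u(x) = (x / z0) * S(x)."""
    return (x / z0) * peck_settlement(x, s_max, i)
```

The sketch reproduces the expected qualitative behaviour: zero horizontal movement directly above the axis, antisymmetric displacement on either side, and decay far from the tunnel.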
ERIC Educational Resources Information Center
Szymanski, Theodore
1999-01-01
Discusses a common misunderstanding demonstrated by many students in basic mathematics courses: not knowing how to properly "cancel" factors in simplifying mathematical equations. Asserts that "crossing-out" or "canceling" is not a valid mathematical operation, and that instructors should be wary about using these terms because of the ease with…
DOT National Transportation Integrated Search
2014-05-01
Different problems in straight skewed steel I-girder bridges are often associated with the methods used for detailing the cross-frames. Use of theoretical terms to describe these detailing methods and absence of complete and simplified design approac...
A simplified method for prediction of long-term prestress loss in post-tensioned concrete bridges.
DOT National Transportation Integrated Search
2006-07-01
Creep and shrinkage of concrete and relaxation of prestressing steel cause time-dependent changes in the stresses and strains of concrete structures. These changes result in continuous reduction in the concrete compression stresses and in the ten...
Simplified spatiotemporal electromagnetic induction - salinity multi-field calibration
USDA-ARS?s Scientific Manuscript database
Salinity-affected farmlands are common in arid and semi-arid regions. To assure long-term sustainability of farming practices in these areas, soil salinity (ECe) should be routinely mapped and monitored. Salinity can be measured through soil sampling directed by geospatial measurements of apparent s...
The study of the Boltzmann equation of solid-gas two-phase flow with three-dimensional BGK model
NASA Astrophysics Data System (ADS)
Liu, Chang-jiang; Pang, Song; Xu, Qiang; He, Ling; Yang, Shao-peng; Qing, Yun-jie
2018-06-01
The motion of many solid-gas two-phase flows can be described by the Boltzmann equation. In order to simplify the Boltzmann equation, the convective-diffusion term is retained and the collision term is replaced by the three-dimensional Bhatnagar-Gross-Krook (BGK) model. The simplified Boltzmann equation is then solved by the homotopy perturbation method (HPM), and its approximate analytical solution is obtained. Through this analysis, it is shown that the analytical solution satisfies all the constraint conditions and that its form agrees with that of the solution obtained by the traditional Chapman-Enskog method, while the solving process of HPM is much simpler and more convenient. This preliminarily shows the effectiveness and rapidness of HPM for solving the Boltzmann equation. The results obtained herein provide some theoretical basis for the further study of dynamic models of solid-gas two-phase flows, such as the sturzstrom of high-speed distant landslides caused by microseism and sand storms caused by strong breeze.
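The BGK replacement of the collision term reduces the collision operator to relaxation toward local equilibrium, df/dt = (f_eq - f)/tau, which can be integrated exactly over a time step. A minimal sketch of that relaxation step (not the HPM solution procedure itself):

```python
import numpy as np

def bgk_relax(f, f_eq, tau, dt):
    """Exact integration of the BGK collision step df/dt = (f_eq - f) / tau:
    the distribution f relaxes exponentially toward the local equilibrium f_eq
    with relaxation time tau."""
    return f_eq + (f - f_eq) * np.exp(-dt / tau)
```

For dt much larger than tau the distribution collapses onto f_eq, which is the regime in which Chapman-Enskog-type expansions (and the HPM solution compared against them) are constructed.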
SPARQL Assist language-neutral query composer
2012-01-01
Background SPARQL query composition is difficult for the lay-person, and even the experienced bioinformatician in cases where the data model is unfamiliar. Moreover, established best-practices and internationalization concerns dictate that the identifiers for ontological terms should be opaque rather than human-readable, which further complicates the task of synthesizing queries manually. Results We present SPARQL Assist: a Web application that addresses these issues by providing context-sensitive type-ahead completion during SPARQL query construction. Ontological terms are suggested using their multi-lingual labels and descriptions, leveraging existing support for internationalization and language-neutrality. Moreover, the system utilizes the semantics embedded in ontologies, and within the query itself, to help prioritize the most likely suggestions. Conclusions To ensure success, the Semantic Web must be easily available to all users, regardless of locale, training, or preferred language. By enhancing support for internationalization, and moreover by simplifying the manual construction of SPARQL queries through the use of controlled-natural-language interfaces, we believe we have made some early steps towards simplifying access to Semantic Web resources. PMID:22373327
SPARQL assist language-neutral query composer.
McCarthy, Luke; Vandervalk, Ben; Wilkinson, Mark
2012-01-25
SPARQL query composition is difficult for the lay-person, and even the experienced bioinformatician in cases where the data model is unfamiliar. Moreover, established best-practices and internationalization concerns dictate that the identifiers for ontological terms should be opaque rather than human-readable, which further complicates the task of synthesizing queries manually. We present SPARQL Assist: a Web application that addresses these issues by providing context-sensitive type-ahead completion during SPARQL query construction. Ontological terms are suggested using their multi-lingual labels and descriptions, leveraging existing support for internationalization and language-neutrality. Moreover, the system utilizes the semantics embedded in ontologies, and within the query itself, to help prioritize the most likely suggestions. To ensure success, the Semantic Web must be easily available to all users, regardless of locale, training, or preferred language. By enhancing support for internationalization, and moreover by simplifying the manual construction of SPARQL queries through the use of controlled-natural-language interfaces, we believe we have made some early steps towards simplifying access to Semantic Web resources.
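The type-ahead behaviour described in both records, multi-lingual labels mapped to opaque term identifiers, can be sketched with a simple prefix match. The ontology entries below are illustrative, not taken from SPARQL Assist itself.

```python
def suggest(prefix, labels, limit=5):
    """Sketch of type-ahead completion: rank ontology terms whose labels
    (in any language) start with the typed prefix, shortest label first.
    `labels` maps an opaque term identifier to its list of labels."""
    p = prefix.lower()
    hits = [(uri, lab) for uri, labs in labels.items()
            for lab in labs if lab.lower().startswith(p)]
    hits.sort(key=lambda h: len(h[1]))
    return hits[:limit]

# Illustrative entries (identifiers and labels are hypothetical examples):
terms = {"obo:GO_0008150": ["biological process", "processus biologique"],
         "obo:GO_0003674": ["molecular function"]}
print(suggest("proc", terms))
```

Matching against every label, rather than the identifier, is what lets a French-speaking user reach the same opaque URI as an English-speaking one, the language-neutrality the papers emphasize.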
Translation invariant time-dependent massive gravity: Hamiltonian analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mourad, Jihad; Steer, Danièle A.; Noui, Karim, E-mail: mourad@apc.univ-paris7.fr, E-mail: karim.noui@lmpt.univ-tours.fr, E-mail: steer@apc.univ-paris7.fr
2014-09-01
The canonical structure of massive gravity in the first-order moving-frame formalism is studied. We work in the simplified context of translation invariant fields, with mass terms given by general non-derivative interactions, invariant under the diagonal Lorentz group, depending on the moving frame as well as a fixed reference frame. We prove that the only mass terms which give 5 propagating degrees of freedom are the dRGT mass terms, namely those which are linear in the lapse. We also complete the Hamiltonian analysis with the dynamical evolution of the system.
Towards a first design of a Newtonian-noise cancellation system for Advanced LIGO
NASA Astrophysics Data System (ADS)
Coughlin, M.; Mukund, N.; Harms, J.; Driggers, J.; Adhikari, R.; Mitra, S.
2016-12-01
Newtonian gravitational noise from seismic fields is predicted to be a limiting noise source at low frequency for second generation gravitational-wave detectors. Mitigation of this noise will be achieved by Wiener filtering using arrays of seismometers deployed in the vicinity of all test masses. In this work, we present optimized configurations of seismometer arrays using a variety of simplified models of the seismic field based on seismic observations at LIGO Hanford. The model that best fits the seismic measurements leads to noise reduction limited predominantly by seismometer self-noise. A first simplified design of seismic arrays for Newtonian-noise cancellation at the LIGO sites is presented, which suggests that it will be sufficient to monitor surface displacement inside the buildings.
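The Wiener-filter subtraction underlying the cancellation scheme can be sketched in the frequency domain. For a single witness channel (one seismometer) the optimal filter and the coherence-limited residual take the standard forms below; the multi-seismometer array case generalizes this to a matrix solve.

```python
import numpy as np

def wiener_filter(csd_xy, psd_xx):
    """Frequency-domain Wiener filter W(f) = S_xy(f) / S_xx(f) mapping the
    witness channel x (seismometer) onto the target channel y for subtraction."""
    return csd_xy / psd_xx

def residual_psd(psd_yy, csd_xy, psd_xx):
    """Residual target power after optimal subtraction:
    S_res(f) = S_yy(f) - |S_xy(f)|^2 / S_xx(f).
    Cancellation is limited by the witness-target coherence, hence by
    seismometer self-noise, as the paper finds."""
    return psd_yy - np.abs(csd_xy) ** 2 / psd_xx
```

In the fully coherent case (the witness sees exactly the noise coupling into the target) the residual vanishes; self-noise in the seismometers caps the achievable coherence and therefore the noise reduction.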
Orbital transfer vehicle launch operations study: Automated technology knowledge base, volume 4
NASA Technical Reports Server (NTRS)
1986-01-01
A simplified retrieval strategy for compiling automation-related bibliographies from NASA/RECON is presented. Two subsets of NASA Thesaurus subject terms were extracted: a primary list, which is used to obtain an initial set of citations; and a secondary list, which is used to limit or further specify a large initial set of citations. These subject term lists are presented in Appendix A as the Automated Technology Knowledge Base (ATKB) Thesaurus.
Li, Kun-tai; Wie, Sai-jin; Huang, Lin; Cheng, Xin
2012-02-01
The scale-up strategy for acarbose fermentation by Actinoplanes sp. A56 was explored in this paper. The results obtained in shake-flask cultivation demonstrated that the ratio of maltose and glucose had significant effects on the biosynthesis of acarbose, and the feeding medium containing 3:1 (mass ratio) maltose and glucose was favorable for acarbose production. The correlation of the carbon source concentration with acarbose production was then further investigated in a 100-l fermenter, and the results showed that 7.5-8.0 g of total sugar/100 ml and 4.0-4.5 g of reducing sugar/100 ml were optimal for acarbose production. Based on the results in the 100-l fermenter, an effective and simplified scale-up strategy was successfully established for acarbose fermentation in a 30-m³ fermenter, using total sugar and reducing sugar as the scale-up parameters. As a result, 4,327 mg of acarbose/l was obtained at 168 h of fermentation.
A theoretical study for the propagation of rolling noise over a porous road pavement
NASA Astrophysics Data System (ADS)
Keung Lui, Wai; Ming Li, Kai
2004-07-01
A simplified model based on the study of sound diffracted by a sphere is proposed for investigating the propagation of noise in a hornlike geometry between porous road surfaces and rolling tires. The simplified model is verified by comparing its predictions with the published numerical and experimental results of studies on the horn amplification of sound over a road pavement. In a parametric study, a point monopole source is assumed to be localized on the surface of a tire. In the frequency range of interest, a porous road pavement can effectively reduce the level of amplified sound due to the horn effect. It has been shown that an increase in the thickness and porosity of a porous layer, or the use of a double layer of porous road pavement, attenuates the horn amplification of sound. However, a decrease in the flow resistivity of a porous road pavement does little to reduce the horn amplification of sound. It has also been demonstrated that the horn effect over a porous road pavement is less dependent on the angular position of the source on the surface of tires.
IdentiPy: An Extensible Search Engine for Protein Identification in Shotgun Proteomics.
Levitsky, Lev I; Ivanov, Mark V; Lobas, Anna A; Bubis, Julia A; Tarasova, Irina A; Solovyeva, Elizaveta M; Pridatchenko, Marina L; Gorshkov, Mikhail V
2018-06-18
We present an open-source, extensible search engine for shotgun proteomics. Implemented in Python programming language, IdentiPy shows competitive processing speed and sensitivity compared with the state-of-the-art search engines. It is equipped with a user-friendly web interface, IdentiPy Server, enabling the use of a single server installation accessed from multiple workstations. Using a simplified version of X!Tandem scoring algorithm and its novel "autotune" feature, IdentiPy outperforms the popular alternatives on high-resolution data sets. Autotune adjusts the search parameters for the particular data set, resulting in improved search efficiency and simplifying the user experience. IdentiPy with the autotune feature shows higher sensitivity compared with the evaluated search engines. IdentiPy Server has built-in postprocessing and protein inference procedures and provides graphic visualization of the statistical properties of the data set and the search results. It is open-source and can be freely extended to use third-party scoring functions or processing algorithms and allows customization of the search workflow for specialized applications.
Breakdown simulations in a focused microwave beam within the simplified model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Semenov, V. E.; Rakova, E. I.; Glyavin, M. Yu.
2016-07-15
A simplified model is proposed to simulate numerically air breakdown in a focused microwave beam. The model is 1D from the mathematical point of view, but it takes into account the spatial non-uniformity of microwave field amplitude along the beam axis. The simulations are completed for different frequencies and different focal lengths of microwave beams. The results demonstrate complicated regimes of the breakdown evolution which represent a series of repeated ionization waves. These waves start at the focal point and propagate towards the incident microwave radiation. The ionization wave parameters vary during propagation. At relatively low frequencies, the propagation regime of subsequent waves can also change qualitatively. Each next ionization wave is less pronounced than the previous one, and the breakdown evolution approaches a steady state with relatively small plasma density. The ionization wave parameters are sensitive to a weak source of external ionization, but the steady state is independent of such a source. As the beam focal length decreases, the stationary plasma density increases and the onset of the steady state occurs faster.
NASA Technical Reports Server (NTRS)
Lester, H. C.; Posey, J. W.
1976-01-01
A discrete frequency study is made of the influence of source characteristics on the optimal properties of acoustically lined uniform and two section ducts. Two simplified sources, a plane wave and a monopole, are considered in some detail and over a greater frequency range than has been previously studied. Source and termination impedance effects are given limited examination. An example of a turbomachinery source and three associated source variants is also presented. Optimal liner designs based on modal theory approach the Cremer criterion at low frequencies and the geometric acoustics limit at high frequencies. Over an intermediate frequency range, optimal two section liners produced higher transmission losses than did the uniform configurations. Source distribution effects were found to have a significant effect on optimal liner design, but source and termination impedance effects appear to be relatively unimportant.
Recent progress on monolithic fiber amplifiers for next generation of gravitational wave detectors
NASA Astrophysics Data System (ADS)
Wellmann, Felix; Booker, Phillip; Hochheim, Sven; Theeg, Thomas; de Varona, Omar; Fittkau, Willy; Overmeyer, Ludger; Steinke, Michael; Weßels, Peter; Neumann, Jörg; Kracht, Dietmar
2018-02-01
Single-frequency fiber amplifiers in MOPA configuration operating at 1064 nm (Yb3+) and around 1550 nm (Er3+ or Er3+:Yb3+) are promising candidates to fulfill the challenging requirements of laser sources of the next generation of interferometric gravitational wave detectors (GWDs). Most probably, the next generation of GWDs is going to operate not only at 1064 nm but also at 1550 nm to cover a broader range of frequencies in which gravitational waves are detectable. We developed an engineering fiber amplifier prototype at 1064 nm emitting 215 W of linearly-polarized light in the TEM00 mode. The system consists of three modules: the seed source, the pre-amplifier, and the main amplifier. The modular design ensures reliable long-term operation, decreases system complexity and simplifies repairing and maintenance procedures. It also allows for the future integration of upgraded fiber amplifier systems without excessive downtimes. We also developed and characterized a fiber amplifier prototype at around 1550 nm that emits 100 W of linearly-polarized light in the TEM00 mode. This prototype uses an Er3+:Yb3+ codoped fiber that is pumped off-resonant at 940 nm. The off-resonant pumping scheme improves the Yb3+-to-Er3+ energy transfer and prevents excessive generation of Yb3+-ASE.
On the coupling of fluid dynamics and electromagnetism at the top of the earth's core
NASA Technical Reports Server (NTRS)
Benton, E. R.
1985-01-01
A kinematic approach to short-term geomagnetism has recently been based upon pre-Maxwell frozen-flux electromagnetism. A complete dynamic theory requires coupling fluid dynamics to electromagnetism. A geophysically plausible simplifying assumption for the vertical vorticity balance, namely that the vertical Lorentz torque is negligible, is introduced and its consequences are developed. The simplified coupled magnetohydrodynamic system is shown to conserve a variety of magnetic and vorticity flux integrals. These provide constraints on eligible models for the geomagnetic main field, its secular variation, and the horizontal fluid motions at the top of the core, and so permit a number of tests of the underlying assumptions.
Further investigation on "A multiplicative regularization for force reconstruction"
NASA Astrophysics Data System (ADS)
Aucejo, M.; De Smet, O.
2018-05-01
We have recently proposed a multiplicative regularization to reconstruct mechanical forces acting on a structure from vibration measurements. This method does not require any selection procedure for choosing the regularization parameter, since the amount of regularization is automatically adjusted throughout an iterative resolution process. The proposed iterative algorithm has been developed with performance and efficiency in mind, but it is actually a simplified version of a full iterative procedure not described in the original paper. The present paper aims at introducing the full resolution algorithm and comparing it with its simplified version in terms of computational efficiency and solution accuracy. In particular, it is shown that both algorithms lead to very similar identified solutions.
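The appeal of the approach, no manual selection of a regularization parameter, can be illustrated with a sketch in the spirit of an automatically adjusted regularization. This is a simplified stand-in, not the authors' exact multiplicative algorithm: here the Tikhonov parameter is re-set at each iteration from the current residual-to-solution ratio.

```python
import numpy as np

def auto_tikhonov(A, b, n_iter=50):
    """Iteratively re-adjusted Tikhonov regularization (illustrative sketch):
    the parameter alpha is chosen at each step as the ratio of the squared
    data residual to the squared solution norm, so no parameter-selection
    procedure (L-curve, GCV, ...) is needed."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(n_iter):
        alpha = np.linalg.norm(A @ x - b) ** 2 / max(np.linalg.norm(x) ** 2, 1e-30)
        x = np.linalg.solve(A.T @ A + alpha * np.eye(A.shape[1]), A.T @ b)
    return x
```

As in the paper's comparison of the full and simplified algorithms, the practical question is whether the cheaper iteration converges to essentially the same identified force for realistic noise levels.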
Simplifying field-scale assessment of spatiotemporal changes of soil salinity
USDA-ARS?s Scientific Manuscript database
Monitoring soil salinity (ECe) is important to properly plan agronomic and irrigation practices. Salinity can be readily measured through soil sampling directed by geospatial measurements of apparent soil electrical conductivity (ECa). Using data from a long-term (1999-2012) monitoring study at a 32...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Xibing; Dong, Longjun, E-mail: csudlj@163.com; Australian Centre for Geomechanics, The University of Western Australia, Crawley, 6009
This paper presents an efficient closed-form solution (ECS) for acoustic emission (AE) source location in three-dimensional structures using time difference of arrival (TDOA) measurements from N receivers, N ≥ 6. The nonlinear location equations of TDOA are simplified to linear equations. The unique analytical solution for AE sources in an unknown-velocity system is obtained by solving the linear equations. The proposed ECS method successfully solves the problems of location errors resulting from measured deviations of velocity, as well as the existence and multiplicity of solutions induced by calculations of square roots in existing closed-form methods.
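The linearization idea, squaring the range equations so the source coordinates and one reference distance appear linearly, can be sketched for the simpler known-velocity case. The paper's ECS additionally recovers an unknown wave speed; this sketch keeps v fixed for brevity.

```python
import numpy as np

def tdoa_locate(receivers, tdoa, v):
    """TDOA source location with KNOWN wave speed v.
    receivers: (N, 3) array of receiver coordinates.
    tdoa: arrival-time differences t_i - t_0 for i = 1..N-1.
    Writing d_i = d_0 + v*tau_i and squaring |r_i - s|^2 = d_i^2 gives, after
    subtracting the i = 0 equation, a system LINEAR in (x, y, z, d_0):
      -2 (r_i - r_0) . s - 2 v tau_i d_0 = v^2 tau_i^2 - (|r_i|^2 - |r_0|^2)."""
    r0, ri = receivers[0], receivers[1:]
    tau = np.asarray(tdoa)
    A = np.hstack([-2 * (ri - r0), (-2 * v * tau)[:, None]])
    b = v**2 * tau**2 - (np.sum(ri**2, axis=1) - np.sum(r0**2))
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:3]  # estimated source coordinates
```

Because the system is linear, there is no square-root branch to choose, which is exactly the existence/multiplicity problem of earlier closed-form methods that the paper addresses.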
NASA Astrophysics Data System (ADS)
Ohtsu, Masayasu
1991-04-01
An application of a moment tensor analysis to acoustic emission (AE) is studied to elucidate crack types and orientations of AE sources. In the analysis, simplified treatment is desirable, because hundreds of AE records are obtained from just one experiment and thus sophisticated treatment is realistically cumbersome. Consequently, a moment tensor inversion based on P wave amplitude is employed to determine six independent tensor components. Selecting only P wave portion from the full-space Green's function of homogeneous and isotropic material, a computer code named SiGMA (simplified Green's functions for the moment tensor analysis) is developed for the AE inversion analysis. To classify crack type and to determine crack orientation from moment tensor components, a unified decomposition of eigenvalues into a double-couple (DC) part, a compensated linear vector dipole (CLVD) part, and an isotropic part is proposed. The aim of the decomposition is to determine the proportion of shear contribution (DC) and tensile contribution (CLVD + isotropic) on AE sources and to classify cracks into a crack type of the dominant motion. Crack orientations determined from eigenvectors are presented as crack-opening vectors for tensile cracks and fault motion vectors for shear cracks, instead of stereonets. The SiGMA inversion and the unified decomposition are applied to synthetic data and AE waveforms detected during an in situ hydrofracturing test. To check the accuracy of the procedure, numerical experiments are performed on the synthetic waveforms, including cases with 10% random noise added. Results show reasonable agreement with assumed crack configurations. Although the maximum error is approximately 10% with respect to the ratios, the differences on crack orientations are less than 7°. AE waveforms detected by eight accelerometers deployed during the hydrofracturing test are analyzed. 
Crack types and orientations determined are in reasonable agreement with a predicted failure plane from borehole TV observation. The results suggest that tensile cracks are generated first at weak seams and then shear cracks follow on the opened joints.
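The unified eigenvalue decomposition into shear (DC), CLVD, and isotropic ratios can be sketched as a small linear solve on the normalized eigenvalues, following the SiGMA convention described above.

```python
import numpy as np

def sigma_decompose(M):
    """Unified decomposition of a symmetric moment tensor M into shear (DC),
    CLVD, and isotropic ratios (X, Y, Z), SiGMA convention:
      1     = X + Y + Z
      e2/e1 =     -Y/2 + Z
      e3/e1 = -X - Y/2 + Z
    where e1 >= e2 >= e3 are the eigenvalues normalized by the largest, e1."""
    e = np.sort(np.linalg.eigvalsh(M))[::-1]
    r2, r3 = e[1] / e[0], e[2] / e[0]
    A = np.array([[1.0, 1.0, 1.0],
                  [0.0, -0.5, 1.0],
                  [-1.0, -0.5, 1.0]])
    X, Y, Z = np.linalg.solve(A, np.array([1.0, r2, r3]))
    return X, Y, Z

# Pure double-couple (shear) source: eigenvalues (1, 0, -1) -> X = 1.
M_dc = np.diag([1.0, 0.0, -1.0])
print(sigma_decompose(M_dc))
```

Classifying a source as shear-dominant or tensile-dominant then reduces to thresholding the shear ratio X, with the tensile contribution carried by Y + Z (CLVD plus isotropic).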
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, C; Lin, H; Chuang, K
2016-06-15
Purpose: To monitor the activity distribution and needle position during and after implantation in operating rooms. Methods: Simulation studies were conducted to assess the feasibility of measuring the activity distribution and localizing seeds using the DuPECT system. The system consists of a LaBr3-based probe and planar detection heads, a collimation system, and a coincidence circuit. The two heads can be manipulated independently. Simplified Yb-169 brachytherapy seeds were used. A water-filled cylindrical phantom with a 40-mm diameter and 40-mm length was used to model a simplified prostate of an Asian man. Two simplified seeds were placed at a radial distance of 10 mm and a tangential distance of 10 mm from the center of the phantom. The probe head was arranged perpendicular to the planar head. Results for various imaging durations were analyzed and the accuracy of the seed localization was assessed by calculating the centroid of the seed. Results: The reconstructed images indicate that the DuPECT can measure the activity distribution and locate seeds dwelling in different positions intraoperatively. The calculated centroid on average was accurate to within the pixel size of 0.5 mm. The two sources were identified when the duration was longer than 15 s. The sensitivity measured in water was merely 0.07 cps/MBq. Conclusion: Preliminary results show that measurement of the activity distribution and seed localization are feasible using the DuPECT system intraoperatively. This indicates the DuPECT system has potential as an approach for dose distribution validation. The efficacy of activity distribution measurement and source localization using the DuPECT system will be evaluated in more realistic phantom studies (e.g., various attenuation materials and greater numbers of seeds) in future investigations.
A moist Boussinesq shallow water equations set for testing atmospheric models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zerroukat, M., E-mail: mohamed.zerroukat@metoffice.gov.uk; Allen, T.
The shallow water equations have long been used as an initial test for numerical methods applied to atmospheric models, with the test suite of Williamson et al. being used extensively for validating new schemes and assessing their accuracy. However, the lack of physics forcing within this simplified framework often requires numerical techniques to be reworked when applied to fully three-dimensional models. In this paper a novel two-dimensional shallow water equations system that retains moist processes is derived. This system is derived from a three-dimensional Boussinesq approximation of the hydrostatic Euler equations where, unlike the classical shallow water set, we allow the density to vary slightly with temperature. This results in extra (buoyancy) terms in the momentum equations, through which a two-way moist-physics dynamics feedback is achieved. The temperature and moisture variables are advected as separate tracers with sources that interact with the mean flow through a simplified yet realistic bulk moist-thermodynamic phase-change model. This moist shallow water system provides a unique tool to assess the usually complex and highly non-linear dynamics–physics interactions in atmospheric models in a simple yet realistic way. The full non-linear shallow water equations are solved numerically on several case studies, and the results suggest quite realistic interaction between the dynamics and physics, in particular the generation of cloud and rain. - Highlights: • Novel shallow water equations that retain moist processes are derived from the three-dimensional hydrostatic Boussinesq equations. • The new shallow water set can be seen as a more general one, where the classical equations are a special case of these equations. • This moist shallow water system naturally allows a feedback mechanism from the moist physics increments to the momentum via buoyancy. • Like full models, temperature and moisture are advected as tracers that interact through a simplified yet realistic phase-change model. • This model is a unique tool to test numerical methods for atmospheric models, and physics–dynamics coupling, in a very realistic and simple way.
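As a schematic illustration only (the symbols, the exact placement of the buoyancy term, and the source terms are assumptions, not the authors' derivation), a moist shallow water set of this kind can be written as:

```latex
\begin{align}
  \partial_t h + \nabla\cdot(h\,\mathbf{u}) &= 0, \\
  \partial_t \mathbf{u} + (\mathbf{u}\cdot\nabla)\mathbf{u}
    &= -g\nabla h \;-\; \tfrac{1}{2}\,h\,\nabla b
       \qquad \text{(extra buoyancy term, } b \text{ tied to the temperature anomaly)}, \\
  \partial_t \theta + \mathbf{u}\cdot\nabla\theta &= S_\theta(q_v, q_c), \\
  \partial_t q_v + \mathbf{u}\cdot\nabla q_v &= -C + E, \qquad
  \partial_t q_c + \mathbf{u}\cdot\nabla q_c = C - E - P,
\end{align}
```

where C, E, and P stand for condensation, evaporation, and precipitation exchanges in a bulk phase-change model; the buoyancy gradient in the momentum equation is the two-way dynamics–physics feedback channel the abstract describes.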
de Vargas-Sansalvador, I M Pérez; Fay, C; Phelan, T; Fernández-Ramos, M D; Capitán-Vallvey, L F; Diamond, D; Benito-Lopez, F
2011-08-12
A new system for CO(2) measurement (0-100%) based on a paired emitter-detector diode arrangement as a colorimetric detection system is described. Two different configurations were tested: configuration 1 (an opposite side configuration) where a secondary inner-filter effect accounts for CO(2) sensitivity. This configuration involves the absorption of the phosphorescence emitted from a CO(2)-insensitive luminophore by an acid-base indicator and configuration 2 wherein the membrane containing the luminophore is removed, simplifying the sensing membrane that now only contains the acid-base indicator. In addition, two different instrumental configurations have been studied, using a paired emitter-detector diode system, consisting of two LEDs wherein one is used as the light source (emitter) and the other is used in reverse bias mode as the light detector. The first configuration uses a green LED as emitter and a red LED as detector, whereas in the second case two identical red LEDs are used as emitter and detector. The system was characterised in terms of sensitivity, dynamic response, reproducibility, stability and temperature influence. We found that configuration 2 presented a better CO(2) response in terms of sensitivity. Copyright © 2011 Elsevier B.V. All rights reserved.
Variational estimation of process parameters in a simplified atmospheric general circulation model
NASA Astrophysics Data System (ADS)
Lv, Guokun; Koehl, Armin; Stammer, Detlef
2016-04-01
Parameterizations are used to simulate the effects of unresolved sub-grid-scale processes in current state-of-the-art climate models. The values of the process parameters, which determine the model's climatology, are usually adjusted manually to reduce the difference between the model mean state and the observed climatology. This process requires detailed knowledge of the model and its parameterizations. In this work, a variational method was used to estimate process parameters in the Planet Simulator (PlaSim). The adjoint code was generated using automatic differentiation of the source code. Some hydrological processes were switched off to remove the influence of zero-order discontinuities. In addition, the nonlinearity of the model limits the feasible assimilation window to about 1 day, which is too short to tune the model's climatology. To extend the feasible assimilation window, nudging terms for all state variables were added to the model's equations, which essentially suppress all unstable directions. In identical twin experiments, we found that the feasible assimilation window could be extended to over one year and accurate parameters could be retrieved. Although the nudging terms transform into a damping of the adjoint variables and therefore tend to erase the information of the data over time, assimilating climatological information is shown to provide sufficient information on the parameters. Moreover, the mechanism of this regularization is discussed.
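The structure of such an identical-twin experiment with nudging can be illustrated on a toy scalar model (everything below is an illustrative sketch, not PlaSim; a derivative-free grid refinement stands in for the adjoint-based optimizer, and in this stable linear toy the nudging term simply keeps the trajectory close to the data rather than suppressing instabilities):

```python
import numpy as np

# Toy identical-twin experiment: estimate the decay parameter p in
#   dx/dt = -p*x + sin(t),
# with a nudging term gamma*(x_obs - x) added during assimilation.
def integrate(p, x0, ts, x_obs=None, gamma=0.0):
    dt = ts[1] - ts[0]
    x = np.empty_like(ts)
    x[0] = x0
    for i in range(1, len(ts)):
        tend = -p * x[i - 1] + np.sin(ts[i - 1])
        if x_obs is not None:
            tend += gamma * (x_obs[i - 1] - x[i - 1])   # nudging toward the data
        x[i] = x[i - 1] + dt * tend
    return x

ts = np.linspace(0.0, 20.0, 1001)
p_true = 0.7
x_obs = integrate(p_true, 1.0, ts)                      # synthetic "observations"

def cost(p):
    x = integrate(p, 1.0, ts, x_obs=x_obs, gamma=0.5)   # nudged assimilation run
    return np.mean((x - x_obs) ** 2)                    # misfit over the window

# coarse-to-fine search for the parameter minimizing the misfit
ps = np.linspace(0.1, 1.5, 15)
for _ in range(6):
    costs = [cost(pp) for pp in ps]
    p = ps[int(np.argmin(costs))]
    d = ps[1] - ps[0]
    ps = np.linspace(p - d, p + d, 15)
print(round(p, 3))
```

Because the truth and the assimilation run share the same integrator, the misfit vanishes exactly at the true parameter, so the search recovers p ≈ 0.7 — the same logic the paper exploits at vastly larger scale.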
Swain, Eric D.; Decker, Jeremy D.; Hughes, Joseph D.
2014-01-01
In this paper, the authors present an analysis of the magnitude of the temporal and spatial acceleration (inertial) terms in the surface-water flow equations and determine the conditions under which these inertial terms have sufficient magnitude to be required in the computations. Data from two South Florida field sites are examined and the relative magnitudes of temporal acceleration, spatial acceleration, and the gravity and friction terms are compared. Parameters are derived by using dimensionless numbers and applied to quantify the significance of the hydrodynamic effects. The time series of the ratio of the inertial and gravity terms from field sites are presented and compared with both a simplified indicator parameter and a more complex parameter called the Hydrodynamic Significance Number (HSN). Two test-case models were developed by using the SWIFT2D hydrodynamic simulator to examine flow behavior with and without the inertial terms and compute the HSN. The first model represented one of the previously-mentioned field sites during gate operations of a structure-managed coastal canal. The second model was a synthetic test case illustrating the drainage of water down a sloped surface from an initial stage while under constant flow. The analyses indicate that the times of substantial hydrodynamic effects are sporadic but significant. The simplified indicator parameter correlates much better with the hydrodynamic effect magnitude for a constant width channel such as Miami Canal than at the non-uniform North River. Higher HSN values indicate flow situations where the inertial terms are large and need to be taken into account.
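The kind of screening described above can be sketched as a term-by-term magnitude comparison in the 1-D momentum equation (the paper's exact HSN definition is not reproduced here; the time series and gradients below are synthetic stand-ins for field data):

```python
import numpy as np

# Compare magnitudes of the terms in the 1-D momentum equation
#   (1/g) dV/dt + (V/g) dV/dx + dh/dx = S0 - Sf
# for a synthetic, gate-like oscillating velocity/stage record.
g = 9.81
t = np.linspace(0.0, 3600.0, 361)              # 1 h at 10 s resolution
V = 0.5 + 0.3 * np.sin(2 * np.pi * t / 900)    # velocity (m/s)
dx = 500.0                                     # reach length (m)
dVdx = 0.3 * np.sin(2 * np.pi * t / 900) / dx  # assumed spatial velocity gradient
dhdx = 0.05 / dx                               # assumed water-surface slope

local = np.abs(np.gradient(V, t)) / g          # temporal (local) acceleration
conv = np.abs(V * dVdx) / g                    # spatial (convective) acceleration
grav = np.full_like(t, dhdx)                   # gravity/pressure slope term

ratio = (local + conv) / grav                  # inertia-to-gravity indicator
print(f"max inertia/gravity ratio: {ratio.max():.2f}")
```

When this ratio approaches or exceeds unity, as it does sporadically in the synthetic record, the inertial terms cannot be dropped from the computation — the condition the paper's indicator parameters are designed to flag.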
ERIC Educational Resources Information Center
Lee, Liangshiu
2010-01-01
The basis sets for symmetry operations of d¹ to d⁹ complexes in an octahedral field and the resulting terms are derived for the ground states and spin-allowed excited states. The basis sets are of fundamental importance in group theory. This work addresses such a fundamental issue, and the results are pedagogically…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haverkamp, B.; Krone, J.; Shybetskyi, I.
2013-07-01
The Radioactive Waste Disposal Facility (RWDF) Buryakovka was constructed in 1986 as part of the intervention measures after the accident at Chernobyl NPP (ChNPP). Today, the surface repository for solid low- and intermediate-level waste (LILW) is still being operated, but its maximum capacity is nearly reached. Long-existing plans for increasing the capacity of the facility shall be implemented in the framework of the European Commission INSC Programme (Instrument for Nuclear Safety Co-operation). Within the first phase of this project, DBE Technology GmbH prepared a safety analysis report of the facility in its current state (SAR) and a preliminary safety analysis report (PSAR) for a future extended facility based on the planned enlargement. In addition to a detailed mathematical model, simplified models have also been developed to verify the results of the former and enhance confidence in the results. Comparison of the results shows that - depending on the boundary conditions - simplifications like modeling the multi-trench repository as one generic trench might have very limited influence on the overall results compared with the general uncertainties associated with such long-term calculations. In addition to their value for the verification of more complex models, which is important to increase confidence in the overall results, such simplified models also offer the possibility to carry out time-consuming calculations, like probabilistic calculations or detailed sensitivity analyses, in an economic manner. (authors)
Arora, Amit; Al-Salti, Ibrahim; Murad, Hussam; Tran, Quang; Itaoui, Rhonda; Bhole, Sameer; Ajwani, Shilpi; Jones, Charlotte; Manohar, Narendar
2018-01-10
The purpose of this study was to gain an in-depth understanding of Arabic-speaking mothers' views on the usefulness of existing oral health education leaflets aimed at young children, and also to record their views on tailored versions of these leaflets. This qualitative study was nested within a large ongoing birth cohort study in South Western Sydney, Australia. Arabic-speaking mothers (n = 19) with young children were purposively selected and approached for a semi-structured interview. Two original English leaflets giving advice on young children's oral health were sent to mothers prior to the interview. On the day of the interview, mothers were given simplified-English and Arabic versions of both leaflets and were asked to compare the three versions. Interviews were audio-recorded, subsequently transcribed verbatim, and analysed by thematic analysis. Ethical approval was obtained from the Human Research Ethics Committees of the former Sydney South West Area Health Service, the University of Sydney, and Western Sydney University. Mothers reported that the simplified-English and Arabic versions of the leaflets were useful sources of information. Although many mothers favoured the simplified version over the original English leaflets, the majority favoured the leaflets in Arabic. Ideally, a "dual Arabic - simplified English leaflet" was preferred. The understanding of key health messages was optimised through a simple layout and visual images. There is a need to tailor oral health education leaflets for Arabic-speaking migrants. Producers of dental leaflets should also consider a "dual Arabic - simplified English leaflet" to improve the oral health knowledge of Arabic-speaking migrants. The use of a simple layout and pictures assists Arabic-speaking migrants to understand the content of dental leaflets.
Zhao, Feng; Zhou, Jidong; Han, Siqin; Ma, Fang; Zhang, Ying; Zhang, Jie
2016-04-01
Aerobic production of rhamnolipid by Pseudomonas aeruginosa has been extensively studied, but the effect of medium composition on anaerobic production of rhamnolipid by P. aeruginosa was unknown. A simplified medium facilitating anaerobic production of rhamnolipid is urgently needed for in situ microbial enhanced oil recovery (MEOR). Medium factors affecting anaerobic production of rhamnolipid were investigated using P. aeruginosa SG (GenBank accession number KJ995745). The medium composition for anaerobic production of rhamnolipid by P. aeruginosa differs from that for aerobic production. Both hydrophobic substrates and organic nitrogen inhibited rhamnolipid production under anaerobic conditions. Glycerol and nitrate were the best carbon and nitrogen sources. The N limitation commonly used under aerobic conditions was not conducive to rhamnolipid production under anaerobic conditions, because initial cell growth demanded enough nitrate for anaerobic respiration; however, rhamnolipid was also rapidly accumulated under nitrogen starvation conditions. Sufficient phosphate was needed for anaerobic production of rhamnolipid, and SO4(2-) and Mg(2+) are also required. These results will contribute to isolating bacterial strains that can anaerobically produce rhamnolipid and to medium optimization for anaerobic production of rhamnolipid. Based on medium optimization by response surface methodology and the ion composition of reservoir formation water, a simplified medium containing 70.3 g/l glycerol, 5.25 g/l NaNO3, 5.49 g/l KH2PO4, 6.9 g/l K2HPO4·3H2O and 0.40 g/l MgSO4 was designed. Using this simplified medium, 630 mg/l of rhamnolipid was produced by strain SG, and the anaerobic culture emulsified crude oil to EI24 = 82.5%. The simplified medium is promising for in situ MEOR applications.
NASA Astrophysics Data System (ADS)
Steenhuis, T. S.; Azzaino, Z.; Hoang, L.; Pacenka, S.; Worqlul, A. W.; Mukundan, R.; Stoof, C.; Owens, E. M.; Richards, B. K.
2017-12-01
The New York City source watersheds in the Catskill Mountains' humid, temperate climate have long-term hydrological and water quality monitoring data. These are among the few catchments where implementation of source and landscape management practices has led to decreased phosphorus concentrations in the receiving surface waters. One of the reasons is that landscape measures correctly targeted the saturated variable source runoff areas (VSAs) in the valley bottoms as the location where most of the runoff and other nonpoint pollutants originated. Measures targeting these areas were instrumental in lowering phosphorus concentrations. Further improvements in water quality can be made based on a better understanding of the flow processes and water table fluctuations in the VSAs. For that reason, we instrumented a self-contained upland variable source watershed with a landscape characteristic of the Catskill watersheds: soil underlain by glacial till at shallow depth. In this presentation, we will discuss our experimental findings and present a mathematical model. Variable source areas have a small slope, making gravity the driving force for the flow and greatly simplifying the simulation of the flow processes. The experimental data and the model simulations agreed for both outflow and water table fluctuations. We found that while the flows to the outlet were similar throughout the year, the discharge of the VSA varied greatly. This was due to transpiration by the plants, which became active when soil temperatures were above 10 °C. We found that shortly after the temperature increased above 10 °C, the baseflow stopped, and surface runoff occurred only when rainstorms exceeded the storage capacity of the soil in at least a portion of the variable source area.
Since plant growth in the variable source area was a major variable determining the baseflow behavior, future changes in temperature - affecting the duration of the growing season - will affect baseflow and the related transport of nutrients and other chemicals many times more than small temperature-related increases in potential evaporation rate. This in turn will directly change the water availability and pollutant transport in the many surface source watersheds with variable source area hydrology.
A Simplified Model of Choice Behavior under Uncertainty
Lin, Ching-Hung; Lin, Yu-Kai; Song, Tzu-Jiun; Huang, Jong-Tsun; Chiu, Yao-Chu
2016-01-01
The Iowa Gambling Task (IGT) has been standardized as a clinical assessment tool (Bechara, 2007). Nonetheless, numerous research groups have attempted to modify IGT models to optimize parameters for predicting the choice behavior of normal controls and patients. A decade ago, most researchers considered the expected utility (EU) model (Busemeyer and Stout, 2002) to be the optimal model for predicting choice behavior under uncertainty. However, in recent years, studies have demonstrated that models with the prospect utility (PU) function are more effective than the EU models in the IGT (Ahn et al., 2008). Nevertheless, after some preliminary tests based on our behavioral dataset and modeling, it was determined that the Ahn et al. (2008) PU model is not optimal due to some incompatible results. This study aims to modify the Ahn et al. (2008) PU model to a simplified model and used the IGT performance of 145 subjects as the benchmark data for comparison. In our simplified PU model, the best goodness-of-fit was found mostly as the value of α approached zero. More specifically, we retested the key parameters α, λ, and A in the PU model. Notably, the influence of the parameters α, λ, and A has a hierarchical power structure in terms of manipulating the goodness-of-fit in the PU model. Additionally, we found that the parameters λ and A may be ineffective when the parameter α is close to zero in the PU model. The present simplified model demonstrated that decision makers mostly adopted the strategy of gain-stay loss-shift rather than foreseeing the long-term outcome. However, there are other behavioral variables that are not well revealed under these dynamic-uncertainty situations. Therefore, the optimal behavioral models may not have been found yet. In short, the best model for predicting choice behavior under dynamic-uncertainty situations should be further evaluated. PMID:27582715
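A prospect-type value function of the kind referenced above can be sketched as follows (the Tversky-Kahneman functional form is assumed here; the abstract does not spell out its exact equations). It makes the reported parameter hierarchy plausible: as α approaches zero, all gains collapse toward 1 and all losses toward -λ, so outcome magnitudes stop carrying information:

```python
# Sketch of a prospect-type utility (standard Tversky-Kahneman form; an
# assumption, not necessarily the exact model of the abstract).
# alpha curves the value function; lam (loss aversion) scales losses.
def utility(x, alpha, lam):
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** alpha

# As alpha -> 0, gains flatten toward 1 and losses toward -lam, so the
# magnitude of an outcome matters less and less.
for alpha in (1.0, 0.5, 0.01):
    print(alpha, round(utility(100, alpha, 2.25), 3), round(utility(-50, alpha, 2.25), 3))
```

With a nearly flat value function, only the sign of the last outcome drives the next choice — which is exactly the gain-stay, loss-shift behavior the simplified model attributes to the subjects.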
The Instructional Cost Index. A Simplified Approach to Interinstitutional Cost Comparison.
ERIC Educational Resources Information Center
Beatty, George, Jr.; And Others
The paper describes a simple, yet effective method of computing a comparative index of instructional costs. The Instructional Cost Index identifies direct cost differentials among instructional programs. Cost differentials are described in terms of differences among numerical values of variables that reflect fundamental academic and resource…
Graphene shield enhanced photocathodes and methods for making the same
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moody, Nathan Andrew
Disclosed are graphene shield enhanced photocathodes, such as high QE photocathodes. In certain embodiments, a monolayer graphene shield membrane ruggedizes a high quantum efficiency photoemission electron source by protecting a photosensitive film of the photocathode, extending operational lifetime and simplifying its integration in practical electron sources. In certain embodiments of the disclosed graphene shield enhanced photocathodes, the graphene serves as a transparent shield that does not inhibit photon or electron transmission but isolates the photosensitive film of the photocathode from reactive gas species, preventing contamination and yielding longer lifetime.
In vivo time-gated diffuse correlation spectroscopy at quasi-null source-detector separation.
Pagliazzi, M; Sekar, S Konugolu Venkata; Di Sieno, L; Colombo, L; Durduran, T; Contini, D; Torricelli, A; Pifferi, A; Mora, A Dalla
2018-06-01
We demonstrate time domain diffuse correlation spectroscopy at quasi-null source-detector separation by using a fast time-gated single-photon avalanche diode without the need of time-tagging electronics. This approach allows for increased photon collection, simplified real-time instrumentation, and reduced probe dimensions. Depth discriminating, quasi-null distance measurement of blood flow in a human subject is presented. We envision the miniaturization and integration of matrices of optical sensors of increased spatial resolution and the enhancement of the contrast of local blood flow changes.
NASA Technical Reports Server (NTRS)
Seabaugh, A. C.; Mattauch, R. J.
1983-01-01
In-place process for etching and growth of gallium arsenide calls for presaturation of etch and growth melts by arsenic source crystal. Procedure allows precise control of thickness of etch and newly grown layer on substrate. Etching and deposition setup is expected to simplify processing and improve characteristics of gallium arsenide lasers, high-frequency amplifiers, and advanced integrated circuits.
Growing Bladder-Cancer Cells In Three-Dimensional Clusters
NASA Technical Reports Server (NTRS)
Spaulding, Glenn F.; Prewett, Tacey L.; Goodwin, Thomas J.
1995-01-01
Artificial growth process helps fill gaps in cancer research. Cell cultures more accurate as models for in vivo studies and as sources of seed cells for in vivo studies. Effected in horizontal rotating bioreactor described in companion article, "Simplified Bioreactor for Growing Mammalian Cells" (MSC-22060). Provides aggregates of cells needed to fill many of gaps.
Choosing the Adequate Level of Graded Readers--Preliminary Study
ERIC Educational Resources Information Center
Prtljaga, Jelena; Palinkaševic, Radmila; Brkic, Jovana
2015-01-01
Graded readers have been used as second language teaching material since the end of the Second World War. They are an important source of simplified material which provides comprehensible input on all levels. It is of crucial importance for a successful usage of graded readers in the classroom and in studies which focus on graded readers, that an…
Recent assessments have analyzed the health impacts of PM2.5 from emissions from different locations and sectors using simplified or reduced-form air quality models. Here we present an alternative approach using the adjoint of the Community Multiscale Air Quality (CMAQ) model, wh...
An inexpensive autosampler for a DART/TOFMS provides mass spectra from analytes absorbed on 76 cotton swab and wipe samples in 7.5 min. A field sample carrier simplifies sample collection and provides swabs nearly ready for analysis to the lab. Applications of the high throughput pr...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, F; Park, J; Barraclough, B
2016-06-15
Purpose: To develop an efficient and accurate independent dose calculation algorithm with a simplified analytical source model for the quality assurance and safe delivery of Flattening Filter Free (FFF)-IMRT on an Elekta Versa HD. Methods: The source model consisted of a point source and a 2D bivariate Gaussian source, respectively modeling the primary photons and the combined effect of head scatter, monitor chamber backscatter and the collimator exchange effect. The in-air fluence was first calculated by back-projecting the edges of the beam-defining devices onto the source plane and integrating the visible source distribution. The effects of the rounded MLC leaf end, tongue-and-groove and interleaf transmission were taken into account in the back-projection. The in-air fluence was then modified with a fourth-degree polynomial modeling the cone-shaped dose distribution of FFF beams. The planar dose distribution was obtained by convolving the in-air fluence with a dose deposition kernel (DDK) consisting of the sum of three 2D Gaussian functions. The parameters of the source model and the DDK were commissioned using measured in-air output factors (Sc) and cross-beam profiles, respectively. A novel method was used to eliminate the volume-averaging effect of ion chambers in determining the DDK. Planar dose distributions of five head-and-neck FFF-IMRT plans were calculated and compared against measurements performed with a 2D diode array (MapCHECK™) to validate the accuracy of the algorithm. Results: The proposed source model predicted Sc for both 6 MV and 10 MV with an accuracy better than 0.1%. With a stringent gamma criterion (2%/2 mm/local difference), the passing rate of the FFF-IMRT dose calculation was 97.2±2.6%. Conclusion: The removal of the flattening filter represents a simplification of the head structure which allows the use of a simpler source model for very accurate dose calculation.
The proposed algorithm offers an effective way to ensure the safe delivery of FFF-IMRT.
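The fluence-times-kernel pipeline can be sketched numerically as follows (all field sizes, polynomial coefficients, and Gaussian weights below are invented for illustration; the commissioned model in the abstract derives them from measured Sc values and profiles):

```python
import numpy as np

# Illustrative pipeline: (1) open-field in-air fluence, (2) a 4th-degree
# radial polynomial for the cone-shaped FFF profile, (3) convolution with
# a dose deposition kernel built as a sum of three 2-D Gaussians.
n, dx = 101, 0.2                                   # 101x101 grid, 0.2 cm pixels
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
R = np.hypot(X, Y)

fluence = ((np.abs(X) <= 5) & (np.abs(Y) <= 5)).astype(float)  # 10x10 cm field
cone = 1.0 - 0.002 * R**2 + 1e-5 * R**4            # assumed even-order fall-off
fluence *= np.clip(cone, 0.0, None)

def gauss2d(sigma):
    g = np.exp(-R**2 / (2 * sigma**2))
    return g / g.sum()                             # unit-sum kernel component

kernel = 0.7 * gauss2d(0.3) + 0.25 * gauss2d(1.0) + 0.05 * gauss2d(3.0)

# FFT convolution; periodic edges are acceptable because the kernel is
# narrow compared with the grid.
dose = np.real(np.fft.ifft2(np.fft.fft2(fluence) *
                            np.fft.fft2(np.fft.ifftshift(kernel))))
print(round(dose.max(), 3))
```

The sum-of-Gaussians kernel is what keeps such an independent check fast: the convolution is a handful of FFTs, so a planar dose plane can be recomputed in well under a second for comparison against diode-array measurements.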
Assessing uncertainty in radar measurements on simplified meteorological scenarios
NASA Astrophysics Data System (ADS)
Molini, L.; Parodi, A.; Rebora, N.; Siccardi, F.
2006-02-01
A three-dimensional radar simulator model (RSM) developed by Haase (1998) is coupled with the nonhydrostatic mesoscale weather forecast model Lokal-Modell (LM). The radar simulator is able to model reflectivity measurements by using the following meteorological fields, generated by Lokal Modell, as inputs: temperature, pressure, water vapour content, cloud water content, cloud ice content, rain sedimentation flux and snow sedimentation flux. This work focuses on the assessment of some uncertainty sources associated with radar measurements: absorption by the atmospheric gases, e.g., molecular oxygen, water vapour, and nitrogen; attenuation due to the presence of a highly reflecting structure between the radar and a "target structure". RSM results for a simplified meteorological scenario, consisting of a humid updraft on a flat surface and four cells placed around it, are presented.
compomics-utilities: an open-source Java library for computational proteomics.
Barsnes, Harald; Vaudel, Marc; Colaert, Niklaas; Helsens, Kenny; Sickmann, Albert; Berven, Frode S; Martens, Lennart
2011-03-08
The growing interest in the field of proteomics has increased the demand for software tools and applications that process and analyze the resulting data. And even though the purpose of these tools can vary significantly, they usually share a basic set of features, including the handling of protein and peptide sequences, the visualization of (and interaction with) spectra and chromatograms, and the parsing of results from various proteomics search engines. Developers typically spend considerable time and effort implementing these support structures, which detracts from working on the novel aspects of their tool. In order to simplify the development of proteomics tools, we have implemented an open-source support library for computational proteomics, called compomics-utilities. The library contains a broad set of features required for reading, parsing, and analyzing proteomics data. compomics-utilities is already used by a long list of existing software, ensuring library stability and continued support and development. As a user-friendly, well-documented and open-source library, compomics-utilities greatly simplifies the implementation of the basic features needed in most proteomics tools. Implemented in 100% Java, compomics-utilities is fully portable across platforms and architectures. Our library thus allows the developers to focus on the novel aspects of their tools, rather than on the basic functions, which can contribute substantially to faster development, and better tools for proteomics.
A Novel Low-Power-Consumption All-Fiber-Optic Anemometer with Simple System Design.
Zhang, Yang; Wang, Fang; Duan, Zhihui; Liu, Zexu; Liu, Zigeng; Wu, Zhenlin; Gu, Yiying; Sun, Changsen; Peng, Wei
2017-09-14
A compact and low-power-consumption fiber-optic anemometer based on a single-walled carbon nanotube (SWCNT)-coated tilted fiber Bragg grating (TFBG) is presented. A TFBG, as a near-infrared in-fiber sensing element, is able to excite a number of cladding modes and radiation modes in the fiber and effectively couple light in the core to interact with the fiber's surrounding medium. It is an ideal in-fiber device for a fiber hot-wire anemometer (HWA), serving as both coupling and sensing element to simplify the sensing head structure. The fabricated TFBG was immobilized with an SWCNT film on the fiber surface. SWCNTs, an innovative nanomaterial, were utilized as the light-heat conversion medium instead of traditional metallic materials, owing to their excellent infrared light absorption and competitive thermal conductivity. When the SWCNT film strongly absorbs the light in the fiber, the sensor head is heated and forms a "hot wire". When the sensor is placed in a wind field, the wind takes away heat from the sensor, resulting in a temperature variation that is then accurately measured by the TFBG. Benefiting from the high coupling and absorption efficiency, a single broadband light source (BBS) was shared for both heating and sensing, without any extra pump laser complicating the system. This not only significantly reduces power consumption, but also simplifies the whole sensing system at lower cost. In experiments, the key parameters of the sensor, such as the film thickness and the inherent angle of the TFBG, were fully investigated. It was demonstrated that, under a very low BBS input power of 9.87 mW, a 0.100 nm wavelength response could still be detected as the wind speed changed from 0 to 2 m/s. In addition, the sensitivity was found to be -0.0346 nm/(m/s) at a wind speed of 1 m/s.
The proposed simple and low-power-consumption wind speed sensing system exhibits promising potential for future long-term remote monitoring and on-chip sensing in practical applications.
Liao, Renkuan; Yang, Peiling; Wu, Wenyong; Ren, Shumei
2016-01-01
The widespread use of superabsorbent polymers (SAPs) in arid regions improves the efficiency of local land and water use. However, SAPs' repeated absorption and release of water has periodic and unstable effects on both the soil's physical and chemical properties and on the growth of plant roots, which complicates modeling of water movement in SAP-treated soils. In this paper, we propose a model of soil water movement for SAP-treated soils. The residence time of SAP in the soil and the duration of the experiment were treated as the same parameter t. This simplifies previously proposed models in which the residence time of SAP in the soil and the experiment's duration were treated as two independent parameters. Numerical testing was carried out on the inverse method of estimating the source/sink term of root water uptake in the model of soil water movement under the effect of SAP. The test results show that the time interval, hydraulic parameters, test error, and instrument precision had a significant influence on the stability of the inverse method, while the time step, layering of soil, and boundary conditions had relatively smaller effects. A comprehensive analysis of the method's stability, calculation, and accuracy suggests that the proposed inverse method applies if the following conditions are satisfied: the time interval is between 5 d and 17 d; the time step is between 1000 and 10000; the test error is ≥ 0.9; the instrument precision is ≤ 0.03; and the rate of soil surface evaporation is ≤ 0.6 mm/d. PMID:27505000
Cable equation for general geometry
NASA Astrophysics Data System (ADS)
López-Sánchez, Erick J.; Romero, Juan M.
2017-02-01
The cable equation describes the voltage in a straight cylindrical cable, and this model has been employed to describe the electrical potential in dendrites and axons. However, this equation can give incorrect predictions for some realistic geometries, in particular when the radius of the cable changes significantly. Cables with a nonconstant radius are important for some phenomena; for example, discrete swellings along axons appear in neurodegenerative diseases such as Alzheimer's disease, Parkinson's disease, human immunodeficiency virus-associated dementia, and multiple sclerosis. In this paper, using the Frenet-Serret frame, we propose a generalized cable equation for a general cable geometry. This generalized equation depends on geometric quantities such as the curvature and torsion of the cable. We show that when the cable has a constant circular cross section, the first fundamental form of the cable can be simplified and the generalized cable equation depends on neither the curvature nor the torsion of the cable. Additionally, we find an exact solution for an ideal cable which has a particular variable circular cross section and zero curvature. For this case we show that when the cross section of the cable increases, the voltage decreases. Inspired by this ideal case, we rewrite the generalized cable equation as a diffusion equation with a source term generated by the cable geometry. This source term depends on the cable's cross-sectional area and its derivatives. In addition, we study different cables with swellings and provide their numerical solutions. The numerical solutions show that when the cross section of the cable has abrupt changes, its voltage is smaller than the voltage in the cylindrical cable. Furthermore, these numerical solutions show that the voltage can be affected by geometrical inhomogeneities on the cable.
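A minimal numerical sketch of the idea, using a standard Rall-type passive cable with variable radius a(x) in dimensionless units (this form and all numbers are assumptions for illustration; it is not the paper's Frenet-Serret derivation), compares the steady voltage profile of a cable with a localized swelling against a uniform cylinder:

```python
import numpy as np

# Passive cable with variable radius a(x), dimensionless Rall-type form
#   dV/dt = (1/(2a)) d/dx( a^2 dV/dx ) - V,
# clamped to V = 1 at x = 0 and sealed at the far end. Explicit Euler is
# run to a steady state; a Gaussian bump in a(x) models a swelling.
def steady_voltage(radius, L=6.0, n=120, dt=4e-4, steps=25000):
    dx = L / n
    x = np.linspace(0.0, L, n + 1)
    a = radius(x)
    a2_face = 0.5 * (a[1:]**2 + a[:-1]**2)        # a^2 at cell faces
    V = np.zeros(n + 1)
    V[0] = 1.0                                    # clamped proximal end
    for _ in range(steps):
        flux = a2_face * np.diff(V) / dx          # axial current ~ a^2 dV/dx
        V[1:-1] += dt * (np.diff(flux) / dx / (2.0 * a[1:-1]) - V[1:-1])
        V[-1] = V[-2]                             # sealed distal end
    return x, V

x, V_uniform = steady_voltage(lambda x: np.ones_like(x))
x, V_swollen = steady_voltage(lambda x: 1.0 + 2.0 * np.exp(-(x - 3.0)**2))
i = np.searchsorted(x, 4.5)                       # a point distal to the swelling
print(f"uniform: {V_uniform[i]:.2e}  swollen: {V_swollen[i]:.2e}")
```

The comparison at a point distal to the swelling shows how the geometry term reshapes the voltage profile; the paper reports that abrupt cross-section changes reduce the voltage relative to the cylindrical cable.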
Simplified models of dark matter with a long-lived co-annihilation partner
NASA Astrophysics Data System (ADS)
Khoze, Valentin V.; Plascencia, Alexis D.; Sakurai, Kazuki
2017-06-01
We introduce a new set of simplified models to address the effects of 3-point interactions between the dark matter particle, its dark co-annihilation partner, and the Standard Model degree of freedom, which we take to be the tau lepton. The contributions from dark matter co-annihilation channels are highly relevant for a determination of the correct relic abundance. We investigate these effects as well as the discovery potential for dark matter co-annihilation partners at the LHC. A small mass splitting between the dark matter and its partner is preferred by the co-annihilation mechanism and suggests that the co-annihilation partners may be long-lived (stable or meta-stable) at collider scales. It is argued that such long-lived electrically charged particles can be looked for at the LHC in searches for anomalous charged tracks. This approach and the underlying models provide an alternative and a complement to the mono-jet and multi-jet based dark matter searches widely used in the context of simplified models with s-channel mediators. We consider four types of simplified models with different particle spins and coupling structures. Some of these models are manifestly gauge invariant and renormalizable; others would ultimately require a UV completion. These can be realised in terms of supersymmetric models in the neutralino-stau co-annihilation regime, as well as models with extra dimensions or composite models.
Realistic simplified gaugino-higgsino models in the MSSM
NASA Astrophysics Data System (ADS)
Fuks, Benjamin; Klasen, Michael; Schmiemann, Saskia; Sunder, Marthijn
2018-03-01
We present simplified MSSM models for light neutralinos and charginos with realistic mass spectra and realistic gaugino-higgsino mixing that can be used in experimental searches at the LHC. The formerly used naive approach of defining mass spectra and mixing matrix elements manually and independently of each other does not yield genuine MSSM benchmarks. We suggest the use of less simplified, but realistic, MSSM models whose mass spectra and mixing matrix elements are the result of a proper matrix diagonalisation. We propose a novel strategy targeting the design of such benchmark scenarios, accounting for user-defined constraints in terms of masses and particle mixing. We apply it to the higgsino case and implement a scan in the four relevant underlying parameters {μ, tan β, M1, M2} for a given set of light neutralino and chargino masses. We define a measure for the quality of the obtained benchmarks, which also includes criteria to assess the higgsino content of the resulting charginos and neutralinos. We finally discuss the distribution of the resulting models in the MSSM parameter space as well as their implications for supersymmetric dark matter phenomenology.
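The "proper matrix diagonalisation" in the parameters {μ, tan β, M1, M2} can be illustrated with the tree-level neutralino mass matrix. The sketch below (real parameters, no radiative corrections, a hypothetical higgsino-like sample point) shows how masses and the higgsino content of the lightest state fall out of one eigendecomposition rather than being set by hand:

```python
import numpy as np

def neutralino_masses(M1, M2, mu, tan_beta, mZ=91.19, sw2=0.231):
    """Tree-level MSSM neutralino mass matrix in the (bino, wino,
    higgsino_d, higgsino_u) basis, diagonalised numerically."""
    sw, cw = np.sqrt(sw2), np.sqrt(1.0 - sw2)
    beta = np.arctan(tan_beta)
    sb, cb = np.sin(beta), np.cos(beta)
    M = np.array([
        [M1,            0.0,           -mZ * sw * cb,  mZ * sw * sb],
        [0.0,           M2,             mZ * cw * cb, -mZ * cw * sb],
        [-mZ * sw * cb, mZ * cw * cb,   0.0,          -mu],
        [ mZ * sw * sb, -mZ * cw * sb, -mu,            0.0],
    ])
    vals, vecs = np.linalg.eigh(M)          # real symmetric matrix
    order = np.argsort(np.abs(vals))        # sort states by physical mass
    return np.abs(vals[order]), vecs[:, order]

# Hypothetical higgsino-like point: mu well below the gaugino masses
masses, mix = neutralino_masses(M1=1000.0, M2=1000.0, mu=200.0, tan_beta=10.0)
higgsino_frac = np.sum(mix[2:, 0] ** 2)     # higgsino content of lightest state
```

The quality measure mentioned in the abstract would then be built from `higgsino_frac`-type quantities and the distance of `masses` from the user-requested spectrum.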
Revisiting simplified dark matter models in terms of AMS-02 and Fermi-LAT
NASA Astrophysics Data System (ADS)
Li, Tong
2018-01-01
We perform an analysis of simplified dark matter models in the light of cosmic ray observables from AMS-02 and Fermi-LAT. We assume a fermion, scalar, or vector dark matter particle with a leptophobic spin-0 mediator that couples only to Standard Model quarks and dark matter via scalar and/or pseudo-scalar bilinears. The propagation and injection parameters of cosmic rays are determined by the observed fluxes of nuclei from AMS-02. We find that the AMS-02 observations are consistent with the dark matter framework within the uncertainties. The AMS-02 antiproton data prefer a dark matter mass of 30 (50) GeV to 5 TeV and require an effective annihilation cross section in the region of 4 × 10^-27 (7 × 10^-27) to 4 × 10^-24 cm^3/s for the simplified fermion (scalar and vector) dark matter models. Cross sections below 2 × 10^-26 cm^3/s can evade the constraint from Fermi-LAT dwarf galaxies for a dark matter mass of about 100 GeV.
Advanced propulsion for LEO-Moon transport. 3: Transportation model. M.S. Thesis - California Univ.
NASA Technical Reports Server (NTRS)
Henley, Mark W.
1992-01-01
A simplified computational model of a low Earth orbit-Moon transportation system has been developed to provide insight into the benefits of new transportation technologies. A reference transportation infrastructure, based upon near-term technology developments, is used as a departure point for assessing other, more advanced alternatives. Comparison of the benefits of technology application, measured in terms of a mass payback ratio, suggests that several of the advanced technology alternatives could substantially improve the efficiency of low Earth orbit-Moon transportation.
SWOT analysis of the renewable energy sources in Romania - case study: solar energy
NASA Astrophysics Data System (ADS)
Lupu, A. G.; Dumencu, A.; Atanasiu, M. V.; Panaite, C. E.; Dumitrașcu, Gh; Popescu, A.
2016-08-01
The evolution of the energy sector worldwide has triggered intense interest both in finding alternative renewable energy sources and in addressing environmental issues. Romania is considered to have the technological potential and geographical location suitable for using renewable energy to generate electricity. But this high potential is not fully exploited in the context of policies and regulations adopted globally and, more specifically, of European Union (EU) environmental and energy strategies and legislation related to renewable energy sources. This SWOT analysis of the solar energy source presents the state of the art, the potential, and the future prospects for the development of renewable energy in Romania. The analysis concluded that the development of the solar energy sector in Romania depends largely on: the viability of the legislative framework on renewable energy sources, increased subsidies for solar R&D, a simplified methodology for green certificates, and educating the public, investors, developers and decision-makers.
Desiderata for a Computer-Assisted Audit Tool for Clinical Data Source Verification Audits
Duda, Stephany N.; Wehbe, Firas H.; Gadd, Cynthia S.
2013-01-01
Clinical data auditing often requires validating the contents of clinical research databases against source documents available in health care settings. Currently available data audit software, however, does not provide features necessary to compare the contents of such databases to source data in paper medical records. This work enumerates the primary weaknesses of using paper forms for clinical data audits and identifies the shortcomings of existing data audit software, as informed by the experiences of an audit team evaluating data quality for an international research consortium. The authors propose a set of attributes to guide the development of a computer-assisted clinical data audit tool to simplify and standardize the audit process. PMID:20841814
DeNovoGUI: An Open Source Graphical User Interface for de Novo Sequencing of Tandem Mass Spectra
2013-01-01
De novo sequencing is a popular technique in proteomics for identifying peptides from tandem mass spectra without having to rely on a protein sequence database. Despite the strong potential of de novo sequencing algorithms, their adoption threshold remains quite high. We here present a user-friendly and lightweight graphical user interface called DeNovoGUI for running parallelized versions of the freely available de novo sequencing software PepNovo+, greatly simplifying the use of de novo sequencing in proteomics. Our platform-independent software is freely available under the permissible Apache2 open source license. Source code, binaries, and additional documentation are available at http://denovogui.googlecode.com. PMID:24295440
DeNovoGUI: an open source graphical user interface for de novo sequencing of tandem mass spectra.
Muth, Thilo; Weilnböck, Lisa; Rapp, Erdmann; Huber, Christian G; Martens, Lennart; Vaudel, Marc; Barsnes, Harald
2014-02-07
De novo sequencing is a popular technique in proteomics for identifying peptides from tandem mass spectra without having to rely on a protein sequence database. Despite the strong potential of de novo sequencing algorithms, their adoption threshold remains quite high. We here present a user-friendly and lightweight graphical user interface called DeNovoGUI for running parallelized versions of the freely available de novo sequencing software PepNovo+, greatly simplifying the use of de novo sequencing in proteomics. Our platform-independent software is freely available under the permissible Apache2 open source license. Source code, binaries, and additional documentation are available at http://denovogui.googlecode.com .
Understanding the Behavior of the Oligomeric Fractions During Pyrolysis Oils Upgrading
NASA Astrophysics Data System (ADS)
Stankovikj, Filip
Fast pyrolysis oils are among the most viable renewable sources for the production of fuels and chemicals, and they could supplement a significant portion of depleting fossil fuels in the near future. Progress on their utilization is impeded by their thermal and storage instability and by a lack of understanding of their complex composition and behavior during upgrading, including the poorly described water-soluble fraction (WS). This work offers two new methodologies for a simplified and sensible description of pyrolysis oils in terms of functional groups and chemical macro-families, augments our understanding of the composition of the WS, and describes the behavior of the heavy non-volatile fraction during pyrolysis oil stabilization. The concept of analyzing the volatile and non-volatile fractions in terms of functional groups is introduced, and the quantification power of spectroscopic techniques (FTIR, 1H-NMR, UV fluorescence) for phenols and carbonyl and carboxyl groups is shown. The FT-ICR-MS van Krevelen diagram revealed the importance of dehydration reactions in pyrolysis oils, and the presence of "pyrolytic humins" was hypothesized. For the first time, the WS was analyzed with a plethora of analytical techniques. This led to the proposition of a new characterization scheme based on functional groups, describing 90-100 wt.% of the bio-oils. The structure of the idealized "pyrolytic humin" was further described as a random combination of 3-8 units of dehydrated sugars, coniferyl-type phenols, furans, and carboxylic acids attached to a 2,5-dioxo-6-hydroxyhexanal (DHH) backbone rich in carbonyl groups. TG-FTIR studies resulted in rules for fitting the DTG curves of pyrolysis oils and for assigning the TG residue. This second method is reliable for estimating the water content, light volatiles, WS, and WIS. Finally, the stabilization of two oils was analyzed through the prism of functional groups. Carbonyl and hydroxyl groups interconverted. A first attempt to follow the NMR-silent oxygen by 31P-NMR was presented; the O content was reduced from 6 to 2%, which correlated well with the additional water formed. The water formation increased with stabilization temperature (3 to 10%) and was dominated by repolymerization rather than deoxygenation. This last study presents a methodological framework for the analysis of pyrolysis oil hydrotreatment; it simplifies the modeling of these systems, which is vital for further understanding of bio-oil upgrading.
Long-term strategy for the statistical design of a forest health monitoring system
Hans T. Schreuder; Raymond L. Czaplewski
1993-01-01
A conceptual framework is given for a broad-scale survey of forest health that accomplishes three objectives: generate descriptive statistics; detect changes in such statistics; and simplify analytical inferences that identify, and possibly establish cause-effect relationships. Our paper discusses the development of sampling schemes to satisfy these three objectives,...
48 CFR 52.213-4 - Terms and Conditions-Simplified Acquisitions (Other Than Commercial Items)
Code of Federal Regulations, 2013 CFR
2013-10-01
... States, Puerto Rico, or the U.S. Virgin Islands). (iv) 52.222-35, Equal Opportunity for Veterans (SEP... performed in the United States, District of Columbia, Puerto Rico, the Northern Mariana Islands, American....) (For purposes of this clause, United States includes the 50 States, the District of Columbia, Puerto...
48 CFR 52.213-4 - Terms and Conditions-Simplified Acquisitions (Other Than Commercial Items)
Code of Federal Regulations, 2011 CFR
2011-10-01
... States, Puerto Rico, or the U.S. Virgin Islands). (iii) 52.222-35, Equal Opportunity for Veterans (SEP... performed in the United States, District of Columbia, Puerto Rico, the Northern Mariana Islands, American....) (For purposes of this clause, United States includes the 50 States, the District of Columbia, Puerto...
Feynman's and Ohta's Models of a Josephson Junction
ERIC Educational Resources Information Center
De Luca, R.
2012-01-01
The Josephson equations are derived by means of the weakly coupled two-level quantum system model given by Feynman. Adopting a simplified version of Ohta's model, starting from Feynman's model, the strict voltage-frequency Josephson relation is derived. The contribution of Ohta's approach to the comprehension of the additional term given by the…
The Co-Evolution of Knowledge and Event Memory
ERIC Educational Resources Information Center
Nelson, Angela B.; Shiffrin, Richard M.
2013-01-01
We present a theoretical framework and a simplified simulation model for the co-evolution of knowledge and event memory, both termed SARKAE (Storing and Retrieving Knowledge and Events). Knowledge is formed through the accrual of individual events, a process that operates in tandem with the storage of individual event memories. In 2 studies, new…
Characterizing the discoloration of EBT3 films in solar UV A+B measurement using red LED
NASA Astrophysics Data System (ADS)
Omar, Ahmad Fairuz; Osman, Ummi Shuhada; Tan, Kok Chooi
2017-09-01
This research article proposes an alternative method to measure the discoloration, or color change, of EBT3 films caused by exposure to a solar ultraviolet (UV A+B) dose. Common methods to measure the color change of EBT3 are an imaging technique using a flatbed scanner and absorbance spectroscopy using a visible spectrometer. The research presented in this article measures the color change of EBT3 with a simplified optical system combining a light-emitting diode (LED) as the light source and a photodiode as the detector. In this research, 50 pieces of Gafchromic EBT3 film were prepared with dimensions of 3 cm x 2 cm. The color of the films changed from light green to dark green according to the total accumulated UV dose (mJ/cm2) received by each film, which depends on the duration of exposure, the irradiance level (mW/cm2), and the condition of the sky. The exposed films were then taken to the laboratory for color measurement using the absorbance spectroscopy technique and the newly developed simplified LED-photodiode optical instrument. Results from the spectroscopy technique indicate that wavelengths within the red region exhibit a better response in terms of linearity and responsivity towards the colors of EBT3 films. A wavelength of 626 nm was therefore selected as the peak emission wavelength for the LED-photodiode absorbance system. UV dose measurement using the LED-photodiode system produced good results, with a coefficient of determination, R2, of 0.97 and a root mean square error, RMSE, of 431.82 mJ/cm2, while the same wavelength analyzed from the spectroscopy dataset produced an R2 of 0.988 and an RMSE of 268.94 mJ/cm2.
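The quoted R2 and RMSE figures come from a linear calibration of dose against absorbance. A sketch of that computation on synthetic data (the slope, intercept, and noise level are invented for illustration, not the article's measurements):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical calibration set: UV dose (mJ/cm^2) vs. 626 nm absorbance
dose = np.linspace(0.0, 20000.0, 25)
absorbance = 0.25 + 4.0e-5 * dose + rng.normal(0.0, 0.02, dose.size)

# Least-squares line predicting dose from absorbance
slope, intercept = np.polyfit(absorbance, dose, 1)
dose_pred = slope * absorbance + intercept

ss_res = np.sum((dose - dose_pred) ** 2)
ss_tot = np.sum((dose - dose.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot                       # coefficient of determination
rmse = np.sqrt(np.mean((dose - dose_pred) ** 2)) # RMSE in dose units
```

The paper's comparison between the LED-photodiode system and the spectrometer amounts to computing `r2` and `rmse` for each instrument's calibration set.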
Vertically-integrated Approaches for Carbon Sequestration Modeling
NASA Astrophysics Data System (ADS)
Bandilla, K.; Celia, M. A.; Guo, B.
2015-12-01
Carbon capture and sequestration (CCS) is being considered as an approach to mitigate anthropogenic CO2 emissions from large stationary sources such as coal fired power plants and natural gas processing plants. Computer modeling is an essential tool for site design and operational planning as it allows prediction of the pressure response as well as the migration of both CO2 and brine in the subsurface. Many processes, such as buoyancy, hysteresis, geomechanics and geochemistry, can have important impacts on the system. While all of the processes can be taken into account simultaneously, the resulting models are computationally very expensive and require large numbers of parameters which are often uncertain or unknown. In many cases of practical interest, the computational and data requirements can be reduced by choosing a smaller domain and/or by neglecting or simplifying certain processes. This leads to a series of models with different complexity, ranging from coupled multi-physics, multi-phase three-dimensional models to semi-analytical single-phase models. Under certain conditions the three-dimensional equations can be integrated in the vertical direction, leading to a suite of two-dimensional multi-phase models, termed vertically-integrated models. These models are either solved numerically or simplified further (e.g., assumption of vertical equilibrium) to allow analytical or semi-analytical solutions. This presentation focuses on how different vertically-integrated models have been applied to the simulation of CO2 and brine migration during CCS projects. Several example sites, such as the Illinois Basin and the Wabamun Lake region of the Alberta Basin, are discussed to show how vertically-integrated models can be used to gain understanding of CCS operations.
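The semi-analytical single-phase end of the model hierarchy described above can be illustrated with a Theis-type radial pressure solution around an injector. The parameter values below are assumptions for illustration, not site data for the Illinois or Alberta basins:

```python
import numpy as np

def well_function(u, nmax=60):
    """Theis well function W(u) = E1(u) from its convergent series (u < ~10)."""
    u = np.asarray(u, dtype=float)
    s = -0.5772156649015329 - np.log(u)    # Euler-Mascheroni constant
    p = np.ones_like(u)
    for n in range(1, nmax + 1):
        p = p * u / n                      # u^n / n!
        s = s + (-1) ** (n + 1) * p / n
    return s

# Assumed (hypothetical) lumped parameters for a single-phase estimate
Q = 0.05        # volumetric injection rate, m^3/s
T = 1.0e-6      # transmissivity-like group k*H/mu, m^3/(Pa*s)
S = 1.0e-8      # storativity-like compressibility group, 1/Pa
r, t = 500.0, 3.15e7                       # observation radius (m), ~1 year (s)

u = r**2 * S / (4.0 * T * t)
dp = Q / (4.0 * np.pi * T) * well_function(u)   # pressure buildup, Pa
```

A vertically-integrated two-phase model would wrap a CO2-plume thickness calculation around the same radially symmetric pressure field; this single-phase solution is the cheapest member of the hierarchy.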
An Extensible Sensing and Control Platform for Building Energy Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rowe, Anthony; Berges, Mario; Martin, Christopher
2016-04-03
The goal of this project is to develop Mortar.io, an open-source BAS platform designed to simplify data collection, archiving, event scheduling and coordination of cross-system interactions. Mortar.io is optimized for (1) robustness to network outages, (2) ease of installation using plug-and-play and (3) scalable support for small to large buildings and campuses.
Going to the Source: A Practical Way to Simplify the FAFSA
ERIC Educational Resources Information Center
Asher, Lauren
2007-01-01
There is widespread agreement that the complexity of the current Free Application for Federal Student Aid (FAFSA) is a barrier to college access and success. One indication is the large and growing number of lower income college students who do not apply for aid, even though they are likely eligible for a Pell grant: an estimated 1.5 million in…
ERIC Educational Resources Information Center
Harrison, Sandra; Morgan, Roger
2012-01-01
There is an increasing sensitivity to the challenges posed by the language of examination papers and of instruction in scientific subjects, especially for non-native speakers of English. It has been observed that in addition to technical subject-specific vocabulary, non-technical words such as instructional verbs have been sources of difficulty,…
Simplifying Complexity: Miriam Blake--Los Alamos National Laboratory Research Library, NM
ERIC Educational Resources Information Center
Library Journal, 2004
2004-01-01
The holy grail for many research librarians is one-stop searching: seamless access to all the library's resources on a topic, regardless of the source. Miriam Blake, Library Without Walls Project Leader at Los Alamos National laboratory (LANL), is making this vision a reality. Blake is part of a growing cadre of experts: a techie who is becoming a…
The design and construction of a cost-efficient confocal laser scanning microscope
NASA Astrophysics Data System (ADS)
Xi, Peng; Rajwa, Bartlomiej; Jones, James T.; Robinson, J. Paul
2007-03-01
The optical dissection ability of confocal microscopy makes it a powerful tool for studying biological materials. However, the cost and complexity of confocal scanning laser microscopy hinder its wide application in education. We describe the construction of a simplified confocal scanning laser microscope and demonstrate three-dimensional projection based on cost-efficient commercial hardware, together with available open source software.
Open source integrated modeling environment Delta Shell
NASA Astrophysics Data System (ADS)
Donchyts, G.; Baart, F.; Jagers, B.; van Putten, H.
2012-04-01
In the last decade, integrated modelling has become a very popular topic in environmental modelling, since it helps to solve problems that are difficult to address with a single model. However, managing the complexity of integrated models and minimizing the time required for their setup remains a challenging task. The integrated modelling environment Delta Shell simplifies this task. The software components of Delta Shell are easy to reuse separately from each other as well as part of an integrated environment that can run in a command-line or a graphical user interface mode. Most components of Delta Shell are developed using the C# programming language and include libraries used to define, save and visualize various scientific data structures as well as coupled model configurations. Here we present two examples showing how Delta Shell simplifies the process of setting up integrated models from the end-user and developer perspectives. The first example shows the coupling of a rainfall-runoff model, a river flow model and a run-time control model. The second example shows how a coastal morphological database integrates with the coastal morphological model (XBeach) and a custom nourishment designer. Delta Shell is also available as open-source software released under the LGPL license and accessible via http://oss.deltares.nl.
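The first example, coupling a rainfall-runoff model to a river flow model by exchanging fluxes each time step, can be sketched with two toy components. The class names and equations are illustrative stand-ins, not Delta Shell's actual C# API:

```python
class RainfallRunoff:
    """Toy linear-reservoir rainfall-runoff model."""
    def __init__(self, k=0.2, storage=0.0):
        self.k, self.storage = k, storage
    def step(self, rainfall):
        self.storage += rainfall           # fill the reservoir
        runoff = self.k * self.storage     # linear release
        self.storage -= runoff
        return runoff

class RiverFlow:
    """Toy routing model: discharge relaxes toward the lateral inflow."""
    def __init__(self, discharge=0.0, alpha=0.5):
        self.discharge, self.alpha = discharge, alpha
    def step(self, inflow):
        self.discharge += self.alpha * (inflow - self.discharge)
        return self.discharge

# Driver loop: the integrated environment's job is to pass each model's
# output to the next model at every shared time step.
rain = [10.0, 0.0, 0.0, 0.0, 0.0]          # a single rain pulse
rr, river = RainfallRunoff(), RiverFlow()
hydrograph = [river.step(rr.step(p)) for p in rain]
```

A run-time control model, the third component in the paper's example, would sit inside the same loop and adjust parameters (e.g. a weir setting) based on `river.discharge`.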
A Landing Gear Noise Reduction Study Based on Computational Simulations
NASA Technical Reports Server (NTRS)
Khorrami, Mehdi R.; Lockard, David P.
2006-01-01
Landing gear is one of the more prominent airframe noise sources. Techniques that diminish gear noise and suppress its radiation to the ground are highly desirable. Using a hybrid computational approach, this paper investigates the noise reduction potential of devices added to a simplified main landing gear model without small scale geometric details. The Ffowcs Williams and Hawkings equation is used to predict the noise at far-field observer locations from surface pressure data provided by unsteady CFD calculations. Because of the simplified nature of the model, most of the flow unsteadiness is restricted to low frequencies. The wheels, gear boxes, and oleo appear to be the primary sources of unsteadiness at these frequencies. The addition of fairings around the gear boxes and wheels, and the attachment of a splitter plate on the downstream side of the oleo significantly reduces the noise over a wide range of frequencies, but a dramatic increase in noise is observed at one frequency. The increased flow velocities, a consequence of the more streamlined bodies, appear to generate extra unsteadiness around other parts giving rise to the additional noise. Nonetheless, the calculations demonstrate the capability of the devices to improve overall landing gear noise.
Di Carlo, Flavia; Poletti, Stefania; Bulgarini, Alessandra; Munari, Francesca; Negri, Stefano; Stocchero, Matteo; Ceoldo, Stefania; Avesani, Linda; Assfalg, Michael; Zoccatelli, Gianni; Guzzo, Flavia
2017-01-01
Fruits of the sweet cherry (Prunus avium L.) accumulate a range of antioxidants that can help to prevent cardiovascular disease, inflammation and cancer. We tested the in vitro antioxidant activity of 18 sweet cherry cultivars collected from 12 farms in the protected geographical indication region of Marostica (Vicenza, Italy) during two growing seasons. Multiple targeted and untargeted metabolomics approaches (NMR, LC-MS, HPLC-DAD, HPLC-UV) as well as artificial simplified phytocomplexes representing the cultivars Sandra Tardiva, Sandra and Grace Star were then used to determine whether the total antioxidant activity reflected the additive effects of each compound or resulted from synergistic interactions. We found that the composition of each cultivar depended more on genetic variability than on environmental factors. Furthermore, phenolic compounds were the principal source of antioxidant activity, and experiments with artificial simplified phytocomplexes indicated strong synergy between the anthocyanins and quercetins/ascorbic acid specifically in the cultivar Sandra Tardiva. Our data therefore indicate that the total antioxidant activity of sweet cherry fruits may originate from cultivar-dependent interactions among different classes of metabolite. PMID:28732012
Yu, Guo-Wei; Nie, Jing; Song, Zhi-Yu; Li, Zu-Guang; Lee, Maw-Rong; Wang, Shen-Peng
2017-11-01
Simultaneous distillation extraction (SDE) is quite useful for separating volatile compounds from an analyte when their contents are quite low. In this study, a simplified SDE approach is applied to the extraction of essential oil from Schisandra sphenanthera, with microwave as the heating source, [Bmim][Cl] as the medium for pretreatment, and gas chromatography-mass spectrometry as the analytical approach. The improvement resulting from the [Bmim][Cl] pretreatment is demonstrated by comparison with blank experiments. In total, 61 compounds were detected in the essential oil obtained using [Bmim][Cl] pretreatment, while without the pretreatment only 53 compounds could be detected. Moreover, the [Bmim][Cl] pretreatment also resulted in a higher yield of essential oil. The experimental results demonstrate that the simplified SDE coupled with ionic liquid pretreatment is a feasible, high-efficiency approach for the extraction of essential oil from S. sphenanthera, as an essential oil yield of 0.85% was obtained, and it can potentially be extended to the extraction of essential oils or other target volatile compounds present at low content.
Alemi-Ardakani, M.; Milani, A. S.; Yannacopoulos, S.
2014-01-01
Impact modeling of fiber reinforced polymer composites is a complex and challenging task, in particular for practitioners with less experience in advanced coding and user-defined subroutines. Different numerical algorithms have been developed over the past decades for impact modeling of composites, yet a considerable gap often exists between predicted and experimental observations. In this paper, after a review of reported sources of complexity in impact modeling of fiber reinforced polymer composites, two simplified approaches are presented for fast simulation of the out-of-plane impact response of these materials, considering four main effects: (a) strain rate dependency of the mechanical properties, (b) the difference between tensile and flexural bending responses, (c) delamination, and (d) the geometry of the fixture (clamping conditions). In the first approach, it is shown that by applying correction factors to the quasistatic material properties, which are often readily available from material datasheets, the role of these four sources in modeling the impact response of a given composite may be accounted for. As a result, a rough estimate of the dynamic force response of the composite can be attained. To show the application of the approach, a twill woven polypropylene/glass reinforced thermoplastic composite laminate was tested under 200 J impact energy and was modeled in Abaqus/Explicit via the built-in Hashin damage criteria. X-ray microtomography was used to investigate the presence of delamination inside the impacted sample. Finally, as a second and much simpler modeling approach, it is shown that applying only a single correction factor to all material properties at once can still yield a reasonable prediction. Both the advantages and the limitations of the simplified modeling framework are addressed in the performed case study. PMID:25431787
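The first approach, multiplying quasi-static datasheet properties by correction factors for the four effects (a)-(d), reduces to a few lines. All property values and factors below are hypothetical placeholders that would need calibration against impact tests for each material:

```python
# Hypothetical quasi-static datasheet properties (MPa) for a woven laminate
quasi_static = {"E1": 14000.0, "E2": 14000.0, "Xt": 350.0, "Yt": 350.0, "S12": 90.0}

# Assumed correction factors for the four effects named in the abstract
corrections = {
    "strain_rate":  1.25,   # (a) stiffening/strengthening at high strain rate
    "flexural":     1.10,   # (b) flexural vs. tensile response
    "delamination": 0.85,   # (c) softening due to delamination
    "fixture":      0.95,   # (d) clamping-condition compliance
}

factor = 1.0
for f in corrections.values():
    factor *= f                             # combined multiplicative factor
dynamic = {name: value * factor for name, value in quasi_static.items()}
```

The "second and much simpler" approach in the abstract corresponds to collapsing the four entries of `corrections` into the single number `factor` applied uniformly, exactly as the dictionary comprehension does here.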
Assessment of source probabilities for potential tsunamis affecting the U.S. Atlantic coast
Geist, E.L.; Parsons, T.
2009-01-01
Estimating the likelihood of tsunamis occurring along the U.S. Atlantic coast critically depends on knowledge of tsunami source probability. We review available information on both earthquake and landslide probabilities from potential sources that could generate local and transoceanic tsunamis. Estimating source probability includes defining both size and recurrence distributions for earthquakes and landslides. For the former distribution, source sizes are often distributed according to a truncated or tapered power-law relationship. For the latter distribution, sources are often assumed to occur in time according to a Poisson process, simplifying the way tsunami probabilities from individual sources can be aggregated. For the U.S. Atlantic coast, earthquake tsunami sources primarily occur at transoceanic distances along plate boundary faults. Probabilities for these sources are constrained from previous statistical studies of global seismicity for similar plate boundary types. In contrast, there is presently little information constraining landslide probabilities that may generate local tsunamis. Though there is significant uncertainty in tsunami source probabilities for the Atlantic, results from this study yield a comparative analysis of tsunami source recurrence rates that can form the basis for future probabilistic analyses.
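The simplification afforded by the Poisson assumption is that aggregation over independent sources reduces to summing their rates. A short sketch with hypothetical annual rates and exposure window (not the paper's estimates):

```python
import math

# Assumed illustrative annual rates (events/yr) for independent tsunami sources
rates = {"plate_boundary_A": 1.0e-3, "plate_boundary_B": 4.0e-4, "landslide": 1.0e-4}
T = 50.0   # exposure time in years

# Poisson sources aggregate by summing rates...
total_rate = sum(rates.values())
p_any = 1.0 - math.exp(-total_rate * T)

# ...which is identical to combining per-source exceedance probabilities
p_each = [1.0 - math.exp(-r * T) for r in rates.values()]
p_combined = 1.0 - math.prod(1.0 - p for p in p_each)
```

The two expressions agree exactly, which is why the Poisson assumption "simplifies the way tsunami probabilities from individual sources can be aggregated"; a non-Poissonian (e.g. clustered) recurrence model would break this equivalence.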
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlson, Neil; Jibben, Zechariah; Brady, Peter
2017-06-28
Pececillo is a proxy-app for the open source Truchas metal processing code (LA-CC-15-097). It implements many of the physics models used in Truchas: free-surface, incompressible Navier-Stokes fluid dynamics (e.g., water waves); heat transport, material phase change, and view factor thermal radiation; species advection-diffusion; quasi-static, elastic/plastic solid mechanics with contact; and electromagnetics (Maxwell's equations). The models are simplified versions that retain the fundamental computational complexity of the Truchas models while omitting many non-essential features and modeling capabilities. The purpose is to expose Truchas algorithms in a greatly simplified context where computer science problems related to parallel performance on advanced architectures can be more easily investigated. While Pececillo is capable of performing simulations representative of typical Truchas metal casting, welding, and additive manufacturing simulations, it lacks many of the modeling capabilities needed for real applications.
Simplified radio-over-fiber transport systems with a low-cost multiband light source.
Chang, Ching-Hung; Peng, Peng-Chun; Lu, Hai-Han; Shih, Chine-Liang; Chen, Hwan-Wen
2010-12-01
In this Letter, low-cost radio-over-fiber (ROF) transport systems are proposed and experimentally demonstrated. By utilizing a laser diode (LD) and a local oscillator (LO) to generate coherent multiband optical carriers, together with a self-composed wavelength selector to separate every two carriers for different ROF transport systems, no other dedicated LD or electrical frequency-upconverting circuit/process is needed in the central station (CS). Compared with current ROF systems, the required numbers of LDs, LOs, and mixers in a CS are significantly reduced. Reducing the number of components not only simplifies the network structure but also reduces the volume and complexity of the associated logistics. To demonstrate the practicality of the proposed ROF transport systems, clear eye diagrams and error-free transmission performance are experimentally presented.
A Comparison of Simplified Two-dimensional Flow Models Exemplified by Water Flow in a Cavern
NASA Astrophysics Data System (ADS)
Prybytak, Dzmitry; Zima, Piotr
2017-12-01
The paper presents a comparison of simplified models describing two-dimensional water flow, using the example of water flow through a straight channel sector with a cavern. The following models were tested: the two-dimensional potential flow model, the Stokes model and the Navier-Stokes model. To solve the first two, the boundary element method was employed, whereas to solve the Navier-Stokes equations, the open-source code library OpenFOAM was applied. The results of the numerical solutions were compared with the results of measurements carried out on a test stand in a hydraulic laboratory. The measurements were taken with an ADV (Acoustic Doppler Velocimeter) probe. Finally, the differences between the results obtained from the mathematical models and the results of the laboratory measurements were analysed.
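The simplest of the three models, potential flow, can be reproduced qualitatively on a channel-with-cavern geometry with a finite-difference Laplace solve for the streamfunction. This is a stand-in for the paper's boundary element method; the geometry, grid, and boundary values are assumed for illustration:

```python
import numpy as np

# Straight channel (0<=x<=3, 0<=y<=1) with a cavern (1<=x<=2, -0.5<=y<=0).
# Potential flow: solve Laplace's equation for the streamfunction psi.
nx, ny = 61, 31
x = np.linspace(0.0, 3.0, nx)
y = np.linspace(-0.5, 1.0, ny)             # dx = dy = 0.05

fluid = np.zeros((ny, nx), dtype=bool)     # interior fluid nodes
for j in range(ny):
    for i in range(nx):
        in_channel = 0.0 < y[j] < 1.0
        in_cavern = (1.0 < x[i] < 2.0) and (-0.5 < y[j] <= 0.0)
        fluid[j, i] = in_channel or in_cavern

psi = np.zeros((ny, nx))                   # psi = 0 on all solid walls
psi[:, 0] = psi[:, -1] = np.clip(y, 0.0, None)  # linear in/outflow profiles
psi[-1, :] = 1.0                           # channel lid carries psi = 1

interior = fluid.copy()
interior[:, 0] = interior[:, -1] = False   # keep open-boundary columns fixed
for _ in range(4000):                      # Jacobi sweeps for Laplace's eq.
    nb = 0.25 * (np.roll(psi, 1, 0) + np.roll(psi, -1, 0)
                 + np.roll(psi, 1, 1) + np.roll(psi, -1, 1))
    psi[interior] = nb[interior]

cavern = fluid & (y[:, None] < 0.0)
cavern_mean = psi[cavern].mean()           # weak flow inside the cavern
```

The small streamfunction values in the cavern reflect the weak exchange between the main channel flow and the cavity, which is the flow feature the three models resolve with differing fidelity.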
NASA Astrophysics Data System (ADS)
Stewart, A. N.; Knoepp, J.; Miniat, C.; Oishi, A. C.; Emanuel, R. E.
2017-12-01
The development of accurate hydrologic models is key to describing changes in hydrologic processes due to land use and climate change. Hydrologic models typically simplify biological processes associated with plant water uptake and transpiration, assuming that roots take up water from the same moisture pool that feeds the stream; however, this assumption is not valid for all systems. Novel combinations of climate and forest composition and structure, caused by ecosystem succession, management decisions, and climate variability, will require a better understanding of sources of water for transpiration in order to accurately estimate impact on forest water yield. Here we examine red maple (Acer rubrum), rhododendron (Rhododendron maximum), tulip poplar (Liriodendron tulipifera), and white oak (Quercus alba) trees at Coweeta Hydrologic Laboratory, a long-term hydrological and ecological research site in western NC, USA, and explore whether source water use differs by species and landscape position. We analyzed stable isotopes of water (18O and 2H) in tree cores, stream water, soil water, and precipitation using laser spectrometry and compare the isotopic composition of the various pools. We place these results in broader context using meteorological and ecophysiological data collected nearby. These findings have implications for plant water stress and drought vulnerability. They also contribute to process-based knowledge of plant water use that better captures the sensitivity of transpiration to physical and biological controls at the sub-catchment scale. This work aims to help establish novel ways to model transpiration and improve understanding of water balance, biogeochemical cycling, and transport of nutrients to streams.
A Simple Picaxe Microcontroller Pulse Source for Juxtacellular Neuronal Labelling.
Verberne, Anthony J M
2016-10-19
Juxtacellular neuronal labelling is a method that allows neurophysiologists to fill physiologically identified neurons with small, positively charged marker molecules. Labelled neurons are identified by histochemical processing of brain sections along with immunohistochemical identification of neuropeptides, neurotransmitters, neurotransmitter transporters, or biosynthetic enzymes. A microcontroller-based pulser circuit and an associated BASIC software script are described for incorporation into the design of a commercially available intracellular electrometer for use in juxtacellular neuronal labelling. Printed circuit board construction has been used for reliability and reproducibility. The current design obviates the need for a separate digital pulse source and simplifies the juxtacellular neuronal labelling procedure.
NASA Technical Reports Server (NTRS)
Engebretson, M. J.; Mauersberger, K.
1979-01-01
The paper presents a simplified model of the ion source chemistry, explains several details of the data reduction method used in obtaining atomic-nitrogen (N) densities from OSS data, and discusses implications of gas-surface reactions for the design of future satellite-borne mass spectrometers. Because of various surface reactions, N appears in three different forms in the ion source, as N, NO, and NO2. Considering the rather small spin modulation of NO and NO2 in the semi-open ionization chamber used in the OSS instrument, it is not surprising that these reaction products have not been previously identified in closed source instruments as a measure of the presence of atomic nitrogen. Warmup and/or outgassing of the ion source are shown to drastically reduce the NO2 concentration, thereby making possible reliable measurement of ambient N densities.
Efficient source for the production of ultradense deuterium D(-1) for laser-induced fusion (ICF).
Andersson, Patrik U; Lönn, Benny; Holmlid, Leif
2011-01-01
A novel source which simplifies the study of ultradense deuterium D(-1) is now described. This represents a further step toward deuterium fusion energy production. The source uses internal gas feed, and D(-1) can now be studied without time-of-flight spectral overlap from the related dense phase D(1). The main aim here is to understand the material production parameters, and thus a relatively weak laser with focused intensity ≤10¹² W cm⁻² is employed for analyzing the D(-1) material. The properties of the D(-1) material at the source are studied as a function of laser focus position outside the emitter, deuterium gas feed, laser pulse repetition frequency and laser power, and temperature of the source. These parameters influence the D(-1) cluster size, the ionization mode, and the laser fragmentation patterns.
NASA Astrophysics Data System (ADS)
Han, Young-Ji; Holsen, Thomas M.; Hopke, Philip K.
Ambient total gaseous mercury (TGM) concentrations were measured at three locations in New York State (Potsdam, Stockton, and Sterling) from May 2000 to March 2005. Using these data, three hybrid receptor models incorporating backward trajectories were used to identify source areas for TGM. The models used were potential source contribution function (PSCF), residence time weighted concentration (RTWC), and simplified quantitative transport bias analysis (SQTBA). Each model was applied using multi-site measurements to resolve the locations of important mercury sources for New York State. PSCF results showed that southeastern New York, Ohio, Indiana, Tennessee, Louisiana, and Virginia were important TGM source areas for these sites. RTWC identified Canadian sources, including the metal production facilities in Ontario and Quebec, but US regional sources including the Ohio River Valley were also resolved. Sources in southeastern New York, Massachusetts, western Pennsylvania, Indiana, and northern Illinois were identified to be significant by SQTBA. The three modeling results were combined to locate the most important probable source locations, which are Ohio, Indiana, Illinois, and Wisconsin. The Atlantic Ocean was suggested to be a possible source as well.
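Of the three receptor models, PSCF has the simplest definition: for each grid cell, it is the fraction of back-trajectory endpoints in that cell that belong to trajectories associated with high measured concentrations. A hedged sketch follows; the grid cell size, threshold percentile, and data layout are illustrative assumptions, not the study's settings.

```python
# Sketch of the PSCF receptor model: PSCF_ij = m_ij / n_ij, where n_ij counts
# all back-trajectory endpoints falling in grid cell (i, j) and m_ij counts
# only endpoints belonging to trajectories whose measured concentration met
# a chosen criterion (here the 75th percentile, an illustrative choice).

from collections import defaultdict

def pscf(trajectories, cell=1.0, pct=75):
    concs = sorted(t["conc"] for t in trajectories)
    thresh = concs[min(len(concs) - 1, int(len(concs) * pct / 100))]
    n = defaultdict(int)   # all endpoints per cell
    m = defaultdict(int)   # "high concentration" endpoints per cell
    for t in trajectories:
        high = t["conc"] >= thresh
        for lat, lon in t["endpoints"]:
            key = (int(lat // cell), int(lon // cell))
            n[key] += 1
            if high:
                m[key] += 1
    return {k: m[k] / n[k] for k in n}

# Two toy trajectories: one high-concentration, one low-concentration.
demo = pscf([{"conc": 10.0, "endpoints": [(40.5, -80.5)]},
             {"conc": 1.0, "endpoints": [(42.5, -75.5)]}])
```

Cells visited only by high-concentration trajectories score 1.0; cells visited only by low-concentration ones score 0.0.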
Self-similar regimes of turbulence in weakly coupled plasmas under compression
NASA Astrophysics Data System (ADS)
Viciconte, Giovanni; Gréa, Benoît-Joseph; Godeferd, Fabien S.
2018-02-01
Turbulence in weakly coupled plasmas under compression can experience a sudden dissipation of kinetic energy due to the abrupt growth of the viscosity coefficient governed by the temperature increase. We investigate this phenomenon in detail by considering a turbulent velocity field obeying the incompressible Navier-Stokes equations with a source term resulting from the mean velocity. The system can be simplified by a nonlinear change of variable, and then solved using both highly resolved direct numerical simulations and a spectral model based on the eddy-damped quasinormal Markovian closure. The model allows us to explore a wide range of initial Reynolds and compression numbers, beyond the reach of simulations, and thus reveals the presence of a nonlinear cascade phase. We find self-similarity of intermediate regimes as well as of the final decay of turbulence, and we demonstrate the importance of the initial distribution of energy at large scales. This effect can explain the global sensitivity of the flow dynamics to initial conditions, which we also illustrate with simulations of compressed homogeneous isotropic turbulence and of imploding spherical turbulent layers relevant to inertial confinement fusion.
Xue, Song; He, Ning; Long, Zhiqiang
2012-01-01
The long stator track for high speed maglev trains has a tooth-slot structure. The sensor obtains precise relative position information for the traction system by detecting the long stator tooth-slot structure based on nondestructive detection technology. The magnetic field modeling of the sensor is a typical three-dimensional (3-D) electromagnetic problem with complex boundary conditions, and is studied semi-analytically in this paper. A second-order vector potential (SOVP) is introduced to simplify the vector field problem to a scalar field one, the solution of which can be expressed in terms of series expansions according to Multipole Theory (MT) and the New Equivalent Source (NES) method. The coefficients of the expansions are determined by the least squares method based on the boundary conditions. Then, the solution is compared to the simulation result through Finite Element Analysis (FEA). The comparison results show that the semi-analytical solution agrees approximately with the numerical solution. Finally, based on electromagnetic modeling, a difference coil structure is designed to improve the sensitivity and accuracy of the sensor.
PMID:22778652
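The coefficient-determination step described above, fitting series-expansion coefficients to boundary-condition samples by least squares, can be sketched generically. The cosine basis and sample data below are invented stand-ins, not the sensor's actual second-order vector potential expansion.

```python
import math

# Generic sketch of the coefficient-fitting step: given samples of a field on
# the boundary, determine the coefficients of a truncated series expansion by
# least squares (normal equations, solved by Gaussian elimination).

def lstsq_coeffs(xs, ys, n_terms=3):
    def phi(k, x):                     # invented basis; stand-in for multipole terms
        return math.cos(k * x)
    # Normal equations A c = b, with A[k][l] = sum phi_k phi_l, b[k] = sum phi_k y
    A = [[sum(phi(k, x) * phi(l, x) for x in xs) for l in range(n_terms)]
         for k in range(n_terms)]
    b = [sum(phi(k, x) * y for x, y in zip(xs, ys)) for k in range(n_terms)]
    # Gaussian elimination with partial pivoting
    for i in range(n_terms):
        p = max(range(i, n_terms), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, n_terms):
            f = A[r][i] / A[i][i]
            for c in range(i, n_terms):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    c = [0.0] * n_terms
    for i in reversed(range(n_terms)):
        c[i] = (b[i] - sum(A[i][j] * c[j] for j in range(i + 1, n_terms))) / A[i][i]
    return c

# Synthetic boundary samples lying exactly in the basis span.
xs = [0.1 * i for i in range(20)]
ys = [1.5 + 0.5 * math.cos(x) for x in xs]
coeffs = lstsq_coeffs(xs, ys)
```

When the sampled field lies in the span of the basis, the fit recovers the coefficients exactly (up to rounding), which is a convenient check before applying the method to measured boundary data.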
Pharmit: interactive exploration of chemical space.
Sunseri, Jocelyn; Koes, David Ryan
2016-07-08
Pharmit (http://pharmit.csb.pitt.edu) provides an online, interactive environment for the virtual screening of large compound databases using pharmacophores, molecular shape and energy minimization. Users can import, create and edit virtual screening queries in an interactive browser-based interface. Queries are specified in terms of a pharmacophore, a spatial arrangement of the essential features of an interaction, and molecular shape. Search results can be further ranked and filtered using energy minimization. In addition to a number of pre-built databases of popular compound libraries, users may submit their own compound libraries for screening. Pharmit uses state-of-the-art sub-linear algorithms to provide interactive screening of millions of compounds. Queries typically take a few seconds to a few minutes depending on their complexity. This allows users to iteratively refine their search during a single session. The easy access to large chemical datasets provided by Pharmit simplifies and accelerates structure-based drug design. Pharmit is available under a dual BSD/GPL open-source license. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ingargiola, A.; Laurence, T. A.; Boutelle, R.
We introduce Photon-HDF5, an open and efficient file format to simplify exchange and long-term accessibility of data from single-molecule fluorescence experiments based on photon-counting detectors such as single-photon avalanche diodes (SPAD), photomultiplier tubes (PMT), or arrays of such detectors. The format is based on HDF5, a widely used platform- and language-independent hierarchical file format for which user-friendly viewers are available. Photon-HDF5 can store raw photon data (timestamps, channel numbers, etc.) from any acquisition hardware, but also setup and sample descriptions, information on provenance, authorship, and other metadata, and is flexible enough to include any kind of custom data. The format specifications are hosted on a public website, which is open to contributions by the biophysics community. As an initial resource, the website provides code examples to read Photon-HDF5 files in several programming languages and a reference Python library (phconvert) to create new Photon-HDF5 files and convert several existing file formats into Photon-HDF5. To encourage adoption by the academic and commercial communities, all software is released under the MIT open source license.
Photon migration in non-scattering tissue and the effects on image reconstruction
NASA Astrophysics Data System (ADS)
Dehghani, H.; Delpy, D. T.; Arridge, S. R.
1999-12-01
Photon propagation in tissue can be calculated using the relationship described by the transport equation. For scattering tissue this relationship is often simplified and expressed in terms of the diffusion approximation. This approximation, however, is not valid for non-scattering regions, for example cerebrospinal fluid (CSF) below the skull. This study looks at the effects of a thin clear layer in a simple model representing the head and examines its effect on image reconstruction. Specifically, boundary photon intensities (total number of photons exiting at a point on the boundary due to a source input at another point on the boundary) are calculated using the transport equation and compared with data calculated using the diffusion approximation for both non-scattering and scattering regions. The effect of non-scattering regions on the calculated boundary photon intensities is presented together with the advantages and restrictions of the transport code used. Reconstructed images are then presented where the forward problem is solved using the transport equation for a simple two-dimensional system containing a non-scattering ring and the inverse problem is solved using the diffusion approximation to the transport equation.
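The diffusion approximation mentioned above replaces the full transport equation with a single elliptic equation for the photon density Φ. In the standard steady-state form, with absorption coefficient μ_a, reduced scattering coefficient μ_s′, and isotropic source q_0:

```latex
-\nabla \cdot \big(\kappa(\mathbf{r}) \nabla \Phi(\mathbf{r})\big)
  + \mu_a(\mathbf{r})\,\Phi(\mathbf{r}) = q_0(\mathbf{r}),
\qquad
\kappa(\mathbf{r}) = \frac{1}{3\big(\mu_a(\mathbf{r}) + \mu_s'(\mathbf{r})\big)}
```

The failure mode studied in the paper is visible directly in κ: in a non-scattering region such as CSF, μ_s′ → 0, the diffusion coefficient is no longer well defined, and the approximation breaks down, which is why the transport equation must be used there.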
A Laminar Model for the Magnetic Field Structure in Bow-Shock Pulsar Wind Nebulae
NASA Astrophysics Data System (ADS)
Bucciantini, N.
2018-05-01
Bow Shock Pulsar Wind Nebulae are a class of non-thermal sources that form when the wind of a pulsar moving at supersonic speed interacts with the ambient medium, either the ISM or, in a few cases, the cold ejecta of the parent supernova. These systems have attracted attention in recent years because they allow us to investigate the properties of the pulsar wind in a different environment from that of canonical Pulsar Wind Nebulae in Supernova Remnants. However, due to the complexity of the interaction, a full-fledged multidimensional analysis is still lacking. We present here a simplified approach, based on Lagrangian tracers, to model the magnetic field structure in these systems, and use it to compute the magnetic field geometry for various configurations in terms of relative orientation of the magnetic axis, pulsar speed, and observer direction. Based on our solutions we have computed a set of radio emission maps, including polarization, to investigate the variety of possible appearances, and how the observed emission pattern can be used to constrain the orientation of the system and the possible presence of turbulence.
A Process for the Critical Analysis of Instructional Theory
ERIC Educational Resources Information Center
Bostwick, Jay A.; Calvert, Isaac Wade; Francis, Jenifer; Hawkley, Melissa; Henrie, Curtis R.; Hyatt, Frederick R.; Juncker, Janeel; Gibbons, Andrew S.
2014-01-01
Some have argued for a common language in the field of instructional design in an effort to reduce misunderstandings and simplify a multitude of synonymous terms and concepts. Others feel that this goal is undesirable in that it precludes development and flexibility. In this article we propose an ontology-building process as a way for readers to…
48 CFR 52.213-4 - Terms and Conditions-Simplified Acquisitions (Other Than Commercial Items)
Code of Federal Regulations, 2012 CFR
2012-10-01
... over $15,000 in the United States, Puerto Rico, or the U.S. Virgin Islands). (iv) 52.222-35, Equal... Columbia, Puerto Rico, the Northern Mariana Islands, American Samoa, Guam, the U.S. Virgin Islands, and... will be performed in the United States, District of Columbia, Puerto Rico, the Northern Mariana Islands...
48 CFR 52.213-4 - Terms and Conditions-Simplified Acquisitions (Other Than Commercial Items)
Code of Federal Regulations, 2014 CFR
2014-10-01
... States, Puerto Rico, or the U.S. Virgin Islands). (iv) 52.222-35, Equal Opportunity for Veterans (July... purposes of this clause, “United States” includes the 50 States, the District of Columbia, Puerto Rico, the... in the United States, District of Columbia, Puerto Rico, the Northern Mariana Islands, American Samoa...
ERIC Educational Resources Information Center
Baer, Susan; Garland, E. Jane
2005-01-01
Objective: A pilot study to evaluate the efficacy of a cognitive-behavioral group therapy program for adolescents with social phobia, simplified both in terms of time and labor intensity from a previously studied program (Social Effectiveness Therapy for Children and Adolescents) to be more appropriate for a community outpatient psychiatric…
USDA-ARS?s Scientific Manuscript database
Accurately estimating consumptive water use in the Colorado River Basin (CRB) is important for assessing and managing limited water resources in the basin. Increasing water demand from various sectors may threaten long-term sustainability of the water supply in the arid southwestern United States. L...
48 CFR 52.213-4 - Terms and Conditions-Simplified Acquisitions (Other Than Commercial Items)
Code of Federal Regulations, 2010 CFR
2010-10-01
...-consuming products listed in the ENERGY STAR® Program or Federal Energy Management Program (FEMP) will be... exceeds the micro-purchase threshold and the acquisition— (A) Is set aside for small business concerns; or (B) Cannot be set aside for small business concerns (see 19.502-2), and does not exceed $25,000.) (x...
2012-05-01
and to gauge how changes in business and consumer behavior may affect multipliers. In the simplified forms of such models, increases in government...or rule-of-thumb consumers). Research on consumer behavior has found that the spending of some households tends to vary one-for-one with income
Wildland fire in ecosystems: effects of fire on fauna
Jane Kapler Smith
2000-01-01
VOLUME 1: Fires affect animals mainly through effects on their habitat. Fires often cause short-term increases in wildlife foods that contribute to increases in populations of some animals. These increases are moderated by the animals' ability to thrive in the altered, often simplified, structure of the postfire environment. The extent of fire effects on animal...
Microcomputer software for calculating the western Oregon elk habitat effectiveness index.
Alan Ager; Mark Hitchcock
1992-01-01
This paper describes the operation of the microcomputer program HEIWEST, which was developed to automate calculation of the western Oregon elk habitat effectiveness index (HEI). HEIWEST requires little or no training to operate and vastly simplifies the task of measuring HEI for either site-specific project analysis or long-term monitoring of elk habitat. It is...
DOE Office of Scientific and Technical Information (OSTI.GOV)
This report presents a cold-climate project that examines an alternative approach to ground source heat pump (GSHP) ground loop design. The innovative ground loop design is an attempt to reduce the installed cost of the ground loop heat exchange portion of the system by containing the entire ground loop within the excavated location beneath the basement slab.
Estimating Economic and Logistic Utility of Connecting to Unreliable Power Grids
2016-06-17
the most unreliable host nation grids almost always have a higher availability than solar photovoltaics (PV), which for most parts of the world will...like solar, and still design a facility energy architecture that benefits from that source when available. Index Terms—facilities management, energy...Maintenance PV Photovoltaic SAIDI System Average Interruption Duration Index SAIFI System Average Interruption Frequency Index SHP Simplified Host
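The availability comparison in this fragment follows from the standard reliability index listed in its index terms: SAIDI measures average outage duration per customer, and availability is the complementary fraction of the year with service. A hedged sketch of the arithmetic (the outage figure is illustrative, not taken from the report):

```python
# Sketch: convert SAIDI (System Average Interruption Duration Index, here in
# hours of outage per customer per year) into an availability fraction, the
# quantity the abstract compares against solar PV. Figures are illustrative.

HOURS_PER_YEAR = 8760.0

def availability_from_saidi(saidi_hours_per_year):
    return 1.0 - saidi_hours_per_year / HOURS_PER_YEAR

# Even a very unreliable grid with 500 h of outage per year is available
# roughly 94% of the time, while fixed PV without storage is limited to
# daylight hours, illustrating the abstract's comparison.
unreliable_grid = availability_from_saidi(500.0)
```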
Simplified Rotation In Acoustic Levitation
NASA Technical Reports Server (NTRS)
Barmatz, M. B.; Gaspar, M. S.; Trinh, E. H.
1989-01-01
New technique based on old discovery used to control orientation of object levitated acoustically in axisymmetric chamber. Method does not require expensive equipment like additional acoustic drivers of precisely adjustable amplitude, phase, and frequency. Reflecting object acts as second source of sound. If reflecting object large enough, close enough to levitated object, or focuses reflected sound sufficiently, Rayleigh torque exerted on levitated object by reflected sound controls orientation of object.
Yang, Defu; Chen, Xueli; Peng, Zhen; Wang, Xiaorui; Ripoll, Jorge; Wang, Jing; Liang, Jimin
2013-01-01
Modeling light propagation in the whole body is essential and necessary for optical imaging. However, non-scattering, low-scattering and high absorption regions commonly exist in biological tissues, which lead to inaccuracy of the existing light transport models. In this paper, a novel hybrid light transport model that couples the simplified spherical harmonics approximation (SPN) with the radiosity theory (HSRM) was presented, to accurately describe light transport in turbid media with non-scattering, low-scattering and high absorption heterogeneities. In the model, the radiosity theory was used to characterize the light transport in non-scattering regions and the SPN was employed to handle the scattering problems, including subsets of low-scattering and high absorption. A Neumann source constructed by the light transport in the non-scattering region and formed at the interface between the non-scattering and scattering regions was superposed into the original light source, to couple the SPN with the radiosity theory. The accuracy and effectiveness of the HSRM was first verified with both regular and digital mouse model based simulations and a physical phantom based experiment. The feasibility and applicability of the HSRM was then investigated by a broad range of optical properties. Lastly, the influence of depth of the light source on the model was also discussed. Primary results showed that the proposed model provided high performance for light transport in turbid media with non-scattering, low-scattering and high absorption heterogeneities. PMID:24156077
Bu, Laju; Hu, Mengxing; Lu, Wanlong; Wang, Ziyu; Lu, Guanghao
2018-01-01
Source-semiconductor-drain coplanar transistors with an organic semiconductor layer located within the same plane as the source/drain electrodes are attractive for next-generation electronics, because they could be used to reduce material consumption, minimize parasitic leakage current, avoid cross-talk among different devices, and simplify the fabrication process of circuits. Here, a one-step, drop-casting-like printing method to realize a coplanar transistor using a model semiconductor/insulator [poly(3-hexylthiophene) (P3HT)/polystyrene (PS)] blend is developed. By manipulating the solution dewetting dynamics on the metal electrode and SiO₂ dielectric, the solution is selectively confined within the channel region, leaving the top surface of the source/drain electrodes completely free of polymers. Subsequently, during solvent evaporation, vertical phase separation between P3HT and PS leads to a semiconductor-insulator bilayer structure, contributing to improved transistor performance. Moreover, this coplanar transistor with a semiconductor-insulator bilayer structure is an ideal system for injecting charges into the insulator via gate stress, and the thus-formed PS electret layer acts as a "nonuniform floating gate" to tune the threshold voltage and effective mobility of the transistors. An effective field-effect mobility higher than 1 cm² V⁻¹ s⁻¹ with an on/off ratio > 10⁷ is realized, and the performance is comparable to that of commercial amorphous silicon transistors. This coplanar transistor simplifies the fabrication process of corresponding circuits. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Finch, Warren Irvin
1997-01-01
The many aspects of uranium, a heavy radioactive metal used to generate electricity throughout the world, are briefly described in relatively simple terms intended for the lay reader. An adequate glossary of unfamiliar terms is given. Uranium is a new source of electrical energy developed since 1950, and how we harness energy from it is explained. It competes with the organic coal, oil, and gas fuels as shown graphically. Uranium resources and production for the world are tabulated and discussed by country and for various energy regions in the United States. Locations of major uranium deposits and power reactors in the United States are mapped. The nuclear fuel cycle of uranium for a typical light-water reactor is illustrated at the front end, beginning with its natural geologic occurrence in rocks through discovery, mining, and milling; separation of the scarce isotope U-235, its enrichment, and manufacture into fuel rods for power reactors to generate electricity; and at the back end, the reprocessing and handling of the spent fuel. Environmental concerns with the entire fuel cycle are addressed. The future of the use of uranium in new, simplified, 'passively safe' reactors for the utility industry is examined. The present resource assessment of uranium in the United States is out of date, and a new assessment could aid the domestic uranium industry.
Harrigan, George G; Harrison, Jay M
2012-01-01
New transgenic (GM) crops are subjected to extensive safety assessments that include compositional comparisons with conventional counterparts as a cornerstone of the process. The influence of germplasm, location, environment, and agronomic treatments on compositional variability is, however, often obscured in these pair-wise comparisons. Furthermore, classical statistical significance testing can often provide an incomplete and over-simplified summary of highly responsive variables such as crop composition. In order to more clearly describe the influence of the numerous sources of compositional variation we present an introduction to two alternative but complementary approaches to data analysis and interpretation. These include i) exploratory data analysis (EDA) with its emphasis on visualization and graphics-based approaches and ii) Bayesian statistical methodology that provides easily interpretable and meaningful evaluations of data in terms of probability distributions. The EDA case-studies include analyses of herbicide-tolerant GM soybean and insect-protected GM maize and soybean. Bayesian approaches are presented in an analysis of herbicide-tolerant GM soybean. Advantages of these approaches over classical frequentist significance testing include the more direct interpretation of results in terms of probabilities pertaining to quantities of interest and no confusion over the application of corrections for multiple comparisons. It is concluded that a standardized framework for these methodologies could provide specific advantages through enhanced clarity of presentation and interpretation in comparative assessments of crop composition.
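The Bayesian evaluation the authors advocate can be illustrated in its simplest conjugate form: a normal prior on a mean composition value, updated by normally distributed measurements, yields a normal posterior from which probability statements about the quantity of interest are read off directly. All numbers below are invented for illustration; the paper's models are more elaborate.

```python
# Minimal sketch of a Bayesian update: normal prior on a crop-composition
# mean, normal likelihood with known measurement variance. The posterior is
# again normal, so statements such as "P(mean > x)" follow directly from its
# parameters rather than from a significance test. Numbers are illustrative.

def posterior_normal(prior_mean, prior_var, data, data_var):
    n = len(data)
    post_var = 1.0 / (1.0 / prior_var + n / data_var)
    post_mean = post_var * (prior_mean / prior_var + sum(data) / data_var)
    return post_mean, post_var

# Hypothetical protein measurements (% dry weight) against a vague prior.
mean, var = posterior_normal(prior_mean=40.0, prior_var=25.0,
                             data=[37.1, 38.4, 36.9, 37.8], data_var=4.0)
```

The posterior mean lands between the prior mean and the sample mean, weighted by their precisions, and the posterior variance is always smaller than the prior variance, which is the shrinkage behavior that makes these summaries directly interpretable.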
NASA Astrophysics Data System (ADS)
Burton, Sharon P.; Chemyakin, Eduard; Liu, Xu; Knobelspiesse, Kirk; Stamnes, Snorre; Sawamura, Patricia; Moore, Richard H.; Hostetler, Chris A.; Ferrare, Richard A.
2016-11-01
There is considerable interest in retrieving profiles of aerosol effective radius, total number concentration, and complex refractive index from lidar measurements of extinction and backscatter at several wavelengths. The combination of three backscatter channels plus two extinction channels (3β + 2α) is particularly important since it is believed to be the minimum configuration necessary for the retrieval of aerosol microphysical properties and because the technological readiness of lidar systems permits this configuration on both an airborne and a future spaceborne instrument. The second-generation NASA Langley airborne High Spectral Resolution Lidar (HSRL-2) has been making 3β + 2α measurements since 2012. The planned NASA Aerosol/Clouds/Ecosystems (ACE) satellite mission also recommends the 3β + 2α combination. Here we develop a deeper understanding of the information content and sensitivities of the 3β + 2α system in terms of aerosol microphysical parameters of interest. We use a retrieval-free methodology to determine the basic sensitivities of the measurements independent of retrieval assumptions and constraints. We calculate information content and uncertainty metrics using tools borrowed from the optimal estimation methodology based on Bayes' theorem, using a simplified forward-model look-up table with no explicit inversion. The forward model is simplified to represent spherical particles, monomodal log-normal size distributions, and wavelength-independent refractive indices. Since we only use the forward model with no retrieval, the given simplified aerosol scenario is applicable as a best case for all existing retrievals in the absence of additional constraints. Retrieval-dependent errors due to mismatch between retrieval assumptions and true atmospheric aerosols are not included in this sensitivity study, and neither are retrieval errors that may be introduced in the inversion process.
The choice of a simplified model adds clarity to the understanding of the uncertainties in such retrievals, since it allows for separately assessing the sensitivities and uncertainties of the measurements alone that cannot be corrected by any potential or theoretical improvements to retrieval methodology but must instead be addressed by adding information content. The sensitivity metrics allow for identifying (1) information content of the measurements vs. a priori information; (2) error bars on the retrieved parameters; and (3) potential sources of cross-talk or "compensating" errors wherein different retrieval parameters are not independently captured by the measurements. The results suggest that the 3β + 2α measurement system is underdetermined with respect to the full suite of microphysical parameters considered in this study and that additional information is required, in the form of additional coincident measurements (e.g., sun photometer or polarimeter) or a priori retrieval constraints. A specific recommendation is given for addressing cross-talk between effective radius and total number concentration.
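The optimal-estimation metrics referred to above are computable directly from a forward-model Jacobian K, measurement covariance S_e, and prior covariance S_a: the averaging kernel is A = (K^T S_e^-1 K + S_a^-1)^-1 K^T S_e^-1 K, and its trace gives the degrees of freedom for signal (DOFS). A dependency-free sketch with two retrieval parameters and an invented Jacobian, assuming diagonal covariances for simplicity:

```python
# Sketch of the optimal-estimation diagnostics used in such sensitivity
# studies: averaging kernel A = (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1 K, whose
# trace is the degrees of freedom for signal. Two retrieval parameters and
# an invented 5x2 Jacobian keep the linear algebra small; Se = se_var * I
# and Sa = sa_var * I are simplifying assumptions of this sketch.

def transpose(X):
    return [list(r) for r in zip(*X)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def inv2(M):  # inverse of a 2x2 matrix
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def dofs(K, se_var, sa_var):
    KtK = matmul(transpose(K), K)                      # 2x2
    G = [[KtK[i][j] / se_var for j in range(2)] for i in range(2)]
    M = [[G[i][j] + (1.0 / sa_var if i == j else 0.0)
          for j in range(2)] for i in range(2)]
    A = matmul(inv2(M), G)                             # averaging kernel
    return A[0][0] + A[1][1]                           # trace = DOFS

# Invented Jacobian: 5 measurements (3 backscatter + 2 extinction channels),
# 2 retrieval parameters (e.g., effective radius and number concentration).
K = [[0.8, 0.1], [0.6, 0.2], [0.5, 0.3], [0.2, 0.7], [0.1, 0.9]]
d = dofs(K, se_var=0.01, sa_var=1.0)
```

With low measurement noise and a weak prior, DOFS approaches the number of parameters; as the prior tightens, DOFS falls toward zero, quantifying how much of the retrieval comes from the measurements versus a priori information.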
GeoDataspaces: Simplifying Data Management Tasks with Globus
NASA Astrophysics Data System (ADS)
Malik, T.; Chard, K.; Tchoua, R. B.; Foster, I.
2014-12-01
Data and its management are central to the modern scientific enterprise. Typically, geoscientists rely on observations and model output data from several disparate sources (file systems, RDBMS, spreadsheets, remote data sources). Integrated data management solutions that provide intuitive semantics and uniform interfaces, irrespective of the kind of data source, are, however, lacking. Consequently, geoscientists are left to conduct low-level and time-consuming data management tasks, individually and repeatedly, for each data source, often resulting in handling errors. In this talk we will describe how the EarthCube GeoDataspace project is improving this situation for seismologists, hydrologists, and space scientists by simplifying some of the existing data management tasks that arise when developing computational models. We will demonstrate a GeoDataspace, bootstrapped with "geounits", which are self-contained metadata packages that provide a complete description of all data elements associated with a model run, including input/output and parameter files, the model executable and any associated libraries. Geounits link raw and derived data, as well as associating provenance information describing how the data were derived. We will discuss challenges in establishing geounits and describe machine learning and human annotation approaches that can be used for extracting and associating ad hoc and unstructured scientific metadata hidden in binary formats with data resources and models. We will show how geounits can improve search and discoverability of data associated with model runs. To support this model, we will describe efforts toward creating a scalable metadata catalog that helps to maintain, search and discover geounits within the Globus network of accessible endpoints.
This talk will focus on the issue of creating comprehensive personal inventories of data assets for computational geoscientists, and describe a publishing mechanism, which can be used to feed into national, international, or thematic discovery portals.
Simplified tools for evaluating domestic ventilation systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maansson, L.G.; Orme, M.
1999-07-01
Within an International Energy Agency (IEA) project, Annex 27, experts from 8 countries (Canada, France, Italy, Japan, The Netherlands, Sweden, UK and USA) have developed simplified tools for evaluating domestic ventilation systems during the heating season. Tools for building and user aspects, thermal comfort, noise, energy, life cycle cost, reliability and indoor air quality (IAQ) have been devised. The results can be used both for dwellings at the design stage and after construction. The tools lead to immediate answers and indications about the consequences of different choices that may arise during discussion with clients. This paper presents an introduction to these tools. Example applications of the indoor air quality and energy simplified tools are also provided. The IAQ tool accounts for constant emission sources, CO2, cooking products, tobacco smoke, condensation risks, humidity levels (i.e., for judging the risk for mould and house dust mites), and pressure difference (for identifying the risk for radon or landfill spillage entering the dwelling or problems with indoor combustion appliances). An elaborated set of design parameters was worked out that resulted in about 17,000 combinations. By using multi-variate analysis it was possible to reduce this to 174 combinations for IAQ. In addition, a sensitivity analysis was made using 990 combinations. The results from all the runs were used to develop a simplified tool, as well as quantifying equations relying on the design parameters. A computerized energy tool has also been developed within this project, which takes into account air tightness, climate, window airing pattern, outdoor air flow rate and heat exchange efficiency.
Some finite terms from ladder diagrams in three and four loop maximal supergravity
NASA Astrophysics Data System (ADS)
Basu, Anirban
2015-10-01
We consider the finite part of the leading local interactions in the low energy expansion of the four graviton amplitude from the ladder skeleton diagrams in maximal supergravity on T^2, at three and four loops. At three loops, we express the D^8R^4 and D^10R^4 amplitudes as integrals over the moduli space of an underlying auxiliary geometry. These amplitudes are evaluated exactly for special values of the moduli of the auxiliary geometry, where the integrand simplifies. We also perform a similar analysis for the D^8R^4 amplitude at four loops that arises from the ladder skeleton diagrams for a special value of a parameter in the moduli space of the auxiliary geometry. While the dependence of the amplitudes on the volume of the T^2 is very simple, the dependence on the complex structure of the T^2 is quite intricate. In some of the cases, the amplitude consists of terms each of which factorizes into a product of two SL(2,Z) invariant modular forms. While one of the factors is a non-holomorphic Eisenstein series, the other factor splits into a sum of modular forms each of which satisfies a Poisson equation on moduli space with source terms that are bilinear in the Eisenstein series. This leads to several possible perturbative contributions up to genus 5 in type II string theory on S^1. Unlike the one and two loop supergravity analysis, these amplitudes also receive non-perturbative contributions from bound states of three D-(anti)instantons in the IIB theory.
Post-reionization Kinetic Sunyaev-Zel'dovich Signal in the Illustris simulation
NASA Astrophysics Data System (ADS)
Park, Hyunbae; Alvarez, Marcelo A.; Bond, John Richard
2017-06-01
Using Illustris, a state-of-the-art cosmological simulation of gravity, hydrodynamics, and star formation, we revisit the calculation of the angular power spectrum of the kinetic Sunyaev-Zel'dovich effect from the post-reionization (z < 6) epoch by Shaw et al. (2012). We not only report the updated value given by the analytical model used in previous studies, but go over the simplifying assumptions made in the model. The assumptions include using gas density for free electron density and neglecting the connected term arising due to the fourth-order nature of the momentum power spectrum that sources the signal. With these assumptions, Illustris gives a slightly (~10%) larger signal than in their work. The signal is then reduced by ~20% when the actual free electron density is used in the calculation instead of gas density. This is because the larger neutral fraction in dense regions results in a loss of total free electrons and a suppression of fluctuations in the free electron density. We find that the connected term can make up to half of the momentum power spectrum at z < 2. Due to a strong suppression of the low-z signal by baryonic physics, the extra contribution from the connected term is limited to the ~10% level, although it may have been underestimated due to the finite box size of Illustris. With these corrections, our result is very close to the original result of Shaw et al. (2012), which is well described by a simple power law, D_l = 1.38[l/3000]^0.21 μK^2, at 3000 < l < 10000.
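The quoted power-law fit can be evaluated directly; a minimal sketch, using only the amplitude, pivot multipole, and exponent stated in the abstract:

```python
# Evaluate the quoted power-law fit to the post-reionization kSZ signal,
# D_l = 1.38 * (l/3000)^0.21 muK^2, valid for 3000 < l < 10000.

def d_ell(ell):
    return 1.38 * (ell / 3000.0) ** 0.21

print(d_ell(3000))    # 1.38 muK^2 at the pivot multipole, by construction
print(d_ell(10000))   # rises slowly towards higher l (shallow 0.21 slope)
```

The shallow exponent means the signal changes by only ~30% over the full quoted multipole range, which is why a single power law fits well.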
NASA Astrophysics Data System (ADS)
Weatherill, Graeme; Burton, Paul W.
2010-09-01
The Aegean is the most seismically active and tectonically complex region in Europe. Damaging earthquakes have occurred here throughout recorded history, often resulting in considerable loss of life. The Monte Carlo method of probabilistic seismic hazard analysis (PSHA) is used to determine the level of ground motion likely to be exceeded in a given time period. Multiple random simulations of seismicity are generated to calculate, directly, the ground motion for a given site. Within the seismic hazard analysis we explore the impact of different seismic source models, incorporating both uniform zones and distributed seismicity. A new, simplified, seismic source model, derived from seismotectonic interpretation, is presented for the Aegean region. This is combined into the epistemic uncertainty analysis alongside existing source models for the region, and models derived by a K-means cluster analysis approach. Seismic source models derived using the K-means approach offer a degree of objectivity and reproducibility into the otherwise subjective approach of delineating seismic sources using expert judgment. Similar review and analysis is undertaken for the selection of peak ground acceleration (PGA) attenuation models, incorporating into the epistemic analysis Greek-specific models, European models and a Next Generation Attenuation (NGA) model. Hazard maps for PGA on a "rock" site with a 10% probability of being exceeded in 50 years are produced and different source and attenuation models are compared. These indicate that Greek-specific attenuation models, with their smaller aleatory variability terms, produce lower PGA hazard, whilst recent European models and the NGA model produce similar results. The Monte Carlo method is extended further to assimilate epistemic uncertainty into the hazard calculation, thus integrating across several appropriate source and PGA attenuation models.
Site condition and fault-type are also integrated into the hazard mapping calculations. These hazard maps are in general agreement with previous maps for the Aegean, recognising the highest hazard in the Ionian Islands, Gulf of Corinth and Hellenic Arc. Peak Ground Accelerations for some sites in these regions reach as high as 500-600 cm s -2 using European/NGA attenuation models, and 400-500 cm s -2 using Greek attenuation models.
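The Monte Carlo PSHA procedure described above (simulate many synthetic catalogs, attenuate each event to the site, and count the simulations in which a ground-motion threshold is exceeded) can be sketched in a few lines. Everything numerical below is a hypothetical toy: the Gutenberg-Richter parameters, attenuation coefficients, and zone geometry are invented, not the paper's models.

```python
# Toy Monte Carlo seismic hazard sketch (all parameters hypothetical).
import math
import random

random.seed(1)

def sample_magnitude(b=1.0, m_min=5.0, m_max=7.5):
    # Truncated Gutenberg-Richter magnitude via inverse-transform sampling.
    beta = b * math.log(10.0)
    u = random.random()
    return m_min - math.log(1.0 - u * (1.0 - math.exp(-beta * (m_max - m_min)))) / beta

def site_pga(m, r_km, sigma=0.5):
    # Hypothetical attenuation relation with log-normal aleatory variability.
    ln_pga = -1.0 + 1.2 * m - 1.3 * math.log(r_km + 10.0) + random.gauss(0.0, sigma)
    return math.exp(ln_pga)   # cm/s^2

years, rate, threshold, n_sims = 50, 2.0, 100.0, 2000
exceed = 0
for _ in range(n_sims):
    t = 0.0
    while True:
        t += random.expovariate(rate)          # Poissonian inter-event times
        if t > years:
            break
        if site_pga(sample_magnitude(), random.uniform(5.0, 100.0)) > threshold:
            exceed += 1
            break

p_exceed = exceed / n_sims   # P(threshold exceeded at least once in 50 years)
print(round(p_exceed, 3))
```

In a real analysis the aleatory sigma sits inside the attenuation model as here, while the epistemic uncertainty discussed in the abstract is handled by repeating this loop over a logic tree of alternative source and attenuation models.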
NASA Astrophysics Data System (ADS)
Brereton, Carol A.; Johnson, Matthew R.
2012-05-01
Fugitive pollutant sources from the oil and gas industry are typically quite difficult to find within industrial plants and refineries, yet they are a significant contributor of global greenhouse gas emissions. A novel approach for locating fugitive emission sources using computationally efficient trajectory statistical methods (TSM) has been investigated in detailed proof-of-concept simulations. Four TSMs were examined in a variety of source emissions scenarios developed using transient CFD simulations on the simplified geometry of an actual gas plant: potential source contribution function (PSCF), concentration weighted trajectory (CWT), residence time weighted concentration (RTWC), and quantitative transport bias analysis (QTBA). Quantitative comparisons were made using a correlation measure based on search area from the source(s). PSCF, CWT and RTWC could all distinguish areas near major sources from the surroundings. QTBA successfully located sources in only some cases, even when provided with a large data set. RTWC, given sufficient domain trajectory coverage, distinguished source areas best, but otherwise could produce false source predictions. Using RTWC in conjunction with CWT could overcome this issue as well as reduce sensitivity to noise in the data. The results demonstrate that TSMs are a promising approach for identifying fugitive emissions sources within complex facility geometries.
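Of the four TSMs compared, the concentration weighted trajectory (CWT) method is the simplest to sketch: each grid cell is scored with the residence-time-weighted mean of the concentrations measured on the trajectories that pass through it. The toy trajectories below are invented for illustration; real applications use thousands of back-trajectories over a 2-D or 3-D grid.

```python
# Toy concentration-weighted-trajectory (CWT) sketch on a 1-D grid of cells.
# Trajectories and concentrations below are invented for illustration.
from collections import defaultdict

# Each back-trajectory: (measured concentration, grid cells visited,
# one entry per time step so occurrence counts double as residence times).
trajectories = [
    (10.0, [0, 1, 2, 2, 3]),   # high concentration passed over cells 2-3
    ( 1.0, [0, 1, 1, 4, 5]),
    ( 8.0, [1, 2, 3, 3, 3]),
]

num = defaultdict(float)   # sum of c_i * tau_ij over trajectories i
den = defaultdict(float)   # sum of tau_ij

for conc, cells in trajectories:
    for cell in cells:
        num[cell] += conc   # each time step contributes tau = 1
        den[cell] += 1.0

cwt = {cell: num[cell] / den[cell] for cell in den}
best = max(cwt, key=cwt.get)   # cell flagged as the likely source area
print(best, round(cwt[best], 2))
```

Cells visited mainly by high-concentration trajectories score high, which is how the method distinguishes source areas from the surroundings; RTWC differs in redistributing each trajectory's concentration along its path before averaging.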
Pybel: a Python wrapper for the OpenBabel cheminformatics toolkit
O'Boyle, Noel M; Morley, Chris; Hutchison, Geoffrey R
2008-01-01
Background Scripting languages such as Python are ideally suited to common programming tasks in cheminformatics such as data analysis and parsing information from files. However, for reasons of efficiency, cheminformatics toolkits such as the OpenBabel toolkit are often implemented in compiled languages such as C++. We describe Pybel, a Python module that provides access to the OpenBabel toolkit. Results Pybel wraps the direct toolkit bindings to simplify common tasks such as reading and writing molecular files and calculating fingerprints. Extensive use is made of Python iterators to simplify loops such as that over all the molecules in a file. A Pybel Molecule can be easily interconverted to an OpenBabel OBMol to access those methods or attributes not wrapped by Pybel. Conclusion Pybel allows cheminformaticians to rapidly develop Python scripts that manipulate chemical information. It is open source, available cross-platform, and offers the power of the OpenBabel toolkit to Python programmers. PMID:18328109
Pybel: a Python wrapper for the OpenBabel cheminformatics toolkit.
O'Boyle, Noel M; Morley, Chris; Hutchison, Geoffrey R
2008-03-09
Scripting languages such as Python are ideally suited to common programming tasks in cheminformatics such as data analysis and parsing information from files. However, for reasons of efficiency, cheminformatics toolkits such as the OpenBabel toolkit are often implemented in compiled languages such as C++. We describe Pybel, a Python module that provides access to the OpenBabel toolkit. Pybel wraps the direct toolkit bindings to simplify common tasks such as reading and writing molecular files and calculating fingerprints. Extensive use is made of Python iterators to simplify loops such as that over all the molecules in a file. A Pybel Molecule can be easily interconverted to an OpenBabel OBMol to access those methods or attributes not wrapped by Pybel. Pybel allows cheminformaticians to rapidly develop Python scripts that manipulate chemical information. It is open source, available cross-platform, and offers the power of the OpenBabel toolkit to Python programmers.
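The tasks the two Pybel records describe (reading molecules, fingerprints, dropping to the underlying OBMol) look roughly like the sketch below. This requires Open Babel with its Python bindings installed, so it is not runnable standalone; the filename is a placeholder, and in recent Open Babel releases the import is `from openbabel import pybel` rather than `import pybel`.

```python
# Sketch of typical Pybel usage (requires Open Babel Python bindings;
# filenames are placeholders).
import pybel

# Read a molecule from a SMILES string and inspect it.
mol = pybel.readstring("smi", "CCO")        # ethanol
print(mol.molwt)                            # molecular weight
print(len(mol.atoms))                       # heavy atoms in the parsed SMILES

# Fingerprint calculation and Tanimoto similarity via the | operator.
fp1 = pybel.readstring("smi", "CCO").calcfp()
fp2 = pybel.readstring("smi", "CCN").calcfp()
print(fp1 | fp2)                            # Tanimoto coefficient

# Iterate over all molecules in a (hypothetical) SDF file and emit SMILES.
# for m in pybel.readfile("sdf", "library.sdf"):
#     print(m.write("smi").strip())

# Drop to the underlying toolkit when needed: mol.OBMol is the OpenBabel OBMol.
```

The iterator-based `readfile` loop is the pattern the abstract highlights: file-format handling and conversion are hidden behind one call, while `mol.OBMol` keeps the full C++ API reachable.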
Nagashino, Hirofumi; Kinouchi, Yohsuke; Danesh, Ali A; Pandya, Abhijit S
2013-01-01
Tinnitus is the perception of sound in the ears or in the head where no external source is present. Sound therapy is one of the most effective techniques for tinnitus treatment that has been proposed. In order to investigate mechanisms of tinnitus generation and the clinical effects of sound therapy, we have proposed conceptual and computational models with plasticity using a neural oscillator or a neuronal network model. In the present paper, we propose a neuronal network model with a simplified tonotopicity of the auditory system as a more detailed structure. In this model an integrate-and-fire neuron model is employed and homeostatic plasticity is incorporated. The computer simulation results show that the present model can reproduce the generation of oscillation and its cessation by external input. This suggests that the present framework is promising as a model for tinnitus generation and the effects of sound therapy.
Teaching the English Active and Passive Voice with the Help of Cognitive Grammar: An Empirical Study
ERIC Educational Resources Information Center
Bielak, Jakub; Pawlak, Mirosuaw; Mystkowska-Wiertelak, Anna
2013-01-01
Functionally-oriented linguistic theories, such as cognitive grammar (CG), offer nuanced descriptions of the meanings and uses of grammatical features. A simplified characterization of the semantics of the English active and passive voice grounded in CG terms and based on the reference point model is presented, as it is the basis of the…
USDA-ARS?s Scientific Manuscript database
Better understanding of vulnerability of coastal habitats to sea level rise and major storm events require the use of simulation models. Coastal habitats also undergo frequent nourishment restoration works in order to maintain their viability. Vulnerability models must be able to assess the combined...
Verma, Radhika; Singh, Udai Pratap; Tyagi, Shashi Prabha; Nagpal, Rajni; Manuja, Naveen
2013-01-01
Objective: To evaluate the effect of 2% chlorhexidine (CHX) and 30% proanthocyanidin (PA) application on the immediate and long-term bond strength of simplified etch-and-rinse adhesives to dentin. Materials and Methods: One hundred twenty extracted human molar teeth were ground to expose a flat dentin surface. The teeth were equally divided into six groups according to the adhesive used, either Tetric N Bond or Solobond M, and the pretreatment given, either none, CHX, or PA. A composite cylinder was bonded to each specimen using the respective adhesive technique. Half the samples from each group (n = 10) were then tested immediately. The remaining samples were tested after 6-month storage in distilled water. Results: The mean bond strength of the samples was not significantly different upon immediate testing, being in the range of 8.4 (±0.7) MPa. The bond strength fell dramatically in the control specimens after 6-month storage to around 4.7 (±0.33) MPa, while it was maintained in the samples treated with both CHX and PA. Conclusion: Thirty percent PA was comparable to 2% CHX with respect to preservation of the resin-dentin bond over 6 months. PMID:23956543
Long-term evaluation of orbital dynamics in the Sun-planet system considering axial-tilt
NASA Astrophysics Data System (ADS)
Bakhtiari, Majid; Daneshjou, Kamran
2018-05-01
In this paper, the axial-tilt (obliquity) effect of planets on the motion of a planetary orbiter in prolonged space missions has been investigated in the presence of the Sun's gravity. The proposed model is based on non-simplified perturbed dynamic equations of planetary orbiter motion. From a new point of view, in this work, the dynamic equations regarding a disturbing body in an elliptic inclined three-dimensional orbit are derived. The accuracy of this non-simplified method is validated against the dual-averaged method employed on a generalized Earth-Moon system. It is shown that the short-time oscillations neglected in the dual-averaged technique can accumulate and lead to remarkable errors in the prolonged evolution. After validation, the effects of the planet's axial-tilt on the eccentricity, inclination and right ascension of the ascending node of the orbiter are investigated. Moreover, a generalized model is provided to study the effects of third-body inclination and eccentricity on orbit characteristics. It is shown that the planet's axial-tilt is the key to facilitating some significant changes in orbital elements in long-term missions, and short-time oscillations must be considered in an accurate prolonged evaluation.
Characterizing a Wake-Free Safe Zone for the Simplified Aircraft-Based Paired Approach Concept
NASA Technical Reports Server (NTRS)
Guerreiro, Nelson M.; Neitzke, Kurt W.; Johnson, Sally C.; Stough, H. Paul, III; McKissick, Burnell T.; Syed, Hazari I.
2010-01-01
The Federal Aviation Administration (FAA) has proposed a concept of operations geared towards achieving increased arrival throughput at U.S. Airports, known as the Simplified Aircraft-based Paired Approach (SAPA) concept. In this study, a preliminary characterization of a wake-free safe zone (WFSZ) for the SAPA concept has been performed. The experiment employed Monte-Carlo simulations of varying approach profiles by aircraft pairs to closely-spaced parallel runways. Three different runway lateral spacings were investigated (750 ft, 1000 ft and 1400 ft), along with no stagger and 1500 ft stagger between runway thresholds. The paired aircraft were flown in a leader/trailer configuration with potential wake encounters detected using a wake detection surface translating with the trailing aircraft. The WFSZ is characterized in terms of the smallest observed initial in-trail distance leading to a wake encounter anywhere along the approach path of the aircraft. The results suggest that the WFSZ can be characterized in terms of two primary altitude regions, in ground-effect (IGE) and out of ground-effect (OGE), with the IGE region being the limiting case with a significantly smaller WFSZ. Runway stagger was observed to only modestly reduce the WFSZ size, predominantly in the OGE region.
Jonathan Thompson; Kelly Burnett
2008-01-01
Not all landslides are created equal. Some have the potential to run out to streams and others do not. Some are likely to simplify and damage stream habitat, and others can be important sources of gravel and large wood, fundamental components of habitat complexity for salmon and other stream inhabitants. Forest managers want to avoid negative consequences and promote...
Intensity and absorbed-power distribution in a cylindrical solar-pumped dye laser
NASA Technical Reports Server (NTRS)
Williams, M. D.
1984-01-01
The internal intensity and absorbed-power distribution of a simplified hypothetical dye laser of cylindrical geometry is calculated. Total absorbed power is also calculated and compared with laboratory measurements of lasing-threshold energy deposition in a dye cell to determine the suitability of solar radiation as a pump source or, alternatively, what modifications, if any, are necessary to the hypothetical system for solar pumping.
Testing biological liquid samples using modified m-line spectroscopy method
NASA Astrophysics Data System (ADS)
Augusciuk, Elzbieta; Rybiński, Grzegorz
2005-09-01
A non-chemical method for detecting sugar concentration in biological (animal and plant source) liquids has been investigated. A simplified setup was built to show how easily the survey can be carried out and to make it easy to gather multiple measurements for error detection and statistics. The method is suggested as an easy and cheap alternative to chemical methods of measuring sugar concentration, but it needs a lot of effort to be made precise.
Simplified Daylight Spectrum Approximation by Blending Two Light Emitting Diode Sources
2012-03-01
Van Geit, Werner; Gevaert, Michael; Chindemi, Giuseppe; Rössert, Christian; Courcol, Jean-Denis; Muller, Eilif B; Schürmann, Felix; Segev, Idan; Markram, Henry
2016-01-01
At many scales in neuroscience, appropriate mathematical models take the form of complex dynamical systems. Parameterizing such models to conform to the multitude of available experimental constraints is a global non-linear optimisation problem with a complex fitness landscape, requiring numerical techniques to find suitable approximate solutions. Stochastic optimisation approaches, such as evolutionary algorithms, have been shown to be effective, but often the setting up of such optimisations and the choice of a specific search algorithm and its parameters is non-trivial, requiring domain-specific expertise. Here we describe BluePyOpt, a Python package targeted at the broad neuroscience community to simplify this task. BluePyOpt is an extensible framework for data-driven model parameter optimisation that wraps and standardizes several existing open-source tools. It simplifies the task of creating and sharing these optimisations, and the associated techniques and knowledge. This is achieved by abstracting the optimisation and evaluation tasks into various reusable and flexible discrete elements according to established best-practices. Further, BluePyOpt provides methods for setting up both small- and large-scale optimisations on a variety of platforms, ranging from laptops to Linux clusters and cloud-based compute infrastructures. The versatility of the BluePyOpt framework is demonstrated by working through three representative neuroscience specific use cases.
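The class of stochastic optimiser BluePyOpt wraps can be illustrated with a toy example. The sketch below is a generic (1+1) evolution strategy minimising an invented "distance from experimental constraints" fitness; it is not BluePyOpt's API (which builds on evolutionary-algorithm libraries and neuron models), and the target values are placeholders.

```python
# Generic stochastic parameter-fitting sketch: a (1+1) evolution strategy
# minimising a toy fitness. Illustrates the technique, not BluePyOpt's API.
import random

random.seed(42)

TARGET = [2.0, -1.0, 0.5]   # hypothetical "experimental" parameter values

def fitness(params):
    # Sum of squared deviations from the constraints (lower is better).
    return sum((p - t) ** 2 for p, t in zip(params, TARGET))

def evolve(n_gen=5000, sigma=0.1):
    parent = [0.0, 0.0, 0.0]
    f_parent = fitness(parent)
    for _ in range(n_gen):
        # Mutate every parameter with Gaussian noise and keep improvements.
        child = [p + random.gauss(0.0, sigma) for p in parent]
        f_child = fitness(child)
        if f_child <= f_parent:
            parent, f_parent = child, f_parent = child, f_child
    return parent, f_parent

best, f_best = evolve()
print([round(p, 2) for p in best], round(f_best, 4))
```

Frameworks like BluePyOpt add the parts this sketch omits: multi-objective fitness built from electrophysiological features, population-based algorithms, and distribution of the fitness evaluations across clusters.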
An architecture for genomics analysis in a clinical setting using Galaxy and Docker
Digan, W; Countouris, H; Barritault, M; Baudoin, D; Laurent-Puig, P; Blons, H; Burgun, A
2017-01-01
Next-generation sequencing is used on a daily basis to perform molecular analysis to determine subtypes of disease (e.g., in cancer) and to assist in the selection of the optimal treatment. Clinical bioinformatics handles the manipulation of the data generated by the sequencer, from the generation to the analysis and interpretation. Reproducibility and traceability are crucial issues in a clinical setting. We have designed an approach based on Docker container technology and Galaxy, the popular bioinformatics analysis support open-source software. Our solution simplifies the deployment of a small-size analytical platform and simplifies the process for the clinician. From the technical point of view, the tools embedded in the platform are isolated and versioned through Docker images. Alongside the Galaxy platform, we also introduce the AnalysisManager, a solution that allows single-click analysis for biologists and leverages standardized bioinformatics application programming interfaces. We added a Shiny/R interactive environment to ease the visualization of the outputs. The platform relies on containers and ensures the data traceability by recording analytical actions and by associating inputs and outputs of the tools to EDAM ontology through ReGaTe. The source code is freely available on Github at https://github.com/CARPEM/GalaxyDocker. PMID:29048555
An architecture for genomics analysis in a clinical setting using Galaxy and Docker.
Digan, W; Countouris, H; Barritault, M; Baudoin, D; Laurent-Puig, P; Blons, H; Burgun, A; Rance, B
2017-11-01
Next-generation sequencing is used on a daily basis to perform molecular analysis to determine subtypes of disease (e.g., in cancer) and to assist in the selection of the optimal treatment. Clinical bioinformatics handles the manipulation of the data generated by the sequencer, from the generation to the analysis and interpretation. Reproducibility and traceability are crucial issues in a clinical setting. We have designed an approach based on Docker container technology and Galaxy, the popular bioinformatics analysis support open-source software. Our solution simplifies the deployment of a small-size analytical platform and simplifies the process for the clinician. From the technical point of view, the tools embedded in the platform are isolated and versioned through Docker images. Along the Galaxy platform, we also introduce the AnalysisManager, a solution that allows single-click analysis for biologists and leverages standardized bioinformatics application programming interfaces. We added a Shiny/R interactive environment to ease the visualization of the outputs. The platform relies on containers and ensures the data traceability by recording analytical actions and by associating inputs and outputs of the tools to EDAM ontology through ReGaTe. The source code is freely available on Github at https://github.com/CARPEM/GalaxyDocker. © The Author 2017. Published by Oxford University Press.
O'Connell, Timothy; Chang, Debra
2012-01-01
While on call, radiology residents review imaging studies and issue preliminary reports to referring clinicians. In the absence of an integrated reporting system at the training sites of the authors' institution, residents were typing and faxing preliminary reports. To partially automate the on-call resident workflow, a Web-based system for resident reporting was developed by using the free open-source xAMP Web application framework and an open-source DICOM (Digital Imaging and Communications in Medicine) software toolkit, with the goals of reducing errors and lowering barriers to education. This reporting system integrates with the picture archiving and communication system to display a worklist of studies. Patient data are automatically entered in the preliminary report to prevent identification errors and simplify the report creation process. When the final report for a resident's on-call study is available, the reporting system queries the report broker for the final report, and then displays the preliminary report side by side with the final report, thus simplifying the review process and encouraging review of all of the resident's reports. The xAMP Web application framework should be considered for development of radiology department informatics projects owing to its zero cost, minimal hardware requirements, ease of programming, and large support community.
A Simple Picaxe Microcontroller Pulse Source for Juxtacellular Neuronal Labelling †
Verberne, Anthony J. M.
2016-01-01
Juxtacellular neuronal labelling is a method that allows neurophysiologists to fill physiologically-identified neurons with small positively-charged marker molecules. Labelled neurons are identified by histochemical processing of brain sections along with immunohistochemical identification of neuropeptides, neurotransmitters, neurotransmitter transporters or biosynthetic enzymes. A microcontroller-based pulser circuit and associated BASIC software script are described for incorporation into the design of a commercially-available intracellular electrometer for use in juxtacellular neuronal labelling. Printed circuit board construction has been used for reliability and reproducibility. The current design obviates the need for a separate digital pulse source and simplifies the juxtacellular neuronal labelling procedure. PMID:28952589
NASA Technical Reports Server (NTRS)
Banger, Kulbinder K.; Jin, Michael H. C.; Harris, Jerry D.; Fanwick, Philip E.; Hepp, Aloysius F.
2004-01-01
We report a new simplified synthetic procedure for commercial manufacture of ternary single source precursors (SSPs). This new synthetic process has been successfully implemented to fabricate known SSPs on a bulk scale and the first liquid SSPs for the semiconductors CuInSe2 and AgIn(x)S(y). Single crystal X-ray determination reveals the first unsolvated ternary AgInS SSP. SSPs prepared via this new route have successfully been used in a spray-assisted chemical vapor deposition (CVD) process to deposit polycrystalline thin films, and for preparing ternary nanocrystallites.
Mie Scattering of Growing Molecular Contaminants
NASA Technical Reports Server (NTRS)
Herren, Kenneth A.; Gregory, Don A.
2007-01-01
Molecular contamination of optical surfaces from outgassed material has been shown in many cases to proceed from nucleation centers and to produce many roughly hemispherical "islands" of contamination on the surface. The mathematics of the hemispherical scattering is simplified by introducing a virtual source below the plane of the optic, in this case a mirror, allowing the use of Mie theory to produce a solution for the resulting sphere in transmission. Experimentally, a fixed wavelength in the vacuum ultraviolet was used as the illumination source and scattered light from the polished and coated glass mirrors was detected at a fixed angle as the contamination islands grew in time.
A simplifying feature of the heterotic one loop four graviton amplitude
NASA Astrophysics Data System (ADS)
Basu, Anirban
2018-01-01
We show that the weight four modular graph functions that contribute to the integrand of the t_8t_8D^4R^4 term at one loop in heterotic string theory do not require regularization, and hence the integrand is simple. This is unlike the graphs that contribute to the integrands of the other gravitational terms at this order in the low momentum expansion, and these integrands require regularization. This property persists for an infinite number of terms in the effective action, and their integrands do not require regularization. We find non-trivial relations between weight four graphs of distinct topologies that do not require regularization by performing trivial manipulations using auxiliary diagrams.
Sun, Xiaodong; Fang, Dawei; Zhang, Dong; Ma, Qingyu
2013-05-01
Different from the theory of acoustic monopole spherical radiation, the acoustic dipole radiation based theory introduces the radiation pattern of Lorentz force induced dipole sources to describe the principle of magnetoacoustic tomography with magnetic induction (MAT-MI). Although two-dimensional (2D) simulations have been studied for cylindrical phantom models, layer effects of the dipole sources within the entire object along the z direction still need to be investigated to evaluate the performance of MAT-MI for different geometric specifications. The purpose of this work is to further verify the validity and generality of the acoustic dipole radiation based theory for MAT-MI with two new models in different shapes, dimensions, and conductivities. Based on the theory of acoustic dipole radiation, the principles of MAT-MI were analyzed with derived analytic formulae. 2D and 3D numerical studies for two new models of aluminum foil and cooked egg were conducted to simulate acoustic pressures and corresponding waveforms, and 2D images of the scanned layers were reconstructed with the simplified back projection algorithm for the waveforms collected around the models. The spatial resolution for conductivity boundary differentiation was also analyzed with different foil thickness. For comparison, two experimental measurements were conducted for a cylindrical aluminum foil phantom and a shell-peeled cooked egg. The collected waveforms and the reconstructed images of the scanned layers were achieved to verify the validity of the acoustic dipole radiation based theory for MAT-MI. Despite the difference between the 2D and 3D simulated pressures, the good consistency of the collected waveforms proves that wave clusters are generated by the abrupt pressure changes with bipolar vibration phases, representing the opposite polarities of the conductivity changes along the measurement direction.
The configuration of the scanned layer can be reconstructed in terms of shape and size, and the conductivity boundaries are displayed as stripes with different contrasts and bipolar intensities. Layer effects are demonstrated to have little influence on the collected waveforms and the reconstructed images of the scanned layers for the two new models. The experimental results show good agreement with the numerical simulations, and the reconstructed 2D images provide conductivity configurations in the scanned layers of the aluminum foil and egg models. It can be concluded that the acoustic pressure of MAT-MI is produced by the divergence of the induced Lorentz force, and that the collected waveforms comprise wave clusters with bipolar vibration phases and different amplitudes, providing information about the conductivity boundaries in the scanned layer. With the simplified back projection algorithm for diffraction sources, the collected waveforms can be used to reconstruct a 2D conductivity contrast image, and the conductivity configuration in the scanned layer can be obtained in terms of shape and size in stripes with a spatial resolution of the acoustic wavelength. The favorable results further verify the validity and generality of the acoustic dipole radiation based theory and suggest the feasibility of MAT-MI as an effective electrical impedance contrast imaging approach for medical imaging.
Allocation of DSST in the New Implementation of the Astrodyweb Tools Web-site
NASA Astrophysics Data System (ADS)
San Juan, J. F.; Lara, M.; López, R.; López, L. M.; Weeden, B.; Cefola, P. J.
2012-09-01
The Draper Semianalytic Satellite Theory (DSST) is a semianalytic orbit propagator, implemented in Fortran to run from a command-line interface. The construction of DSST began at the Computer Sciences Corporation and continued at the Draper Laboratory in the late 1970s and early 1980s. There are two versions of this application. One can be found as an option within the Goddard Trajectory Determination System (GTDS), whereas the other is available as a standalone orbit propagator package. Both versions are constantly evolving, which allows DSST to take into account a wide variety of perturbation forces, selectable by means of a non-trivial options system at run time, and makes DSST a useful tool for performing short-term high-accuracy orbit determination as well as long-term evolution studies. DSST has been included as part of an open-source project for Space Situational Awareness and space-object catalog work. At IAC 2011 a first step was taken in this direction, and DSST was included in the Astrodyweb Tools web-site prototype [3, 4], which provided DSST with a friendly web interface, thus simplifying its use for both expert and non-expert users. This prototype has since evolved into a stable platform based on the Drupal open-source content management system (http://drupal.org), which simplifies the integration of our own application server. Drupal is supported by a large group of developers and users, and a significant number of web-sites have been created with it. In this work we present the integration of DSST in the new web-site, the new facilities provided by this platform to create a research community based on DSST, and comparison tests between the GTDS DSST, the DSST Standalone, and the DSST Web version. These tests will be made available in order to give users a better understanding of DSST. REFERENCES [1] J. G. Neelon, P. J. Cefola, and R. J. Proulx, "Current Development of the Draper Semianalytical Satellite Theory Standalone Orbit Propagator Package," AAS Pre-print 97-731, AAS/AIAA Astrodynamics Conference, Sun Valley, ID, August 1997. [2] P. J. Cefola, D. Phillion, and K. S. Kim, "Improving Access to the Semi-Analytical Satellite Theory," AAS 09-341, AAS/AIAA Astrodynamics Specialist Conference, Pittsburgh, PA, August 2009. [3] P. J. Cefola, B. Weeden, and C. Levit, "Open Source Software Suite for Space Situational Awareness and Space Object Catalog Work," 4th International Conference on Astrodynamics Tools and Techniques, Madrid, Spain, 3-6 May 2010. [4] J. F. San Juan, R. López, and I. Pérez, "Nonlinear Dynamics Web Tools," 4th International Conference on Astrodynamics Tools and Techniques, Madrid, Spain, 3-6 May 2010. [5] J. F. San Juan, M. Lara, R. López, L. M. López, B. Weeden, and P. J. Cefola, "Using the DSST Semi-Analytical Orbit Propagator Package via the NondyWebTools/AstrodyWebTools," Proceedings of the 62nd International Astronautical Congress, Cape Town, South Africa, 2011.
Simplified practical test method for portable dose meters using several sealed radioactive sources.
Mikamoto, Takahiro; Yamada, Takahiro; Kurosawa, Tadahiro
2016-09-01
Sealed radioactive sources of small activity were employed for the determination of response and for tests of non-linearity and energy dependence of detector responses. A close source-to-detector geometry (0.3 m or less) was employed in practical tests of portable dose meters in order to accumulate statistically sufficient ionizing currents. The difference between the response in the experimentally studied field and that in the reference field complying with ISO 4037, caused by the non-uniformity of the radiation fluence at close geometry, was corrected by use of Monte Carlo simulation. As a consequence, the corrected results were consistent with the results obtained in the ISO 4037 reference field within their uncertainties.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Silva, E.R.C. da; Filho, B.J.C.
This paper presents a PWM current clamping circuit for improving a series resonant DC link converter. This circuit is capable of reducing current peaks to about 1.2-1.4 times the DC bias current. When desired, the resonant transition creates notches in the DC link current, allowing the converter's switches to synchronize with an external PWM strategy. A regulated DC current source may be obtained--by using a conventional rectifier source--to feed a DC load or a current source inverter. A phase-plane approach eases the understanding of the operation, control, and design procedure of the circuit. Another topology is derived and its features are compared to those of the first circuit. Simulation results for the simplified circuit and for a three-phase induction motor driven by such an inverter are presented. Moreover, the principle is corroborated by experimental results.
Development of electron beam ion source for nanoprocess using highly charged ions
NASA Astrophysics Data System (ADS)
Sakurai, Makoto; Nakajima, Fumiharu; Fukumoto, Takunori; Nakamura, Nobuyuki; Ohtani, Shunsuke; Mashiko, Shinro; Sakaue, Hiroyuki
2005-07-01
Highly charged ions are useful for producing nanostructures on various materials and are a key tool for realizing single-ion implantation. To meet these demands for applications in nanotechnology, we have designed an electron beam ion source. The design emphasizes the volume of the drift tubes, where highly charged ions are confined, and the efficiency of ion extraction from the drift tube through the collector electrode, in order to obtain as intense an ion beam as possible. The ion source uses a discrete superconducting magnet cooled by a closed-cycle refrigerator to reduce running costs and to simplify operating procedures. The electrodes of the electron gun, drift tubes, and collector are enclosed in an ultrahigh-vacuum tube that is inserted into the bore of the magnet system.
Open Source Clinical NLP - More than Any Single System.
Masanz, James; Pakhomov, Serguei V; Xu, Hua; Wu, Stephen T; Chute, Christopher G; Liu, Hongfang
2014-01-01
The number of Natural Language Processing (NLP) tools and systems for processing clinical free-text has grown as interest and processing capability have surged. Unfortunately any two systems typically cannot simply interoperate, even when both are built upon a framework designed to facilitate the creation of pluggable components. We present two ongoing activities promoting open source clinical NLP. The Open Health Natural Language Processing (OHNLP) Consortium was originally founded to foster a collaborative community around clinical NLP, releasing UIMA-based open source software. OHNLP's mission currently includes maintaining a catalog of clinical NLP software and providing interfaces to simplify the interaction of NLP systems. Meanwhile, Apache cTAKES aims to integrate best-of-breed annotators, providing a world-class NLP system for accessing clinical information within free-text. These two activities are complementary. OHNLP promotes open source clinical NLP activities in the research community and Apache cTAKES bridges research to the health information technology (HIT) practice.
Seismic hazard in the Nation's breadbasket
Boyd, Oliver; Haller, Kathleen; Luco, Nicolas; Moschetti, Morgan P.; Mueller, Charles; Petersen, Mark D.; Rezaeian, Sanaz; Rubinstein, Justin L.
2015-01-01
The USGS National Seismic Hazard Maps were updated in 2014 and included several important changes for the central United States (CUS). Background seismicity sources were improved using a new moment-magnitude-based catalog; a new adaptive, nearest-neighbor smoothing kernel was implemented; and maximum magnitudes for background sources were updated. Areal source zones developed by the Central and Eastern United States Seismic Source Characterization for Nuclear Facilities project were simplified and adopted. The weighting scheme for ground motion models was updated, giving more weight to models with a faster attenuation with distance compared to the previous maps. Overall, hazard changes (2% probability of exceedance in 50 years, across a range of ground-motion frequencies) were smaller than 10% in most of the CUS relative to the 2008 USGS maps despite new ground motion models and their assigned logic tree weights that reduced the probabilistic ground motions by 5–20%.
RF H-minus ion source development in China spallation neutron source
NASA Astrophysics Data System (ADS)
Chen, W.; Ouyang, H.; Xiao, Y.; Liu, S.; Lü, Y.; Cao, X.; Huang, T.; Xue, K.
2017-08-01
China Spallation Neutron Source (CSNS) phase-I project currently uses a Penning surface plasma H- ion source, which has a lifetime of several weeks with occasional sparks between the high-voltage electrodes. To extend the lifetime of the ion source and to prepare for CSNS phase-II, we are developing an RF negative hydrogen ion source with an external antenna. The configuration of the source is similar to the DESY external-antenna ion source and the SNS ion source. However, several changes have been made to improve the stability and the lifetime. Firstly, Si3N4 ceramic, with high thermal shock resistance and high thermal conductivity, is used for the plasma chamber, which can endure an average power of 2000 W. Secondly, the water-cooled antenna is brazed onto the chamber to improve the energy efficiency. Thirdly, cesium is injected directly into the plasma chamber if necessary, to simplify the design of the converter and the extraction. The area of stainless steel exposed to the plasma is minimized to reduce sputtering and degassing. Instead, Mo, Ta, and Pt-coated materials are used to face the plasma, which makes self-cleaning of the source possible.
Pfeuffer, Kevin P.; Schaper, J. Niklas; Shelley, Jacob T.; Ray, Steven J.; Chan, George C.-Y.; Bings, Nicolas H.; Hieftje, Gary M.
2013-01-01
The flowing atmospheric pressure afterglow (FAPA) is a promising new source for atmospheric pressure, ambient desorption/ionization mass spectrometry. However, problems exist with reproducible sample introduction into the FAPA source. To overcome this limitation, a new FAPA geometry has been developed in which concentric tubular electrodes are utilized to form a halo-shaped discharge; this geometry has been termed the halo-FAPA or h-FAPA. With this new geometry, it is still possible to achieve direct desorption and ionization from a surface; however, sample introduction through the inner capillary is also possible and improves interaction between the sample material (solution, vapor, or aerosol) and the plasma to promote desorption and ionization. The h-FAPA operates with a helium gas flow of 0.60 L/min outer, 0.30 L/min inner, applied current of 30 mA at 200 V for 6 watts of power. In addition, separation of the discharge proper and sample material prevents perturbations to the plasma. Optical-emission characterization and gas rotational temperatures reveal that the temperature of the discharge is not significantly affected (< 3% change at 450K) by water vapor during solution-aerosol sample introduction. The primary mass-spectral background species are protonated water clusters, and the primary analyte ions are protonated molecular ions (M+H+). Flexibility of the new ambient sampling source is demonstrated by coupling it with a laser ablation unit, a concentric nebulizer and a droplet-on-demand system for sample introduction. A novel arrangement is also presented in which the central channel of the h-FAPA is used as the inlet to a mass spectrometer. PMID:23808829
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bevins, N; Vanderhoek, M; Lang, S
2014-06-15
Purpose: Medical display monitor calibration and quality control present challenges to medical physicists. The purpose of this work is to demonstrate and share experiences with an open source package that allows for both initial monitor setup and routine performance evaluation. Methods: A software package, pacsDisplay, has been developed over the last decade to aid in the calibration of all monitors within the radiology group in our health system. The software is used to calibrate monitors to follow the DICOM Grayscale Standard Display Function (GSDF) via lookup tables installed on the workstation. Additional functionality facilitates periodic evaluations of both primary and secondary medical monitors to ensure satisfactory performance. This software is installed on all radiology workstations, and can also be run as a stand-alone tool from a USB disk. Recently, a database has been developed to store and centralize the monitor performance data and to provide long-term trends for compliance with internal standards and various accrediting organizations. Results: Implementation and utilization of pacsDisplay has resulted in improved monitor performance across the health system. Monitor testing is now performed at regular intervals and the software is being used across multiple imaging modalities. Monitor performance characteristics such as maximum and minimum luminance, ambient luminance and illuminance, color tracking, and GSDF conformity are loaded into a centralized database for system performance comparisons. Compliance reports for organizations such as MQSA, ACR, and TJC are generated automatically and stored in the same database. Conclusion: An open source software solution has simplified and improved the standardization of displays within our health system. This work serves as an example method for calibrating and testing monitors within an enterprise health system.
Ground heat flux and power sources of low-enthalpy geothermal systems
NASA Astrophysics Data System (ADS)
Bayer, Peter; Blum, Philipp; Rivera, Jaime A.
2015-04-01
Geothermal heat pumps commonly extract energy from the shallow ground at depths down to approximately 400 m. Vertical borehole heat exchangers are often applied, which are seasonally operated for decades. During this lifetime, thermal anomalies are induced in the ground and near-surface aquifers, which often grow over the years and which degrade the overall performance of the geothermal system. As a basis for prediction and control of the evolving energy imbalance in the ground, focus is typically set on ground temperatures. This is reflected in regulative temperature thresholds and in temperature trends, which serve as indicators of renewability and sustainability. In our work, we examine the fundamental heat flux and power sources, as well as their temporal and spatial variability, during geothermal heat pump operation. The underlying rationale is that knowledge of the primary heat sources is fundamental for controlling the evolution of ground temperatures. This insight is also important for judging the validity of simplified modelling frameworks. For instance, we reveal that vertical heat flux from the surface dominates the basal heat flux towards a borehole. Both fluxes need to be accounted for as proper vertical boundary conditions in the model. Additionally, the role of horizontal groundwater advection is inspected. Moreover, by adopting the ground energy deficit and long-term replenishment as criteria for system sustainability, an uncommon perspective is adopted that is based on the primary parameter rather than on induced local temperatures. In our synthetic study and dimensionless analysis, we demonstrate that the time of ground energy recovery after system shutdown may be longer than what is expected from local temperature trends. In contrast, unrealistically long recovery periods and extreme thermal anomalies are predicted when vertical ground heat fluxes are not accounted for and only the energy content of the geothermal reservoir is considered.
McGeachy, P; Khan, R
2012-07-01
In early stage prostate cancer, low dose rate (LDR) prostate brachytherapy is a favorable treatment modality, where small radioactive seeds are permanently implanted throughout the prostate. Treatment centres currently rely on a commercial optimization algorithm, IPSA, to generate seed distributions for treatment plans. However, commercial software does not allow the user access to the source code, thus reducing the flexibility for treatment planning and impeding any implementation of new and, perhaps, improved clinical techniques. An open source genetic algorithm (GA) has been encoded in MATLAB to generate seed distributions for a simplified prostate and urethra model. To assess the quality of the seed distributions created by the GA, both the GA and IPSA were used to generate seed distributions for two clinically relevant scenarios, and the quality of the GA distributions relative to IPSA distributions and to clinically accepted standards for seed distributions was investigated. The first clinically relevant scenario involved generating seed distributions for three different prostate volumes (19.2 cc, 32.4 cc, and 54.7 cc). The second scenario involved generating distributions for three separate seed activities (0.397 mCi, 0.455 mCi, and 0.5 mCi). Both the GA and IPSA met the clinically accepted criteria for the two scenarios, where distributions produced by the GA were comparable to IPSA in terms of full coverage of the prostate by the prescribed dose, and minimized dose to the urethra, which passes straight through the prostate. Further, the GA offered improved reduction of high dose regions (i.e., hot spots) within the planned target volume.
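The genetic-algorithm approach described in this abstract can be illustrated with a toy sketch. Everything below is invented for illustration: the inverse-square dose kernel, the coverage threshold, the central "urethra" penalty, and the GA parameters are not the authors' MATLAB implementation, and a clinical planner would use a proper dosimetric model (e.g., TG-43) rather than this kernel.

```python
import random
import math

random.seed(1)

# Candidate seed sites on a grid inside a unit-radius "prostate" disk (toy model).
SITES = [(x / 4.0, y / 4.0) for x in range(-4, 5) for y in range(-4, 5)
         if math.hypot(x / 4.0, y / 4.0) <= 1.0]
TARGETS = SITES                      # points where coverage is evaluated
URETHRA = (0.0, 0.0)                 # urethra passes through the center

def dose(plan, p):
    """Toy dose at point p: sum of bounded inverse-square kernels over active seeds."""
    return sum(1.0 / (0.05 + (s[0] - p[0])**2 + (s[1] - p[1])**2)
               for s, on in zip(SITES, plan) if on)

def fitness(plan):
    """Reward covered target points, penalize excess dose at the urethra."""
    covered = sum(1 for t in TARGETS if dose(plan, t) >= 8.0)
    return covered - 0.5 * max(0.0, dose(plan, URETHRA) - 12.0)

def evolve(pop_size=30, gens=40):
    n_bits = len(SITES)
    pop = [[random.random() < 0.3 for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 3]          # elitism: best third survives unchanged
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(n_bits)    # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(n_bits)      # single-bit mutation
            child[i] = not child[i]
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
```

Because the best third of each generation survives unchanged, the best fitness never decreases across generations; IPSA, by contrast, uses simulated annealing on a full dosimetric model, so this sketch only conveys the shape of the optimization loop.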
Effect of seasonal and long-term changes in stress on sources of water to wells
Reilly, Thomas E.; Pollock, David W.
1995-01-01
The source of water to wells is ultimately the location where the water flowing to a well enters the boundary surface of the ground-water system. In ground-water systems that receive most of their water from areal recharge, the location of the water entering the system is at the water table. The area contributing recharge to a discharging well is the surface area that defines the location of the water entering the ground-water system. Water entering the system at the water table flows to the well and is eventually discharged from the well. Many State agencies are currently (1994) developing wellhead-protection programs. The thrust of some of these programs is to protect water supplies by determining the areas contributing recharge to water-supply wells and by specifying regulations to minimize the opportunity for contamination of the recharge water by activities at the land surface. In the analyses of ground-water flow systems, steady-state average conditions are frequently used to simplify the problem and make a solution tractable. Recharge is usually cyclic in nature, however, having seasonal cycles and longer term climatic cycles. A hypothetical system is quantitatively analyzed to show that, in many cases, these cyclic changes in the recharge rates apparently do not significantly affect the location and size of the areas contributing recharge to wells. The ratio of the mean travel time to the length of the cyclic stress period appears to indicate whether the transient effects of the cyclic stress must be explicitly represented in the analysis of contributing areas to wells. For the cases examined, if the ratio of the mean travel time to the period of the cyclic stress was much greater than one, then the transient area contributing recharge to wells was similar to the area calculated using an average steady-state condition.
Noncyclic long-term transient changes in water use, however, and cyclic stresses on systems with ratios less than 1 can and do affect the location and size of the areas contributing recharge to wells.
Large area multiarc ion beam source "MAIS"
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engelko, V.; Giese, H.; Schalk, S.
1996-12-31
A pulsed large area intense ion beam source is described, in which the ion-emitting plasma is built up by an array of individual discharge units homogeneously distributed over the surface of a common discharge electrode. A particularly advantageous feature of the source is that only one common energy supply is necessary for plasma generation and the subsequent acceleration of the ions. This simplifies the source design and provides inherent synchronization of plasma production and ion extraction. The homogeneity of the plasma density was found to be superior to that of plasma sources using plasma expanders. Originally conceived for the production of proton beams, the source can easily be modified for the production of beams composed of carbon and metal ions or mixed ion species. Results of investigations of the source performance for the production of a proton beam are presented. The maximum beam current achieved to date is of the order of 100 A, with a particle kinetic energy of 15-30 keV and a pulse length in the range of 10 μs.
Black holes in higher derivative gravity.
Lü, H; Perkins, A; Pope, C N; Stelle, K S
2015-05-01
Extensions of Einstein gravity with higher-order derivative terms arise in string theory and other effective theories, as well as being of interest in their own right. In this Letter we study static black-hole solutions in the example of Einstein gravity with additional quadratic curvature terms. A Lichnerowicz-type theorem simplifies the analysis by establishing that they must have vanishing Ricci scalar curvature. By numerical methods we then demonstrate the existence of further black-hole solutions over and above the Schwarzschild solution. We discuss some of their thermodynamic properties, and show that they obey the first law of thermodynamics.
RMS massless arm dynamics capability in the SVDS. [equations of motion
NASA Technical Reports Server (NTRS)
Flanders, H. A.
1977-01-01
The equations of motion for the remote manipulator system, assuming that the masses and inertias of the arm can be neglected, are developed for implementation into the space vehicle dynamics simulation (SVDS) program for the Orbiter payload system. The arm flexibility is incorporated into the equations by the computation of flexibility terms for use in the joint servo model. The approach developed in this report is based on using the Jacobian transformation matrix to transform force and velocity terms between the configuration space and the task space to simplify the form of the equations.
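The Jacobian-based transformation described above can be sketched for a planar two-link arm; the link lengths and numeric values below are illustrative assumptions, not SVDS or Orbiter RMS parameters.

```python
import numpy as np

# Planar 2-link arm (link lengths are arbitrary illustration values).
L1, L2 = 1.0, 0.8

def jacobian(q1, q2):
    """Geometric Jacobian of the end-effector position for two revolute joints."""
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

q = np.array([0.3, 0.5])       # joint angles [rad]
qdot = np.array([0.1, -0.2])   # joint rates [rad/s]
F = np.array([0.0, 5.0])       # force applied at the end effector [N]

J = jacobian(*q)
xdot = J @ qdot                # task-space velocity from joint rates
tau = J.T @ F                  # joint torques equivalent to the task-space force
```

For a massless arm the statics reduce to tau = Jᵀ F, and the virtual-work identity Fᵀẋ = τᵀq̇ holds exactly; the flexibility terms the report feeds into the joint servo model are not represented in this sketch.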
A relativistic analysis of clock synchronization
NASA Technical Reports Server (NTRS)
Thomas, J. B.
1974-01-01
The relativistic conversion between coordinate time and atomic time is reformulated to allow simpler time calculations relating analysis in solar-system barycentric coordinates (using coordinate time) with earth-fixed observations (measuring earth-bound proper time or atomic time). After an interpretation of terms, this simplified formulation, which has a rate accuracy of about 10 to the minus 15th power, is used to explain the conventions required in the synchronization of a worldwide clock network and to analyze two synchronization techniques: portable clocks and radio interferometry. Finally, pertinent experimental tests of relativity are briefly discussed in terms of the reformulated time conversion.
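The leading terms in such a coordinate-time/proper-time rate conversion are the gravitational potential and velocity (second-order Doppler) contributions. The sketch below evaluates them for a clock fixed on the rotating equator using standard constants; it shows only the first-order textbook terms, not the full reformulation of the paper, which keeps terms down to the 1e-15 rate level.

```python
import math

# Standard constants (CODATA/IERS values, rounded).
GM_EARTH = 3.986004418e14   # m^3/s^2
R_EQ     = 6.378137e6       # equatorial radius, m
C        = 2.99792458e8     # speed of light, m/s
T_SID    = 86164.0905       # sidereal day, s

# First-order fractional rate offsets of a proper clock on the rotating equator
# relative to a distant, non-rotating, potential-free coordinate clock:
grav = GM_EARTH / (R_EQ * C**2)     # gravitational redshift term
v = 2 * math.pi * R_EQ / T_SID      # rotational speed at the equator, ~465 m/s
vel = v**2 / (2 * C**2)             # second-order Doppler (time dilation) term

total = grav + vel                  # combined fractional rate offset, ~7e-10
```

The gravitational term dominates at about 7 parts in 10^10, three orders of magnitude above the rotational term, which is why clock-network conventions must fix these rate offsets before the 1e-15-level residual terms matter.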
Quantitative accuracy of the simplified strong ion equation to predict serum pH in dogs.
Cave, N J; Koo, S T
2015-01-01
An electrochemical approach to the assessment of acid-base states should provide a better mechanistic explanation of the metabolic component than methods that consider only pH and carbon dioxide. The hypothesis was that the simplified strong ion equation (SSIE), using published dog-specific values, would predict the measured serum pH of diseased dogs. Ten dogs, hospitalized for various reasons, were enrolled in a prospective study of a convenience sample of a consecutive series of dogs admitted to the Massey University Veterinary Teaching Hospital (MUVTH), from which serum biochemistry and blood gas analyses were performed at the same time. Serum pH was calculated (Hcal+) using the SSIE and published values for the concentration and dissociation constant of the nonvolatile weak acids (Atot and Ka), and Hcal+ was subsequently compared with each dog's actual pH (Hmeasured+). To determine the source of discordance between Hcal+ and Hmeasured+, the calculations were repeated using a series of substituted values for Atot and Ka. Hcal+ did not approximate Hmeasured+ for any dog (P = 0.499, r(2) = 0.068) and was consistently more basic. Substituting values for Atot and Ka did not significantly improve the accuracy (r(2) = 0.169 to <0.001). Substituting the effective SID (Atot-[HCO3-]) produced a strong association between Hcal+ and Hmeasured+ (r(2) = 0.977). Using the simplified strong ion equation with the published values for Atot and Ka does not appear to provide a quantitative explanation of the acid-base status of dogs. The efficacy of substituting the effective SID in the simplified strong ion equation suggests that the error lies in calculating the SID.
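The strong ion calculation being tested can be sketched as root-finding on the electroneutrality condition SID = [HCO3-] + [A-]. The constants below (SID, Atot, Ka, and the Henderson factor 24) are illustrative textbook-style values, not the dog-specific values evaluated in the study, and the sketch neglects the small [H+] and [OH-] contributions.

```python
def strong_ion_ph(sid_meq=38.0, pco2_mmhg=40.0, atot_mmol=17.4, ka_neq=80.0):
    """Solve SID = [HCO3-] + [A-] for [H+] (nEq/L) by bisection.
    [HCO3-] = 24 * pCO2 / [H+]  (Henderson relation, mEq/L with [H+] in nEq/L);
    [A-]    = Atot * Ka / (Ka + [H+])  (weak-acid dissociation).
    All constants are illustrative, not the study's published canine values."""
    import math

    def residual(h):
        hco3 = 24.0 * pco2_mmhg / h                 # mEq/L
        a_minus = atot_mmol * ka_neq / (ka_neq + h) # mEq/L
        return sid_meq - hco3 - a_minus             # monotonically increasing in h

    lo, hi = 10.0, 200.0    # bracket roughly pH 8.0 down to pH 6.7
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if residual(mid) < 0.0:
            lo = mid        # [H+] too small: both subtracted terms too large
        else:
            hi = mid
    h = 0.5 * (lo + hi)
    return -math.log10(h * 1e-9)  # convert nEq/L to Eq/L, then to pH
```

With these illustrative inputs the solver lands near a physiologic pH, and raising pCO2 acidifies the result, which is the qualitative behavior the SSIE is meant to capture; the study's discordance concerned the quantitative accuracy of the published Atot and Ka, not this mechanism.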
Real-time realizations of the Bayesian Infrasonic Source Localization Method
NASA Astrophysics Data System (ADS)
Pinsky, V.; Arrowsmith, S.; Hofstetter, A.; Nippress, A.
2015-12-01
The Bayesian Infrasonic Source Localization method (BISL), introduced by Modrak et al. (2010) and upgraded by Marcillo et al. (2014), is designed for accurate estimation of the origin of atmospheric events at local, regional, and global scales by seismic and infrasonic networks and arrays. BISL is based on probabilistic models of the source-station infrasonic signal propagation time, picking time, and azimuth estimate, merged with prior knowledge about the celerity distribution. It requires, at each hypothetical source location, integration of the product of the corresponding source-station likelihood functions multiplied by a prior probability density function of celerity over the multivariate parameter space. The present BISL realization is a generally time-consuming procedure based on numerical integration. The computational scheme proposed here simplifies the target function so that the integrals are taken exactly and are represented via standard functions. This makes the procedure much faster and realizable in real time without practical loss of accuracy. The procedure, executed as PYTHON-FORTRAN code, demonstrates high performance on a set of model and real data.
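The likelihood machinery that BISL integrates can be sketched with a brute-force grid search, which is exactly the kind of numerical evaluation the paper's closed-form scheme replaces. The station layout, celerity, and noise level below are invented, and the celerity prior is collapsed to a single fixed value; for Gaussian picking errors the unknown origin time can be profiled out analytically as the mean residual.

```python
import numpy as np

# Hypothetical infrasound stations (km) and a true source at (10, 20) km.
stations = np.array([[100.0, 0.0], [0.0, 100.0], [-100.0, 0.0], [0.0, -100.0]])
src = np.array([10.0, 20.0])
celerity = 0.30                       # km/s, fixed here instead of a celerity prior
t_obs = np.linalg.norm(stations - src, axis=1) / celerity
sigma = 2.0                           # assumed picking-time std dev, s

# Grid search: at each trial source, remove the best-fit origin time (the mean
# residual) and evaluate the Gaussian log-likelihood of the demeaned residuals.
xs = np.arange(-50.0, 51.0, 1.0)
best, best_ll = None, -np.inf
for x in xs:
    for y in xs:
        tt = np.linalg.norm(stations - np.array([x, y]), axis=1) / celerity
        r = t_obs - tt
        r -= r.mean()                 # optimal origin time profiled out
        ll = -0.5 * np.sum(r**2) / sigma**2
        if ll > best_ll:
            best_ll, best = ll, np.array([x, y])
```

With noiseless picks the maximum-likelihood cell coincides with the true source. BISL proper additionally integrates over the celerity prior and folds in azimuth likelihoods, which is what makes the closed-form evaluation of those integrals worthwhile.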
Testing the stand-alone microbeam at Columbia University.
Garty, G; Ross, G J; Bigelow, A W; Randers-Pehrson, G; Brenner, D J
2006-01-01
The stand-alone microbeam at Columbia University presents a novel approach to biological microbeam irradiation studies. Foregoing a conventional accelerator as a source of energetic ions, a small, high-specific-activity alpha emitter is used. Alpha particles emitted from this source are focused using a compound magnetic lens consisting of 24 permanent magnets arranged in two quadrupole triplets. Using a 'home-made' 6.5 mCi polonium source, a 1 alpha particle s(-1), 10 microm diameter microbeam can, in principle, be realised. As the alpha source energy is constant, once the microbeam has been set up, no further adjustments are necessary apart from a periodic replacement of the source. The use of permanent magnets eliminates the need for the bulky power supplies and cooling systems required by other types of ion lenses and greatly simplifies operation. It also makes the microbeam simple and cheap enough to be realised in any large lab. The microbeam design, as well as first tests of its performance using an accelerator-based beam, are presented here.
R. Haggerty
2013-01-01
In this technical note, a steady-state analytical solution of concentrations of a parent solute reacting to a daughter solute, both of which are undergoing transport and multirate mass transfer, is presented. Although the governing equations are complicated, the resulting solution can be expressed in simple terms. A function of the ratio of concentrations, ln (daughter...
Piao, Daqing; Barbour, Randall L.; Graber, Harry L.; Lee, Daniel C.
2015-01-01
Abstract. This work analytically examines some dependences of the differential pathlength factor (DPF) for steady-state photon diffusion in a homogeneous medium on the shape, dimension, and absorption and reduced scattering coefficients of the medium. The medium geometries considered include a semi-infinite geometry, an infinite-length cylinder evaluated along the azimuthal direction, and a sphere. Steady-state photon fluence rate in the cylinder and sphere geometries is represented by a form involving the physical source, its image with respect to the associated extrapolated half-plane, and a radius-dependent term, leading to simplified formula for estimating the DPFs. With the source-detector distance and medium optical properties held fixed across all three geometries, and equal radii for the cylinder and sphere, the DPF is the greatest in the semi-infinite and the smallest in the sphere geometry. When compared to the results from finite-element method, the DPFs analytically estimated for 10 to 25 mm source–detector separations on a sphere of 50 mm radius with μa=0.01 mm−1 and μs′=1.0 mm−1 are on average less than 5% different. The approximation for sphere, generally valid for a diameter ≥20 times of the effective attenuation pathlength, may be useful for rapid estimation of DPFs in near-infrared spectroscopy of an infant head and for short source–detector separation. PMID:26465613
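For the semi-infinite case discussed above, a widely used closed-form diffusion-theory approximation of the steady-state DPF provides a quick numerical check. This is the standard semi-infinite formula, not the paper's image-source expressions for the cylinder and sphere, which add radius-dependent terms.

```python
import math

def dpf_semi_infinite(mu_a, mu_s_prime, rho):
    """Steady-state DPF for a homogeneous semi-infinite medium (standard
    diffusion-theory approximation).  mu_a and mu_s_prime in 1/mm;
    source-detector separation rho in mm."""
    mu_eff = math.sqrt(3.0 * mu_a * mu_s_prime)   # effective attenuation coefficient
    return (0.5 * math.sqrt(3.0 * mu_s_prime / mu_a)
            * (1.0 - 1.0 / (1.0 + rho * mu_eff)))
```

With the abstract's optical properties (mu_a = 0.01 mm^-1, mu_s' = 1.0 mm^-1) and a 25 mm separation this gives a DPF of about 7, and the DPF increases with separation; the paper's result that the sphere and cylinder DPFs fall below this semi-infinite value is not reproduced by this single-geometry formula.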
PD5: a general purpose library for primer design software.
Riley, Michael C; Aubrey, Wayne; Young, Michael; Clare, Amanda
2013-01-01
Complex PCR applications for large genome-scale projects require fast, reliable and often highly sophisticated primer design software applications. Presently, such applications use pipelining methods to utilise many third-party applications, which involves file parsing, interfacing and data conversion that is slow and prone to error. A fully integrated suite of software tools for primer design would considerably improve the development time, the processing speed, and the reliability of bespoke primer design software applications. The PD5 software library is an open-source collection of classes and utilities, providing a complete collection of software building blocks for primer design and analysis. It is written in object-oriented C++ with an emphasis on classes suitable for efficient and rapid development of bespoke primer design programs. The modular design of the software library simplifies the development of specific applications and also integration with existing third-party software where necessary. We demonstrate several applications created using this software library that have already proved to be effective, but we view the project as a dynamic environment for building primer design software and it is open for future development by the bioinformatics community. Therefore, the PD5 software library is published under the terms of the GNU General Public License, which guarantees access to the source code and allows redistribution and modification. The PD5 software library is downloadable from Google Code and the accompanying Wiki includes instructions and examples: http://code.google.com/p/primer-design.
The Goldwater Nichols Act of 1986: 30 Years of Acquisition Reform
2016-12-01
making it more difficult to purchase commercial items from a sole source when the value is above the simplified acquisition threshold; such language ...that private-sector witnesses supported language in the bill that proposed “an aggressive training program for the acquisition workforce” because “a...TECHNOLOGICAL EDGE ACT (2015) The most recent effort before the current sweeping reform language proposed in the 2017 NDAA was the Agile Acquisition to Retain
[Organization of monitoring of electromagnetic radiation in the urban environment].
Savel'ev, S I; Dvoeglazova, S V; Koz'min, V A; Kochkin, D E; Begishev, M R
2008-01-01
The authors describe new approaches to monitoring the urban environment, including sources of electromagnetic radiation and noise. Electronic maps of the study area are produced by constructing isolines or plotting the actual levels of the controlled factors. These approaches to electromagnetic and acoustic monitoring make it possible to automate the measurement process, to analyze the prevailing situation, and to simplify the risk-control methodology.
NASA Astrophysics Data System (ADS)
Bai, Yang; Wu, Lixin; Zhou, Yuan; Li, Ding
2017-04-01
Nitrogen oxides (NOx) and sulfur dioxide (SO2) emissions from coal combustion, which are oxidized quickly in the atmosphere, resulting in secondary aerosol formation and acid deposition, are the main sources of China's regional fog-haze pollution. Many studies have quantitatively estimated the lifetimes and emissions of NO2 and SO2 for large point sources such as coal-fired power plants and cities using satellite measurements. However, few of these methods are suitable for sources located in a heterogeneously polluted background. In this work, we present a simplified emission effective-radius extraction model for point sources to study the NO2 and SO2 reduction trend in China with complex polluted sources. First, to determine the time range over which actual emissions can be derived from satellite observations, the spatial distribution characteristics of the mean daily, monthly, seasonal and annual concentrations of OMI NO2 and SO2 around a single power plant were analyzed and compared. Then, a 100 km × 100 km geographical grid with a 1 km step was established around the source, and the mean concentration of all satellite pixels covering each grid point was calculated by the area-weight pixel-averaging approach. The emission effective radius is defined by the concentration gradient values near the power plant. Finally, the developed model is employed to investigate the characteristics and evolution of NO2 and SO2 emissions, and to verify the effectiveness of the flue gas desulfurization (FGD) and selective catalytic reduction (SCR) devices applied in coal-fired power plants over the 10-year period from 2006 to 2015. It can be observed that the spatial distribution pattern of NO2 and SO2 concentrations in the vicinity of a large coal-burning source was affected not only by the emissions of the source itself, but also by the pollutant transmission and diffusion driven by meteorological factors in different seasons.
Our proposed model can be used to identify the effective operation time of the FGD and SCR equipment in coal-fired power plants.
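The area-weight pixel-averaging step described above amounts to a weighted mean over the satellite pixels overlapping each 1 km grid point; a minimal sketch (the data structure is hypothetical, not the authors' code):

```python
def area_weighted_mean(pixels):
    """Area-weight pixel-averaging: each satellite pixel contributes
    its column concentration weighted by the area of its overlap with
    the grid cell.  'pixels' holds (concentration, overlap_area) pairs."""
    total_area = sum(area for _, area in pixels)
    return sum(conc * area for conc, area in pixels) / total_area

# One grid point overlapped by two OMI pixels of unequal footprint
mean_conc = area_weighted_mean([(10.0, 1.0), (20.0, 3.0)])  # 17.5
```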
NASA Astrophysics Data System (ADS)
Johnston, C. D.; Davis, G. B.; Bastow, T.; Annable, M. D.; Trefry, M. G.; Furness, A.; Geste, Y.; Woodbury, R.; Rhodes, S.
2011-12-01
Measures of the source mass and depletion characteristics of recalcitrant dense non-aqueous phase liquid (DNAPL) contaminants are critical elements for assessing the performance of remediation efforts, in addition to understanding the relationships between source mass depletion and changes to dissolved contaminant concentration and mass flux in groundwater. Here we present results of applying analytical source-depletion concepts to pumping from within the DNAPL source zone of a 10-m thick heterogeneous layered aquifer to estimate the original source mass and characterise the time trajectory of source depletion and mass flux in groundwater. The multi-component, reactive DNAPL source consisted of the brominated solvent tetrabromoethane (TBA) and its transformation products (mostly tribromoethene - TriBE). Coring and multi-level groundwater sampling indicated the DNAPL to be mainly in lower-permeability layers, suggesting the source had already undergone appreciable depletion. Four simplified source dissolution models (exponential, power function, error function and rational mass) were able to describe the concentration history of the total molar concentration of brominated organics in extracted groundwater during 285 days of pumping. Approximately 152 kg of brominated compounds were extracted. The lack of significant kinetic mass-transfer limitations in pumped concentrations was notable, despite the heterogeneous layering in the aquifer and the distribution of DNAPL. There was little to choose between the model fits to the pumped concentration time series. The variance of groundwater velocities in the aquifer, determined during a partitioning inter-well tracer test (PITT), was used to parameterise the models. However, the models were found to be relatively insensitive to this parameter. All models indicated an initial source mass of around 250 kg, which compared favourably to an estimate of 220 kg derived from the PITT.
The extrapolated concentrations from the dissolution models diverged, showing disparate approaches to possible remediation objectives. However, they also showed that an appreciable proportion of the source would need to be removed to discriminate between the models, which may limit the utility of such modelling early in the history of a DNAPL source. A further limitation is the simplified approach of analysing the combined parent/daughter compounds, which have different solubilities, as a total molar concentration. Although the fitted results gave confidence in this approach, there were appreciable changes in relative abundance. The dissolution and partitioning processes are discussed in relation to the lower-solubility TBA becoming dominant in pumped groundwater over time, despite its known rapid transformation to TriBE. These processes are also related to the architecture of the depleting source as revealed by multi-level groundwater sampling under reversed pumping/injection conditions.
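Two of the four simplified dissolution models (exponential and power function) can be fitted to a pumped-concentration time series by log-linear least squares; the sketch below uses synthetic data, not the field measurements, and the fitting procedure is a generic stand-in for whatever the authors used:

```python
import numpy as np

def fit_exponential(t, c):
    """Fit C(t) = C0 * exp(-k*t) by linear least squares on ln C."""
    slope, intercept = np.polyfit(t, np.log(c), 1)
    return np.exp(intercept), -slope          # C0, k

def fit_power(t, c):
    """Fit C(t) = C0 * t**(-b) by least squares on ln C vs ln t."""
    slope, intercept = np.polyfit(np.log(t), np.log(c), 1)
    return np.exp(intercept), -slope          # C0, b

# Synthetic 285-day pumped-concentration series (illustrative units)
t = np.linspace(1.0, 285.0, 50)
c0_hat, k_hat = fit_exponential(t, 120.0 * np.exp(-0.008 * t))
```

With only a short record, both models can fit the data comparably well, which is exactly the discrimination problem noted in the abstract.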
Development of a Comprehensive Community Nitrogen Oxide Emissions Reduction Toolkit (CCNERT)
NASA Astrophysics Data System (ADS)
Sung, Yong Hoon
The main objective of this study is to research and develop a simplified tool to estimate energy use in a community and its associated effects on air pollution. The tool is intended to predict the impacts of selected energy conservation options and efficiency programs on emission reduction, and to help local governments and their residents understand and manage information collection and the procedures to be used. This study presents a broad overview of the community-wide energy use and NOx emissions inventory process, along with various simplified procedures to estimate each sector's energy use. To better understand community-wide energy use and its associated NOx emissions, the City of College Station, Texas, was selected as a case study community for this research. While one community might successfully reduce NOx emissions by adopting electricity efficiency programs in its buildings, another might be equally successful by changing the mix of fuel sources used to generate the electricity consumed by the community. In yet a third community, low-NOx automobiles may be mandated. Unfortunately, the impact and cost of one strategy over another change over time as major sources of pollution are reduced. This research therefore proposes to help community planners answer these questions and to assist local communities with their NOx emission reduction plans by developing a Comprehensive Community NOx Emissions Reduction Toolkit (CCNERT). The proposed simplified tool could have a substantial impact on reducing NOx emissions by providing decision-makers with a preliminary understanding of the impacts of various energy efficiency programs on emission reductions.
To help decision makers, this study has addressed these issues by providing a general framework for examining how a community's non-renewable energy use leads to NOx emissions, by quantifying each end-user's energy usage and its associated NOx emissions, and by evaluating the environmental benefits of various types of energy saving options.
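At its simplest, a community inventory of this kind multiplies each sector's energy use by an emission factor; the sector names and factor values below are placeholders, not the figures used for College Station:

```python
def sector_nox_emissions(energy_use, emission_factors):
    """NOx inventory: each sector's annual energy use (MWh) times a
    sector emission factor (kg NOx per MWh)."""
    return {s: energy_use[s] * emission_factors[s] for s in energy_use}

# Placeholder community data (MWh/yr and kg NOx/MWh)
use = {"residential": 120_000.0, "commercial": 80_000.0, "transport": 150_000.0}
ef = {"residential": 0.4, "commercial": 0.5, "transport": 1.2}
nox = sector_nox_emissions(use, ef)
total_nox = sum(nox.values())  # kg NOx per year
```

Comparing `total_nox` before and after an efficiency program (reduced `use`) or a fuel-mix change (reduced `ef`) gives the preliminary screening estimate the toolkit aims at.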
Yamamoto, Hiroshi; Ogawa, Kenichi; Huaman Battifora, Henry; Yamamuro, Kaori; Ishitake, Tatsuya
2018-05-24
Cognitive dysfunction due to delirium or dementia is a common finding in acutely ill geriatric patients, but often remains undetected. A brief and sensitive clinical identification method could prevent errors or complications when evaluating the mental status of elderly patients. The aim was to evaluate the usefulness and clinical implications of the revised simplified short-term memory recall test (STMT-R) in geriatric patients admitted to the emergency department, with age, gender, dementia history, serum albumin, underlying diseases and clinical outcome used as comparative factors. Mini-mental state examination and STMT-R scores were initially compared and a positive correlation was observed (r = 0.66, p < 0.001). Subsequently, 885 inpatients aged over 50 years underwent STMT-R evaluation between October 2014 and September 2015. We defined cognitive dysfunction as an STMT-R score ≤ 4 out of a maximum of 8. Among the enrolled patients, 52.2% were female and the mean age was 78.9 years. There were 159 patients who were unable to complete the test (incomplete testing group). We observed cognitive dysfunction in 460 patients, while 266 did not have cognitive dysfunction. There were significant differences between those with and without cognitive dysfunction in terms of age, dementia history, underlying respiratory diseases, and hospital outcome. Cognitive dysfunction at admission can have a negative effect on the hospital outcomes of elderly patients. Age, a history of dementia and underlying respiratory diseases may also influence cognitive functional decline.
Effect of Experience of Use on The Process of Formation of Stereotype Images on Shapes of Products
NASA Astrophysics Data System (ADS)
Kwak, Yong-Min; Yamanaka, Toshimasa
Some terms used in this research need explanation to help readers better understand its contents. Originally, 'stereotype' meant the lead plate cast from a mold in letterpress printing, but it is now used to indicate a simplified and fixed notion toward a certain group ('knowledge in fixed form'), or an image simplified and generalized over the members of a certain group.[1] Stereotypes are generally invoked in negative cases, but they have both positive and negative sides.[2, 3] I believe that research on the factors that form stereotype[4] images commonly felt by a large number of people may suggest a new research methodology for areas that require a high level of creative thinking, such as design and research on emotions. Stereotype images appear between persons and groups, and in images of countries, enterprises and other organizations. For example, as we often hear remarks such as 'He may be oo because he is from oo', we hold strong images of characteristics commonly attributed to persons belonging to certain categories, after tying regions and persons to dividing categories.[5, 6] In this research, I define such images as stereotype images. The same phenomenon appears for articles used in daily life. In this research, I established the hypothesis that stereotype images exist for products and verified it through experiments.
NASA Technical Reports Server (NTRS)
Hoots, F. R.; Fitzpatrick, P. M.
1979-01-01
The classical Poisson equations of rotational motion are used to study the attitude motions of an earth orbiting, rapidly spinning gyroscope perturbed by the effects of general relativity (Einstein theory). The center of mass of the gyroscope is assumed to move about a rotating oblate earth in an evolving elliptic orbit which includes all first-order oblateness effects produced by the earth. A method of averaging is used to obtain a transformation of variables, for the nonresonance case, which significantly simplifies the Poisson differential equations of motion of the gyroscope. Long-term solutions are obtained by an exact analytical integration of the simplified transformed equations. These solutions may be used to predict both the orientation of the gyroscope and the motion of its rotational angular momentum vector as viewed from its center of mass. The results are valid for all eccentricities and all inclinations not near the critical inclination.
The refractive index in electron microscopy and the errors of its approximations.
Lentzen, M
2017-05-01
In numerical calculations for electron diffraction, a simplified form of the electron-optical refractive index, linear in the electric potential, is often used. In recent years improved calculation schemes have been proposed, aiming at higher accuracy by including higher-order terms of the electric potential. These schemes start from the relativistically corrected Schrödinger equation and use a second simplified form, now for the refractive index squared, being linear in the electric potential. The second- and higher-order corrections thus determined have, however, a large error compared to those derived from the relativistically correct refractive index. The impact of the two simplifications on electron diffraction calculations is assessed through numerical comparison of the refractive index at high-angle Coulomb scattering and of cross-sections for a wide range of scattering angles, kinetic energies, and atomic numbers. Copyright © 2016 Elsevier B.V. All rights reserved.
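The two simplifications can be compared numerically against the relativistically correct refractive index, which follows from the ratio of the electron momentum inside and outside the potential. The sketch below uses standard expressions in terms of kinetic energy T and local potential energy eV (both in eV); it illustrates the kind of comparison made in the paper, not its exact computation:

```python
import math

MC2 = 510_998.95  # electron rest energy, eV

def n_exact(T, eV):
    """Relativistically correct refractive index n = p(V)/p(0),
    with p*c = sqrt(T*(T + 2*m*c^2)) outside the potential."""
    return math.sqrt((T + eV) * (T + eV + 2.0 * MC2)
                     / (T * (T + 2.0 * MC2)))

def n_linear(T, eV):
    """First simplification: n taken linear in the potential."""
    return 1.0 + eV * (T + MC2) / (T * (T + 2.0 * MC2))

def n_sq_linear(T, eV):
    """Second simplification: n**2 taken linear in the potential."""
    return math.sqrt(1.0 + 2.0 * eV * (T + MC2) / (T * (T + 2.0 * MC2)))
```

For small potentials all three forms agree; at potential energies comparable to T (close nuclear passages in high-angle Coulomb scattering), the two linearized forms bracket the exact value in this example.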
Incompressible Navier-Stokes Computations with Heat Transfer
NASA Technical Reports Server (NTRS)
Kiris, Cetin; Kwak, Dochan; Rogers, Stuart; Kutler, Paul (Technical Monitor)
1994-01-01
The existing pseudocompressibility method for the system of incompressible Navier-Stokes equations is extended to heat transfer problems by including the energy equation. The solution method is based on the pseudocompressibility approach and uses an implicit upwind differencing scheme together with the Gauss-Seidel line relaxation method. The current computations use the one-equation Baldwin-Barth turbulence model, which is derived from a simplified form of the standard k-epsilon model equations. Both forced and natural convection problems are examined. Numerical results for turbulent reattaching flow behind a backward-facing step will be compared against experimental measurements for the forced convection case. The validity of the Boussinesq approximation for simplifying the buoyancy force term will be investigated. The natural convective flow structure generated by heat transfer in a vertical rectangular cavity will be studied. The numerical results will be compared with experimental measurements by Morrison and Tran.
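The Boussinesq treatment referred to above keeps the density constant everywhere except in the gravity term of the momentum equation, where a buoyancy force per unit mass g·β·(T − T_ref) appears. A minimal sketch with illustrative values:

```python
def boussinesq_buoyancy(T, T_ref, beta, g=9.81):
    """Buoyancy force per unit mass under the Boussinesq
    approximation: density is treated as constant except in the
    gravity term, where rho ~ rho_ref * (1 - beta * (T - T_ref)).
    The approximation is valid only while beta*(T - T_ref) is small."""
    return g * beta * (T - T_ref)

# Air-like thermal expansion coefficient, 20 K above the reference state
f_b = boussinesq_buoyancy(320.0, 300.0, beta=3.4e-3)  # m/s^2
```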
Relativistic theory of surficial Love numbers
NASA Astrophysics Data System (ADS)
Landry, Philippe; Poisson, Eric
2014-06-01
A relativistic theory of surficial Love numbers, which characterize the surface deformation of a body subjected to tidal forces, was initiated by Damour and Nagar. We revisit this effort in order to extend it, clarify some of its aspects, and simplify its computational implementation. First, we refine the definition of surficial Love numbers proposed by Damour and Nagar and formulate it directly in terms of the deformed curvature of the body's surface, a meaningful geometrical quantity. Second, we develop a unified theory of surficial Love numbers that applies equally well to material bodies and black holes. Third, we derive a compactness-dependent relation between the surficial and (electric-type) gravitational Love numbers of a perfect-fluid body and show that it reduces to the familiar Newtonian relation when the compactness is small. And fourth, we simplify the tasks associated with the practical computation of the surficial and gravitational Love numbers for a material body.
Vitoria, Marco; Ford, Nathan; Doherty, Meg; Flexner, Charles
2014-01-01
The global scale-up of antiretroviral therapy (ART) over the past decade represents one of the great public health and human rights achievements of recent times. Moving from an individualized treatment approach to a simplified and standardized public health approach has been critical to ART scale-up, simplifying both prescribing practices and supply chain management. In terms of the latter, the risk of stock-outs can be reduced and simplified prescribing practices support task shifting of care to nursing and other non-physician clinicians; this strategy is critical to increase access to ART care in settings where physicians are limited in number. In order to support such simplification, successive World Health Organization guidelines for ART in resource-limited settings have aimed to reduce the number of recommended options for first-line ART in such settings. Future drug and regimen choices for resource-limited settings will likely be guided by the same principles that have led to the recommendation of a single preferred regimen and will favour drugs that have the following characteristics: minimal risk of failure, efficacy and tolerability, robustness and forgiveness, no overlapping resistance in treatment sequencing, convenience, affordability, and compatibility with anti-TB and anti-hepatitis treatments.
Application of statistical mining in healthcare data management for allergic diseases
NASA Astrophysics Data System (ADS)
Wawrzyniak, Zbigniew M.; Martínez Santolaya, Sara
2014-11-01
The paper discusses data mining techniques based on statistical tools for medical data management in long-term diseases. The data collected from a population survey are the source for reasoning, for identifying the disease processes responsible for a patient's illness and its symptoms, and for prescribing knowledge and decisions on the course of action to correct the patient's condition. The case considered, as a sample of the constructive approach to data management, is the dependence of allergic diseases of a chronic nature on symptoms and environmental conditions. The knowledge, summarized systematically as accumulated experience, constitutes an experiential simplified model of the diseases, with a feature space constructed from a small set of indicators. We present a disease-symptom-opinion model with knowledge discovery for data management in healthcare. Notably, the model is purely data-driven: it evaluates knowledge of the disease processes and the probability dependence of future disease events on symptoms and other attributes. The example drawn from the outcomes of a survey of long-term (chronic) disease shows that a small set of core indicators, such as 4 or more symptoms and opinions, can be very helpful in reflecting health-status change over disease causes. Furthermore, a data-driven understanding of the mechanisms of diseases gives physicians a basis for treatment choices, which outlines the need for data governance in this research domain of knowledge discovered from surveys.
Spectral Characteristics of Wake Vortex Sound During Roll-Up
NASA Technical Reports Server (NTRS)
Booth, Earl R., Jr. (Technical Monitor); Zhang, Yan; Wang, Frank Y.; Hardin, Jay C.
2003-01-01
This report presents an analysis of the sound spectra generated by a trailing aircraft vortex during its roll-up process. The study demonstrates that a rolling-up vortex can produce low-frequency (less than 100 Hz) sound with very high intensity (60 dB above the threshold of human hearing) at a distance of 200 ft from the vortex core; the spectrum then drops off rapidly thereafter. A rigorous analytical approach is adopted to derive the spectrum of vortex sound. First, the sound pressure is obtained from an alternative treatment of Lighthill's acoustic analogy [1]. After application of the Green's function for free space, a tensor analysis is applied to remove the source-term singularity of the wave equation in the far field. Consequently, the sound pressure is expressed in terms of the retarded time, which captures the time history and spatial distribution of the sound source. The Fourier transform is then applied to the sound pressure to compute its spectrum. The Fourier transform greatly simplifies the expression of the vortex sound pressure involving the retarded time, so that the numerical computation can be carried out with ease for axisymmetric line vortices during roll-up. The vortex model assumes that the vortex circulation is proportional to time and that the core radius is constant. In addition, the velocity profile is assumed to be self-similar along the aircraft flight path, so that a benchmark vortex velocity profile can be devised to obtain a closed-form solution, which is then used to validate the numerical calculations for other, more realistic vortex profiles for which no closed-form solutions are available. The study suggests that acoustic sensors operating in a low-frequency band could be profitably deployed for detecting vortex sound during roll-up.
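The final step, obtaining a spectrum from a pressure time history, can be mimicked numerically: the synthetic signal below stands in for the report's analytically derived vortex sound pressure, and an FFT recovers its dominant low-frequency tone:

```python
import numpy as np

# Illustrative only: the 60 Hz tone and sampling parameters are
# invented, not taken from the report's vortex model.
fs = 1000.0                          # sampling rate, Hz
t = np.arange(0, 4.0, 1.0 / fs)      # 4 s pressure record
p = np.sin(2 * np.pi * 60.0 * t)     # synthetic low-frequency pressure signal
spec = np.abs(np.fft.rfft(p)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1.0 / fs)
peak_hz = freqs[np.argmax(spec)]     # dominant frequency, below 100 Hz
```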
NASA Astrophysics Data System (ADS)
Mori, K.; Tada, K.; Tawara, Y.; Tosaka, H.; Ohno, K.; Asami, M.; Kosaka, K.
2015-12-01
Since the Fukushima Dai-ichi Nuclear Power Plant (FDNPP) accident, intensive monitoring and modeling of radionuclide transfer in the environment have been carried out. Although cesium (Cs) concentrations have been attenuating through both physical decay and environmental half-life (i.e., wash-off by water and sediment), the attenuation rate depends clearly on the type of land use and land cover. In the Fukushima case, studying migration in forest land use is important for predicting the long-term behavior of Cs, because most of the contaminated region is covered by forests. Atmospheric fallout exhibits complicated behavior in the biogeochemical cycle of forests, which can be described by biotic/abiotic interactions between many components. In developing conceptual and mathematical models of Cs transfer in forest ecosystems, defining the dominant components and their interactions is a crucial issue (BIOMASS, 1997-2001). However, the modeling of fate and transport in the geosphere after Cs is exported from the forest ecosystem is often ignored. An integrated watershed model for simulating the spatiotemporal redistribution of Cs, covering the entire region from source to river mouth and from surface to subsurface, has recently been developed. Since deposited Cs migrates with water and sediment movement, the different species (i.e., dissolved and suspended) and their interactions are key issues in the modeling. However, the initial inventory used as the source term was simplified to be homogeneous and time-independent, and the biogeochemical cycle in forests was not explicitly considered. Consequently, it was difficult to evaluate regionally inherent characteristics that differ according to land use, even when the model was well calibrated. In this study, we combine the respective advantages of forest-ecosystem and watershed modeling. This enables more realistic Cs deposition, and a time series of inventory can be forced over the land surface.
These processes are integrated into the watershed simulator GETFLOWS coupled with biogeochemical cycling in forests. We present a brief overview of the simulator and an application to a reservoir basin.
Benchmarking Data for the Proposed Signature of Used Fuel Casks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rauch, Eric Benton
2016-09-23
A set of benchmarking measurements to test facets of the proposed extended-storage signature was conducted on May 17, 2016. The measurements were designed to test the overall concept of how the proposed signature can be used to identify a used fuel cask based only on the distribution of neutron sources within the cask. To simulate the distribution, 4 Cf-252 sources were chosen and arranged on a 3x3 grid in 3 different patterns, and raw total neutron counts were taken at 6 locations around the grid. This is a very simplified test of the typical geometry studied previously in simulation with simulated used nuclear fuel.
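Under the simplest possible assumption (unscattered point sources with 1/4πr² geometric attenuation), the expected raw totals at each detector location can be sketched as follows; the coordinates, detector efficiency and count time are invented for illustration:

```python
import math

def expected_counts(sources, detector, count_time=60.0, eff=0.01):
    """Expected raw neutron totals at a detector from point sources,
    with only 1/(4*pi*r^2) geometric attenuation (no scattering or
    moderation).  sources: (x, y, strength in n/s); positions in m."""
    total = 0.0
    for sx, sy, strength in sources:
        r2 = (sx - detector[0]) ** 2 + (sy - detector[1]) ** 2
        total += strength * eff * count_time / (4.0 * math.pi * r2)
    return total

# A source pattern on a grid and one detector location (all invented)
counts = expected_counts([(0.0, 0.0, 1.0e6), (1.0, 0.0, 1.0e6),
                          (0.0, 1.0, 1.0e6)], detector=(2.0, 2.0))
```

Because each source pattern yields a distinct vector of counts across the 6 detector locations, the count pattern itself serves as the cask signature.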
NASA Astrophysics Data System (ADS)
De Lucia, Marco; Kempka, Thomas; Kühn, Michael
2014-05-01
Fully-coupled reactive transport simulations involving multiphase hydrodynamics and chemical reactions in heterogeneous settings are extremely challenging from a computational point of view. This often leads to oversimplification of the investigated system: coarse spatial discretization, to keep the number of elements in the order of a few thousand, and simplified chemistry, disregarding many potentially important reactions. A novel approach for coupling non-reactive hydrodynamic simulations with the outcome of single batch geochemical simulations was therefore introduced to assess the potential long-term mineral trapping at the Ketzin pilot site for underground CO2 storage in Germany [1],[2]. The advantage of the coupling is the ability to use multi-million grid non-reactive hydrodynamic simulations on one side and a few batch 0D geochemical simulations on the other, so that the complexity of neither system needs to be reduced. This contribution shows the approach taken to validate this simplified coupling scheme. The procedure involved batch simulations of the reference geochemical model, then performing both non-reactive and fully coupled 1D and 3D reactive transport simulations, and finally applying the simplified coupling scheme based on the non-reactive and geochemical batch models. The TOUGHREACT/ECO2N [3] simulator was adopted for the validation. The degree of refinement of the spatial grid, the complexity and rate of the mineral reactions, and a cut-off value for the minimum dissolved CO2 concentration allowed to form precipitates in the simplified approach were found to be the governing parameters for the convergence of the two schemes. Systematic discrepancies between the approaches are not reducible, simply because there is no feedback between chemistry and hydrodynamics, and can reach 20 % - 30 % in unfavourable cases.
However, even such a discrepancy is completely acceptable, in our opinion, given the amount of uncertainty underlying the geochemical models. References [1] Klein, E., De Lucia, M., Kempka, T., Kühn, M. 2013. Evaluation of long-term mineral trapping at the Ketzin pilot site for CO2 storage: an integrative approach using geochemical modelling and reservoir simulation. International Journal of Greenhouse Gas Control 19: 720-730, doi:10.1016/j.ijggc.2013.05.014 [2] Kempka, T., Klein, E., De Lucia, M., Tillner, E., Kühn, M. 2013. Assessment of Long-term CO2 Trapping Mechanisms at the Ketzin Pilot Site (Germany) by Coupled Numerical Modelling. Energy Procedia 37: 5419-5426, doi:10.1016/j.egypro.2013.06.460 [3] Xu, T., Spycher, N., Sonnenthal, E., Zhang, G., Zheng, L., Pruess, K. 2010. TOUGHREACT Version 2.0: A simulator for subsurface reactive transport under non-isothermal multiphase flow conditions, Computers & Geosciences 37(6), doi:10.1016/j.cageo.2010.10.007
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Leary, Patrick
The framework created through the Open-Source Integrated Design-Analysis Environment (IDAE) for Nuclear Energy Advanced Modeling & Simulation grant has simplified and democratized advanced modeling and simulation in the nuclear energy industry, supporting a range of nuclear engineering applications. It leverages millions of investment dollars from the Department of Energy's Office of Nuclear Energy for modeling and simulation of light water reactors, and the Office of Nuclear Energy's research and development. The IDAE framework enhanced Kitware's Computational Model Builder (CMB) while leveraging existing open-source toolkits and creating a graphical end-to-end umbrella guiding end-users and developers through the nuclear energy advanced modeling and simulation lifecycle. In addition, the work delivered strategic advancements in meshing and visualization for ensembles.
Reverse radiance: a fast accurate method for determining luminance
NASA Astrophysics Data System (ADS)
Moore, Kenneth E.; Rykowski, Ronald F.; Gangadhara, Sanjay
2012-10-01
Reverse ray tracing from a region of interest backward to the source has long been proposed as an efficient method of determining luminous flux. The idea is to trace rays only from where the final flux needs to be known back to the source, rather than tracing in the forward direction from the source outward to see where the light goes. Once the reverse ray reaches the source, the radiance the equivalent forward ray would have represented is determined and the resulting flux computed. Although reverse ray tracing is conceptually simple, the method critically depends upon an accurate source model in both the near and far field. An overly simplified source model, such as an ideal Lambertian surface, substantially detracts from the accuracy and thus the benefit of the method. This paper introduces an improved method of reverse ray tracing that we call Reverse Radiance, which avoids assumptions about the source properties. The new method uses measured data from a Source Imaging Goniometer (SIG) that simultaneously measures near- and far-field luminous data. Incorporating these data into a fast reverse ray tracing integration method yields fast, accurate data for a wide variety of illumination problems.
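The idea can be sketched with a cosine-weighted Monte Carlo estimate of the flux reaching a small region of interest: each reverse ray looks up the radiance its forward counterpart would have carried. In practice the lookup would come from the SIG measurement; here it is a user-supplied function, and all numbers are illustrative:

```python
import math
import random

def reverse_flux(radiance, area, n_rays=100_000, seed=1):
    """Monte Carlo sketch of reverse ray tracing.  Rays leave the
    region of interest over the hemisphere (cosine-weighted sampling,
    pdf = cos(theta)/pi); each carries radiance(theta), the value the
    equivalent forward ray would have had at the source."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_rays):
        theta = math.asin(math.sqrt(rng.random()))  # cosine-weighted polar angle
        total += radiance(theta) * math.pi          # L*cos(theta)/pdf = L*pi
    return area * total / n_rays

# For a constant (Lambertian) radiance L, the flux is exactly pi*L*A.
flux = reverse_flux(lambda theta: 2.0, area=0.01)
```

Replacing the constant lambda with an angle-dependent measured radiance is what distinguishes the Reverse Radiance method from the oversimplified Lambertian assumption criticized above.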
NASA Astrophysics Data System (ADS)
Jain, P.; Recchia, M.; Cavenago, M.; Fantz, U.; Gaio, E.; Kraus, W.; Maistrello, A.; Veltri, P.
2018-04-01
Neutral beam injection (NBI) for plasma heating and current drive is necessary for the International Thermonuclear Experimental Reactor (ITER) tokamak. Owing to its various advantages, a radio frequency (RF) driven plasma source was selected as the reference ion source for the ITER heating NBI. The ITER-relevant RF negative ion sources are inductively coupled (IC) devices whose operational working frequency has been chosen to be 1 MHz; they are characterized by high RF power density (~9.4 W cm^-3) and low operational pressure (around 0.3 Pa). The RF field is produced by a coil in a cylindrical chamber, generating a plasma that then expands inside the chamber. This paper recalls the different concepts on which a methodology is developed to evaluate the efficiency of RF power transfer to a hydrogen plasma. This efficiency is then analyzed as a function of the working frequency and of other operating source and plasma parameters. The study is applied to a high-power IC RF hydrogen ion source that is similar to one simplified driver of the ELISE source (half the size of the ITER NBI source).
Exploring consumer pathways and patterns of use for ...
Background: Humans may be exposed to thousands of chemicals through contact in the workplace, home, and via air, water, food, and soil. A major challenge is estimating exposures to these chemicals, which requires understanding potential exposure routes directly related to how chemicals are used. Objectives: We aimed to assign “use categories” to a database of chemicals, including ingredients in consumer products, to help prioritize which chemicals will be given more scrutiny relative to human exposure potential and target populations. The goal was to identify (a) human activities that result in increased chemical exposure while (b) simplifying the dimensionality of hazard assessment for risk characterization. Methods: Major data sources on consumer- and industrial-process-based chemical uses were compiled from multiple countries, including from regulatory agencies, manufacturers, and retailers. The resulting categorical chemical use and functional information are presented through the Chemical/Product Categories Database (CPCat). Results: CPCat contains information on 43,596 unique chemicals mapped to 833 terms categorizing their usage or function. Examples presented demonstrate potential applications of the CPCat database, including the identification of chemicals to which children may be exposed (including those that are not identified on product ingredient labels), and prioritization of chemicals for toxicity screening. The CPCat database is available.
Photon-HDF5: An Open File Format for Timestamp-Based Single-Molecule Fluorescence Experiments.
Ingargiola, Antonino; Laurence, Ted; Boutelle, Robert; Weiss, Shimon; Michalet, Xavier
2016-01-05
We introduce Photon-HDF5, an open and efficient file format to simplify exchange and long-term accessibility of data from single-molecule fluorescence experiments based on photon-counting detectors such as single-photon avalanche diode, photomultiplier tube, or arrays of such detectors. The format is based on HDF5, a widely used platform- and language-independent hierarchical file format for which user-friendly viewers are available. Photon-HDF5 can store raw photon data (timestamp, channel number, etc.) from any acquisition hardware, but also setup and sample description, information on provenance, authorship and other metadata, and is flexible enough to include any kind of custom data. The format specifications are hosted on a public website, which is open to contributions by the biophysics community. As an initial resource, the website provides code examples to read Photon-HDF5 files in several programming languages and a reference Python library (phconvert), to create new Photon-HDF5 files and convert several existing file formats into Photon-HDF5. To encourage adoption by the academic and commercial communities, all software is released under the MIT open source license. Copyright © 2016 Biophysical Society. Published by Elsevier Inc. All rights reserved.
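The hierarchical layout described above can be illustrated without an HDF5 library: the sketch below models a Photon-HDF5 file as nested mappings and checks a handful of its mandatory fields. The field names follow the published specification, but the dict-based stand-in, the example data, and the `validate` helper are inventions for illustration; real files are HDF5 and would be read with a tool such as h5py or the phconvert reference library.

```python
# Dict-based stand-in for a Photon-HDF5 file (hypothetical example data).
# Group/field paths follow the published Photon-HDF5 specification.

REQUIRED = [
    "acquisition_duration",
    "description",
    "photon_data/timestamps",
    "photon_data/timestamps_specs/timestamps_unit",
    "setup/num_spots",
    "identity/format_name",
]

def get_path(tree, path):
    """Walk a nested mapping using an HDF5-style slash-separated path."""
    node = tree
    for part in path.split("/"):
        if not isinstance(node, dict) or part not in node:
            return None
        node = node[part]
    return node

def validate(tree):
    """Return the mandatory paths missing from the file (empty = valid)."""
    return [p for p in REQUIRED if get_path(tree, p) is None]

example = {
    "acquisition_duration": 300.0,
    "description": "single-molecule FRET measurement (illustrative)",
    "photon_data": {
        "timestamps": [120, 340, 990, 1450],
        "timestamps_specs": {"timestamps_unit": 12.5e-9},  # seconds
    },
    "setup": {"num_spots": 1},
    "identity": {"format_name": "Photon-HDF5"},
}
```

The same slash-separated paths are what a reader would pass to `h5py.File.__getitem__` on a real file, which is what makes the format self-describing across languages.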
NASA Technical Reports Server (NTRS)
Zawodny, Nikolas S.; Boyd, D. Douglas, Jr.
2017-01-01
In this study, hover acoustic measurements are taken on isolated rotor-airframe configurations representative of small-scale, rotary-wing unmanned aircraft systems (UAS). Each rotor-airframe configuration consists of two fixed-pitch blades powered by a brushless motor, with a simplified airframe geometry intended to represent a generic multicopter arm. In addition to acoustic measurements, CFD-based aeroacoustic predictions are implemented on a subset of the experimentally tested rotor-airframe configurations in an effort to better understand the noise content of the rotor-airframe systems. Favorable agreement is obtained between acoustic measurements and predictions, based on both time- and frequency-domain post-processing techniques. Results indicate that close proximity of airframe surfaces results in the generation of considerable tonal acoustic content in the form of harmonics of the rotor blade passage frequency (BPF). Analysis of the acoustic prediction data shows that the presence of the airframe surfaces can generate noise levels either comparable to or greater than those of the rotor blade surfaces under certain rotor tip clearance conditions. Analysis of the on-surface Ffowcs Williams and Hawkings (FW-H) source terms provides insight into the predicted physical noise-generating mechanisms on the rotor and airframe surfaces.
Background simulations for the wide field imager aboard the ATHENA X-ray Observatory
NASA Astrophysics Data System (ADS)
Hauf, Steffen; Kuster, Markus; Hoffmann, Dieter H. H.; Lang, Philipp-Michael; Neff, Stephan; Pia, Maria Grazia; Strüder, Lothar
2012-09-01
The ATHENA X-ray observatory was a European Space Agency project for an L-class mission. ATHENA was to be based upon a simplified IXO design, with the number of instruments and the focal length of the Wolter optics being reduced. One of the two instruments, the Wide Field Imager (WFI), was to be a DePFET-based focal plane pixel detector, allowing for high time- and spatial-resolution spectroscopy in the energy range between 0.1 and 15 keV. In order to fulfill the mission goals a high sensitivity is essential, especially to study faint and extended sources. Thus a detailed understanding of the detector background induced by cosmic ray particles is crucial. During mission design, extensive Monte-Carlo simulations are generally used to estimate the detector background in order to optimize shielding components and software rejection algorithms. The Geant4 toolkit is frequently the tool of choice for this purpose. Alongside validation of the simulation environment with XMM-Newton EPIC-pn and Space Shuttle STS-53 data, we present estimates for the ATHENA WFI cosmic-ray-induced background including long-term activation, which demonstrate that DePFET-technology-based detectors are able to achieve the required sensitivity.
A physiologist's view of homeostasis
Cliff, William; Michael, Joel; McFarland, Jenny; Wenderoth, Mary Pat; Wright, Ann
2015-01-01
Homeostasis is a core concept necessary for understanding the many regulatory mechanisms in physiology. Claude Bernard originally proposed the concept of the constancy of the “milieu interieur,” but his discussion was rather abstract. Walter Cannon introduced the term “homeostasis” and expanded Bernard's notion of “constancy” of the internal environment in an explicit and concrete way. In the 1960s, homeostatic regulatory mechanisms in physiology began to be described as discrete processes following the application of engineering control system analysis to physiological systems. Unfortunately, many undergraduate texts continue to highlight abstract aspects of the concept rather than emphasizing a general model that can be specifically and comprehensively applied to all homeostatic mechanisms. As a result, students and instructors alike often fail to develop a clear, concise model with which to think about such systems. In this article, we present a standard model for homeostatic mechanisms to be used at the undergraduate level. We discuss common sources of confusion (“sticky points”) that arise from inconsistencies in vocabulary and illustrations found in popular undergraduate texts. Finally, we propose a simplified model and vocabulary set for helping undergraduate students build effective mental models of homeostatic regulation in physiological systems. PMID:26628646
A Simplified Analytic Investigation of the Riverside Effects of Sediment Diversions
2013-09-01
demonstrated that the river bed consists of a sand layer of variable thickness, underlain by erosion-resistant strata (either relict glacial deposits...following analysis. Simplifications and Initial Conditions. Consider a river modeled as a wide rectangular channel of constant width (Figure 1). The...Short-term effects include the redistribution of sediment by erosion upstream of the diversion to deposition
Experimental Limits on Local Realism with Separable and Entangled Photons
2011-01-01
Dates covered: OCT 2009 – SEP 2011. EXPERIMENTAL LIMITS ON LOCAL REALISM WITH SEPARABLE AND ENTANGLED PHOTONS...realization of the quantum state must be chosen. Entangled photons or electrons provide the most viable choices. In this work we consider a simplified...fewer measurements, and is more advantageous in its conceptual clarity. Subject terms: polarization-entangled photons, Bell inequalities, local
A Methodology for Benchmarking Relational Database Machines,
1984-01-01
user benchmarks is to compare the multiple users to the best-case performance. The data for each query classification coll...and the performance...called a benchmark. The term benchmark originates from the markers used by surveyors in establishing common reference points for their measure...formatted databases. In order to further simplify the problem, we restrict our study to those DBMs which support the relational model. A survey
Optimum resonance control knobs for sextupoles
NASA Astrophysics Data System (ADS)
Ögren, J.; Ziemann, V.
2018-06-01
We discuss the placement of extra sextupoles in a magnet lattice that makes it possible to correct third-order geometric resonances, driven by the chromaticity-compensating sextupoles, in a way that requires the least excitation of the correction sextupoles. We consider a simplified case, without momentum-dependent effects or other imperfections, in which suitably chosen phase advances between the correction sextupoles lead to orthogonal knobs with equal treatment of the different resonance driving terms.
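A minimal numerical illustration of such orthogonal knobs (a textbook-style simplification, not the paper's lattice or formalism): treat each correction sextupole's contribution to the 3Qx driving term, per unit strength, as c_j = beta_j^{3/2} exp(3i mu_j). Two correctors then form a 2x2 real linear system for cancelling a given driving term, and choosing the phase advance so that 3(mu2 - mu1) = pi/2 makes the two knobs act on the real and imaginary parts independently.

```python
import cmath
import math

def knob_strengths(h_target, betas, mus):
    """Strengths k1, k2 of two correction sextupoles that cancel a given
    3Qx resonance driving term h_target, assuming each sextupole's
    per-unit-strength contribution is c_j = beta_j**1.5 * exp(3j*mu_j)."""
    c1 = betas[0] ** 1.5 * cmath.exp(3j * mus[0])
    c2 = betas[1] ** 1.5 * cmath.exp(3j * mus[1])
    # split k1*c1 + k2*c2 = -h_target into real and imaginary parts
    det = c1.real * c2.imag - c2.real * c1.imag
    k1 = (-h_target.real * c2.imag + c2.real * h_target.imag) / det
    k2 = (-c1.real * h_target.imag + h_target.real * c1.imag) / det
    return k1, k2

# phase advance chosen so 3*(mu2 - mu1) = pi/2 -> orthogonal knobs
beta = (10.0, 10.0)
mu = (0.0, math.pi / 6.0)
h = 0.8 - 0.3j
k1, k2 = knob_strengths(h, beta, mu)
residual = (k1 * beta[0] ** 1.5 * cmath.exp(3j * mu[0])
            + k2 * beta[1] ** 1.5 * cmath.exp(3j * mu[1]) + h)
```

With the pi/6 spacing the system matrix is diagonal, so each knob strength depends on only one component of the target term, which is the sense in which the knobs are orthogonal and minimally excited.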
Formation Flying through Geodesic Motion and the Different Geometrical Requirements
2006-09-01
APPROXIMATE SOLUTIONS IN A CLOHESSY-WILTSHIRE-TYPE SYSTEM. Despite the assumed approximation, the simplified problem (5)+(6) remains complicated for an...analytical approach. For a further simplification let us introduce a CW (Clohessy-Wiltshire) referential system [1], [3]. Consider that the trajectory...momentum. Figure 2: The Clohessy-Wiltshire-type referential system, CX1Y1Z1. Neglecting the second order terms, equation (9) reads: (10
Statistical image reconstruction from correlated data with applications to PET
Alessio, Adam; Sauer, Ken; Kinahan, Paul
2008-01-01
Most statistical reconstruction methods for emission tomography are designed for data modeled as conditionally independent Poisson variates. In reality, due to scanner detectors, electronics and data processing, correlations are introduced into the data resulting in dependent variates. In general, these correlations are ignored because they are difficult to measure and lead to computationally challenging statistical reconstruction algorithms. This work addresses the second concern, seeking to simplify the reconstruction of correlated data and provide a more precise image estimate than the conventional independent methods. In general, correlated variates have a large non-diagonal covariance matrix that is computationally challenging to use as a weighting term in a reconstruction algorithm. This work proposes two methods to simplify the use of a non-diagonal covariance matrix as the weighting term by (a) limiting the number of dimensions in which the correlations are modeled and (b) adopting flexible, yet computationally tractable, models for correlation structure. We apply and test these methods with simple simulated PET data and data processed with the Fourier rebinning algorithm which include the one-dimensional correlations in the axial direction and the two-dimensional correlations in the transaxial directions. The methods are incorporated into a penalized weighted least-squares 2D reconstruction and compared with a conventional maximum a posteriori approach. PMID:17921576
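The weighting idea in this abstract can be sketched in a toy setting: build a banded (nearest-neighbour) covariance, generate correlated measurements, and compare a weighted least-squares estimate using W = C^-1 against the conventional estimate that assumes independence. Everything below (dimensions, the system matrix, the correlation model) is invented for illustration and is far simpler than a real PET system model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: y = A x + correlated noise (nearest-neighbour correlation)
n_pix, n_meas = 8, 16
A = rng.uniform(0.0, 1.0, size=(n_meas, n_pix))
x_true = rng.uniform(1.0, 4.0, size=n_pix)

# Banded covariance: unit variance, correlation rho between neighbours only
rho = 0.4
C = np.eye(n_meas) + rho * (np.eye(n_meas, k=1) + np.eye(n_meas, k=-1))
noise = 0.05 * (np.linalg.cholesky(C) @ rng.standard_normal(n_meas))
y = A @ x_true + noise

# WLS with the full weighting W = C^{-1} versus the independent model W = I
W = np.linalg.inv(C)
x_wls = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
x_indep = np.linalg.solve(A.T @ A, A.T @ y)
```

Limiting the covariance to a narrow band is what keeps `W` tractable; the paper's contribution is making that trade-off work at realistic sinogram sizes, where a dense covariance would be unusable as a weighting term.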
NASA Astrophysics Data System (ADS)
De Roo, Frederik; Banerjee, Tirtha
2017-04-01
Under non-neutral conditions and in the presence of topography, the dynamics of turbulent flow within a canopy is not yet completely understood. This has implications for the measurement of surface-atmosphere exchange by means of eddy covariance. For example, measurements of carbon dioxide fluxes are strongly influenced when drainage flows develop at night and the flow within the canopy decouples from the flow aloft. In the present work, we investigate the dynamics of terrain-induced turbulent flow within sloped canopies. We concentrate on the presence of oscillatory behavior in the flow variables, in terms of switching of flow regimes, by conducting linear stability analysis. We revisit and correct the simplified theory that exists in the literature, which is based on the interplay between the drag force and buoyancy. We find that this simplified description of the dynamical system cannot exhibit the observed richness of the dynamics. Tackling the full spatiotemporal dynamical system theoretically is beyond the scope of this work, although we can make some qualitative arguments. Additionally, we make use of large-eddy simulation of a three-dimensional hill covered by a homogeneous forest and analyze the phase synchronization behavior of the major terms in the momentum budget to explore the turbulent dynamics in more detail.
Decomposed fuzzy systems and their application in direct adaptive fuzzy control.
Hsueh, Yao-Chu; Su, Shun-Feng; Chen, Ming-Chang
2014-10-01
In this paper, a novel fuzzy structure termed the decomposed fuzzy system (DFS) is proposed to act as the fuzzy approximator for adaptive fuzzy control systems. The proposed structure decomposes each fuzzy variable into layers of fuzzy systems, with each layer characterizing one traditional fuzzy set. Similar to forming fuzzy rules in traditional fuzzy systems, layers from different variables form the so-called component fuzzy systems. DFS is proposed to provide more adjustable parameters to facilitate possible adaptation in fuzzy rules, but without introducing a learning burden: because the component fuzzy systems are independent, learning effects are minimally distributed among them. It can be seen from our experiments that even when the rule number increases, the learning time in terms of cycles remains almost constant. It can also be found that the function approximation capability and learning efficiency of the DFS are much better than those of traditional fuzzy systems when employed in adaptive fuzzy control systems. Besides, in order to further reduce the computational burden, a simplified DFS is proposed in this paper to satisfy possible real-time constraints required in many applications. From our simulation results, it can be seen that the simplified DFS performs fairly well with a more concise decomposition structure.
Automatic measurement; Mesures automatiques (in French)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ringeard, C.
1974-11-28
Through its ability to link up operations sequentially and to store the data collected, the computer can introduce a statistical approach into the evaluation of a result. To benefit fully from the advantages of automation, a special effort was made to reduce the programming time to a minimum and to simplify link-ups between the existing system and instruments from different sources. The practical solution of the test laboratory of the C.E.A. Centralized Administration Groupe (GEC) is given.
A novel storage system for cryoEM samples.
Scapin, Giovanna; Prosise, Winifred W; Wismer, Michael K; Strickland, Corey
2017-07-01
We present here a new CryoEM grid boxes storage system designed to simplify sample labeling, tracking and retrieval. The system is based on the crystal pucks widely used by the X-ray crystallographic community for storage and shipping of crystals. This system is suitable for any cryoEM laboratory, but especially for large facilities that will need accurate tracking of large numbers of samples coming from different sources. Copyright © 2017. Published by Elsevier Inc.
Models of earth structure inferred from neodymium and strontium isotopic abundances
Wasserburg, G. J.; DePaolo, D. J.
1979-01-01
A simplified model of earth structure based on the Nd and Sr isotopic characteristics of oceanic and continental tholeiitic flood basalts is presented, taking into account the motion of crustal plates and a chemical balance for trace elements. The resulting structure that is inferred consists of a lower mantle that is still essentially undifferentiated, overlain by an upper mantle that is the residue of the original source from which the continents were derived. PMID:16592688
Mechanical response of thick laminated beams and plates subject to out-of-plane loading
NASA Technical Reports Server (NTRS)
Hiel, C. C.; Brinson, . F.
1989-01-01
The use of simplified elasticity solutions to determine the mechanical response of thick laminated beams and plates subject to out-of-plane loading is demonstrated. Excellent results were obtained which compare favorably with theoretical, numerical and experimental analyses from other sources. The most important characteristic of the solution methodology presented is that it combines great mathematical precision with simplicity. This symbiosis has been needed for design with advanced composite materials.
Gas Diffusion in Fluids Containing Bubbles
NASA Technical Reports Server (NTRS)
Zak, M.; Weinberg, M. C.
1982-01-01
Mathematical model describes movement of gases in fluid containing many bubbles. Model makes it possible to predict growth and shrinkage of bubbles as a function of time. New model overcomes complexities involved in analysis of varying conditions by making two simplifying assumptions. It treats bubbles as point sources, and it employs approximate expression for gas concentration gradient at liquid/bubble interface. In particular, it is expected to help in developing processes for production of high-quality optical glasses in space.
Van Geit, Werner; Gevaert, Michael; Chindemi, Giuseppe; Rössert, Christian; Courcol, Jean-Denis; Muller, Eilif B.; Schürmann, Felix; Segev, Idan; Markram, Henry
2016-01-01
At many scales in neuroscience, appropriate mathematical models take the form of complex dynamical systems. Parameterizing such models to conform to the multitude of available experimental constraints is a global non-linear optimisation problem with a complex fitness landscape, requiring numerical techniques to find suitable approximate solutions. Stochastic optimisation approaches, such as evolutionary algorithms, have been shown to be effective, but often the setting up of such optimisations and the choice of a specific search algorithm and its parameters is non-trivial, requiring domain-specific expertise. Here we describe BluePyOpt, a Python package targeted at the broad neuroscience community to simplify this task. BluePyOpt is an extensible framework for data-driven model parameter optimisation that wraps and standardizes several existing open-source tools. It simplifies the task of creating and sharing these optimisations, and the associated techniques and knowledge. This is achieved by abstracting the optimisation and evaluation tasks into various reusable and flexible discrete elements according to established best-practices. Further, BluePyOpt provides methods for setting up both small- and large-scale optimisations on a variety of platforms, ranging from laptops to Linux clusters and cloud-based compute infrastructures. The versatility of the BluePyOpt framework is demonstrated by working through three representative neuroscience specific use cases. PMID:27375471
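The kind of search BluePyOpt automates can be caricatured in a few lines. The sketch below is emphatically not BluePyOpt's API: it is a generic elitist evolutionary loop with Gaussian mutation and a decaying step size, applied to a toy parameter-fitting problem, meant only to illustrate the optimisation pattern the abstract describes.

```python
import random

def evolve(fitness, bounds, pop_size=20, generations=60, sigma=0.1, seed=0):
    """Elitist evolutionary optimisation sketch: keep the best half of the
    population each generation, refill it with Gaussian-mutated copies,
    and shrink the mutation step as the search progresses."""
    rng = random.Random(seed)

    def clip(v, lo, hi):
        return min(max(v, lo), hi)

    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    step = sigma
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]
        children = [
            [clip(x + rng.gauss(0.0, step * (hi - lo)), lo, hi)
             for x, (lo, hi) in zip(p, bounds)]
            for p in parents
        ]
        pop = parents + children
        step *= 0.95
    return min(pop, key=fitness)

# Toy "model fitting": recover slope and intercept of y = 2t - 1 from samples
data = [(t, 2.0 * t - 1.0) for t in range(10)]
err = lambda p: sum((p[0] * t + p[1] - y) ** 2 for t, y in data)
best = evolve(err, bounds=[(-5.0, 5.0), (-5.0, 5.0)])
```

In BluePyOpt the `fitness` callable would wrap a neuron simulation scored against experimental features, and the framework handles parallel evaluation and algorithm selection; the loop structure above is the conceptual core it abstracts away.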
NASA Astrophysics Data System (ADS)
Zhioua, M.; El Aroudi, A.; Belghith, S.; Bosque-Moncusí, J. M.; Giral, R.; Al Hosani, K.; Al-Numay, M.
A study of a DC-DC boost converter fed by a photovoltaic (PV) generator and supplying a constant voltage load is presented. The input port of the converter is controlled using fixed-frequency pulse width modulation (PWM) based on the loss-free resistor (LFR) concept, whose parameter is selected with the aim of forcing the PV generator to work at its maximum power point. Under this control strategy, it is shown that the system can exhibit complex nonlinear behaviors for certain ranges of parameter values. First, using the nonlinear models of the converter and the PV source, the dynamics of the system are explored in terms of some of its parameters, such as the proportional gain of the controller and the output DC bus voltage. To present a comprehensive approach to the overall system behavior under parameter changes, a series of bifurcation diagrams are computed from the circuit-level switched model and from a simplified model, both implemented in PSIM© software, showing a remarkable agreement. These diagrams show that the first instability that takes place in the system's period-1 orbit when a primary parameter is varied is a smooth period-doubling bifurcation, and that the nonlinearity of the PV generator is irrelevant for predicting this phenomenon. Different bifurcation scenarios can take place for the resulting period-2 subharmonic regime depending on a secondary bifurcation parameter. The boundary between the desired period-1 orbit and the subharmonic oscillation resulting from period doubling in the parameter space is obtained by calculating the eigenvalues of the monodromy matrix of the simplified model. The results from this model have been validated with time-domain numerical simulation using the circuit-level switched model and also experimentally on a laboratory prototype. This study can help in selecting the parameter values of the circuit in order to delimit the region of period-1 operation of the converter, which is of practical interest in PV systems.
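The bifurcation-diagram procedure referred to above can be sketched generically: integrate a switched model over many PWM periods, sample the state once per period (a stroboscopic map), and repeat across a sweep of the controller gain. The snippet below uses an invented boost-converter model with a simple proportional duty-cycle law; the component values and control law are assumptions for illustration, not those of the paper.

```python
import math

def strobe_map(kp, i_ref=2.0, vin=12.0, L=1e-3, C=1e-4, R=20.0,
               f_sw=1e4, n_periods=400, keep=32, steps=200):
    """Simulate a PWM-controlled boost converter (Euler integration) and
    return the inductor current sampled once per switching period; only
    the last `keep` post-transient samples are kept."""
    T = 1.0 / f_sw
    dt = T / steps
    iL, vC = 0.0, vin
    samples = []
    for _ in range(n_periods):
        # duty cycle from a proportional law on the sampled current error
        d = min(0.9, max(0.1, 0.5 + kp * (i_ref - iL)))
        for s in range(steps):
            if (s / steps) < d:          # switch on: inductor charges
                diL = vin / L
                dvC = -vC / (R * C)
            else:                        # switch off: inductor feeds the load
                diL = (vin - vC) / L
                dvC = (iL - vC / R) / C
            iL += dt * diL
            vC += dt * dvC
            iL = max(iL, 0.0)            # diode blocks negative current
        samples.append(iL)
    return samples[-keep:]

# crude bifurcation data: post-transient sampled current vs controller gain
diagram = {kp: strobe_map(kp) for kp in (0.05, 0.2, 0.8)}
```

In a period-1 regime all `keep` samples for a given gain collapse to one value; a spread into two clusters would signal the period-doubling route the paper maps with the monodromy-matrix eigenvalues.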
Stec, Sebastian; Śledź, Janusz; Mazij, Mariusz; Raś, Małgorzata; Ludwik, Bartosz; Chrabąszcz, Michał; Śledź, Arkadiusz; Banasik, Małgorzata; Bzymek, Magdalena; Młynarczyk, Krzysztof; Deutsch, Karol; Labus, Michał; Śpikowski, Jerzy; Szydłowski, Lesław
2014-08-01
Although the "near-zero-X-Ray" or "No-X-Ray" catheter ablation (CA) approach has been reported for treatment of various arrhythmias, few prospective studies have strictly used "No-X-Ray," simplified 2-catheter approaches for CA in patients with supraventricular tachycardia (SVT). We assessed the feasibility of a minimally invasive, nonfluoroscopic (MINI) CA approach in such patients. Data were obtained from a prospective multicenter CA registry of patients with regular SVTs. After femoral access, 2 catheters were used to create simple, 3D electroanatomic maps and to perform electrophysiologic studies. Medical staff did not use lead aprons after the first 10 MINI CA cases. A total of 188 patients (age, 45 ± 21 years; 17% <19 years; 55% women) referred for the No-X-Ray approach were included. They were compared to 714 consecutive patients referred for a simplified approach using X-rays (age, 52 ± 18 years; 7% <19 years; 55% women). There were 9 protocol exceptions that necessitated the use of X-rays. Ultimately, 179/188 patients underwent the procedure without fluoroscopy, with an acute success rate of 98%. The procedure times (63 ± 26 vs. 63 ± 29 minutes, P > 0.05), major complications (0% vs. 0%, P > 0.05) and acute (98% vs. 98%, P > 0.05) and long-term (93% vs. 94%, P > 0.05) success rates were similar in the "No-X-Ray" and control groups. Implementation of a strict "No-X-Ray, simplified 2-catheter" CA approach is safe and effective in the majority of patients with SVT. This modified approach for SVTs should be prospectively validated in a multicenter study. © 2014 Wiley Periodicals, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Udhay Ravishankar; Milos Manic
2013-08-01
This paper presents a micro-grid simulator tool useful for implementing and testing multi-agent controllers (SGridSim). As a matter of common engineering practice, it is important to have a tool that simplifies the modeling of the salient features of a desired system. In electric micro-grids, these salient features are the voltage and power distributions within the micro-grid. Current simplified electric power grid simulator tools such as PowerWorld, PowerSim, Gridlab, etc., model only the power distribution features of a desired micro-grid. Other power grid simulators such as Simulink, Modelica, etc., use detailed modeling to accommodate the voltage distribution features. This paper presents the SGridSim micro-grid simulator tool that simplifies the modeling of both the voltage and power distribution features in a desired micro-grid. The SGridSim tool accomplishes this simplified modeling by using Effective Node-to-Node Complex Impedance (EN2NCI) models of the components that typically make up a micro-grid. The term EN2NCI model means that the impedance-based components of a micro-grid are modeled as single impedances tied between their respective voltage nodes on the micro-grid. Hence the benefits of the presented SGridSim tool are that 1) simulation of a micro-grid is performed strictly in the complex domain, and 2) simulation of a micro-grid is faster because detailed transients are not simulated. An example micro-grid model was built using the SGridSim tool and tested to simulate both the voltage and power distribution features with a total absolute relative error of less than 6%.
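Modelling each component as one complex impedance between voltage nodes, as the EN2NCI description above does, amounts to ordinary nodal analysis in the phasor domain. A hedged sketch follows (numpy); the three-node network, the impedance values, and the injected currents are invented for illustration and have no connection to the paper's example grid.

```python
import numpy as np

def solve_nodes(n_nodes, branches, injections, v_ref=230.0 + 0j):
    """Nodal analysis in the complex (phasor) domain.
    branches: (node_a, node_b, Z) triples; node 0 is the slack/reference.
    injections: complex currents injected at nodes 1..n-1.
    Returns (node voltages, full admittance matrix)."""
    Y = np.zeros((n_nodes, n_nodes), dtype=complex)
    for a, b, Z in branches:
        y = 1.0 / Z                      # each component is one impedance
        Y[a, a] += y
        Y[b, b] += y
        Y[a, b] -= y
        Y[b, a] -= y
    # eliminate the reference node: Y_rr v = I - Y_r0 * v_ref
    rhs = np.asarray(injections, dtype=complex) - Y[1:, 0] * v_ref
    v = np.linalg.solve(Y[1:, 1:], rhs)
    return np.concatenate(([v_ref], v)), Y

# illustrative three-node micro-grid (all values invented)
branches = [(0, 1, 0.5 + 2.0j), (1, 2, 0.4 + 1.5j), (0, 2, 0.6 + 2.5j)]
V, Y = solve_nodes(3, branches, injections=[0.0j, -10.0 + 2.0j])
S_slack = V[0] * np.conj(Y @ V)[0]       # complex power supplied at the slack
```

A single linear solve over complex numbers yields both the voltage profile and, via the branch currents, the power flows, which is the speed advantage the abstract claims over transient-level simulation.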
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robert Bean; Casey Durst
2009-10-01
This report is the second in a series of guidelines on international safeguards requirements and practices, prepared expressly for the designers of nuclear facilities. The first document in this series is the description of generic international nuclear safeguards requirements pertaining to all types of facilities. These requirements should be understood and considered at the earliest stages of facility design as part of a new process called “Safeguards-by-Design.” This will help eliminate the costly retrofit of facilities that has occurred in the past to accommodate nuclear safeguards verification activities. The following summarizes the requirements for international nuclear safeguards implementation at enrichment plants, prepared under the Safeguards by Design project, and funded by the U.S. Department of Energy (DOE) National Nuclear Security Administration (NNSA), Office of NA-243. The purpose of this is to provide designers of nuclear facilities around the world with a simplified set of design requirements and the most common practices for meeting them. The foundation for these requirements is the international safeguards agreement between the country and the International Atomic Energy Agency (IAEA), pursuant to the Treaty on the Non-proliferation of Nuclear Weapons (NPT). Relevant safeguards requirements are also cited from the Safeguards Criteria for inspecting enrichment plants, found in the IAEA Safeguards Manual, Part SMC-8. IAEA definitions and terms are based on the IAEA Safeguards Glossary, published in 2002. The most current specification for safeguards measurement accuracy is found in the IAEA document STR-327, “International Target Values 2000 for Measurement Uncertainties in Safeguarding Nuclear Materials,” published in 2001. For this guide to be easier for the designer to use, the requirements have been restated in plainer language per expert interpretation using the source documents noted.
The safeguards agreement is fundamentally a legal document. As such, it is written in a legalese that is understood by specialists in international law and treaties, but not by most outside of this field, including designers of nuclear facilities. For this reason, many of the requirements have been simplified and restated. However, in all cases, the relevant source document and passage is noted so that readers may trace the requirement to the source. This is a helpful living guide, since some of these requirements are subject to revision over time. More importantly, the practices by which the requirements are met are continuously modernized by the IAEA and nuclear facility operators to improve not only the effectiveness of international nuclear safeguards, but also the efficiency. As these improvements are made, the following guidelines should be updated and revised accordingly.
Tregidgo, Daniel J; West, Sarah E; Ashmore, Mike R
2013-11-01
Citizen science is having increasing influence on environmental monitoring as its advantages become recognised. However, methodologies are often simplified to make them accessible to citizen scientists. We tested whether a recent citizen science survey (the OPAL Air Survey) could detect trends in lichen community composition over transects away from roads. We hypothesised that the abundance of nitrophilic lichens would decrease with distance from the road, while that of nitrophobic lichens would increase. The hypothesised changes were detected along strong pollution gradients, but not where the road source was relatively weak, or background pollution relatively high. We conclude that the simplified OPAL methodology can detect large contrasts in nitrogenous pollution, but may not be able to detect more subtle changes in pollution exposure. Similar studies are needed in conjunction with the ever-growing body of citizen science work to ensure that the limitations of these methods are fully understood. Copyright © 2013 Elsevier Ltd. All rights reserved.
Formation of the Aerosol of Space Origin in Earth's Atmosphere
NASA Technical Reports Server (NTRS)
Kozak, P. M.; Kruchynenko, V. G.
2011-01-01
The problem of the formation of aerosol of space origin in Earth's atmosphere is examined. Meteoroids in the mass range 10^-18 to 10^-8 g are considered as the source of its origin. The lower bound of the mass range is chosen according to data presented in the literature; the upper bound is determined in accordance with Whipple's theory of micrometeorites. Based on the classical equations of deceleration and heating for small meteor bodies, we determined the maximal temperatures of the particles and the altitudes at which they reach critically low velocities, which can be called "stopping velocities". As the condition for the transformation of a space particle into an aerosol particle, we used the condition that the meteoroid does not reach its melting temperature. The simplified equation of deceleration, neglecting Earth's gravity, and the barometric formula for the atmospheric density are used. In the heat balance equation, the energy loss for heating is neglected. The analytical solution of the simplified equations is used for the analysis.
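The stopping-altitude idea in this abstract can be illustrated with a rough numerical sketch of the simplified single-body deceleration equation (drag only, no gravity) with a barometric atmosphere. All parameter values below (drag coefficient, scale height, bulk density, the "stopping velocity" threshold) are assumed placeholders, not the authors' actual inputs; the point is only that smaller meteoroids stop higher in the atmosphere.

```python
import math

# Assumed illustrative constants (CGS units).
RHO0 = 1.225e-3      # sea-level air density, g/cm^3
H = 7.16e5           # atmospheric scale height, cm
GAMMA = 1.0          # drag coefficient
RHO_M = 3.0          # meteoroid bulk density, g/cm^3

def stopping_altitude(mass_g, v0_cm_s, zenith_deg=45.0, v_stop=3e5):
    """Euler-integrate m dv/dt = -Gamma*S*rho(h)*v^2 along the trajectory
    and return the altitude (km) where v drops below v_stop (cm/s)."""
    radius = (3.0 * mass_g / (4.0 * math.pi * RHO_M)) ** (1.0 / 3.0)
    area = math.pi * radius ** 2
    cosz = math.cos(math.radians(zenith_deg))
    h, v, dt = 200e5, v0_cm_s, 1e-4          # entry at 200 km altitude
    while v > v_stop and h > 0:
        rho = RHO0 * math.exp(-h / H)        # barometric density profile
        v += -GAMMA * area * rho * v ** 2 / mass_g * dt
        h -= v * cosz * dt
    return h / 1e5

# A 10^-12 g particle has a larger area-to-mass ratio than a 10^-8 g one,
# so it decelerates in thinner air, i.e. stops higher.
h_small = stopping_altitude(1e-12, 3e6)      # 30 km/s entry speed
h_large = stopping_altitude(1e-8, 3e6)
```

The sketch reproduces the qualitative behavior the abstract relies on: particles near the lower end of the mass range reach their stopping velocity tens of kilometres above the larger ones.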
NASA Astrophysics Data System (ADS)
Dennis, R. L.; Napelenok, S. L.; Linker, L. C.; Dudek, M.
2012-12-01
Estuaries are adversely impacted by excess reactive nitrogen, Nr, from many point and nonpoint sources, including atmospheric deposition to the watershed and the estuary itself as a nonpoint source. For effective mitigation, trading among sources of Nr is being considered. The Chesapeake Bay Program is working to bring air into its trading scheme, which requires some special air computations. Airsheds are much larger than watersheds; thus, widespread or national emissions controls are put in place to achieve major reductions in atmospheric Nr deposition. The tributary nitrogen load reductions allocated to the states to meet the TMDL target for Chesapeake Bay are large and not easy to attain via controls on water point and nonpoint sources. It would help the TMDL process to take advantage of air emissions reductions that would occur with State Implementation Plans that go beyond the national air rules put in place to help meet national ambient air quality standards. There are still incremental benefits from these local or state-level controls on atmospheric emissions. The additional air deposition reductions could then be used to offset water quality controls (air-water trading). What is needed is a source-to-receptor transfer function that connects air emissions from a state to deposition to a tributary. There is a special source attribution version of the Community Multiscale Air Quality model, CMAQ (termed DDM-3D), that can estimate the fraction of deposition contributed by labeled emissions (labeled by source or region) to the total deposition across space. We use the CMAQ DDM-3D to estimate simplified state-level delta-emissions to delta-atmospheric-deposition transfer coefficients for each major emission source sector within a state, since local air regulations are promulgated at the state level. The CMAQ 4.7.1 calculations are performed at a 12 km grid size over the airshed domain covering Chesapeake Bay for 2020 CAIR emissions.
For results, we first present the fractional contributions of Bay state NOx emissions to the oxidized nitrogen deposition to the Chesapeake Bay watershed and the Bay. We then present example tables of the fractional contributions of Bay state NOx emissions from mobile, off-road, power plant, and industrial emissions to key tributaries: the Potomac, Susquehanna, and James Rivers. Finally, we go through an example for mobile source NOx reductions in Pennsylvania to show how the tributary load offset would be calculated using the factors generated by CMAQ DDM-3D.
Continuum Fatigue Damage Modeling for Use in Life Extending Control
NASA Technical Reports Server (NTRS)
Lorenzo, Carl F.
1994-01-01
This paper develops a simplified continuum (continuous with respect to time, stress, etc.) fatigue damage model for use in Life Extending Control (LEC) studies. The work is based on zero-mean-stress local strain cyclic damage modeling. New nonlinear explicit equation forms of cyclic damage in terms of stress amplitude are derived to facilitate the continuum modeling. Stress-based continuum models are derived. Extensions to plastic strain-strain rate models are also presented. Application of these models to LEC applications is considered. Progress toward a nonzero-mean-stress continuum model is presented, and new nonlinear explicit equation forms in terms of stress amplitude are derived for this case as well.
Gadomski, A; Hladyszowski, J
2015-01-01
An extension of the Coulomb-Amontons law is proposed in terms of an interaction detail involving a renormalized (simplified) n-th level scheme. The coefficient of friction is obtained in a general exponential (nonlinear) form, characteristic of a virtually infinite (or many-body) level of the interaction map. Yet, its application to a hydration-repulsion bilayered system, prone to facilitated lubrication, is taken as linearly confined, albeit with the inclusion of a decisive repelling force/pressure factor. Some perspectives toward related systems, fairly outside biotribological issues, have also been addressed.
Reformulation of the relativistic conversion between coordinate time and atomic time
NASA Technical Reports Server (NTRS)
Thomas, J. B.
1975-01-01
The relativistic conversion between coordinate time and atomic time is reformulated to allow simpler time calculations relating analysis in solar system barycentric coordinates (using coordinate time) with earth-fixed observations (measuring 'earth-bound' proper time or atomic time). After an interpretation in terms of relatively well-known concepts, this simplified formulation, which has a rate accuracy of about 10^-15, is used to explain the conventions required in the synchronization of a worldwide clock network and to analyze two synchronization techniques: portable clocks and radio interferometry. Finally, pertinent experimental tests of relativity are briefly discussed in terms of the reformulated time conversion.
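The size of the effect being reformulated here can be checked with a back-of-envelope calculation of the two dominant terms in the coordinate-time vs proper-time rate for an Earth-bound clock, 1 - dτ/dt ≈ U/c² + v²/(2c²): the solar gravitational potential at 1 au and the Earth's orbital motion. The constants are standard values; this sketch only illustrates the magnitude of the rate offset, not the paper's full formulation.

```python
# Standard physical constants (SI units).
C = 2.99792458e8           # speed of light, m/s
GM_SUN = 1.32712440018e20  # solar gravitational parameter, m^3/s^2
R_ORBIT = 1.495978707e11   # mean Earth-Sun distance (1 au), m
V_ORBIT = 2.9785e4         # Earth's mean orbital speed, m/s

# Gravitational (potential) term: U / c^2, about 1e-8.
potential_term = GM_SUN / (R_ORBIT * C * C)

# Time-dilation (kinetic) term: v^2 / (2 c^2), about 5e-9.
kinetic_term = V_ORBIT ** 2 / (2 * C * C)

# Combined mean rate offset between barycentric coordinate time and an
# earth-bound clock, roughly 1.5e-8 -- seven orders of magnitude larger
# than the 1e-15 rate accuracy quoted for the simplified formulation.
rate_offset = potential_term + kinetic_term
```

The comparison makes the abstract's accuracy claim concrete: the reformulation must track a ~10^-8 effect to a relative precision of ~10^-7.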
Accuracy-preserving source term quadrature for third-order edge-based discretization
NASA Astrophysics Data System (ADS)
Nishikawa, Hiroaki; Liu, Yi
2017-09-01
In this paper, we derive a family of source term quadrature formulas for preserving third-order accuracy of the node-centered edge-based discretization for conservation laws with source terms on arbitrary simplex grids. A three-parameter family of source term quadrature formulas is derived, and as a subset, a one-parameter family of economical formulas is identified that does not require second derivatives of the source term. Among the economical formulas, a unique formula is then derived that does not require gradients of the source term at neighbor nodes, thus leading to a significantly smaller discretization stencil for source terms. All the formulas derived in this paper do not require a boundary closure, and therefore can be directly applied at boundary nodes. Numerical results are presented to demonstrate third-order accuracy at interior and boundary nodes for one-dimensional grids and linear triangular/tetrahedral grids over straight and curved geometries.
Impact of numerical choices on water conservation in the E3SM Atmosphere Model Version 1 (EAM V1)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Kai; Rasch, Philip J.; Taylor, Mark A.
The conservation of total water is an important numerical feature for global Earth system models. Even small conservation problems in the water budget can lead to systematic errors in century-long simulations for sea level rise projection. This study quantifies and reduces various sources of water conservation error in the atmosphere component of the Energy Exascale Earth System Model. Several sources of water conservation error have been identified during the development of the version 1 (V1) model. The largest errors result from the numerical coupling between the resolved dynamics and the parameterized sub-grid physics. A hybrid coupling using different methods for fluid dynamics and tracer transport provides a reduction of water conservation error by a factor of 50 at 1° horizontal resolution as well as consistent improvements at other resolutions. The second largest error source is the use of an overly simplified relationship between the surface moisture flux and latent heat flux at the interface between the host model and the turbulence parameterization. This error can be prevented by applying the same (correct) relationship throughout the entire model. Two additional types of conservation error that result from correcting the surface moisture flux and clipping negative water concentrations can be avoided by using mass-conserving fixers. With all four error sources addressed, the water conservation error in the V1 model is negligible and insensitive to the horizontal resolution. The associated changes in the long-term statistics of the main atmospheric features are small. A sensitivity analysis is carried out to show that the magnitudes of the conservation errors decrease strongly with temporal resolution but increase with horizontal resolution. The increased vertical resolution in the new model results in a very thin model layer at the Earth's surface, which amplifies the conservation error associated with the surface moisture flux correction.
We note that for some of the identified error sources, the proposed fixers are remedies rather than solutions to the problems at their roots. Future improvements in time integration would be beneficial for this model.
Impact of numerical choices on water conservation in the E3SM Atmosphere Model version 1 (EAMv1)
NASA Astrophysics Data System (ADS)
Zhang, Kai; Rasch, Philip J.; Taylor, Mark A.; Wan, Hui; Leung, Ruby; Ma, Po-Lun; Golaz, Jean-Christophe; Wolfe, Jon; Lin, Wuyin; Singh, Balwinder; Burrows, Susannah; Yoon, Jin-Ho; Wang, Hailong; Qian, Yun; Tang, Qi; Caldwell, Peter; Xie, Shaocheng
2018-06-01
The conservation of total water is an important numerical feature for global Earth system models. Even small conservation problems in the water budget can lead to systematic errors in century-long simulations. This study quantifies and reduces various sources of water conservation error in the atmosphere component of the Energy Exascale Earth System Model. Several sources of water conservation error have been identified during the development of the version 1 (V1) model. The largest errors result from the numerical coupling between the resolved dynamics and the parameterized sub-grid physics. A hybrid coupling using different methods for fluid dynamics and tracer transport provides a reduction of water conservation error by a factor of 50 at 1° horizontal resolution as well as consistent improvements at other resolutions. The second largest error source is the use of an overly simplified relationship between the surface moisture flux and latent heat flux at the interface between the host model and the turbulence parameterization. This error can be prevented by applying the same (correct) relationship throughout the entire model. Two additional types of conservation error that result from correcting the surface moisture flux and clipping negative water concentrations can be avoided by using mass-conserving fixers. With all four error sources addressed, the water conservation error in the V1 model becomes negligible and insensitive to the horizontal resolution. The associated changes in the long-term statistics of the main atmospheric features are small. A sensitivity analysis is carried out to show that the magnitudes of the conservation errors in early V1 versions decrease strongly with temporal resolution but increase with horizontal resolution. The increased vertical resolution in V1 results in a very thin model layer at the Earth's surface, which amplifies the conservation error associated with the surface moisture flux correction. 
We note that for some of the identified error sources, the proposed fixers are remedies rather than solutions to the problems at their roots. Future improvements in time integration would be beneficial for V1.
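The "clipping negative water concentrations" error discussed in both versions of this abstract, and the mass-conserving fixer that avoids it, can be sketched generically: clip the negatives to zero, then rescale the remaining values so the global total is unchanged. This is a minimal illustration of a multiplicative mass fixer under the assumption of a physically positive total, not EAM's actual implementation.

```python
import numpy as np

def clip_with_mass_fixer(q):
    """Clip negative tracer concentrations to zero, then multiplicatively
    rescale the positive values so the global total is conserved.
    Assumes the pre-clip total is positive."""
    total_before = q.sum()
    clipped = np.maximum(q, 0.0)       # naive clipping creates spurious mass
    total_after = clipped.sum()
    if total_after > 0.0:
        clipped *= total_before / total_after   # restore the global budget
    return clipped
```

Plain clipping would add the magnitude of every negative value to the water budget each time step; the rescaling step removes exactly that spurious mass, which is why such fixers keep century-long budgets closed even though they are, as the abstract notes, remedies rather than root-cause solutions.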
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paul L. Wichlacz
2003-09-01
This source-term summary document is intended to describe the current understanding of contaminant source terms and the conceptual model for potential source-term release to the environment at the Idaho National Engineering and Environmental Laboratory (INEEL), as presented in published INEEL reports. The document presents a generalized conceptual model of the sources of contamination and describes the general categories of source terms, primary waste forms, and factors that affect the release of contaminants from the waste form into the vadose zone and Snake River Plain Aquifer. Where the information has previously been published and is readily available, summaries of the inventory of contaminants are also included. Uncertainties that affect the estimation of the source-term release are also discussed where they have been identified by the Source Term Technical Advisory Group. Areas in which additional information is needed (i.e., research needs) are also identified.
Open Source Clinical NLP – More than Any Single System
Masanz, James; Pakhomov, Serguei V.; Xu, Hua; Wu, Stephen T.; Chute, Christopher G.; Liu, Hongfang
2014-01-01
The number of Natural Language Processing (NLP) tools and systems for processing clinical free-text has grown as interest and processing capability have surged. Unfortunately, any two systems typically cannot simply interoperate, even when both are built upon a framework designed to facilitate the creation of pluggable components. We present two ongoing activities promoting open source clinical NLP. The Open Health Natural Language Processing (OHNLP) Consortium was originally founded to foster a collaborative community around clinical NLP, releasing UIMA-based open source software. OHNLP’s mission currently includes maintaining a catalog of clinical NLP software and providing interfaces to simplify the interaction of NLP systems. Meanwhile, Apache cTAKES aims to integrate best-of-breed annotators, providing a world-class NLP system for accessing clinical information within free-text. These two activities are complementary. OHNLP promotes open source clinical NLP activities in the research community and Apache cTAKES bridges research to the health information technology (HIT) practice. PMID:25954581
High temperature electrons exhausted from rf plasma sources along a magnetic nozzle
NASA Astrophysics Data System (ADS)
Takahashi, Kazunori; Akahoshi, Hikaru; Charles, Christine; Boswell, Rod W.; Ando, Akira
2017-08-01
Two dimensional profiles of electron temperature are measured inside and downstream of a radiofrequency plasma thruster source having a magnetic nozzle and being immersed in vacuum. The temperature is estimated from the slope of the fully swept I-V characteristics of a Langmuir probe acquired at each spatial position and with the assumption of a Maxwellian distribution. The results show that the peripheral high temperature electrons in the magnetic nozzle originate from the upstream antenna location and are transported along the "connecting" magnetic field lines. Two-dimensional measurements of electron energy probability functions are also carried out in a second simplified laboratory device consisting of the source contiguously connected to the diffusion chamber: again the high temperature electrons are detected along the magnetic field lines intersecting the wall at the antenna location, even when the antenna location is shifted along the main axis. These results demonstrate that the peripheral energetic electrons in the magnetic nozzle mirror those created in the source tube.
Power Source Status Estimation and Drive Control Method for Autonomous Decentralized Hybrid Train
NASA Astrophysics Data System (ADS)
Furuya, Takemasa; Ogawa, Kenichi; Yamamoto, Takamitsu; Hasegawa, Hitoshi
A hybrid control system has two main functions: power sharing and equipment protection. In this paper, we discuss the design, construction and testing of a drive control method for an autonomous decentralized hybrid train with 100-kW-class fuel cells (FC) and 36-kWh lithium-ion batteries (Li-Batt). The main objectives of this study are to identify the operation status of the power sources on the basis of the input voltage of the traction inverter and to estimate the maximum traction power on the basis of the power-source status. The proposed control method is useful in preventing overload operation of the onboard power sources in an autonomous decentralized hybrid system that has a flexible main circuit configuration and a few control signal lines. Further, with this method, the initial cost of a hybrid system can be reduced and the retrofit design of the hybrid system can be simplified. The effectiveness of the proposed method is experimentally confirmed by using a real-scale hybrid train system.
NASA Astrophysics Data System (ADS)
Gómez, C. D.; González, C. M.; Osses, M.; Aristizábal, B. H.
2018-04-01
Emission data is an essential tool for understanding environmental problems associated with sources and dynamics of air pollutants in urban environments, especially those emitted from vehicular sources. There is a lack of knowledge about the estimation of air pollutant emissions and particularly its spatial and temporal distribution in South America, mainly in medium-sized cities with population less than one million inhabitants. This work performed the spatial and temporal disaggregation of the on-road vehicle emission inventory (EI) in the medium-sized Andean city of Manizales, Colombia, with a spatial resolution of 1 km × 1 km and a temporal resolution of 1 h. A reported top-down methodology, based on the analysis of traffic flow levels and road network distribution, was applied. Results obtained allowed the identification of several hotspots of emission at the downtown zone and the residential and commercial area of Manizales. Downtown exhibited the highest percentage contribution of emissions normalized by its total area, with values equal to 6% and 5% of total CO and PM10 emissions per km2 respectively. These indexes were higher than those obtained in residential-commercial area with values of 2%/km2 for both pollutants. Temporal distribution showed strong relationship with driving patterns at rush hours, as well as an important influence of passenger cars and motorcycles in emissions of CO both at downtown and residential-commercial areas, and the impact of public transport in PM10 emissions in the residential-commercial zone. Considering that detailed information about traffic counts and road network distribution is not always available in medium-sized cities, this work compares other simplified top-down methods for spatially assessing the on-road vehicle EI. Results suggested that simplified methods could underestimate the spatial allocation of downtown emissions, a zone dominated by high traffic of vehicles. 
The comparison between simplified methods based on total traffic counts and road density distribution suggested that the use of total traffic counts in a simplified form could introduce higher uncertainties into the spatial disaggregation of emissions. The results obtained could add new information that helps to improve the air pollution management system in the city and contribute to local public policy decisions. Additionally, this work provides emission fluxes at an appropriate resolution for ongoing research in atmospheric modeling in the city, with the aim of improving the understanding of the transport, transformation, and impacts of pollutant emissions on urban air quality.
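The top-down spatial disaggregation described above, allocating a city-total emission over a 1 km × 1 km grid in proportion to a surrogate such as road density, reduces to a one-line proportional split. The grid, surrogate values, and emission total below are invented for illustration; they are not Manizales data.

```python
import numpy as np

# Hypothetical city-total CO emission (t/yr) to be disaggregated.
total_co = 1000.0

# Hypothetical surrogate: road length (km) in each 1 km x 1 km cell.
road_km = np.array([[0.0, 2.0, 5.0],
                    [1.0, 8.0, 3.0],
                    [0.5, 4.0, 1.5]])

# Top-down allocation: each cell receives a share of the total emission
# proportional to its share of the surrogate.
cell_emis = total_co * road_km / road_km.sum()
```

By construction the allocation conserves the city total, and the densest-road cell (a stand-in for the downtown hotspot in the abstract) receives the largest share; replacing road length with weighted traffic counts changes only the surrogate array.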
New paradigms for Salmonella source attribution based on microbial subtyping.
Mughini-Gras, Lapo; Franz, Eelco; van Pelt, Wilfrid
2018-05-01
Microbial subtyping is the most common approach for Salmonella source attribution. Typically, attributions are computed using frequency-matching models like the Dutch and Danish models based on phenotyping data (serotyping, phage-typing, and antimicrobial resistance profiling). Herewith, we critically review three major paradigms facing Salmonella source attribution today: (i) the use of genotyping data, particularly Multi-Locus Variable Number of Tandem Repeats Analysis (MLVA), which is replacing traditional Salmonella phenotyping beyond serotyping; (ii) the integration of case-control data into source attribution to improve risk factor identification/characterization; (iii) the investigation of non-food sources, as attributions tend to focus on foods of animal origin only. Population genetics models or simplified MLVA schemes may provide feasible options for source attribution, although there is a strong need to explore novel modelling options as we move towards whole-genome sequencing as the standard. Classical case-control studies are enhanced by incorporating source attribution results, as individuals acquiring salmonellosis from different sources have different associated risk factors. Thus, the more such analyses are performed the better Salmonella epidemiology will be understood. Reparametrizing current models allows for inclusion of sources like reptiles, the study of which improves our understanding of Salmonella epidemiology beyond food to tackle the pathogen in a more holistic way. Copyright © 2017 Elsevier Ltd. All rights reserved.
Force Limited Vibration Testing: Computation C2 for Real Load and Probabilistic Source
NASA Astrophysics Data System (ADS)
Wijker, J. J.; de Boer, A.; Ellenbroek, M. H. M.
2014-06-01
To prevent over-testing of the test item during random vibration testing, Scharton proposed and discussed force limited random vibration testing (FLVT) in a number of publications, in which the factor C2 is, besides the random vibration specification, the total mass, and the turnover frequency of the load (test item), a very important parameter. A number of computational methods to estimate C2 are described in the literature, i.e. the simple and the complex two-degrees-of-freedom systems, STDFS and CTDFS, respectively. Both the STDFS and the CTDFS describe in a very reduced (simplified) manner the load and the source (the adjacent structure transferring the excitation forces to the test item, e.g. a spacecraft supporting an instrument). The motivation of this work is to establish a method for the computation of a realistic value of C2 to perform a representative random vibration test based on force limitation, when the adjacent structure (source) description is more or less unknown. Marchand formulated a conservative estimation of C2 based on the maximum modal effective mass and damping of the test item (load), when no description of the supporting structure (source) is available [13]. Marchand discussed the formal description of obtaining C2, using the maximum PSD of the acceleration and the maximum PSD of the force, both at the interface between load and source, in combination with the apparent mass and total mass of the load. This method is very convenient for computing the factor C2. However, finite element models are needed to compute the spectra of the PSD of both the acceleration and the force at the interface between load and source. Stevens presented the coupled systems modal approach (CSMA), where simplified asparagus patch models (parallel-oscillator representations) of load and source are connected, consisting of modal effective masses and the spring stiffnesses associated with the natural frequencies.
When the random acceleration vibration specification is given, the CSMA method is suitable to compute the value of the parameter C2. When no mathematical model of the source can be made available, estimations of the value of C2 can be found in the literature. In this paper a probabilistic mathematical representation of the unknown source is proposed, such that the asparagus patch model of the source can be approximated. The computation of the value of C2 can be done in conjunction with the CSMA method, knowing the apparent mass of the load and the random acceleration specification at the interface between load and source, respectively. Strength and stiffness design rules for spacecraft, instrumentation, units, etc. are practiced, as mentioned in ECSS Standards and Handbooks, Launch Vehicle User's Manuals, papers, books, etc. A probabilistic description of the design parameters is foreseen. As an example, a simple experiment has been worked out.
GOMA: functional enrichment analysis tool based on GO modules
Huang, Qiang; Wu, Ling-Yun; Wang, Yong; Zhang, Xiang-Sun
2013-01-01
Analyzing the function of gene sets is a critical step in interpreting the results of high-throughput experiments in systems biology. A variety of enrichment analysis tools have been developed in recent years, but most output a long list of significantly enriched terms that are often redundant, making it difficult to extract the most meaningful functions. In this paper, we present GOMA, a novel enrichment analysis method based on the new concept of enriched functional Gene Ontology (GO) modules. With this method, we systematically revealed functional GO modules, i.e., groups of functionally similar GO terms, via an optimization model and then ranked them by enrichment scores. Our new method simplifies enrichment analysis results by reducing redundancy, thereby preventing inconsistent enrichment results among functionally similar terms and providing more biologically meaningful results. PMID:23237213
NASA Astrophysics Data System (ADS)
Hu, N.; Green, S. A.
2012-12-01
Smoke near the source of biomass burning contains high concentrations of reactive compounds, with NO and CH3CHO concentrations four to six orders of magnitude higher than those in the ambient atmosphere. Tobacco smoke represents a special case of biomass burning that is quite reproducible in the lab and may elucidate early processes in smoke from other sources. The origins, identities, and reactions of radical species in tobacco smoke are not well understood, despite decades of study on the concentrations and toxicities of the relatively stable compounds in smoke. We propose that reactions of NO2 and aldehydes are a primary source for transient free radicals in tobacco smoke, which contrasts with the long-surmised mechanism of reaction between NO2 and dienes. The objective of this study was to investigate the sources, sinks and cycling of acetyl radical in tobacco smoke. Experimentally, the production of acetyl radical was demonstrated both in tobacco smoke and in a simplified mixture of air combined with NO and acetaldehyde, both of which are significant components of smoke. Acetyl radicals were trapped from the gas phase using 3-amino-2,2,5,5-tetramethyl-proxyl (3AP) on solid support to form stable 3AP adducts for later analysis by high performance liquid chromatography (HPLC), mass spectrometry/tandem mass spectrometry (MS-MS/MS) and liquid chromatography-mass spectrometry (LC-MS). The dynamic nature of radical cycling in smoke makes it impossible to define a fixed concentration of radical species; 2.15×10^13 to 3.18×10^14 molecules/cm^3 of acetyl radicals were measured from different cigarette samples and smoking conditions. Matlab was employed to simulate reactions of NO, NO2, O2, and a simplified set of organic compounds known to be present in smoke, with a special emphasis on acetaldehyde and the acetyl radical.
The NO2/acetaldehyde mechanism initiates a cascade of chain reactions, which accounts for the most prevalent known carbon-centered radicals found in tobacco smoke, and pathways for formation of OH and peroxyl species. Tobacco smoke provides a new perspective of radical generation in a relatively well-defined biomass burning process.
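The NO2/acetaldehyde initiation step and the resulting radical chain can be caricatured as a three-reaction scheme integrated with a simple Euler scheme. The rate constants and initial concentrations below are arbitrary placeholders, not the measured or modeled values from this study; the sketch only shows how acetyl and peroxyl radical pools emerge from the NO2 + CH3CHO step and are cycled by NO.

```python
def simulate(t_end=1.0, dt=1e-4):
    """Euler integration of a toy NO2/acetaldehyde radical chain.
    Rate constants k1-k3 and all concentrations are in arbitrary,
    illustrative units."""
    k1, k2, k3 = 1e-5, 1e-2, 1e-1
    no2, ald, acetyl, peroxy, no = 1.0, 10.0, 0.0, 0.0, 1.0
    o2 = 200.0                       # large excess, held constant
    for _ in range(int(t_end / dt)):
        r1 = k1 * no2 * ald          # NO2 + CH3CHO -> CH3CO + HONO (initiation)
        r2 = k2 * acetyl * o2        # CH3CO + O2   -> CH3C(O)OO   (propagation)
        r3 = k3 * peroxy * no        # peroxy + NO  -> NO2 + products (cycling)
        no2 += (r3 - r1) * dt
        ald += -r1 * dt
        acetyl += (r1 - r2) * dt
        peroxy += (r2 - r3) * dt
        no += -r3 * dt
    return acetyl, peroxy
```

Even in this stripped-down form the qualitative behavior matches the proposed mechanism: the acetyl radical settles into a small quasi-steady pool (production by initiation balanced by fast O2 uptake) while the peroxyl pool regenerates NO2, sustaining the chain.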
Predictive value of clinical scoring and simplified gait analysis for acetabulum fractures.
Braun, Benedikt J; Wrona, Julian; Veith, Nils T; Rollman, Mika; Orth, Marcel; Herath, Steven C; Holstein, Jörg H; Pohlemann, Tim
2016-12-01
Fractures of the acetabulum show a high long-term complication rate. The aim of the present study was to determine the predictive value of clinical scoring and standardized, simplified gait analysis on the outcome after these fractures. Forty-one patients with acetabular fractures treated between 2008 and 2013 and with available, standardized video-recorded aftercare were identified from a prospective database. A visual gait score was used to determine the patients' walking abilities 6 months postoperatively. Clinical (Merle d'Aubigne and Postel score, visual analogue scale pain, EQ5d) and radiological scoring (Kellgren-Lawrence score, postoperative computed tomography, and Matta classification) were used to perform correlation and multivariate regression analysis. The average patient age was 48 y (range, 15-82 y); six female patients were included in the study. Mean follow-up was 1.6 y (range, 1-2 y). Moderate correlation between the gait score and outcome (versus EQ5d: rs = 0.477; versus Merle d'Aubigne: rs = 0.444; versus Kellgren-Lawrence: rs = -0.533), as well as high correlation between the Merle d'Aubigne score and outcome, was seen (versus EQ5d: rs = 0.575; versus Merle d'Aubigne: rs = 0.776; versus Kellgren-Lawrence: rs = -0.419). Using a multivariate regression model, the 6-month gait score (B = -0.299; P < 0.05) and early osteoarthritis development (B = 1.026; P < 0.05) were determined as predictors of final osteoarthritis. A good fit of the regression model was seen (R^2 = 0.904). Simple and readily available clinical scoring (gait score/Merle d'Aubigne) can predict short-term radiological and functional outcome after acetabular fractures with sufficient accuracy. Decisions on further treatment and interventions could be based on simplified gait analysis. Copyright © 2016 Elsevier Inc. All rights reserved.
Mannina, Giorgio; Viviani, Gaspare
2010-01-01
Urban water quality management often requires numerical models that allow evaluation of the cause-effect relationship between the inputs (i.e. rainfall, pollutant concentrations on the catchment surface and in the sewer system) and the resulting water quality response. The conventional approach, which considers each component of the system (i.e. sewer system, wastewater treatment plant and receiving water body) separately, does not enable optimisation of the whole system. However, recent gains in understanding and modelling make it possible to represent the system as a whole and optimise its overall performance. Indeed, integrated urban drainage modelling is of growing interest as a source of tools to cope with Water Framework Directive requirements. Two different approaches can be employed for modelling the whole urban drainage system: detailed and simplified. Each has its advantages and disadvantages. Specifically, detailed approaches can offer a higher level of reliability in the model results, but can be very time consuming from the computational point of view. Simplified approaches are faster but may lead to greater model uncertainty due to over-simplification. To gain insight into this problem, two different modelling approaches have been compared with respect to their uncertainty. The first integrated urban drainage model uses the Saint-Venant equations and the 1D advection-dispersion equations, for the quantity and quality aspects, respectively. The second consists of a simplified reservoir model. The analysis used a parsimonious bespoke model developed in previous studies. For the uncertainty analysis, the Generalised Likelihood Uncertainty Estimation (GLUE) procedure was used. Model reliability was evaluated on the basis of the capacity to globally limit the uncertainty. Both models fit the experimental data well, suggesting that the approaches are equivalent for both quantity and quality.
The detailed model approach is more robust and shows narrower uncertainty bands. On the other hand, the simplified river water quality model approach shows higher uncertainty and may be unsuitable for assessing receiving water body quality.
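The GLUE procedure used in the comparison above follows a standard recipe: Monte Carlo sampling of parameters, a likelihood threshold separating "behavioral" parameter sets, and likelihood-weighted quantiles forming the uncertainty bands. A hedged sketch follows; the one-parameter reservoir model, the Nash-Sutcliffe likelihood, and the 0.5 behavioral threshold are illustrative assumptions, not the paper's actual configuration.

```python
import random

def toy_model(k, rain):
    # Hypothetical one-parameter linear reservoir standing in for the
    # integrated drainage model (illustration only).
    storage, out = 0.0, []
    for r in rain:
        storage += r
        q = k * storage
        storage -= q
        out.append(q)
    return out

def nash_sutcliffe(obs, sim):
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def weighted_quantile(pairs, q):
    # pairs: (value, normalized weight) tuples, weights summing to 1
    acc = 0.0
    for v, w in sorted(pairs):
        acc += w
        if acc >= q:
            return v
    return max(v for v, _ in pairs)

random.seed(0)
rain = [random.random() for _ in range(50)]
observed = toy_model(0.3, rain)          # synthetic "measurements"

# GLUE: Monte Carlo sampling; parameter sets whose likelihood exceeds a
# threshold are "behavioral" and jointly define the uncertainty bands.
behavioral = []
for _ in range(2000):
    k = random.uniform(0.01, 0.99)
    sim = toy_model(k, rain)
    ns = nash_sutcliffe(observed, sim)
    if ns > 0.5:                          # behavioral threshold (assumed)
        behavioral.append((ns, sim))

total = sum(ns for ns, _ in behavioral)
band = []
for t in range(len(rain)):
    pairs = [(sim[t], ns / total) for ns, sim in behavioral]
    band.append((weighted_quantile(pairs, 0.05),
                 weighted_quantile(pairs, 0.95)))
```

The width of `band` at each time step is the kind of quantity the comparison above evaluates: a model whose behavioral simulations disagree more produces wider bands, i.e. higher assessed uncertainty.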
Mass Transfer Cooling Near The Stagnation Point
NASA Technical Reports Server (NTRS)
Roberts, Leonard
1959-01-01
A simplified analysis is made of mass transfer cooling, that is, injection of a foreign gas, near the stagnation point for two-dimensional and axisymmetric bodies. The reduction in heat transfer is given in terms of the properties of the coolant gas and it is shown that the heat transfer may be reduced considerably by the introduction of a gas having appropriate thermal and diffusive properties. The mechanism by which heat transfer is reduced is discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haustein, P.E.; Brenner, D.S.; Casten, R.F.
1988-07-01
A new semiempirical method that significantly simplifies atomic mass systematics and enables mass predictions by linear interpolation is discussed in the context of the nuclear valence space. In certain regions, complicated patterns of mass systematics in traditional plots versus Z, N, or isospin are consolidated and transformed into linear ones extending over long isotopic and isotonic sequences.
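Once the systematics become linear along a sequence, "mass prediction by linear interpolation" is a one-line calculation. A minimal sketch follows; the mass-excess numbers are invented placeholders, not evaluated data.

```python
# Linear interpolation along an isotopic sequence: known (Z, N) -> mass
# excess values in MeV. Both entries below are fake illustrative numbers.
known = {(50, 70): -85.0, (50, 74): -87.4}

def predict(z, n):
    # Interpolate between the two bracketing known isotopes.
    (z1, n1), (z2, n2) = sorted(known)
    frac = (n - n1) / (n2 - n1)
    return known[(z1, n1)] + frac * (known[(z2, n2)] - known[(z1, n1)])

mass_excess = predict(50, 72)   # midpoint of the two known isotopes
```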
1986-05-01
Technical report, May 1986. Sections include: The Reactor; Modes of Operation; The AFRRI Reactor; Exposure Facilities; and Cerenkov Radiation.
Relative resolution: A hybrid formalism for fluid mixtures.
Chaimovich, Aviel; Peter, Christine; Kremer, Kurt
2015-12-28
We show here that molecular resolution is inherently hybrid in terms of relative separation. While nearest neighbors are characterized by a fine-grained (geometrically detailed) model, other neighbors are characterized by a coarse-grained (isotropically simplified) model. We notably present an analytical expression for relating the two models via energy conservation. This hybrid framework is correspondingly capable of retrieving the structural and thermal behavior of various multi-component and multi-phase fluids across state space.
Relative resolution: A hybrid formalism for fluid mixtures
NASA Astrophysics Data System (ADS)
Chaimovich, Aviel; Peter, Christine; Kremer, Kurt
2015-12-01
We show here that molecular resolution is inherently hybrid in terms of relative separation. While nearest neighbors are characterized by a fine-grained (geometrically detailed) model, other neighbors are characterized by a coarse-grained (isotropically simplified) model. We notably present an analytical expression for relating the two models via energy conservation. This hybrid framework is correspondingly capable of retrieving the structural and thermal behavior of various multi-component and multi-phase fluids across state space.
NASA Technical Reports Server (NTRS)
Frank, Jeremy; Gross, Michael; Kuerklu, Elif
2003-01-01
We reduced the number of IVPs and BVPs needed to schedule SOFIA by restricting the problem. The restriction costs us little in terms of the value of the flight plans we can build, and it allowed us to reformulate part of the search problem as a zero-finding problem. The result is a simplified planning model and significant savings in computation time.
Asymptotic solution of the problem for a thin axisymmetric cavern
NASA Technical Reports Server (NTRS)
Serebriakov, V. V.
1973-01-01
The boundary value problem which describes the axisymmetric separation of the flow around a body by a stationary infinite stream is considered. It is understood that the cavitation number varies over the length of the cavern. Using the asymptotic expansions for the potential of a thin body, the orders of magnitude of terms in the equations of the problem are estimated. Neglecting small quantities, a simplified boundary value problem is obtained.
Simulated breeding with QU-GENE graphical user interface.
Hathorn, Adrian; Chapman, Scott; Dieters, Mark
2014-01-01
Comparing the efficiencies of breeding methods with field experiments is a costly, long-term process. QU-GENE is a highly flexible genetic and breeding simulation platform capable of simulating the performance of a range of different breeding strategies and for a continuum of genetic models ranging from simple to complex. In this chapter we describe some of the basic mechanics behind the QU-GENE user interface and give a simplified example of how it works.
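The kind of experiment such a platform runs can be illustrated in miniature. The sketch below is not QU-GENE: it is a stand-in showing truncation selection under a purely additive genetic model, with all population sizes, locus counts, and the selection scheme chosen as assumptions for illustration.

```python
import random

# Minimal breeding-strategy simulation: an additive trait (number of
# favorable alleles) under repeated truncation selection.
random.seed(7)
N_LOCI, POP, KEEP, GENS = 20, 100, 20, 10

def genotype():
    return [random.randint(0, 1) for _ in range(N_LOCI)]

def cross(a, b):
    # one offspring: pick either parent's allele at each locus
    return [random.choice(pair) for pair in zip(a, b)]

pop = [genotype() for _ in range(POP)]
means = []
for _ in range(GENS):
    pop.sort(key=sum, reverse=True)     # trait value = number of 1-alleles
    parents = pop[:KEEP]                # truncation selection
    pop = [cross(random.choice(parents), random.choice(parents))
           for _ in range(POP)]
    means.append(sum(map(sum, pop)) / POP)
```

Tracking `means` across generations is the simulated analogue of the costly long-term field comparison mentioned above: different selection schemes can be swapped in and their response curves compared cheaply.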
The Effects of Anticholinesterases and Atropine Derivatives on Visual Function in Human Subjects
1988-02-01
preserve life. There is a considerable species difference: for instance, pyridostigmine has practically no protective effect in rats (Gordon et al, 1978)...absorption of the drug. This may provide another route, in addition to transcorneal absorption, by which physostigmine eyedrops have their central...have been a factor accounting for this difference. In simplifying our results, the term for pupil diameter could reasonably be ignored since its effect
Numerical modeling of an alloy droplet deposition with non-equilibrium solidification
NASA Astrophysics Data System (ADS)
Ramanuj, Vimal
Droplet deposition is a process of extensive relevance to the microfabrication industry. Various bonding and film deposition methods utilize single or multiple droplet impingements on a substrate with subsequent splat formation through simultaneous spreading and solidification. Splat morphology and solidification characteristics play vital roles in determining the final outcome. Experimental methods have limited reach in studying such phenomena owing to the extremely small time and length scales involved. Fundamental understanding of the governing principles of fluid flow, heat transfer and phase change provide effective means of studying such processes through computational techniques. The present study aims at numerically modeling and analyzing the phenomenon of splat formation and phase change in an alloy droplet deposition process. Phase change in alloys occurs non-isothermally and its formulation poses mathematical challenges. A highly non-linear flow field in conjunction with multiple interfaces and convection-diffusion governed phase transition are some of the highlighting features involved in the numerical formulation. Moreover, the non-equilibrium solidification behavior in eutectic systems is of prime concern. The peculiar phenomenon requires special treatments in terms of modeling solid phase species diffusion, liquid phase enrichment during solute partitioning and isothermal eutectic transformation. The flow field is solved using a two-step projection algorithm coupled with enhanced interface modeling schemes. The free surface tracking and reconstruction is achieved through two approaches: VOF-PLIC and CLSVOF to achieve optimum interface accuracy with minimal computational resources. The energy equation is written in terms of enthalpy with an additional source term to account for the phase change. 
The solidification phenomenon is modeled using a coupled temperature-solute scheme that reflects the microscopic effects arising from dendritic growth in rapidly solidifying domains. Solid phase diffusion theories proposed in the literature are incorporated in the solute conservation equation through a back diffusion parameter up to the eutectic composition, beyond which a special treatment is proposed. A simplified homogeneous mushy region model has also been outlined. Both models are employed to reproduce analytical results under limiting conditions and are also verified against experiments. The primary objective of the present work is to examine the splat morphology, solidification behavior, and microstructural characteristics under varying operational parameters. The simplified homogeneous mushy region model is first applied to study the role of convection in an SS304 droplet deposition with substrate remelting. The results are compared with experimental findings reported in the literature and good agreement is observed. Furthermore, a hypoeutectic Sn-Pb alloy droplet deposition is studied using the comprehensive coupled temperature-solute model that accounts for the non-equilibrium solidification occurring in eutectic-type alloys. Particular focus is placed on the limitations of a homogeneous mushy region assumption, the role of species composition in governing solidification, the estimation of microstructural properties, and eutectic formation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Detwiler, Russell
Matrix diffusion and adsorption within a rock matrix are widely regarded as important mechanisms for retarding the transport of radionuclides and other solutes in fractured rock (e.g., Neretnieks, 1980; Tang et al., 1981; Maloszewski and Zuber, 1985; Novakowski and Lapcevic, 1994; Jardine et al., 1999; Zhou and Xie, 2003; Reimus et al., 2003a,b). When remediation options are being evaluated for old sources of contamination, where a large fraction of contaminants reside within the rock matrix, slow diffusion out of the matrix greatly increases the difficulty and timeframe of remediation. Estimating the rates of solute exchange between fractures and the adjacent rock matrix is a critical factor in quantifying immobilization and/or remobilization of DOE-relevant contaminants within the subsurface. In principle, the most rigorous approach to modeling solute transport with fracture-matrix interaction would be based on local-scale coupled advection-diffusion/dispersion equations for the rock matrix and in discrete fractures that comprise the fracture network (Discrete Fracture Network and Matrix approach, hereinafter referred to as DFNM approach), fully resolving aperture variability in fractures and matrix property heterogeneity. However, such approaches are computationally demanding, and thus, many predictive models rely upon simplified models. These models typically idealize fracture rock masses as a single fracture or system of parallel fractures interacting with slabs of porous matrix or as a mobile-immobile or multi-rate mass transfer system. These idealizations provide tractable approaches for interpreting tracer tests and predicting contaminant mobility, but rely upon a fitted effective matrix diffusivity or mass-transfer coefficients. However, because these fitted parameters are based upon simplified conceptual models, their effectiveness at predicting long-term transport processes remains uncertain.
Evidence of scale dependence of effective matrix diffusion coefficients obtained from tracer tests highlights this point and suggests that the underlying mechanisms and relationship between rock and fracture properties are not fully understood in large complex fracture networks. In this project, we developed a high-resolution DFN model of solute transport in fracture networks to explore and quantify the mechanisms that control transport in complex fracture networks and how these may give rise to observed scale-dependent matrix diffusion coefficients. Results demonstrate that small scale heterogeneity in the flow field caused by local aperture variability within individual fractures can lead to long-tailed breakthrough curves indicative of matrix diffusion, even in the absence of interactions with the fracture matrix. Furthermore, the temporal and spatial scale dependence of these processes highlights the inability of short-term tracer tests to estimate transport parameters that will control long-term fate and transport of contaminants in fractured aquifers.
A Design Basis for Spacecraft Cabin Trace Contaminant Control
NASA Technical Reports Server (NTRS)
Perry, Jay L.
2009-01-01
Successful trace chemical contamination control is one of the components necessary for achieving good cabin atmospheric quality. While employing seemingly simple process technologies, sizing the active contamination control equipment must employ a reliable design basis for the trace chemical load in the cabin atmosphere. A simplified design basis that draws on experience gained from the International Space Station program is presented. The trace chemical contamination control design load refines generation source magnitudes and includes key chemical functional groups representing both engineering and toxicology challenges.
Multi-Kilovolt Solid-State Picosecond Switch Studies
2013-06-01
Waveforms for the SiC device: Figure 7 shows the nanosecond driving pulse and the delayed avalanche breakdown of the SiC device. Figure 6 shows the simplified SiC avalanche diode test setup (voltage source and series resistor, test device, and voltage monitor connected to the scope). Measured waveforms show the nanosecond driving pulse and the subnanosecond delayed avalanche breakdown of the SiC device; a device cross-section figure shows the p+/n/n+ structure between anode and cathode (10-75 µm features).
NASA Technical Reports Server (NTRS)
Ferguson, R. E.
1985-01-01
The data base verification of the ECLS Systems Assessment Program (ESAP) is documented, along with changes made to enhance the flexibility of the water recovery subsystem simulations. All changes made to the data base values are described, as are the software enhancements performed. The refined model documented herein constitutes the submittal of the General Cluster Systems Model. A source listing of the current version of ESAP is provided in Appendix A.
Rees, Tom
2002-01-01
Froedtert & Medical College, an academic medical center, has adopted a proactive approach to providing consumers with reliable sources of information. The Milwaukee institution has redesigned its Web site, which first opened in 1995. The new version has simplified the navigation process and added new content. Small Stones, a health resource center, also a brick-and-mortar shop, went online Feb. 1. Online bill paying was launched in May. Pharmacy refill functions are expected to be online this summer.
Nowak, Derek B; Lawrence, A J; Sánchez, Erik J
2010-12-10
We present the development of a versatile spectroscopic imaging tool to allow for imaging with single-molecule sensitivity and high spatial resolution. The microscope allows for near-field and subdiffraction-limited far-field imaging by integrating a shear-force microscope on top of a custom inverted microscope design. The instrument has the ability to image in ambient conditions with optical resolutions on the order of tens of nanometers in the near field. A single low-cost computer controls the microscope with a field programmable gate array data acquisition card. High spatial resolution imaging is achieved with an inexpensive CW multiphoton excitation source, using an apertureless probe and simplified optical pathways. The high resolution, combined with the high collection efficiency and single-molecule-sensitive optical capabilities of the microscope, is demonstrated with a low-cost CW laser source as well as a mode-locked laser source.
Information-Driven Active Audio-Visual Source Localization
Schult, Niclas; Reineking, Thomas; Kluss, Thorsten; Zetzsche, Christoph
2015-01-01
We present a system for sensorimotor audio-visual source localization on a mobile robot. We utilize a particle filter for the combination of audio-visual information and for the temporal integration of consecutive measurements. Although the system only measures the current direction of the source, the position of the source can be estimated because the robot is able to move and can therefore obtain measurements from different directions. These actions by the robot successively reduce uncertainty about the source’s position. An information gain mechanism is used for selecting the most informative actions in order to minimize the number of actions required to achieve accurate and precise position estimates in azimuth and distance. We show that this mechanism is an efficient solution to the action selection problem for source localization, and that it is able to produce precise position estimates despite simplified unisensory preprocessing. Because of the robot’s mobility, this approach is suitable for use in complex and cluttered environments. We present qualitative and quantitative results of the system’s performance and discuss possible areas of application. PMID:26327619
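The particle-filter update at the heart of the system above can be sketched in one dimension. This is a toy bearing-only version: particles over source azimuth are weighted by a Gaussian bearing likelihood and resampled; the sensor noise, true bearing, and diversity jitter are assumed values, and the robot motion and information-gain action selection are omitted.

```python
import math
import random

# Toy 1-D particle filter: estimate a sound source's azimuth from
# noisy bearing measurements.
random.seed(3)
TRUE_BEARING, SIGMA = 1.0, 0.2          # radians (assumed)
particles = [random.uniform(-math.pi, math.pi) for _ in range(500)]

for _ in range(5):                       # five consecutive bearing measurements
    z = TRUE_BEARING + random.gauss(0.0, SIGMA)
    weights = [math.exp(-0.5 * ((p - z) / SIGMA) ** 2) for p in particles]
    total = sum(weights)
    particles = random.choices(particles,
                               weights=[w / total for w in weights],
                               k=len(particles))
    # small jitter keeps particle diversity after resampling
    particles = [p + random.gauss(0.0, 0.02) for p in particles]

estimate = sum(particles) / len(particles)  # posterior mean bearing
```

In the full system the particles live over 2-D source positions, and each robot motion changes the measurement geometry, which is what makes distance observable from direction-only measurements.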
The efficiency of the heat pump water heater, during DHW tapping cycle
NASA Astrophysics Data System (ADS)
Gużda, Arkadiusz; Szmolke, Norbert
2017-10-01
This paper discusses one of the most effective systems for domestic hot water (DHW) production, based on an air-source heat pump with an integrated tank. The operating principle of the heat pump is described in detail, and the experimental set-up and measurement results are presented. In the experimental part, measurements were conducted to determine the energy parameters and measures of economic efficiency of the presented solution. The measurements are based on a tapping cycle similar to that recommended in the EN 16147 standard. The efficiency of the air-source heat pump over the duration of the experiment was 2.43. At the end of the paper, the authors present a simplified ecological analysis to determine the environmental impact of operating an air-source heat pump with an integrated tank, together with a comparison against other energy sources (a gas boiler with a closed combustion chamber and a coal-fired boiler). The heat pump proved to be the most environmentally friendly of the energy sources considered.
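The reported efficiency of 2.43 is a coefficient of performance (COP): heat delivered divided by electricity consumed. The bookkeeping can be sketched as follows; only the COP of 2.43 comes from the paper, the electricity figure is an assumed illustration.

```python
# COP energy split: heat delivered = COP * electricity; the remainder
# is drawn "for free" from the ambient air.
cop = 2.43                                 # measured efficiency from the paper
electricity_kwh = 10.0                     # assumed electrical input over the cycle
heat_delivered_kwh = cop * electricity_kwh
ambient_heat_kwh = heat_delivered_kwh - electricity_kwh
```

This split is what drives the ecological comparison: a boiler must supply all of the delivered heat from fuel, whereas the heat pump buys only the electrical fraction.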
LENS: μLENS Simulations, Analysis, and Results
NASA Astrophysics Data System (ADS)
Rasco, Charles
2013-04-01
Simulations of the Low-Energy Neutrino Spectrometer prototype, μLENS, have been performed in order to benchmark the first measurements of the μLENS detector at the Kimballton Underground Research Facility (KURF). μLENS is a 6x6x6-celled scintillation lattice filled with a linear alkylbenzene based scintillator. We have performed simulations of μLENS using the GEANT4 toolkit. We have measured various radioactive sources, LEDs, and the environmental background radiation at KURF using up to 96 PMTs with a simplified data acquisition system of QDCs and TDCs. In this talk we will demonstrate our understanding of the light propagation in the detector and compare simulation results with μLENS detector measurements of the radioactive sources, LEDs, and environmental background radiation.
Open Source GIS Connectors to NASA GES DISC Satellite Data
NASA Technical Reports Server (NTRS)
Kempler, Steve; Pham, Long; Yang, Wenli
2014-01-01
The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) houses a suite of high spatiotemporal resolution GIS data including satellite-derived and modeled precipitation, air quality, and land surface parameter data. The data are valuable to various GIS research and applications at regional, continental, and global scales. On the other hand, many GIS users, especially those from the ArcGIS community, have difficulties in obtaining, importing, and using our data due to factors such as the variety of data products, the complexity of satellite remote sensing data, and the data encoding formats. We introduce a simple open source ArcGIS data connector that significantly simplifies the access and use of GES DISC data in ArcGIS.
Detector-device-independent quantum key distribution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lim, Charles Ci Wen; Korzh, Boris; Martin, Anthony
2014-12-01
Recently, a quantum key distribution (QKD) scheme based on entanglement swapping, called measurement-device-independent QKD (mdiQKD), was proposed to bypass all measurement side-channel attacks. While mdiQKD is conceptually elegant and offers a supreme level of security, the experimental complexity is challenging for practical systems. For instance, it requires interference between two widely separated independent single-photon sources, and the secret key rates are dependent on detecting two photons, one from each source. Here, we demonstrate a proof-of-principle experiment of a QKD scheme that removes the need for a two-photon system and instead uses the idea of a two-qubit single-photon to significantly simplify the implementation and improve the efficiency of mdiQKD in several aspects.
The integral line-beam method for gamma skyshine analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shultis, J.K.; Faw, R.E.; Bassett, M.S.
1991-03-01
This paper presents a refinement of a simplified method, based on line-beam response functions, for performing skyshine calculations for shielded and collimated gamma-ray sources. New coefficients for an empirical fit to the line-beam response function are provided and a prescription for making the response function continuous in energy and emission direction is introduced. For a shielded source, exponential attenuation and a buildup factor correction for scattered photons in the shield are used. Results of the new integral line-beam method of calculation are compared to a variety of benchmark experimental data and calculations and are found to give generally excellent agreement at a small fraction of the computational expense required by other skyshine methods.
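The shielded-source treatment described above combines exponential attenuation with a buildup factor for in-shield scattering. A hedged sketch follows; the Taylor-form buildup coefficients and shield parameters are illustrative assumptions, not the paper's fitted line-beam coefficients.

```python
import math

# Fraction of source photons (uncollided plus built-up scattered)
# emerging from a shield of optical thickness mu_t.
def transmitted_fraction(mu_t, A=10.0, a1=-0.1, a2=0.05):
    # Taylor form: B(mu_t) = A*exp(-a1*mu_t) + (1 - A)*exp(-a2*mu_t)
    buildup = A * math.exp(-a1 * mu_t) + (1.0 - A) * math.exp(-a2 * mu_t)
    return buildup * math.exp(-mu_t)

mu = 0.06     # 1/cm, assumed attenuation coefficient at the source energy
t_cm = 30.0   # assumed shield thickness
frac = transmitted_fraction(mu * t_cm)
```

In the integral line-beam method this transmitted, buildup-corrected source strength is what gets folded against the line-beam response function over emission directions.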
User Interface Design in Medical Distributed Web Applications.
Serban, Alexandru; Crisan-Vida, Mihaela; Mada, Leonard; Stoicu-Tivadar, Lacramioara
2016-01-01
User interfaces are important for easy learning and operation of an IT application, especially in the medical world. An easy-to-use interface has to be simple and tailored to the user's needs and mode of operation, and the underlying technology is an important tool to accomplish this. The present work aims to create a web interface using a specific technology (HTML table design combined with CSS3) to provide an optimized responsive interface for a complex web application. In the first phase, the layout of the current icMED web medical application is analyzed and its structure is designed using specific tools, based on source files. In the second phase, a new graphic interface adaptable to different mobile terminals is proposed (using HTML table design (TD) and the CSS3 method) that uses no source files, just lines of code for layout design, improving the interaction in terms of speed and simplicity. For a complex medical software application, a new prototype layout was designed and developed using HTML tables. The method uses CSS code with only CSS classes applied to one or multiple HTML table elements, instead of CSS styles that can be applied to just one DIV tag at a time. The technique has the advantage of simplified CSS code and better adaptability to different media resolutions compared to the DIV-CSS style method. The presented work demonstrates that adaptive web interfaces can be developed just by using and combining different types of design methods and technologies, such as HTML table design, resulting in an interface that is simpler to learn and use, suitable for healthcare services.
Fully-Coupled Dynamical Jitter Modeling of Momentum Exchange Devices
NASA Astrophysics Data System (ADS)
Alcorn, John
A primary source of spacecraft jitter is mass imbalance within momentum exchange devices (MEDs) used for fine pointing, such as reaction wheels (RWs) and variable-speed control moment gyroscopes (VSCMGs). Although these effects are often characterized through experimentation in order to validate pointing stability requirements, it is of interest to include jitter in a computer simulation of the spacecraft in the early stages of spacecraft development. An estimate of jitter amplitude may be found by modeling MED imbalance torques as external disturbance forces and torques on the spacecraft. In this case, MED mass imbalances are lumped into static and dynamic imbalance parameters, allowing jitter force and torque to be simply proportional to wheel speed squared. A physically realistic dynamic model may be obtained by defining mass imbalances in terms of a wheel center of mass location and inertia tensor. The fully-coupled dynamic model allows for momentum and energy validation of the system. This is often critical when modeling additional complex dynamical behavior such as flexible dynamics and fuel slosh. Furthermore, it is necessary to use the fully-coupled model in instances where the relative mass properties of the spacecraft with respect to the RWs cause the simplified jitter model to be inaccurate. This thesis presents a generalized approach to MED imbalance modeling of a rigid spacecraft hub with N RWs or VSCMGs. A discussion is included on converting from manufacturer specifications of RW imbalances to the parameters introduced within each model. Implementations of the fully-coupled RW and VSCMG models derived within this thesis are released open-source as part of the Basilisk astrodynamics software.
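The simplified (external-disturbance) model mentioned above reduces to two lumped parameters: static imbalance gives a force and dynamic imbalance a torque, both proportional to wheel speed squared. A sketch follows; the imbalance values are illustrative manufacturer-style numbers, not from the thesis.

```python
import math

# Simplified RW jitter disturbance: F = Us * omega^2, T = Ud * omega^2.
def rw_jitter(us_kg_m, ud_kg_m2, speed_rpm):
    omega = speed_rpm * 2.0 * math.pi / 60.0             # wheel speed, rad/s
    return us_kg_m * omega ** 2, ud_kg_m2 * omega ** 2   # force (N), torque (N*m)

# Assumed specs: Us = 1.5 g*cm = 1.5e-5 kg*m, Ud = 30 g*cm^2 = 3.0e-6 kg*m^2
force_n, torque_nm = rw_jitter(1.5e-5, 3.0e-6, 3000.0)
```

At 3000 rpm this gives roughly a newton of force, which illustrates why the quadratic speed dependence makes high wheel speeds the jitter-critical operating points, and why the fully-coupled model matters when these loads feed back into the hub dynamics.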
Investigation on RGB laser source applied to dynamic photoelastic experiment
NASA Astrophysics Data System (ADS)
Li, Songgang; Yang, Guobiao; Zeng, Weiming
2014-06-01
When an elastomer sustains a shock load or blast load, the internal stress state at every point changes rapidly over time. Dynamic photoelasticity is an experimental stress analysis method for studying dynamic stress and stress wave propagation. The light source is one of the most important devices in a dynamic photoelastic experiment system, and applying an RGB laser light source is an innovation that improves the system. The RGB laser is synthesized from red, green, and blue lasers and can serve either as a single-wavelength laser source or as a synthesized white laser source. With an RGB laser as the light source, the colored isochromatics can be captured in the dynamic photoelastic experiment, and even the black zero-order fringe and the isoclinics can be collected, which aids the analysis and study of transient stress and stress wave propagation. The RGB laser provides a highly stable, continuous output, and its power can be adjusted; the three wavelengths can be combined at different power ratios. Used as a light source for dynamic photoelastic experiments, the RGB laser overcomes a number of deficiencies and shortcomings of other light sources and simplifies the experiment, achieving good results.
Blind separation of positive sources by globally convergent gradient search.
Oja, Erkki; Plumbley, Mark
2004-09-01
The instantaneous noise-free linear mixing model in independent component analysis is largely a solved problem under the usual assumption of independent nongaussian sources and full column rank mixing matrix. However, with some prior information on the sources, like positivity, new analysis and perhaps simplified solution methods may yet become possible. In this letter, we consider the task of independent component analysis when the independent sources are known to be nonnegative and well grounded, which means that they have a nonzero pdf in the region of zero. It can be shown that in this case, the solution method is basically very simple: an orthogonal rotation of the whitened observation vector into nonnegative outputs will give a positive permutation of the original sources. We propose a cost function whose minimum coincides with nonnegativity and derive the gradient algorithm under the whitening constraint, under which the separating matrix is orthogonal. We further prove that in the Stiefel manifold of orthogonal matrices, the cost function is a Lyapunov function for the matrix gradient flow, implying global convergence. Thus, this algorithm is guaranteed to find the nonnegative well-grounded independent sources. The analysis is complemented by a numerical simulation, which illustrates the algorithm.
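The core geometric fact above, that an orthogonal rotation of the whitened observations into nonnegative outputs recovers the sources, can be checked numerically. In the sketch below everything is an illustrative assumption: exponential sources, a known mixing angle, and a brute-force grid search standing in for the paper's gradient flow on the orthogonal group; mixing directly with a rotation lets us skip the whitening step.

```python
import math
import random

# Nonnegative, well-grounded sources (nonzero pdf near zero).
random.seed(1)
n = 500
s1 = [random.expovariate(1.0) for _ in range(n)]
s2 = [random.expovariate(1.0) for _ in range(n)]

theta_true = 0.6                      # mixing rotation angle (assumed)
c, s = math.cos(theta_true), math.sin(theta_true)
x = [(c * a - s * b, s * a + c * b) for a, b in zip(s1, s2)]

def cost(theta):
    # J(theta) = mean of sum_i min(y_i, 0)^2; zero iff all outputs
    # are nonnegative, which is the paper's separation criterion.
    c, s = math.cos(theta), math.sin(theta)
    total = 0.0
    for x1, x2 in x:
        y1 = c * x1 + s * x2          # inverse (transpose) rotation
        y2 = -s * x1 + c * x2
        total += min(y1, 0.0) ** 2 + min(y2, 0.0) ** 2
    return total / n

# Grid search over rotation angles in [0, pi/2).
best = min((cost(i * 0.005), i * 0.005) for i in range(315))
```

The minimizing angle coincides with the mixing angle: the only rotation keeping every output nonnegative is the one that undoes the mixing (up to permutation), which is exactly the global-convergence property the letter proves for the gradient flow.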
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mascali, D.; Gammino, S.; Celona, L.
2012-02-15
Further improvements of electron cyclotron resonance ion source (ECRIS) output currents and average charge state require a deep understanding of electron and ion dynamics in the plasma. This paper discusses the most recent advances in modeling non-classical evidence such as the sensitivity of the electron energy distribution function to magnetic field detuning, the influence of plasma turbulence on electron heating and ion confinement, and the coupling between electron and ion dynamics. All these issues have in common the non-homogeneous distribution of the plasma inside the source: the abrupt density drop at the resonance layer regulates the heating regimes (from collective to turbulent), the beam formation mechanism, and the emittance. Possible means to boost the performance of future ECRIS are proposed. In particular, the use of Bernstein waves, in preliminary experiments performed at Laboratori Nazionali del Sud (LNS) on MDIS (microwave discharge ion source)-type sources, has made it possible to sustain largely overdense plasmas and enhance the warm electron temperature, which should in principle make possible the construction of sources for high-intensity multicharged ion beams with simplified magnetic structures.
Image acquisition system using on sensor compressed sampling technique
NASA Astrophysics Data System (ADS)
Gupta, Pravir Singh; Choi, Gwan Seong
2018-01-01
Advances in CMOS technology have made high-resolution image sensors possible. These image sensors pose significant challenges in terms of the amount of raw data generated, energy efficiency, and frame rate. This paper presents a design methodology for an imaging system and a simplified image sensor pixel design to be used in the system so that the compressed sensing (CS) technique can be implemented easily at the sensor level. This results in significant energy savings as it not only cuts the raw data rate but also reduces transistor count per pixel; decreases pixel size; increases fill factor; simplifies analog-to-digital converter, JPEG encoder, and JPEG decoder design; decreases wiring; and reduces the decoder size by half. Thus, CS has the potential to increase the resolution of image sensors for a given technology and die size while significantly decreasing the power consumption and design complexity. We show that it has potential to reduce power consumption by about 23% to 65%.
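The data-rate argument above can be made concrete: instead of reading out all N pixel values, the sensor outputs M << N masked sums (y = Φx). The sketch below shows the measurement side only; the {0,1} mask style and the tile sizes are assumptions, not the paper's pixel design, and reconstruction (a sparse solver) is omitted.

```python
import random

# On-sensor compressed sampling of a tiny pixel tile.
random.seed(42)
N = 64          # pixels in the tile
M = 16          # compressed measurements read out instead of N values

pixels = [random.randint(0, 255) for _ in range(N)]
masks = [[random.randint(0, 1) for _ in range(N)] for _ in range(M)]

# Each measurement is a random masked sum of pixel values: y = Phi * x.
measurements = [sum(m_i * p_i for m_i, p_i in zip(mask, pixels))
                for mask in masks]

compression_ratio = M / N   # raw data rate cut to 25% in this sketch
```

Because each measurement is a simple masked accumulation, it can be computed in the analog readout chain, which is what lets the technique shrink the pixel, the ADC, and the downstream encoder at once.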
Simplified and refined structural modeling for economical flutter analysis and design
NASA Technical Reports Server (NTRS)
Ricketts, R. H.; Sobieszczanski, J.
1977-01-01
A coordinated use of two finite-element models of different levels of refinement is presented to reduce the computer cost of the repetitive flutter analysis commonly encountered in structural resizing to meet flutter requirements. One model, termed the refined model (RM), represents the high degree of detail needed for strength-sizing and flutter analysis of an airframe. The other model, called the simplified model (SM), has a much smaller number of elements and degrees of freedom. A systematic method of deriving an SM from a given RM is described. The method consists of judgmental and numerical operations to make the stiffness and mass of the SM elements equivalent to those of the corresponding substructures of the RM. The structural data are automatically transferred between the two models. The bulk of the analysis is performed on the SM, with periodic verifications carried out by analysis of the RM. In a numerical example of a supersonic cruise aircraft with an arrow wing, this approach permitted substantial savings in computer costs and accelerated job turn-around.
Efficient parallel resolution of the simplified transport equations in mixed-dual formulation
NASA Astrophysics Data System (ADS)
Barrault, M.; Lathuilière, B.; Ramet, P.; Roman, J.
2011-03-01
A reactivity computation consists of computing the highest eigenvalue of a generalized eigenvalue problem, for which an inverse power algorithm is commonly used. Very fine models are difficult for our sequential solver, based on the simplified transport equations, to handle in terms of memory consumption and computational time. A first implementation of a Lagrangian-based domain decomposition method yields poor parallel efficiency because of an increase in the number of power iterations [1]. In order to obtain high parallel efficiency, we improve the parallelization scheme by changing the location of the loop over the subdomains in the overall algorithm and by exploiting the characteristics of the Raviart-Thomas finite element. The new parallel algorithm still allows us to locally adapt the numerical scheme (mesh, finite element order). However, it can be significantly optimized for the matching-grid case. The good behavior of the new parallelization scheme is demonstrated for the matching-grid case on several hundred nodes for computations based on a pin-by-pin discretization.
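The inverse power algorithm mentioned above can be sketched as follows. The 2x2 loss operator A and fission operator F are invented for illustration and stand in for the real operators assembled from the simplified-transport discretisation; the iteration computes the dominant eigenvalue k of A phi = (1/k) F phi with one "flux solve" per outer iteration:

```python
# Hypothetical 2x2 loss (A) and fission (F) operators, for illustration only
A = [[4.0, -1.0], [-1.0, 3.0]]
F = [[2.0, 0.5], [0.5, 1.0]]

def solve2(A, b):
    """Direct solve of a 2x2 system A z = b (Cramer's rule)."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def power_iteration(A, F, tol=1e-12, max_iter=1000):
    """Inverse power iteration for the dominant eigenvalue k of A phi = (1/k) F phi."""
    phi = [1.0, 1.0]   # initial flux guess, max-norm 1
    k = 1.0
    for _ in range(max_iter):
        src = matvec(F, phi)            # fission source from current flux
        z = solve2(A, src)              # flux solve: z = A^{-1} F phi
        k_new = max(abs(c) for c in z)  # eigenvalue estimate (max-norm)
        phi = [c / k_new for c in z]    # renormalise the flux
        if abs(k_new - k) < tol:
            return k_new, phi
        k = k_new
    return k, phi

k_eff, phi = power_iteration(A, F)
```

In the real solver each "flux solve" is itself a large distributed computation, which is why an increase in the number of outer power iterations destroys parallel efficiency.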
Simplifying the EFT of Inflation: generalized disformal transformations and redundant couplings
NASA Astrophysics Data System (ADS)
Bordin, Lorenzo; Cabass, Giovanni; Creminelli, Paolo; Vernizzi, Filippo
2017-09-01
We study generalized disformal transformations, including derivatives of the metric, in the context of the Effective Field Theory of Inflation. All these transformations do not change the late-time cosmological observables but change the coefficients of the operators in the action: some couplings are effectively redundant. At leading order in derivatives and up to cubic order in perturbations, one has 6 free functions that can be used to set to zero 6 of the 17 operators at this order. This is used to show that the tensor three-point function cannot be modified at leading order in derivatives, while the scalar-tensor-tensor correlator can only be modified by changing the scalar dynamics. At higher order in derivatives there are transformations that do not affect the Einstein-Hilbert action: one can find 6 additional transformations that can be used to simplify the inflaton action, at least when the dynamics is dominated by the lowest derivative terms. We also identify the leading higher-derivative corrections to the tensor power spectrum and bispectrum.
Anisotropic elliptic optical fibers. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Kang, Soon Ahm
1991-01-01
The exact characteristic equation for an anisotropic elliptic optical fiber is obtained for odd and even hybrid modes in terms of infinite determinants utilizing Mathieu and modified Mathieu functions. A simplified characteristic equation is obtained by applying the weakly guiding approximation, in which the difference between the refractive indices of the core and the cladding is small. The simplified characteristic equation is used to compute the normalized guide wavelength for an elliptical fiber. When the anisotropic parameter is equal to unity, the results are compared with previous research and are in close agreement. For a fixed value of the normalized cross-section area or major axis, the normalized guide wavelength lambda/lambda(sub 0) for an anisotropic elliptic fiber is smaller for larger values of anisotropy. This indicates that more energy is carried inside the fiber. However, the geometry and anisotropy of the fiber have a smaller effect when the normalized cross-section area is very small or very large.
Simplified process model discovery based on role-oriented genetic mining.
Zhao, Weidong; Liu, Xi; Dai, Weihui
2014-01-01
Process mining is the automated acquisition of process models from event logs. Although many process mining techniques have been developed, most of them are based on control flow. Meanwhile, existing role-oriented process mining methods focus on the correctness and integrity of roles while ignoring the role complexity of the process model, which directly impacts its understandability and quality. To address these problems, we propose a genetic programming approach to mine simplified process models. Using a new metric of process complexity in terms of roles as the fitness function, we can find simpler process models. The new role complexity metric is designed from role cohesion and coupling, and is applied to discover roles in process models. Moreover, the higher fitness derived from the role complexity metric also provides a guideline for redesigning process models. Finally, we conduct a case study and experiments to show that the proposed method is more effective for streamlining the process than related approaches.
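A fitness function of the kind described could, for instance, reward cohesive roles and penalise coupled ones. The cohesion and coupling definitions below are illustrative assumptions for the sketch, not the metric defined in the paper:

```python
# Illustrative sketch only: these cohesion/coupling formulas are
# assumptions, not the paper's actual role complexity metric.

def role_cohesion(role_tasks, handovers):
    """Fraction of a role's task pairs connected by a direct handover."""
    tasks = list(role_tasks)
    pairs = [(a, b) for i, a in enumerate(tasks) for b in tasks[i + 1:]]
    if not pairs:
        return 1.0
    linked = sum(1 for a, b in pairs
                 if (a, b) in handovers or (b, a) in handovers)
    return linked / len(pairs)

def role_coupling(role_tasks, handovers):
    """Fraction of cross-role handovers among all handovers touching the role."""
    touching = [(a, b) for a, b in handovers
                if a in role_tasks or b in role_tasks]
    if not touching:
        return 0.0
    cross = sum(1 for a, b in touching if (a in role_tasks) != (b in role_tasks))
    return cross / len(touching)

def fitness(roles, handovers, w_cohesion=0.5, w_coupling=0.5):
    """Higher is better: cohesive roles, loosely coupled to each other."""
    coh = sum(role_cohesion(r, handovers) for r in roles) / len(roles)
    cpl = sum(role_coupling(r, handovers) for r in roles) / len(roles)
    return w_cohesion * coh - w_coupling * cpl

# Two candidate role assignments over tasks A-D with handovers A->B, B->C, C->D
handovers = {("A", "B"), ("B", "C"), ("C", "D")}
good = [{"A", "B"}, {"C", "D"}]   # roles follow the handover structure
bad = [{"A", "C"}, {"B", "D"}]    # roles cut across it
```

In a genetic programming setting, `fitness` would be evaluated for each candidate role assignment in the population, so assignments like `good` survive selection over assignments like `bad`.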
NASA Astrophysics Data System (ADS)
Haqiqi, M. T.; Yuliansyah; Suwinarti, W.; Amirta, R.
2018-04-01
Short Rotation Coppice (SRC) is an option for providing renewable and sustainable feedstock for generating electricity in rural areas. In this study, we focused on applying Response Surface Methodology (RSM) to simplify the calculation protocols for estimating wood chip production and energy potential from several tropical SRC species: Bauhinia purpurea, Bridelia tomentosa, Calliandra calothyrsus, Fagraea racemosa, Gliricidia sepium, Melastoma malabathricum, Piper aduncum, Vernonia amygdalina, Vernonia arborea and Vitex pinnata. The results showed that the highest calorific value was obtained from V. pinnata wood (19.97 MJ kg-1) due to its high lignin content (29.84 %, w/w). Our findings also indicated that the RSM estimate of the energy-electricity potential of SRC wood was significant for the quadratic model (R2 = 0.953), whereas the solid-chip ratio prediction was accurate (R2 = 1.000). This simple formula is promising for easily calculating the energy production of woody biomass, especially from SRC species.
PropBase Query Layer: a single portal to UK subsurface physical property databases
NASA Astrophysics Data System (ADS)
Kingdon, Andrew; Nayembil, Martin L.; Richardson, Anne E.; Smith, A. Graham
2013-04-01
Until recently, the delivery of geological information to industry and the public was achieved by geological mapping. Now, pervasively available computers mean that 3D geological models can deliver realistic representations of the geometric location of geological units, represented as shells or volumes. The next phase of this process is to populate these with physical property data that describe subsurface heterogeneity and its associated uncertainty. Achieving this requires the capture and serving of physical, hydrological and other property information from diverse sources. The British Geological Survey (BGS) holds large volumes of subsurface property data, derived both from its own research data collection and from other, often commercially derived, data sources. These can be voxelated to incorporate the data into the models and demonstrate property variation within the subsurface geometry. All property data held by BGS have for many years been stored in relational databases to ensure their long-term continuity. These databases have, by necessity, complex structures: each contains positional reference data and model information, as well as metadata such as sample identification information and attributes that define the source and processing. Whilst this is critical to assessing the analyses, it also complicates the understanding of the variability of the property under assessment and requires multiple queries to study related datasets, making the extraction of physical properties from these databases difficult. The PropBase Query Layer has therefore been created to allow simplified aggregation and extraction of all related data, presenting complex data in simple, mostly denormalized, tables which combine information from multiple databases into a single system.
The structure from each relational database is denormalized into a generalised structure, so that each dataset can be viewed in a common format through a simple interface. Data are re-engineered to facilitate easy loading. The query layer structure comprises tables, procedures, functions, triggers, views and materialised views. At its core is a main table, PRB_DATA, which holds all of the data with the following attribution:
• a unique identifier
• the data source
• the unique identifier from the parent database, for traceability
• the 3D location
• the property type
• the property value
• the units
• necessary qualifiers
• precision information and an audit trail
Data sources, property types and units are constrained by dictionaries, a key component of the structure, which define which properties and inheritance hierarchies are to be coded and guide what is extracted from the structure and how. Data types served by the Query Layer include site-investigation-derived geotechnical data, hydrogeology datasets, regional geochemistry, geophysical logs, and lithological and borehole metadata. The size and complexity of the datasets, with multiple parent structures, require a technically robust approach to keep the layer synchronised. This is achieved through Oracle procedures written in PL/SQL containing the logic required to carry out the data manipulation (inserts, updates, deletes) that keeps the layer synchronised with the underlying databases, either as regularly scheduled jobs (weekly, monthly, etc.) or invoked on demand. The PropBase Query Layer's implementation has enabled rapid data discovery, visualisation and interpretation of geological data, simplifying the parametrisation of 3D model volumes and facilitating the study of intra-unit heterogeneity.
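The denormalisation step can be sketched in miniature: heterogeneous source records are re-engineered into one flat schema keyed by source, location and property type. The field names and example values below are simplified assumptions; the real PRB_DATA table and its dictionaries are far richer:

```python
# Two hypothetical source datasets with incompatible native schemas
geotech = [
    {"id": "GT1", "x": 451200.0, "y": 289300.0, "z": -12.5,
     "property": "undrained shear strength", "value": 85.0, "units": "kPa"},
]
geophys_logs = [
    {"log_id": "BH7/GR", "easting": 398100.0, "northing": 412700.0,
     "depth": 140.0, "curve": "gamma", "reading": 92.3, "unit": "API"},
]

def denormalise():
    """Re-engineer heterogeneous source records into one flat schema."""
    rows = []
    for r in geotech:
        rows.append({"source": "geotechnical", "parent_id": r["id"],
                     "x": r["x"], "y": r["y"], "z": r["z"],
                     "property_type": r["property"],
                     "value": r["value"], "units": r["units"]})
    for r in geophys_logs:
        rows.append({"source": "geophysical_log", "parent_id": r["log_id"],
                     "x": r["easting"], "y": r["northing"],
                     "z": -r["depth"],            # depth below datum -> z
                     "property_type": r["curve"],
                     "value": r["reading"], "unit" if False else "units": r["unit"]})
    return rows

prb_data = denormalise()
```

Keeping the parent identifier in every flat row preserves traceability back to the source database, which is what lets a single query span datasets that would otherwise each need their own join logic.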
48 CFR 1552.232-74 - Payments-simplified acquisition procedures financing.
Code of Federal Regulations, 2010 CFR
2010-10-01
... acquisition procedures financing. 1552.232-74 Section 1552.232-74 Federal Acquisition Regulations System... Provisions and Clauses 1552.232-74 Payments—simplified acquisition procedures financing. As prescribed in... acquisition procedures financing. Payments—Simplified Acquisition Procedures Financing (JUN 2006) Simplified...
Sun, Jin; Kelbert, Anna; Egbert, G.D.
2015-01-01
Long-period global-scale electromagnetic induction studies of deep Earth conductivity are based almost exclusively on magnetovariational methods and require accurate models of external source spatial structure. We describe approaches to inverting for both the external sources and three-dimensional (3-D) conductivity variations and apply these methods to long-period (T≥1.2 days) geomagnetic observatory data. Our scheme involves three steps: (1) Observatory data from 60 years (only partly overlapping and with many large gaps) are reduced and merged into dominant spatial modes using a scheme based on frequency domain principal components. (2) Resulting modes are inverted for corresponding external source spatial structure, using a simplified conductivity model with radial variations overlain by a two-dimensional thin sheet. The source inversion is regularized using a physically based source covariance, generated through superposition of correlated tilted zonal (quasi-dipole) current loops, representing ionospheric source complexity smoothed by Earth rotation. Free parameters in the source covariance model are tuned by a leave-one-out cross-validation scheme. (3) The estimated data modes are inverted for 3-D Earth conductivity, assuming the source excitation estimated in step 2. Together, these developments constitute key components in a practical scheme for simultaneous inversion of the catalogue of historical and modern observatory data for external source spatial structure and 3-D Earth conductivity.
Contracting Officer Workload and Contractual Terms: Theory and Evidence
2012-08-30
should be conducted in accordance with simplified acquisitions procedures and are explicitly set aside for small businesses. These awards are known as...analyze a set of California Highway Procurement auctions and find that the ex-post adaptation costs make up between 7 and 13% of the winning bid...and simply assume he is facing some set of incentives that leads him to value saving time and money on the project and on its procurement. Having him
The Boltzmann equation in the difference formulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Szoke, Abraham; Brooks III, Eugene D.
2015-05-06
First, we recall the assumptions that are needed for the validity of the Boltzmann equation and for the validity of the compressible Euler equations. We then present the difference formulation of these equations and make a connection with the time-honored Chapman-Enskog expansion. We discuss the hydrodynamic limit and calculate the thermal conductivity of a monatomic gas, using a simplified approximation for the collision term. Our formulation is more consistent and simpler than the traditional derivation.
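A canonical simplified collision term is the BGK (relaxation-time) approximation; whether this is the specific simplification used by the authors is an assumption here. Applied in Chapman-Enskog fashion it gives closed-form transport coefficients:

```latex
% BGK relaxation-time approximation to the collision term:
\left(\frac{\partial f}{\partial t}\right)_{\mathrm{coll}}
  \approx -\,\frac{f - f^{\mathrm{eq}}}{\tau}
% Chapman-Enskog expansion of the BGK equation yields, for a
% monatomic ideal gas,
\mu = p\,\tau, \qquad
\kappa = \frac{5}{2}\,\frac{k_B}{m}\,p\,\tau ,
% so the BGK model predicts a Prandtl number of unity,
\mathrm{Pr} = \frac{c_p\,\mu}{\kappa} = 1 ,
% versus the value of about 2/3 obtained from the full collision integral.
```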
Fuzziness In Approximate And Common-Sense Reasoning In Knowledge-Based Robotics Systems
NASA Astrophysics Data System (ADS)
Dodds, David R.
1987-10-01
Fuzzy functions, a major key to inexact reasoning, are described as applied to the fuzzification of robot coordinate systems. Linguistic variables, a means of labelling ranges in fuzzy sets, are used as a computationally pragmatic means of representing spatialization metaphors, themselves an extraordinarily rich basis for understanding concepts in orientational terms. Complex plans may be abstracted and simplified in a system that promotes conceptual planning by means of the orientational representation.
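A minimal sketch of what fuzzifying a robot coordinate with linguistic variables looks like; the variable name ("distance to object") and its ranges are hypothetical, chosen only to illustrate the mechanism:

```python
def triangular(a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        if x <= b:
            return (x - a) / (b - a)
        return (c - x) / (c - b)
    return mu

def left_shoulder(b, c):
    """Full membership up to b, falling to zero at c."""
    def mu(x):
        if x <= b:
            return 1.0
        if x >= c:
            return 0.0
        return (c - x) / (c - b)
    return mu

def right_shoulder(a, b):
    """Zero membership up to a, full membership from b."""
    def mu(x):
        if x <= a:
            return 0.0
        if x >= b:
            return 1.0
        return (x - a) / (b - a)
    return mu

# Hypothetical linguistic variable 'distance to object' (cm) for a robot gripper
NEAR = left_shoulder(10.0, 30.0)
MEDIUM = triangular(10.0, 40.0, 70.0)
FAR = right_shoulder(50.0, 90.0)

def fuzzify(distance_cm):
    """Map a crisp sensor reading onto grades over the linguistic terms."""
    return {"near": NEAR(distance_cm),
            "medium": MEDIUM(distance_cm),
            "far": FAR(distance_cm)}

grades = fuzzify(25.0)   # a reading can be partly 'near' and partly 'medium'
```

The overlapping ranges are the point: a crisp coordinate belongs to several linguistic terms at once, with graded membership, which is what lets plans be stated in orientational terms rather than exact coordinates.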
Implications of CO Bias for Ozone and Methane Lifetime in a CCM
NASA Technical Reports Server (NTRS)
Strode, Sarah; Duncan, Bryan Neal; Yegorova, Elena; Douglass, Anne
2013-01-01
A low bias in carbon monoxide compared to observations at high latitudes is a common feature of chemistry climate models. CO bias can both indicate and contribute to a bias in modeled OH and methane lifetime. This study examines possible causes of CO bias in the ACCMIP simulation of the GEOSCCM, and considers how attributing the CO bias to uncertainty in CO emissions versus biases in other constituents impacts the relationship between CO bias and methane lifetime. We use a simplified model of CO tagged by source with specified OH to quantify the sensitivity of the CO bias to changes in CO emissions or OH concentration, comparing the modeled CO to surface and MOPITT observations. The simplified model shows that decreasing OH in the northern hemisphere removes most of the global mean and inter-hemispheric bias in surface CO. We then use results from this analysis to explore how adjusting CO sources in the CCM impacts the concentrations of ozone, OH and methane. The CCM simulation also exhibits biases in ozone and water vapor compared to observations. We use a parameterized CO-OH-CH4 model that takes ozone and water vapor as inputs to the parameterization to examine whether correcting water and ozone biases can alter OH enough to remove the CO bias. Through this analysis, we aim to better quantify the relationship between CO bias and model biases in ozone concentrations and methane lifetime.
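The sensitivity of CO to OH can be illustrated with a toy steady-state box model (the rate constant and source strength below are illustrative placeholders, not GEOSCCM values): with loss dominated by reaction with OH, dC/dt = E - k[OH]C gives C_ss = E / (k[OH]), so the steady-state CO burden scales inversely with OH:

```python
# Toy steady-state box model: dC/dt = E - k*[OH]*C  =>  C_ss = E / (k*[OH]).
# All numbers are illustrative, not values from the GEOSCCM simulation.
k_CO_OH = 2.0e-13   # cm^3 molec^-1 s^-1, nominal CO + OH rate constant
E = 1.0e6           # molec cm^-3 s^-1, CO source (emissions + chemistry)

def co_steady_state(oh):
    """Steady-state CO concentration for a given OH level."""
    return E / (k_CO_OH * oh)

oh_base = 1.0e6                              # molec cm^-3, baseline OH
c_base = co_steady_state(oh_base)
c_low_oh = co_steady_state(0.8 * oh_base)    # 20% less OH -> 25% more CO
```

This is the first-order reasoning behind the result above: lowering northern-hemisphere OH raises modeled CO there, so a CO low bias can equally reflect an OH high bias or underestimated emissions, with very different implications for methane lifetime.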
Status of the CDS Services, SIMBAD, VizieR and Aladin
NASA Astrophysics Data System (ADS)
Genova, Francoise; Allen, M. G.; Bienayme, O.; Boch, T.; Bonnarel, F.; Cambresy, L.; Derriere, S.; Dubois, P.; Fernique, P.; Landais, G.; Lesteven, S.; Loup, C.; Oberto, A.; Ochsenbein, F.; Schaaff, A.; Vollmer, B.; Wenger, M.; Louys, M.; Davoust, E.; Jasniewicz, G.
2006-12-01
Major evolutions have been implemented in the three main CDS databases in 2006. SIMBAD 4, a new version of SIMBAD developed with Java and PostgreSQL, has been released. It is much more flexible than the previous version and in particular offers full search capabilities on all parameters. Wild cards can also be used in object names, which should ease searching for a given object in the frequent case of 'fuzzy' nomenclature. New information is progressively being added, in particular a set of multiwavelength magnitudes (in progress), and other information from the Dictionary of Nomenclature such as the list of object types attached to each object name (available), or hierarchy and associations (in progress). A new version of VizieR, also in the open-source PostgreSQL DBMS, has been completed in order to simplify mirroring. The master database at CDS currently remains in the present Sybase implementation. A new simplified interface will be demonstrated, providing more user-friendly navigation while retaining the multiple browsing capabilities. A new release of the Aladin Sky Atlas offers new capabilities, such as the management of multipart FITS files and of data cubes, the construction and execution of macros for processing a list of targets, and improved navigation within an image plane. This new version also allows easy and efficient manipulation of very large (>10^8 pixels) images, support for solar image display, and direct access to SExtractor to perform source extraction on displayed images.
NASA Astrophysics Data System (ADS)
Destefanis, Stefano; Tracino, Emanuele; Giraudo, Martina
2014-06-01
During a mission involving a spacecraft using nuclear power sources (NPS), the consequences to the population induced by an accident have to be taken into account carefully. Part of the study (led by AREVA, with TAS-I as one of the involved parties) was devoted to "Worst Case Scenario Consolidation". In particular, one of the activities carried out by TAS-I had the aim of characterizing the accidental environment (explosion on the launch pad or during launch) and consolidating the requirements given as input to the study. The resulting requirements became inputs for the nuclear power source container design. To do so, TAS-I first carried out an overview of the available technical literature (mostly developed in the frame of the NASA Mercury/Apollo programs) to identify the key parameters to be used for analytical assessment (blast pressure wave; fragment size, speed and distribution; TNT equivalent of liquid propellant). Then, a simplified Radioss model was set up to verify both the cards needed for blast/fragment impact analysis and the consistency between preliminary results and the available technical literature (Radioss is commonly used to design mine-resistant vehicles by simulating the effect of blasts on structural elements, and is used in TAS-I for several types of analysis, including land impact, water impact and fluid-structure interaction). The obtained results (albeit produced by a very simplified model) are encouraging, showing that the analytical tool and the selected key parameters represent a step in the right direction.
NASA Astrophysics Data System (ADS)
Held, Marcel Philipp; Ley, Peer-Phillip; Lachmayer, Roland
2018-02-01
High-resolution vehicle headlamps represent a future-oriented technology that increases traffic safety and driving comfort in the dark. A further development of current matrix-beam headlamps are LED-based pixel-light systems, which enable additional lighting functions (e.g. the projection of navigation information onto the road) to be activated for given driving scenarios. The image generation is based on spatial light modulators (SLM) such as digital micromirror devices (DMD), liquid crystal displays (LCD), liquid crystal on silicon (LCoS) devices or LED arrays. For DMD-, LCD- and LCoS-based headlamps, the optical system uses illumination optics to ensure precise illumination of the corresponding SLM. LED arrays, however, have to use imaging optics to project the LED die onto an intermediate image plane and thus create the light distribution via an apposition of gapless, juxtaposed LED die images. The Lambertian radiation characteristics, however, complicate the design of imaging optics with regard to a high-efficiency setup with maximum resolution and luminous flux. Simplifying the light source model and its emission characteristics allows a balanced setup between these parameters to be determined using the étendue, and the maximum possible efficacy and luminous flux to be calculated for each technology at an early design stage. We therefore present a comparison of how simplifying the light source model affects étendue conservation and the setup design for two high-resolution technologies. The approach is evaluated and compared to simulation models to show the resulting deviation and its applicability.
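The étendue argument underlying such estimates is the standard nonimaging-optics bound (a textbook relation, not a formula taken from this paper):

```latex
% Etendue of a Lambertian emitter of area A_s in a medium of index n,
% collected within a half-angle \theta_{max}:
G = \pi \, n^2 A_s \sin^2\theta_{max}
% Etendue cannot be reduced by the optics, so the luminous flux that
% can be transferred is bounded by the die luminance L_v:
\Phi_v \le L_v \, G_{\min}, \qquad
G_{\min} = \min\!\left(G_{\mathrm{source}},\, G_{\mathrm{optics}}\right)
```

This is why a simplified source model is useful at an early design stage: once the source étendue is fixed, the trade-off between resolution, efficacy and luminous flux can be bounded before any detailed raytracing.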
Liu, Guangkun; Kaushal, Nitin; Liu, Shaozhi; ...
2016-06-24
A recently introduced one-dimensional three-orbital Hubbard model displays orbital-selective Mott phases with exotic spin arrangements such as spin block states [J. Rincón et al., Phys. Rev. Lett. 112, 106405 (2014)]. In this paper we show that the constrained-path quantum Monte Carlo (CPQMC) technique can accurately reproduce the phase diagram of this multiorbital one-dimensional model, paving the way to future CPQMC studies in systems with more challenging geometries, such as ladders and planes. The success of this approach relies on using the Hartree-Fock technique to prepare the trial states needed in CPQMC. In addition, we study a simplified version of the model where the pair-hopping term is neglected and the Hund coupling is restricted to its Ising component. The corresponding phase diagrams are shown to be only mildly affected by the absence of these technically difficult-to-implement terms. This is confirmed by additional density matrix renormalization group and determinant quantum Monte Carlo calculations carried out for the same simplified model, with the latter displaying only mild fermion sign problems. Lastly, we conclude that these methods are able to capture quantitatively the rich physics of the several orbital-selective Mott phases (OSMP) displayed by this model, thus enabling computational studies of the OSMP regime in higher dimensions, beyond static or dynamic mean-field approximations.
Climate change on the Colorado River: a method to search for robust management strategies
NASA Astrophysics Data System (ADS)
Keefe, R.; Fischbach, J. R.
2010-12-01
The Colorado River is a principal source of water for the seven Basin States, providing approximately 16.5 million acre-feet (maf) per year to users in the southwestern United States and Mexico. Though the dynamics of the river ensure Upper Basin users a reliable supply of water, the three Lower Basin states (California, Nevada, and Arizona) face possible delivery interruptions as Upper Basin demand increases and climate change threatens to reduce future streamflows. In light of the recent drought and the uncertain effects of climate change on Colorado River flows, we evaluate the performance of a suite of policies modeled after the shortage sharing agreement adopted in December 2007 by the Department of the Interior. We build on the current literature by using a simplified model of the Lower Colorado River to consider future streamflow scenarios under climate change uncertainty. We also generate different scenarios of parametric consumptive-use growth in the Upper Basin and evaluate alternative management strategies in light of these uncertainties. Uncertainty associated with climate change is represented with a multi-model ensemble from the literature, using a nearest-neighbor perturbation to increase the size of the ensemble. We use Robust Decision Making to compare near-term and long-term management strategies across an ensemble of plausible future scenarios, with the goal of identifying one or more approaches that are robust to alternative assumptions about the future. This method entails using search algorithms to quantitatively identify vulnerabilities that may threaten a given strategy (including the current operating policy) and to characterize key tradeoffs between strategies under different scenarios.
Varela-Lema, Leonora; Punal-Riobóo, Jeanette; Acción, Beatriz Casal; Ruano-Ravina, Alberto; García, Marisa López
2012-10-01
Horizon scanning systems need to handle a wide range of sources to identify new or emerging health technologies. The objective of this study is to develop a validated Medline bibliographic search strategy (PubMed search engine) to systematically identify new or emerging health technologies. The proposed Medline search strategy combines free-text terms commonly used in article titles to denote innovation with index terms that refer to the specific fields of interest. Efficacy was assessed by running the search over a period of one year (2009) and analyzing its retrieval performance (number and characteristics of records). For comparison purposes, all article abstracts published during 2009 in six preselected key research journals and eight high-impact surgery journals were scanned. Sensitivity was defined as the proportion of relevant new or emerging technologies published in key journals that the search strategy would identify within the first two years of publication. The search yielded 6,228 abstracts of potentially new or emerging technologies. Of these, 459 were classified as new or emerging (383 truly new or emerging and 76 new indications). The scanning of 12,061 journal abstracts identified 35 relevant new or emerging technologies. Of these, twenty-nine were located by the Medline search strategy during the first two years of publication (sensitivity = 83 percent). The current search strategy, validated against key journals, has been shown to be effective for horizon scanning. Even though it may require adaptation depending on the scope of the horizon scanning system, it could serve to simplify and standardize scanning processes.
Horner, Christoph; Engelmann, Frank; Nützmann, Gunnar
2009-04-15
An ammonium contamination plume originating from several decades of sewage-field management practices is affecting the water quality at the well fields of the Friedrichshagen waterworks in Berlin, Germany. Because hydraulic measures were unsuccessful due to the fixation of ammonium on the aquifer matrix by cation exchange, an in situ nitrification measure by injection of oxygen gas was chosen to protect the extraction wells. In order to assess the hydrochemical processes accompanying this in situ measure, reactive transport modelling was performed. The relevant processes are the dissolution of oxygen gas and the nitrification of ammonium, which initiate secondary geochemical processes such as sulphate release, acidification and hardening. The reactive transport modelling began with the deduction of a reaction network, followed by its mathematical formulation and the incorporation of reactive terms into a reactive transport solver. Two model versions were set up: (1) a simplified large-scale model to evaluate the long-term reaction zoning to be expected from permanent oxygen gas injection, and (2) a verification of the hydrochemistry monitored during a first field test performed near the contamination source. The results of the reactive transport modelling demonstrate that in situ injection of oxygen gas will be effective in reducing the ammonium load on the well fields, and that acidification processes near the production wells can be minimized. Finally, a line of gas injection wells extending over the whole width of the ammonium contamination plume will be constructed to protect the well fields from further ammonium load.
Analysis of friction and instability by the centre manifold theory for a non-linear sprag-slip model
NASA Astrophysics Data System (ADS)
Sinou, J.-J.; Thouverez, F.; Jezequel, L.
2003-08-01
This paper presents research devoted to the study of instability phenomena in a non-linear model with a constant brake friction coefficient. The impact of unstable oscillations can be catastrophic: it can cause vehicle control problems and component degradation. Accordingly, a complex stability analysis is required. This paper outlines stability analysis and the centre manifold approach for studying instability problems. More precisely, it considers brake vibrations, and specifically heavy truck judder, where the dynamic characteristics of the whole front axle assembly are concerned even if the source of judder is located in the brake system. The modelling introduces the sprag-slip mechanism based on dynamic coupling due to buttressing. The non-linearity is expressed as a polynomial with quadratic and cubic terms. This model does not require a negative brake friction coefficient in order to predict the instability phenomena. Finally, the centre manifold approach is used to obtain equations for the limit cycle amplitudes. The centre manifold theory allows the reduction of the number of equations of the original system in order to obtain a simplified system, without losing the dynamics of the original system or the contributions of the non-linear terms. The goal is the stability analysis and the validation of the centre manifold approach for a complex non-linear model by comparing results obtained by solving the full system with those obtained using the centre manifold approach. The brake friction coefficient is used as an unfolding parameter of the fundamental Hopf bifurcation point.
75 FR 81459 - Simplified Proceedings
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-28
... FEDERAL MINE SAFETY AND HEALTH REVIEW COMMISSION 29 CFR Part 2700 Simplified Proceedings AGENCY... Commission is publishing a final rule to simplify the procedures for handling certain civil penalty.... Electronic comments should state ``Comments on Simplified Proceedings'' in the subject line and be sent to...
Physics textbooks from the viewpoint of network structures
NASA Astrophysics Data System (ADS)
Králiková, Petra; Teleki, Aba
2017-01-01
We can observe self-organized networks all around us. These are, in general, scale-invariant networks described by the Bianconi-Barabasi model. Self-organized networks (networks formed naturally when feedback acts on the system) show a certain universality: in simplified models they have a scale-invariant distribution (Pareto distribution type I) whose parameter α takes a value between 2 and 5. Textbooks are extremely important in the learning process, and for this reason we studied physics textbooks at the level of sentences and physics terms (a bipartite network). The nodes represent physics terms, sentences, pictures and tables, connected by links (by physics terms and by transitional words and phrases). We suppose that the learning process is more robust and proceeds faster and more easily if the physics textbook has a structure similar to that of self-organized networks.
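The Pareto type I distribution mentioned above is easy to sample by inverse transform, and its exponent can be recovered from data with the maximum-likelihood (Hill) estimator; the particular α = 2.5 below is just a value inside the stated 2-5 range:

```python
import math
import random

random.seed(1)

def sample_pareto(alpha, x_min, n):
    """Inverse-transform sampling from a Pareto type I distribution."""
    # If U ~ Uniform(0,1), then x_min * (1-U)^(-1/alpha) is Pareto(alpha, x_min)
    return [x_min * (1.0 - random.random()) ** (-1.0 / alpha) for _ in range(n)]

alpha = 2.5     # inside the 2..5 range typical of self-organized networks
x_min = 1.0
samples = sample_pareto(alpha, x_min, 10000)

# Maximum-likelihood (Hill) estimate of the tail exponent from the data
alpha_hat = len(samples) / sum(math.log(s / x_min) for s in samples)
```

Fitting `alpha_hat` to degree (or term-frequency) data from a textbook network is the kind of check that tells you whether its structure resembles that of a self-organized network.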
NASA Astrophysics Data System (ADS)
1994-03-01
This report documents a portion of the work performed on Multi-fuel Reformers for Fuel Cells Used in Transportation. One objective of this program is to develop advanced fuel processing systems to reform methanol, ethanol, natural gas, and other hydrocarbons into hydrogen for use in transportation fuel cell systems; a second objective is to develop better systems for on-board hydrogen storage. This report examines techniques and technology available for storing hydrogen on board a vehicle as pure hydrogen or hydrides. The report treats near- and far-term technologies separately, with particular emphasis on the former. Development of lighter, more compact near-term storage systems is recommended to enhance competitiveness and simplify fuel cell design. The far-term storage technologies require substantial applied research in order to become serious contenders.
Statistical ensembles for money and debt
NASA Astrophysics Data System (ADS)
Viaggiu, Stefano; Lionetto, Andrea; Bargigli, Leonardo; Longo, Michele
2012-10-01
We build a statistical ensemble representation of two economic models describing respectively, in simplified terms, a payment system and a credit market. To this purpose we adopt the Boltzmann-Gibbs distribution where the role of the Hamiltonian is taken by the total money supply (i.e. including money created from debt) of a set of interacting economic agents. As a result, we can read the main thermodynamic quantities in terms of monetary ones. In particular, we define for the credit market model a work term which is related to the impact of monetary policy on credit creation. Furthermore, with our formalism we recover and extend some results concerning the temperature of an economic system, previously presented in the literature by considering only the monetary base as a conserved quantity. Finally, we study the statistical ensemble for the Pareto distribution.
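The Boltzmann-Gibbs construction can be illustrated with the classic random-exchange simulation (a generic econophysics toy of this type, not a reimplementation of the authors' two models): random pairwise trades that conserve total money drive the system toward an exponential distribution P(m) ~ exp(-m/T), with "temperature" T equal to the mean money per agent:

```python
import random

random.seed(42)

N = 1000              # interacting economic agents
m0 = 1000.0           # initial (equal) endowment per agent
money = [m0] * N      # total money supply is conserved by every trade

# Random pairwise exchanges: the pair's combined money is redistributed
# at random between the two agents, conserving the total.
for _ in range(200_000):
    i = random.randrange(N)
    j = random.randrange(N)
    if i == j:
        continue
    pot = money[i] + money[j]
    share = random.random() * pot
    money[i], money[j] = share, pot - share

T = sum(money) / N    # economic 'temperature' = mean money per agent
# For an exponential distribution, the fraction of agents below the
# mean approaches 1 - e^{-1} (about 0.63)
frac_below_T = sum(1 for m in money if m < T) / N
```

Here the conserved total money plays the role of the Hamiltonian, exactly as in the ensemble construction above; extending the model with debt amounts to letting individual balances go negative while conserving a suitably redefined total.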
49 CFR 1111.9 - Procedural schedule in cases using simplified standards.
Code of Federal Regulations, 2010 CFR
2010-10-01
...) SURFACE TRANSPORTATION BOARD, DEPARTMENT OF TRANSPORTATION RULES OF PRACTICE COMPLAINT AND INVESTIGATION... the simplified standards: (1) In cases relying upon the Simplified-SAC methodology: Day 0—Complaint... dominance. (b) Defendant's second disclosure. In cases using the Simplified-SAC methodology, the defendant...
2 CFR 200.88 - Simplified acquisition threshold.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 2 Grants and Agreements 1 2014-01-01 2014-01-01 false Simplified acquisition threshold. 200.88... acquisition threshold. Simplified acquisition threshold means the dollar amount below which a non-Federal... threshold. The simplified acquisition threshold is set by the Federal Acquisition Regulation at 48 CFR...
Code of Federal Regulations, 2010 CFR
2010-10-01
... for contracts not to exceed the simplified acquisition threshold. 836.602-5 Section 836.602-5 Federal... contracts not to exceed the simplified acquisition threshold. Either of the procedures provided in FAR 36... simplified acquisition threshold. ...
77 FR 19740 - Water Sources for Long-Term Recirculation Cooling Following a Loss-of-Coolant Accident
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-02
... NUCLEAR REGULATORY COMMISSION [NRC-2010-0249] Water Sources for Long-Term Recirculation Cooling... Regulatory Guide (RG) 1.82, ``Water Sources for Long-Term Recirculation Cooling Following a Loss-of-Coolant... regarding the sumps and suppression pools that provide water sources for emergency core cooling, containment...
77 FR 54482 - Allocation of Costs Under the Simplified Methods
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-05
... Allocation of Costs Under the Simplified Methods AGENCY: Internal Revenue Service (IRS), Treasury. ACTION... certain costs to the property and that allocate costs under the simplified production method or the simplified resale method. The proposed regulations provide rules for the treatment of negative additional...
An equivalent n-source for WGPu derived from a spectrum-shifted PuBe source
NASA Astrophysics Data System (ADS)
Ghita, Gabriel; Sjoden, Glenn; Baciak, James; Walker, Scotty; Cornelison, Spring
2008-04-01
We have designed, built, and laboratory-tested a unique shield design that transforms the complex neutron spectrum from PuBe source neutrons, generated at high energies, to nearly exactly the neutron signature leaking from a significant spherical mass of weapons grade plutonium (WGPu). This equivalent "X-material shield assembly" (Patent Pending) enables the harder PuBe source spectrum (average energy of 4.61 MeV) from a small encapsulated standard 1-Ci PuBe source to be transformed, through interactions in the shield, so that leakage neutrons are shifted in energy and yield to become a close reproduction of the neutron spectrum leaking from a large subcritical mass of WGPu metal (mean energy 2.11 MeV). The utility of this shielded PuBe surrogate for WGPu is clear, since it directly enables detector field testing without the expense and risk of handling large amounts of Special Nuclear Materials (SNM) such as WGPu. Also, conventional sources using Cf-252, which is difficult to produce and decays with a 2.7-year half-life, could be replaced by this shielded PuBe technology in order to simplify operational use, since a sealed PuBe source relies on Pu-239 (T½ = 24,110 y) and remains viable for hundreds of years.
An implementation of the QMR method based on coupled two-term recurrences
NASA Technical Reports Server (NTRS)
Freund, Roland W.; Nachtigal, Noël M.
1992-01-01
The authors have proposed a new Krylov subspace iteration, the quasi-minimal residual algorithm (QMR), for solving non-Hermitian linear systems. In the original implementation of the QMR method, the Lanczos process with look-ahead is used to generate basis vectors for the underlying Krylov subspaces. In the Lanczos algorithm, these basis vectors are computed by means of three-term recurrences. It has been observed that, in finite precision arithmetic, vector iterations based on three-term recursions are usually less robust than mathematically equivalent coupled two-term vector recurrences. This paper presents a look-ahead algorithm that constructs the Lanczos basis vectors by means of coupled two-term recursions. Implementation details are given, and the look-ahead strategy is described. A new implementation of the QMR method, based on this coupled two-term algorithm, is described. A simplified version of the QMR algorithm without look-ahead is also presented, and the special case of QMR for complex symmetric linear systems is considered. Results of numerical experiments comparing the original and the new implementations of the QMR method are reported.
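SciPy ships a QMR implementation (a version without look-ahead, like the simplified variant mentioned above), so the method described in the abstract can be exercised on a small non-Hermitian system in a few lines. The matrix below is an arbitrary nonsymmetric tridiagonal example chosen for illustration.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import qmr

rng = np.random.default_rng(1)

# A small non-Hermitian (nonsymmetric) sparse tridiagonal system.
n = 200
A = diags([-1.0, 2.5, -1.2], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = rng.standard_normal(n)

x, info = qmr(A, b)       # info == 0 signals successful convergence
residual = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
```

For well-conditioned systems like this one, QMR converges to the default tolerance in far fewer than n iterations; the quasi-minimal residual property keeps the residual norm from the erratic oscillations typical of plain BiCG.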
He, Hongbin; Argiro, Laurent; Dessein, Helia; Chevillard, Christophe
2007-01-01
FTA technology is a novel method designed to simplify the collection, shipment, archiving and purification of nucleic acids from a wide variety of biological sources. The number of punches that can normally be obtained from a single specimen card is, however, often insufficient for the testing of the large numbers of loci required to identify genetic factors that control human susceptibility or resistance to multifactorial diseases. In this study, we propose an improved technique to perform large-scale SNP genotyping. We applied a whole genome amplification method to amplify DNA from buccal cell samples stabilized using FTA technology. The results show that using the improved technique it is possible to perform up to 15,000 genotypes from one buccal cell sample. Furthermore, the procedure is simple. We consider this improved technique to be a promising method for performing large-scale SNP genotyping, because the FTA technology simplifies the collection, shipment, archiving and purification of DNA, while whole genome amplification of FTA card bound DNA produces sufficient material for the determination of thousands of SNP genotypes.
Piloting Changes to Changing Aircraft Dynamics: What Do Pilots Need to Know?
NASA Technical Reports Server (NTRS)
Trujillo, Anna C.; Gregory, Irene M.
2011-01-01
An experiment was conducted to quantify the effects of changing dynamics on a subject's ability to track a signal in order to eventually model a pilot adapting to changing aircraft dynamics. The data will be used to identify primary aircraft dynamics variables that influence changes in the pilot's response and to produce a simplified pilot model that incorporates this relationship. Each run incorporated a different set of second-order aircraft dynamics representing the short-period transfer function pitch attitude response: damping ratio, frequency, gain, zero location, and time delay. The subject's ability to conduct the tracking task was the greatest source of root mean square error tracking variability. As for the aircraft dynamics, the factors that affected the subjects' ability to conduct the tracking were the time delay, frequency, and zero location. In addition to creating a simplified pilot model, the results of the experiment can be utilized in an advisory capacity. A situation awareness/prediction aid based on the pilot behavior and aircraft dynamics may help tailor the pilot's inputs more quickly so that pilot-induced oscillation (PIO) or an upset condition can be avoided.
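A second-order pitch-attitude response of the kind varied in the experiment (damping ratio, frequency, gain, zero location) can be sketched with `scipy.signal`. The parameter values below are made up for illustration and are not those used in the study; the time delay is omitted for simplicity.

```python
import numpy as np
from scipy import signal

# Illustrative short-period pitch-attitude transfer function:
#   theta/delta_e = K * (s + z) / (s**2 + 2*zeta*wn*s + wn**2)
# K = gain, zeta = damping ratio, wn = natural frequency, -z = zero location.
K, zeta, wn, z = 2.0, 0.7, 3.0, 1.5

sys = signal.TransferFunction([K, K * z], [1.0, 2 * zeta * wn, wn ** 2])
t, y = signal.step(sys, T=np.linspace(0.0, 10.0, 500))

# Final-value theorem: the steady-state step response is G(0).
steady_state = K * z / wn ** 2
```

Sweeping `zeta`, `wn`, `K`, and `z` across runs, as the experiment did, changes the overshoot, settling time, and steady-state gain that the tracking subject must adapt to.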
Simplified Phase Diversity algorithm based on a first-order Taylor expansion.
Zhang, Dong; Zhang, Xiaobin; Xu, Shuyan; Liu, Nannan; Zhao, Luoxin
2016-10-01
We present a simplified solution to phase diversity when the observed object is a point source. It utilizes an iterative linearization of the point spread function (PSF) at two or more diverse planes by first-order Taylor expansion to reconstruct the initial wavefront. To enhance the influence of the PSF in the defocused plane, which is usually very dim compared to that in the focal plane, we build a new model with the Tikhonov regularization function. The new model can not only increase the computational speed, but also reduce the influence of noise. By using the PSFs obtained from Zemax, we reconstruct the wavefront of the Hubble Space Telescope (HST) at the edge of the field of view (FOV) when the telescope is in either the nominal state or a misaligned state. We also set up an experiment, which consists of an imaging system and a deformable mirror, to validate the correctness of the presented model. The result shows that the new model can improve the computational speed with high wavefront detection accuracy.
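The core numerical step behind a linearized, Tikhonov-regularized phase-diversity update can be sketched as a regularized least-squares solve: the first-order Taylor expansion gives a linear map J from wavefront coefficients to PSF changes, and the regularized normal equations recover the coefficients. The Jacobian here is a random stand-in, not a real optical model, and the dimensions and regularization weight are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Linearized inverse problem: measured PSF change d ~ J @ a for small
# wavefront coefficients a (first-order Taylor expansion of the PSF).
m, n = 120, 10                      # measurements (pixels), Zernike-like modes
J = rng.standard_normal((m, n))     # stand-in Jacobian (not an optical model)
a_true = rng.standard_normal(n)
d = J @ a_true + 0.01 * rng.standard_normal(m)

# Tikhonov-regularized solution: a_hat = (J^T J + lam I)^-1 J^T d.
lam = 1e-2                          # regularization weight
a_hat = np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ d)

rel_err = np.linalg.norm(a_hat - a_true) / np.linalg.norm(a_true)
```

The regularization term damps the amplification of noise from the dim defocused-plane data, at the cost of a small bias in the recovered coefficients; in the iterative scheme this solve would be repeated, relinearizing the PSF around the current wavefront estimate.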